Deep learning intravoxel incoherent motion modeling: Exploring the impact of training features and learning strategies

Misha P. T. Kaandorp, Frank Zijlstra, Christian Federau, Peter T. While

Abstract

Purpose

The development of advanced estimators for intravoxel incoherent motion (IVIM) modeling is often motivated by a desire to produce smoother parameter maps than least-squares (LSQ) fitting. Deep neural networks show promise to this end, yet their performance may hinge on numerous choices regarding the learning strategy. In this work, we explored the impact of key training features in unsupervised and supervised learning for IVIM model fitting.

Methods

Two synthetic data sets and one in vivo data set from glioma patients were used to train unsupervised and supervised networks and to assess generalizability. Network stability across different learning rates and network sizes was assessed in terms of loss convergence. Accuracy, precision, and bias were assessed by comparing estimates against ground truth after training on different data (synthetic and in vivo).
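For context, a minimal sketch of the biexponential IVIM signal model and a conventional voxel-wise LSQ fit, the baseline the networks are compared against. The b-values, starting values, and bounds below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, S0, f, D, Dstar):
    """Biexponential IVIM model: perfusion fraction f, diffusion coefficient D,
    pseudo-diffusion Dstar (D, Dstar in mm^2/s; b in s/mm^2)."""
    return S0 * (f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D))

# Illustrative b-value scheme and a noiseless synthetic voxel
b_values = np.array([0, 10, 20, 40, 80, 160, 320, 640, 1000], dtype=float)
signal = ivim_signal(b_values, S0=1.0, f=0.15, D=1.0e-3, Dstar=20e-3)

# Voxel-wise LSQ fit with simple box constraints on the parameters
popt, _ = curve_fit(
    ivim_signal, b_values, signal,
    p0=[1.0, 0.1, 1.0e-3, 10e-3],
    bounds=([0, 0, 0, 0], [2, 1, 5e-3, 0.3]),
)
S0_hat, f_hat, D_hat, Dstar_hat = popt
```

In practice this fit is repeated independently for every voxel, which is what makes LSQ parameter maps noisy; the deep learning estimators studied in the paper aim to improve on this.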

Results

A high learning rate, small network size, and early stopping resulted in suboptimal solutions and correlations in the fitted IVIM parameters. Extending training beyond early stopping resolved these correlations and reduced parameter error. However, extensive training increased noise sensitivity, with unsupervised estimates displaying variability similar to that of LSQ. In contrast, supervised estimates showed improved precision but were strongly biased toward the mean of the training distribution, resulting in relatively smooth yet potentially deceptive parameter maps. Extensive training also reduced the impact of individual hyperparameters.
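The bias of supervised estimates toward the training mean can be illustrated with a toy experiment (not the paper's method): any estimator trained with a mean-squared-error loss on noisy inputs shrinks its predictions toward the mean of the training distribution. Here ordinary linear regression stands in for a neural network, and a monoexponential decay stands in for the full IVIM signal; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
b = np.array([0.0, 200.0, 500.0, 800.0, 1000.0])  # b-values, s/mm^2

def make_data(n, noise_sd):
    """Synthetic voxels: random D, monoexponential signals plus noise."""
    D = rng.uniform(0.5e-3, 3.0e-3, size=n)        # diffusion coeff, mm^2/s
    S = np.exp(-np.outer(D, b))                    # noiseless decay curves
    S += rng.normal(0.0, noise_sd, size=S.shape)   # measurement noise
    return S, D

S_train, D_train = make_data(20000, noise_sd=0.2)
S_test, D_test = make_data(2000, noise_sd=0.2)

# Supervised "training": least-squares regression from signals to D
X = np.column_stack([S_train, np.ones(len(S_train))])
w, *_ = np.linalg.lstsq(X, D_train, rcond=None)
pred = np.column_stack([S_test, np.ones(len(S_test))]) @ w

# The predictions are precise (low variability) but their spread is
# compressed toward the training mean, i.e. smooth yet biased maps
print(f"true spread: {D_test.std():.2e}, predicted spread: {pred.std():.2e}")
```

The predicted spread comes out smaller than the true spread, mirroring the abstract's observation that supervised estimates yield smooth but potentially deceptive parameter maps.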

Conclusion

Voxel-wise deep learning for IVIM fitting demands sufficiently extensive training to minimize parameter correlations and bias in unsupervised learning, or a close correspondence between the training and test sets in supervised learning.