Adapting model-based deep learning to multiple acquisition conditions: Ada-MoDL

Aniket Pramanik, Sampada Bhave, Saurav Sajib, Samir D. Sharma, Mathews Jacob

Abstract

Purpose

The aim of this work is to introduce a single model-based deep network that can provide high-quality reconstructions from undersampled parallel MRI data acquired with multiple sequences, acquisition settings, and field strengths.

Methods

A single unrolled architecture that offers good reconstructions across multiple acquisition settings is introduced. The proposed scheme adapts the model to each setting by scaling the convolutional neural network (CNN) features and the regularization parameter with appropriate weights. The scaling weights and regularization parameter are derived from conditional vectors, which represent the specific acquisition setting, using a multilayer perceptron model. The perceptron parameters and the CNN weights are jointly trained using data from multiple acquisition settings, including differences in field strength, acceleration, and contrast. The conditional network is validated on datasets acquired with different acquisition settings.
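
The methods description above gives only the high-level mechanism, so the following minimal PyTorch sketch illustrates one way the conditioning could be realized: a small multilayer perceptron maps a conditional vector describing the acquisition setting (field strength, acceleration, contrast) to per-channel feature scales and a regularization parameter that modulate a shared CNN denoiser. The class names, layer sizes, and the three-element conditional vector are illustrative assumptions, not the authors' implementation; in Ada-MoDL these quantities would be used inside the unrolled model-based reconstruction with data-consistency steps.

```python
# Hypothetical sketch (not the authors' released code): one regularization block of an
# Ada-MoDL-style network, where an MLP maps a conditional vector describing the
# acquisition setting to per-channel feature scales and a regularization weight lambda.
import torch
import torch.nn as nn


class ConditionMLP(nn.Module):
    """Maps a conditional vector to CNN feature scales and a regularization parameter."""

    def __init__(self, cond_dim: int, n_features: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features + 1),  # n_features scales + 1 value for lambda
        )

    def forward(self, cond: torch.Tensor):
        out = self.body(cond)
        scales = torch.sigmoid(out[..., :-1])       # per-channel scaling weights in (0, 1)
        lam = nn.functional.softplus(out[..., -1])  # positive regularization parameter
        return scales, lam


class AdaptiveDenoiser(nn.Module):
    """CNN denoiser whose intermediate features are scaled by condition-dependent weights."""

    def __init__(self, channels: int = 2, n_features: int = 32):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, n_features, 3, padding=1)
        self.conv_mid = nn.Conv2d(n_features, n_features, 3, padding=1)
        self.conv_out = nn.Conv2d(n_features, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
        # scales: (batch, n_features), broadcast over the spatial dimensions
        s = scales[:, :, None, None]
        f = self.act(self.conv_in(x)) * s   # scale features after the first conv layer
        f = self.act(self.conv_mid(f)) * s  # reuse the same scales for simplicity
        return self.conv_out(f)


# Usage: the same network weights serve all settings; only the conditional vector changes.
cond = torch.tensor([[3.0, 4.0, 1.0]])   # e.g., [field strength (T), acceleration R, contrast id]
mlp = ConditionMLP(cond_dim=3, n_features=32)
denoiser = AdaptiveDenoiser()
scales, lam = mlp(cond)
image = torch.randn(1, 2, 128, 128)      # real/imaginary channels of a coil-combined image
residual = denoiser(image, scales)
# In an unrolled iteration, lam would weight the data-consistency term against this CNN prior.
```

In an actual unrolled network, this block would be repeated across iterations, and the perceptron and CNN weights would be trained jointly on data pooled from all acquisition settings, as described above.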

Results

Comparisons show that the adaptive framework, which trains a single model using the data from all the settings, offers consistently improved performance for each acquisition condition. Compared with networks that are trained independently for each acquisition setting, the proposed scheme requires less training data per acquisition setting to offer good performance.

Conclusion

The Ada-MoDL framework enables the use of a single model-based unrolled network for multiple acquisition settings. In addition to eliminating the need to train and store multiple networks for different acquisition settings, this approach reduces the training data needed for each acquisition setting.