Ultrafast water–fat separation using deep learning–based single-shot MRI


Xinran Chen, Wei Wang, Jianpan Huang, Jian Wu, Lin Chen, Congbo Cai, Shuhui Cai, Zhong Chen

Abstract

Purpose

To present a deep learning–based reconstruction method for spatiotemporally encoded single-shot MRI to simultaneously obtain water and fat images.

Methods

Spatiotemporally encoded MRI is an ultrafast imaging technique that can encode chemical shift information owing to its special quadratic phase modulation. A deep learning approach using a 2D U-Net was proposed to reconstruct the spatiotemporally encoded signals and obtain water and fat images simultaneously. The training data for the U-Net were generated with MRiLab software (version 1.3) using various synthetic models. Numerical simulations and experiments on ex vivo pork and in vivo rats on a 7.0 T Varian MRI system (Agilent Technologies, Santa Clara, CA) were performed, and the deep learning results were compared with those obtained by state-of-the-art algorithms. The structural similarity index and the signal-to-ghost ratio were used to evaluate the reconstruction fidelity and residual artifacts of the different methods.
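For illustration, the following is a minimal sketch of a 2D U-Net of the kind described above, written in PyTorch. The abstract does not specify the network depth, channel counts, or how the complex spatiotemporally encoded signal is presented to the network; the choices below (real/imaginary parts as two input channels, two output channels for water and fat, two pooling levels) are assumptions made only to keep the sketch self-contained.

```python
# Minimal 2D U-Net sketch (assumed architecture; details not in the abstract).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet2D(nn.Module):
    def __init__(self, in_ch=2, out_ch=2, base=64):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)  # water and fat channels

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Usage: map a 2-channel encoded signal to water and fat images.
net = UNet2D()
signal = torch.randn(1, 2, 128, 128)   # batch, re/im, rows, cols
water_fat = net(signal)                # shape (1, 2, 128, 128)
```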

Results

With a well-trained neural network, the proposed deep learning approach can accomplish signal reconstruction within 0.46 s on a personal computer, which is comparable to the conjugate gradient method (0.41 s) and much faster than the state-of-the-art super-resolved water–fat image reconstruction method (30.31 s). The results of numerical simulations, ex vivo pork experiments, and in vivo rat experiments demonstrate that the deep learning approach achieves better fidelity and higher spatial resolution than the other two methods. The deep learning approach also shows a clear advantage in artifact suppression, as indicated by the signal-to-ghost ratio results.
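For concreteness, here is a hedged sketch of the two evaluation metrics named above. SSIM is taken from scikit-image; the signal-to-ghost ratio follows a common definition (mean magnitude in an object region over mean magnitude in a ghost region). The paper's exact ROI placement is not given in the abstract, so the masks below are hypothetical inputs.

```python
# Sketch of the evaluation metrics (assumed SGR definition and ROIs).
import numpy as np
from skimage.metrics import structural_similarity

def ssim(recon, reference):
    # Structural similarity index between reconstruction and reference.
    return structural_similarity(
        recon, reference, data_range=reference.max() - reference.min()
    )

def signal_to_ghost_ratio(image, object_mask, ghost_mask):
    # Mean magnitude inside the object divided by mean magnitude in the
    # region where residual aliasing ghosts appear; higher is better.
    return np.abs(image[object_mask]).mean() / np.abs(image[ghost_mask]).mean()
```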

Conclusion

Spatiotemporally encoded MRI combined with deep learning can provide ultrafast water–fat separation with better performance than state-of-the-art methods.