High-fidelity direct contrast synthesis from magnetic resonance fingerprinting

Ke Wang, Mariya Doneva, Jakob Meineke, Thomas Amthor, Ekin Karasan, Fei Tan, Jonathan I. Tamir, Stella X. Yu, Michael Lustig

Abstract

Purpose

To propose a supervised learning-based method that directly synthesizes contrast-weighted images from Magnetic Resonance Fingerprinting (MRF) data, without performing quantitative mapping or spin-dynamics simulations.

Methods

To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to our proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of our proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the performance of the proposed method and compare it with competing approaches.
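
As a concrete illustration of the discriminator described above, the sketch below shows a typical PatchGAN built as a small stack of strided convolutions, as popularized by pix2pix-style conditional GANs. This is a generic sketch under assumed layer widths, depth, and channel counts; it is not the authors' released implementation (see the linked repository for that).

```python
# Minimal PatchGAN-style conditional discriminator sketch (assumptions: layer
# widths, depth, and normalization choice are illustrative, not from the paper).
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Multilayer CNN that scores overlapping image patches as real or fake."""

    def __init__(self, in_channels: int, base_channels: int = 64, n_layers: int = 3):
        super().__init__()
        layers = [
            nn.Conv2d(in_channels, base_channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        ]
        ch = base_channels
        for _ in range(1, n_layers):
            layers += [
                nn.Conv2d(ch, ch * 2, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm2d(ch * 2),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch *= 2
        # Final 1-channel map: each output pixel scores one receptive-field patch.
        layers += [nn.Conv2d(ch, 1, kernel_size=4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, condition: torch.Tensor, contrast_image: torch.Tensor) -> torch.Tensor:
        # Conditional GAN: the discriminator sees the conditioning input (here,
        # MRF-derived channels) concatenated with the real or synthesized image.
        return self.net(torch.cat([condition, contrast_image], dim=1))
```

Because the output is a map of patch scores rather than a single scalar, a PatchGAN penalizes local texture and structure, which suits contrast synthesis where fine anatomical detail matters.

The paired-image metrics listed above can likewise be computed with standard open-source packages. The sketch below assumes scikit-image for PSNR/SSIM and the `lpips` package for LPIPS, with nRMSE written out directly; FID is a distribution-level metric computed over sets of images (e.g., with the `pytorch-fid` tool) and is omitted here. Normalization and preprocessing details may differ from those used in the paper.

```python
# Sketch of the per-image evaluation metrics (nRMSE, PSNR, SSIM, LPIPS).
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nrmse(ref: np.ndarray, pred: np.ndarray) -> float:
    # Root mean square error normalized by the norm of the reference image.
    return float(np.linalg.norm(pred - ref) / np.linalg.norm(ref))

def evaluate(ref: np.ndarray, pred: np.ndarray) -> dict:
    data_range = float(ref.max() - ref.min())
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)

    # LPIPS expects 3-channel tensors scaled to [-1, 1].
    def to_tensor(x: np.ndarray) -> torch.Tensor:
        x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
        return torch.from_numpy(x).float()[None, None].repeat(1, 3, 1, 1)

    loss_fn = lpips.LPIPS(net="alex")
    lpips_val = float(loss_fn(to_tensor(ref), to_tensor(pred)))
    return {"nRMSE": nrmse(ref, pred), "PSNR": psnr, "SSIM": ssim, "LPIPS": lpips_val}
```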

Results

In vivo experiments demonstrated excellent image quality compared with simulation-based contrast synthesis and previous DCS methods, both visually and according to quantitative metrics. We also demonstrated cases in which our trained model mitigates the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represents conventional spin echo-based contrast-weighted images.

Conclusion

We present N-DCSNet, which directly synthesizes high-fidelity multicontrast MR images from a single MRF acquisition, significantly reducing examination time. By directly training a network to generate contrast-weighted images, our method does not require any model-based simulation and therefore avoids reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).