Quantifying 3D MR fingerprinting (3D-MRF) reproducibility across subjects, sessions, and scanners automatically using MNI atlases

Andrew Dupuis, Yong Chen, Michael Hansen, Kelvin Chow, Jessie E. P. Sun, Chaitra Badve, Dan Ma, Mark A. Griswold, Rasim Boyacioglu

Abstract

Purpose

Quantitative MRI techniques such as MR fingerprinting (MRF) promise more objective and comparable measurements of tissue properties at the point of care than weighted imaging. However, few direct cross-modal comparisons of MRF's repeatability and reproducibility versus weighted acquisitions have been performed. This work proposes a novel, fully automated pipeline for quantitatively comparing cross-modal imaging performance in vivo via atlas-based sampling.

Methods

We acquired whole-brain 3D-MRF, turbo spin echo, and MPRAGE sequences three times each on two scanners across 10 subjects, for a total of 60 multimodal datasets. The proposed automated registration and analysis pipeline uses linear and nonlinear registration to align all qualitative and quantitative DICOM stacks to Montreal Neurological Institute (MNI) 152 space, then samples each dataset in its native space through transformation inversion to compare performance within atlas regions across subjects, scanners, and repetitions.
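The core idea of sampling native-space data through transformation inversion can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes affine transforms only (the actual pipeline also inverts a nonlinear warp, e.g. via ANTs or a similar registration toolkit) and nearest-neighbour sampling; the function name and arguments are hypothetical.

```python
import numpy as np

def sample_native_in_atlas_region(native_volume, native_affine, mni_affine,
                                  native_to_mni, atlas_labels, region_id):
    """Sample a native-space volume at the voxels of one MNI atlas region.

    For each atlas voxel carrying `region_id`, map its MNI world coordinate
    back into native voxel indices by inverting the native->MNI registration
    transform (affine-only in this sketch), so no interpolation of the
    quantitative maps into MNI space is ever needed.
    """
    ii, jj, kk = np.nonzero(atlas_labels == region_id)
    vox_mni = np.stack([ii, jj, kk, np.ones_like(ii)])       # homogeneous coords
    world_mni = mni_affine @ vox_mni                          # MNI voxel -> mm
    world_native = np.linalg.inv(native_to_mni) @ world_mni   # undo registration
    vox_native = np.linalg.inv(native_affine) @ world_native  # mm -> native voxel
    idx = np.rint(vox_native[:3]).astype(int)                 # nearest neighbour
    # keep only samples that land inside the native volume
    shape = np.array(native_volume.shape)[:, None]
    inside = np.all((idx >= 0) & (idx < shape), axis=0)
    return native_volume[idx[0, inside], idx[1, inside], idx[2, inside]]
```

Because the atlas region is fixed in MNI space, the same set of voxels is sampled for every subject, scanner, and repetition, which is what makes the region-wise statistics directly comparable.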

Results

Voxel values within MRF-derived maps were found to be more repeatable (σT1 = 1.90, σT2 = 3.20) across sessions than vendor-reconstructed MPRAGE (σT1w = 6.04) or turbo spin echo (σT2w = 5.66) images. Additionally, MRF was found to be more reproducible across scanners (σT1 = 2.21, σT2 = 3.89) than either qualitative modality (σT1w = 7.84, σT2w = 7.76). Notably, the difference between the repeatability and reproducibility of in vivo MRF was not statistically significant, unlike for the weighted images.
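The distinction between the two σ statistics can be illustrated on synthetic data. This sketch assumes σ denotes something like a within-region coefficient of variation, with repeatability computed across repeated sessions and reproducibility across scanners; the exact metric and data layout here are illustrative assumptions, and the values are synthetic.

```python
import numpy as np

def cv_percent(samples, axis):
    """Coefficient of variation (%) along the given axis."""
    return 100.0 * samples.std(axis=axis, ddof=1) / samples.mean(axis=axis)

# Synthetic region-mean T1 values (ms), indexed [subject, scanner, session],
# mirroring the study design: 10 subjects x 2 scanners x 3 sessions.
rng = np.random.default_rng(0)
region_means = 900.0 + rng.normal(0.0, 10.0, size=(10, 2, 3))

# Repeatability: variation across repeated sessions on the same scanner,
# averaged over subjects and scanners.
repeatability = cv_percent(region_means, axis=2).mean()

# Reproducibility: variation across scanners of the session-averaged values,
# averaged over subjects.
reproducibility = cv_percent(region_means.mean(axis=2), axis=1).mean()
```

Comparing the two quantities per atlas region and modality is what supports the conclusion that, for MRF, cross-scanner variation adds little on top of cross-session variation.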

Conclusion

MRF data from many sessions and scanners can potentially be treated as a single dataset for harmonized analysis or longitudinal comparisons without the additional regularization steps needed for qualitative modalities.