KomaMRI.jl: An open-source framework for general MRI simulations with GPU acceleration

Carlos Castillo-Passi, Ronal Coronado, Gabriel Varela-Mattatall, Carlos Alberola-López, René Botnar, Pablo Irarrazaval

Abstract

Purpose

To develop an open-source, high-performance, easy-to-use, extensible, cross-platform, and general MRI simulation framework (Koma).

Methods

Koma was developed using the Julia programming language. Like other MRI simulators, it solves the Bloch equations with CPU and GPU parallelization. The inputs are the scanner parameters, the phantom, and a Pulseq-compatible pulse sequence. The raw data are stored in the ISMRMRD format, and MRIReco.jl is used for reconstruction. A graphical user interface based on web technologies was also designed. Two types of experiments were performed: one to compare result quality and execution speed against existing simulators, and another to assess usability. Finally, the use of Koma in quantitative imaging was demonstrated by simulating Magnetic Resonance Fingerprinting (MRF) acquisitions.
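To illustrate this workflow, a minimal simulation script might look as follows. This is a sketch assuming the public KomaMRI.jl API (Scanner, brain_phantom2D, read_seq, simulate); the sequence file name is a hypothetical placeholder, and reconstruction details with MRIReco.jl are omitted.

    using KomaMRI

    sys = Scanner()                 # default scanner parameters
    obj = brain_phantom2D()         # built-in 2D brain phantom
    seq = read_seq("sequence.seq")  # hypothetical Pulseq file path
    raw = simulate(obj, seq, sys)   # Bloch simulation on CPU or GPU
    # raw follows the ISMRMRD format and can be passed to MRIReco.jl
    # for image reconstruction.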

Results

Koma was compared to two well-known open-source MRI simulators, JEMRIS and MRiLab. Highly accurate results (with mean absolute differences below 0.1% compared to JEMRIS) and better GPU performance than MRiLab were demonstrated. In an experiment with students, Koma proved easy to use, ran eight times faster on personal computers than JEMRIS, and was recommended by 65% of the test subjects. The potential for designing acquisition and reconstruction techniques was also shown through the simulation of MRF acquisitions, yielding conclusions that agree with the literature.

Conclusions

Koma’s speed and flexibility have the potential to make MRI simulations more accessible for education and research. Koma is expected to be used for designing and testing novel pulse sequences before implementing them on the scanner using Pulseq files, and for creating synthetic data to train machine learning models.