MRzero - Automated discovery of MRI sequences using supervised learning

link to paper

MRzero - Automated discovery of MRI sequences using supervised learning

A. Loktyushin, K. Herz, N. Dang, F. Glang, A. Deshmane, S. Weinmüller, A. Doerfler, B. Schölkopf, K. Scheffler, M. Zaiss

Abstract

Purpose

A supervised learning framework is proposed to automatically generate MR sequences and the corresponding reconstruction based on the target contrast of interest. Combined with a flexible, task-driven cost function, this allows for an efficient exploration of novel MR sequence strategies.

Methods

The scanning and reconstruction process is simulated end-to-end in terms of RF events, gradient moment events in x and y, and delay times, acting on the input model spin system given in terms of proton density, T1 and T2, and ΔB0. As a proof of concept, we use both conventional MR images and T1 maps as targets and optimize from scratch using the loss defined by data fidelity, SAR penalty, and scan time.
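
To make the optimization described above concrete, a minimal, purely illustrative PyTorch-style sketch of such a loop could look as follows. This is not the authors' code: the toy forward model simulate_scan, the parameter shapes, and the penalty weights are assumptions for illustration only.

    import torch

    # Toy stand-in for the end-to-end differentiable scan-and-reconstruction
    # simulation: maps sequence parameters (flip angles, gradient moments,
    # delays) and a spin-system model (PD, T1, T2, dB0) to a "reconstructed"
    # value. The real forward model is a full Bloch simulation; this only
    # illustrates the optimization pattern.
    def simulate_scan(flip_angles, grad_moments, delays, spin_model):
        pd, t1, t2, db0 = spin_model
        e1 = torch.exp(-delays.sum() / t1)      # crude T1 weighting
        e2 = torch.exp(-delays.sum() / t2)      # crude T2 weighting
        signal = pd * torch.sin(flip_angles.mean()) * (1 - e1) * e2
        return signal * torch.cos(grad_moments.mean() * db0)

    # Sequence parameters are the free variables ("learned from zero").
    flip_angles  = torch.zeros(64, requires_grad=True)
    grad_moments = torch.zeros(64, 2, requires_grad=True)
    delays       = torch.full((64,), 5e-3, requires_grad=True)

    spin_model = (torch.tensor(1.0),    # proton density (a.u.)
                  torch.tensor(1.5),    # T1 [s]
                  torch.tensor(0.08),   # T2 [s]
                  torch.tensor(0.0))    # dB0, toy value

    target = torch.tensor(0.5)          # stand-in for the target image / T1 map

    optimizer = torch.optim.Adam([flip_angles, grad_moments, delays], lr=1e-2)
    lambda_sar, lambda_time = 1e-3, 1e-3            # assumed penalty weights

    for _ in range(200):
        optimizer.zero_grad()
        recon = simulate_scan(flip_angles, grad_moments, delays, spin_model)
        loss = ((recon - target) ** 2).sum()                  # data fidelity
        loss = loss + lambda_sar  * (flip_angles ** 2).sum()  # SAR penalty
        loss = loss + lambda_time * delays.sum()              # scan-time penalty
        loss.backward()
        optimizer.step()

In the real framework the forward model is a full differentiable Bloch simulation and the reconstruction is part of the computational graph, but the overall pattern is the same: simulate, compare to the target, penalize SAR and scan time, and backpropagate into the sequence parameters.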

Results

In a first attempt, MRzero learns gradient and RF events from zero, and is able to generate a target image produced by a conventional gradient echo sequence. Using a neural network within the reconstruction module allows arbitrary targets to be learned successfully. Experiments could be translated to image acquisition on a real system (3T Siemens PRISMA) and could be verified in measurements of phantoms and a human brain in vivo.

Conclusions

Automated MR sequence generation is possible based on differentiable Bloch equation simulations and a supervised learning approach.


This is truly interesting work. And I appreciate the possibility (novel, at least for me) that MRM offers to enter the discussion directly - since usually, while reading a paper, I stumble over some details that I fail to immediately understand. Here, it is no different.

E.g., while looking at Figure 1, I am somewhat puzzled to see a different reconstruction pipeline in the top row compared to the bottom row. I would expect the bottom row to simulate reality: the physics part obviously has to be a simulation, but the recon part could be copied 1:1 into the simulation part, which would best 'simulate' the reconstruction reality.

As a minor item, I am somewhat surprised that the noise level is not an input in point 1 of the optimization task ("produce simulation results as close as possible to the target in the L2 norm sense") - I would expect that a very different choice is optimal when noise is dominant than when noise is negligible (and the L2 difference stems mainly from imperfect T1- and T2-sensitivity).
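
To make this concrete: one (naive) way to include the noise level would be to inject noise into the simulated acquisition before the L2 comparison. This is my own toy sketch, not something from the paper, and noise_std would have to reflect the real noise floor of the scanner:

    import torch

    def noisy_l2_loss(simulated, target, noise_std):
        # Add Gaussian noise to the simulated signal before comparing to the
        # target; with noise_std = 0 this reduces to the plain L2 loss.
        # The noise sample itself carries no gradient, so the loss stays
        # differentiable w.r.t. the sequence parameters.
        noisy = simulated + noise_std * torch.randn_like(simulated)
        return ((noisy - target) ** 2).mean()

Whether averaging over several noise draws (or accounting for the noise variance analytically) would change which sequence comes out as optimal is exactly what I am curious about.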

Still another: in the reasoning "Given that each event operator in the chain is linear, we can concatenate all tensors into a single linear operator SCANNER (…)", the linearity is non-obvious to me, particularly since there is also a step (I would not call it an 'operator' here) like m = RELAX · m + (1 − RELAX) · m0.
Maybe the message is not that any concatenation of events is linear, but that, if we concatenate from the beginning of time, it is linear w.r.t. m0.
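
To spell out my own guess (just a sketch, not a claim about the paper's implementation): each relaxation step

    m_{k+1} = RELAX · m_k + (1 − RELAX) · m0

is affine in m_k, not linear. But if the chain starts at thermal equilibrium, then by induction m_k = A_k · m0 for some matrix A_k (with A_0 = I), and

    m_{k+1} = (RELAX · A_k + (1 − RELAX)) · m0 =: A_{k+1} · m0,

so the whole concatenation, unrolled from the beginning of time, is a single linear map acting on m0. (The RF and precession events are genuinely linear in m, so they preserve this form as well.)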

As one may notice, I am reading (and discussing) sequentially. Certainly eager to continue reading - and to engage in the discussion.

Miha