
DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis

2019-04-29
Hongwei Li, Johannes C. Paetzold, Anjany Sekuboyina, Florian Kofler, Jianguo Zhang, Jan S. Kirschke, Benedikt Wiestler, Bjoern Menze

Abstract

Recent studies on medical image synthesis reported promising results using generative adversarial networks, mostly focusing on one-to-one cross-modality synthesis. Naturally, the idea arises that a target modality would benefit from multi-modal input. Synthesizing MR imaging sequences is highly attractive for clinical practice, as single sequences are often missing or of poor quality (e.g. due to motion). However, existing methods fail to scale up to image volumes with a high number of modalities and to extensive non-aligned volumes, common drawbacks of complex multi-modal imaging sequences. To address these limitations, we propose a novel, scalable and multi-modal approach called DiamondGAN. Our model is capable of performing flexible non-aligned cross-modality synthesis and data infill when given multiple modalities or any arbitrary subset of them. It learns structured information from non-aligned input modalities in an end-to-end fashion. We synthesize two MRI sequences with clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), which are reconstructed from three common MRI sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish our synthetic DIR images from real ones.
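The abstract describes a generator conditioned on an arbitrary subset of input sequences. Below is a minimal PyTorch sketch of one common way to realize such conditioning: missing modalities are zero-filled and a per-modality availability mask is concatenated to the input. This is an illustrative assumption, not the authors' implementation; the actual DiamondGAN conditioning scheme, layer sizes, and class name `SubsetGenerator` here are hypothetical.

```python
# Minimal sketch (not the authors' code): a generator that accepts any
# subset of three input MRI sequences by zero-filling missing ones and
# concatenating a per-modality availability mask. Layer sizes are
# illustrative only.
import torch
import torch.nn as nn

N_MODALITIES = 3  # e.g. three common input sequences, as in the paper

class SubsetGenerator(nn.Module):
    def __init__(self, n_mod=N_MODALITIES, base=32):
        super().__init__()
        # Input: n_mod image channels + n_mod mask channels
        self.net = nn.Sequential(
            nn.Conv2d(2 * n_mod, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 3, padding=1), nn.Tanh(),  # one target sequence
        )

    def forward(self, images, available):
        # images:    (B, n_mod, H, W), zeros where a sequence is missing
        # available: (B, n_mod) binary flags marking present sequences
        b, n, h, w = images.shape
        mask = available[:, :, None, None].expand(b, n, h, w)
        return self.net(torch.cat([images * mask, mask], dim=1))

# Usage: synthesize a target sequence (e.g. DIR) from two of three inputs,
# with the second input modality missing and zero-filled.
gen = SubsetGenerator()
x = torch.randn(2, 3, 64, 64)
avail = torch.tensor([[1., 0., 1.], [1., 0., 1.]])
fake = gen(x, avail)  # shape (2, 1, 64, 64)
```

The mask channels let a single network distinguish "modality absent" from "modality present but dark", which is one plausible mechanism for the subset flexibility the abstract claims.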

URL

http://arxiv.org/abs/1904.12894

PDF

http://arxiv.org/pdf/1904.12894

