Data-Cycle-Consistent Adversarial Networks for High-Quality Reconstruction of Undersampled MRI Data
Fang Liu, Ph.D. and Alexey Samsonov, Ph.D.
Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA

Target Audience: Researchers interested in applications of deep learning machinery in image reconstruction.

Introduction: Recently, there has been significant interest in applying the deep learning (DL) concept to image reconstruction from undersampled MRI data. However, the straightforward use of deep convolutional neural networks (CNNs) for MR image restoration is confronted by the high complexity of the problem at hand (compared to applications where DL-based methods have been most successful, e.g., image classification, segmentation, and object recognition). For example, training the network allows learning essential anatomical features but does not guarantee consistency with the native k-space domain, which limits the accuracy of training and reconstruction. Consequently, most published work concentrates on applying DL to optimize established techniques (e.g., compressed sensing [1]) or requires multiple CNNs, each trained specifically for a given iteration of a projection-onto-convex-sets (POCS)-like reconstruction [2]. Here, we propose a novel DL-based approach built on the generative adversarial network (GAN) [3] and termed the data-cycle-consistent GAN (dccGAN), which allows network training consistent with the acquired k-space data and preservation of high-fidelity image structures using a perceptual-loss cost function.

Theory and Methods: In line with the general cycle-consistent GAN (CycleGAN) [4], the proposed dccGAN model (Fig. 1) consists of a standard, "CNN-only" path (top path #1, named the backward path) and an additional forward path (bottom path #2), which generates both adversarial (LossGAN) and data fidelity (Loss(2)) loss terms. The cumulative loss function is used to train the convolutional encoder/decoder. Such a structure not only allows learning the mapping from input image to output image consistent with the k-space data, but also learns a loss function LossGAN that promotes restoration of images with characteristics indistinguishable from other images of the same imaging modality (IUNPAIRED). Such an adversarial loss promotes restoration of images with high perceptual quality [3].

Figure 1. Illustration of the proposed dccGAN model. The training data are subdivided into two subsets (marked by 1 and 2), which are engaged in the backward (1) and forward (2) paths, yielding two L1-norm-based loss terms that measure reconstruction fidelity in the image domain and consistency with the data, respectively. The subsampler performs FFT followed by sampling mask multiplication and inverse FFT. The forward path is used to estimate an additional, perceptual loss term based on the PatchGAN network.

Implementation: We selected UNet [5] for the convolutional encoder/decoder and PatchGAN [6] for the adversarial network. The network was trained on an Nvidia GeForce GTX 1080Ti card using adaptive gradient descent optimization with a learning rate of 0.0002 for 200 epochs.

Evaluation: The evaluation was performed on 20 knee datasets from routine clinical scans collected on a 3T scanner (MR750, GE Healthcare) using a T2-weighted coronal fast spin-echo sequence (TR/TE=2125/20 ms, 420×448 matrix, 32 slices). The training data were obtained by augmenting a subset of the knee images using 2D affine spatial transformations. Fast acquisition was simulated by retaining 8% of the central k-space lines and undersampling the remaining lines at R=8 (a ~5-fold net acceleration). Training and reconstruction took ~15 hours and ~2 sec, respectively.

Results: Figure 2 shows representative reconstruction results. Undersampling prevented accurate reconstruction by inverse FFT (Zero Filling) or by using available constraints (POCS with object support). The "CNN-Only" method (i.e., the submodel utilizing only the top, backward path in Fig. 1) reduced aliasing but led to blurring and loss of sharpness and texture. Our dccGAN removed aliasing and simultaneously preserved bone microstructure that appears similar to the reference. The highest degree of correspondence between dccGAN and the reference was confirmed by Structural Similarity Index (SSIM) estimates [7], which were 0.79, 0.83, 0.90, and 0.94 for Zero Filling, POCS, CNN-Only, and dccGAN, respectively.

Figure 2. Examples of the proposed dccGAN reconstructed images and comparison with other reconstruction methods at R=8.

Discussion: We proposed a novel image reconstruction approach that yields superior results compared to traditional methods, including standard CNN training. Our method utilizes a data-cycle-consistent adversarial network for training that is consistent with the k-space data. The data fidelity reinforcement and the incorporation of an adversarial loss promise to provide image reconstruction from highly undersampled data with high perceptual quality.

References: [1] Hammernik et al., MRM, 2017; [2] Schlemper et al., IEEE TMI, 2017; [3] Goodfellow et al., arXiv, 2014; [4] Zhu et al., arXiv, 2017; [5] Ronneberger et al., MICCAI, 2015; [6] Isola et al., CVPR, 2017; [7] Wang et al., IEEE TIP, 2004.
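The subsampler described in the Figure 1 caption (FFT, sampling-mask multiplication, inverse FFT), combined with the sampling pattern used in the Evaluation (8% of central k-space lines retained, remaining lines undersampled at R=8), can be sketched in NumPy as follows. Function and variable names are illustrative and do not come from the authors' implementation:

```python
import numpy as np

def make_mask(n_lines, central_frac=0.08, r=8, seed=0):
    """1D phase-encode sampling mask: keep the central fraction of
    k-space lines fully, undersample the remaining lines at rate R."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    c = int(round(central_frac * n_lines))
    lo = (n_lines - c) // 2
    mask[lo:lo + c] = True                       # fully sampled centre
    outer = np.flatnonzero(~mask)
    keep = rng.choice(outer, size=len(outer) // r, replace=False)
    mask[keep] = True
    return mask

def subsample(img, mask):
    """Subsampler from Fig. 1: FFT -> mask multiplication -> inverse FFT."""
    k = np.fft.fftshift(np.fft.fft2(img), axes=0)  # centre low frequencies
    k = k * mask[:, None]                          # zero out unsampled lines
    return np.fft.ifft2(np.fft.ifftshift(k, axes=0))

mask = make_mask(448)                # 448 phase-encode lines, as in Evaluation
img = np.random.rand(448, 448)
aliased = subsample(img, mask)       # input to the convolutional encoder/decoder
```

With a fully sampled mask the subsampler reduces to an identity (up to numerical precision), which is a convenient sanity check for the FFT conventions.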
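The cumulative loss combining the two L1-norm fidelity terms from Fig. 1 with the adversarial term can be sketched as below. The relative weights (lam_*) and the least-squares form of the adversarial term are assumptions for illustration; the abstract specifies only that the backward and forward paths yield L1-based losses and that LossGAN comes from the PatchGAN discriminator:

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def dcc_loss(recon, reference, resampled, acquired, d_out,
             lam_img=1.0, lam_data=1.0, lam_adv=0.01):
    """Sketch of the cumulative dccGAN training loss.
    Loss(1): image-domain L1 between reconstruction and reference (backward path).
    Loss(2): L1 between the re-subsampled reconstruction and acquired data
             (forward path, data consistency).
    LossGAN: adversarial term on PatchGAN outputs d_out -- a least-squares
             form is assumed here, not stated in the abstract."""
    loss_img = l1(recon, reference)           # backward-path fidelity
    loss_data = l1(resampled, acquired)       # forward-path data consistency
    loss_adv = np.mean((d_out - 1.0) ** 2)    # generator tries to fool D
    return lam_img * loss_img + lam_data * loss_data + lam_adv * loss_adv
```

When the reconstruction matches the reference, is consistent with the acquired data, and fully fools the discriminator, the loss vanishes, which matches the intent of the cumulative objective.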
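The SSIM comparison in the Results follows the standard definition [7]. A minimal single-window sketch of that statistic is shown below; the full index averages it over local (typically Gaussian-weighted) windows, so this global form is only a coarse approximation of the values reported above:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM statistic [7]. Constants C1, C2 use the
    conventional K1=0.01, K2=0.03 stabilizers."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and any mismatch lowers the score, which is why SSIM is a natural summary of reconstruction fidelity here.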
