OA-GAN: organ-aware generative adversarial network for synthesizing contrast-enhanced medical images
Biomedical Physics & Engineering Express Pub Date : 2024-03-18 , DOI: 10.1088/2057-1976/ad31fa
Yulin Yang , Jing Liu , Gan Zhan , Qingqing Chen , Fang Wang , Yinhao Li , Rahul Kumar Jain , Lanfen Lin , Hongjie Hu , Yen-Wei Chen

Contrast-enhanced computed tomography (CE-CT) images are vital for the clinical diagnosis of focal liver lesions (FLLs). However, acquiring CE-CT images imposes a significant burden on patients because of the contrast-agent injection and the extended scanning time. Deep learning-based image synthesis models offer a promising solution: synthesizing CE-CT images from non-contrast CT (NC-CT) images. Unlike natural-image synthesis, medical image synthesis must focus on specific organs or localized regions to ensure an accurate diagnosis, and determining how to effectively emphasize the target organ remains a challenging problem. To address this challenge, we present a novel CE-CT image synthesis model called the Organ-Aware Generative Adversarial Network (OA-GAN). The OA-GAN comprises an organ-aware (OA) network and a dual-decoder-based generator. First, the OA network learns the most discriminative spatial features of the target organ (i.e. the liver), using the ground-truth organ mask as a localization cue. The NC-CT image and the captured features are then fed into the dual-decoder-based generator, whose local and global decoder networks simultaneously synthesize the organ and the entire CE-CT image. Moreover, semantic information extracted from the local decoder is transferred to the global decoder to facilitate better reconstruction of the organ within the entire CE-CT image. Qualitative and quantitative evaluation on a CE-CT dataset demonstrates that the OA-GAN outperforms state-of-the-art approaches for synthesizing two types of CE-CT images, namely arterial-phase and portal-venous-phase images. Additionally, subjective evaluation by expert radiologists and a deep learning-based FLL classification task also confirm that CE-CT images synthesized by the OA-GAN closely resemble real CE-CT images.
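The organ-aware weighting and the local/global decoder fusion described in the abstract can be sketched in a simplified form. This is a minimal NumPy illustration of the general idea (mask-guided feature emphasis and mask-based blending of the two decoder outputs), not the authors' implementation; the function names, the soft-attention floor of 0.5, and the linear blending rule are assumptions for illustration only.

```python
import numpy as np

def organ_aware_attention(features, organ_mask):
    """Weight encoder features by a soft organ-localization map.

    features:   (C, H, W) feature maps extracted from the NC-CT image
    organ_mask: (H, W) soft mask in [0, 1] (e.g. a liver mask used as
                a localization cue, as in the OA network)
    """
    # Emphasized (organ) regions keep their full activations; the
    # background is attenuated but not zeroed, preserving global context.
    attention = 0.5 + 0.5 * organ_mask
    return features * attention[None, :, :]

def fuse_decoders(global_out, local_out, organ_mask):
    """Blend the local (organ) and global decoder outputs.

    Inside the organ mask the local decoder's synthesis dominates;
    outside it, the global decoder's synthesis is kept.
    """
    m = organ_mask[None, :, :]
    return m * local_out + (1.0 - m) * global_out
```

In the actual OA-GAN the transfer from the local to the global decoder happens at the feature level inside the generator rather than as a pixel-level blend, but the sketch conveys why a dedicated organ pathway can sharpen the organ region without degrading the rest of the image.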

Updated: 2024-03-18