An improved semantic segmentation for breast lesion from dynamic contrast enhanced MRI images using deep learning
International Journal of Imaging Systems and Technology (IF 3.3), Pub Date: 2024-01-19, DOI: 10.1002/ima.23026
C. Sahaya Pushpa Sarmila Star, A. Milton, T. M. Inbamalar

The World Health Organization (WHO) reports that approximately 2.3 million breast cancer cases are diagnosed each year. Early detection is key to tackling this issue, and Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is a preferred method for detecting tumors. Convolutional Neural Networks (CNNs) can segment images accurately without human assistance. The objective of this study is to develop a computer-aided diagnosis system that segments breast lesions from DCE-MRI images. A 92-layer deep CNN, called DCNN-92, and a 94-layer deep CNN, called DCNN-94, have been designed to identify lesions. The proposed methods have been validated on images from The Cancer Imaging Archive (TCIA) database. The proposed DCNN-92 model segments the tumor pixels effectively, but it exhibits some misclassifications: certain background pixels are labeled as tumor, and some tumor pixels are labeled as background. To segment the tumor pixels more accurately, two grouped convolution layers are added to the DCNN-92 model. The resulting 94-layer model, DCNN-94, segments most of the tumor pixels correctly, thereby improving segmentation performance. Compared with DCNN-92, the DCNN-94 model performs better on standard metrics such as sensitivity, Dice coefficient, Jaccard coefficient, and area under the curve (AUC), and it also requires less training time. The DCNN-94 model, which combines dilated and grouped convolutions, is concluded to be an effective method for lesion segmentation from breast DCE-MRI images compared with existing methods.
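To make the reported evaluation concrete, the following minimal Python sketch computes the pixel-wise metrics named above (sensitivity, Dice coefficient, and Jaccard coefficient) from binary masks. This is the standard formulation of these metrics, not code from the paper.

```python
import numpy as np

def segmentation_metrics(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Pixel-wise sensitivity, Dice, and Jaccard for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # tumor pixels correctly labeled tumor
    fp = np.logical_and(pred, ~gt).sum()   # background pixels labeled tumor
    fn = np.logical_and(~pred, gt).sum()   # tumor pixels labeled background
    eps = 1e-8                             # guard against empty masks
    sensitivity = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    return sensitivity, dice, jaccard
```

Likewise, the architectural change described in the abstract, appending grouped convolution layers with a dilation factor, could look roughly like the PyTorch sketch below. The channel count, group count, and dilation value are illustrative assumptions, not the exact DCNN-94 configuration given in the paper.

```python
import torch.nn as nn

# Illustrative only: two grouped, dilated 3x3 convolution blocks appended to a
# backbone's feature maps (assumed here to have 64 channels); the paper's
# DCNN-94 defines its own layer sizes.
extra_layers = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2, groups=4),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2, groups=4),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
```

Grouped convolutions reduce parameters and computation by splitting channels into independent groups, while the dilation enlarges the receptive field without extra weights, which is consistent with the shorter training time reported for DCNN-94.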

Updated: 2024-01-22