CHDNet: A lightweight weakly supervised segmentation network for lung CT image
Displays (IF 4.3), Pub Date: 2024-01-19, DOI: 10.1016/j.displa.2024.102650
Fangfang Lu , Tianxiang Liu , Ting Zhang , Bei Jin , Weiyan Gu

Deep learning methods have ushered in an unprecedented transformation in medical image segmentation by automating the segmentation of computed tomography (CT) slices. However, challenges persist in applying these methods: models often carry a large number of trainable parameters, which hinders their clinical deployment and practical use, and acquiring a large volume of high-quality labels from radiologists for training is prohibitively expensive. In light of these challenges, this paper introduces a weakly supervised method that addresses the scarcity of high-quality labels by annotating only a small subset of pixels within each region of interest; this strategy expedites the labeling process, requiring just a few seconds to complete. Additionally, we present a lightweight segmentation model designed to minimize computational complexity. By incorporating parallel dilated convolutions and depthwise separable convolutions, the model has only 11.03M parameters and segments lesions in a lung CT slice in just 0.05 s, demonstrating high segmentation efficiency. To capture shape and size information from point labels, we propose a Convex-Hull-based Segmentation (CHS) loss function, which enables our weakly supervised model to approximate the performance of a fully supervised model when trained with only point labels. We conduct a comprehensive evaluation, comparing the proposed method with other state-of-the-art methods on multiple metrics across public datasets. The experimental results demonstrate a significant improvement over current state-of-the-art (SOTA) point-label segmentation methods: an increase of 19.08% in Intersection over Union (IoU), 20.54% in Dice Similarity Coefficient (DSC), 2.60% in sensitivity, and 1.33% in specificity. Compared with SOTA fully supervised methods, our approach exhibits an IoU gap of just 2.1%. Further ablation experiments validate the effectiveness of the key methodologies introduced in this work.
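The parameter savings the abstract attributes to depthwise separable convolutions can be made concrete with simple arithmetic: a standard k×k convolution couples every input channel to every output channel, while a depthwise separable one factors this into a per-channel spatial filter plus a 1×1 channel mixer. A minimal sketch, with illustrative channel sizes not taken from the paper:

```python
# Parameter counts for a standard 3x3 convolution vs. a depthwise
# separable one (depthwise 3x3 + pointwise 1x1), biases ignored.
# The channel sizes below are hypothetical, chosen only to show the ratio.

def standard_conv_params(c_in, c_out, k=3):
    # Every output channel gets its own k*k*c_in filter.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k=3):
    depthwise = k * k * c_in   # one k*k spatial filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixes channels
    return depthwise + pointwise

c_in, c_out = 64, 128
std = standard_conv_params(c_in, c_out)   # 73728
sep = separable_conv_params(c_in, c_out)  # 8768
print(std, sep, round(std / sep, 1))      # roughly an 8.4x reduction
```

Dilated convolutions, used in parallel in the paper's model, enlarge the receptive field without adding any parameters at all, which is why the two techniques together keep the model at 11.03M parameters.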
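The CHS loss builds shape and size cues from sparse point annotations via their convex hull. The exact loss formulation is the paper's own, but the underlying primitive can be sketched with Andrew's monotone chain algorithm, which computes the hull of a set of annotated (x, y) points in O(n log n):

```python
# Convex hull of 2D point annotations via Andrew's monotone chain.
# This illustrates only the geometric primitive the CHS loss relies on,
# not the authors' loss function itself.

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# Interior points are excluded from the hull:
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

A mask derived from the hull of the clicked points gives the weakly supervised model a coarse estimate of lesion extent, which point labels alone cannot convey.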
