YOLO2U-Net: Detection-guided 3D instance segmentation for microscopy
Pattern Recognition Letters ( IF 5.1 ) Pub Date : 2024-03-24 , DOI: 10.1016/j.patrec.2024.03.015
Amirkoushyar Ziabari , Derek C. Rose , Abbas Shirinifard , David Solecki

Microscopy imaging techniques are instrumental for the characterization and analysis of biological structures. As these techniques typically render 3D visualizations of cells by stacking 2D projections, issues such as out-of-plane excitation and low resolution along the z-axis may make it challenging (even for human experts) to detect individual cells in 3D volumes, since non-overlapping cells may appear to overlap. A comprehensive method for accurate 3D instance segmentation of cells in brain tissue is introduced here. The proposed method combines the 2D YOLO detection method with a multi-view fusion algorithm to construct a 3D localization of the cells. Next, the 3D bounding boxes, along with the data volume, are input to a 3D U-Net network designed to segment the primary cell in each 3D bounding box and, in turn, to carry out instance segmentation of cells in the entire volume. The promising performance of the proposed method is shown in comparison with current deep learning-based 3D instance segmentation methods.
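The abstract does not specify how the multi-view fusion step is implemented; as a rough illustration only, the sketch below shows one simple way two 2D detections from orthogonal views (XY and XZ slices of the same volume) could be combined into a single 3D bounding box. The `Box2D` type, the intersection-based matching of the shared x extent, and all names are assumptions for illustration, not the authors' algorithm.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Box2D:
    """Axis-aligned 2D bounding box: (min, max) along each of its two axes."""
    a_min: float
    a_max: float
    b_min: float
    b_max: float

Box3D = Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

def fuse_views(xy_box: Box2D, xz_box: Box2D) -> Optional[Box3D]:
    """Fuse an XY-view detection with an XZ-view detection into a 3D box.

    Both views share the x axis, so the two detections are matched by
    intersecting their x ranges; the y extent comes from the XY view and
    the z extent from the XZ view. Returns None when the x ranges do not
    overlap, i.e. the two detections cannot belong to the same cell.
    """
    x_min = max(xy_box.a_min, xz_box.a_min)
    x_max = min(xy_box.a_max, xz_box.a_max)
    if x_min >= x_max:
        return None  # no shared x extent -> not the same object
    return ((x_min, x_max),
            (xy_box.b_min, xy_box.b_max),   # y extent from XY view
            (xz_box.b_min, xz_box.b_max))   # z extent from XZ view
```

In practice a fusion step would also need to handle many detections per view (e.g. matching by overlap score), but the intersection above captures the core idea of recovering a z extent that a single XY projection cannot provide.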

Updated: 2024-03-24