C-HRNet: High Resolution Network Based on Contexts for Single-Frame Phase Unwrapping
IEEE Photonics Journal ( IF 2.4 ) Pub Date : 2024-03-25 , DOI: 10.1109/jphot.2024.3381544
Wenbo Zhao, Jing Yan, Dongyang Jin, Jing Ling

Phase unwrapping is an important research direction in fringe projection profilometry, and improving the accuracy of phase unwrapping from a single wrapped phase map has been a research focus. Existing deep learning methods for phase unwrapping from a single wrapped phase map suffer from accuracy issues when dealing with noise, phase surfaces with a large variation range, or isolated areas. In this paper, we propose a novel approach to address these challenges. We treat phase unwrapping as a semantic segmentation problem, introduce a new stage to the high-resolution network, and add an object contextual representation module. This approach allows us to predict the fringe order map from a single wrapped phase map without any pre-processing or post-processing. Our method can accurately recover the phase information of objects under various challenging conditions. We validate the effectiveness and superiority of our approach by comparing it, qualitatively and quantitatively, with three deep learning methods for spatial phase unwrapping and one traditional spatial phase unwrapping method.
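The abstract's framing rests on a standard relation: once a network predicts the integer fringe-order map k for each pixel, the absolute phase follows directly as the wrapped phase plus 2πk. As a minimal sketch of that final recovery step (not the paper's network itself; the function name and the synthetic 1-D ramp are illustrative assumptions):

```python
import numpy as np

def unwrap_with_fringe_order(wrapped_phase, fringe_order):
    """Recover absolute phase from a wrapped phase map and an
    integer fringe-order map: phi = phi_wrapped + 2*pi*k."""
    return wrapped_phase + 2.0 * np.pi * fringe_order

# Synthetic check: a linear phase ramp wrapped into (-pi, pi].
true_phase = np.linspace(0.0, 20.0, 256)
wrapped = np.angle(np.exp(1j * true_phase))              # wrap to (-pi, pi]
order = np.round((true_phase - wrapped) / (2 * np.pi))   # ground-truth fringe orders
recovered = unwrap_with_fringe_order(wrapped, order)
assert np.allclose(recovered, true_phase)
```

In the method described above, `fringe_order` would come from the segmentation network's per-pixel class prediction rather than from ground truth, which is why its robustness to noise and isolated regions matters: a single misclassified pixel produces a 2π jump error at that location.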
