Thick Cloud Removal in Multitemporal Remote Sensing Images Using a Coarse-to-Fine Framework
IEEE Geoscience and Remote Sensing Letters (IF 4.8), Pub Date: 2024-03-19, DOI: 10.1109/lgrs.2024.3378691
Yue Zi, Xuedong Song, Fengying Xie, Zhiguo Jiang

Remote sensing (RS) images are widely used for Earth observation. However, cloud contamination greatly degrades the quality of RS images and limits their applications. In this letter, we propose a coarse-to-fine thick cloud removal method for a single pair of multitemporal RS images. First, we perform a global color transformation on a cloud-free reference image, using linear regression coefficients estimated between pixels of the cloudy target image and the reference image in their shared cloud-free regions, to obtain a coarse result. Then, a convolutional neural network (CNN) based on internal constraints refines the coarse result, without requiring the construction of any additional external training dataset in advance. We further design a multiscale feature extraction and fusion module and an auxiliary loss involving the cloud regions to improve the performance of the CNN. Finally, Poisson image fusion is used to generate a seamless cloud-free result. On a simulated test set containing 500 pairs of multitemporal RS images, the proposed method achieves satisfactory results: 25.1277 dB in peak signal-to-noise ratio (PSNR), 0.9077 in structural similarity (SSIM), and 0.9342 in correlation coefficient (CC). Qualitative and quantitative comparisons of the proposed method against several state-of-the-art methods on simulated and real cloudy images demonstrate its superiority.
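The abstract outlines concrete steps that can be illustrated with a short sketch: a per-band linear regression fitted on the pixels that are cloud-free in both the target and the reference image, followed (after the CNN refinement) by Poisson image fusion. The Python sketch below is not the authors' code; it assumes 8-bit RGB arrays and a boolean cloud mask, the function names `coarse_color_transform` and `poisson_blend` are hypothetical, and OpenCV's `seamlessClone` is used as a stand-in for the Poisson fusion step described in the letter.

```python
import numpy as np
import cv2


def coarse_color_transform(target, reference, cloud_mask):
    """Global color transform of the reference image via per-band linear
    regression fitted on pixels that are cloud-free in both images."""
    clear = ~cloud_mask                          # shared cloud-free region
    coarse = reference.astype(np.float32).copy()
    for b in range(reference.shape[2]):
        x = reference[..., b][clear].astype(np.float32)
        y = target[..., b][clear].astype(np.float32)
        a, c = np.polyfit(x, y, deg=1)           # least-squares fit y ≈ a*x + c
        coarse[..., b] = a * coarse[..., b] + c
    return np.clip(coarse, 0, 255).astype(np.uint8)


def poisson_blend(filled, target, cloud_mask):
    """Seamlessly paste the reconstructed cloud regions into the target
    (OpenCV Poisson editing as a stand-in for the letter's final fusion)."""
    mask = cloud_mask.astype(np.uint8) * 255
    ys, xs = np.nonzero(cloud_mask)
    center = (int(xs.mean()), int(ys.mean()))
    return cv2.seamlessClone(filled, target, mask, center, cv2.NORMAL_CLONE)


# Usage sketch: color-match the reference to the cloudy target, then blend
# the reconstructed cloud regions back into the target without visible seams.
# coarse = coarse_color_transform(target, reference, cloud_mask)
# result = poisson_blend(coarse, target, cloud_mask)
```

In the letter, the coarse result produced by this kind of color transformation is further refined by an internally constrained CNN before the final fusion; the sketch only covers the two steps that do not require training.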

Updated: 2024-03-19