Physics driven deep Retinex fusion for adaptive infrared and visible image fusion
Optical Engineering (IF 1.3), Pub Date: 2023-08-01, DOI: 10.1117/1.oe.62.8.083101
Yuanjie Gu 1 , Zhibo Xiao 1 , Yinghan Guan 2 , Haoran Dai 2 , Cheng Liu 1 , Liang Xue 3 , Shouyu Wang 4

Infrared (IR) imaging can highlight thermal radiation objects even under poor lighting or severe occlusion but suffers from low resolution, contrast, and signal-to-noise ratio. Visible (VIS) light imaging can guarantee abundant texture details of targets, but it fails under low-light or occluded conditions. IR and VIS image fusion therefore has broad applications, yet it remains a challenging task because conventional methods cannot balance dynamic range, edge enhancement, and lightness constancy during fusion. To overcome these drawbacks, we propose a self-supervised, dataset-free method for adaptive IR and VIS image fusion named deep Retinex fusion (DRF). The key idea of DRF is to first generate component priors disentangled from a physical model using generative networks; then combine these priors, which are captured by the networks via adaptive fusion loss functions based on Retinex theory; and finally reconstruct the IR and VIS fusion result. Furthermore, to verify the effectiveness of the reported physics-driven DRF, qualitative and quantitative experiments comparing it with other state-of-the-art methods are performed on public datasets and in practical applications. These results show that DRF can distinguish between day and night scenes while preserving abundant texture details and high-contrast IR information. Additionally, DRF adaptively balances IR and VIS information and has good noise immunity. Therefore, compared to methods trained on large datasets, DRF, which works without any dataset, achieves the best fusion performance.
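The abstract describes the DRF pipeline only at a high level. As a rough illustration of the kind of dataset-free, Retinex-based optimization it outlines, the sketch below fits two untrained generator networks to a single IR/VIS pair, models the fused image as reflectance × illumination per Retinex theory, and drives the components with simple fusion losses. The architecture, the helper names (small_generator, gradient, fuse), and the specific loss terms and weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a dataset-free, Retinex-inspired
# fusion loop in PyTorch. Network sizes, loss terms, and how the reflectance /
# illumination priors are tied to the VIS / IR inputs are assumptions for
# illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def small_generator(out_ch: int) -> nn.Sequential:
    """Untrained CNN (deep-image-prior style) mapping fixed noise to one component."""
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
    )


def gradient(x: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude used as a texture proxy."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))


def fuse(ir: torch.Tensor, vis: torch.Tensor, steps: int = 2000) -> torch.Tensor:
    """ir, vis: (1, 1, H, W) tensors in [0, 1]; returns the fused image."""
    _, _, h, w = ir.shape
    g_refl = small_generator(1)    # reflectance prior: texture / details
    g_illu = small_generator(1)    # illumination prior: lightness / thermal radiation
    z_r = torch.rand(1, 32, h, w)  # fixed noise inputs
    z_l = torch.rand(1, 32, h, w)
    opt = torch.optim.Adam(list(g_refl.parameters()) + list(g_illu.parameters()), lr=1e-3)

    for _ in range(steps):
        refl = g_refl(z_r)
        illu = g_illu(z_l)
        fused = refl * illu  # Retinex model: image = reflectance x illumination

        # Assumed fusion losses: reflectance follows VIS texture (gradients),
        # illumination follows the brighter of the IR / VIS intensities, and
        # the fused result stays close to both inputs.
        loss_texture = F.l1_loss(gradient(refl), gradient(vis))
        loss_intensity = F.l1_loss(illu, torch.maximum(ir, vis))
        loss_recon = F.l1_loss(fused, ir) + F.l1_loss(fused, vis)
        loss = loss_recon + loss_texture + loss_intensity

        opt.zero_grad()
        loss.backward()
        opt.step()

    return fused.detach()
```

Because the generators are optimized per image pair rather than trained on a dataset, the loop is self-supervised in the same spirit as the paper's dataset-free claim; the actual DRF losses and physical decomposition are more elaborate than this toy example.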

Updated: 2023-08-01