Learning invariant representation using synthetic imagery for object detection
AI Communications (IF 0.8) Pub Date: 2022-12-30, DOI: 10.3233/aic-220039
Ning Jiang 1,2, Jinglong Fang 1, Yanli Shao 1

Recent years have witnessed rapid advances in training and testing deep learning networks on synthetic data, since annotations for synthetic data can be generated automatically. However, a domain discrepancy still exists between synthetic data and real data. In this paper, we address the domain discrepancy issue from three aspects: 1) We design a synthetic image generator that produces automatically labeled images from 3D scenes. 2) We propose a novel adversarial domain adaptation model that learns robust intermediate representations free of distractors to improve transfer performance. 3) We construct a distractor-invariant network and adopt a sample transferability strategy at both the global and local levels to mitigate the cross-domain gap. Exploratory experiments demonstrate that the proposed model outperforms other state-of-the-art models by a large margin, improving mAP by 10%–15% on various domain adaptation scenarios.
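
To give a concrete picture of the adversarial adaptation idea referenced in aspects 2) and 3), the sketch below shows a generic gradient-reversal domain discriminator combined with a simple entropy-based per-sample transferability weight. It is a minimal illustration written against PyTorch; the module names, feature dimensions, and the weighting rule are assumptions for exposition and do not reproduce the paper's actual architecture.

```python
# Minimal sketch: adversarial domain adaptation with a gradient reversal
# layer (GRL) and a hypothetical per-sample transferability weight.
# Names, dimensions, and the weighting rule are illustrative assumptions,
# not the architecture proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the
    backward pass, so the feature extractor learns to confuse the domain
    discriminator while the discriminator learns to separate domains."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature vector comes from the synthetic (source)
    or the real (target) domain."""
    def __init__(self, in_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat):
        return self.net(feat)  # raw domain logits

def adversarial_da_loss(disc, src_feat, tgt_feat, lambd=1.0):
    """Domain-adversarial loss with an entropy-based transferability weight:
    samples whose domain the discriminator is unsure about are weighted up
    (a hypothetical weighting rule used here only for illustration)."""
    src_logit = disc(grad_reverse(src_feat, lambd))
    tgt_logit = disc(grad_reverse(tgt_feat, lambd))

    def weight(logit):
        # w = 1 + H(p), where H is the binary entropy of the prediction.
        p = torch.sigmoid(logit).clamp(1e-6, 1 - 1e-6)
        ent = -(p * p.log() + (1 - p) * (1 - p).log())
        return (1.0 + ent).detach()

    src_loss = F.binary_cross_entropy_with_logits(
        src_logit, torch.ones_like(src_logit), weight=weight(src_logit))
    tgt_loss = F.binary_cross_entropy_with_logits(
        tgt_logit, torch.zeros_like(tgt_logit), weight=weight(tgt_logit))
    return 0.5 * (src_loss + tgt_loss)
```

In this style of training, the loss above is added to the detector's task loss; the gradient reversal term pushes source (synthetic) and target (real) features toward a shared, domain-invariant representation, while the per-sample weights down-weight examples that transfer poorly.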

Updated: 2022-12-31