Active domain adaptation with mining diverse knowledge: An updated class consensus dictionary approach
Information Sciences (IF 8.1), Pub Date: 2024-03-18, DOI: 10.1016/j.ins.2024.120485
Qing Tian, Liangyu Zhou, Yanan Zhu, Lulu Kang

Domain adaptation (DA) has recently emerged as an effective paradigm for training a target model with labeled source knowledge. When knowledge transfer in DA hits a bottleneck, one effective remedy is to introduce labeled data to guide the DA process. Along this line, many active learning-based DA approaches have emerged to improve the quality of sample selection at the decision boundary. However, these methods do not preserve and exploit cross-domain common knowledge. In this work, we propose active domain adaptation with mining diverse knowledge: an updated class consensus dictionary approach (UCCDA). Specifically, we first initialize the class consensus dictionary with source prior knowledge. Then, we select high-confidence target pseudo-labeled samples through self-training, while assigning labels to the low-confidence samples via oracle annotation. In addition, we design the class consensus dictionary to guide the alignment between the source and target domains, instead of the traditional direct cross-domain data alignment. Remarkably, to prevent error accumulation during consensus dictionary learning, we specially design an anti-forgetting mask matrix that randomly restores the original knowledge. Finally, extensive experiments demonstrate that UCCDA outperforms related state-of-the-art approaches.
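The two mechanisms sketched in the abstract — the confidence-based split between self-trained pseudo-labels and oracle queries, and the anti-forgetting mask that randomly reverts dictionary entries to their initial state — can be illustrated in minimal form as follows. This is a hedged sketch under assumed names and thresholds (`split_by_confidence`, `anti_forgetting_restore`, `threshold=0.9`, `keep_prob=0.9` are all illustrative), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_by_confidence(probs, threshold=0.9):
    """Split target samples by model confidence.

    High-confidence samples keep their self-training pseudo-labels;
    low-confidence samples are routed to the oracle for annotation.

    probs: (n_samples, n_classes) array of softmax class probabilities.
    Returns (pseudo_idx, pseudo_labels, query_idx).
    """
    conf = probs.max(axis=1)                 # confidence per sample
    pseudo_mask = conf >= threshold          # keep as pseudo-labeled
    pseudo_idx = np.where(pseudo_mask)[0]
    pseudo_labels = probs[pseudo_idx].argmax(axis=1)
    query_idx = np.where(~pseudo_mask)[0]    # send to the oracle
    return pseudo_idx, pseudo_labels, query_idx

def anti_forgetting_restore(dict_updated, dict_original, keep_prob=0.9):
    """Hedged sketch of the anti-forgetting mask matrix: a random binary
    mask keeps most updated dictionary entries but reverts a random
    fraction back to the original (source-initialized) knowledge."""
    mask = rng.random(dict_updated.shape) < keep_prob
    return np.where(mask, dict_updated, dict_original)

# Toy example: 4 target samples over 3 classes
probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> pseudo-label 0
    [0.40, 0.35, 0.25],   # uncertain -> oracle query
    [0.05, 0.92, 0.03],   # confident -> pseudo-label 1
    [0.50, 0.30, 0.20],   # uncertain -> oracle query
])
pseudo_idx, pseudo_labels, query_idx = split_by_confidence(probs)
# pseudo_idx -> [0, 2], pseudo_labels -> [0, 1], query_idx -> [1, 3]

restored = anti_forgetting_restore(np.ones((4, 3)), np.zeros((4, 3)))
```

The split mirrors the abstract's self-training / oracle-annotation step, and the mask mirrors the random restoration of original knowledge during dictionary updates.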

Updated: 2024-03-18