A Survey of Knowledge Enhanced Pre-trained Language Models
ACM Transactions on Asian and Low-Resource Language Information Processing (IF 2) Pub Date: 2024-03-01, DOI: 10.1145/3631392
Jian Yang, Xinyu Hu, Gang Xiao, Yulong Shen

Pre-trained language models learn informative word representations from large-scale text corpora through self-supervised learning and, after fine-tuning, achieve promising performance in the field of natural language processing (NLP). These models, however, suffer from poor robustness and a lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). These models demonstrate deeper understanding and logical reasoning and introduce interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss the advancements in pre-trained language models and knowledge representation learning. Then we systematically categorize existing KEPLMs from three different perspectives. Finally, we outline some potential directions of KEPLMs for future research.
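One common knowledge-injection strategy covered by surveys of this kind is to fuse knowledge-graph entity embeddings into the token representations produced by a pre-trained encoder. The sketch below is purely illustrative and is not taken from the paper: the module name, tensor shapes, and the token-to-entity alignment scheme are all assumptions made for the example.

```python
# Minimal, hypothetical sketch of entity-embedding fusion for a KEPLM.
# Assumes token hidden states from some pre-trained encoder and a per-token
# entity id (0 = no linked entity); names and dimensions are illustrative.
import torch
import torch.nn as nn

class KnowledgeFusionLayer(nn.Module):
    """Fuses aligned KG entity embeddings into token hidden states."""
    def __init__(self, hidden_size: int, num_entities: int, entity_dim: int):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, entity_dim, padding_idx=0)
        self.proj = nn.Linear(hidden_size + entity_dim, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, token_hidden: torch.Tensor, entity_ids: torch.Tensor) -> torch.Tensor:
        # token_hidden: (batch, seq_len, hidden_size) from the encoder
        # entity_ids:   (batch, seq_len) entity id aligned to each token
        ent = self.entity_emb(entity_ids)                  # (batch, seq_len, entity_dim)
        fused = torch.cat([token_hidden, ent], dim=-1)     # concatenate features
        return self.norm(token_hidden + self.proj(fused))  # residual fusion

# Toy usage with random tensors standing in for real encoder outputs.
layer = KnowledgeFusionLayer(hidden_size=768, num_entities=10_000, entity_dim=100)
hidden = torch.randn(2, 16, 768)
entity_ids = torch.randint(0, 10_000, (2, 16))
print(layer(hidden, entity_ids).shape)  # torch.Size([2, 16, 768])
```

This reflects only one of the injection styles such a survey would categorize; other families modify the pre-training objective or retrieve knowledge at inference time rather than fusing embeddings.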



Updated: 2024-03-01