LARA: Linguistic-Adaptive Retrieval-Augmented LLMs for Multi-Turn Intent Classification
arXiv - CS - Information Retrieval | Pub Date: 2024-03-25 | DOI: arxiv-2403.16504
Liu Junhua, Tan Yong Keat, Fu Bin

Following the significant achievements of large language models (LLMs), researchers have applied in-context learning to text classification tasks. However, these studies have focused on monolingual, single-turn classification. In this paper, we introduce LARA (Linguistic-Adaptive Retrieval-Augmented Language Models), designed to improve accuracy on multi-turn intent classification tasks across six languages, accommodating the large number of intents found in chatbot interactions. Multi-turn intent classification is notably challenging due to the complexity and evolving nature of conversational context. LARA tackles these issues by combining a fine-tuned smaller model with a retrieval-augmented mechanism integrated within the LLM architecture. This integration allows LARA to dynamically draw on past dialogues and relevant intents, improving its understanding of the context. Furthermore, our adaptive retrieval techniques bolster the cross-lingual capabilities of LLMs without extensive retraining or fine-tuning. Comprehensive experiments demonstrate that LARA achieves state-of-the-art performance on multi-turn intent classification tasks, improving average accuracy by 3.67% over existing methods.
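The abstract does not include an implementation, but its core idea — retrieving labeled dialogues similar to the current multi-turn conversation and supplying them as in-context examples to an LLM classifier — can be sketched. The snippet below is a minimal illustration under stated assumptions, not the authors' code: the bag-of-words similarity stands in for LARA's fine-tuned retriever, and `llm_complete`, the example bank, and all other names are hypothetical placeholders.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; LARA would use a fine-tuned
    # multilingual encoder here (assumption, not the paper's method).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, bank: list[tuple[str, str]], k: int = 3):
    # Rank labeled (utterance, intent) pairs by similarity to the dialogue.
    q = embed(query)
    ranked = sorted(bank, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    return ranked[:k]

def build_prompt(dialogue: list[str], examples) -> str:
    # Assemble retrieved examples plus the multi-turn context into an
    # in-context-learning prompt for the LLM.
    shots = "\n".join(f"Utterance: {u}\nIntent: {y}" for u, y in examples)
    turns = "\n".join(dialogue)
    return f"{shots}\n\nDialogue:\n{turns}\nIntent:"

# Hypothetical usage:
bank = [("where is my parcel", "track_order"),
        ("i want my money back", "refund_request"),
        ("change delivery address", "update_address")]
dialogue = ["User: I ordered shoes last week.",
            "User: They still have not arrived, where are they?"]
prompt = build_prompt(dialogue, retrieve(" ".join(dialogue), bank))
# intent = llm_complete(prompt)  # placeholder for any LLM API call
```

In this sketch the retrieved examples adapt the prompt to the conversation at hand, which is the retrieval-augmented mechanism the abstract describes; the paper's cross-lingual adaptation would additionally involve retrieving across languages.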

Updated: 2024-03-27