InstUPR : Instruction-based Unsupervised Passage Reranking with Large Language Models
arXiv - CS - Information Retrieval. Pub Date: 2024-03-25, arXiv:2403.16435
Chao-Wei Huang, Yun-Nung Chen

This paper introduces InstUPR, an unsupervised passage reranking method based on large language models (LLMs). Different from existing approaches that rely on extensive training with query-document pairs or retrieval-specific instructions, our method leverages the instruction-following capabilities of instruction-tuned LLMs for passage reranking without any additional fine-tuning. To achieve this, we introduce a soft score aggregation technique and employ pairwise reranking for unsupervised passage reranking. Experiments on the BEIR benchmark demonstrate that InstUPR outperforms unsupervised baselines as well as an instruction-tuned reranker, highlighting its effectiveness and superiority. Source code to reproduce all experiments is open-sourced at https://github.com/MiuLab/InstUPR
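The two components named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the LLM is prompted to rate a passage's relevance on a 1-5 scale, that we can read the log-probabilities of the score tokens "1".."5", and that pairwise reranking orders passages by how often each wins an LLM comparison; the function names and the `compare` callback are hypothetical.

```python
import math
from itertools import combinations

def soft_relevance_score(score_logprobs):
    """Soft score aggregation (sketch): renormalize the LLM's
    log-probabilities over the score tokens "1".."5" with a softmax
    and return the expected score, rather than taking the argmax."""
    labels = [1, 2, 3, 4, 5]
    logps = [score_logprobs[str(k)] for k in labels]
    m = max(logps)                                  # for numerical stability
    weights = [math.exp(lp - m) for lp in logps]
    total = sum(weights)
    return sum(k * w / total for k, w in zip(labels, weights))

def pairwise_rerank(passages, compare):
    """Pairwise reranking (sketch): `compare(a, b)` is a hypothetical
    callback that asks the LLM which passage better answers the query
    and returns True if `a` wins. Passages are ordered by win count."""
    wins = {p: 0 for p in passages}
    for a, b in combinations(passages, 2):
        if compare(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(passages, key=lambda p: wins[p], reverse=True)
```

For example, a uniform distribution over the five score tokens yields a soft score of 3.0, while mass concentrated on "5" pushes the score toward 5; the expectation preserves the model's uncertainty instead of discarding it.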

Updated: 2024-03-27