Play to Your Strengths: Collaborative Intelligence of Conventional Recommender Models and Large Language Models
arXiv - CS - Information Retrieval | Pub Date: 2024-03-25 | DOI: arxiv-2403.16378
Yunjia Xi, Weiwen Liu, Jianghao Lin, Chuhan Wu, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu

The rise of large language models (LLMs) has opened new opportunities in Recommender Systems (RSs) by enhancing user behavior modeling and content understanding. However, current approaches that integrate LLMs into RSs rely solely on either an LLM or a conventional recommender model (CRM) to generate the final recommendations, without considering which data segments each excels in. To fill this gap, we conduct experiments on the MovieLens-1M and Amazon-Books datasets, comparing the performance of a representative CRM (DCNv2) and an LLM (LLaMA2-7B) on various groups of data samples. Our findings reveal that LLMs excel in data segments where CRMs exhibit lower confidence and precision, while the samples where the CRM excels are relatively challenging for the LLM, which requires substantial training data and a long training time to reach comparable performance. This suggests a potential synergy in combining the LLM and the CRM. Motivated by these insights, we propose Collaborative Recommendation with conventional Recommender and Large Language Model (dubbed CoReLLa). In this framework, we first jointly train the LLM and the CRM and address the issue of decision boundary shifts through an alignment loss. Then, the resource-efficient CRM, with its shorter inference time, handles simple and moderate samples, while the LLM processes the small subset of samples that are challenging for the CRM. Our experimental results demonstrate that CoReLLa significantly outperforms state-of-the-art CRM and LLM methods, underscoring its effectiveness in recommendation tasks.
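The routing idea in the abstract — a cheap CRM serving the samples it is confident about, with the small remainder deferred to the LLM — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the confidence measure (distance of the CRM's score from 0.5), and the threshold `tau` are all assumptions for the sake of the example.

```python
# Sketch of confidence-based routing between a CRM and an LLM.
# The CRM's score is treated as a click probability in [0, 1]; its
# confidence is measured as the distance of that score from 0.5.
# Names and the threshold tau are illustrative assumptions.

def route_prediction(crm_score: float, llm_predict, sample, tau: float = 0.4):
    """Return a prediction, deferring to the LLM when the CRM's
    confidence falls below the threshold tau."""
    confidence = abs(crm_score - 0.5)
    if confidence >= tau:
        return crm_score           # CRM is confident: use its score directly
    return llm_predict(sample)     # hard sample: fall back to the LLM

# Usage: a confident CRM score passes through; an uncertain one is deferred.
confident = route_prediction(0.95, llm_predict=lambda s: 0.7, sample={"user": 1})
deferred = route_prediction(0.52, llm_predict=lambda s: 0.7, sample={"user": 2})
```

Because most samples clear the confidence threshold, the expensive LLM is invoked only on the small hard subset, which is what makes the combination resource-efficient at inference time.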

Updated: 2024-03-27