LMACL: Improving Graph Collaborative Filtering with Learnable Model Augmentation Contrastive Learning
ACM Transactions on Knowledge Discovery from Data (IF 3.6). Pub Date: 2024-04-12. DOI: 10.1145/3657302
Xinru Liu 1, Yongjing Hao 2, Lei Zhao 2, Guanfeng Liu 3, Victor S. Sheng 4, Pengpeng Zhao 2
Graph collaborative filtering (GCF) has achieved exciting recommendation performance with its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noisy effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this paper, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as a backbone encoder to incorporate multi-hop neighbors into graph-level original node representations by leveraging the high-order connectivity in user-item interaction graphs. At the same time, we treat a multi-head graph attention network (GAT) as an augmentation-view generator to adaptively produce high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation. Extensive experiments on several benchmark datasets demonstrate that LMACL improves over the strongest baseline in terms of Recall and NDCG by 2.5-3.8% and 1.6-4.0%, respectively. Our model implementation code is available at https://github.com/LiuHsinx/LMACL.
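The pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the real model uses learned weight matrices, multi-head attention, and gradient-based joint training, whereas here the GCN view is LightGCN-style parameter-free propagation, the "GAT" view is a single dot-product attention head, and the contrastive objective is a standard InfoNCE loss between the two views. All function names and the toy interaction matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, dim = 4, 5, 8
n_nodes = n_users + n_items

# Toy user-item interactions -> symmetric bipartite adjacency (illustrative data).
R = (rng.random((n_users, n_items)) < 0.4).astype(float)
A = np.zeros((n_nodes, n_nodes))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T

# Symmetrically normalized adjacency, as used by LightGCN-style GCN encoders.
deg = A.sum(1)
d_inv_sqrt = np.zeros(n_nodes)
d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

E0 = rng.normal(size=(n_nodes, dim))  # initial node embeddings


def gcn_encode(E, layers=2):
    """Graph-level view: mean of the embeddings propagated over `layers` hops."""
    outs, cur = [E], E
    for _ in range(layers):
        cur = A_hat @ cur
        outs.append(cur)
    return np.mean(outs, axis=0)


def gat_like_view(E):
    """Node-level augmented view: dot-product attention scores, softmaxed over
    each node's neighbors (a one-head simplification of multi-head GAT)."""
    scores = np.where(A > 0, E @ E.T, -np.inf)
    out = E.copy()
    for i in range(n_nodes):
        if deg[i] > 0:
            w = np.exp(scores[i] - scores[i][A[i] > 0].max())  # exp(-inf) = 0
            w /= w.sum()
            out[i] = w @ E
    return out


def info_nce(z1, z2, tau=0.2):
    """InfoNCE loss aligning the two views of the same node against other nodes."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = np.diag(sim) - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()


view_g = gcn_encode(E0)      # graph-level view from the GCN backbone
view_n = gat_like_view(E0)   # node-level augmented view from the GAT-like generator
loss = info_nce(view_g, view_n)
print(np.isfinite(loss))
```

In the full framework this loss would be combined with the recommendation (e.g. BPR) objective and backpropagated through both encoders, which is what makes the augmentation learnable rather than random or hand-crafted.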




Updated: 2024-04-12