Session-based recommendations with sequential context using attention-driven LSTM
Computers & Electrical Engineering ( IF 4.3 ) Pub Date : 2024-02-23 , DOI: 10.1016/j.compeleceng.2024.109138
Chhotelal Kumar , Mukesh Kumar

A session-based recommender system (SBRS) captures the dynamic behavior of a user to recommend the next item in the current session. Given the user's past interactions in the ongoing session, the SBRS predicts the next item the user is likely to interact with. Sessions can vary in duration from minutes to hours. Many recommender systems prioritize longer sessions, yet most datasets contain more short sessions, and predicting the next item in a short session is challenging due to limited context. Additionally, obtaining item embeddings is problematic in most SBRSs because they rely on one-hot encoding, which suffers from data sparsity. To overcome these issues, a long short-term memory (LSTM) network with an attention mechanism has been proposed: the LSTM captures sequential context, while the attention mechanism focuses on the target items. To further address the data sparsity problem, the Word2Vec embedding technique is used. The proposed model was tested on two publicly available datasets, 30Music and RSC19, and the results were compared with basic sequence models, i.e., RNN and LSTM. LSTM achieved a 41.95% hit rate on 30Music, while LSTM-Attention achieved 81.47% on RSC19. In summary, LSTM outperformed RNN and LSTM-Attention on 30Music, whereas LSTM with attention outperformed the other models on RSC19.
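To make the described pipeline concrete, below is a minimal sketch of the two ideas in the abstract: learning dense item embeddings with Word2Vec instead of one-hot encoding, and feeding them to an LSTM with additive attention over the hidden states to score the next item. This is not the authors' implementation; the session data, layer sizes, and hyperparameters are all illustrative placeholders.

```python
# Minimal sketch of a Word2Vec + attention-driven LSTM recommender.
# Toy data and hyperparameters are hypothetical, not from the paper.
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Toy sessions: each session is a sequence of item IDs.
sessions = [["i1", "i2", "i3"], ["i2", "i4"], ["i1", "i3", "i4", "i5"]]

# Step 1: learn dense item embeddings with Word2Vec (skip-gram) rather
# than one-hot encoding, which mitigates the sparsity issue.
w2v = Word2Vec(sentences=sessions, vector_size=32, window=3, min_count=1, sg=1)
item_ids = list(w2v.wv.index_to_key)
emb = torch.tensor(w2v.wv[item_ids])  # (num_items, 32)

class AttnLSTM(nn.Module):
    """LSTM over the session with additive attention over hidden states."""
    def __init__(self, embeddings, hidden=64):
        super().__init__()
        num_items, dim = embeddings.shape
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=False)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each time step
        self.out = nn.Linear(hidden, num_items)   # next-item logits

    def forward(self, x):                          # x: (batch, seq_len)
        h, _ = self.lstm(self.embed(x))            # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over steps
        context = (weights * h).sum(dim=1)         # weighted session summary
        return self.out(context)                   # (batch, num_items)

# Predict the next item for a short session [i1, i2].
idx = {item: i for i, item in enumerate(item_ids)}
model = AttnLSTM(emb)
logits = model(torch.tensor([[idx["i1"], idx["i2"]]]))
print("predicted next item:", item_ids[logits.argmax(dim=-1).item()])
```

The attention layer lets the model weight each interaction in the session rather than relying solely on the final LSTM state, which is the intuition behind focusing on target items in short sessions.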
