Touch the Core: Exploring Task Dependence Among Hybrid Targets for Recommendation
arXiv - CS - Information Retrieval. Pub Date: 2024-03-26, DOI: arXiv-2403.17442
Xing Tang, Yang Qiao, Fuyuan Lyu, Dugang Liu, Xiuqiang He

As user behaviors on business platforms grow more complex, online recommendation focuses increasingly on reaching the core conversions that are most closely tied to platform interests. These core conversions are usually continuous targets, such as \textit{watch time} and \textit{revenue}, whose prediction can be enhanced by earlier discrete conversion actions. Multi-task learning (MTL) is therefore a natural paradigm for learning these hybrid targets. However, existing work mainly investigates the sequential dependence among discrete conversion actions and neglects the more complex dependence between discrete conversions and the final continuous conversion. Moreover, jointly optimizing hybrid tasks with strong task dependence suffers from instability, since the core regression task can exert a disproportionate influence on the other tasks. In this paper, we study the MTL problem with hybrid targets for the first time and propose the Hybrid Targets Learning Network (HTLNet) to explore task dependence and improve optimization. Specifically, we introduce a label embedding for each task to explicitly transfer label information among tasks, which effectively captures logical task dependence. We further design a gradient-adjustment regime between the final regression task and the other classification tasks to stabilize optimization. Extensive experiments on two public offline datasets and one real-world industrial dataset validate the effectiveness of HTLNet, and online A/B tests on a financial recommender system show that our model achieves superior improvements.
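The abstract's central idea, transferring label information from discrete conversion tasks into the prediction of a continuous core target, can be illustrated with a minimal forward-pass sketch. This is not the paper's architecture: the layer sizes, the single classification task, and the "soft" expectation over class embeddings are all illustrative assumptions; the gradient-adjustment regime is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
d_in, d_hid, d_emb = 8, 16, 4

# Shared bottom network
W_shared = rng.normal(size=(d_in, d_hid))

# Classification tower for a discrete conversion action (e.g., purchase)
W_cls = rng.normal(size=(d_hid, 1))

# Label-embedding table: one learned vector per discrete class (0 / 1)
label_emb = rng.normal(size=(2, d_emb))

# Regression tower for the continuous core target (e.g., revenue);
# it consumes the shared features concatenated with the label embedding,
# so label information flows from the discrete task into the regression task.
W_reg = rng.normal(size=(d_hid + d_emb, 1))

def forward(x):
    h = np.tanh(x @ W_shared)                       # shared representation
    p_cls = 1.0 / (1.0 + np.exp(-(h @ W_cls)))      # discrete conversion prob.
    # Soft label embedding: expectation over the two class embeddings,
    # weighted by the predicted conversion probability.
    e = p_cls * label_emb[1] + (1.0 - p_cls) * label_emb[0]
    y_reg = np.concatenate([h, e], axis=1) @ W_reg  # continuous target
    return p_cls, y_reg

x = rng.normal(size=(3, d_in))
p, y = forward(x)
print(p.shape, y.shape)  # (3, 1) (3, 1)
```

Because the regression tower sees the (soft) predicted label of the upstream task, its gradient also flows back through the classification tower, which is exactly the coupling the paper's gradient-adjustment regime is designed to keep stable.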

Updated: 2024-03-27