Distributional Fairness-aware Recommendation
ACM Transactions on Information Systems (IF 5.6), Pub Date: 2024-04-29, DOI: 10.1145/3652854
Hao Yang, Xian Wu, Zhaopeng Qiu, Yefeng Zheng, Xu Chen

Fairness has gradually been recognized as a significant problem in the recommendation domain. Previous models usually achieve fairness by reducing the average performance gap between different user groups. However, the average performance may not sufficiently capture all the characteristics of the performance within a user group. Thus, an equivalent average performance does not necessarily mean the recommender model is fair; for example, the variances of the performance can still differ. To alleviate this problem, in this article, we define a novel type of fairness that requires the performance distributions across different user groups to be similar. We prove that with the same performance distribution, the numerical characteristics of the group performance, including the expectation, the variance, and any higher-order moment, are also the same. To achieve distributional fairness, we propose a generative adversarial training framework. Specifically, we regard the recommender model as the generator that computes the performance for each user in the different groups, and we deploy a discriminator to judge which group a performance score is drawn from. By iteratively optimizing the generator and the discriminator, we can theoretically prove that the optimal generator (the recommender model) indeed leads to equivalent performance distributions. To smooth the adversarial training process, we propose a novel dual curriculum learning strategy for optimally scheduling the training samples. Additionally, we tailor our framework to better suit top-N recommendation tasks by incorporating softened ranking metrics as measures of performance discrepancies. We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of our model.
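To make the generator/discriminator mechanism described above concrete, the following is a minimal sketch of one adversarial training round, assuming a two-group setting and a scalar, differentiable per-user performance score (e.g., a softened ranking metric). All names here (PerfDiscriminator, rec_model, perf_fn, lambda_adv) are illustrative assumptions, not the authors' actual implementation, and the dual curriculum scheduling is omitted.

```python
# Hypothetical PyTorch rendering of the adversarial idea in the abstract:
# the recommender (generator) fits the data while making the two groups'
# performance distributions indistinguishable to a group discriminator.
import torch
import torch.nn as nn

class PerfDiscriminator(nn.Module):
    """Guesses which user group a performance score was drawn from."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "group 1"
        )

    def forward(self, perf: torch.Tensor) -> torch.Tensor:
        return self.net(perf.unsqueeze(-1)).squeeze(-1)

def adversarial_step(rec_model, disc, batch, opt_rec, opt_disc,
                     perf_fn, lambda_adv: float = 0.1):
    """One generator/discriminator round (illustrative only).

    `batch` carries users, items, binary labels, and a 0/1 group id;
    `perf_fn` maps model outputs and labels to a differentiable
    per-user performance score.
    """
    users, items, labels, group = batch
    bce = nn.BCEWithLogitsLoss()

    # 1) Discriminator step: predict the group from (detached)
    #    per-user performance scores.
    perf = perf_fn(rec_model(users, items), labels)
    disc_loss = bce(disc(perf.detach()), group.float())
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()

    # 2) Generator step: minimize the recommendation loss while
    #    maximizing the discriminator's loss, i.e., pushing it toward
    #    chance level so the group performance distributions match.
    perf = perf_fn(rec_model(users, items), labels)
    rec_loss = bce(rec_model(users, items), labels.float())
    fool_loss = -bce(disc(perf), group.float())
    gen_loss = rec_loss + lambda_adv * fool_loss
    opt_rec.zero_grad()
    gen_loss.backward()
    opt_rec.step()
    return rec_loss.item(), disc_loss.item()
```

Under this reading, a discriminator driven to chance-level accuracy corresponds to the paper's theoretical result: at the adversarial optimum, the per-group performance distributions (and hence all their moments) coincide.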



