Counterfactual Explanation for Fairness in Recommendation
ACM Transactions on Information Systems (IF 5.6) Pub Date: 2024-03-22, DOI: 10.1145/3643670
Xiangmeng Wang 1, Qian Li 2, Dianer Yu 1, Qing Li 3, Guandong Xu 4

Fairness-aware recommendation alleviates discrimination to build trustworthy recommender systems. Explaining the causes of unfair recommendations is critical, as it enables fairness diagnostics and thus secures users' trust in recommendation models. Existing fairness explanation methods suffer from a high computational burden due to the large-scale search space and the greedy nature of the explanation search process. Moreover, they perform feature-level optimization over continuous values, which is not applicable to discrete attributes such as gender and age. In this work, we adopt counterfactual explanations from causal inference and propose to generate attribute-level counterfactual explanations suited to the discrete attributes in recommendation models. We leverage real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes. We propose Counterfactual Explanation for Fairness (CFairER), which generates attribute-level counterfactual explanations from HINs for item exposure fairness. CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with attentive action pruning narrowing the search space of candidate counterfactual attributes; the resulting explanations are both rational and proximate accounts of model fairness. Extensive experiments demonstrate that our model generates faithful explanations while maintaining favorable recommendation performance.
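The paper does not spell out its attentive action pruning here, but the general idea — scoring candidate attributes with an attention weight against a user representation and keeping only the top-k as the reinforcement-learning action space — can be sketched as follows. All names (`attentive_action_prune`, the toy embeddings) are hypothetical illustrations, not the authors' implementation:

```python
import math

def attentive_action_prune(candidates, user_emb, attr_embs, top_k):
    """Keep the top-k candidate attributes by attention weight.

    Hypothetical sketch: attention scores are dot products between the
    user embedding and each attribute embedding, softmax-normalized,
    and only the k highest-weighted attributes survive as actions.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Raw attention score per candidate attribute.
    scores = {c: dot(user_emb, attr_embs[c]) for c in candidates}

    # Numerically stable softmax over the scores.
    m = max(scores.values())
    exp_scores = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exp_scores.values())
    weights = {c: e / z for c, e in exp_scores.items()}

    # Prune: keep only the k highest-weighted attributes.
    return sorted(weights, key=weights.get, reverse=True)[:top_k]

# Toy usage: four discrete attributes, 2-d embeddings.
pruned = attentive_action_prune(
    candidates=["age", "gender", "genre", "country"],
    user_emb=[1.0, 0.0],
    attr_embs={
        "age": [0.9, 0.1],
        "gender": [0.2, 0.8],
        "genre": [0.7, 0.3],
        "country": [0.1, 0.9],
    },
    top_k=2,
)
print(pruned)  # → ['age', 'genre']
```

Because softmax is monotonic, the pruning outcome depends only on the raw score ordering; the normalized weights would matter when the weights themselves feed a stochastic policy over the pruned action set.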




Updated: 2024-03-22