Properties of fairness measures in the context of varying class imbalance and protected group ratios
ACM Transactions on Knowledge Discovery from Data (IF 3.6). Pub Date: 2024-03-28. DOI: 10.1145/3654659
Dariusz Brzezinski, Julia Stachowiak, Jerzy Stefanowski, Izabela Szczech, Robert Susmaga, Sofya Aksenyuk, Uladzimir Ivashka, Oleksandr Yasinskyi

Society is increasingly relying on predictive models in fields like criminal justice, credit risk management, or hiring. To prevent such automated systems from discriminating against people belonging to certain groups, fairness measures have become a crucial component in socially relevant applications of machine learning. However, existing fairness measures have been designed to assess the bias between predictions for protected groups without considering the imbalance in the classes of the target variable. Current research on the potential effect of class imbalance on fairness focuses on practical applications rather than dataset-independent measure properties. In this paper, we study the general properties of fairness measures for changing class and protected group proportions. For this purpose, we analyze the probability mass functions of six of the most popular group fairness measures. We also measure how the probability of achieving perfect fairness changes for varying class imbalance ratios. Moreover, we relate the dataset-independent properties of fairness measures described in this paper to classifier fairness in real-life tasks. Our results show that measures such as Equal Opportunity and Positive Predictive Parity are more sensitive to changes in class imbalance than Accuracy Equality. These findings can help guide researchers and practitioners in choosing the most appropriate fairness measures for their classification problems.
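The abstract contrasts group fairness measures such as Equal Opportunity, Positive Predictive Parity, and Accuracy Equality. The sketch below (not the authors' code) shows how these three measures can be computed as absolute gaps between per-group statistics, using their standard confusion-matrix definitions; the function names and the toy data are illustrative assumptions.

```python
def confusion(y_true, y_pred):
    """Return (TP, FP, TN, FN) counts for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def rates(tp, fp, tn, fn):
    """Per-group statistics used by the three fairness measures."""
    tpr = tp / (tp + fn) if tp + fn else 0.0   # true positive rate (recall)
    ppv = tp / (tp + fp) if tp + fp else 0.0   # positive predictive value (precision)
    acc = (tp + tn) / (tp + fp + tn + fn)      # accuracy
    return {"tpr": tpr, "ppv": ppv, "acc": acc}

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group differences; 0.0 means perfect fairness.

    Equal Opportunity          -> gap in TPR
    Positive Predictive Parity -> gap in PPV
    Accuracy Equality          -> gap in accuracy
    """
    stats = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        stats[g] = rates(*confusion([y_true[i] for i in idx],
                                    [y_pred[i] for i in idx]))
    a, b = sorted(stats)  # assumes exactly two protected groups
    return {
        "equal_opportunity": abs(stats[a]["tpr"] - stats[b]["tpr"]),
        "predictive_parity": abs(stats[a]["ppv"] - stats[b]["ppv"]),
        "accuracy_equality": abs(stats[a]["acc"] - stats[b]["acc"]),
    }

# Made-up toy example with an imbalanced positive class.
y_true = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_gaps(y_true, y_pred, group))
```

On this toy data the TPR and PPV gaps are larger than the accuracy gap, which loosely mirrors the abstract's point that measures built on TPR or PPV react more strongly to class imbalance than Accuracy Equality; the paper's actual analysis is in terms of probability mass functions, not single examples.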



Updated: 2024-03-30