Using Pareto simulated annealing to address algorithmic bias in machine learning
The Knowledge Engineering Review (IF 2.1), Pub Date: 2022-05-04, DOI: 10.1017/s0269888922000029
William Blanzeisky, Pádraig Cunningham

Algorithmic bias arises in machine learning when models that may have reasonable overall accuracy are biased in favor of ‘good’ outcomes for one side of a sensitive category, for example, gender or race. The bias manifests as an underestimation of good outcomes for the under-represented minority. In a sense, we should not be surprised that a model might be biased when it has not been ‘asked’ not to be: reasonable accuracy can be achieved by ignoring the under-represented minority. A common strategy to address this issue is to include fairness as a component of the learning objective. In this paper, we consider including fairness as an additional criterion in model training and propose a multi-objective optimization strategy using Pareto Simulated Annealing that optimizes for both accuracy and underestimation bias. Our experiments show that this strategy can identify families of models whose members represent different accuracy/fairness tradeoffs. We demonstrate the effectiveness of this strategy on two synthetic and two real-world datasets.
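The core algorithmic idea the abstract describes is to anneal over model parameters while scoring each candidate on two objectives, accuracy and underestimation, and to keep an archive of Pareto-optimal models. The following is a rough Python sketch of that idea, not the authors' actual procedure: it assumes a linear classifier, a simple actual-minus-predicted positive-rate gap as the underestimation measure, and randomly re-drawn scalarization weights in place of full Pareto Simulated Annealing; all of these choices and every parameter value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, X):
    """Linear threshold classifier: predict 1 when w . x > 0 (an assumed model family)."""
    return (X @ w > 0).astype(int)

def error_rate(w, X, y):
    """Objective 1: classification error (1 - accuracy)."""
    return float(np.mean(predict(w, X) != y))

def underestimation(w, X, y, s):
    """Objective 2, an assumed stand-in for the paper's underestimation measure:
    how far the predicted positive rate for the protected group (s == 1)
    falls below its actual positive rate; 0 means no underestimation."""
    mask = s == 1
    if not mask.any():
        return 0.0
    return float(max(0.0, y[mask].mean() - predict(w, X[mask]).mean()))

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= v for x, v in zip(a, b)) and any(x < v for x, v in zip(a, b))

def pareto_sa(X, y, s, iters=5000, t0=1.0, cooling=0.999):
    """Simplified Pareto Simulated Annealing: one annealing chain, random
    scalarization weights, and an archive of non-dominated models."""
    w = rng.normal(size=X.shape[1])
    f = np.array([error_rate(w, X, y), underestimation(w, X, y, s)])
    archive = [(f, w)]
    t = t0
    for _ in range(iters):
        lam = rng.dirichlet([1.0, 1.0])                 # re-drawn weights spread the front
        w_new = w + rng.normal(scale=0.1, size=w.shape) # local perturbation of the model
        f_new = np.array([error_rate(w_new, X, y),
                          underestimation(w_new, X, y, s)])
        delta = lam @ (f_new - f)                       # scalarized change in objectives
        if delta <= 0 or rng.random() < np.exp(-delta / t):
            w, f = w_new, f_new
            # keep the archive limited to mutually non-dominated solutions
            if not any(dominates(fa, f) for fa, _ in archive):
                archive = [(fa, wa) for fa, wa in archive if not dominates(f, fa)]
                archive.append((f, w))
        t *= cooling                                    # geometric cooling schedule
    return archive

# Toy usage on synthetic data with a sensitive attribute s.
n = 500
X = rng.normal(size=(n, 3))
s = rng.integers(0, 2, size=n)
y = ((X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n)) > 0).astype(int)
for f, _ in sorted(pareto_sa(X, y, s), key=lambda p: p[0][0]):
    print(f"error={f[0]:.3f}  underestimation={f[1]:.3f}")
```

Each entry in the returned archive is one member of the family of models the abstract mentions; sorting the archive by error rate walks along the accuracy/fairness tradeoff front.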


