Fairness-Aware Graph Neural Networks: A Survey
ACM Transactions on Knowledge Discovery from Data (IF 3.6). Pub Date: 2024-04-12. DOI: 10.1145/3649142
April Chen, Ryan A. Rossi, Namyong Park, Puja Trivedi, Yu Wang, Tong Yu, Sungchul Kim, Franck Dernoncourt, Nesreen K. Ahmed
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance on many fundamental learning tasks. Despite this success, GNNs suffer from fairness issues that arise from the underlying graph data and from the aggregation mechanism at the heart of the large class of GNN models. In this article, we examine and categorize techniques for improving the fairness of GNNs. We categorize these techniques by whether they target the pre-processing, in-processing (during training), or post-processing phase. We discuss how such techniques can be combined where appropriate and highlight the advantages and intuition behind each. We also introduce an intuitive taxonomy for fairness evaluation metrics, covering graph-level, neighborhood-level, embedding-level, and prediction-level fairness metrics. In addition, we succinctly summarize graph datasets that are useful for benchmarking the fairness of GNN models. Finally, we highlight key open problems and challenges that remain to be addressed.
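To make the metric taxonomy concrete, the sketch below computes two standard prediction-level fairness metrics on the outputs of a node classifier: statistical parity difference and equal opportunity difference. This is an illustrative example, not code from the survey; the toy labels, predictions, and binary sensitive attribute are assumptions for demonstration.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Prediction-level fairness: |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|.

    A value of 0 means both sensitive groups receive positive
    predictions at the same rate.
    """
    y_pred, s = np.asarray(y_pred), np.asarray(sensitive)
    rate0 = y_pred[s == 0].mean()  # positive-prediction rate, group 0
    rate1 = y_pred[s == 1].mean()  # positive-prediction rate, group 1
    return abs(rate0 - rate1)

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Prediction-level fairness: |TPR(s=0) - TPR(s=1)|.

    Restricts attention to nodes whose true label is positive and
    compares true-positive rates across the two groups.
    """
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, sensitive))
    pos = y_true == 1
    tpr0 = y_pred[pos & (s == 0)].mean()  # TPR for group 0
    tpr1 = y_pred[pos & (s == 1)].mean()  # TPR for group 1
    return abs(tpr0 - tpr1)

# Toy node-classification outputs for 8 nodes (hypothetical data).
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, s))        # 0.75 vs 0.25 -> 0.5
print(equal_opportunity_difference(y_true, y_pred, s)) # 1.0 vs 0.5 -> 0.5
```

The same functions apply unchanged to the hard predictions of any GNN node classifier, since prediction-level metrics depend only on the model's outputs, the labels, and the sensitive attribute, not on the graph structure itself.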



Updated: 2024-04-12