A LEGAL FRAMEWORK FOR ARTIFICIAL INTELLIGENCE FAIRNESS REPORTING
The Cambridge Law Journal (IF 1.909), Pub Date: 2022-09-16, DOI: 10.1017/s0008197322000460
Jia Qing Yap, Ernest Lim

A clear understanding of the risks of artificial intelligence (AI) usage, and of how those risks are being addressed, is needed; this requires proper and adequate corporate disclosure. We advance a legal framework for AI Fairness Reporting to which companies can and should adhere on a comply-or-explain basis. We analyse the sources of unfairness arising from different aspects of AI models and the disparities in the performance of machine learning systems. We evaluate how the machine learning literature has sought to address the problem of unfairness through the use of different fairness metrics. We then put forward a nuanced and viable framework for AI Fairness Reporting comprising: (1) disclosure of the use of all machine learning models; (2) disclosure of the fairness metrics used and the ensuing trade-offs; (3) disclosure of the de-biasing methods used; and (4) release of datasets for public inspection or for third-party audit. We then apply this reporting framework to two case studies.
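To give a concrete sense of what disclosing "fairness metrics and the ensuing trade-offs" can involve, the following is a minimal, illustrative Python sketch using hypothetical toy data (not drawn from the article) that computes two widely used group-fairness metrics: the demographic parity gap and the equal opportunity gap. On this toy data the two metrics disagree, since selection rates are equal across groups while true positive rates are not, which is the kind of metric trade-off the proposed reporting framework would have companies explain.

```python
import numpy as np

# Hypothetical toy data, not drawn from the article: binary outcomes and
# model predictions for two demographic groups, "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A"] * 5 + ["B"] * 5)

def selection_rate(pred, mask):
    """Share of positive predictions within one group."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Recall within one group (basis of the equal-opportunity comparison)."""
    positives = mask & (true == 1)
    return pred[positives].mean()

mask_a, mask_b = group == "A", group == "B"

# Demographic parity gap: difference in selection rates between groups.
dp_gap = abs(selection_rate(y_pred, mask_a) - selection_rate(y_pred, mask_b))

# Equal opportunity gap: difference in true positive rates between groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, mask_a)
             - true_positive_rate(y_true, y_pred, mask_b))

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.00 on this toy data
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.17 on this toy data
```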



Updated: 2022-09-16