The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case
Human Rights Law Review (IF 1.150), Pub Date: 2022-03-04, DOI: 10.1093/hrlr/ngac010
Adamantia Rachovitsa, Niclas Johann

ABSTRACT The article discusses the human rights implications of algorithmic decision-making in the social welfare sphere. It does so against the background of the Hague District Court's 2020 judgment in a case challenging the Dutch government's use of System Risk Indication (SyRI), an algorithm designed to identify potential social welfare fraud. Digital welfare state initiatives are likely to fall short of meeting basic requirements of legality and of protecting against arbitrariness. Moreover, the intentional opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. The analysis unpacks the relevance and complementarity of three legal/regulatory frameworks governing algorithmic systems: data protection, human rights law and algorithmic accountability. Notwithstanding these frameworks' invaluable contribution, the discussion casts doubt on whether they are well suited to address the legal challenges arising from the discriminatory effects of the use of algorithmic systems.

Last updated: 2022-03-04