Rules for robots, and why medical AI breaks them
Journal of Law and the Biosciences ( IF 3.4 ) Pub Date : 2023-02-16 , DOI: 10.1093/jlb/lsad001
Barbara J Evans 1, 2
This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law, aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when, as in medicine and many other contexts, the use of personal data has high social value.
