Percentages and reasons: AI explainability and ultimate human responsibility within the medical field
Ethics and Information Technology (IF 3.633) Pub Date: 2024-04-09, DOI: 10.1007/s10676-024-09764-8
Markus Herrmann, Andreas Wabro, Eva Winkler

In current debates on the ethical implementation of AI, two demands in particular are linked: the call for explainability and the call for ultimate human responsibility. In the medical field, both are condensed into the role of one person: it is the physician to whom AI output should be explainable and who should therefore bear ultimate responsibility for diagnostic or treatment decisions based on such output. In this article, we argue that black box AI indeed creates a rationally irresolvable epistemic situation for the physician involved. Specifically, the strange errors that AI occasionally makes can detach its output from human reasoning. We further argue that this epistemic situation is problematic in the context of ultimate human responsibility. Since such strange errors limit the promise of explainability, and since the concept of explainability frequently proves irrelevant or insignificant when applied to a diverse set of medical applications, we deem it worthwhile to reconsider the call for ultimate human responsibility.



Updated: 2024-04-09