Prediction and explainability in AI: Striking a new balance?
Big Data & Society (IF 8.731) Pub Date: 2024-02-26, DOI: 10.1177/20539517241235871
Aviad Raz, Bert Heinrichs, Netta Avnoon, Gil Eyal, Yael Inbar

The debate regarding prediction and explainability in artificial intelligence (AI) centers on the trade-off between achieving high-performance, accurate models and the ability to understand and interpret the decision-making process of those models. In recent years, this debate has gained significant attention due to the increasing adoption of AI systems in various domains, including healthcare, finance, and criminal justice. While prediction and explainability are desirable goals in principle, the recent spread of high-accuracy yet opaque machine learning (ML) algorithms has highlighted the trade-off between the two, marking this debate as an inter-disciplinary, inter-professional arena for negotiating expertise. There is no longer agreement about what the "default" balance of prediction and explainability should be, with various positions reflecting claims for professional jurisdiction. Overall, there appears to be a growing schism between the regulatory and ethics-based call for explainability as a condition for trustworthy AI, and how explainability is actually being designed, assimilated, and negotiated in practice. The impetus for writing this commentary comes from recent suggestions that explainability is overrated, including the argument that explainability is not guaranteed in human healthcare experts either. To shed light on this debate, its premises, and its recent twists, we provide an overview of key arguments representing different frames, focusing on AI in healthcare.

Updated: 2024-02-26