Co-design of Human-centered, Explainable AI for Clinical Decision Support
ACM Transactions on Interactive Intelligent Systems (IF 3.4), Pub Date: 2023-12-08, DOI: 10.1145/3587271
Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, Alan Perotti, Salvatore Rinzivillo

eXplainable AI (XAI) involves two intertwined but separate challenges: developing techniques to extract explanations from black-box AI models, and presenting such explanations to users, i.e., the explanation user interface. Despite its importance, the second aspect has so far received limited attention in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to leverage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first prototyping-testing-redesigning cycle of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data and multi-label classification tasks. We demonstrate its applicability by explaining a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test this prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system, and second, we obtain useful insights into the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.
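The abstract names the technical requirements the XAI technique must satisfy (sequential, ontology-linked patient data; multi-label classification) but does not detail the method itself. As a rough, hypothetical sketch of the general pattern such perturbation-based local explainers follow (perturb the input around one patient, query the black box, and fit an interpretable surrogate per label), here is a minimal Python example. The synthetic data, the random-forest stand-in for the clinical DSS, and the explain_locally helper are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: explaining a black-box multi-label clinical model
# with shallow local surrogate trees. Not the paper's actual technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-in for encoded patient records: 200 patients, 10 binary
# clinical features (e.g., flattened ontology-linked codes), 3 possible
# diagnoses per patient (multi-label targets).
X = rng.integers(0, 2, size=(200, 10))
Y = np.column_stack([
    X[:, 0] & X[:, 1],   # label 0 depends on features 0 and 1
    X[:, 2] | X[:, 3],   # label 1 depends on features 2 or 3
    X[:, 4] ^ X[:, 5],   # label 2 depends on features 4 xor 5
])

# Black-box multi-label classifier standing in for the clinical DSS.
black_box = MultiOutputClassifier(
    RandomForestClassifier(random_state=0)
).fit(X, Y)

def explain_locally(patient, n_samples=500, flip_prob=0.2):
    """Fit one shallow surrogate tree per label on a perturbed neighborhood."""
    # Perturb: randomly flip binary features around the target patient.
    flips = rng.random((n_samples, patient.size)) < flip_prob
    neighborhood = np.where(flips, 1 - patient, patient)
    # Label the neighborhood with the black box's own predictions.
    bb_labels = black_box.predict(neighborhood)
    surrogates = []
    for k in range(bb_labels.shape[1]):
        tree = DecisionTreeClassifier(max_depth=3, random_state=0)
        tree.fit(neighborhood, bb_labels[:, k])
        surrogates.append(tree)
    return surrogates

patient = X[0]
feature_names = [f"feat_{i}" for i in range(X.shape[1])]
for k, tree in enumerate(explain_locally(patient)):
    print(f"--- surrogate rules for label {k} ---")
    print(export_text(tree, feature_names=feature_names))
```

A shallow tree per label keeps each explanation readable as a handful of if-then rules, which is the kind of artifact an explanation user interface could then render for healthcare providers.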




Updated: 2023-12-08