Directive Explanations for Actionable Explainability in Machine Learning Applications
ACM Transactions on Interactive Intelligent Systems (IF 3.4), Pub Date: 2023-01-12, DOI: 10.1145/3579363 (https://dl.acm.org/doi/10.1145/3579363)
Ronal Singh, Tim Miller, Henrietta Lyons, Liz Sonenberg, Eduardo Velloso, Frank Vetere, Piers Howe, Paul Dourish

In this paper, we show that explanations of decisions made by machine learning systems can be improved by explaining not only why a decision was made but also how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanation (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preferences for and perceptions of directive explanations through two online studies, one quantitative and one qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanation over non-directive counterfactual explanations. However, we also find that these preferences depend on many factors, including the individual's own preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipient and other contextual information. This reinforces the need for a human-centred, context-specific approach to explainable AI.
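
The abstract does not spell out how directive explanations are generated, but the idea can be made concrete with a minimal, hypothetical Python sketch in the spirit of the credit-scoring domain: a greedy search over actionable feature changes that flips a binary classifier's decision, then reports the accumulated changes as directive actions. All names here (find_directive_explanation, ACTIONABLE_STEPS), the greedy strategy, the choice of class 1 as the desired outcome, and the toy data are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: generate a directive-style explanation for a rejected
# applicant by greedily applying actionable feature changes until a binary
# classifier's decision flips. Illustrative only; not the paper's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features the individual can act on, with a step size per action (in $k).
# Immutable features (e.g. age) would simply be omitted from this dict.
ACTIONABLE_STEPS = {
    "income": 1.0,   # e.g. "increase your income by $1,000"
    "debt": -0.5,    # e.g. "reduce your debt by $500"
}

def find_directive_explanation(model, x, feature_names, max_steps=50):
    """Greedily apply actionable steps until the model predicts class 1
    (assumed here to be the desired outcome, e.g. loan approved).

    Returns a list of (feature, total_change) pairs -- the directive
    actions -- or None if no flip is found within max_steps.
    """
    x = x.astype(float).copy()
    actions = {name: 0.0 for name in ACTIONABLE_STEPS}
    idx = {name: feature_names.index(name) for name in ACTIONABLE_STEPS}
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == 1:  # desired outcome reached
            return [(f, d) for f, d in actions.items() if d != 0.0]
        # Take the single step that most increases P(desired outcome).
        best, best_p = None, -1.0
        for name, step in ACTIONABLE_STEPS.items():
            x_try = x.copy()
            x_try[idx[name]] += step
            p = model.predict_proba(x_try.reshape(1, -1))[0, 1]
            if p > best_p:
                best, best_p = name, p
        x[idx[best]] += ACTIONABLE_STEPS[best]
        actions[best] += ACTIONABLE_STEPS[best]
    return None

# Toy usage: train on synthetic credit data, explain one rejection.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * [20.0, 5.0] + [40.0, 10.0]  # income, debt ($k)
y = (X[:, 0] - 2 * X[:, 1] > 15).astype(int)                # approve/reject rule
model = LogisticRegression(max_iter=1000).fit(X, y)
applicant = np.array([30.0, 12.0])                           # a rejected applicant
print(find_directive_explanation(model, applicant, ["income", "debt"]))
```

On this reading, the sketch corresponds to a directive-specific explanation (exact actions with magnitudes); a directive-generic variant could, for instance, report only which features to act on and in which direction, leaving the magnitudes to the recipient.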



Updated: 2023-01-13