Incremental XAI: Memorable Understanding of AI with Incremental Explanations
arXiv - CS - Human-Computer Interaction. Pub Date: 2024-04-10, DOI: 10.48550/arXiv.2404.06733
Jessica Y. Bo, Pan Hao, Brian Y. Lim

Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users see either inaccurate global explanations or highly varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge as more details are received incrementally. Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability, and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better ingrain to facilitate intuitive engagement with AI.
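The Base + Incremental scheme described in the abstract can be illustrated with a minimal sketch. In the sketch below, every instance is explained as a sum of factor-times-value contributions: general instances use only the shared base factors, while atypical instances patch a few of them with incremental factors. This is a hypothetical illustration, not the paper's implementation; the function name, feature names, and all weights are invented for the example.

```python
def explain(x, base_factors, incremental_factors=None):
    """Explain an instance as a sum of factor * value contributions.

    base_factors are shared by all instances; incremental_factors,
    if given, override a few of them for an atypical subgroup.
    """
    factors = dict(base_factors)
    if incremental_factors:
        factors.update(incremental_factors)  # patch only the deviating factors
    contributions = {name: w * x[name] for name, w in factors.items()}
    return contributions, sum(contributions.values())

# Illustrative housing-price example (all numbers invented).
base = {"sqft": 120.0, "bedrooms": 9000.0, "age": -450.0}
older_segment = {"age": -900.0}  # incremental patch for an atypical subgroup

x = {"sqft": 1500, "bedrooms": 3, "age": 40}
print(explain(x, base))                 # general explanation: base factors only
print(explain(x, base, older_segment))  # atypical: base + incremental factors
```

Reusing the base factors in both cases reflects the memorability argument above: users relearn only the small incremental patch for atypical instances, rather than an entirely new set of factors.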

Updated: 2024-04-11