Effects of Explanation Strategy and Autonomy of Explainable AI on Human–AI Collaborative Decision-making
International Journal of Social Robotics (IF 4.7) Pub Date: 2024-04-09, DOI: 10.1007/s12369-024-01132-2
Bingcheng Wang, Tianyi Yuan, Pei-Luen Patrick Rau

This study examined the effects of explanation strategy (global explanation vs. deductive explanation vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human–AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted. The decision-making task was a modified Mahjong game. Forty-eight participants were divided into three groups, each collaborating with an agent that used a different explanation strategy. Each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and the highest understandability. Contrastive explanation required the highest mental workload but produced the highest perceived competence, affect-based trust, and social presence. Deductive explanation was the worst in terms of social presence. The high-autonomy agents elicited a lower mental workload and lower interaction fluency but higher faith and social presence than the low-autonomy agents. The findings of this study can help practitioners design user-centered explainable decision-support agents and choose appropriate explanation strategies for different situations.
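For readers less familiar with mixed factorial designs, the sketch below shows how a 3 × 2 design like the one described above maps onto a standard analysis: explanation strategy as a between-subjects factor (16 participants per group) and autonomy level as a within-subjects factor. The simulated data, column names, dependent measure, and the use of pingouin's mixed_anova are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch of a mixed ANOVA for a 3 x 2 mixed design:
# strategy (global / deductive / contrastive) between-subjects,
# autonomy (high / low) within-subjects. All data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

strategies = ["global", "deductive", "contrastive"]
rows = []
for pid in range(48):                      # 48 participants, 16 per group
    strategy = strategies[pid // 16]
    for autonomy in ["low", "high"]:       # repeated measure per participant
        rows.append({
            "participant": pid,
            "strategy": strategy,
            "autonomy": autonomy,
            # hypothetical dependent measure, e.g. a mental-workload rating
            "mental_workload": rng.normal(50, 10),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: between = strategy, within = autonomy.
aov = pg.mixed_anova(data=df, dv="mental_workload",
                     within="autonomy", subject="participant",
                     between="strategy")
print(aov.round(3))
```

The same call would be repeated for each dependent measure reported in the abstract (understandability, perceived competence, affect-based trust, social presence, faith, interaction fluency), under the stated assumption that each was analyzed this way.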



Updated: 2024-04-09