Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment
Journal of Trust Research Pub Date : 2019-01-02 , DOI: 10.1080/21515581.2019.1579730
Christopher S. Calhoun, Philip Bobko, Jennie J. Gallimore, Joseph B. Lyons
ABSTRACT: This study provides an initial experimental investigation of the extent to which well-known precursors of interpersonal trust (ability, benevolence, and integrity, or ABI) manifest when assessing trust between a human and a non-human referent (e.g., an automated aid). An additional motivation was the meta-analytic finding that the ABI model explains only about half of the variation in interpersonal trust. Based on a review of the interpersonal and automation trust literatures, two additional precursors of trust – transparency and humanness – were identified and studied as exogenous variables (with A, B, and I analysed as explanatory mediators of their relationships to trust). In our experimental task, users interacted with an automated aid in decision-making scenarios to identify suspected insurgents. Results indicated that the perceived humanness of the aid correlated significantly with trust in that aid (r = .364). This relationship was explained in part by perceptions of both the ability and the benevolence/integrity (unit-weighted average) of the aid; the latter finding suggests that human-like intentionality attributed to the aid was a factor in automation trust. Perceived transparency also correlated significantly with trust (r = .464), although much of this relationship was explained by ability rather than benevolence/integrity. Aid reliability was also varied across the experiment. Interestingly, the explanatory power of benevolence/integrity increased when the aid's reliability was lower, again suggesting that human-like intentionality matters in automation trust models. Research and design considerations arising from these findings are noted.
