Can robot advisers encourage honesty?: Considering the impact of rule, identity, and role-based moral advice
International Journal of Human-Computer Studies (IF 5.4), Pub Date: 2024-01-02, DOI: 10.1016/j.ijhcs.2024.103217
Boyoung Kim , Ruchen Wen , Ewart J. de Visser , Chad C. Tossell , Qin Zhu , Tom Williams , Elizabeth Phillips

A growing body of human–robot interaction literature is exploring whether and how social robots, by utilizing their physical presence or capacity for verbal and nonverbal behavior, can influence people’s moral behavior. In the current research, we aimed to examine to what extent a social robot can effectively encourage people to act honestly by offering them moral advice. The robot either offered no advice at all or proactively offered moral advice before participants made a choice between acting honestly and cheating, with the underlying ethical framework of the advice grounded in deontology (rule-focused), virtue ethics (identity-focused), or Confucian role ethics (role-focused). Across three studies (N = 1,693), we did not find a robot’s moral advice to be effective in deterring cheating. These null results held even when we introduced the robot as being equipped with moral capacity, fostering common expectations about the robot among participants before they received its advice. The current work led us to an unexpected discovery: a psychological reactance effect associated with participants’ perception of the robot’s moral capacity. Stronger perceptions of the robot’s moral capacity were linked to greater probabilities of cheating. These findings demonstrate how psychological reactance may impact human–robot interaction in moral domains and suggest potential strategies for framing a robot’s moral messages to avoid such reactance.



Last updated: 2024-01-05