The Importance of an Ethical Framework for Trust Calibration in AI
IEEE Intelligent Systems (IF 6.4) Pub Date: 2023-09-29, DOI: 10.1109/mis.2023.3320443
Amelie Schmid, Manuel Wiesche

The transformative power of AI raises serious concerns about ethical issues within organizations and underscores the need for trust. In response, numerous ethical frameworks have been published, but mostly at a theoretical level. Moreover, proper trust calibration in AI is highly relevant for workers. To date, only limited studies have investigated how an ethical framework can foster workers' proper trust calibration in practice. To close this gap, an ethical framework that supports trust calibration by targeting AI reliability and AI safety is investigated. The effectiveness of the applied framework is then evaluated on the basis of 17 interviews within an international automotive supplier. The ethical framework led to a major increase in trust, a groundbreaking outcome given that workers were at the same time willing to accept lower levels of AI safety and AI reliability.

Updated: 2023-09-29