How to deal with an AI near-miss: Look to the skies
Bulletin of the Atomic Scientists (IF 2.204), Pub Date: 2023-05-08, DOI: 10.1080/00963402.2023.2199580
Kris Shrishak

ABSTRACT

AI systems are harming people. Harms such as discrimination and manipulation are reported in the media, which is currently the primary source of information on AI incidents. Reporting AI near misses and learning how serious incidents were prevented would help avoid future incidents. The problem is that ongoing efforts to catalog AI incidents rely on media reports, which do not prevent incidents. Developers, designers, and deployers of AI systems should be incentivized to report and share information on near misses. Such an AI near-miss reporting system does not have to be designed from scratch; the aviation industry's voluntary, confidential, and non-punitive approach to such reporting can serve as a guide. AI incidents are accumulating, and the sooner such a near-miss reporting system is established, the better.




Updated: 2023-05-09