Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law
Daedalus (IF 1.340) Pub Date: 2022-01-01, DOI: 10.1162/daed_a_01918
Cynthia Dwork, Martha Minow

Abstract: Social distrust of AI stems in part from incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors that reflect and amplify existing social cleavages and failures, such as racial and gender biases. Other sources of distrust include the lack of “ground truth” against which to measure the results of learned algorithms, divergence of interests between those affected and those designing the tools, invasion of individual privacy, and the inapplicability of measures such as transparency and participation that build trust in other institutions. Needed steps to increase trust in AI systems include involvement of broader and diverse stakeholders in decisions around selection of uses, data, and predictors; investment in methods of recourse for errors and bias commensurate with the risks of errors and bias; and regulation prompting competition for trust.

Updated: 2022-01-01