Stabilizing translucencies: Governing AI transparency by standardization
Big Data & Society (IF 8.731) | Pub Date: 2024-02-26 | DOI: 10.1177/20539517241234298
Charlotte Högberg

Standards are put forward as important means to turn the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible see-through quality. In addition, artificial intelligence technologies are depicted as ‘black boxed’, complex and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window onto artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. Relying heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence becomes too shaped by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.

Updated: 2024-02-26