Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective
Israel Law Review Pub Date : 2022-12-13 , DOI: 10.1017/s0021223722000188
Magdalena Pacholska

Military artificial intelligence (AI)-enabled technology might still be in the relatively fledgling stages but the debate on how to regulate its use is already in full swing. Much of the discussion revolves around autonomous weapons systems (AWS) and the ‘responsibility gap’ they would ostensibly produce. This contribution argues that while some military AI technologies may indeed cause a range of conceptual hurdles in the realm of individual responsibility, they do not raise any unique issues under the law of state responsibility. The following analysis considers the latter regime and maps out crucial junctions in applying it to potential violations of the cornerstone of international humanitarian law (IHL) – the principle of distinction – resulting from the use of AI-enabled military technologies. It reveals that any challenges in ascribing responsibility in cases involving AWS would not be caused by the incorporation of AI, but stem from pre-existing systemic shortcomings of IHL and the unclear reverberations of mistakes thereunder. The article reiterates that state responsibility for the effects of AWS deployment is always retained through the commander's ultimate responsibility to authorise weapon deployment in accordance with IHL. It is proposed, however, that should the so-called fully autonomous weapon systems – that is, machine learning-based lethal systems that are capable of changing their own rules of operation beyond a predetermined framework – ever be fielded, it might be fairer to attribute their conduct to the fielding state, by conceptualising them as state agents, and treat them akin to state organs.



