Special issue: The (international) politics of content takedowns: Theory, practice, ethics
Policy & Internet (IF 4.510) | Pub Date: 2023-11-06 | DOI: 10.1002/poi3.375
James Fitzgerald, Ayse D. Lokmanoglu

INTRODUCTION

Content takedowns have emerged as a key regulatory pillar in the global fight against misinformation and extremism. Despite their increasing ubiquity as disruptive tools in political processes, their true efficacy remains up for debate. We “know,” for example, that takedowns had a strong disruptive effect on Islamic State-supporting networks from 2014 onwards (Conway et al., 2019), but we do not know whether constraining avenues for expression may ultimately accelerate acts of violence. We also know that extreme-right networks have weaponised content takedowns as evidence of victimization and the steadfast erosion of “free speech,” often underpinning calls to (violent) action and leveraging the popularity of alt-media—such as Gab, Rumble, Truth Social and Parler—as refuges for the persecuted and de-platformed alike. There is a need for caution, too, as takedowns are applied by authoritarian governments to stifle dissent—measures increasingly absorbed into law (see Jones, 2022)—while in various theaters of conflict, content takedowns have erased atrocity and resistance, ultimately disrupting the archiving of war (see Banchik, 2021).

This special issue collates interdisciplinary perspectives on how the policies and practices of content takedowns interact, with consequences for international politics. Across 11 papers, we explore how content takedowns variously interface with: democracy, history, free speech, national and regional regulations, activism, partisanship, violent extremism, effects on marginalized populations, strategies and techniques (e.g., self-reporting, AI, and variations amongst platforms), and the flexibility and adaptability (e.g., migration, hidden messages) of harmful content and actors. The papers in this issue are geographically diverse, with perspectives from Latin America, the Middle East and North Africa, Europe, North America, and Oceania.

The editors consider content takedowns as a function of content moderation, aligning with the consensus view (see Gillespie et al., 2020); nevertheless, a review of the literature finds that content takedowns are rarely treated as the primary object of inquiry. While the subsumption of content takedowns as a subtopic of content moderation is understandable, this Special Issue attempts to foreground content takedowns as the primary focus for analysis: a subtle epistemological shift that provides a collective contribution to academic and policy-facing debates. To that end, it is necessary to define our basic terms of reference. Turning first to content moderation, one of the earliest—and most cited1—interpretations is that of Kaplan and Haenlein (2010), who view it as ‘the self-regulation of social media companies for the safety of its users'. Though useful, this interpretation fails to account for an intractable proviso: tech companies act as intermediaries to the content they host and remove, but do not want to be held liable for that content (Caplan & Napoli, 2018; Gillespie, 2010).2 Consequently, there is no single standard of content moderation applied by all tech companies, just as, clearly, there is no international governance of the World Wide Web (Wu, 2015).3 Content moderation is, therefore, a concept born(e) of multiplicity, accounting for a range of actors that necessarily includes, but is not limited to, tech companies. We are more convinced by the holistic perspective of Gillespie et al. (2020), who define content moderation as:

[T]he detection of, assessment of, and interventions taken on content or behavior deemed unacceptable by platforms or other information intermediaries, including the rules they impose, the human labor and technologies required, and the institutional mechanisms of adjudication, enforcement, and appeal that support it (Gillespie et al., 2020, p. 2)

Divining a neat definition of content takedowns is a more difficult task for several reasons. First, there does not exist, to our knowledge, an authoritative definition of content takedowns comparable with, say, Kaplan and Haenlein (2010) and Gillespie et al. (2020). Second—and owing to the novelty of this Special Issue—most studies that engage with content takedowns tend to situate their analyses within the remit of content moderation, assuming recognition of “content takedowns” as a conceptual fait accompli (see, e.g., Lakomy, 2023). We note this trend not as a criticism, but as an observation. Third, content takedowns have been studied across several academic fields, including legal studies, media studies, sociology and terrorism/extremism studies, entailing a panoply of contending assumptions and disciplinary tendencies—a useful definition of content takedowns pursuant to copyright law (see Bar-Ziv & Elkin-Koren, 2018), for example, does not quite speak to the intended breadth of this Special Issue. With these provisos to hand, we synthesize Gillespie et al.'s (2020) definition of content moderation with Singh and Bankston's (2018) extensive typology4 to define content takedowns as:

The removal of “problematic content” by platforms, or other information intermediaries, pursuant to legal or policy requirements, occurring across categories that include, but are not limited to: government and legal content demands; copyright requests; trademark requests; network shutdowns and service interruptions; Right to be Forgotten delisting requests; and community guidelines-based removals.

Having established basic parameters, we now turn to provide a commentary on some of the most substantial political dimensions of content moderation and content takedowns, before providing a brief, individual summary of each paper.



