Safe from “harm”: The governance of violence by platforms

DeCook, J. R., Cotter, K., Kanthawala, S., & Foyle, K. (2022). Safe from “harm”: The governance of violence by platforms. Policy & Internet, 14(1), 63–78.

Link: https://doi.org/10.1002/poi3.290

Open access: Yes

Notes: How violence and harm are defined matters, especially when those definitions set the boundaries of what we can and cannot do in online spaces. Such is the argument of DeCook et al., who conducted a critical discourse analysis of platform policies to examine how they define and operationalize terms such as hate and abuse. Drawing on symbolic violence (Bourdieu) and cultural violence (Galtung), the authors argue that, by narrowly defining violence, platforms impose hegemonic visions of which forms of violence, which targets, and which mechanisms are acceptable or unacceptable. More specifically, they argue that “rather than sticking to ‘fixed’ categories constitutive of harm or violence, the platforms seemed to use these terms as concepts that can be molded and interpreted flexibly to fit their needs at any given moment or within a specific context. By rendering ‘harm’ a floating signifier, platforms can respond agilely to public opinion and outcry about emergent concerns around harm, rather than binding themselves to a pre‐established definition, which would oblige them to proactively address these emergent concerns.” Indeed, creating hierarchies of harm that render specific contexts, types, and manifestations of violence into manageable and computable categories constrains the possibility of actually addressing the systemic factors that enable such violence and abuse in the first place.

Abstract: A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies in regard to what constitutes “hate speech” and “harmful content,” it appears that platforms often use subjective judgments of harm that specifically pertains to spectacular, physical violence—but harm takes on many shapes and complex forms. The politics of defining “harm” and “violence” within these platforms are complex and dynamic, and represent entrenched histories of how control over these definitions extends to people’s perceptions of them. Via a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that platforms’ narrow definitions of harm and violence are not just insufficient but result in these platforms engaging in a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions of not just what violence is and how it manifests, but who it impacts. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm. We provide a number of suggestions, namely a restorative justice-focused approach, in addressing platform harm.
