Are, C. (2024). Flagging as a silencing tool: Exploring the relationship between de-platforming of sex and online abuse on Instagram and TikTok. New Media & Society.
Link: https://doi.org/10.1177/14614448241228544
Open Access: Yes
Notes: As the array of affordances that platforms offer expands, users encounter a corresponding set of possibilities for engaging in and experiencing online abuse. A clear example is flagging, which allows users to report content that may violate community guidelines. In this context, Are explores flagging as a form of online abuse among Instagram and TikTok users, identifying three critically important themes. First, users perceive flagging as a form of online abuse deployed as a silencing mechanism. Second, the negative impacts of flagging leave users searching for answers about what might have caused the removal of their content. Finally, users reported difficulties in appealing flagged content, as the procedures that lead to deletion are neither clear, accessible, nor transparent.
In sum, this research emphasizes that “malicious flagging is a digital silencing strategy driving users offline, a strategy particularly effective against users and topics that are stigmatised or have been the target of platform governance: sex work, art, activism, queer expression and sexual health education” (p. 15). Indeed, this work is an important contribution to studies of online abuse, as it helps us better understand the role of sociotechnical procedures such as flagging in sustaining and expanding ecologies of violence on digital platforms.
Abstract: This article investigates Instagram and TikTok’s approach to malicious flagging through users’ experience. Similar to liking, commenting and sharing, flagging is a reaction social media platforms allow users to highlight content that potentially violates community guidelines. However, flagging’s influence on moderation remains opaque: users who flag are largely unaware about the success of their reports; those who are de-platformed cannot be sure if or why their content has been reported, making them feel targeted not just by platforms’ processes, but by the retaliation of audiences themselves. Since the impact of de-platforming on users, and particularly on content creators who work through platforms, can be huge, this study provides scope to investigate flagging as an online abuse technique.