No insides on the outsides

“The main goal and mission of a content moderator is to clean up the dirt.” — unnamed content moderator in The Cleaners (00:06:20).

All systems must rid themselves of things. If they don’t discard, those systems face existential threats to their continuation. This is a fundamental insight of anthropologist Mary Douglas’s work and a core proposition of discard studies. Douglas has famously claimed that “(w)here there is dirt, there is system” (Douglas, 1966, p. 36). Her claim is that dirt is a consequence of any attempt at systematic ordering “in so far as ordering involves rejecting inappropriate elements” (Douglas, 1966, p. 36). Thus, discard studies can also claim that where there is system, there is dirt.

In some cases, what is discarded causes harm to variously situated people, places, and things. Who and what experience such threats and harms, and how, where, when, and under what conditions, makes differential power relations a central question. This is what discard studies is about. To date, most cases examined in discard studies have to do with waste, trash, or garbage in the common-language sense of those terms: stuff that might end up in a trash bin, a sewer, a materials recycling facility (MRF), or in unintended elsewheres like the shore of a beach or the human body. But discarding applies to any system. How might non-waste cases of discarding manifest some of the core propositions of discard studies? A recent documentary, aptly titled The Cleaners, offers insight into this question through the case of commercial content moderation of social media.

Commercial content moderation (CCM) is the for-profit creation and maintenance of social media feeds free of content that violates a platform’s terms of use policies (e.g., Facebook, Twitter; see Roberts, 2019). CCM determines what content remains on a social media platform and what content is removed. It is enacted via a set of delete/ignore decisions about what must be removed (deleted) and what may remain (ignored). These decisions maintain social media feeds as systems of content. Some amount of CCM is automated via sets of rules written in computer code (aka algorithms). However, a substantial proportion of CCM is done by actual people, and the people who do this labour experience various forms of harm, including clinically diagnosed post-traumatic stress (PTS) or post-traumatic stress disorder (PTSD). Commercial content moderation is a charismatic non-trash case for discard studies: not only do content moderators engage in discarding as a form of wage labour, but they, too, are discarded by the system they work within.


Screenshot from Wired Magazine 10/23/2014

In one particularly gripping scene of The Cleaners, an unnamed moderator—a young Filipino woman who has recently decided to leave her job as a content moderator—describes what are, for her, three deeply entwined and conflicting emotions: she yearns for economic independence and the ability to financially support herself and members of her family; she is deeply traumatized by her work as a content moderator for Facebook because of the graphic content she has had to handle; and, as she walks through the trash-strewn streets of her Manila neighbourhood, she grapples with a childhood fear ingrained in her by a parent’s incessant admonition that if she does not work hard she will end up in the street as a trash picker. As the scene culminates, a viewer is given the impression that her work as a moderator and the work of the trash picker on the street are indeed collapsing together for her and that, given the psychological harm she experiences as a moderator, she might actually prefer making a living as a trash picker. This is the illusion of choice. Her options, at least as depicted in the scene, are between two kinds of managing waste, both of which maintain systems of power focused not only on creating and managing waste, but on wasting those who do that labour.

How Commercial Content Moderation Works as a System

Content moderation guidelines are astonishingly ad hoc systems that continue to stymie attempts at automation. While social media companies continually tout the role of algorithms for content moderation, much of the work comes down to a human being sitting at a screen and making ignore/delete decisions (Chen, 2014). Early on, around 2012, the guidelines at Facebook were a one-page document that said little more than, “nudity is bad, so is Hitler, and if it [a Facebook post] makes you [the moderator] feel bad, take it down” (Radio Lab, 2018, 00:11:15). But things began to change rapidly in the face of increasing public pushback about take-down notices some Facebook users were receiving. Facebook had to get increasingly specific about moderation guidelines. Seemingly simple fixes to the system of content moderation guidelines (like those against gore stating ‘no insides on the outsides’) kept being overwhelmed by cases that didn’t fit. What was a one-page document quickly expanded to 50 pages as more and more guidance was needed by moderators to make decisions based on the platform’s policies as well as applicable laws where the platform operates. Soon, examples of these guidelines covering terrorism, sexuality, graphic violence, and hate propaganda (among other topics) were leaked and reported in the press. Typically, the guidelines are an admixture of bullet points, screenshots, and examples seemingly cobbled together from company presentations, written documents, and the like.


From The Guardian‘s Facebook Files, a special series focusing mainly on issues relating to CCM

Besides the growing complexity of the guideline documents, another important change occurred. In a series of blog posts begun in 2017 called ‘Hard Questions’, Facebook disclosed that it relies on something like 15,000 content moderators at 20 locations, including in “Germany, Ireland, Latvia, Spain, Lisbon, Philippines and the United States” (Silver, 2018, italics in the original). These moderators are usually hired by third-party contract staffing firms such as Upwork (formerly oDesk), CPL (O’Connell, 2019), or Cognizant (Newton, 2019a). The result is that the people actually doing content moderation are not direct employees of Facebook or other such platforms (we’ll see why that matters in a moment).

Scholarly research on CCM is relatively young (Roberts, 2016, 2018; Gillespie, 2018), and much of what outsiders know about the business comes from investigative reporting and companies’ own public disclosures (often coming on the heels of public pushback resulting from content takedown decisions). The trailer for The Cleaners offers a glimpse of how CCM actually gets done.

On the surface, CCM may appear to be an easy job. In essence, it comes down to making a binary decision: either ignore flagged content or delete it. Moderators are typically required to maintain a 98 percent quality rating, i.e., their quality managers must agree with 98 percent of their ignore/delete decisions. Perhaps you think this kind of work is simple. Try taking a quiz that mimics working as a content moderator for Facebook. You’ll be asked 11 questions, each requiring a delete or ignore call on actual situations like: “You come across an image of plus-sized models in bikinis. Delete or ignore?” (I managed 72 percent on the quiz, well below the 98 percent requirement for actual moderators, even though I had already watched the documentary and done some research on this topic. What’s your score on the quiz?).

Working as a contractor at the staffing firms to which Facebook and others outsource content moderation pays at or somewhat above minimum wage (for a given jurisdiction). For example, in Dublin, Ireland, content moderators make €12.98 per hour, equivalent to between €25,000 and €32,000 per year. This compares with average pay for a Facebook employee in Ireland of €154,000 per year (O’Connell, 2019). In Phoenix, USA, moderators are paid $15 per hour, which is above Arizona’s minimum wage of $11 per hour and works out to about $28,000 per year (Newton, 2019a). As is well documented, minimum wages are very often below the actual cost of living.
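A back-of-the-envelope check of that arithmetic is sketched below; the weekly hours and weeks worked per year are assumptions for illustration, not figures from the reporting:

```python
# Back-of-the-envelope annualization of the hourly rates reported above.
# The schedules (hours per week, weeks per year) are assumptions for
# illustration, not figures from O'Connell (2019) or Newton (2019a).

def annual_pay(hourly_rate, hours_per_week=40, weeks_per_year=52):
    """Gross annual pay under an assumed schedule."""
    return hourly_rate * hours_per_week * weeks_per_year

dublin_moderator = annual_pay(12.98)                    # ~27,000 euro/year
phoenix_moderator = annual_pay(15, weeks_per_year=48)   # ~28,800 USD/year
facebook_ireland_average = 154_000                      # euro/year (O'Connell, 2019)

print(f"Dublin moderator:  ~{dublin_moderator:,.0f} euro/year")
print(f"Phoenix moderator: ~{phoenix_moderator:,.0f} USD/year")
print(f"Gap to average Facebook Ireland pay: {facebook_ireland_average / dublin_moderator:.1f}x")
```

Under those assumed schedules, a Dublin moderator earns roughly a sixth of what the average Facebook employee in Ireland is paid.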

Beyond low pay, many other aspects of the job of content moderation make for poor working conditions. According to The Cleaners, a moderator’s goal is to make a determination on 25,000 images per shift. Assuming an eight-hour shift with no breaks, that is barely over one second per image. Facebook itself claims content reviewers are not given quotas (Silver, 2018), but that claim is contradicted by moderators interviewed in various investigative reports. At a CCM firm in Dublin, moderators had between 20 and 30 seconds to make decisions, depending on the type of content (O’Connell, 2019).
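The time pressure is easy to make concrete. A minimal sketch, using the 25,000-images-per-shift figure from The Cleaners; the shift length and break time are assumptions:

```python
# Seconds available per ignore/delete decision at the quota reported in The Cleaners.
SHIFT_HOURS = 8            # assumed shift length
IMAGES_PER_SHIFT = 25_000  # figure reported in the documentary

seconds_per_image = SHIFT_HOURS * 3600 / IMAGES_PER_SHIFT
print(f"{seconds_per_image:.2f} seconds per image")  # ~1.15 s with no breaks

# With an assumed hour of breaks, the margin shrinks further:
print(f"{(SHIFT_HOURS - 1) * 3600 / IMAGES_PER_SHIFT:.2f} seconds per image")  # ~1.01 s
```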

The contract nature of employment as a content moderator matters for several reasons. For one, labour laws protecting employees in terms of scheduling, dismissal, and benefits such as sick leave typically do not cover contract employees, or cover them in much weaker ways than non-contract employees. Content moderators also typically have to sign non-disclosure agreements (NDAs), which bar them from speaking publicly about the specifics of their employment. These arrangements enhance the unevenness of power between employers and employees, shifting the balance toward managers and their firms and away from the people working as content moderators. A net result of this assembling of unequal power relations is that the third-party contract staffing firms are at least partially shielded from employee criticisms of their work conditions. Meanwhile, the clients of these third-party firms—Facebook and other brand-name social media platforms—are shielded from criticisms about working conditions because content moderators do not, from a legal point of view, work for Facebook. Thus legal systems intersect with the business of content moderation in ways that help shore up its failures to contain problematic content.

In discard studies, waste and wasting are always part of systems that decide what is valuable and what is discardable, and that allow these practices to occur infrastructurally: that is, where material, economic, legal, and other supports in the form of “buildings, bureaucracies, standards, forms, technologies, funding flows, affective orientations” uphold power relations (Murphy, 2017, p. 6). Workers are wasted systemically.


Image from press kit (PDF) for “The Cleaners”.

All sinks are spills

Attempts are often made to manage discards via containment. Landfills sequester municipal solid waste. Sewage treatment plants impound contaminated water. Geologic formations hold barrels of radioactive waste. These systems of containment share something beyond their purpose of holding things in place: they all leak and, eventually, they all spill (Gabrys, 2009). The proposition that all sinks are spills holds for commercial content moderation, too.

Facebook’s content moderation guidelines grew increasingly complex as public pushback against its and other social media platforms’ takedown notices and deleted posts proliferated. What started out as a one-page document for in-house moderators expanded to at least 50 pages, and content moderation became an outsourced service. Each iteration attempted to contain more and more types or categories of content deemed to be objectionable. Facebook’s VP of Operations, Ellen Silver, wrote, “We want to keep personal perspectives and biases out of the equation entirely — so, in theory, two people reviewing the same posts would always make the same decision. Of course, judgments can vary if policies aren’t sufficiently prescriptive” (Silver, 2018). For Silver, the solution is sufficient prescription such that any content moderator anywhere, looking at content from anywhere about anything, will come to the same ignore/delete decision.

A stark example helps put the consequences of such aspirations to universality in perspective. In the US, anti-discrimination laws protect groups defined by such categories as race, religion, and gender, but not age. As a consequence, a Facebook policy led to racist posts about Black children receiving ignore decisions while posts deemed derogatory to men were deleted (Radio Lab, 2018, 00:36:00+). Facebook’s explanation for the decisions was that its policies incorporate deference to US anti-discrimination laws. In Facebook’s reasoning, the category ‘children’ (which denotes age) modifies ‘Black’ (which denotes race) such that the racist posts in question did not violate US anti-discrimination laws, regardless of whether the posts originated or were seen within the US or elsewhere.

A search for ever-finer categorizations as a system to control objectionable content leads to ever more failures of the system to contain and control. Arguably, these failures occur when a specific system and its peculiar particulars are mistaken for a universal. What Facebook and other US-based social media platforms are struggling with is that their terms of service make all kinds of assumptions based on, for example, notions of free expression that are peculiar to place-based and historically specific norms such as the US Constitution’s First Amendment (Roberts, 2019). Outside the jurisdiction of what is currently the United States, other norms operate and come into conflict with the unacknowledged assumptions of universality baked into the system.

People employed as content moderators are given quality ratings based on the accuracy of their ignore/delete decisions. Accuracy is determined by managers’ audits of those decisions, amounting to a random selection of about 3 percent of a moderator’s decisions. Moderators find themselves in a position where they must “second-guess the system” (O’Connell, citing Gray), because even if a moderator makes the correct decision to delete but does so for the wrong reason, their quality score goes down, which could lead to loss of employment. In practice, this means it is not enough that a moderator deletes a video showing, say, an execution. Moderators have to ask themselves about a variety of contextual elements accompanying the video. For example, is the person attempting to condemn the act by posting it? Does the video contain symbolism relating to any listed terrorist organization? Is the video praising and advocating lethal violence? Answers to two of these questions point to a delete decision, but which one is the reason the content moderator’s quality manager will use? Making the right decision for the wrong reason (as determined by the quality manager) dings the content moderator on quality. Accumulate enough dings and your employment contract is terminated. Making the wrong decision is even worse for your quality rating. Meanwhile, Silver (Facebook’s VP of Operations) suggests sufficient prescription will solve the problem: “we always strive to clarify and improve our policies — and audit a sample of reviewer decisions each week to make sure we uncover instances where the wrong call was made. Our auditors are even audited on a regular basis” (Silver, 2018).
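A minimal sketch of how that scoring logic punishes “right decision, wrong reason”: the 3 percent audit sample and 98 percent threshold come from the figures above, while the field names and example policy reasons are hypothetical illustrations, not Facebook’s actual categories.

```python
import random

AUDIT_SAMPLE_RATE = 0.03   # ~3 percent of a moderator's decisions are audited
QUALITY_THRESHOLD = 0.98   # required rate of agreement with the auditor

def audit_quality(decisions, auditor_calls):
    """Share of audited decisions where both the action AND the cited reason match."""
    sample = random.sample(range(len(decisions)), int(len(decisions) * AUDIT_SAMPLE_RATE))
    agreed = sum(
        1 for i in sample
        if decisions[i]["action"] == auditor_calls[i]["action"]
        and decisions[i]["reason"] == auditor_calls[i]["reason"]
    )
    return agreed / len(sample) if sample else 1.0

# A moderator who deletes an execution video citing "graphic violence" when the
# auditor expected "terrorist propaganda" is scored as wrong, even though the
# content comes down either way.
score = audit_quality(
    decisions=[{"action": "delete", "reason": "graphic violence"}] * 100,
    auditor_calls=[{"action": "delete", "reason": "terrorist propaganda"}] * 100,
)
print(score < QUALITY_THRESHOLD)  # True: the quality rating takes the hit
```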

But even auditing the auditors can’t contain the system’s dirt, as Facebook’s manager of contractors, Arun Chandra, admits. Speaking about addressing the working conditions of content moderators, Chandra states in an interview with The Verge, “We’ll never solve 100 percent, but I’m trying to show I can solve 80 to 90 percent of the larger problems.” In his own words, then, the normal, everyday operation of the containment system for objectionable content entails spills of 10 to 20 percent. That range might seem small or manageable but, like other spills of contaminants, what slops out of the containment system doesn’t spread its effects evenly. Some people are more affected than others; in this case, it is the people working as contractors, who make the least money and have the least control over their working conditions.

“So many graphic videos were reported that they could not be contained in Speagle’s queue. […] I [Speagle] was getting the brunt of it, but it was leaking into everything else.” — Shawn Speagle, former content moderator, in an interview with journalist Casey Newton for The Verge (Newton, 2019b).

Psychological counseling services are available to moderators while they are employed (Newton, 2019a). However, the moment they leave or are fired, those supports end. Of course, that is not how psychological trauma works (Perez et al., 2010). Discard studies would note the various forms of externalization going on here: content deemed problematic by a platform’s rules is removed from its system, yet through a process whose normal operation places that content in the minds of moderators, enough so that harms such as clinical depression and PTSD are experienced. Content moderators’ brains become the dumping grounds for that which threatens the social media platform’s continuance as the system that it is. Also, via the quality ratings (or the decision of a moderator to leave), workers themselves are discarded from the system, taking with them the emotional and financial costs of the psychological harms their work has caused them. When that happens and content moderators lose access to what therapeutic counseling they had while employed, their psyches become sacrifice zones (Lerner, 2010).

In response to these containment failures, Facebook has created what it calls a ‘global resiliency team’. Tellingly, the team’s corporate lead, Chris Harrison, is thinking about solutions in terms of threshold theories of harm. “Is there such thing as too much? […] Scientifically, do we know how much is too much? Do we know what those thresholds are? The answer is no, we don’t. Do we need to know? Yeah, for sure” (Harrison interviewed for The Verge).

Threshold theories of harm have their origins in early 20th-century sanitation engineering and were enshrined in US laws between 1945 and the 1970s (e.g., US FDA limits for DDT; US EPA pollution limits in the Clean Air and Clean Water Acts; see Liboiron, 2015). Noticing this incipient deference to the science of thresholds points to a potential critical intervention for discard studies of commercial content moderation via the work of biologist Mary O’Brien (1993). O’Brien discusses the difference between risk assessment and alternatives assessment. Risk assessments often rely on threshold theories of allowable harm (e.g., how much dioxin in mothers’ milk is too much?). Alternatives assessments ask, instead, what other possibilities there might be to the industrial use of chlorine that leads to dioxin releases. By analogy, one might ask what alternatives there are to content moderation in its current form. In Canada and the US, other forms of mass communication such as radio and television are regulated through licensing requirements. Those laws seek a balance among civil liberties, both protecting free speech and sanctioning hate speech.

Conclusion

In a twist of irony, Facebook’s chief technology officer, Mike Schroepfer, was recently profiled by the New York Times about his role in finding solutions to “toxic content” on the platform. The piece narrates his personal struggles with finding those solutions without a single mention of the thousands of other people specifically hired in outsourced contract work to perform exactly the duties that, sometimes, the Times tells readers, bring Mr. Schroepfer “to tears”. Speaking about the live-streamed horror of the killings perpetrated in New Zealand by a self-professed white supremacist, Mr. Schroepfer states, “I wish I could unsee it”. But he can’t. And neither can the legions of people working as content moderators for Facebook. To make this a story about a heroic individual with whom readers can empathize, the daily maintenance work of content moderation, done by people in slightly-better-than-minimum-wage jobs, has to be erased. Mr. Schroepfer is a centre of power, deeply upset by that which must be discarded from the systems he oversees.

Like all systems, CCM begins to break as it strives toward universalism. It attempts to take a place-based ethics (e.g., the US Constitution’s protections of free speech) and export it to all places where social media platforms operate (more generally, see Daston, 2006). The more places the platforms’ content policies extend to, the more their taken-for-granted universalist intentions face gaps in their ability to work smoothly. And the more these gaps in the system’s ability to operate on its own terms grow, the more recourse there is to discarding so as to shore up the order of the system’s operation.

In the case of CCM, discard is about deleting and ignoring content, but it is also about discarding workers’ mental health. CCM workers are used as sinks to contain existential challenges to the platforms’ systems of operation. In this way, containment is essential to how the power of social media platforms operates to keep generating value. Containment works to generate value until containment itself breaks — CCM workers’ psyches fail to compartmentalize the horrendous content they are tasked to moderate and employees leave.

It is telling that, as a system, CCM works at the expense of the mental health of the workers doing the moderation. Mental health is literally made disposable: workers’ psyches become the sinks that contain content deemed unfit for inclusion on this or that social media platform. It is also telling that instead of being cultivated in terms of care, repair, health, or wellbeing, workers’ mental health, their psyches, is treated as a disposal site for objectionable content and organized as a sacrifice zone.

Discard studies and waste studies share similarities, but they are not inherently the same thing. Discard studies examines who and what must be removed from systems for those systems to maintain their integrity (i.e., their persistence and order). Discards are what threaten that integrity if they are not removed, and they put questions of power front and centre (e.g., who and what must be removed, by whom and what, and under what conditions, for a given system to continue as it is?). Discard studies examines many kinds of systems, of which systems of waste management are one, albeit primary, case. Commercial content moderation helps illustrate some of the distinctions between discard studies and waste studies.

Works Cited

Chen, Adrian. “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” Wired, October 23, 2014. https://www.wired.com/2014/10/content-moderation/.
Daston, Lorraine. “The History of Science as European Self-Portraiture.” European Review 14, no. 4 (October 2006): 523–36. https://doi.org/10.1017/S1062798706000536.
Gabrys, Jennifer. “Sink: The Dirt of Systems.” Environment and Planning D: Society and Space 27 (2009): 666–81.
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018.
Lerner, Steve. Sacrifice Zones: The Front Lines of Toxic Chemical Exposure in the United States. MIT Press, 2010.
Liboiron, Max. “Redefining Pollution and Action: The Matter of Plastics.” Journal of Material Culture, December 29, 2015, 1–24. https://doi.org/10.1177/1359183515622966.
Murphy, Michelle. The Economization of Life. Duke University Press, 2017.
Newton, Casey. “The Secret Lives of Facebook Moderators in America.” The Verge, February 25, 2019a. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.
Newton, Casey. “Three Facebook Moderators Break Their NDAs to Expose a Company in Crisis.” The Verge, June 19, 2019b. https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa.
O’Brien, Mary H. “Being a Scientist Means Taking Sides.” BioScience 43, no. 10 (1993): 706–8. https://doi.org/10.2307/1312342.
O’Connell, Jennifer. “Facebook’s Dirty Work in Ireland: ‘I Had to Watch Footage of a Person Being Beaten to Death.’” The Irish Times, March 30, 2019. https://www.irishtimes.com/culture/tv-radio-web/facebook-s-dirty-work-in-ireland-i-had-to-watch-footage-of-a-person-being-beaten-to-death-1.3841743.
Perez, L. M., J. Jones, D. R. Englert, and D. Sachau. “Secondary Traumatic Stress and Burnout among Law Enforcement Investigators Exposed to Disturbing Media Images.” Journal of Police and Criminal Psychology 25, no. 2 (2010).
Radio Lab. “Post No Evil | Radiolab.” 2018. https://www.wnycstudios.org/story/post-no-evil.
Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press, 2019.
———. “Digital Detritus: ‘Error’ and the Logic of Opacity in Social Media Content Moderation.” First Monday 23, no. 3 (March 1, 2018). https://doi.org/10.5210/fm.v23i3.8283.
Silver, Ellen. “Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?” Facebook Newsroom, 2018. Accessed April 25, 2019. https://newsroom.fb.com/news/2018/07/hard-questions-content-reviewers/.

 
