Content moderation is not easy. We explore the nuances, complexity, and human toll of moderating all of today's user-generated content.

April 25, 2023

Meet the people, protocols, and possibilities building a better tech future.



Photo credit: Design Ecologist, from Unsplash

There’s nothing moderate about content moderation

 

More than four billion videos are watched on Facebook every day.

 

On Pinterest, people watch a billion videos a day.

 

On YouTube, 500 hours of content are uploaded every minute.

 

On Instagram, almost 350,000 stories are posted every minute.

 

We are generating a 🤯-volume of content every day, which is one of many reasons it’s so hard to moderate content online. By definition, content moderation is the process of reviewing content to ensure it aligns with standards and guidelines, whether those standards are rooted in reducing harm and toxicity, promoting healthy discourse, helping people find what they’re looking for, or something else.

 

This week we’re peeling back the layers to understand the complexity, nuance, debates, and proposed solutions around content moderation.

 

Content doesn’t moderate itself

Big tech platforms use a combination of artificial intelligence algorithms, user reporting, and human review to identify, review, and remove user-generated content that is flagged as harmful or toxic (a simplified sketch of this pipeline follows the list below).

  • Sophisticated technologies are increasingly being leveraged to detect toxic and harmful content at scale, from new image recognition software to metadata filtering to natural language processing.
  • But according to Roi Carthy, the chief marketing officer of L1ght, a content moderation AI company, “The human brain is the most effective tool to identify toxic material.” A human’s ability to detect nuance and context still outperforms many tech tools, which is why Facebook alone employs over 40,000 content moderators worldwide through a network of contractors.
  • Once harmful content has been identified, content moderators collect data about why it was flagged and what policy it violates so that algorithms can be trained to better detect it next time.
  • Platforms like TikTok, Meta, and Twitter have policies against certain types of harmful content, but they often fail to catch everything and are regularly accused of being too slow to remove it.
  • Different countries have different laws. Last year the EU passed the Digital Services Act (DSA), which outlines new obligations for tech companies to moderate content. 
  • Trust and safety teams, a division that has emerged at big tech companies over the last decade, are often playing a game of “whack-a-mole,” even as their ranks are depleted by recent rounds of tech layoffs.
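For readers curious about the mechanics, here is a minimal, purely illustrative sketch (in Python) of the pipeline described above: an automated classifier and user reports decide what gets escalated to a human moderator, and the moderator’s decision is logged so future models can learn from it. Every name in it (Post, score_toxicity, needs_human_review, and so on) is hypothetical, not any platform’s real system.

# A hypothetical, simplified moderation pipeline (not any platform's real API).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0


@dataclass
class ReviewDecision:
    post_id: str
    remove: bool
    policy_violated: str | None  # recorded so future models can be trained on it


training_log: list[ReviewDecision] = []  # the feedback loop described above


def score_toxicity(post: Post) -> float:
    """Stand-in for ML detection (NLP, image recognition, metadata filtering)."""
    flagged_terms = {"threat", "slur"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)


def needs_human_review(post: Post, threshold: float = 0.5) -> bool:
    # Escalate if the model is confident enough or enough users reported the post.
    return score_toxicity(post) >= threshold or post.user_reports >= 3


def human_review(post: Post) -> ReviewDecision:
    """Stand-in for a moderator's judgment call, where nuance and context live."""
    remove = "threat" in post.text.lower()
    decision = ReviewDecision(post.post_id, remove, "violence" if remove else None)
    training_log.append(decision)  # logged so the detection models can be retrained
    return decision


if __name__ == "__main__":
    post = Post(post_id="42", text="This post contains a threat", user_reports=4)
    if needs_human_review(post):
        print(human_review(post))

In reality the classifier is a stack of machine-learning models and the review step involves tens of thousands of people, but the loop itself (detect, escalate, decide, retrain) has the same shape.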

 

Context matters

It’s no surprise that there is horrible content online, but many of the hardest content moderation decisions revolve around questions of what content is considered harmful. 

  • According to Crispin Pikes, a member of the Online Safety Tech Industry Association and co-founder of the AI moderation software company Image Analyzer, any content moderation technology needs to be highly configurable because context dictates meaning. A glamor shot in one culture is considered pornography somewhere else. A harmless label in one country is an insult in another.
  • A recent study by researchers at Cornell University found that the biases in content moderation algorithms negatively impacted non-western cultures. In the case of Bangladeshi users, Facebook’s content moderation system “frequently misinterpreted their posts, removed content that was acceptable in their culture and operated in ways they felt were unfair, opaque and arbitrary.” 
  • The Guardian found that AI algorithms used to analyze social media images were rife with gender bias, consistently rating images of women as more sexually suggestive than images of men.

 

The human toll

Content moderators have tough jobs, and investigative journalism and lawsuits are beginning to shine a light on the jobs’ mental health impacts.

  • The Bureau of Investigative Journalism released a report last October chronicling the 42,000 content moderators employed by a TikTok contractor in Colombia, who are paid minimum wage to sift through content that leaves many of them anxious and traumatized.
  • In 2020, Facebook agreed to pay $52 million to US content moderators who developed PTSD on the job. Meta has often chosen to outsource content moderation to contractors to keep labor, tax, and regulatory costs to a minimum, according to the New York Times.
  • After Time Magazine reported last year on the conditions inside one of Meta’s subcontractors in Kenya, where content moderators were paid as little as $2.20 per hour to review violent content, one worker took Meta to court in a high-profile lawsuit that will be heard in Kenya. 
  • A new lawsuit against Reddit claims that it neglected to care for content moderators diagnosed with PTSD. 

Despite the poor working conditions, content moderators can serve as the first line of defense, detecting early warning signs and spotting concerning trends on platforms. As Sarah Roberts, a faculty member at the Center for Critical Internet Inquiry at UCLA, explored in a Harvard Business Review interview last fall, “A lot of the collective intelligence that moderators gain from being on the front lines of the internet is lost. There’s no effective feedback loop for them to tell their employers what they’re seeing…”

 

The free speech question

Does content moderation encroach on the First Amendment right to freedom of speech in the US?

 

A new study from the Brookings Institution looked at public misunderstandings around content moderation and the First Amendment. It found that many Americans mistakenly believe that decisions to moderate or remove content on private digital platforms violate a person’s constitutionally guaranteed speech rights. Private platforms have their own speech rights, but the more people thought a platform’s content moderation violated their First Amendment rights, the lower their support for content moderation.

 

Meanwhile, two laws passed in 2021, one in Florida and one in Texas, made it illegal for tech companies to block or demote content, forcing them to host content on their platforms, even if it violates the platform’s terms of service. The laws were contested, and the US Supreme Court has agreed to hear the cases (but has since delayed). As we’ve written about in a previous newsletter, the Supreme Court is hearing two cases in 2023 on Section 230.

 

Complicated, not impossible

The path towards better content moderation is complicated, but not impossible.

  • New technologies. The UK just awarded funds to five new technologies that are paving the way: from facial-recognition technology that detects child abuse images before they’re even uploaded (though Google has run into challenges with similar technology) to apps that block harmful images before users see them. Web3 presents an opportunity for different forms of content moderation that could be more community-led.
  • Better platform policies. Platforms like Twitter are rolling out new policies and labels on tweets intended to limit the spread of problematic content. The policies, designed to “promote and protect the public conversation,” have been met with cautious optimism.
  • New regulation. New government policies could help standardize content moderation. Project Liberty’s McCourt Institute released a governance brief on the EU’s DSA and the US’s Section 230.

Effective, nuanced content moderation is hard work. It requires more than just armies of moderators, the latest algorithm, and government policies. Today’s challenges of moderating content are rooted in the design of our platforms. Moving forward will require rethinking how we design our social media platforms, from the protocol layer up.

📰 Other notable headlines

🤔 Willy Staley of The New York Times Magazine went long-form with an article that explores what Twitter is (and was), how it broke our brains, and what it all means. He writes: “The site feels a little emptier, though certainly not dead. More like the part of the dinner party when only the serious drinkers remain.”

 

🔑 Are passwords going away? WIRED explored how passkeys are on track to replace passwords and what’s ahead for a future free of passwords (but where your information is still secure).

 

📺 Influencers, vloggers, and other social media stars are going meta (no, not that Meta) by creating meta-content that talks about the problems of vlogging, the addictiveness of view counts, and the mental health issues of consuming and creating online content. The Atlantic uncovered this new form of digital authenticity in a recent article.

 

🚸 Slate interviewed Mitch Prinstein, Chief Science Officer at the American Psychological Association, to understand what social media is doing to our kids’ brains. One insight was that kids tend to overgeneralize online content, accepting it as truth, because they have less life experience to help put things in a broader context.

 

😷 Would you trust medical advice generated by artificial intelligence? This is the question the MIT Technology Review poses in an article that raises concerns about how AI tools used in the medical profession are trained on limited or biased data.

 

🤖 We know what content AI chatbots generate as outputs, but what data is fed into these models? That’s what the Washington Post sought to uncover in an article analyzing one of these data sets to reveal the types of proprietary, personal, and often offensive websites that go into an AI’s training data.


📱 A Wall Street Journal poll found that nearly half of U.S. voters would ban TikTok, with support highest among those who have never used it. Sixty-two percent of Republicans favor a ban, while just 33% of Democrats do.

🗣 Partner news & opportunities

Virtual Event on the Teenage Brain & Technology

May 8, 2023, 7pm ET

Being a teenager can be complicated, especially when it comes to technology. For teens, devices can unleash essential opportunities for connection and learning. Yet educators, mental health professionals, and parents alike are concerned about the impact of increased screen time on adolescent mental health and wellbeing. In this webinar from Fairplay, Erin Walsh and Dr. David Walsh, co-authors of the national bestseller Why Do They Act That Way?, will connect the dots between the science of the teenage brain and the risks and rewards amplified by teens’ digital activities. Register for free here.

 

a16z’s 2023 State of Crypto Report

The 2023 State of Crypto Report aims to address the imbalance between the noise of fleeting price movements and the data that tracks the signals that matter, including the durable progress of web3 technology. Overall, the report reflects a healthier industry than market prices may indicate, and a steady cycle of development, product launches, and ongoing innovation. Check it out here.

Thank you for reading.

Project Liberty, 888 Seventh Avenue, 16th Floor, 

New York, New York 10106.

