May 7, 2024
The unexpected alliance to protect kids online
It’s not often that the biggest and most competitive tech companies come together to form an alliance, but that’s exactly what happened last month.
Ten major tech firms committed to adopting principles that will protect children against the growing threat of online sexual abuse posed by generative AI.
This week, we’re exploring what that commitment actually means and whether it could serve as a turning point toward a more proactive approach to Trust & Safety.
As a warning, this newsletter contains references to child sexual abuse. Please read with care.
// Commitments to safety
Spearheaded by the Project Liberty Alliance members Thorn and All Tech Is Human, ten tech companies (Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI) committed to incorporating new safety measures to protect children from online exploitation exacerbated by generative AI technology.
// The risks of generative AI for kids
Child sexual abuse material (CSAM) has been a problem on the internet for years.
One in three internet users around the world is a child, and 800 million children actively use social media.
In 2023, more than 104 million files of suspected CSAM were reported in the US.
But generative AI tools can create and distribute CSAM at scale, posing a massive threat to children and families around the world.
Victim identification is harder: For online CSAM, identifying the victim is already a “needle in a haystack” problem for law enforcement. With AI-generated CSAM (AIG-CSAM), it’s even harder to identify the victim because source images can be blended into photorealistic permutations.
AI creates new ways to victimize and re-victimize children: Bad actors can now easily generate new CSAM, sexualize benign imagery of children, and generate content to target children.
More AIG-CSAM begets more AIG-CSAM: Thorn reports that the growing prevalence of this material fuels more demand for CSAM. They point to research showing that the more engagement there is with this material, the greater the risk of future offenses.
AI models can make bad actors more effective: AI chatbots can provide instructions to bad actors on everything from how to manipulate victims to how to destroy evidence.
// Safety by design
Instead of responding to an offense that has already occurred, “safety by design” takes a more proactive approach. It requires tech companies to anticipate where threats may occur—from design to development to deployment—and build in the necessary safeguards.
The Thorn white paper outlines three steps for tech companies to get ahead of the problem:
Develop, build, and train generative AI models that proactively address child safety risks. This includes responsibly sourcing training datasets free from CSAM (a report last year found CSAM in an open dataset used to train AI models; a hash-screening sketch follows this list), conducting CSAM-oriented stress-testing during development, and building media provenance tools that help law enforcement track down bad actors.
Release and distribute generative AI models only after they have been trained and evaluated for child safety. This includes responsibly hosting models and supporting the developer ecosystem in their efforts to address child safety risks.
Maintain model and platform safety by continuing to understand and respond to child safety risks. This includes removing AI models built specifically to produce AIG-CSAM (some services “nudify” benign images, an issue that has shown up in high schools this year), investing in research to stay ahead of the curve, and detecting and removing harmful content.
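To make the first step concrete, here is a minimal sketch of the kind of hash screening used to keep known CSAM out of training datasets. Production systems rely on perceptual hashes (such as PhotoDNA or PDQ) that survive re-encoding and cropping, and on vetted hash lists maintained by organizations like NCMEC; the SHA-256 digests, file paths, and hashes.txt file below are illustrative assumptions, not a real pipeline.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def screen_dataset(image_dir: Path, blocked_hashes: set[str]) -> list[Path]:
    """Return only the images whose hashes are NOT on the block list."""
    kept = []
    for path in sorted(image_dir.glob("*.jpg")):
        if sha256_of_file(path) in blocked_hashes:
            # A real pipeline would also report the match to the
            # appropriate authority, not silently drop the file.
            continue
        kept.append(path)
    return kept


if __name__ == "__main__":
    # Hypothetical inputs: hashes.txt stands in for a vetted hash list,
    # and training_images/ for a dataset awaiting screening.
    blocked = set(Path("hashes.txt").read_text().split())
    clean = screen_dataset(Path("training_images"), blocked)
    print(f"{len(clean)} images kept after hash screening")
```

The design point is that matching runs against opaque hashes, so engineers screening a dataset never handle or view the underlying material, and the same lists can be shared across companies without distributing the imagery itself.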
David Ryan Polgar, Founder & President of All Tech Is Human, who worked with Thorn to establish the principles, said, “The biggest challenge for companies making this commitment, along with all other companies and key stakeholders, is recognizing that a thorny issue like reducing AIG-CSAM is hard but it is not impossible. The turning point that this alliance represents is a growing recognition that time is of the essence so we need to move at the speed of tech.”
// Impossible to be perfect
Tech companies are not starting from scratch. They’ve been working on Trust & Safety initiatives for years, but there is a wide chasm between pledging to do something and actually doing it.
Even the most proactive safety-by-design efforts can’t catch everything. While CSAM can be minimized in the datasets used to train AI models, it is far harder to clean open datasets, which have no central authority, or to stop CSAM from being distributed through them, according to research from last year.
While CSAM images are illegal if they depict real children or were generated by models trained on imagery of real children, 100% synthetic images that draw on no real source images could be protected as free speech, according to a new report by Stanford University.
The Stanford report also found that the CyberTipline, the federal clearinghouse for reports of child sexual abuse material, run by the National Center for Missing & Exploited Children, is overwhelmed and unprepared to handle the volume of AI-generated CSAM.
// It will take all of us
Commitments and actions are a step in the right direction, but they’re just one dimension of a more comprehensive solution that likely requires federal regulation to ensure the safety of today’s tech ecosystem. There is reason to believe that today’s breaking point might become tomorrow’s tipping point.
Polgar said, “This is an issue that relies on people coming together across civil society, government, industry, and academia. So instead of playing Whac-A-Mole or searching for an elusive silver bullet, we need to conceive of complex tech & society issues like a Rubik’s Cube. It’s hard, it’s connected, but it’s solvable with enough hard work.”
Project Liberty in the news
// MSNBC’s Chuck Todd featured “Our Biggest Fight” in his article about the race to build a better internet. Read here.
Other notable headlines
// 🇪🇺 The EU is investigating Meta over the spread of disinformation on its platforms ahead of the EU’s elections in June, according to an article in The New York Times.
// 🤖 Teens are making friends with AI chatbots. But what happens when AI advice goes too far? An article in The Verge explored the benefits and costs when your first call is to an algorithm.
// 🏛 An article in WIRED profiled Arati Prabhakar, the woman who introduced ChatGPT to President Biden and has the ear of the president on all things AI.
// 🚗 An investigation by The Markup found that internet-connected cars can enable domestic abuse by allowing abusers to track domestic violence survivors.
// 💵 Videos on TikTok about the economy and consumerism are rewiring the brains of Gen Z and creating cases of ‘money dysmorphia,’ according to an article in The Wall Street Journal.
// Virtual & in-person event on The Anxious Generation
May 20th, 2-3pm ET
The Sustainable Media Center is hosting an in-person and virtual event with Jonathan Haidt, Scott Galloway, and Emma Lembke for a discussion of Haidt’s book The Anxious Generation. They will explore the underlying causes of, and potential solutions to, the rising levels of anxiety among young people. Register here.