Are AI companions safe for teens?
This is Part I of a two-part series on AI companions. Make sure to tune in next week for Part II, where we’ll outline solutions and a way forward. Please take care when reading this newsletter. It addresses topics of suicidal ideation and sexualized content.
Fourteen-year-old Sewell Setzer III from Florida had become obsessed with an AI companion from Character.ai. He spent hours alone talking to it every day.
According to a lawsuit filed by his mother after Setzer’s death, the companion responded to his depression by asking if he had a plan to kill himself. Setzer said he did have a plan, but was unsure whether it would cause him great pain. The chatbot allegedly replied, “That’s not a reason not to go through with it.” In February 2024, Setzer took his own life.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Setzer’s mother, Megan Garcia, said in a statement. Character.ai has denied the allegation that it is responsible for Setzer’s death.
Setzer’s story isn’t the only one. There’s growing global concern about the risks of AI companions.
// What are AI companions?
AI companions are a type of AI chatbot like Claude or ChatGPT. But instead of zeroing in on precise answers or mining the vast resources of the internet, they are AI models with distinct personalities aimed at forming ongoing relationships with users in human-like conversations.
They are also intentionally designed to act and communicate in ways that deepen the illusion of sentience. For example, they might mimic human quirks, explaining a delayed response by writing, “Sorry, I was having dinner.”
Whereas ChatGPT is designed to answer questions, many AI companions are designed to keep users emotionally engaged. There are millions of personas: from “Barbie” to “toxic gamer boyfriend” to “handsome vampire” to “demonic possessed woman” to hyper-sexualized characters.
Scroll through the options on Botify, another AI companion company, or Character.ai, and you begin to realize how extensive the library of potential companions is. Users can also develop their own customized characters, tailored exactly for them.
Today’s AI companions are eerily similar to Samantha, the AI companion in the 2013 movie Her. The movie was prescient in its prediction about the relationships that could form between humans and AI bots, and even foreshadowed Friend, a wearable AI chatbot worn around the neck that is set for release later this year (watch its release video here, which seems to be marketed to teens).
// AI companions are everywhere
The usage statistics around AI companions are startling:
Snapchat’s companion has over 150 million users. Replika, another platform that brands itself as “the AI companion who cares,” has an estimated 25 million users. Xiaoice, a chatbot created by Microsoft in 2014 for the Chinese market, has 660 million users.
As a product, AI companions are not exclusively aimed at minors, but seven in ten teens already use generative AI products (like companions), according to Common Sense Media, a Project Liberty Alliance member. Character.ai’s largest user demographic is 18-to-24-year-olds.
As the U.S. faces a loneliness epidemic, the market for AI companions is expected to grow. In a 2024 survey of 1,006 American students who use a Replika AI companion, 90% reported experiencing loneliness (far above the national average), indicating that this technology might be appealing to people in their most vulnerable states.
// The risks of intimate, artificial conversations
The risks of AI companions, particularly for young users, are well-documented:
- Inadequate safety measures for mental health and self-harm concerns: According to an assessment of AI companions by Common Sense Media, AI companions struggle to tell when users are in crisis or need real help. When testers showed signs of severe mental illness and suggested a dangerous action, the companions encouraged that action instead of raising concerns. “Because these ‘friends’ are built to agree with users, they can be risky to people experiencing, or vulnerable to, conditions like depression, anxiety, ADHD, bipolar disorder, or psychosis,” the report noted.
- Emotional dependency and unhealthy attachment: Research from Cambridge University found evidence that AI companions have an “empathy gap” with young children. Children are particularly susceptible to treating AI chatbots as lifelike confidantes, and their interactions with AI companions often go awry when bots fail to recognize the unique needs and vulnerabilities of young users.
- Exposure to inappropriate content and interactions: Last month, an investigative report by the Wall Street Journal found that in conversations with children, Meta’s AI companions turn explicit and sexual. Meta denied wrongdoing, but still made multiple changes to its bots after the WSJ shared its findings. Elsewhere, a federal product liability lawsuit filed against Character.ai alleges that a nine-year-old developed “sexualized behaviors prematurely” after being exposed to hypersexual content from an AI companion.
- Privacy issues: These chatbots are designed to collect deeply personal information about their users. A 2024 report by Mozilla analyzed 11 romantic AI chatbots and concluded that AI companions are “on par with the worst categories of products we have ever reviewed for privacy.”
// A repeating pattern
AI companions take the risks of social media a step further. They are engineered to sustain emotional engagement at a level that is often beyond what young users can fully understand.
An intimate chatbot is different from a social media feed, but the patterns common in today's tech companies have repeated themselves: a business model that relies on many users and extensive engagement, a product that prioritizes addictive features over user safety, and an underlying data layer that requires ongoing collection and surveillance.
The rise of AI companions underscores a conviction we share at Project Liberty: This moment calls for more than quick fixes to specific tech products. It demands a deeper rethinking of the systems that shape our digital lives. That’s why we’re working to advance The People’s Internet—an internet rooted in safety, dignity, and agency by design.
In next week’s newsletter, we will turn our attention to what’s being done to address the growing risk of AI companions.
In the meantime, we’d love to hear from you. What’s your perspective on AI companions? Have you used an AI companion? What was your experience?