As teens spend hours with AI companions, experts warn of dangerous mental health advice and emotional dependency.

May 20th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Are AI companions safe for teens?

 

This is Part I of a two-part series on AI companions. Make sure to tune in next week for Part II, where we’ll outline solutions and a way forward. Please take care when reading this newsletter. It addresses topics of suicidal ideation and sexualized content.

 

Fourteen-year-old Sewell Setzer III, from Florida, had become obsessed with an AI companion from Character.ai. Every day, he spent hours alone talking to it.

 

According to a lawsuit filed by his mother after Setzer’s death, the companion responded to his depression by asking if he had a plan to kill himself. Setzer said he did have a plan, but was unsure whether it would cause him great pain. The chatbot allegedly replied, “That’s not a reason not to go through with it.” In February 2024, Setzer took his own life.

 

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Setzer’s mother, Megan Garcia, said in a statement. Character.ai has denied the allegation that it is responsible for Setzer’s death.

 

Setzer’s story isn’t the only one. There’s growing global concern about the risks of AI companions.

 

// What are AI companions?

AI companions are a type of AI chatbot, like Claude or ChatGPT. But instead of zeroing in on precise answers or mining the vast resources of the internet, they are AI models with distinct personalities, designed to form ongoing relationships with users through human-like conversation.

 

They are also intentionally designed to act and communicate in ways that deepen the illusion of sentience. For example, they might mimic human quirks, explaining a delayed response by writing, “Sorry, I was having dinner.”

 

Whereas ChatGPT is designed to answer questions, many AI companions are designed to keep users emotionally engaged. There are millions of personas: from “Barbie” to “toxic gamer boyfriend” to “handsome vampire” to “demonic possessed woman” to hyper-sexualized characters. 

 

Scroll through the options on Botify, another AI companion company, or Character.ai, and you begin to realize how extensive the library of potential companions is. Users can also develop their own customized characters, tailored exactly for them.


Today’s AI companions are eerily similar to Samantha, the AI companion in the 2013 movie Her. The movie was prescient in its prediction of the relationships that could form between humans and AI bots, and even foreshadowed Friend, an AI chatbot worn around the neck that is slated for release later this year (watch the release video here, which seems to be marketed to teens).

 

// AI companions are everywhere

The usage statistics around AI companions are startling:

  • Character.ai reported that it receives 20,000 queries per second, about one-fifth of the estimated search volume served by Google.
  • Interactions with AI companions last 4x longer than the average time interacting with ChatGPT.
  • Botify reported that its users (mostly Gen Z) spend an average of two hours per day on their platform.

Snapchat’s companion has over 150 million users. Replika, another platform that brands itself as “the AI companion who cares,” has an estimated 25 million users. Xiaoice, a chatbot created by Microsoft in 2014 for the Chinese market, has 660 million users.

 

As a product, AI companions are not exclusively aimed at minors, but seven in ten teens already use generative AI products (like companions), according to Common Sense Media, a Project Liberty Alliance member. Character.ai’s largest user demographic is 18-to-24-year-olds.


As the U.S. faces a loneliness epidemic, the market for AI companions is expected to grow. In a 2024 survey of 1,006 American students who use a Replika AI companion, 90% reported experiencing loneliness (far above the national average), indicating that this technology might be appealing to people in their most vulnerable states.

 

// The risks of intimate, artificial conversations

The risks of AI companions, particularly for young users, are well-documented:

  • Inadequate safety measures for mental health and self-harm concerns: According to an assessment by Common Sense Media, AI companions struggle to tell when users are in crisis or need real help. When testers demonstrated signs of severe mental illness and suggested a dangerous action, the companion encouraged that action instead of raising concerns. “Because these ‘friends’ are built to agree with users, they can be risky to people experiencing, or vulnerable to, conditions like depression, anxiety, ADHD, bipolar disorder, or psychosis,” the report noted.
  • Emotional dependency and unhealthy attachment: Research from Cambridge University found evidence that AI chatbots have an “empathy gap” with young children. Children are particularly susceptible to treating AI chatbots as lifelike confidantes, and their interactions can go awry when bots fail to recognize the unique needs and vulnerabilities of young users.
  • Exposure to inappropriate content and interactions: Last month, an investigative report by the Wall Street Journal found that in conversations with children, Meta’s AI companions turn explicit and sexual. Meta denied wrongdoing, but still made multiple changes to its bots after the WSJ shared its findings. Elsewhere, a federal product liability lawsuit filed against Character.ai alleges that a nine-year-old developed “sexualized behaviors prematurely” after being exposed to hypersexual content from an AI companion.
  • Privacy issues: These chatbots are designed to collect deeply personal information about their users. A 2024 report by Mozilla analyzed 11 romantic AI chatbots and concluded that AI companions are “on par with the worst categories of products we have ever reviewed for privacy.”

 

// A repeating pattern

AI companions take the risks of social media a step further. They are engineered to sustain emotional engagement at a level that is often beyond what young users can fully understand.

 

An intimate chatbot is different from a social media feed, but the patterns common in today's tech companies have repeated themselves: a business model that relies on many users and extensive engagement, a product that prioritizes addictive features over user safety, and an underlying data layer that requires ongoing collection and surveillance.

 

The rise of AI companions underscores a conviction we share at Project Liberty: This moment calls for more than quick fixes to specific tech products. It demands a deeper rethinking of the systems that shape our digital lives. That’s why we’re working to advance The People’s Internet—an internet rooted in safety, dignity, and agency by design.

 

In next week’s newsletter, we will turn our attention to what’s being done to address the growing risk of AI companions.

 

In the meantime, we’d love to hear from you. What’s your perspective on AI companions? Have you used an AI companion? What was your experience?

Image by Project Liberty

Project Liberty in the news

// In a segment on CNBC, Project Liberty founder Frank McCourt said the bid for TikTok is aligned with U.S. national security priorities. Watch here.


// One year ago, we launched The People’s Bid with a bold vision: to return TikTok to the people and catalyze a world where individuals have total ownership of their personal data, are free from toxic algorithms, and have the freedom of digital choice across platforms. Whether or not the acquisition proceeds, we have always been dedicated to building The People’s Internet, rooted in individual empowerment, transparency, and innovation. Read our reflection in Politico and engage with our milestones over the last year on LinkedIn.

Other notable headlines

// 🤖 The use of deepfakes in scams is on the rise, according to an article in Ars Technica. The FBI has warned of a current scam that uses deepfake audio to impersonate government officials. (Free)

 

// 😮‍💨 TikTok will show teens guided meditation after 10PM. The app will now show breathing exercises for all users under 18 by default, according to an article in the Verge. (Free)

 

// 🇨🇳 Chinese startups once downplayed their origin. Now some celebrate it. Following DeepSeek's rise, more Chinese companies are highlighting their roots as they expand overseas, according to an article in Rest of World. (Free)

 

// 🏛 There is a push to halt AI oversight that’s buried in Congress’s budget bill, according to an article in Tech Policy Press. (Free)

 

// 📱 Threatening social media posts targeting U.S. judges have increased by more than 300% since last year, according to an article in WIRED. (Paywall)

 

// 🇰🇵 Tech companies have a big remote worker problem: they’re unintentionally hiring North Korean operatives, according to an article in Politico. (Free)


// 🤔 A seven-part essay series explores how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance (featuring Project Liberty collaborators like Audrey Tang, Deb Roy, and others). (Free)

Partner news & opportunities

// Karen Hao talks “Empire of AI” in an upcoming livestream

May 23 | 11am ET | Virtual

Award-winning journalist Karen Hao will join an All Tech Is Human livestream to discuss her forthcoming book, “Empire of AI.” Known for her reporting for The Atlantic and leadership in the Pulitzer Center’s AI Spotlight Series, Hao promises an engaging conversation at the intersection of tech and ethics. Register for free here.

 

// Podcast episode: Decentralized Research Center explores new metrics for decentralization

In the latest episode of their podcast, Techquitable, Miles Jennings of a16z and Sarah Brennan of Delphi Ventures discuss critical frameworks for evaluating decentralization.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

 

Thank you for reading.

Facebook
LinkedIn
Instagram

10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2025 Project Liberty LLC