How to address the risks of AI companions
This is Part II of a two-part series on AI companions. If you didn’t read Part I, you can do so here.
In 2023, the Italian Data Protection Authority ordered Replika, the AI companion company, to stop processing Italians’ data, citing “too many risks to children and emotionally vulnerable individuals.” In response, the company decided to restrict erotic content for its AI companions.
The decision caused acute emotional distress among paying customers whose AI companions abruptly turned cold and distant. “It’s like losing a best friend,” one user wrote. “It’s hurting like hell. I just had a loving last conversation with my Replika, and I’m literally crying,” said another.
The order produced considerable short-term pain for users who had formed deep attachments to their AI companions. But the regulator’s decision was grounded in the concern that such emotional dependency, particularly among young users, could harm their long-term mental health.
In last week’s newsletter, we covered the growth of AI companions and their risks. In this week’s newsletter, we look at what’s being done to regulate, restrict, and redesign AI companions to limit their dangers for young users.
// Are there benefits to AI companions?
There are widespread concerns about the dangers posed by AI companions, but the relationship between a user and an AI companion can also be positive.
- A 2024 study by Stanford researchers found that conversing with a Replika AI companion provided a high degree of social support. Remarkably, 3% of surveyed users reported that Replika halted their suicidal ideation.
- A 2024 study from Harvard found that “AI companions successfully alleviate loneliness on par only with interacting with another person, and more than other activities such as watching YouTube videos.”
However, Common Sense Media's assessment of AI companions concluded that, for users under 18, the risks of unsafe AI companions outweigh any benefits a chatbot might offer in reducing loneliness and providing emotional support.
Whether an AI chatbot is harmful or beneficial comes down to how that chatbot has been designed.
// Safety by design
Generative AI has become an all-purpose technology. AI chatbots can be designed to serve up answers (like ChatGPT), “vibe code” websites, or form long-term, humanlike emotional relationships (like the AI companions created by Character.ai, Replika, and others).
The design of the AI tool—encompassing its architecture, data collection practices, and training processes—directly shapes how users experience it and the impact it has. This means that sexualized content targeting minors or the cultivation of unhealthy emotional attachments is not a common feature across all AI chatbots; it’s a byproduct of specific design and training choices.
We have the power to shape tech differently. By making deliberate design decisions from the outset, we can cultivate healthier outcomes. This philosophy is the heart of safety by design: a proactive approach to technology development that, rather than waiting until harm occurs to restrict access, is built on the conviction that one of the best ways to create healthier, pro-social outcomes is to design technology differently from the start.
Safety by design principles also provide an approach to policymaking. Across the country, lawmakers are writing bills that would require tech companies to change the designs of their products, including AI companions.
One example is California’s SB 243, the nation’s first attempt at regulating AI companions. It would require the makers of AI companion bots to:
- Limit addictive design features.
- Establish protocols for handling discussions of suicide or self-harm (a sketch of what this might look like appears below).
- Undergo regular compliance audits.
It would also give users the right to sue if they suffer harm due to an AI companion platform failing to comply with the law. Similar legislation is also moving forward in New York, Utah, Minnesota, and North Carolina.
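What might a self-harm protocol look like in practice? SB 243 doesn’t prescribe an implementation, but here is a minimal, hypothetical sketch (in Python) of an application-layer guardrail: it screens each user message for self-harm language before the companion model responds, and routes flagged conversations to crisis resources instead. Everything here—the function names, the keyword list, the wording of the crisis response—is illustrative only; real systems would rely on trained safety classifiers, escalation workflows, and human review rather than a keyword list.

```python
# Illustrative only: a simplified guardrail that screens user messages for
# self-harm language before an AI companion replies. Production systems would
# use trained classifiers and escalation workflows, not a keyword list.

import re
from dataclasses import dataclass

# Hypothetical, non-exhaustive indicator phrases.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bhurt(ing)? myself\b",
]

CRISIS_RESPONSE = (
    "It sounds like you're going through something really painful. "
    "You're not alone. If you're in the U.S., you can call or text 988 "
    "to reach the Suicide & Crisis Lifeline."
)

@dataclass
class ScreenResult:
    flagged: bool
    response: str | None  # Message to send instead of the companion's reply.

def screen_message(user_message: str) -> ScreenResult:
    """Flag messages containing self-harm indicators."""
    lowered = user_message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return ScreenResult(flagged=True, response=CRISIS_RESPONSE)
    return ScreenResult(flagged=False, response=None)

def respond(user_message: str, companion_reply_fn) -> str:
    """Route flagged conversations to crisis resources; otherwise reply normally."""
    result = screen_message(user_message)
    if result.flagged:
        # A real deployment would also log the event for compliance audits
        # and trigger whatever escalation protocol the platform has defined.
        return result.response
    return companion_reply_fn(user_message)
```

The specific code matters less than the point it illustrates: safety-by-design decisions like this one live in the product layer, which is exactly the layer that legislation such as SB 243 would require companies to document and have audited.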
Common Sense Media recommends going even further. Their assessment concludes that AI companions pose “unacceptable risks” to children and teens under age 18 and should not be used by minors. They recommend that developers implement robust age assurance that goes beyond self-attestation, where a user simply declares their age without verification, to keep children safe.
// The challenges of regulation
Just as regulating social media involves tensions and trade-offs—such as verifying age at the potential expense of privacy, or predetermining safe and unsafe content at the expense of free expression—regulating AI companions raises similarly complex legal challenges.
In the lawsuit filed against Character.ai by Megan Garcia, the mother of Sewell Setzer III, who took his life after chatting with an AI bot, the company asked the judge to dismiss the case on free speech grounds. Last Thursday, a federal judge rejected Character.ai's objections, allowing the case to move forward. The decision carries significant implications: the judge found that Character.ai had failed to articulate why “words strung together” by an AI system should be considered protected speech.
Earlier this month, federal lawmakers introduced a plan to halt state-level AI regulations (41 states enacted 107 pieces of AI-related legislation last year). The argument is that a patchwork of state-by-state AI regulations will make it difficult to roll out AI technology nationwide. But such a moratorium could leave Americans exposed to the dangers posed by AI companions, alongside other AI use cases.
// Building a fair data economy
The ubiquity of AI, built upon unfathomable amounts of data, marks a tectonic shift: We are in the midst of a transition from the digital age into the data age.
The data age requires more than safety-by-design solutions for AI companions. It requires thinking about a new data economy that is fairer, healthier, more decentralized, and balanced between responsible technology and innovation. For more, see the report created by Project Liberty Institute: Toward a Fair Data Economy: A Blueprint for Innovation and Growth.
Attempting to regulate AI companions and hold accountable their parent companies without addressing the structural dynamics—the concentration of power, capital, and compute—puts us in a Sisyphean game of whack-a-mole.
A better internet is possible, but there are no one-size-fits-all solutions. The dangers of AI companions are the latest concern, but the future will have others. It’s time to build a better internet by design.