New research reveals that trust is flowing away from institutions and toward digital companions.

September 30th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.


Who do you trust more: AI or politicians?

 

Do you trust your AI chatbot to act in your best interests? What about your elected officials?


Earlier this year, global research conducted by the Collective Intelligence Project (CIP) found that 58% of participants trust their chatbot to act in their best interests, compared to just 28% for elected officials. That gap is no longer just theoretical: In Albania, the government recently appointed the world’s first AI minister—not a human minister for AI, but an actual chatbot named Diella—underscoring how quickly these trust dynamics are moving from survey results into political reality.


Comparing chatbots to elected officials is a bit of an apples-to-oranges exercise, mainly because chatbots are becoming omnipresent digital companions, sycophantic cheerleaders, personalized therapists, and even romantic partners; the same can rarely be said of our elected officials.


Yet the contrast reveals something larger: trust is flowing away from institutions and toward digital companions. Chatbots feel present and responsive, while institutions often feel distant and unaccountable. This shift sets the stage for a broader crisis of trust that extends far beyond AI.


In this newsletter, we explore trust in the age of AI—where it’s placed, where it’s absent, and why it matters.

 

// What the research found

Earlier this year, the CIP conducted its fourth round of Global Dialogues research, which spanned 70 countries and 7 languages and included over 1,000 participants.

 

This research explored people's current relationships with AI systems, trust patterns, emotional dependencies, and expectations for appropriate boundaries in human-AI interactions. Here are a few of the key takeaways:

  • Romantic relationships: 54% find romantic AI companions acceptable specifically for lonely people, 17% consider AI romantic partners broadly acceptable, and 11% would personally consider a romantic relationship with an AI.
  • Daily emotional support: Nearly 15% of respondents use AI for emotional support daily, with an additional 28% weekly.
  • Reciprocity: Most people don't believe that AI, as a technology, genuinely cares about them, and yet 70.5% still use AI for emotional support.
  • Loneliness: Respondents who reported higher baseline loneliness are more likely to be open to AI companionship and more intimate AI relationships.

This paradox—trusting tools we know can’t care—shows how fragile institutional trust has become. If people feel more understood by algorithms than by elected officials, the issue isn’t only with AI, but with the weakening of institutions themselves. That’s the heart of today’s broader crisis of trust.

 

// A crisis of trust

We live in low-trust times. A 2025 analysis by the Pew Research Center found that Americans trust each other less than they did a few decades ago. Individuals with lower levels of social trust also have lower levels of trust in institutions like news outlets, schools, law enforcement, and the federal government.


Institutional trust is also in decline. The 2025 Edelman Trust Barometer highlighted how low institutional trust globally has “erupted in grievance.” Around the world, the Trust Barometer found that 61% of people have a moderate or high sense of grievance, defined as the belief that government and business not only make their lives harder but also serve only a narrow sliver of interests.

 

Meanwhile, the democratic elections of 2024 failed to improve trust, and the Trust Barometer found an “unprecedented global decline” in employees trusting their employers to do what is right.


The decline in trust has no single cause. Research by Pew points to a mix of factors, including economic insecurity, political polarization, internet usage, and demographic change.


If distrust has many causes, trust often comes from personal experience. That may explain why the CIP research found that individuals place trust in AI chatbots, which are tailored to deliver highly personalized, always-available experiences.

 

// Digital companions designed to earn our trust

The CIP research underscores just how far this shift has gone. Respondents reported more trust in AI chatbots than in the AI companies that created them (58% vs. 35%).


As we’ve explored before in this newsletter (see editions on AI companions and chatbot-fueled delusions), the trust people place in chatbots and the intimacy they share with them can have harmful consequences (and we don’t yet fully understand the impact their use will have on interpersonal and institutional trust).


But there's good news. Many of the same features that make chatbots trustworthy companions (dialogue, responsiveness, accessibility) can be designed into democratic systems. That’s the promise of deliberative technologies now being tested around the world.

 

// Building bridges between people and institutions

Technologists and policymakers are beginning to leverage new tools in ways that foster two-way conversations between citizens and institutions. These tools work by offering something AI chatbots cannot: the ability to feel heard by real people and institutions capable of taking collective action.


In her 2025 report, the Blueprint on Prosocial Tech Design Governance, Dr. Lisa Schirch, a professor at the University of Notre Dame who specializes in peacebuilding and technology, describes these tools as deliberative technologies, defined as “a class of civic tech that enables a large-scale exchange of views between the public in an iterative discussion, allowing participants to evolve in their understanding.”


The Tech and Social Cohesion Substack recently highlighted some examples:

  • In Taiwan, Audrey Tang, the country’s first Digital Minister and now a digital ambassador, used an online deliberation technology called Pol.is to engage the public on digital issues. Unlike traditional polling, which sends questions in one direction, from researchers to the public, Tang’s team used Pol.is to create a participatory agenda-setting process in which members of the public share sentiments, surface issues, and propose policies (a sketch of this style of opinion clustering appears after this list).
  • In Iceland, Better Reykjavik is an online platform, built on Citizens.is, for crowdsourcing solutions to urban challenges. It uses AI to support agenda setting, participatory budgeting, and policymaking.
  • In the EU, Make.org has piloted Panoramic AI, a tool aimed at bridging the gap between citizens and institutions. It helps citizens break complex subjects down into clear, accessible information, drawing on speeches, legal documents, and policy materials.
  • In Pakistan, Numainda, an AI-powered legislative bot, makes the country’s constitution accessible in both Urdu and English.
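
To make the Pol.is example concrete, here is a minimal, hypothetical sketch of the kind of opinion clustering such tools are built on: participants vote agree/disagree/pass on short statements, the vote matrix is reduced to a low-dimensional “opinion space” and clustered into groups, and statements that every group leans toward agreeing with surface as rough consensus candidates. The synthetic data, the two-cluster setup, and the 0.3 threshold are illustrative assumptions, not Pol.is’s actual pipeline.

```python
# Hypothetical sketch of Pol.is-style opinion clustering, not Pol.is's code.
# Votes: 1 = agree, -1 = disagree, 0 = pass/unseen.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic data: 60 participants, 12 statements, two loose opinion camps.
votes = rng.choice([-1, 0, 1], size=(60, 12), p=[0.3, 0.2, 0.5])
votes[:30, :4] = 1    # camp A agrees with statements 0-3
votes[30:, :4] = -1   # camp B disagrees with them
votes[:, 10:] = 1     # statements 10-11 draw near-universal agreement

# Project participants into a 2D opinion space, then cluster into groups.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement is a rough consensus candidate if every opinion group's
# mean vote on it is clearly positive (threshold 0.3 is an assumption).
for stmt in range(votes.shape[1]):
    group_means = [votes[groups == g, stmt].mean() for g in np.unique(groups)]
    if all(m > 0.3 for m in group_means):
        print(f"statement {stmt}: consensus candidate (group means {group_means})")
```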

Deliberative technologies demonstrate how design can strengthen civic trust. Yet that trust only lasts if individuals feel secure in how their information is handled. Emerging infrastructure, such as Frequency, points in this direction—ensuring that people, not platforms, carry their identities and context with them. When individuals feel their agency is respected, they’re more willing to participate, which is a key ingredient in nurturing democratic trust.

 

// A democracy tech stack

To restore trust in institutions, governance, and policymaking, Dr. Schirch, Tang, and other co-authors called for “a democracy tech stack” in an article in Tech Policy Press earlier this month. This tech stack is composed of tools that can help “people dialogue, deliberate, and make decisions together.”


“We believe a democracy tech stack could supercharge public participation, harnessing polarized views and experiences into unprecedented levels of collective intelligence,” they wrote.


Deliberative technologies like these are not a panacea, but they represent a way for people to recognize that fellow citizens and institutions, not just AI chatbots, are listening, responsive, and trustworthy. Trust is not just a feeling; it's a form of civic power that can be harnessed for collective action.

// Project Liberty in the news

// Project Liberty Institute’s Sarah Nicole co-authored an article in Tech Policy Press about how the United Nations General Assembly could shift power in the data economy. (Free).

📰 Other notable headlines

// 🖥 Tim Berners-Lee, the computer scientist who invented the World Wide Web, wrote an op-ed in the Guardian about why he gave it away for free. (Free).

 

// 🛡 An article in Tech Policy Press considered why simple bot transparency won’t protect users from AI companions. (Free).

 

// 🤖 Walmart’s CEO issued a wake-up call: “AI is going to change literally every job,” according to an article in the Wall Street Journal. (Paywall).

 

// 📄 An article in Business Insider reported that leaked Meta guidelines show how it trains AI chatbots to respond to child sexual exploitation prompts. (Free).

 

// 🏛 An article in Science provided a roadmap for how to apply the ELSI (Ethical, Legal, and Social Implications) model to AI governance. (Free).

 

// 🛡 An article in The Economist argued that AI systems may never be secure, pointing to a “lethal trifecta” of conditions that opens them to abuse, and considered what to do about it. (Paywall).

 

// 🏫 AI systems could move American classrooms beyond rigid curricula toward adaptive programs that respond to individual learners, according to an essay in Noema Magazine. (Free).

 

// 🇦🇺 In Australia, a man was fined $340,000 for creating deepfake pornography of prominent Australian women, a first-of-its-kind case, according to an article in the Guardian. (Free).


// 📹 In a New York Times short video, actor Joseph Gordon-Levitt makes the case that Meta’s AI chatbot is dangerous for kids. (Paywall).

Partner news

// How to make AI safe for democracy?

October 14 | 2:00 - 3:30 PM CET | Virtual

Make.org and the European Center for Not-for-Profit Law (ECNL) are hosting an interactive webinar on AI’s impact on democracy. Experts from research, policy, and civil society will discuss both the risks and opportunities AI brings for democratic systems and public participation. Register here.

 

// Podcast on the future of “broad listening”

An episode of the Nodestar podcast explores the idea of “broad listening” (in contrast to broadcasting). Tech-enabled broad listening is a way to learn from many voices without sacrificing privacy or control. Listen here.

 

// Metagov x Future of Science seminar

October 22 | 12:00 - 1:00 PM ET | Virtual

Metagov is hosting a virtual seminar exploring the future of science and the ethical frontiers of healthcare and artificial intelligence. Register here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

 

Thank you for reading.

Facebook
LinkedIn
Instagram

10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2025 Project Liberty LLC