Who do you trust more: AI or politicians?
Do you trust your AI chatbot to act in your best interests? What about your elected officials?
Earlier this year, global research conducted by the Collective Intelligence Project (CIP) found that 58% of participants trust their chatbot to act in their best interests, compared to just 28% for elected officials. That gap is no longer just theoretical: In Albania, the government recently appointed the world’s first AI minister—not a human minister for AI, but an actual chatbot named Diella—underscoring how quickly these trust dynamics are moving from survey results into political reality.
Comparing chatbots to elected officials is a bit apples-to-oranges, mainly because chatbots are becoming omnipresent digital companions, sycophantic cheerleaders, personalized therapists, and even romantic partners, roles our elected officials rarely fill.
Yet the contrast reveals something larger: trust is flowing away from institutions and toward digital companions. Chatbots feel present and responsive, while institutions often feel distant and unaccountable. This shift sets the stage for a broader crisis of trust that extends far beyond AI.
In this newsletter, we explore trust in the age of AI—where it’s placed, where it’s absent, and why it matters.
// What the research found
Earlier this year, the CIP conducted its fourth round of Global Dialogues research, which spanned 70 countries, 7 languages, and over 1,000 participants.
This research explored people's current relationships with AI systems, trust patterns, emotional dependencies, and expectations for appropriate boundaries in human-AI interactions. Here are a few key takeaways:
- Romantic relationships: 54% find romantic AI companions acceptable specifically for lonely people; 17% consider AI romantic partners broadly acceptable; and 11% would personally consider a romantic relationship with an AI.
- Daily emotional support: Nearly 15% of respondents use AI for emotional support daily, with an additional 28% weekly.
- Reciprocity: Most people don't believe that AI, as a technology, genuinely cares about them, and yet 70.5% still use AI for emotional support.
- Loneliness: Respondents who reported higher baseline loneliness are more likely to be open to AI companionship and more intimate AI relationships.
This paradox—trusting tools we know can’t care—shows how fragile institutional trust has become. If people feel more understood by algorithms than by elected officials, the issue isn’t only with AI, but with the weakening of institutions themselves. That’s the heart of today’s broader crisis of trust.
// A crisis of trust
We live in low-trust times. A 2025 analysis by Pew Research found that Americans trust each other less than they did a few decades ago. Individuals with lower levels of social trust also have lower levels of trust in institutions, like news outlets, schools, law enforcement, and the federal government.
Institutional trust is also in decline. The 2025 Edelman Trust Barometer highlighted how low institutional trust globally has “erupted in grievance.” Around the world, the Trust Barometer found that 61% of people have a moderate or high sense of grievance, defined as a belief that government and business not only make their lives harder but also serve a narrow sliver of interests.
Meanwhile, the democratic elections of 2024 failed to improve trust, and the Trust Barometer found an “unprecedented global decline” in employees trusting their employers to do what is right.
The decline in trust has no single cause. Research by Pew points to a mix of factors, including economic insecurity, political polarization, internet usage, and demographic change.
If distrust has many causes, trust often grows from personal experience. That may explain why the CIP research found that people place trust in AI chatbots, which are built to deliver highly personalized, always-available experiences.
// Digital companions designed to earn our trust
The CIP research underscores just how far this shift has gone. Respondents reported more trust in AI chatbots than in the AI companies that created them (58% vs. 35%).
As we’ve explored before in this newsletter (see editions on AI companions and chatbot-fueled delusions), the level of trust and depth of intimacy people share with chatbots can have harmful consequences (and we don’t yet fully understand the impact their use will have on interpersonal and institutional trust).
But there's good news. Many of the same features that make chatbots such trusted companions (dialogue, responsiveness, accessibility) can be designed into democratic systems. That’s the promise of deliberative technologies now being tested around the world.
// Building bridges between people and institutions
Technologists and policymakers are beginning to leverage new tools in ways that foster two-way conversations between citizens and institutions. These tools work by offering something AI chatbots cannot: the ability to feel heard by real people and institutions capable of taking collective action.
In her 2025 report, Blueprint on Prosocial Tech Design Governance, Dr. Lisa Schirch, a professor at the University of Notre Dame who specializes in peacebuilding and technology, describes these tools as deliberative technologies, defined as “a class of civic tech that enables a large-scale exchange of views between the public in an iterative discussion, allowing participants to evolve in their understanding.”
The Tech and Social Cohesion Substack recently highlighted some examples:
- In Taiwan, Audrey Tang, the country’s first Digital Minister and now a digital ambassador, used an online deliberation tool called Pol.is to engage the public on digital issues. Unlike traditional polling, where questions flow one way from researchers to the public, Tang’s team used Pol.is to run a participatory agenda-setting process in which members of the public share sentiments, surface issues, and propose policies.
- In Iceland, Better Reykjavik, an online platform built on Citizens.is, crowdsources solutions to urban challenges and uses AI to support agenda setting, participatory budgeting, and policymaking.
- In the EU, Make.org has piloted Panoramic AI, a tool aimed at bridging the gap between citizens and institutions. It helps citizens break down complex subjects into clear, accessible information by sourcing speeches, legal documents, and policy materials.
- In Pakistan, Numainda, an AI-powered legislative bot, makes the country’s constitution accessible in both Urdu and English.
Deliberative technologies demonstrate how design can strengthen civic trust. Yet that trust only lasts if individuals feel secure in how their information is handled. Emerging infrastructure, such as Frequency, points in this direction—ensuring that people, not platforms, carry their identities and context with them. When individuals feel their agency is respected, they’re more willing to participate, which is a key ingredient in nurturing democratic trust.
// A democracy tech stack
To restore trust in institutions, governance, and policymaking, Dr. Schirch, Tang, and other co-authors called for “a democracy tech stack” in an article in Tech Policy Press earlier this month. This tech stack is composed of tools that can help “people dialogue, deliberate, and make decisions together.”
“We believe a democracy tech stack could supercharge public participation, harnessing polarized views and experiences into unprecedented levels of collective intelligence,” they wrote.
Deliberative technologies like these are not a panacea, but they give people a way to see that fellow citizens and institutions, not just AI chatbots, can be listening, responsive, and trustworthy. Trust is not just a feeling; it's a form of civic power that can be harnessed for collective action.