Five AI paradoxes
Something strange is going on.
- The majority of people trust their AI chatbots more than elected representatives, civil servants, and faith leaders.
- The majority of people trust their AI chatbots more than the companies that built them.
- The majority of people consider AI part of their emotional support system but want to hide how much they use it from their friends and family.
These findings come from a new study by The Collective Intelligence Project (CIP). Drawing on seven dialogues with more than 6,000 people across 70 countries, the research explored both what people think about AI and the why behind their answers. (You can learn more about their methodology here. Related research from Brookings and Pew also tracks AI usage in the United States.)
Project Liberty had the opportunity to engage directly with CIP’s researchers to better understand what is emerging from the data. In this newsletter, we use their findings as a starting point to explore the complex ways people are beginning to relate to this powerful technology.
// Five paradoxes
Five paradoxes emerged around the relationship between AI, trust, and emotion.
// Paradox #1: People are embracing AI, while simultaneously resisting it.
According to the researchers, people are “enabling themselves through AI (adopting it voluntarily and enthusiastically)” while “simultaneously resisting the conditions under which AI could stifle agency.”
Three-quarters of respondents expect to use AI weekly at work, and 44% expect to use it daily. Yet willingness to let AI act autonomously is declining, dropping from roughly 30% to just over 22% over the past year.
People want AI to be a tool they direct, not a system that directs them. But as AI agents begin taking action rather than simply offering answers, norms could shift.
What to watch: Will AI agents push the boundaries of where people are comfortable having AI do work on their behalf?
// Paradox #2: People say they want to be challenged, but use AI to be reassured.
Research has shown that, in certain instances, social media can create echo chambers of belief. But the Collective Intelligence Project’s report found that AI might have an even stronger effect.
According to CIP's analysis, AI interactions are only one-third as likely as social media interactions to introduce doubt into someone’s beliefs. At the same time, a large majority of users (77%) say AI should challenge inaccurate statements. Many report feeling more certain of their views after engaging with AI.
The researchers interpret this as a “simultaneous demand for both comfort and correction”: push back on my facts, but validate my feelings.
What to watch: Will AI become a bridge across disagreement or a system that reinforces what people already think?
// Paradox #3: The public trusts AI chatbots more than elected officials.
In CIP's trust rankings, AI systems fall just below family doctors and research institutions, yet above elected officials, civil servants, and faith leaders.
Participants often explained this in disarmingly simple terms: the chatbot has no agenda, no ambition, and no obvious reason to mislead. Others described something more relational: AI feels present, responsive, and attentive in ways institutions often do not.
Low institutional trust did not begin with AI, but AI is arriving into a world already hungry for reliability and accountability. When we asked the researchers about this, they said, “[If] people's most trusted advisor is the one with the least reason to push back, the ecosystem of challenge and friction that sharpens judgment quietly thins out. This suggests on some level people know that their trust machinery is being hijacked.”
What to watch: While AI delivers instant reassurance and personalized answers, institutions risk appearing unresponsive and inattentive. How will the AI we consider trustworthy shape our trust in less personalized institutions, or in leaders with whom we don't fully agree?