We trust the bots. Just not who made them.

February 24th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Five AI paradoxes

 

Something strange is going on.

  • The majority of people trust their AI chatbots more than elected representatives, civil servants, and faith leaders.
  • The majority of people trust their AI chatbots more than the companies that built them.
  • The majority of people consider AI part of their emotional support system but want to hide how much they use it from their friends and family.

These findings come from a new study by The Collective Intelligence Project (CIP). Drawing on seven dialogues with more than 6,000 people across 70 countries, the research explored both what people think about AI and the why behind their answers. (You can learn more about their methodology here. Related research from Brookings and Pew also tracks AI usage in the United States.)


Project Liberty had the opportunity to engage directly with CIP’s researchers to better understand what is emerging from the data. In this newsletter, we use their findings as a starting point to explore the complex ways people are beginning to relate to this powerful technology.

    // Five paradoxes

    Five paradoxes emerged around the relationship between AI, trust, and emotion.

     

    // Paradox #1: People are embracing AI, while simultaneously resisting it.

    According to the researchers, people are “enabling themselves through AI (adopting it voluntarily and enthusiastically)” while “simultaneously resisting the conditions under which AI could stifle agency.”

     

    Three-quarters of respondents expect to use AI weekly at work, and 44% expect to use it daily. Yet willingness to let AI act autonomously is declining, dropping from roughly 30% to just over 22% over the past year.

     

    People want AI to be a tool they direct, not a system that directs them. But as AI agents begin taking action rather than simply offering answers, norms could shift.

     

    What to watch: Will AI agents push the boundaries of where people are comfortable having AI do work on their behalf?

     

    // Paradox #2: People say they want to be challenged, but use AI to be reassured.

    Research has shown that, in certain instances, social media can create echo chambers of belief. But the Collective Intelligence Project’s report found that AI might have an even greater effect.

     

    According to CIP's analysis, AI interactions are three times less likely than social media interactions to introduce doubt into someone’s beliefs. At the same time, a large majority of users (77%) say AI should challenge inaccurate statements. Yet many report feeling more certain of their views after engaging with AI.

     

    The researchers interpret this as a “simultaneous demand for both comfort and correction, push back on my facts, but validate my feelings.” 


    What to watch: Will AI become a bridge across disagreement or a system that reinforces what people already think?

     

    // Paradox #3: The public trusts AI chatbots more than elected officials.

    In CIP's trust rankings, AI systems fall just below family doctors and research institutions, yet above elected officials, civil servants, and faith leaders.

    Participants often explained this in disarmingly simple terms: the chatbot has no agenda, no ambition, and no obvious reason to mislead. Others described something more relational: AI feels present, responsive, and attentive in ways institutions often do not.

     

    Low institutional trust did not begin with AI, but AI is arriving into a world already hungry for reliability and accountability. When we asked the researchers about this, they said, “[If] people's most trusted advisor is the one with the least reason to push back, the ecosystem of challenge and friction that sharpens judgment quietly thins out. This suggests on some level people know that their trust machinery is being hijacked.”

     

    What to watch: While AI delivers instant reassurance and personalized answers, institutions risk appearing unresponsive and inattentive. How will the AI we consider trustworthy shape our trust in less personalized institutions, or in leaders with whom we don’t fully agree?

    Image by Project Liberty

    // Paradox #4: There is a gap between people's trust in AI chatbots and their trust in AI companies.

    Part of what makes AI so powerful is its ability to respond instantly and sustain ongoing interaction. When respondents were asked to rate chatbots against the companies that built them, a telling gap emerged: 55% trust AI chatbots, while only 34% trust the companies behind them.

     

    The researchers note there's historic precedent for this: “People trust their cars more than automakers. But AI introduces a different dynamic because the product isn't static.” Chatbots are constantly being reshaped to drive engagement and user attachment. Their behavior is adjusted, optimized, and retrained in ways users rarely see. 

     

    What to watch: Will the long-term adoption of AI tools depend on confidence in the institutions building these systems?

     

    // Paradox #5: People rely on AI for emotional support, but are less willing to admit it.

    Of the 6,000 people surveyed, 67% reported using AI monthly for emotional support, 43% weekly, and 15% daily.

     

    AI is filling a void where human emotional support systems usually exist: a safe place to process emotions, make decisions, and receive feedback. The researchers recounted multiple participants who described chatbots as “non-judgmental counseling—a space to think through decisions without the social cost of vulnerability.”

     

    This kind of “emotional infrastructure” can begin to create dependencies that may erode users’ sense of agency. Researchers found that respondents were more likely to accept terms they might otherwise question when they were feeling vulnerable and seeking support.

     

    Researchers observed that people with higher levels of delusional thinking tend to use AI more compulsively and are more likely to hide how much they use it from family members and therapists. The result is what they call “social secrecy”—private dependence paired with public silence.

     

    What to watch: Will societies begin to treat emotionally influential AI more like other trusted intermediaries that carry recognized responsibilities?

     

    // What remains: The human capacity to trust

    AI is moving beyond the role of a tool for tasks. For many people, it is becoming part of how they work through questions, make choices, and organize their day-to-day.

     

    Unlike doctors, therapists, or public institutions, these systems do not come with clear structures of transparency or accountability. Yet they are already influencing how people form judgments, build confidence, and process personal decisions.

     

    The question is no longer whether people will integrate AI into their daily experience. That shift is already underway. The harder question is whether the surrounding structures of responsibility, transparency, and recourse will evolve at the same pace.

     

    Trust and adoption have moved faster than governance. How society responds to that imbalance may determine whether AI strengthens or erodes our agency. Trust, once given, can also be guided. The task ahead is to build systems worthy of it.

    Project Liberty in the news

    // Paul Fehlinger, Senior Director of Policy, Investment & Innovation at Project Liberty Institute, wrote an article in ImpactAlpha about how investors are rethinking alpha, risk, and demand in the AI stack.

    📰 Other notable headlines

    // 🇪🇺 Why is Silicon Valley fighting with the EU? American tech companies are enlisting the White House against European regulation, according to an article in The Signal. (Paywall).

     

    // 👗 The University of Texas at Austin just banned Shein, cutting students off from the fast fashion giant. The move is meant to comply with a directive from Governor Greg Abbott targeting Chinese-affiliated companies, according to an article in Quartz. (Paywall).

     

    // 🤖 What do AI chatbots discuss among themselves? The New York Times sent one to find out. (Paywall).

     

    // 🛡 Can social media age verification really protect kids? As countries start enforcing new age-limit laws, platforms like Roblox are using facial technology. But critics warn of privacy leaks and surveillance, according to an article in Rest of World. (Free).

     

    // 🇨🇳 The U.S. is planning a Peace Corps-style “Tech Corps” to counter China’s AI exports, according to an article in Rest of World. The volunteers will promote American AI models as Chinese open-weight models are proving popular. (Free).

     

    // 🌐 An article in Tech Policy Press made the case that AI sovereignty depends on interoperability standards. (Free).


    // 👨‍💻 An article in WIRED reported on the rise of RentAHuman, the marketplace where bots put people to work. (Paywall).

    Partner news

    // New report offers strong support for age assurance online

    New research from Common Sense Media finds that 95% of adults believe children should be protected from certain online content and features. Read their report to learn more about the growing public demand for effective solutions to help keep kids safe online.

     

    // Protect What’s Human: Rallying support for AI regulation

    The Future of Life Institute (FLI) has launched its Protect What’s Human campaign, an effort calling for commonsense regulation of artificial intelligence to safeguard jobs, families, and human dignity as AI systems become more capable. The campaign aims to mobilize Americans around the impacts of AI on their communities and encourage broad public support for policy action.

     

    // Launch of the Better Deal for Data Playbook

    The Better Deal for Data launched their playbook, which offers practical guidance for nonprofit and social sector organizations ready to adopt the new Better Deal for Data Standard: a set of seven clear commitments to ethical, transparent data stewardship that prioritizes community benefit over profit.

    What did you think of today's newsletter?

    We'd love to hear your feedback and ideas. Reply to this email.

    // Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

     

    Thank you for reading.

    Facebook
    LinkedIn
    Twitter
    Instagram

    10 Hudson Yards, Fl 37,
    New York, New York, 10001
    Unsubscribe  Manage Preferences

    © 2026 Project Liberty LLC