We explore the shift from the attention economy to the intimacy economy.

October 28th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

AI’s shift to the intimacy economy

 

Few organizations have anticipated the social consequences of technology as clearly as the Center for Humane Technology (CHT). First with social media, now with AI.

 

CHT co-founder Tristan Harris, then a Design Ethicist at Google, was among the first to identify how incentives to capture and hold our attention were shaping digital design.

 

These insights became the foundation for the CHT, which helped spark a global reckoning over how manipulative design choices on social media can lead to harmful outcomes.

 

Founded in 2018, the CHT (which is a Project Liberty Alliance member) is on a mission to ensure that the most consequential technologies can serve humanity. It focuses on illuminating how the tech ecosystem works—in order to shift the incentives that drive it. A notable example of its work is its collaboration with documentary filmmakers to release The Social Dilemma in 2020, which reached 100 million people across 189 countries.

 

But today, the stakes are even higher. What Harris saw in the early 2010s—the emergence of a new economy optimizing for attention—is repeating itself with AI.

 

But there’s a crucial difference: where social media optimizes for attention, AI optimizes for attachment.

Project Liberty sat down with CHT Executive Director Daniel Barcay to explore the incentives—and the future—of AI. Barcay’s career as a technologist spans from a stint at Google, where he helped build Google Earth, to VP of Product and CTO roles at various high-growth tech companies. He has led the CHT since early 2024.

 

As technology moves from capturing our focus to shaping our emotions, Project Liberty sees a deeper shift underway: from the attention economy to the "intimacy economy."

Daniel Barcay


This moment calls for new systems that protect human agency and put technology back in service of humanity.

 

// “It was happening again.”

In the early days of social media, optimism was high that the technology would give everyone a voice. Many pointed to the role of social media in the Arab Spring as an example of how it could democratize and disseminate power. In 2011, Barack Obama held a Twitter town hall, becoming the first sitting president to send a “live tweet” from the White House.

 

The dominant narrative around AI casts this moment as the next great leap in technology’s promise to improve our lives and change the world.

 

“With AI, we had this feeling of it was happening again,” Barcay said. The starry-eyed optimism around social media over a decade ago obscured what was really going on: optimizing for engagement, profiting from outrage, polarizing people into echo chambers, and driving addiction.

 

We risk making a similar mistake, Barcay believes, unless we get ahead of the AI wave now hitting society. Project Liberty’s work begins from the same premise: drive awareness, shape incentives, and create the conditions where progress serves people, not platforms.

 

// From the attention economy to the intimacy economy

If social media is designed for attention, AI chatbots, agents, and companions are designed for attachment and intimacy. An entirely new economy is emerging around the relationship between a human and their AI companion or agent.

 

AI chatbots are not just incentivized to capture your attention; they’re designed to be an “AI product you interact with that feels like the best confidant and deepest friend,” Barcay said. “The AI product that just gets you is going to be the AI product that you use.”

 

Companies are vying to capture the full context of our lives—to know us completely—in order to promise new levels of convenience and productivity. In theory, this means serving us with more personalized information. In practice, it demands unprecedented access to our inner worlds—our thoughts, data, emails, and financial lives. AI chatbots aspire to be our most trusted companions. AI agents aim to become the very interface through which we experience the web. Together, they are competing to mediate and ultimately define our most human connections.

 

As the intimacy economy emerges through personalized AI agents, the attention economy won’t vanish—it will evolve. Its incentive structures remain deeply embedded, as OpenAI’s launch of Sora (and its new browser) makes clear. What’s unfolding may not be a clean handoff but a fusion: attention and intimacy intertwining in ways we don’t yet fully understand. The same forces that once monetized our focus are now learning to monetize our trust, reshaping business models around ever-deeper forms of entanglement between humans and machines.

 

Project Liberty, for example, advocates for digital self-determination—the principle that people, not platforms, should control their data and digital experience. When AI systems demand unprecedented access to our inner worlds, the question of who owns and governs that data becomes central to human flourishing in the age of AI.

 

// Solutions for a narrow window of time

Barcay believes we’re in a narrow window—the next 12–18 months—when the incentive structures and business models surrounding AI chatbots and agents are still “malleable.”

 

One of the lessons the CHT took from the social media era is that it’s much harder to shape a technology once it has become entrenched and ubiquitous. But with AI chatbots and agents, it’s still early days. Policies can shape incentives and business models. Public awareness can shape products.

 

This is why Barcay is optimistic. We are in a window of possibility. The potential to influence a malleable technology has never been greater.

 

And yet, there are no silver bullets, Barcay was quick to acknowledge. He highlighted two areas where the CHT is active:

  1. Understanding: From the beginning, building awareness has been core to the CHT’s model. By becoming more aware of what is happening to each of us, individually and collectively, whether on social media or in the subtle manipulations of AI companions, we gain the choice and agency to act differently. The CHT is exploring various methods to drive public awareness about AI chatbots and companions, including collaborations with documentary filmmakers.

  2. Liability: Regulators have struggled to decide how to classify AI systems. Are they a product (like an operating system such as Microsoft Windows) or a service (like other software-as-a-service tools)? The product designation can trigger stricter liability laws than the service designation. As products, AI tools would expose their developers to liability for defective design and product defects, failure to warn of harms, breach of express warranty, and other forms of negligence tied to dangerous or defective products.

    The CHT applauds a new federal bill introduced last month, the AI LEAD Act, which proposes product liability standards for AI systems. This approach of classifying AI tools as products is compelling because it is simple and bipartisan, and it avoids the challenges of a patchwork of new AI-specific laws. “The market manages risk when harms appear on balance sheets,” Barcay said. “Unfortunately, with a lot of these AI products, we're nowhere close to having those harms appear on the balance sheets.” The CHT has also supported three high-profile litigation cases, including the Sewell Setzer case and the Adam Raine case.

// Harnessing agency in the AI era

The intimacy economy is being rapidly built around us, but we're not entirely powerless in shaping our relationship with it.

 

The window for structural intervention may be narrow, but to Barcay, progress begins with each of us asking expansive questions around our relationship with AI tools: “How can I be using these tools less reflexively? How can I put them in a place where they're giving me more agency, more choice, and more reflection?”

 

Just like with social media, it starts there. “Awareness creates choice,” Barcay said. “Clarity creates agency.”

Project Liberty in the news

// On the Frequency Network Foundation podcast "Making Waves", Sheila Warren, CEO of the Project Liberty Institute, discussed AI, blockchain, and digital agency in the new age of communication. Listen here.

Other notable headlines

// 💼 An article in The Verge warned that teens have been pushed out of the workforce by tech, and now we’re training robots to do the few jobs teens have left. (Paywall).

 

// 🤖 AI models may be developing their own ‘survival drive’, researchers say. Some AIs seem to resist being turned off and will even sabotage shutdown, according to an article in The Guardian. (Free).

 

// 🔬 OpenAI wants to cure cancer. So why did it make a web browser? An article in The Atlantic argued that the AI giant has lost its imagination. (Paywall).

 

// 🚨 People who say they’re experiencing AI psychosis have been begging the FTC for help. Several attributed delusions, paranoia, and spiritual crises to the chatbot, according to an article in WIRED. (Paywall).

 

// 🤔 A teen in love with a chatbot killed himself. An article in The New York Times asks, can the chatbot be held responsible? (Paywall).

 

// 🇻🇦 An article in Project Syndicate considered the Vatican’s voice of reason on AI. It argued that the Catholic Church is uniquely positioned to influence public debate and policymaking. (Paywall).

 

// 💬 An article in Semafor explored how Reddit’s data becomes a battleground in the AI gold rush. (Free).

Partner news

// All Tech Is Human launches Responsible AI Course series

All Tech Is Human has released five new free courses on Responsible AI, designed to help professionals build the knowledge and skills needed to implement ethical and effective AI governance. Explore the free courses here.

 

// Designing governance for the attention economy

Nathan Schneider, director of the Media Economies Design Lab at the University of Colorado Boulder, joined Decentralization Research Center’s Techquitable podcast to discuss his co-authored paper, Online Governance Surfaces and Attention Economies. Listen here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

 

Thank you for reading.


10 Hudson Yards, Fl 37,
New York, New York, 10001

© 2025 Project Liberty LLC