What we give up with AI agents

March 3rd, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Is autonomous AI inevitable?

 

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
​
These are the words of science fiction author Frank Herbert, published in Dune in 1965. Today, sixty years later, they still serve as a warning, though the danger looks different from what Herbert imagined.

 

We don't need other humans using machines to enslave us. We might be beginning to do it ourselves. The cognitive outsourcing starts with the small stuff. Drafting emails. Planning travel. Scheduling meetings. Synthesizing research. Systems handle the small tasks, then the medium ones, then increasingly complex chains of work.

 

In this newsletter, we examine the growing shift from AI as a “tool” to AI as an agent, and what happens when generative AI, limited to whatever it is prompted to do next, graduates into agentic AI.

    // Tool AI vs. Agentic AI

    Let’s first address the definitions. 

    • AI, as a tool (there’s a term for this: “Tool AI”), is intended to remain under direct human control. It executes a set command, and once it has delivered that information or completed that task, it stops. While still generative (meaning it produces new content from patterns in its training data), it exists for a specific, bounded purpose, and it doesn’t chain together multiple tasks or commands without a human prompting it at each step.
    • AI, as an agent, goes beyond one discrete task. The system makes decisions and chains together multiple tasks until a larger project is complete—all without a human guiding it at each step.

    Here are a few examples:

    • A conventional car brakes only when the driver presses the pedal, making it more of a tool. But newer cars with automatic emergency braking can detect a heightened risk of collision and brake without the driver ever touching the pedal, making them more agentic. And then there are self-driving cars that operate entirely without humans.

    • Google Maps uses AI, but it can’t operate autonomously: People still need to search for the best dim sum nearby, plan a trip with multiple stops, and ultimately decide if they’ll follow the map IRL. By contrast, Booking Holdings recently announced an autonomous rebooking feature that can change travelers’ flights and shift accommodations without constant user interaction.
    • Nest thermostats illustrate how agentic features get seamlessly introduced into everyday tools. An older thermostat is purely a tool in that it does exactly what you set it to do (if it’s working, that is). Nest learns your patterns and adjusts the temperature without asking. It's a modest handoff of control, but it’s marketed as saving someone the hassle of maintaining a home’s temperature. The shift from tool to agent is repackaged as convenience, making it easier to accept and harder to question.

    While there are dozens more examples, there isn’t a clear line dividing Tool AI from agentic AI. What changes is not the capability of the system, but where it sits on the spectrum of decision-making authority we choose to hand over.

     

    // Do tools inevitably become more agentic?

    Technology does not automatically slide toward autonomy. Many systems are deliberately bounded. A calculator performs a single computation. A spreadsheet applies formulas. They expand human capability, but they do not initiate action on their own.

     

    Greater computational power and new capabilities can push a system further along the autonomy spectrum. But incentives matter just as much. 

     

    As Gwern Branwen, an AI researcher and writer, has argued, systems oriented toward action gain an economic edge over systems that require continuous human input. That advantage creates an economic incentive to move toward greater autonomy. This is the core value proposition behind all the AI hype: Autonomous AI can boost productivity, reduce labor costs, streamline workflows, and eliminate inefficiencies in many human-in-the-loop scenarios.

    • Claude CoWork, an AI agent, describes its level of autonomy like this: “Give Claude access to your local files, set a task, and step away. Come back to completed work.”
    • Google’s Gemini Agent “makes a plan, then combines advanced features like live web browsing, deep research capabilities, and seamless integration with some of your Google apps to execute that plan on your behalf.”

    // A different strategic choice

    As leading AI companies embrace autonomy, some voices in the AI safety community are urging a different path. The Future of Life Institute has argued that many of AI’s benefits can be realized through bounded, tool-like systems designed to solve specific problems under human direction. Their concern is governance. Systems that operate within defined constraints are easier to audit, regulate, and align. Once systems begin to initiate and chain actions independently, oversight becomes more challenging.

     

    There’s another unfortunate trade-off: The more people outsource to AI, the more they lose cognitive proximity to the work.

     

    Last year, we explored the idea of cognitive offloading. In a 2025 MIT Media Lab study, participants who used AI chatbots (not necessarily autonomous agents) to write essays showed reduced neural engagement compared to those who relied only on search or no tools at all. A large majority of AI users could not identify which phrases had originated from the model.

     

    Researchers described the phenomenon as “cognitive debt”: short-term efficiency accompanied by long-term costs such as diminished critical inquiry and reduced creative engagement.

     

    The study’s participants hadn’t delegated complex workstreams to an AI agent, but they had still surrendered a degree of their agency to a chatbot. Imagine the level of cognitive debt when AI agents take on entire workstreams.

    Image by Project Liberty

    // Cultivating human agency in the age of agentic AI

    Resisting the pull of full automation doesn't mean resisting AI—it means demanding something harder to build: systems that make us more capable rather than more dependent.

    • Building agentic tools that preserve human agency: Project Liberty is developing digital infrastructure to give people control over their data, facilitate interoperability among AI systems, and treat people not as users to be monetized, but as citizens with decision-making power over their relationships with AI.

       

    • Building AI systems that remain as tools: Along the spectrum of AI tools and AI agents, there are opportunities to center human judgment and enhance human agency. One example is narrow-scope AI assistants, such as Khan Academy’s AI tool, Khanmigo. It uses Socratic questioning to guide users toward an answer without simply delivering it, keeping students learning and grappling with the material rather than outsourcing their thinking.

       

    • Building policy and regulatory precedents: On the regulatory front, California’s SB 243 law, which we covered in a recent newsletter on AI advertising, establishes constraints on AI chatbots. These include restrictions on companion chatbots, reminders that people are chatting with AI, and crisis protocols. SB 243 doesn’t restrict the level of AI autonomy, but it is an example of establishing boundaries aimed at preserving human agency. It’s possible that today’s AI policies could set precedents for future rules that limit how agentic AI systems can become.

     

    // Positive-sum agency

    The movement from tools to agents is not inevitable. It is being negotiated by regulators, companies, researchers, and users—each optimizing for different forms of efficiency, control, and risk.

     

    Some institutions are choosing constraints. Others are choosing acceleration.

    The outcome will depend less on technical capability and more on which of those logics prevails.

     

    Herbert’s warning was not about machines themselves. It was about the consequences of relinquishing choice.

    Project Liberty in the news

    // Braxton Woodham, President of Project Liberty Labs, wrote an article on Medium arguing that a primary personal agent, exclusively dedicated to one individual, must stand at the frontier between human consciousness and autonomous artificial intelligence in order to safeguard human agency in the agentic era.

    📰 Other notable headlines

    // 🎥 60 Minutes profiled AI artist Refik Anadol, who uses massive datasets and AI to create immersive works shown around the world. (Free).

     

    // 🎒 More than half of teens use chatbots for schoolwork, according to a new study from the Pew Research Center. (Free).

     

    // 🛡️ AI safety researchers have long worried that a government would seek to use AI for domestic surveillance and autonomous killing. The Pentagon’s fight with Anthropic threatens to make it a reality, according to an article in Platformer. (Free).

     

    // 🎲 Polymarket defended its decision to allow betting on war as ‘invaluable’. According to an article in The Verge, everything is now gambling, even human suffering. (Paywall).

     

    // 🔎 A congressional investigation estimates that data broker breaches have cost consumers $20 billion in identity theft, according to The Markup. (Free).

     

    // 📣 The next AI whistleblower could come from anywhere in the world. A digital safe allows workers to speak up about concerns even in places without strong whistleblower protections, according to an article in Rest of World. (Free).

     

    // ⛪ Pope Leo tells priests not to use AI to write homilies or seek likes on TikTok, according to an article in the National Catholic Reporter. (Free).


    // 🤔 We’ve been searching for a mind inside the machine. The rise of AI agents suggests its shell may be enough, according to an article in Noema Magazine. (Free).

    Partner news

    // Lessons from Europe’s largest digital identity experiment

    February 18 | 7:00 - 8:00 EST | Virtual 

    During the next MyData Global webinar, Meeco’s CEO and CPO will share insights from the EUDI Large Scale Pilots, Europe’s most extensive digital identity testing initiative. The pilots spanned more than 1,300 interoperability scenarios and issued over 1,500 credentials to hundreds of real users, offering practical lessons for the future of digital identity systems. Register here.

     

    // AI is a massive problem; here’s why, according to Palisade Research

    Palisade Research has released a new video tracing the evolution of artificial intelligence and examining the serious risks that accompany rapid progress. Watch the explainer here. 

     

    // Confronting disinformation with global leaders

    Associate Research Professor at the McCourt School of Public Policy, Renée DiResta, joined Nobel Peace Prize laureate and Rappler CEO Maria Ressa for a discussion on combating disinformation, moderated by Prince Harry, The Duke of Sussex. Watch the full interview here.

    What did you think of today's newsletter?

    We'd love to hear your feedback and ideas. Reply to this email.

    // Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

     

    Thank you for reading.

    Facebook
    LinkedIn
    Twitter
    Instagram

    10 Hudson Yards, Fl 37,
    New York, New York, 10001
    Unsubscribe  Manage Preferences

    © 2026 Project Liberty LLC