The more AI knows, the harder it is to leave ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­    ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­  
View in browser
logo_op-02

February 17th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Your data is AI’s memory

 

Earlier this year, OpenClaw broke onto the scene.

​

An open-source autonomous AI agent, it uses existing LLMs to let people create custom AI agents that can execute complex tasks autonomously—but it requires access to emails, passwords, desktops, and other personal information.

​

What could go wrong?

​

Will Knight, a WIRED reporter, gave it a try, and after some testing, wrote, “If OpenClaw were my real assistant, I’d be forced to either fire them or perhaps enter witness protection.”

​

Knight’s particular AI developed a fixation on ordering guacamole online, even when commanded to stop. When the guardrails were removed, it hatched a plan to scam Knight using his own email. (Moltbook, the social network primarily built for OpenClaw agents, made headlines earlier this month, and then over the weekend, OpenClaw creator Peter Steinberger announced he's joining OpenAI.)

 

Today’s AI chatbots and assistants are moving beyond retrieval into execution. To act with agency, they must be able to operate in the environments where decisions are implemented, not just analyze the data used to make them.

​

Gaining more access to people’s online lives has been a goal for AI companies. Last year, multiple AI companies launched AI-powered browsers, promising a seamless AI-mediated experience across the internet: browsing the web, distilling information, serving as digital companions, autonomously performing tasks, and managing personal data across devices. AI companies don’t want to help you browse the internet; they want to become your internet.

​

This newsletter examines how AI memory works, the tradeoffs it introduces for privacy, and the steps individuals can take to manage their data.

    // How AI memory features work

    Last month, Google launched Personal Intelligence, which connects data from apps such as Gmail, Google Photos, and Google Search to Gemini, its AI product.

    ​

    Personal Intelligence is a form of AI memory that “connects the dots” across all the Google products to “provide suggestions tailored to your world.” With each additional data point, the system adds to its understanding of you.

    ​

    Artificial intelligence “remembers” something by storing data—from chat histories to user patterns to online behavior—in a context window, which is the amount of information an LLM can process at one time.

    • The context window is specific to the program and is measured in a processing unit called a token (roughly four characters or 3/4ths of a word).
    • Google Gemini has a context window of up to 2 million tokens; Anthropic's Claude Sonnet 4 has 1 million tokens (up from a previous limit of 200,000), and OpenAI's GPT-5 has 400,000 tokens, all of which are expanding.

    Many chatbots have a context window that resets after each session, but newer memory features, such as Google’s Personal Intelligence, store selected data across sessions. This allows the system to carry forward user information as people move from email to search to chatbot interactions.

    • A Stanford study found that all six major AI companies typically train their models on individual data by default. As we explored in this newsletter, people on free AI plans are included in training by default unless they have actively opted out.
    • Once sensitive data is included in the context window, it becomes vulnerable. According to Lena Cohen, a staff technologist at the Electronic Frontier Foundation, this data “could be misused in ways that most people aren't thinking about, whether it's by a hacker or by a government.”
    • AI platforms are exploring ways to monetize the behavioral data these systems generate, often through highly personalized advertising. As we covered in a recent newsletter, the economic model that once tracked clicks and location can now extend to patterns of inquiry, expression, and decision-making.

     

    // More power, more risk

    More powerful AI models can store more data, expanding their ability to carry context over time. That expanded memory is enabling systems, such as Google’s Personalized Intelligence, to move beyond chat interfaces and carry out tasks across the web, creating a new set of privacy risks.

     

    Here are three ways more powerful systems with longer memory and greater autonomy can raise the stakes for privacy.

     

    1. AI agents can leak sensitive data

    What if your Google search for gluten-free restaurants leads an AI system to infer that you have celiac disease, and that assumption begins to shape your health insurance options?

    ​

    In a piece for the MIT Technology Review, Miranda Bogen, Director of the AI Governance Lab at the Center for Democracy & Technology, and Ruchika Joshi, a fellow there, refer to this spillage as “information soup,” where AI agents process, combine, and pass along data outside the context in which it was originally shared. Similar concerns surfaced after the 2023 breach at 23andMe.

     

    When these agents are connected to external applications, they don’t just access discrete records. They can aggregate signals across services, producing a composite picture of a person’s behavior, preferences, and circumstances, thereby increasing the likelihood of exposure on a much broader scale.​

     

    2. AI agents can accidentally delete important data

    And then there’s the risk of agents making mistakes in handling important data. In 2025, a Replit AI tool, when instructed to “read only,” deleted the company’s entire database of client records. The system then generated roughly 4,000 synthetic customer profiles to help reconcile the unexpected state change. Logs later showed it had run commands outside the intended permission scope, bypassing safeguards meant to require human review and approval.

     

    3. AI agents can be hacked

    AI systems have also been vulnerable to malicious prompt injection, in which attackers insert instructions or prompts that override an AI’s existing rules, leading to fraud or the extraction of user data without their knowledge or consent. PromptArmor, an AI security design firm, found that Claude’s Cowork was vulnerable to this attack. During testing, the AI sent private financial documents to a stranger, a flaw that was first reported in October 2025.

     

    // Platform lock-in

    As AI systems retain more of your data, they deepen your dependence on the platforms that hold it. When a model is calibrated to how you work, your specific preferences, and what captures your attention, it becomes more useful and harder to replace. The very memory that improves performance raises the cost of leaving.

     

    If this sounds familiar, it’s because social media platforms used this strategy to keep people from leaving: individuals who chose to close their accounts risked losing their entire social graph and all the content they had created.

    ​

    We are trading privacy and agency for personalization and convenience, often not fully understanding it’s a transaction that keeps us locked in. But the AI companies are thinking ahead; Perplexity CEO Aravind Srinivas said, “If people are in the browser, it's infinite retention.”

     

    // Restoring human agency

    How can people who rely on AI tools—there are over 700 million weekly active users of OpenAI’s ChatGPT alone—regain control over their data and agency in their digital lives?

    1. “By Design” changes to existing AI systems: The technical architecture of AI tools needs to be able to determine which content should be shared, which shouldn’t, and in what contexts. Claude's project-based memory separation and OpenAI's ChatGPT Health compartmentalization are workable baselines, but more rigor is needed in where, when, and how AI systems share memory. Other by design changes would also help, such as greater transparency and regular behavioral audits.

    2. Greater control: Users need clear visibility into what is remembered about them—and the ability to edit, manage, or revoke that memory. This principle is central to our work at Project Liberty. We believe AI systems should interact with individuals through a user-governed layer that routes interactions, minimizes unnecessary data sharing, and keeps sensitive context under the individual’s control. By placing people at the center of these exchanges, we can enable meaningful personalization and interoperability without requiring individuals to surrender ownership of their data or attention.

    3. New research and standards: Loyal Agents, a research collaborative uniting Consumer Reports, Stanford’s Digital Economy Lab and Project Liberty Institute, is advancing research and standards to make AI agents secure, loyal, and effective advocates for consumers everywhere. Their research centers in three areas:
      1. Enabling AI agents to securely transact on behalf of consumers.
      2. Defining duties of AI agents to consumers.
      3. Ensuring agents are effective and aligned with consumers' preferences.

    It can take time for regulation to catch up to the “jagged frontier” of AI memory, personalized intelligence, and ever-expanding context windows. But existing frameworks like the EU's General Data Protection Regulation and California's Consumer Privacy Act provide some privacy precedents that will need to be expanded to AI-specific risks, including how our data can be pulled out of context, misused, or stolen.

     

    // The price of convenience

    Convenience has often come with tradeoffs in privacy and control, especially when new technologies scale faster than the norms that govern them. We have seen this dynamic before as personal data has become easier to collect, share, and monetize. With AI systems increasingly embedded in how we work and make decisions, similar questions are emerging again about how much of our digital lives we hand over and on what terms.

     

    All of this adds up to a context flywheel. Each conversation, file upload, and clickstream detail expands what the system can retain. That can improve results, which draws more use, which generates even more context. At the same time, it enlarges the privacy and security surface area and raises the switching cost because people cannot easily recreate years of accumulated context elsewhere.

     

    As AI systems take on more of this role, questions about how memory is designed, governed, and understood move from the margins to the center. The challenge is not simply managing risk, but shaping tools, rules, and user expectations so that accumulated context remains legible, portable, and subject to human choice.

    Project Liberty in the news

    // In an op-ed in Politico, Frank McCourt, founder of Project Liberty, argued that personal data is the new battleground for democracy. “We must build alternative systems that respect individual rights, return ownership and control of personal data to individuals, and align with democratic principles,” he wrote.

    📰 Other notable headlines

    // 💼 A WIRED journalist tried RentAHuman, where AI agents hired him to hype their AI startups. Rather than offering a revolutionary new approach to gig work, RentAHuman is filled with bots that just want humans to be another cog in the AI hype machine. (Paywall).

     

    // 🤔 AI is getting very good at making predictions. An article in The Atlantic asked, will superforecasters become obsolete? (Paywall).

     

    // 💵 Anthropic put $20 million into a Super PAC to counter OpenAI, according to an article in The New York Times. (Paywall).

     

    // ✊ An article in The Guardian reflected on what technology takes from us, and how to take it back. Silicon Valley is giving us life void of connection. There is a way out, but it’s going to take collective effort. (Free).

     

    // 🤖 An article in Semafor reported on the mysterious wave of bot traffic that is sweeping the web. (Free).

     

    // 📱 An article in The Economist made the case that we shouldn’t ban teenagers from social media. It argued that restrictions would do more harm than good. (Paywall).

     

    // 🚫 An extraordinary example of online aggression by a bot is contributing to fears of real-world harm caused by artificial intelligence, according to an article in The Wall Street Journal. (Paywall).

     

    // 🕺 An article in Noema made the case why human intuition is still science’s greatest tool in the age of AI. Our sense for aesthetics, meaning and embodiment gives us a vital advantage over our technological creations. (Free).


    // 💬 What is Claude? Anthropic doesn’t know, either. That’s the big idea at the heart of an article in The New Yorker. (Paywall).

    Partner news

    // Your Attention Please premieres at SXSW 2026

    March 12–14, 2026 | Austin, TX

    “Your Attention Please” has been selected as a Documentary Spotlight film at SXSW 2026. Tickets and advance access details will be available through SXSW, with limited day-of seating expected. View the screening schedule here. 

     

    // With AI, we have speech, but no speaker 

    Deb Roy, a professor of Media Arts and Sciences at MIT,  and director of the MIT Center for Constructive Communication, wrote an article in The Atlantic where he asks: What does it mean to have speech without a speaker?

    What did you think of today's newsletter?

    We'd love to hear your feedback and ideas. Reply to this email.

    // Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

     

    Thank you for reading.

    Facebook
    LinkedIn
    Twitter
    Instagram
    Project Liberty footer logo

    10 Hudson Yards, Fl 37,
    New York, New York, 10001
    Unsubscribe  Manage Preferences

    © 2026 Project Liberty LLC