Brain scans reveal: ChatGPT makes us less engaged learners. But there's a smarter way to use AI.

July 15th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.


Your Brain on AI

 

Could AI tools like ChatGPT enhance—or hinder—how we learn?

A recent study suggests that the answer may depend on how we use them.

While AI has the potential to enrich the learning process—by offering feedback, sparking curiosity, or simplifying complex ideas—it can also lead to what's known as “cognitive offloading,” where we rely on tools to do the thinking for us.

In this newsletter, we unpack early findings from a new MIT study and explore what it reveals about AI’s double-edged role in education. Whether it’s calculators for long division, keyboards for note-taking, or chatbots for editing essays, our tools inevitably shape how—and how deeply—we learn.

So when does AI help us learn better? And when might it get in the way? Let’s dig in.

 

// The study

A study published last month by researchers at MIT’s Media Lab found that using ChatGPT to write essays negatively influenced the learning process.

 

The study divided 54 subjects, aged 18 to 39, into three groups, each assigned to write an SAT essay.

  • The first group was assigned to use OpenAI’s ChatGPT to write the essay.
  • The second group was assigned to use Google’s search engine to write the essay.
  • The third group was given no tools at all.

This process was repeated over the course of a few months, with each participant writing three essays in total using their assigned tool (or, in the case of the third group, no tool).

 

As participants wrote, researchers used electroencephalography (EEG), a test that measures electrical activity in the brain, to record activity across 32 brain regions in all three groups.

 

Here’s what they found:

  • ChatGPT users had the lowest brain engagement across the groups. They “consistently underperformed at neural, linguistic, and behavioral levels.”
  • ChatGPT users got lazier each time the process was run. By the end of the study, they were copying and pasting content into their essays.
  • According to two English teachers who reviewed the essays, the group that used ChatGPT wrote essays that lacked original thought or personal anecdotes.
  • 83% of participants from the ChatGPT group could not recall their own sentence minutes after “writing” it. Only 11% of individuals in the other two groups had this issue.
  • The EEG results showed that the group that didn’t use any technology exhibited the highest levels of neural connectivity, particularly in areas of the brain associated with creativity, memory, and information processing. Participants who wrote without the aid of any technology also reported greater ownership and higher satisfaction with their essays.

There are shortcomings to the study—from its small sample size to its release ahead of peer review—but its findings align with another study, out of Turkey, which found that high school students who used ChatGPT to help with their homework thought they had learned a lot but actually performed 17% worse on their final exam than students who didn’t use ChatGPT.

 

// Is this news?

The MIT researchers have received so much interest in the last month that they’ve created a website with results, images of the brain, FAQs for the press, and logos of the publications they’ve been featured in.

 

On one hand, is it any surprise that having a computer (or anything or anyone, for that matter) help write your essay leads to less brain activity, weaker memory of what was written, and essays that are less personal and original?

 

Anyone who has relied on a calculator to divide two big numbers knows how much easier it is to cognitively offload that tedious task than to summon the fourth-grade memory of how to set up a long division problem.

 

It should not be a surprise that the less time we spend immersed in a book, grappling with a math problem, or pondering an idea, the harder it is to learn and recall the material. But in the age of AI, when the answers are just one prompt away, how important is it to have memory recall?

 

On the other hand, the study’s results raise questions about how best to use AI tools to support the learning process. Tools like ChatGPT, Claude, Khanmigo, and others have become daily companions to students of all ages. How they’re used matters, especially if the goal is not just to complete tasks, but to cultivate genuine comprehension and critical thinking.

 

Consider handwriting. Research has shown that taking notes by hand leads to better memory development and learning than simply typing those notes on a keyboard. A study from 2024 found that “whenever handwriting movements are included as a learning strategy, more of the brain gets stimulated, resulting in the formation of more complex neural network connectivity.”

 

// Can we use AI to help us think for ourselves?

AI can be used to offload our cognitive work, but it can also be used to facilitate personalized learning.

  • A randomized, controlled World Bank study released in May found that using a GPT-4 tutor with teacher guidance over a six-week after-school program in Nigeria had “more than twice the effect of some of the most effective interventions in education,” all at an extremely low cost.
  • A Harvard experiment found that “students learn more than twice as much in less time when using an AI tutor, compared with the active learning class.”

Additional research (this study and this study) suggests that better learning outcomes are possible when AI is involved.

       

The existence of AI in educational settings isn’t necessarily the problem; it’s how it’s used. As Ethan Mollick, a professor at Wharton, suggests on his Substack, if the default is for AI to do the work for you rather than with you, learning outcomes might suffer. Last week, OpenAI released a new feature called “Study Together,” which aims to provide Socratic tutoring rather than automatic answers, working with rather than working for.

       

Rebecca Winthrop, Director of the Center for Universal Education at Brookings, wrote last month, “To all educationalists thinking about generative AI in education: let’s be deliberate. Before we teach young people how to use AI tools to enhance their work, let’s first make sure they know how to think for themselves.”

       

Mollick agrees. “The key is sequencing,” he said. “Always generate your own ideas before turning to AI. Write them down, no matter how rough. Just as group brainstorming works best when people think individually first, you need to capture your unique perspective before AI's suggestions can anchor you.”

       

// Reexamining the purpose of education

The authors of the MIT study caution against concluding that using ChatGPT diminishes our intelligence, and they do not conclude that ChatGPT usage is causing damage.

       

There is a long history of worry that the latest technology will make us dumber. Plato thought writing would undermine our memory. When cell phones were introduced, some feared that no longer having to remember telephone numbers would dull our minds.

       

The latest technology can shape our thinking and steal our agency, but only if we let it. Design tweaks to algorithms, platforms, and bots can make our experience with technology healthier. More research is needed: peer-reviewed, longitudinal studies with larger samples of participants.

       

Using AI in the learning process sparks bigger, more existential questions.

  • When is it smart to cognitively offload our work to AI, and when is it valuable for us to use our brains and grapple with the material?
  • How might the rise of AI tools reshape not only how we approach education, but the very philosophy that underpins it?
  • What is lost when we optimize for speed and efficiency?

Ezra Klein interviewed Winthrop, of Brookings, on his podcast in March. She suggested that AI will prompt us all to cultivate skills that are fundamentally human: the ability to pay attention, the capacity for reflection, the process of making meaning in a fast-changing, uncertain world.

Project Liberty news & updates

// Project Liberty Institute is releasing a new report, “How Can Data Cooperatives Help Build a Fair Data Economy?” In collaboration with the Decentralization Research Center, the report explores how data co-ops can be a scalable alternative to the centralized digital economy.

// Going to the Web3 Summit in Berlin this week? Meet representatives of Project Liberty and Frequency. They’ll be diving into Tokenized Sentiment, Data Protocol, and Policy in the Protocol.

📰 Other notable headlines

// 🔬 For algorithms, memory is a far more powerful resource than time. An article in WIRED described how one computer scientist’s discovery marks a major leap forward in the field. (Paywall).

// 🛡 Cybersecurity’s global alarm system is breaking down. The US system to track vulnerabilities is struggling to keep up with its backlog, according to an article in MIT Technology Review. (Paywall).

// 🌎 An article in Rest of World explored why Big Tech is threatened by a global push for data sovereignty. Countries are forcing tech giants to store citizen data locally, challenging the standard business model of harvesting data abroad while keeping profits at home. (Free).

// 📄 News publishers are building fences around their content in an effort to cut off crawlers that don’t pay for it. An article in the Wall Street Journal explored the AI scraping fight that could change the internet. (Paywall).

// 🤖 An article in WIRED featured a new kind of AI model that lets data owners take control. A novel approach from the Allen Institute for AI enables data to be removed from an artificial intelligence model even after it has been used for training. (Paywall).

// 🇪🇺 Europe has much to gain from a pragmatic, technology-centered transatlantic relationship. An article in Project Syndicate argued that a thriving, technologically advanced Europe is in America’s own interest. (Free).

Partner news & opportunities

// New conference: “The Future of Digital Finance”
September 24-25 | Virtual | Abstracts due July 18
CIGI is convening policymakers, industry leaders, and researchers for a forum exploring how India, China, and Africa are reshaping digital payments. Participants may submit policy briefs. RSVP here.

// Researchers slash racial bias in AI loan decisions
Researchers at the Stanford Digital Economy Lab have discovered that a simple adjustment within LLMs can reduce racial gaps in loan approvals and interest-rate quotes by up to 70% without compromising accuracy. Read here about this transparent, low-cost way to audit and correct algorithmic bias.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

Thank you for reading.

Facebook
LinkedIn
Instagram

10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2025 Project Liberty LLC