Should we slow the development of AI?
In 1942, scientists at the Manhattan Project briefly feared that detonating the atomic bomb might ignite Earth’s atmosphere and destroy the planet.
Robert Oppenheimer brought in top physicists to examine the risk. It wasn’t an ethical debate—it was a physics problem. And fortunately, it had a clear answer: The laws of nature made such a chain reaction virtually impossible.
Physicists could calculate the outcome with confidence.
AI researchers face a different reality. The neural networks driving today’s most advanced systems are “black boxes”—even the experts can’t fully explain how they work or predict how they’ll behave.
A widening gap has opened between the speed of AI development and our ability to understand and govern it.
Unlike an atomic bomb, AI also has the potential to be a tremendous force for good: it could accelerate cures for disease, help reverse climate change, improve education, and unlock solutions we haven’t yet imagined.
So how do we embrace the benefits while guarding against the risks? And would slowing down AI cost us breakthroughs that could save lives?
This is Part II of our exploration of sophisticated AI systems. Last week, we investigated AI agents. This week, we explore the risks and upsides of powerful AI, and what’s possible as we near Artificial General Intelligence (AGI).
// Definitions: AGI & ASI
First, some definitions are necessary:
- Artificial General Intelligence (AGI) refers to an AI system that can learn and apply knowledge across a wide range of tasks—much like a human mind. According to an article in the New York Times from earlier this month, some tech leaders believe we could see rudimentary AGI within the next decade. AGI sits at the intersection of autonomy (independence of action), generality (breadth of scope), and intelligence (task competence).
- Artificial Superintelligence (ASI) is a step beyond AGI. Dan Hendrycks, director of the Center for AI Safety, defined superintelligence as AI that “would decisively surpass the world’s best individual experts in nearly every intellectual domain.”
ASI might be a ways off, but AGI could be around the corner. Anthropic’s CEO, Dario Amodei, thinks it could arrive as early as 2026. Sam Altman, CEO of OpenAI (a company whose stated mission is to ensure that AGI benefits all of humanity), has written that AGI is “coming into view.”
// The risks of AI
Some think that, like the atomic bomb, AI could pose an existential risk to humanity, either by contributing to the creation of smart weapons or by outmaneuvering its handlers.
Keep The Future Human, an initiative of the Future of Life Institute (a Project Liberty Alliance member) spearheaded by physicist Anthony Aguirre, explores AI’s risks. It outlines six areas of risk:
- Power concentration: Unprecedented accumulation of power by corporations, governments, AI systems, or other actors.
- Massive societal disruption: Widespread displacement of human labor and the collapse of social systems or economic structures.
- Catastrophic events: Dramatically increased risk of devastating attacks or accidents enabled by AI capabilities.
- Geopolitical instability: Automation of warfare, destabilizing shifts in global power, and increased likelihood of conflict.
- Loss of human agency: Surrendering human decision-making to automated systems we cannot fully understand or control.
- Environmental tipping points: Failed interventions, mismanagement, or runaway energy consumption.
“The only way to avoid AGI’s risks is not to build it—at least, not until we are sure it’s safe,” the report said. The public agrees. In a July 2023 survey conducted by the AI Policy Institute, 82% of Americans said we should “go slowly and deliberately” in AI development.
// The optimists' case for AGI
However, not building AGI means slowing its potential to be a transformational force for good. There are legitimate reasons to be optimistic:
- Biomedical progress and cures to disease: In his essay “Machines of Loving Grace,” Amodei argues that powerful AI could dramatically accelerate the search for cures for cancer and Alzheimer’s. “AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the ‘compressed 21st century’,” he said. The VC Marc Andreessen was more blunt. “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder,” he said in 2023.
- Economic abundance: A McKinsey report in 2023 estimated that AI could add roughly $13 trillion in additional global economic activity by 2030 (for scale, the global economy stands at about $115 trillion in 2025). Research by the Federal Reserve Bank of St. Louis on labor productivity found that using AI tools can boost productivity and reduce hours worked, with some of the greatest gains going to people who use the tools consistently.
- Innovations that build a more equitable world: While it’s unclear how widely such economic gains would be distributed, AI has the potential to transform the global economy into one powered by data. In education, AI tutors could radically expand access to learning and opportunity. In agriculture, AI can help develop climate-resilient crops and healthier soils, improving food security and access.
// How to keep the future human
There are likely many middle paths that harness the best of AI without losing control of it. In the Keep The Future Human initiative, Aguirre proposes an alternative to AGI that he calls “Tool AI,” which avoids AGI’s risks by limiting at least one of the three properties that define AGI: autonomy, generality, and intelligence (a rough sketch of this idea follows the list below). Tool AI would be:
- Intelligent and general-purpose, but requiring human oversight.
- General and autonomous, but of limited capability.
- Intelligent and autonomous, but confined to specific domains.
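To make the triad concrete, here is a minimal sketch in Python of the idea behind Tool AI: a system that scores high on all three axes at once crosses into AGI territory, while deliberately capping any one axis keeps it a tool. The class, field names, and thresholds are invented for illustration; they are not drawn from the Keep The Future Human report.

```python
# A toy illustration of the autonomy / generality / intelligence triad.
# All names and thresholds below are invented for this sketch.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    autonomy: float      # independence of action, 0.0-1.0
    generality: float    # breadth of scope, 0.0-1.0
    intelligence: float  # task competence, 0.0-1.0

HIGH = 0.8  # placeholder threshold for "high" on each axis

def is_agi_like(profile: AISystemProfile) -> bool:
    """AGI sits at the intersection of all three properties."""
    return (profile.autonomy >= HIGH
            and profile.generality >= HIGH
            and profile.intelligence >= HIGH)

def is_tool_ai(profile: AISystemProfile) -> bool:
    """Tool AI deliberately limits at least one of the three axes."""
    return not is_agi_like(profile)

# Example profiles mirroring the three bullets above, each capping one axis:
oversight_bound = AISystemProfile(autonomy=0.2, generality=0.9, intelligence=0.9)
limited_helper  = AISystemProfile(autonomy=0.9, generality=0.9, intelligence=0.4)
narrow_expert   = AISystemProfile(autonomy=0.9, generality=0.3, intelligence=0.9)

assert all(is_tool_ai(p) for p in (oversight_bound, limited_helper, narrow_expert))
```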
Aguirre outlines four practical measures to prevent uncontrolled AGI:
- Compute oversight: Standardized tracking and verification of AI computational power usage.
- Compute caps: Hard limits on computational power for AI systems, enforced through law and hardware (see the sketch after this list).
- Enhanced liability: Strict legal responsibility for developers of highly autonomous, general, and capable AI.
- Tiered safety & security standards: Comprehensive requirements that scale with system capability and risk.
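As a rough illustration of how a compute cap could be checked in practice, here is a minimal sketch. The threshold values and function name are placeholders invented for this example, not figures or mechanisms taken from the report or from any regulation.

```python
# A toy compliance check for a hypothetical compute cap.
# The cap values below are placeholders, not real regulatory figures.
TRAINING_CAP_FLOP = 1e26            # hypothetical limit on total training compute
INFERENCE_CAP_FLOP_PER_SEC = 1e19   # hypothetical limit on sustained inference rate

def check_training_run(total_flop: float, inference_flop_per_sec: float) -> list[str]:
    """Return a list of cap violations for a proposed training/deployment plan."""
    violations = []
    if total_flop > TRAINING_CAP_FLOP:
        violations.append(
            f"training compute {total_flop:.2e} FLOP exceeds cap {TRAINING_CAP_FLOP:.2e}"
        )
    if inference_flop_per_sec > INFERENCE_CAP_FLOP_PER_SEC:
        violations.append(
            f"inference rate {inference_flop_per_sec:.2e} FLOP/s exceeds cap "
            f"{INFERENCE_CAP_FLOP_PER_SEC:.2e}"
        )
    return violations

# Example: a frontier-scale run that would trip both caps.
print(check_training_run(total_flop=5e26, inference_flop_per_sec=3e19))
```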
These measures would require a coordinated effort by policymakers and industry leaders to redirect AI development away from its breakneck race toward general intelligence and into the safer harbor of responsible innovation.
One thing AI is not good at: summoning the political will to pass policies regulating AI. This is partially because the development of AI has become a geopolitical race between nations. A country that self-regulates its use of AI might put itself at a disadvantage if other nation-states don’t follow suit (a phenomenon also observed with nuclear weapons). After the AI Action Summit in Paris last month, the Hard Fork podcast noted how AI safety had taken a back seat to AI opportunism.
The EU’s AI Act, passed in 2024, is the world’s first comprehensive legal framework for AI. It regulates AI along a spectrum of risk while attempting to cultivate innovation (a tricky balance to strike).
In the U.S., the Biden Administration produced a symbolic (not enforceable) Blueprint for an AI Bill of Rights. Under the new Trump Administration, it’s still unclear how AI policy will unfold.
// The People's AI
Are autonomous, general AI systems yet another technology we will lose control of?
Is AGI the next step in a continuous progression where everyday people have less control over the technologies that rule their lives?
It’s possible to see something so intelligent and autonomous as a threat to our human voices and choices, principles that make up a big part of Project Liberty’s mission. Keeping such fast-moving technologies within our control is crucial to preserving our human rights, but it is a false dichotomy to conclude that we must choose between progress and safety. We need the full spectrum of our human imagination to see what’s possible.