Scammers are targeting families with fake emergency calls, but there’s a way to protect yourself.
View in browser

July 22nd, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Your voice can be cloned in 10 seconds

 

“The initial caller’s voice sounded very much like my nephew’s. He knew family details, pleaded with me not to call his father, and promised to pay me back as soon as he got home.”

“The voice on the other end sounded just like my grandson, and it said, ‘Gramie, I've been in an accident.’”

“I received a call and heard my daughter crying hysterically! She wasn't making sense, so an ‘officer’ took over the call. He stated I needed to come right away but would not answer my questions.”

“I received a phone call from my grandson explaining that he was in a car accident at college and needed $5,000. He sounded scared and upset and asked that I not tell his parents. So I went to my bank to get the money.”

These quotes are a small sample of the hundreds of stories that Americans shared with Consumer Reports about audio deepfake scams.

Today’s AI tools need to train on less than 10 seconds of your voice to create a clone that sounds just like you.

Earlier this summer, an imposter did just that by creating a voice-based deepfake of Secretary of State Marco Rubio. The impersonator sent voice and text messages that perfectly parroted Rubio’s voice and writing style to foreign ministers, a U.S. governor, and members of Congress.

Rubio wasn’t the only target. The FBI has warned of a concerning increase in audio deepfakes impersonating senior U.S. officials. “If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI said in an announcement in May.

 

This week’s newsletter looks at the rise of voice cloning and deepfakes—how the technology works, the risks it poses, and how you can protect yourself.

 

// When fake news sounds like you

It’s never been easier to clone one’s voice. An online search will yield dozens of companies offering the service. What once required sophisticated equipment and expertise can now be done on a smartphone with a free app. 

 

While there are harmless use cases for voice clones (e.g., narrating an audiobook or producing a podcast), voice cloning technology has also been harnessed for nefarious purposes.

 

A report released by Consumer Reports (CR) earlier this year found two types of threats from voice clones:

  1. Bad actors use voice clones to impersonate everyday Americans. Often, the cloned voice belongs to a loved one in apparent distress. Since ChatGPT launched in November 2022, deepfakes have surged more than twentyfold.
  2. Bad actors use voice clones to impersonate trusted public figures. The voices of influencers, celebrities, and politicians have been made to say the darndest things (most of which were untrue). A 2024 ProPublica investigation found videos and audio on Facebook and Instagram that mimicked the distinctive voices of President Trump and President Biden, offering cash handouts to people who completed a survey. A deepfake of Taylor Swift in 2024 deceived fans with a fake Le Creuset giveaway. An analysis by The New York Times concluded that deepfakes of Elon Musk have led to billions of dollars in fraud. And yet companies like Parrot exist, selling tools to “make a celebrity say anything.” In a world flooded with fakes, they’re handing out megaphones.

The risks are amplified when voice clones are used in an attempt to sway elections.

  • In 2024, Democrats in New Hampshire received a phone call from a voice that sounded like President Biden, encouraging them not to vote in the state’s primary. It was an audio deepfake created by a Democratic political consultant who was later fined and indicted on criminal charges.
  • In 2024, in Indonesia, a deepfake "resurrected" a long-dead president to endorse a candidate in an upcoming election.

 

// The rise of voice-based deepfakes

Imposter scams are not new, but AI technology makes them more believable.

 

A 2024 report by Deloitte predicted that AI-generated fraud could grow at a 32% year-over-year rate, exceeding $40 billion in losses by 2027 (see graph below).

Graph by Deloitte

Deepfake incidents in the financial sector increased 700% in 2023 alone.

 

A 2024 report by Mastercard found that nearly one in two U.S. businesses had been targeted by identity fraud using deepfakes, and almost two in five were targeted by deepfake voice fraud.

 

// Making it harder to create audio deepfakes

Addressing audio deepfakes requires a multifaceted approach at all levels, from individual awareness to technical solutions to policy changes.

 

After analyzing AI voice cloning companies, the CR report from March concluded that today’s technology lacks proper safeguards to protect consumers from potential harms caused by audio deepfakes.

 

Some companies allow a user to create a voice clone from publicly available audio. Others lack verification mechanisms to confirm that the person being cloned has consented.

 

The report makes specific recommendations for tech companies:

  1. Consent: Companies should have mechanisms and protocols in place to confirm the speaker's consent (for example, by requiring the user to upload audio of a unique script read aloud).
  2. Customer Verification: Companies should enhance “know-your-customer” practices, such as collecting names and emails, to facilitate the tracing of fraudulent audio.
  3. Watermarks: Companies should watermark AI-generated audio to aid in future detection.
  4. Blocking: Companies should detect and prevent the unauthorized creation of voice clones of influential figures.
  5. Guardrails: Companies should build “semantic guardrails” into their cloning tools that automatically flag and block requests to generate audio containing phrases frequently used in scams (see the sketch after this list).
  6. Supervised Tools: Companies should consider offering supervised voice-cloning services instead of do-it-yourself products.
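
To make the “semantic guardrails” recommendation concrete, here is a minimal, hypothetical sketch in Python. It assumes a simple phrase-matching approach; the pattern list and the check_script function are illustrative inventions rather than anything from the CR report, and a production system would rely on a trained classifier instead of a hand-written rule list.

    import re

    # Illustrative patterns drawn from the kinds of scam calls quoted above.
    # A real guardrail would use a trained classifier, not hand-written rules.
    SCAM_PATTERNS = [
        r"\bbeen in an? (car )?accident\b",
        r"\bdon'?t tell (my|your) (parents|father|mother)\b",
        r"\bneed \$?\d[\d,]* (right away|immediately|today)\b",
        r"\bbail( money)?\b",
        r"\bwire (me )?(the )?money\b",
    ]

    def check_script(text: str) -> list[str]:
        """Return the scam-associated patterns that a requested script matches."""
        return [p for p in SCAM_PATTERNS if re.search(p, text, re.IGNORECASE)]

    request = "Grandma, I've been in an accident and need $5,000 today. Don't tell my parents."
    matched = check_script(request)
    if matched:
        # Flag the request for human review, or refuse to synthesize the audio.
        print(f"Blocked: script matched {len(matched)} scam pattern(s).")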

Such enforcement could come under Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive practices and creates a legal obligation for companies to prevent their products from being used for harm.

 

State attorneys general have begun to raise concerns, and state legislatures have begun to pass laws. Multiple states, from California to Colorado and New York to Tennessee, have laws on the books regulating the creation and use of deepfakes, and many others have laws specifically addressing audio and video deepfakes in elections. At the federal level, President Trump signed the Take It Down Act earlier this year, bipartisan legislation that imposes stricter penalties for publishing non-consensual intimate imagery, including AI-generated deepfakes.

 

// What individuals can do

One approach to protect yourself and your family from audio deepfakes is to create a family password. Pick a word that you and your loved ones can easily remember. Then, if someone reaches out in distress seeking money or confidential information, ask them for this safe word.

 

The nonprofit Identity Theft Resource Center (ITRC) offers additional resources for individuals. The best defense is to be skeptical and proceed slowly. Before taking action, take a breath, seek alternative ways to reach loved ones, and never send money in response to a threatening phone call.

 

The surge in audio deepfakes underscores the urgent need for greater integrity and accountability in our digital systems. Project Liberty is advancing a vision for a healthier internet, where identity is protected, voices are authentic, and data is owned. As deepfake fraud spreads, so must our collective ability to recognize and resist it.

Project Liberty updates

// Sheila Warren, Project Liberty Institute’s CEO, appeared on New York Stock Exchange TV Live to talk about Crypto Week and the implications of the passage of the GENIUS Act, the first major congressional overhaul of cryptocurrency rules. Watch here.

📰 Other notable headlines

// 🖥 AI is moving from chatbots to the browser. An article in The Verge explains why and what it means for the future of AI. (Paywall).

// 🤔 An article in Tech Policy Press argued that to make sure AI advances democracy, we first must ask, ‘Who does it serve?’ (Free).

// 🚨 A major AI training data set contains millions of examples of personal data, according to an article in MIT Technology Review. Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models. (Paywall).

// 🏛 The crypto industry reached a milestone with the passage of its first major bill. An article in The New York Times reported on the legislative wins during Crypto Week. (Paywall).

// 📸 The rise of AI art is spurring a revival of analogue media. It is not just vinyl. Film cameras and print publications are trendy again, according to an article in The Economist. (Paywall).

// 📱 Grok's new porn companion is rated for kids 12+ in the App Store. AI companions are arriving faster than platforms can build guardrails around them, according to an article in Platformer. (Free).

 

// 🎒 AI is helping students be more independent, but the isolation could be career poison, according to an article in The Markup. Chatbots may give students quick answers when they have questions, but they won’t help students form relationships that matter for college and life success. (Free).

 

// 🤖 An article in Semafor predicted that the next AI breakthrough will be how well your agent knows you. (Free).

Partner news

// Metagov Seminar: TRANSFER Data Trust
July 23 | 12pm ET | Zoom
Metagov welcomes curator Kelani Nichole for a virtual seminar on TRANSFER Data Trust—a decentralized, artist-owned archive and cooperative value-exchange network. The session explores how the model empowers creators to control and monetize their digital works. Register here.

// Webinar on the Ethereum Name Service
July 23 | 3pm ET | Zoom
Join a webinar with the ENS Domain team for a hands-on look at how the Ethereum Name Service enables decentralized naming, digital identity, and cross-platform interoperability. This virtual event is ideal for developers and product teams exploring Web3 identity integration. Register here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

 

Thank you for reading.

Facebook
LinkedIn
Instagram
Project Liberty footer logo

10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2025 Project Liberty LLC