January 9, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Image from Project Liberty
The rise of AI-generated fraud
Last year, Jennifer DeStefano received a horrifying call.
It was a call from an unknown number, but when the Arizona mother picked up, her 15-year-old daughter’s voice was unmistakable: “Mom, I messed up. Help me, Mom. Please help me.”
With her daughter crying in the background, a man’s voice came on the phone demanding a $1 million ransom for her safe return.
“It was completely her voice. It was her inflection. It was the way she would have cried. I never doubted for one second it was her,” DeStefano said. “A mother knows her child.”
Fortunately, she quickly confirmed that her daughter was actually safe, at which point she realized she had a scammer on the phone. But how had the caller mimicked her daughter’s voice so convincingly?
According to FBI investigators, the voice on the phone was an AI-generated deepfake of her daughter’s voice, an increasingly common type of fraud.
We’ve entered a dangerous new era of AI-powered online fraud, and this week, we’re exploring the latest in scam tech and the solutions to tackle it.
//How synthetic fraud works
Creating a clone of a loved one’s voice is an example of synthetic fraud, where scammers use real data found online to create fake personas and new identities.
To create a voice deepfake, a scammer can download an audio clip of someone's voice from a social media video or voice message—even if the clip is only 30 seconds long—and use AI-powered synthesizing tools to make that voice say anything.
Alain Meier, Head of Identity at Plaid, a company that builds tech products for the banking and financial industries, said that scammers are pulling photos of victims from social media and 3D-printing realistic masks to defeat the facial-recognition checks used to verify identity. In response, Plaid has begun analyzing the texture and translucency of skin to confirm that a live person, not a mask, is in front of the camera when users log in with facial recognition.
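To make the idea concrete, here is a deliberately naive sketch of a texture-based liveness check (our illustration, not Plaid’s actual system): printed or 3D-printed masks tend to be smoother than real skin, so one crude signal is the amount of fine texture in a face image. The function names and the threshold below are hypothetical.

```python
# Hypothetical illustration of a texture-based liveness heuristic.
# Not Plaid's system: real products combine many signals (depth,
# reflectance, motion) learned from labeled data.
import cv2  # pip install opencv-python

def texture_score(image_path: str) -> float:
    """Crude texture measure: variance of the Laplacian (edge energy)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def looks_live(image_path: str, threshold: float = 100.0) -> bool:
    # Smooth printed or 3D-printed masks tend to score lower than skin.
    # The threshold is arbitrary, chosen only for illustration.
    return texture_score(image_path) > threshold
```

A single statistic like this is easy to fool, which is why production systems layer many such signals together.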
//
Globally, the cost of cybercrime was projected to hit $8 trillion in 2023, or more than the GDP of Japan.
//
//The AI x-factor
AI is making scams more convincing, harder to detect, and easier to perpetrate at scale.
AI chatbots: AI chatbots are making text-based online fraud far easier to perpetrate by eliminating the obvious grammatical mistakes and typos that once gave scam messages away. SlashNext, a cybersecurity company, found in its own data that AI chatbots helped drive a 1,265% increase in malicious emails sent between 2022 and 2023.
The rise of “cheapfakes”: Cheapfakes are manipulated media made with readily available technology (from in-camera effects to Photoshop) that is often easy to use and either cheap or completely free.
//The numbers don't lie
According to Federal Trade Commission (FTC) data, Americans lost nearly $8.8 billion to all scams (not just cybercrime) in 2022, a 44% increase from 2021.
Globally, the cost of cybercrime was projected to hit $8 trillion in 2023, more than the GDP of Japan, the world’s third-largest economy. By 2025, cybercrime costs are projected to exceed $10 trillion, roughly a 3x increase in a single decade.
Consumers are not the only victims. In the last year alone, 37% of businesses globally were hit by synthetic voice fraud.
//The solutions
Where there is growing fraud, there is a growing effort to combat it.
The individual level
Consumer education and action are critical.
The FBI provides a list of recommendations on how to avoid becoming a victim (for example, do not make purchases or send sensitive information while connected to public Wi-Fi), and the Federal Trade Commission offers guidance on how to protect personal data (for example, use multi-factor authentication to secure your accounts).
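Multi-factor authentication is worth demystifying: the rotating six-digit codes in most authenticator apps are time-based one-time passwords (TOTP), derived from a secret shared at enrollment plus the current clock. A minimal sketch, assuming the third-party pyotp library, of how such codes are generated and checked:

```python
# Minimal TOTP (time-based one-time password) sketch: the scheme
# behind most authenticator apps. Assumes the third-party pyotp
# library (pip install pyotp); for illustration only.
import pyotp

# The secret is shared once between the service and your
# authenticator app, typically via a QR code at enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # six-digit code that rotates every 30 seconds
print("Current code:", code)

# At login, the service recomputes the code from the same secret
# and compares it to what you typed in.
print("Valid?", totp.verify(code))
```

Because the code changes every 30 seconds, a stolen password alone is not enough to get into an account.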
For those who have been scammed, one of the biggest dangers is the “recovery scam,” in which victims are scammed a second time by fake fraud-mitigation services that promise to recover the money initially lost.
By filing a fraud report with the FTC and sharing what happened with their communities, consumers can reduce the likelihood that others will become victims.
The MIT Media Lab built Detect DeepFakes to help people learn the telltale signs that distinguish deepfakes from authentic media.
The technology level
The same technology enabling online fraud can also be used to detect and prevent it.
AI can also help detect fraud on social media. Meta, for example, uses AI to moderate harmful content and remove deepfakes, which are banned across its platforms. In 2020, Meta ran the Deepfake Detection Challenge, a competition to spur new techniques for identifying deepfakes.
Pindrop, which analyzes the audio of phone calls for signs of fraud, provides anti-fraud and caller-authentication technology to major financial institutions.
In the face of more sophisticated fraud, banks and other financial institutions are turning to more advanced biometrics to authenticate customers.
The government level
Regulators and government agencies are also stepping up.
In the US, the Consumer Financial Protection Bureau protects consumers from unfair practices and fraud in banking, lending, and other financial services.
//From vulnerability to vigilance
While a phone call from a kidnapper might sound like an extreme, worst-case scenario, we’ve entered an age when it’s harder than ever to distinguish truth from fiction and real voices from synthetic ones. That creates opportunities for fraudsters, and it demands that consumers, companies, and government agencies step up.
Who do you know who needs to be prepared for AI-powered fraud? Consider forwarding this email.
Project Liberty news
// 🤔 Can disruptive technology be responsible in 2024 and beyond? That’s the question posed in a recent policy brief in Tech Policy Press, co-written by Project Liberty’s Institute and the Centre for International Governance Innovation.
Other notable headlines
// 🍪 In the biggest change in the $600 billion-a-year online-ad industry, Google is finally killing cookies, according to an article in the Wall Street Journal.
// 🗣 An article in The Markup reported on new research that found assigning a role to a chatbot doesn’t result in more accurate responses.
// 🤖 An article in WIRED explored a new generation of robots built to provide care and emotional support for people with dementia.
// 👂 New AI technology can differentiate between tuberculosis and other respiratory conditions just by hearing the sound of a cough, according to an article in the MIT Technology Review.
// 👩‍💻 An article in Tech Policy Press considered the high cost everyone will pay for big tech laying off trust and safety teams.
// 📞 The most radical New Year’s resolution: switching to a flip phone. Kashmir Hill, a reporter at The New York Times, gave up her iPhone for a month and shared her experience.
// 🧮 An article in TechCrunch explored how the world is shifting from software to data, and how data ownership will lead the next tech megacycle.
Partner news & opportunities
// Virtual conference on open data & civic tech
January 17-19
The U.S. Census Bureau is hosting the Census Open Innovation Summit 2024, an annual innovation conference showcasing technology built with open data, and highlighting government innovations, cross-sector collaboration, and federal-community partnership. Register here.
// Responsible AI Symposium
January 18-19, Washington DC
The National Fair Housing Alliance is hosting a Responsible AI Symposium in Washington DC where researchers, innovators, civil and human rights experts, and regulators will explore algorithmic fairness. Register here.
// Virtual celebration of works entering the public domain
January 25 at 1pm ET
Creative Commons, Internet Archive, and other leaders are hosting Public Domain Day 2024, a virtual celebration of previously copyrighted works that are entering the public domain in 2024 (like the mouse that became Mickey). Register here.
// New collaboration between Georgetown & Ukrainian government
A new collaboration between Georgetown’s McCourt School of Public Policy and the Ukrainian government will connect Ukrainian digital leaders with McCourt and Georgetown faculty for an immersive learning experience in areas including public policy, digital public infrastructure, data privacy, and cybersecurity. Learn more.
/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /