How misinformation could affect the election
Will AI-powered misinformation sway the upcoming US election?
The US election is only 28 days away, and technologists, researchers, policymakers, and citizens are bracing for the worst, as generative AI technology has dramatically increased the amount of misinformation online.
In one prominent example from earlier this year, a series of fake robocalls in New Hampshire used an AI-generated impersonation of President Biden's voice to urge Democrats not to vote in the state's primary.
As we near the 2024 US Presidential Election, the (possible) role that foreign misinformation played in the 2016 US election looms large. But despite the proliferation of AI-generated false news, progress has been made in the last eight years—from content moderation policies at tech firms to laws passed by numerous states to new AI tools that identify deepfakes.
Will it be enough? Let’s take a closer look.
// The growth of misinformation
Misinformation in the age of AI takes on many forms: news sites built completely by AI, social media posts and images, misguided AI chatbot answers, and audio and video deepfakes.
- A paper released by Google researchers earlier this year found meteoric growth of image-related misinformation starting in 2023.
- In May of last year, NewsGuard, an organization that tracks misinformation and a Project Liberty Alliance member, identified 49 news and information sites written entirely by AI chatbots. Today, less than 18 months later, it has identified more than 1,000 such sites.
- Over the summer, the US Justice Department announced that it had disrupted a Russian propaganda campaign using fake social media accounts, powered by AI. Meanwhile, OpenAI banned ChatGPT accounts that were linked to an Iranian group attempting to sow division among US voters.
// Citizens are alarmed
94% of US citizens are concerned that the spread of misinformation will impact the upcoming election.
// The bias for action
The ubiquity of AI-generated misinformation requires a multidimensional response.
What big tech is doing
- Meta has made changes to its algorithms in recent years. In 2021, Meta decided to push political and civic content lower in its feeds. Earlier this year, the company announced that it would deprioritize the recommendation of political content on Instagram and Threads.
- X made changes to its AI chatbot after five secretaries of state warned it was spreading election misinformation.
- TikTok outlined the steps it's taking to protect election integrity in 2024, including deterring covert influence operations and preventing paid political advertising on its site.
What policymakers are doing
- In 2019, Texas became the first state to ban the creation and distribution of deepfake videos intended to hurt a candidate or influence an election.
- Today, 26 US states have passed or are considering bills regulating the use of AI in election-related communications, according to an analysis by Axios.
At the federal level, agencies like the Federal Election Commission have walked back efforts to control election-related misinformation, with the FEC citing a lack of statutory authority to make rules about misrepresentations using deepfake audio or video. An article in The Atlantic argued that the deeper issue at the federal level is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality.
// Fighting misinformation vs. protecting speech
Taking down misleading or false content can encroach on protections for free speech. In one notable example of overreach, in the lead-up to the 2020 Presidential election, the FBI alerted Facebook that Russian operatives were possibly attempting to spread false information about Hunter Biden and his relationship to a Ukrainian energy company. Based on this information, Facebook (now Meta) suppressed reporting by The New York Post about emails found on Hunter Biden’s laptop.
The story later turned out to be true, not Russian disinformation, and Mark Zuckerberg conceded that Meta shouldn't have demoted it in its newsfeeds.
“In 2021, senior officials from the Biden Administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree,” he wrote in a letter to the House Judiciary Committee.
Tech platforms and regulators need to walk a tightrope: identifying potentially misleading or fake information, fact-checking it, and deciding if and when to take it down, all while protecting speech.
// New incentive structures for social media
Social media is not the only source of misinformation online, but it's becoming increasingly clear, and new research backs up the insight, that the incentive systems of social media platforms encourage users to spread misinformation.
How? One study by researchers at Yale found that a small number of Facebook super-users and super-sharers had an outsized impact on the spread of misinformation: the most habitual 15% of Facebook users were responsible for 37% of the false headlines shared. In 2021, the Center for Countering Digital Hate found a similar concentration: just twelve individuals spread two-thirds of the anti-vaccination content on Facebook and X.
Combating misinformation requires a number of approaches, policies, and technologies, but no solution will be effective if we don’t address the underlying incentive systems and architecture of our digital spaces.
Project Liberty is focused on innovations at both the governance and protocol levels of the web. Changing the incentive structures that underpin social media platforms like TikTok could change the content that gets shared and elevated, as well as the amount of control users have over their online experience. This vision anchors The People’s Bid to acquire TikTok.
// The battle for truth
And yet, redesigning the incentive structures of the internet won’t happen before November’s election. In the meantime, not all hope is lost.
For individuals, there are steps everyone can take (especially you, Gen Z and Millennials!) to avoid being susceptible to misleading claims and misinformation online.
Have you seen election-related misinformation recently? Send us a screenshot!