December 5, 2023
How to protect democracies against disinformation
Last week, Meta announced that it shut down nearly 4,800 fake social media accounts, created in China, that impersonated Americans. The accounts were designed to spread polarizing and false political content ahead of next year’s elections in the US. And since 2017, Meta has detected over 200 clandestine influence campaigns by foreign groups in 68 countries.
With national elections in 76 countries in 2024, affecting half of the global population, foreign groups (often acting on behalf of governments) are stepping up their digital campaigns to influence elections, undermine truth, polarize voters, and sow discord.
This week, we’re highlighting the growing threat that foreign digital influence campaigns pose to democracies around the world, and what we can do to protect the integrity of democratic processes.
//The perfect storm
There are several reasons why elections in 2024 are especially vulnerable to disinformation campaigns.
The year of elections: More people worldwide will have the opportunity to vote in 2024 than any other year in history.
Declining trust in institutions and government: Globally, trust in governments is declining: 66% of respondents to Edelman’s Trust Barometer believe their country’s government is purposely trying to mislead people, and only 36% believe government is a unifying force in society.
Fewer trust and safety staff: Trust and safety teams at platforms like X, Meta, and YouTube have been sharply cut back. These teams are critical to detecting and moderating harmful content.
Yoel Roth, the former head of Twitter’s Trust and Safety team, said at an event at UCLA this fall, “I’m worried about the fact that in 2024, platforms will have fewer resources in place than they did in 2022 or in 2018, and that what we’re going to see is platforms, again, asleep at the wheel.”
//
Since 2017, Meta has detected over 200 clandestine influence campaigns by foreign groups in 68 countries.
//
//How digital influence campaigns work
When spreading disinformation, foreign groups follow a proven playbook to advance their own geopolitical interests and sow instability and confusion among their adversaries.
Step 1 - Produce content: A piece of digital media (a video, an article, a tweet) is created with AI or commissioned from local journalists and placed in the local press. In Latin America, Russian companies commissioned a network of local writers to produce pro-Russian articles.
Step 2 - Amplify content via social media platforms: Once the content has been produced, foreign governments and other actors leverage tech platforms to make it go viral. In India, “disinfluencers” have spread disinformation about the Israel-Hamas conflict to their followers.
Step 3 - Access the local news cycle to drive political division: If incendiary content scales, it can stoke existing partisan divides. In Taiwan, a video containing anti-American disinformation generated in China went so viral locally that newspapers and talk shows treated it as real news and began discussing it, further politicizing the issue and deepening divisions between local political groups.
//Around the world
Countries in every region of the world are facing a growing threat.
Overthrowing governments in Africa: In Africa’s Sahel region, Russian disinformation campaigns have been implicated in the toppling of multiple governments of former French colonies, from Burkina Faso to Mali to the Central African Republic.
The battle for truth in the Middle East: Disinformation campaigns throughout the Middle East have intensified in recent months with the Israel-Hamas conflict. Ian Bremmer, a foreign policy expert, has suggested that disinformation on X is “being algorithmically promoted,” and the EU has threatened X with fines if it doesn’t rein in fake content on its platform.
China reshaping the global narrative: In October, the US State Department released a comprehensive report warning that China is investing billions in a global information campaign to sway decision-making in dozens of countries, undermine US interests, and shape global perceptions of China in its favor.
Foreign governments aren’t the only ones spreading disinformation and sowing instability. Domestic groups are also leveraging digital platforms to advance their agendas. In India, a disinformation campaign run by the Indian military targeted dissidents and journalists in the Kashmir region, and the government has used WhatsApp to spread false information.
//How to protect democracies
While the threat has never been greater, there are effective strategies to limit the impact of disinformation campaigns.
Government detection: The US State Department’s Global Engagement Center has preemptively disputed false or misleading information, proactively disclosing Russian influence campaigns throughout Latin America to blunt their power.
Platform reform: Democracy by Design, a report issued earlier this year by multiple members of the Project Liberty Global Alliance for Responsible Tech (including Issue One, the Center for Humane Technology, and Public Knowledge), outlines specific steps platforms can take to ensure election integrity, such as bolstering resilience, countering election manipulation, and improving transparency.
Updated laws: Many countries have laws against foreign interference in elections, but some need updating for the digital context. Taiwan, for example, has laws against election interference in broadcast media, but they don’t cover print or digital outlets. In Thailand, despite a vibrant digital activist community, existing laws give state agencies broad power to control digital information.
Strengthen civil society: A Brookings report recommended a strong civil society response to disinformation campaigns throughout Asia, supported by research and data sharing with government officials. In October, the Ford Foundation launched the Digital Resilience Network to provide frontline organizations across the Global South with resources to tackle online surveillance, censorship, and misinformation.
Improve media literacy: Deepfakes can be convincing and disinformation can be difficult to detect, which makes media literacy crucial. Reuters profiled the work of journalists in Nigeria delivering media literacy programs throughout the country to fight misinformation.
Invest in local journalism: Nobel Peace Prize laureate Maria Ressa, of the Philippines-based news organization Rappler, launched a 10-point action plan last year to address the information crisis, with a focus on rebuilding independent journalism as an antidote to tyranny.
//Organizations on the front lines
Project Liberty Alliance members are leading the charge in protecting democracies against the threats posed by foreign disinformation campaigns.
Bellingcat is an independent investigative collective of researchers, investigators, and citizen journalists conducting open-source research that exposes everything from disinformation campaigns to war crimes.
Starling Lab is an academic research initiative exploring the latest cryptographic methods and decentralized web protocols to restore trust in digital media.
The Global Disinformation Index aims to disrupt the business model of disinformation by providing independent, transparent data to advise policymakers and business leaders about how to combat disinformation.
With more people around the world voting next year than ever before, protecting the democratic process from digital influence campaigns has become an urgent matter of national security.
Project Liberty news
Project Liberty’s technical report on open-source standards was approved by the International Telecommunication Union yesterday. The submission highlights the importance of open-source standards in driving the development of immersive worlds, presents a use case of DSNP (the Decentralized Social Networking Protocol), and explains key concepts such as social graphs, data agency, and decentralization.
There are over 4,000 ITU-T Recommendations in force. Our goal is for the DSNP recommendation to come into force so it is recognized as a standard and can be widely adopted by the industry.
Other notable headlines
// 🤖 In November, OpenAI announced that anyone can create custom chatbots, but some of the data they’re built on is easily exposed, according to an article in WIRED.
// 🏫 An investigation by The Markup examines how college students are subject to a vast and growing array of watchful tech, including homework trackers, test-taking software, and license plate readers.
// 🕵 Meta no longer receives notifications of global influence campaigns from the Biden administration, ending a longstanding federal practice of informing tech companies like Meta. An article in The Washington Post explores why.
// 🥳 An article in The Atlantic reflects on the one-year anniversary of ChatGPT’s public launch, arguing that the technology is less important than the ideas it represents.
// 📱 Montana’s TikTok ban has been blocked by a federal judge, who ruled that the state used the ban to target China rather than to protect Montana residents, according to an article in Ars Technica.
// 🏛 As pressure mounts from media reports and government investigations, Meta is expanding child safety measures across Facebook and Instagram, according to an article in The Verge.
// 🤔 Project Liberty Alliance member New_ Public interviewed Amy Zhang, who is leading the development of PolicyKit, a computer-assisted governance tool, on how to democratize power in online communities.
Partner news & opportunities
// Open call: receive £3,000 for your idea related to ownership in art and culture
RadicalxChange and Serpentine Arts Technologies are seeking radical new ideas that evolve our understanding of the concepts and practices that underwrite ‘ownership’ in art and culture. Learn more here and submit your ideas by December 20th.
// New digital tool estimates damage in Gaza
Bellingcat has released a new tool, originally built to estimate damage in Ukraine, that can estimate the number of damaged buildings and the pre-war population in a given area within the Gaza Strip. Check it out here.
// Partnership on AI seeking public comment
Partnership on AI released its Guidance for Safe Foundation Model Deployment for public comment. The guidance is a framework for AI model providers to responsibly develop and deploy AI foundation models, and stakeholders can help shape this collective effort through the public comment process. Submit your public comment here by January 15th, 2024.
/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /