February 13, 2024
New research on social media harms
Around the world, majorities of adults believe that social media companies bear “a great deal” of responsibility for making the internet safe, and a median of 70% of adults globally say it is very important that privacy settings for social media products are set at a high level by default.
Those are two of the findings in new research published today by Project Liberty Foundation documenting the worldwide concern that many adults have about social media’s potential harms for young people.
In late 2023, Project Liberty Foundation conducted an international survey of 14,000 people, aged 18-75, in seven countries across five continents: Brazil, China, France, India, South Africa, the UK, and the US.
Today, we’re publishing the findings in our insights report, “Publics Around the World: Kids – and Parents – Need a Safer Internet,” which provides clear evidence that citizens are looking to social media executives, as well as policymakers, to build safer social media platforms and a healthier tech ecosystem. In this week’s newsletter, we’re sharing five insights from our conversation with the researchers behind the report.
// Five Insights
Led by Project Liberty Foundation’s head of strategic insights, Dr. Jeb Bell, and research manager, Jessica Theodule, our first insights report draws on a comparative, multi-country survey to gain a better understanding of public attitudes toward social media and its impact on youth. In our conversation with Bell and Theodule, we touched on five key insights and implications from the research.
// Insight #1: Kids are not adults
One major takeaway from the report was that kids are not adults. Two-thirds of respondents globally believe that age 13—a frequent threshold used by tech companies to mark the age when someone can have their own online presence—is too young to be an active social media user.
This is not the same as majorities calling for bans on young people’s access to social media platforms. But if social media companies were to treat kids and adults differently, they might consider changes that are quickly becoming law in places like the EU, such as banning targeted advertising to kids and prohibiting addictive design features on social media platforms.
The simplicity of “kids are not adults,” however, belies the complex tradeoffs among safety, privacy, and free speech. Age verification laws and parental supervision measures could threaten a child’s privacy and reduce their ability to exercise free speech, find support, and access vital information.
The conclusions from Project Liberty Foundation’s insights report suggest that before laws impose restrictions or age verifications, social media platforms themselves must first create safe digital spaces for kids.
// Insight #2: People need more agency over their data
According to Bell, the research points to the ways that parents are stepping up to protect their kids online. There is evidence that people are recognizing the agency they have in shaping and managing their digital experience. But the research is also clear that “people are not satisfied with the level of agency and control they can exert over their online experiences or the online experiences of their kids.”
To Bell, the research revealed a common thread amongst people that personal and private data “should not be for consumption and harvesting by social media companies, especially amongst young people.”
Finding: 70% of respondents believe it is very important that privacy settings for social media products are set at a high level by default and that social media companies provide easy-to-use tools to control who has access to a child's data.
// Insight #3: The risks of social media for kids are a global issue
What made Project Liberty Foundation’s insights report unique, according to Theodule, was its global reach. While there are many studies evaluating the sentiments of people in the US or Europe, there’s a dearth of research on global sentiments about social media safety.
Across respondents in all seven countries, the belief that kids deserve safe and secure experiences online emerged as a fundamental shared value.
Finding: One finding that surprised Theodule is that both parents and non-parents were concerned about the harms of social media. “This speaks to a society-level concern. You don't see a difference where people who have children are more concerned about the harms of social media than those without kids,” she said.
// Insight #4: It's not anti-tech, it's safe tech
The sentiments from this research underscored an important distinction: respondents were not explicitly anti-tech; they were in favor of safe tech.
While people hold platforms primarily accountable for the harms they cause, they’re not necessarily rejecting those platforms, but calling on them to make their products safer. The research highlighted that while parents are taking steps to protect their kids and support safe online experiences, parents also believe that their individual actions are no substitute for tech companies making platform-wide changes to increase safety and reduce harm.
Finding: 66% of respondents said it was “very important” for social media companies to design their algorithms to ensure they do not persuade children to spend excessive time online.
// Insight #5: Support for government regulation varies
Along with social media companies, many people in the countries surveyed are strongly in favor of the government playing a major role in making online spaces safer.
Americans are somewhat less inclined than other publics to embrace government regulation as a general solution. Bell believes this may be linked to declining trust in “government and its ability to effect change.”
Pew Research, which has tracked Americans’ views of government since the 1950s, found that public trust in national institutions reached a record low in 2023 (it is worth considering how low trust shapes the ability of US elected officials to mobilize popular support for online safety legislation).
Finding: Lackluster enthusiasm for government intervention is evident on both sides of America’s political aisle, though it is more pronounced among conservatives. Just 45% of Democrats say the government should bear a great deal of responsibility for keeping social media and the internet safe, while even fewer Republicans (34%) say the same.
// From research to action
The purpose of this research is to spur action around solutions.
According to Theodule, one goal is to provide social media companies with data showing specifically which new features users want them to adopt to make platforms safer.
Another goal, according to Bell, is for policymakers to understand how universal and non-partisan public demand for online safety is.
In the decades-long campaign to rein in big tobacco, research played a crucial role in shifting public perception and passing legislation. And in the recent congressional hearing on how social media harms kids, research was cited again and again.
Translating research into action by social media companies and policymakers doesn’t happen overnight, but this new insights report provides a unique global perspective about what everyday people want for the future of tech and how companies can make their platforms safer. It contributes to the growing body of evidence that’s turning into a groundswell of momentum for change.
Other notable headlines
// 🤖 Researchers at Anthropic taught AI chatbots how to lie, and they became disturbingly good at it, according to an article in Business Insider.
// 📰 An article in The Atlantic explored why there’s a rise in conspiracy theories online. Information online has conditioned people to treat everything as evidence that supports their ideological positions on any subject.
// 🏛 In Silicon Valley, a California lawmaker unveiled an AI bill that would require companies to test AI models before release, according to an article in the Washington Post.
// 📜 Almost 2,000 years ago, the Mount Vesuvius volcanic eruption preserved a library of scrolls but left them unreadable. An article in Bloomberg explored whether AI can decipher them.
// 📱 Instagram and Threads will stop recommending political content, according to an article in The Verge.
// 🧠 An AI chatbot, which screens people for mental-health problems, led to a significant increase in referrals among minority communities in the UK, according to an article in MIT Technology Review.
// 🔒 Parental controls have failed, according to an article in The Wall Street Journal. Instead, companies should protect kids, and parents should teach teens to defend themselves.
// 🚫 An article in The Markup explored how arbitrary TikTok account bans affect the livelihoods of creators with disabilities.
// 🇵🇰 Pakistan’s former prime minister, who is in jail, is rallying his supporters with speeches that use artificial intelligence to replicate his voice, according to an article in The New York Times.
Partner news & opportunities
// Virtual event on leveraging AI to fight corruption
Thursday, February 15, at 9am ET
Ashoka will continue its AI for Justice series with “AI for Justice–Leveraging AI to fight corruption networks.” Paul Radu, the founder of the Organized Crime and Corruption Reporting Project, will share how to leverage AI to detect corruption and strengthen democracy. Register here.
// Meta whistleblowers unpack the latest Congressional hearing
Last week, Issue One’s Council for Responsible Social Media brought Facebook whistleblowers Frances Haugen and Arturo Béjar to unpack the Congressional hearing on social media. They explored what we can trust from the hearing, what we can’t, and what Congress can do next. Watch the livestream.
/ Project Liberty Foundation is advancing responsible development of the internet, designed and governed for the common good. /