August 22, 2023 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Photo from Adobe
Child online safety - a world tour
Globally, one out of every three internet users is a child under the age of 18, and screen usage has increased dramatically in the last few years (youth in the US are spending between five and eight hours a day staring at a screen).
Children's heavy internet use has prompted a range of policy responses from countries around the world. As the US inches closer to possible federal legislation on internet safety that would impose restrictions, greater content moderation, and new legal standards for tech companies, other countries like the UK, China, India, and Australia are grappling with the same tangle of interconnected issues: how to keep the internet safe for kids, how to protect the privacy of users of all ages, how to moderate content, and how to enforce such restrictions.
This week's newsletter explores it all. Buckle up for an around-the-world tour of how different countries are approaching the task of building a safer web for its youngest users. 🌎
//Why safer?
While research on whether internet use causes declining mental health remains inconclusive, there is growing concern that the internet isn’t safe for kids. This spring, the US Surgeon General issued a health advisory on the risks social media use poses to youth mental health.
Harmful content: It’s no surprise that harmful content is everywhere online. A report last year by the Center for Countering Digital Hate found that platforms like TikTok did not prevent their algorithms from recommending and boosting harmful content about eating disorders and suicide.
//The core tensions
From the US to the UK to China, countries around the world are in various stages of passing legislation designed to make the internet safer. Such approaches have common themes:
Greater scrutiny on tech firms, and expanded liability for harms they cause
Greater moderation against harmful content
Preventing tech platforms from collecting data from minors
Requiring parental consent and control over certain underage usage
Reducing (or prohibiting) screen time for youths
While most can agree that we should create safer digital spaces, making the internet safer for children involves trade-offs, and those trade-offs are sparking fierce debate among policymakers, tech companies, advocacy groups, and researchers.
Greater child safety → less privacy? Before you can protect kids, you need to know who is a kid, and this is challenging in digital spaces. One way to do this is to ask users to verify their age, but doing so can infringe on a person’s right to data privacy. Greater protection of kids requires more data about kids, and this is raising alarms among those concerned with privacy, especially when facial recognition might be used (it's worth remembering that digital footprints are already huge and tech firms regularly collect data on us).
Greater child safety → more censorship? Facing fines and restrictions, tech platforms might over-moderate the content on their platforms and imprecisely filter content that’s considered controversial (for example, content on LGBT issues, eating disorders, abuse, or mental health). As this newsletter has explored before, content moderation is extremely complicated: there is a fine line between banning harmful content and censoring protected speech. Further, what is considered harmful is often context- and culture-specific.
//Around the world
There’s a wave of legislation sweeping the globe:
🇺🇸 The United States: At the federal level, a panel in the US Senate moved two bills forward last month: the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0). Both bills are aimed at making the internet safer for kids, but are facing opposition from groups concerned about censorship and privacy. At the state level, states like Arkansas, Utah, Texas, and California have already passed legislation protecting youths from social media harms (one prominent example is California’s Age-Appropriate Design Code, modeled on the UK’s code of the same name).
🇬🇧 The United Kingdom: The UK aspires to be “the safest place in the world to be online,” a goal it hopes to advance by passing the Online Safety Bill later this year. The bill is designed to minimize an extremely wide range of harms kids could encounter online. It has been in the works since 2019, but has faced tumultuous UK politics and scathing criticism from tech platforms and digital privacy groups.
🇪🇺 The European Union: The EU has already passed the Digital Services Act, a set of standards that cracks down on hate speech, disinformation, and other harmful and illegal material on the web (with specific stipulations aimed at protecting children online). On August 25th, it will go into effect and platforms will have to comply, or be fined up to 6% of their global annual revenue.
🇨🇳 China: China has some of the most restrictive policies aimed at protecting kids and limiting their screen time. Two years ago, the government instituted a 3-hour-per-week limit for children playing video games, and this month it expanded those rules to require app developers, app store providers, and makers of smartphones and other smart devices to coordinate on a comprehensive “minors’ mode” for devices. The updated rules allow children between the ages of 8 and 15 to be on social media only one hour per day, while those under 8 would be allowed only 40 minutes per day.
🇦🇺 Australia: In June, Australia’s eSafety Commissioner issued legal notices to platforms like Twitter and Google requiring companies to explain how they’re enforcing basic online safety measures on their platforms. This is part of a larger “Safety by Design” initiative from the government to make online spaces safer and more inclusive.
🇮🇳 India: This year, the Indian Parliament introduced the Digital Personal Data Protection Bill, which is aimed at safeguarding children and protecting personal data online.
🇵🇭 The Philippines: UNICEF has classified 80% of Filipino children as vulnerable to online sexual abuse, and the Philippines was named the world's top source of online child sexual exploitation content in a 2020 study by the International Justice Mission. The Filipino government requires telecommunication companies and internet service providers to inform law enforcement agencies of child sexual abuse material and update their technology to block it, or face prosecution.
//From passage to implementation
It’s one thing for countries around the world to pass legislation focused on online safety for children. It’s another thing altogether to enforce and implement such regulations. Not only could major tech companies face fines for non-compliance, but changing the underlying settings of the internet around privacy, safety, content moderation, and data has implications for all of us.
For example, Wikipedia has signaled that it will not comply with the UK’s Online Safety Bill, given the proposed requirements to collect ages of users, something Wikipedia considers an invasion of its users’ privacy (it would also require a “drastic overhaul” of the site’s technical systems and would necessitate a change in how Wikipedia moderates its articles).
The laws and regulations from governments around the world will change the way the internet operates. As more regulations roll out, people of all ages will need to be part of the discussion (including the internet’s youngest users), and companies will need to step up. How recent laws are applied and enforced will mark a new era for a more regulated and governed web—and one we hope is better for our youth.
This is an area Project Liberty is watching closely, and it is organizing a collaborative effort to support this opportunity for change.
Other notable headlines
//🇧🇷 In Brazil’s favelas, a new tech-led initiative focused on expanding broadband internet is leading to new economic opportunities that are driven by the voices and input of residents, according to an article in WIRED.
//🤖 AI is setting off a great scramble for data, and companies who own vast troves of high-quality data are finding creative ways to profit, according to an article in The Economist.
//đź“” According to an article in The Atlantic, authors Stephen King, Zadie Smith, and Michael Pollan are among thousands of writers whose copyrighted works are being used to train large language models. More than 170,000 books are being pirated to train the newest AI tools.
//🎒 In a three-part video series, the Wall Street Journal explored how generative AI is being used in classrooms across the country, and what it means for the future of education.
//🇰🇪 The CEO of Kenyan outsourcing firm Sama, which was contracted to moderate Facebook posts, has said she regrets taking on the work, after her staff said they were left traumatized by graphic content on the social media platform, according to an article in The Guardian.
//⛪ Our partner All Tech is Human was profiled in an article in the MIT Technology Review that highlighted the organization’s mission to galvanize a community and build a movement around tech ethics.
//🏛 After years of little regulation on data, the US is finally tightening its grip on data privacy. An article in Fast Company explored what companies can do to stay ahead of the coming regulatory wave.
//💼 The best historical analog to the ways AI will disrupt the workforce is the European craft guilds of the Middle Ages, which regulated skilled professions across the continent and were generally resistant to innovation. An article in Project Syndicate drew the comparison between craft guilds and how today’s workforce might respond to AI.
Partner news & opportunities
// Tech and Public Policy Fellowship
The McCourt School of Public Policy has opened applications for the 2023-2024 Tech & Public Policy Fellowship. Apply by August 25th for a chance to collaborate with Facebook whistleblower Frances Haugen on projects aimed at reducing the negative impact of social media. Learn more and apply here.
// The Heartland Forward Builders + Backers Idea Accelerator
Heartland Forward has opened applications for the Fall Cohort of the Builders + Backers Idea Accelerator. If you have an innovative, entrepreneurial idea that you're passionate about, and you’re in one of their four targeted geographic regions, learn more and apply here by September 11th.
// Event in NYC on Generative AI & the creativity cycle
September 13th, 2023
Creative Commons is hosting a symposium at NYU's Engelberg Center on September 13th, exploring the intersection of generative AI and the creativity cycle. If you're in New York City and interested in learning more about how generative AI intersects with creativity, register here to be part of this live conversation.
/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /