October 17, 2023 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Image from Project Liberty
Online safety vs. data privacy
At the heart of building a more humane web, a fundamental tension exists between data privacy and online safety.
To build a better web, the internet needs to be safer and people’s data needs to be protected. This week, we’re exploring the nuances and complexities of the long-running debate between data privacy and online safety.
//Making the internet safer
While internet safety is important for everyone, it’s particularly important for kids. Consider this year’s warning from the US Surgeon General that social media may be harmful to kids’ wellbeing, Pew’s research finding that nearly one in two teens has been harassed online, and last week’s news that graphic videos of Hamas’s surprise attack on Israel have garnered billions of views on TikTok alone. The internet can be unsafe and unhealthy for kids and adults alike.
In response to growing concern about the links between social media use and declining teen mental health, efforts to make the internet safer generally fall into three categories:
Efforts to verify age: In the UK, the Online Safety Act, passed earlier this year, requires tech platforms to verify users’ ages before granting access to certain content, among a host of other requirements. Age-verification technologies often rely on a third party to confirm identity (via a credit card or government-issued ID, for example).
Efforts to limit usage & restrict access: In the US, the federal Kids Online Safety Act (KOSA) and New York’s SAFE bill at the state level would require tech companies to treat minors differently from adults. In China, new rules introduced earlier this year restrict minors’ smartphone use to two hours per day.
Efforts to moderate content: The European Union’s Digital Services Act is one of many laws holding tech companies accountable for the harms they cause online; it requires platforms to swiftly remove harmful content, from cyberbullying to content inciting violence. For their part, tech firms employ thousands of content moderators to remove harmful content from their platforms, but recent layoffs have shrunk internal teams focused on trust and safety.
When these efforts are codified into laws holding tech platforms accountable, it marks the end of an era of self-regulation where tech companies could set their own policies about content moderation and safety.
//The data economy
Data privacy is a crucial form of consumer protection in the 21st century. But in the process of making the internet safer, users face infringements on that privacy. For example:
Inaccurate user verification and algorithmic bias: As companies and governments attempt to keep kids away from harmful content, companies like Meta are beginning to experiment with facial recognition tools that estimate a user’s age, raising concerns over privacy and algorithmic bias (facial recognition software has been shown to be less accurate for people with darker skin). The biases encoded into algorithms are well documented: Consumer Reports’ Bad Input series (featured in a recent newsletter) explores algorithmic bias in healthcare, mortgage lending, and facial recognition. In healthcare, for example, biased algorithms have been found to refer White patients for enhanced services at higher rates than Black patients with similar healthcare needs.
Threatening private channels: The user-verification requirements in the UK’s Online Safety Act could undermine the privacy of end-to-end encrypted messages. Signal, Telegram, and other privacy-focused messaging apps are popular among activists resisting state-sanctioned surveillance (Signal has been used by protestors in Iran; Telegram has also been used by terrorists).
“Surveillance capitalism” business models: The more data users provide to authenticate themselves, the more data big tech platforms can monetize through targeted online advertising, the business model at the heart of surveillance capitalism.
Surveillance: The data that is collected online has huge implications beyond the digital economy. In the post-Roe world, law enforcement agencies in states that have banned abortions can still obtain court orders to access detailed online information about a person’s whereabouts. “Reproductive privacy,” or data about a person’s reproductive or sexual health details, has become the focus of new bills like the “My Body, My Data Act” introduced in Congress last year.
Collecting intimate data about a person can make the internet safer by shielding them from harmful content, but that same data can also increase their vulnerability and exposure to the worst parts of our digital ecosystem.
//The free speech question
To introduce another realm of complexity, there’s the question of how to balance 1) a desire for online safety where the most harmful content is removed from platforms with 2) the right to free speech in the US.
In the face of new laws and fines, there is a risk that tech companies might over-moderate the content on their platforms and imprecisely filter or censor content that’s considered controversial (e.g., content on LGBTQI issues, disordered eating, abuse, or mental health).
Last month, the US Supreme Court agreed to hear two cases alleging that over-moderation of content infringes on the First Amendment’s guarantee of free speech. The First Amendment limits the government’s power to dictate what kind of content might be harmful to minors.
//The balancing act
It turns out it’s quite difficult to keep harmful content away from kids and kids away from harmful content, while protecting both an individual’s data privacy and their right to free speech.
From state laws across the US to sweeping country-wide legislation in Europe to strong-arm policies limiting screen time for minors in China, we’ve entered a new era of major regulatory activity globally (though the US still lacks meaningful legislation at the federal level). This is an unprecedented moment in building and regulating the future of the internet. That’s why Project Liberty is launching its youth, mental health & tech campaign, Safe Tech, Safe Kids, to engage a plurality of voices in advocating for an internet defined by safety, privacy, security, and accountability.
While striking the right balance between these trade-offs is challenging, the results of different approaches will help policymakers, technologists, and parents continue to refine the balance among safety, privacy, and speech.
Other notable headlines
//🧬 An article in Noema Magazine explored the increasingly symbiotic relationship between humans and technology, and how it signals a new era in the evolution of life on Earth.
//📹 The graphic videos and images going viral from the terrorist attacks in Israel have tested social media’s rules and revealed stark differences between platforms, according to an article in The Washington Post.
//🏘 An investigation by The Markup found that Amazon Ring owners may be unknowingly emailing the police in an act of neighborhood surveillance.
//👤 Facial recognition software is speeding up the check-in process at airports, cruise ships, and theme parks, but, according to an article in The New York Times, experts are worried about risks to security and privacy.
//📱 An article in The Atlantic discussed how the war in Israel underscores how broken social media has become. The idea of a digital global town square is in ruins.
//🏛 An article in The Wall Street Journal revealed how ads on your phone can aid government surveillance by enabling federal agencies to buy bulk data.
//👩‍⚖️ An article in The New York Times found that federal judges in three states have blocked children’s privacy and parental oversight laws, saying they violate free speech rights.
//👂 A podcast from WIRED explored where crypto goes from here, one year after the collapse of FTX.
Partner news & opportunities
// Project Liberty’s new campaign on kids online safety
October 23rd at 1pm ET
In partnership with the Safe Tech, Safe Kids campaign, Issue One will host an in-person and virtual event: “Safe Tech, Safe Kids: Confronting Social Media’s Harms to Youth.” Safe Tech, Safe Kids is a coalition of organizations, including Project Liberty, 5Rights, and Issue One, that believe tech should be a safe space for young people. This first event of the campaign will bring together stakeholders from children’s advocacy, tech reform, pediatrics, education, and other communities for a comprehensive look at how social media impacts children. Register for the livestream here and read more about the event here.
// Announcing the Next Gen Tech Fellowship
Project Liberty and Aspen Digital, a policy program of the Aspen Institute, launched the Next Gen Tech Fellowship. This initiative will train and provide a platform for young voices seeking to build an interconnected world that is accessible, safe, and inclusive—both online and off. Five fellows were chosen from a diverse group of US-based Gen-Z leaders, ages 18-25, to participate in the 2023 pilot of the Next Gen Tech Fellowship.
// Virtual FairPlay event on harmful edtech
October 26th at 4pm ET
Fairplay is hosting a free webinar where parent and former teacher Kailan Carr will discuss how she got bad edtech out of her son's classroom and how you can do the same. Learn more and register here.
/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /