Should we verify the ages of internet users?
Last month, the website WikiHow, which claims to be the most trusted how-to site on the internet, published the article, “How to Convince Your Parents to Get You a Cell Phone.”
For young people who desperately want a smartphone, it offers a step-by-step process to persuade their parents (one step teaches kids how to play on their parents' emotions).
According to a 2021 report by Common Sense Media, a Project Liberty Alliance member, 42% of kids in the US have a phone by age 10. By age 12, it’s 71%. By 14, it’s 91%.
Yet there is no clear answer for parents about when is the right time to give their child a phone. Experts are quick to say it depends on multiple factors, including a child’s maturity and dynamics at home and at school. A parent’s decision to give their child a smartphone represents not just access to a device, but almost constant access to everything on the internet.
When kids should get their first phone is a hotly debated topic, as is a set of related questions playing out in public policy circles, in the media, and at kitchen tables alike:
- In the name of safety, should we limit or ban the internet’s youngest users from accessing content online?
- How might age verification measures protect children from harmful content?
- How might those measures threaten privacy and free speech?
- Should children be treated differently than adults?
These issues are nuanced and complex, so we are dedicating the next two newsletters to this topic.
In Part One of this two-part series, we explore the tensions between the principles of safety, privacy, and speech, and how they manifest in age verification measures and policy efforts. Next week, we’ll explore the range of possible solutions for protecting kids, protecting privacy, and protecting speech.
//
The central tradeoff between safety and privacy is that to protect kids online, you need to know who is a kid, and that requires getting data from them.
//
// The latest in age verification
Many states have taken the initiative to pass new legislation to protect kids, leading to a fragmented patchwork of online safety regulations. In 2023, more than 60 bills were introduced at the state and federal levels requiring greater parental consent, age restrictions, or other safety measures. By mid-2024, states including Florida, Louisiana, Texas, and California had passed laws focused on online child safety.
For example, Florida signed HB 3 into law in March. It requires:
- Social media platforms to prevent kids under 14 from creating accounts.
- Online platforms to verify their users’ ages.
- Consent from a parent or guardian for a 14- or 15-year-old to maintain social media accounts.
However well-intentioned, the effort to protect kids online has significant legal consequences, as some argue these laws are unconstitutional. Last week, the US Supreme Court agreed to hear an appeal from the adult entertainment industry seeking to overturn a Texas law that requires pornographic websites to verify the age of their users. Louisiana and Utah have faced similar lawsuits claiming that age restrictions threaten privacy and freedom of speech, but in both states those challenges were dismissed. In Arkansas, a federal judge blocked an online child safety law.
// The vacuum at the federal level
The flurry of legislative activity at the state level is making up for a lack of progress at the federal level. The US is one of the few countries that has not passed comprehensive federal privacy legislation. As recently as May, experts believed that could change with the American Privacy Rights Act (APRA), a bill that seemed to have the bipartisan support needed to make its way through Congress. But that bill now appears doomed.
Another prominent bill, the Kids Online Safety Act (KOSA), which also faces an uphill battle to become law, does not explicitly require age verification, but critics caution that it could lead to age verification measures in the future.
// Nuances & tradeoffs
Legislative progress at both the federal and state levels is stalled by differing beliefs about the tradeoffs between safety, privacy, and speech.
// The push to make the web safer
The growing interest in age verification is a response to mounting concern about the risks facing the internet’s youngest and most vulnerable users: from cyberbullying to access to illicit drugs to harmful content like nude deepfakes.
- According to global research by Project Liberty, 65% of adults in seven countries were “very concerned” that kids might be subjected to cyberbullying or harassment.
- Researchers like Jonathan Haidt, a Project Liberty Fellow, believe the harms are serious enough that kids under 16 shouldn’t be on social media platforms at all. Age verification is one of four tenets in his book The Anxious Generation.
- The US Surgeon General issued an advisory in 2023 on social media usage and youth mental health and, more recently, has called for a warning label on social media platforms.
// The tradeoff between safety and privacy
The central tradeoff between safety and privacy is that to protect kids online, you need to know who is a kid, and that requires getting data from them. At a time when our data is regularly bought, sold, and used to target ads, critics of age verification worry that it will infringe on individual privacy, placing more power in the hands of large tech companies.
- Privacy proponents like the Electronic Frontier Foundation believe that age verification and ensuing age restrictions will incentivize tech companies to collect even more data about users. EFF says that “age verification systems are surveillance systems.” They’re not alone: STOP, the Surveillance Technology Oversight Project, is concerned that age verification will heighten online surveillance and harm the very communities officials seek to protect.
- Privacy proponents are not opposed to online safety, but they are opposed to age verification or age estimation tools (powered by AI) that threaten fundamental rights to privacy, anonymity, and control over data.
// The tradeoff between safety and speech
Free speech advocates in the US caution that limiting access to platforms limits freedom of speech. David French, a New York Times opinion columnist, wrote in March: “When you regulate access to social media, you’re regulating access to speech, and the First Amendment binds the government to protect the free-speech rights of children as well as adults.”
Not only could age verification measures prevent citizens from accessing information, but they could also lead platforms to engage in overly cautious moderation practices to avoid legal liability. The result? De facto censorship of content considered harmful.
// The complexity of determining what's safe and what's not
While many are aligned on wanting kids to be safe, there is far less agreement about what constitutes harm. As we’ve covered in previous newsletter editions, content moderation is frequently infused with cultural perspectives about what’s safe and what’s not. Restricting young people’s access to platforms that offer community and self-expression risks isolating those who seek solidarity online. Content that’s merely controversial could be dismissed as harmful, depriving people of resources for issues like drug addiction, disordered eating, or mental health. Too often, what is considered harmful is dictated by adults making decisions on behalf of young people.
// No easy answers
There are genuine tradeoffs between protecting kids online, protecting data privacy, and protecting freedom of speech. Next week in Part Two, we’ll dive into possible ways forward, but before we do, we’d love to hear from you. What’s your perspective on these tradeoffs? What nuances are missing?
Stay tuned for Part Two next week!