Is age verification necessary to protect kids online?
This is part two in our series on age verification. If you haven’t checked out last week’s newsletter, read it first as it surfaces the core tensions around making the internet safer through age verification.
~~~
“Prepare the child for the road, not the road for the child” is an old parenting saying that advocates for building children's self-sufficiency rather than engineering safe environments around them.
How might this apply to online safety? Would even the most mature kid be able to navigate the internet’s “road” successfully?
In this second newsletter on age verification, we use the metaphor of the child and the road to explore the wide range of approaches aimed at protecting the internet’s youngest users.
While we outline various methods of age verification, the approaches below go beyond simply verifying someone’s age, because there is no silver bullet for balancing safety, privacy, and speech.
// Definitions
First, let’s clarify definitions. The 5Rights Foundation, a Project Liberty Alliance member, offers the following definitions for two related terms:
- Age verification: A system that relies on hard (physical) identifiers and/or verified sources of identification, which provide a high degree of certainty in determining the age of a user.
- Age estimation: A process that establishes a user is likely to be of a certain age, fall within an age range, or is over or under a certain age.
//
Age verification doesn’t need to be perfect to be effective. It just needs to be better than the alternative.
//
// Prevent access to the road: age verification
In our metaphor, age verification means attempting to prevent access to the road so that young people are kept off its most dangerous stretches. There are numerous ways to estimate or verify someone’s age online, and many of the child safety bills making their way through state legislatures use one or more of the following methods:
- Verify a user by having them submit documentary evidence that establishes their age range. Utah uses this method in its social media regulations.
- Because the US government already knows how old a person is through their Social Security number, the Social Security Administration could expand its services to verify identity and age for online platforms. Decentralized eID systems could also help verify ages.
- AI can be used for facial age estimation. A user uploads a selfie, and the AI estimates the person's age. Then it runs a “liveness check” to verify they’re a human and not a bot. The social media app Yubo uses this estimation method, and a white paper on AI-powered facial age estimation found that it has a 99% success rate.
- Instagram has been experimenting with additional methods to verify age, including asking friends to vouch for a user’s age.
- Third-party age verification platforms, like CLEAR, which is common in airport security, could verify someone’s age without sharing the underlying identity information with the tech companies themselves. Florida’s recent law, HB 3, would rely on age verification through third-party providers.
- Instead of verifying age at the platform level, devices could limit access based on the user’s age. A parent could give their child a phone but designate the child’s age in the phone’s settings to limit access to certain platforms or content.
Each of these methods is imperfect, but proponents argue that age verification doesn’t need to be perfect to be effective. It just needs to be better than the alternative.
// Make the road better: improve the internet
In addition to preventing kids from accessing the road, the road itself can be made safer for everyone.
This includes a range of approaches that aren’t directly connected to age verification but give internet users of all ages greater control over their online experience.
- New “safety by design” and “privacy by design” principles: Instead of putting the burden on users to go through multiple steps to enable safe and private settings on their online platforms, “safety by design” and “privacy by design” principles are platform-level default settings that take a proactive approach to safety and privacy. One instantiation of these by-design principles is the Age Appropriate Design Code, which has been incorporated into bills like Maryland's Kids Code, which requires “safe search” for kids, sets the highest privacy settings for minors by default, and prevents adults from messaging underage users.
- New approaches to content moderation: Content moderation is complex, but its umbrella field of Trust & Safety is evolving rapidly, and new approaches and technologies like AI could revolutionize how platforms moderate content and protect young users from dangerous material like CSAM. In a recent newsletter, we profiled the insights of Eli Sugarman, an expert on Trust & Safety, who advocates, among other things, for open-source tools to help emerging platforms manage their content effectively.
- New governance-level protocols: Protocols like Project Liberty’s DSNP are rooted in principles of data privacy, control, and interoperability. DSNP gives users greater control over their digital experience, enabling people to move between platforms if they’re unhappy with a given platform’s policies around data privacy, online safety, and content moderation. DSNP already underpins the social platform MeWe, and in the People’s Bid to acquire TikTok, Project Liberty founder Frank McCourt has put forward a vision to return the platform to the people through DSNP.
- Greater regulation of tech companies: An expanded regulatory environment for tech companies could lead to a more responsible private sector. The US government could pursue a suite of bills similar to those passed in the EU in recent years: regulating AI, moderating content, and protecting user privacy, especially for the internet’s youngest users.
// Prepare the child: digital skill-building
In addition to preventing access to the road and improving the road’s conditions, there are ways to prepare the child for it.
For many parents, the question of when to give their kids a phone and enable access to the internet hinges on whether their child is capable of navigating the inevitable complexities and pressures inherent in digital spaces.
No matter how accurately AI can estimate someone’s age from a selfie, and regardless of the latest legislative developments in Washington, what remains at the end of the day is each individual’s relationship to their technology and how they comport themselves online.
// The future of the internet
No approach will perfectly optimize for child safety, privacy, and free speech. Every approach involves trade-offs, and reasonable people will disagree about which principle to design for. But what’s undeniable is the conviction among parents, policymakers, and researchers that collectively we need to take back control of our lives in the digital age by reclaiming a voice, choice, and stake in a better internet.