Our personal information is all over the internet.
From our bank account details to our social media profiles to our phone’s location data, our digital identity leaves a footprint online, and that footprint is huge.
This week we’re exploring digital identities: what they are, how they work, and their fundamental challenges. We’ll also dive into the work of Project Liberty's McCourt Institute to spur the use of human-centered digital identities that guarantee privacy and security.
What is a digital identity?
Digital identity refers to all the information online that’s used to represent a real-life person (or organization). Our digital identity spans the accounts we have, the passwords we maintain, the profiles we’ve created, and other online information we’ve generated or that represents us (like our social graph, our driver’s license number, and biometric data from a smartwatch).
Here is an (incomplete) list of what makes up our digital identity:
Personal identifiers (like birthday and social security number)
Government identification (like a driver’s license number or passport number)
Usernames & passwords
Bank account information
Vaccination records
Location data
Social graph (or the digital representation of all the relationships between members of a social network)
Financial information
Personal biometric data
Online transaction history (our purchase history, credit card activity, and other online transactions)
The problems with sprawling digital identities
As we spend more time online, our digital footprint grows and our digital identity gets more unwieldy to manage. There are a number of problems with how digital identities are managed today:
Privacy & Security: Because a person’s digital identity often contains sensitive information, privacy and security are top concerns. Data breaches and unsecured data can lead to identity theft and fraud. The number of fraud complaints in the US doubled between 2019 and 2021, according to the Federal Trade Commission. Despite claims that data is anonymized, research has found it’s actually quite hard to disconnect someone’s data from their identity.
Ownership & Control: We don’t actually own or control our digital identities. That information is distributed across hundreds of platforms and accounts. Many sites aren’t transparent about the data they collect about a user, meaning that a person’s digital identity isn’t fully in their control.
Discrimination: Characteristics of one’s digital identity have been used to discriminate against certain groups. In 2022, Meta settled a lawsuit with the US Justice Department after the Justice Department found that its algorithms were using characteristics like race, religion, sex, disability, familial status, and national origin to target people with specific housing advertisements (a violation of the Fair Housing Act).
Surveillance: Digital identities can enable state-sanctioned surveillance as well. In China, the government collects massive amounts of its citizens’ behavioral and location data linked to individual identifiers. Since COVID, “everyone in China must now submit location data and recent travel data to maintain digital QR codes on their smartphones,” according to a recent article by NPR. Some are also concerned that India’s biometrically secured national identification system could be used for surveillance and control.
Fragmentation: While some nation-states are creating digital identity systems (like a digital driver's license or passport) that can be used in multiple contexts, the profile information in one account often doesn’t carry over to another. Our digital identities are fragmented, and interoperable digital identities are uncommon.
Next generation of digital identity
The future digital version of ourselves will likely look very different from the disorganized, fragmented information that currently makes up our digital identity.
Leaders in the better web movement know that improving digital identity is fundamental. There are a number of opportunities to upgrade our digital identities to be more usable, secure, and self-sovereign.
Leverage human-centered policies to build new digital identity systems: As nation-states begin exploring new types of digital identities (like national identification cards), they’ll need to take a user-centered approach. The Human Technology Foundation, an international think tank, has been exploring the potential of user-centered approaches to digital identification efforts.
Create user-owned and controlled digital identities: Instead of major private corporations deploying digital identities for their users, new technologies (like web3) can place that control in the hands of end users, leading to a more decentralized system of digital identities where users decide when and how their information is shared. (The social media site MeWe, with 20 million users, is leveraging the blockchain to enable user-controlled digital ID.) Such web3 digital identifiers, built on cryptographic keys, wallet addresses, and blockchain-based identity verification protocols, let users control their digital identities, though these new technologies can still be hard to use (a minimal sketch of the key-based approach follows this list).
Make digital identities interoperable: Instead of each platform and website having its own profile and account, new protocols like Project Liberty’s DSNP create interoperable standards, and identity blockchains like KILT make digital identities more manageable across services on the social web.
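To make the key-based approach concrete, here is a minimal sketch in Python (using the cryptography package) of how user-controlled identity works in general: the user holds a private key that never leaves their device, the public key serves as their identifier, and proving control means signing a challenge rather than handing a password to a platform. The identifier encoding and the challenge flow shown here are illustrative assumptions, not the specifics of DSNP, KILT, MeWe, or any particular wallet.

```python
# Illustrative sketch of key-based, user-controlled identity.
# Assumes the "cryptography" package (pip install cryptography);
# the identifier encoding and challenge flow are simplified examples.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The user generates a keypair locally; the private key stays on their device.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 2. The public key (hex-encoded here) acts as the user's identifier,
#    instead of a username owned by a platform.
identifier = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
).hex()

# 3. A service verifies control of the identifier by asking the user to sign
#    a random challenge; no password or personal profile needs to be stored.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print(f"Control of identifier {identifier[:16]}... verified")
except InvalidSignature:
    print("Verification failed")
```

The same pattern underlies decentralized identifiers and wallet-based login: the service never holds the user’s credentials, it only checks a signature.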
There is also a massive opportunity to leverage the next generation of digital identity tools for financial inclusion: 1.7 billion people globally don’t have access to formal financial services. New forms of digital identity can help the unbanked and underbanked gain access to credit and other financial services by creating verifiable identities. A recent McKinsey report predicts that unlocking digital identity tools is a powerful lever for inclusive economic growth in emerging markets, leading to between 3% and 13% growth in a country’s GDP by 2030.
Underpinning these efforts are a few guiding principles:
Individuals should have the right to own their digital identity;
Decentralized digital identity systems can foster trust by prioritizing privacy and security by design; and
Multi-stakeholder ethical technical governance of digital identity provides a complementary path to regulation.
Here’s more:
The McCourt Institute is hosting an in-person and virtual event on the intersection of web3 and digital identity on June 14 (11:30 am ET/5:30 pm CEST), with speakers from the OECD, Human Technology Foundation, Thales, KILT Protocol, and the European Commission. Register here.
The World Economic Forum has developed a Digital Identity & Blockchain Toolkit, designed as a learning module for those who want to explore the topic further.
📰 Other notable headlines
🎒 As ChatGPT and other AI tools gain increasing adoption across college campuses, professors are turning to the oral exam. The Wall Street Journal reported on how the resurgence of the old-school testing method—dating back to ancient Greece—is being used to combat plagiarism, cheating, and AI.
💀 Even the heads of OpenAI and Google DeepMind acknowledge that artificial intelligence could lead to extinction. The Centre for AI Safety released a statement outlining a number of disaster scenarios, which were endorsed by experts, according to an article in Time Magazine.
📣 New tools like Substack, Patreon, and OnlyFans have created opportunities for millions of creators, writers, and podcasters to become a “media of one.” But for all the ways the internet has allowed independent creators to thrive, it’s also given rise to niche propaganda, according to an article in Noema Magazine.
🚀 Last year, Silicon Valley was struggling: from layoffs to falling tech stocks to pessimism about everything from crypto to the metaverse. Then came ChatGPT. The Washington Post reported on how the AI boom has breathed new life into the tech industry.
📱 The ad-supported internet, which is dominant in today’s digital world, is about to get worse. An article in The Atlantic explored how the flawed advertising model will produce low-quality, AI-generated content that likely won't get read.
📹 In a reversal of its existing policy on election integrity, YouTube has decided it will leave up content claiming fraud, errors, or glitches occurred in the 2020 presidential election. The reason, according to an article in Axios, is YouTube is concerned that its existing policy could have the effect of “curtailing political speech without meaningfully reducing the risk of violence or other real-world harm.”
🇨🇳 China is moving quickly with plans to regulate AI. Its draft regulation, released in April, sets the stage for rules that some in the West are already calling for, and some that might seem like an invasion of privacy, according to the MIT Technology Review.
🧠 Last week, Neuralink, Elon Musk’s brain implant company, received regulatory approval to conduct the first clinical trial of its experimental device in humans. An article in The Guardian asks how concerned we should be, given the billionaire’s questionable record of responsibly overseeing tech development.
🗣 Partner news & opportunities
The Integrity Institute released a new guide detailing best practices for election integrity online. The guide is the first in a planned series within their Elections Integrity Program on best practices online companies can follow to better support healthy elections across platforms, particularly those that are newer or smaller. Learn more here.
Aspen Digital, in partnership with Omidyar Network, has announced the Council for a Fair Data Future, which brings together 30 experts, advocates, and practitioners to imagine an equitable data economy, share insights, and develop recommendations.
The U.S. Surgeon General released an advisory on social media and youth mental health. The advisory cites research, raises concerns, and outlines specific steps that policymakers, tech companies, parents, researchers, and children can take. Learn more here.