For more than a decade, researchers, journalists, and computer scientists have been sounding the alarm about algorithmic bias: the systematic, repeatable errors in an algorithm that produce unfair results, whether by disadvantaging one group or giving another a leg up.
With algorithms increasingly underpinning new technologies like artificial intelligence, and an AI arms race afoot among big tech firms, algorithmic biases could become more pervasive than ever.
This week we’re exploring algorithmic bias: what it is, its impacts across gender and race, the changemakers tackling it, and what we can do to address the biases embedded into algorithms.
The myth of code neutrality
Algorithms are trained on datasets. If those datasets are not representative samples, or if the data are classified incorrectly, the algorithm's judgments and analyses will be skewed.
The biases an algorithm produces are a function of who built it, how it was developed and tested, and how it is ultimately used. Once an algorithm is built, however, it is harder to fix than to design a more neutral one from the ground up.
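To make the data problem concrete, here is a minimal sketch in Python, using entirely simulated data (not any system mentioned in this piece): a model tuned on a sample dominated by one group ends up with a noticeably higher error rate on the underrepresented group, even though no step in the code is intentionally unfair.

```python
# A toy illustration of sampling bias, assuming hypothetical, simulated data.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, score_shift):
    # Each person has an underlying "signal"; the true label is signal > 0.
    signal = rng.normal(loc=score_shift, scale=1.0, size=n)
    labels = (signal > 0).astype(int)
    # The model only observes a noisy proxy for the signal.
    feature = signal + rng.normal(scale=1.0, size=n)
    return feature, labels

# Group A dominates the training data; group B is underrepresented
# and its feature distribution is shifted.
feat_a, lab_a = make_group(n=9_500, score_shift=0.5)
feat_b, lab_b = make_group(n=500, score_shift=-0.5)

X = np.concatenate([feat_a, feat_b])
y = np.concatenate([lab_a, lab_b])

# "Train" the simplest possible model: pick the cutoff that minimizes
# overall training error. With 95% of examples from group A, the chosen
# cutoff is effectively tuned to group A alone.
thresholds = np.linspace(X.min(), X.max(), 200)
errors = [np.mean((X > t).astype(int) != y) for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("A", 0.5), ("B", -0.5)]:
    feat, lab = make_group(n=5_000, score_shift=shift)
    err = np.mean((feat > best_t).astype(int) != lab)
    print(f"group {name}: error rate {err:.1%}")
```

Nothing in the code is malicious; the skew comes entirely from who is, and isn't, well represented in the training data.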
Sneha Revanur, who grew up in Silicon Valley with the perception that "technology could do no wrong," was shocked to learn how algorithms were being used in the criminal justice system.
Here, technology wasn't some neutral, objective tool, but rather something that could keep people behind bars and exacerbate racial inequities.
At 15, she jumped into tech advocacy by volunteering to defeat California’s Proposition 25, which would have replaced the state’s cash bail system with a risk-based algorithmic approach. The campaign to defeat Proposition 25 was successful, and Revanur built upon that success to found Encode Justice, a youth-led movement of activists and changemakers fighting for human rights, accountability, and justice under AI.
Encode Justice has developed a local chapter network of 600 high school and college volunteers, creating a powerful ground game across 40 US states and more than 30 countries. At the chapter level, Encode Justice educates the public in its communities, hosts workshops and town halls, engages with lawmakers to advocate for bills like the Algorithmic Accountability Act, and conducts policy research.
Proven bias in algorithms
Algorithmic bias has been the focus of researchers for years:
UCLA professor Safiya Noble has been a leading voice in naming the ways algorithms can be tools of oppression. Her 2018 book, Algorithms of Oppression: How Search Engines Reinforce Racism, revealed the negative biases against women of color embedded in search engine algorithms.
Princeton professor Ruha Benjamin has long explored the relationship between racial inequity and technology. Her 2019 book, Race After Technology, explains the ways our technologies can reinforce white supremacy.
Algorithmic bias is already well documented across everything from consumer lending to healthcare, but as AI algorithms become increasingly ubiquitous, the use cases will multiply.
Credit limits: The Apple credit card drew criticism when people found that the algorithm setting its credit limits offered smaller lines of credit to women than to men.
Encode Justice maintains an extensive list of studies, articles, and examples of algorithmic bias on its website.
Why algorithmic bias is hard to detect
Algorithmic bias is hard to detect. Revanur shared with Project Liberty three reasons it flies under the radar as an invisible, structural form of discrimination:
Perception of neutrality: Unlike people, computers are perceived to be neutral. That makes it tempting to "buy into the false myth of neutrality," according to Revanur, and prevents us from critically evaluating the biased impacts of algorithms on daily life.
Algorithms seem abstract: Revanur also points out that algorithms are perceived as abstract and disconnected from our daily lives. Because they're not tangible, it's easy to underestimate how deeply they're integrated into many aspects of our lives.
Limited digital literacy: Because the technology is so new, many people do not fully understand how algorithms work, how biases can be built into them from the beginning, or what to do about it. There's a need for greater digital literacy, both for the general public and for policymakers.
Possible solutions
Revanur believes the lack of regulation and oversight of algorithms needs to be addressed. Last month, she co-authored an open letter to legislators outlining specific policy proposals, which received signatures from other youth-led organizations. The letter also called for the voices of young people to be centered in any policy response, because they're often left out of policy conversations even though they'll be disproportionately impacted.
The letter recommended the following:
Support and fund research and development that advances safe, human-aligned, trustworthy AI systems.
Build governance structures to audit AI products and manage risk, such as an independent, FDA-style regulatory agency that conducts impact assessments, while stressing a proactive approach to corporate accountability.
Improve technical literacy in government through programs that train lawmakers to govern in the age of AI and that recruit technologists for public service.
Operationalize existing multilateral frameworks, including the White House’s AI Bill of Rights, which Encode Justice championed; UNESCO’s Recommendation on the Ethics of AI; and the OECD’s Principles on AI.
Design measures to redress harm caused by AI, starting with laws (like the Algorithmic Accountability Act) that create clear liability for thoughtless development and intentional misuse.
What you can do
Revanur recommends two immediate actions:
Put pressure on lawmakers at the local, state, and federal levels to ensure they're aware of their constituents' concerns.
Speak to your neighbors to overcome public disengagement and limited literacy on these issues. To learn more about algorithmic bias, check out the "Algorithmic Bias Playbook" created by the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business; a toy sketch of the kind of audit it recommends appears below.
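For readers who want to see what such an audit can look like, here is a short Python sketch in the spirit of the playbook's approach; the data, group names, and the "need" outcome are all simulated assumptions, not the playbook's own code or any real dataset. The idea: group people into bands of similar algorithm scores, then compare a ground-truth outcome across groups within each band. Large within-band gaps suggest the score means different things for different groups.

```python
# A toy within-band audit, assuming hypothetical, simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.choice(["A", "B"], size=n)
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # the true outcome we care about
# Simulate a biased proxy score: it systematically understates need for group B.
score = need * np.where(group == "B", 0.6, 1.0) + rng.normal(scale=0.2, size=n)

# Split people into quartile bands of the score, then compare mean true need
# across groups within each band. Equal scores should mean equal need.
edges = np.quantile(score, [0.25, 0.5, 0.75])
band_idx = np.digitize(score, edges)  # 0..3

for i in range(4):
    in_band = band_idx == i
    mean_a = need[in_band & (group == "A")].mean()
    mean_b = need[in_band & (group == "B")].mean()
    print(f"score band {i}: mean need A={mean_a:.2f}  B={mean_b:.2f}")
```

In this simulation, group B shows consistently higher true need than group A at the same score, which is exactly the kind of gap an audit like this is designed to surface.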
Algorithms—like any technology—are a product of the humans who build and interact with them. The more ubiquitous they become, the more important it will be for humans to shape them ethically.
📰 Other notable headlines
💡 To meet the new era of artificial intelligence, we need a cultural and philosophical revolution rooted in humanity. That's the argument Adrienne LaFrance makes in an article in The Atlantic.
🕵️ New AI tools are emulating the voices of public figures. But, according to an article in the Wall Street Journal, they bring potential legal trouble: from publicity rights to the risk of defamation.
🇮🇳 Apple’s biggest gamble isn’t the Vision Pro—it’s India. That’s the contention in an article by Rest of the World that shows Apple’s ambitious plans to expand into a country where only 4% of smartphones are iPhones.
🎒 The New York Times delved into the debate about what students should learn about AI in schools. From classes that are raising alarms to ones with more optimistic predictions, schools are wrestling with how to talk about and teach with AI.
📱 WIRED profiled a programmer named Christopher Bouzy who is trying to create a Twitter alternative that weeds out harassment and disinformation. As we've learned in recent months, it's hard to unseat Twitter.
😰 An investigation from The Markup found 650,000 audience segments that advertisers use to label users. The research revealed that advertisers target users based on creepily specific and sometimes sensitive information, like being prone to depression or buying numerous pregnancy tests.
🏛 An article in Vox explored what the SEC’s lawsuits against Binance and Coinbase mean for the future of cryptocurrencies.
🚫 An article in Fast Company studied how states are trying to keep kids off social media and porn sites: from parental permission for minors to restricting access to certain websites to obscenity filters.
🤖 New hyper-realistic robots are designed to read human emotions and serve as companions for people with autism or Alzheimer's, according to an article in Axios.
🗣 Partner news & opportunities
Project Liberty’s Institute announced a partnership with Stanford University to advance cutting-edge research, education, and training in technology, ethics, policy, and governance. For more on the Institute’s work, read this governance brief on how the social web can become home to a personal data economy that is both fair and empowering for individuals, businesses, and other organizations.
Creative Commons has opened registration, calls for proposals, and scholarship applications for its Global Summit, October 3-6, 2023 in Mexico City. This year’s theme will be AI & the Commons.
Partnership on AI has released its Guidelines for AI and Shared Prosperity, informing AI developers, AI users, policymakers, labor organizations, and workers on how they can help steer AI so its economic benefits are shared by all.