September 19, 2023 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Image from Project Liberty
The unexpected risks of algorithms
Algorithms are becoming more sophisticated and complex, and most of us have little awareness of exactly how they work. We know what data is fed in and what outputs come out, but we don't fully understand what happens in between, which is why algorithms are often described as mysterious, opaque systems.
This week, we're diving into what algorithms are, where they came from, how they can be biased, and what that means for a society increasingly shaped by them.
//What are algorithms?
An algorithm is a meticulously detailed sequence of steps aimed at accomplishing a task or solving a problem. In this respect, a step-by-step recipe to make banana bread is a kind of algorithm.
In computers, algorithms are sequences of steps that perform computations and process data. In an increasingly digitized world, they show up nearly everywhere: in obvious places like online search, social media feeds, and AI chatbots built on machine learning, and in less expected places like mortgage lending, predictive policing, facial recognition, stock market trading, and medicine.
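To make the idea concrete, here is a minimal sketch of a toy feed-ranking algorithm in Python. The scoring formula and the post data are invented purely for illustration and bear no resemblance to any real platform's system; the point is that an algorithm is just an explicit sequence of steps applied to data.

```python
# A toy "feed ranking" algorithm: score each post, then sort by score.
# The formula and data below are hypothetical, for illustration only.
from datetime import datetime, timedelta, timezone

posts = [
    {"id": "a", "likes": 120, "posted": datetime.now(timezone.utc) - timedelta(hours=8)},
    {"id": "b", "likes": 45,  "posted": datetime.now(timezone.utc) - timedelta(hours=1)},
    {"id": "c", "likes": 300, "posted": datetime.now(timezone.utc) - timedelta(days=3)},
]

def score(post):
    # Step 1: measure how old the post is, in hours.
    age_hours = (datetime.now(timezone.utc) - post["posted"]).total_seconds() / 3600
    # Step 2: combine popularity and freshness into a single number.
    return post["likes"] / (1 + age_hours)

# Step 3: order posts from highest score to lowest.
ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # newest-but-popular posts rise to the top
```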
//A brief history of algorithms
The history of algorithms dates back thousands of years.
The oldest recorded algorithm is generally considered to be the Euclidean algorithm, a step-by-step method for finding the greatest common divisor of two numbers, described by the Greek mathematician Euclid around 300 BC.
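Rendered in modern Python (not Euclid's original geometric formulation), the whole procedure fits in a few lines: repeatedly replace the larger number with the remainder of dividing it by the smaller one until the remainder is zero.

```python
# The Euclidean algorithm: the last nonzero value is the greatest common divisor.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a / b)
    return a

print(gcd(1071, 462))  # -> 21
```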
The word algorithm comes from "Algoritmi," the Latinized form of the name of Abu Abdullah Muhammad ibn Musa Al-Khawarizmi, a Persian mathematician and astronomer born in the 8th century. Al-Khawarizmi is considered "the father of algebra," and his works spread mathematical concepts around the world and influenced generations of mathematicians.
In the 1840s, English mathematician Ada Lovelace wrote the first algorithm intended to be carried out by a machine (Charles Babbage's Analytical Engine), making her the world's first computer programmer.
In 1936, Alan Turing introduced the Turing machine, an abstract mathematical model of computation. Turing's insights became the mathematical basis for today's digital computers, and his work is considered the foundation of computer science and artificial intelligence.
The 20th century saw major advances in algorithms and computer science, including breakthrough research and new programming languages.
In 1997, Deep Blue, IBM's chess-playing computer powered by artificial intelligence algorithms, beat world champion Garry Kasparov, the first time a computer defeated a reigning world chess champion in a match.
In the 21st century, algorithms have become further embedded in everyday life, from Google Search and Netflix recommendations to advances in drug discovery and new developments on the battlefield.
//
One in every two American adults has photos in a facial recognition network used by law enforcement.
//
//Algorithmic bias
Algorithms are shaped by the knowledge and assumptions of their human creators, and trained on data that is frequently, if not always, incomplete or selective in its scope. As a result, algorithms have the potential to exacerbate some of the biggest problems in society.
Algorithmic bias refers to systematic and repeatable errors in an algorithm that produce unfair results, whether by putting one group at a disadvantage or giving another a leg up.
Algorithms are trained on datasets. If those datasets are not representative samples, or if the data is labeled incorrectly, the algorithm's judgments and analyses will be skewed.
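Here is a minimal sketch of that failure mode, using invented synthetic data and scikit-learn: two groups follow different patterns, the training set heavily over-represents one of them, and the resulting model's error rate is far higher for the under-represented group.

```python
# Hypothetical illustration of skew from an unrepresentative training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n synthetic examples; `shift` gives the group its own pattern."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific relationship
    return X, y

# Training data: group A is heavily over-represented relative to group B.
Xa_train, ya_train = make_group(5000, shift=0.0)
Xb_train, yb_train = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa_train, Xb_train]), np.concatenate([ya_train, yb_train])
)

# Evaluation: equal-sized samples from both groups.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)
print("error rate, group A:", 1 - model.score(Xa_test, ya_test))  # low
print("error rate, group B:", 1 - model.score(Xb_test, yb_test))  # much higher
```

The model fits the patterns of the group it mostly saw during training and misclassifies the other group far more often, even though nothing in the code is explicitly "biased."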
Meredith Broussard, an AI expert, told The Guardian earlier this year, “Racism, sexism and ableism are systemic problems that are baked into our technological systems because they’re baked into society.”
//The hidden problems inside algorithms
Because algorithms are hard to understand, operate in unexpected domains, and are often not visible to the people they impact, it’s challenging to recognize the ways they cause harm.
Here is a sampling of industries where algorithmic bias leads to inequitable outcomes:
Facial recognition: One in every two American adults (over 117 million people) has photos in a facial recognition network used by law enforcement, and at least 18 federal agencies use facial recognition technology powered by AI algorithms. But facial recognition technology struggles to accurately identify faces with darker skin. The error rate in facial recognition software is 0.8% for light-skinned men but 34.7% for dark-skinned women, according to research from computer scientist Joy Buolamwini. Such high error rates could affect facial recognition in a range of use cases, from identifying criminal suspects to unlocking a phone.
Mortgage lending: There is a long history of racial discrimination in mortgage lending and access to credit (see: redlining), and algorithms are still being implicated in biased lending decisions. For example, a 2021 investigation by The Markup found that, nationally, loan applicants of color were 40-80% more likely to be denied than their White counterparts (a simplified sketch of this kind of disparity calculation appears below).
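The sketch below shows the arithmetic behind a "X% more likely to be denied" finding, using invented figures for illustration only; the real investigation controlled for income, loan amount, and other financial characteristics, which this toy calculation does not.

```python
# Denial-rate disparity calculation with hypothetical numbers.
applications = {
    # group: (number of applications, number of denials) -- invented figures
    "white applicants": (10_000, 800),
    "applicants of color": (10_000, 1_400),
}

denial_rates = {group: denied / total for group, (total, denied) in applications.items()}
baseline = denial_rates["white applicants"]

for group, rate in denial_rates.items():
    relative = (rate / baseline - 1) * 100  # how much more likely to be denied
    print(f"{group}: denial rate {rate:.1%} ({relative:+.0f}% relative to white applicants)")
```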
//The need to act while algorithms are still visible
As AI becomes more ubiquitous, algorithms will rapidly become more deeply embedded in the fabric of society—from doorbell surveillance to car insurance to mortgage applications. The moment to insist on responsibly designed algorithms is now, before they become an invisible, unquestioned part of everyday life.
Consumer Reports, an independent, nonprofit organization focused on consumer protection, is working to raise awareness, build people power, and pass legislation aimed at tackling algorithmic bias.
Earlier this year, CR, in partnership with the Kapor Foundation, launched a new campaign called “BAD INPUT” that comprises:
Short videos, directed by documentary filmmaker Alice Gu, that explain how algorithmic bias shows up across different sectors (see above).
A petition urging companies to stop algorithmic bias (which already has 15,000 signatures), and a list of organizations working on this issue to engage with.
Amira Dhalla, Director of Impact Partnerships at Consumer Reports, said, “It’s hard to advocate for something you can’t see or don’t understand,” which is why Consumer Reports is working to raise everyday people’s awareness of how algorithms work.
Greater awareness of algorithms builds “people power”: people equipped to effect change in their own domains, from their workplaces to their wider communities. Ultimately, Dhalla explained, this people power can translate into policy change at the state and national levels.
As algorithms show up in more of our everyday lives, it's critical that we slow their integration now and address their biases before they cause further social and economic harm to already-marginalized communities.
Other notable headlines
//🔎 A Q&A in the Harvard Gazette broke down exactly what Google is accused of in the government’s antitrust case against the tech company.
//🏛 Last week the US Senate held a private meeting with big tech’s billionaire elites about the risks and potential of AI. An article in WIRED went behind the scenes.
//🇮🇪 Irish authorities fined TikTok $367.2 million, saying it breached the country’s data-protection laws, including the misuse of children’s information, according to an article in The Wall Street Journal.
//🛑 Last week, the Biden administration asked the US Supreme Court to pause a ruling by a lower appeals court that barred many kinds of contacts between administration officials and social media platforms, according to an article in The New York Times.
//🔬 Previous scientific revolutions have been led by academic journals and laboratories. Robots might create the next one. That’s the contention in an article in The Economist.
//🇦🇷 An article in WIRED profiled how an AI scandal unfolding in Argentina is revealing the dangers of implementing facial recognition.
Partner news & opportunities
// Virtual event: Ethical paths forward for AI
September 21st at 1pm ET
Dweb is hosting a virtual meetup on how technologists, policymakers, governments, and artists can help build AI for the public good. Learn more and register here.
// Common Sense Media’s framework for AI
Last week Common Sense Media, the nation's leading advocacy group for children and families, revealed a framework for its AI ratings and reviews system designed to assess the safety, transparency, ethical use, and impact of AI products. Check it out here.
/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /