January 23, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Image from Project Liberty
AI audits: the front lines of responsible AI
Building inspectors certify that buildings are habitable. Food inspectors ensure that food is safe to eat. Aviation audits make sure planes are safe to fly, and auditors in healthcare, tax, corporate finance, and other industries play crucial roles in ensuring legal compliance and protecting consumers.
But the same cannot be said for today’s AI algorithms. While AI algorithms are nearly as ubiquitous as the food on our plates and the planes in the sky, they are not subject to a robust system of inspection, regulation, and auditing.
But that all might change.
This week, we’re exploring a new concept that will quickly become mainstream: AI audits.
//High-stakes moment
An AI audit is conducted by running randomized, controlled experiments and then measuring the results to draw conclusions about an algorithm’s inner workings and societal consequences, often with an eye toward bias, safety, legality, and ethics.
By measuring the data flowing into an algorithm and the outputs it produces, an audit of an AI algorithm can answer questions like the following (one such measurement is sketched after this list):
Is an AI algorithm fair and ethical?
Is an AI algorithm biased? Does it discriminate against certain groups?
Does an algorithm recommend illegal activity or create unsafe conditions?
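To make the bias question concrete, here is a minimal sketch of one measurement auditors commonly use: the disparate impact ratio (the “four-fifths rule” from US employment law). The data and names below are illustrative assumptions, not drawn from any real audit.

```python
# A minimal, hypothetical sketch of one common audit measurement:
# the disparate impact ratio. All data below is illustrative.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are often treated as evidence of adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = hired) for two demographic groups.
group_a_decisions = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate: 0.75
group_b_decisions = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate: 0.375

ratio = disparate_impact(group_a_decisions, group_b_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flags possible bias
```

A ratio below 0.8 is commonly treated as a red flag worth investigating, not as proof of discrimination on its own.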
To meet this growing need, organizations like Eticas are stepping up. Founded by Dr. Gemma Clavell, Eticas conducts AI audits. Clavell outlined its process:
Obtain a “model card”: The model card outlines the purpose of the algorithm, its data sources, how the algorithm was trained, and its technical features.
Conduct an initial risk assessment: The Eticas team then assesses the potential and known risks of the algorithm.
Identify protected groups: The Eticas team identifies protected or vulnerable groups in accordance with Europe’s GDPR definitions (age, gender, sexual orientation, race, religion, etc.).
Measure quantitative results: Eticas lets the algorithm run and then evaluates how equitable the results are and the degree to which the algorithm produces false positives and false negatives (a per-group error-rate check is sketched after this list).
Evaluate the human-machine interactions: Finally, the team evaluates the human interactions with the algorithm and how the organizational context adheres to laws and guidelines around data privacy and security.
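As a rough illustration of the “Measure quantitative results” step, the sketch below computes false positive and false negative rates per protected group. The record format, group labels, and data are hypothetical stand-ins; the article does not describe Eticas’ actual tooling.

```python
# A hedged sketch of a per-group error-rate check. Group labels,
# predictions, and ground truth here are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: (group, y_true, y_pred) triples with binary labels.
    Returns {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            c["fn"] += (y_pred == 0)   # missed a true positive
        else:
            c["neg"] += 1
            c["fp"] += (y_pred == 1)   # flagged a true negative
    return {g: (c["fp"] / max(c["neg"], 1), c["fn"] / max(c["pos"], 1))
            for g, c in counts.items()}

# Hypothetical audit log: (protected group, actual outcome, model decision)
log = [("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
       ("B", 1, 0), ("B", 0, 1), ("B", 1, 0), ("B", 0, 0)]

for group, (fpr, fnr) in error_rates_by_group(log).items():
    print(f"group {group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# Large gaps between groups' error rates are what an auditor would flag.
```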
//
AI audits can serve as a bridge between the policies regulating AI and the on-the-ground implementation of those policies.
//
//The need for audits
Algorithms need to be audited for at least two reasons:
Algorithms are often “black boxes”: How an AI algorithm works is often unclear, which is especially a problem when algorithms affect lives or make decisions with major consequences. For example, AI algorithms are used in homeland security to instantly detect security threats by digesting facial recognition data and behavioral patterns in airports, stadiums, and other large venues. While these algorithms can flag individuals or situations as suspicious, the way those decisions are made is often opaque. What makes one situation suspicious and another less so? An audit can determine whether an algorithm is making those determinations unfairly (a minimal black-box probe is sketched after this list).
Algorithms can be biased and flawed: In her book Weapons of Math Destruction, Cathy O’Neil outlines how big data and algorithms have the potential to increase inequality and threaten democracy. In one example, O’Neil found that in Florida, “adults with clean driving records and poor credit scores paid an average of $1,552 more than the same drivers with excellent credit and a drunk driving conviction.” From biases in facial recognition that favor White men over Black women to gender biases in the workplace that are exacerbated by AI, algorithmic bias, algorithmic mistakes, and algorithmic hallucinations are common.
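One common way to audit a black box without opening it is paired-input probing: query the system with inputs that are identical except for a protected attribute and compare the outputs. The sketch below is a toy illustration with a deliberately biased stand-in model; a real audit would query the production system the same way.

```python
# A minimal sketch of black-box "paired input" probing. `opaque_model`
# is a stand-in for any system an auditor can query but not inspect.
import random

def opaque_model(profile: dict) -> float:
    """Stand-in black box. In a real audit this would be an API call."""
    base = 0.5 + 0.01 * profile["years_experience"]
    # Hidden flaw planted for the demo: the model penalizes one group.
    return base - (0.2 if profile["group"] == "B" else 0.0)

def paired_probe(n_trials: int = 1000) -> float:
    """Average output gap between matched profiles differing only in group."""
    gap = 0.0
    for _ in range(n_trials):
        profile = {"years_experience": random.randint(0, 20)}
        score_a = opaque_model({**profile, "group": "A"})
        score_b = opaque_model({**profile, "group": "B"})
        gap += score_a - score_b
    return gap / n_trials

print(f"Mean score gap, A vs. B: {paired_probe():.2f}")  # ~0.20 -> red flag
```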
//The bridge between policy and practice
The more ubiquitous AI becomes, the more we will need greater AI accountability. AI audits can serve as a bridge between stated policies and on-the-ground implementation.
In the US, New York City passed a first-of-its-kind law in 2021 (Local Law 144) requiring bias audits of automated employment decision tools used in hiring. Companies like ORCAA, O’Neil’s risk consulting and AI auditing company, help organizations comply with the law by performing AI audits.
Europe has passed multiple laws aimed at regulating big tech, but Clavell, who is from Spain, discovered that while regulations like GDPR were major steps forward in data privacy, there was still a gap between the principles of the law and its enforcement. “The last mile of implementation is where everything fails,” she said.
In the absence of sweeping US regulations demanding accountability for AI, most audits today are voluntary, but they will quickly become a necessary means of enforcement as laws are passed.
According to Clavell, AI audits create a virtuous cycle:
For consumers: AI audits generate better and safer AI systems. Without accountability, there’s an incentive for bad and cheap AI, and consumers are the ones who suffer.
For companies: AI audits create incentives for industry compliance and allow companies to stand behind their AI systems and present themselves as responsible actors (although one cautionary example highlights how hard it is for outside auditors to hold companies accountable). A company’s consent is not required to audit its algorithms, though partnering with a company leads to more effective audits and a greater chance it will incorporate changes.
For regulators: AI audits enable regulators to understand how the principles behind their policies are translated into data that can be assessed and verified.
//AI transparency drives data rights
A consumer’s ability to control their data starts with understanding how that data is being used, and audits surface those answers. “Only through inspection can you create the right to recourse,” Clavell said. “Right now, if you feel that you've been wrongly treated by an AI system, there’s little you can do.”
Clavell, who also serves on the newly formed International Association of Algorithmic Auditors, the first trade association for the profession, plans to change that. She believes that audits will create better, fairer AI and put consumers back in control of their data.
The process of turning legislative principles into on-the-ground practices and safeguards is never straightforward or easy. From food safety to AI algorithms, a robust system of regulations, inspections, audits, reports, and continuous improvement is needed to keep people safe and generate consumer trust. We’re at the dawn of the era of AI accountability, and audits will play an important role in responsible tech innovation and data rights.
Project Liberty news
// OUR BIGGEST FIGHT by Project Liberty executive chairman and founder Frank H. McCourt Jr. and Michael J. Casey is available for pre-order. OUR BIGGEST FIGHT is a call to stop relying on patchwork regulations and to reimagine the very architecture of the internet. Barnes & Noble Rewards and Premium Members get 25% off pre-orders of OUR BIGGEST FIGHT now through Friday 1/26. Premium Members get an additional 10% off! Go here.
Other notable headlines
// 🧭 According to an article in The Washington Post, regulators are cracking down on companies that profit from people’s most sensitive personal information.
// 📱 The state of Iowa has filed a lawsuit against TikTok, alleging that the company misleads parents about the harmful content available to young users, according to an article in TechCrunch.
// 👩‍💻 An article in WIRED explored how big tech platforms create barriers for users to leave, but new regulations could change that.
// 🗣 The future of AI may shape how we understand the role that language plays in consciousness, according to an article in Noema.
// ⚖ The copyright lawsuit between The New York Times and OpenAI could kill OpenAI, according to an article in Vox.
// 🇪🇺 Meta will let EU users unlink their Instagram, Facebook, and Messenger accounts to comply with the Digital Markets Act, according to an article in The Verge.
Partner news & opportunities
// New fellowship on surveillance capitalism
Harvard Kennedy School’s Carr Center has announced its 2024 Technology and Human Rights Fellowship, which seeks scholars and practitioners to explore the intersection of technology, surveillance capitalism, and democracy. The deadline for applications is February 2nd. Learn more.
// Virtual series on AI for justice
January 25th at 9am ET
Join Ashoka’s AI for Justice virtual series, where Argentinian social entrepreneur and data scientist Ivana Feldfeber will give a demo of AymurAI, open-source software that helps criminal courts collect data on gender-based violence. Register here.
// Virtual and in-person event on misinformation and disinformation