August 20th, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here.
Can AI help make online discourse more civil?
When Chris Bail was 10 years old, his family moved from the US to the French Congo (now the Republic of the Congo).
Bail’s father worked for the World Health Organization, where he helped direct the country’s response to the HIV/AIDS epidemic.
The central African country endured multiple civil wars throughout the 1990s, and Chris now believes growing up in a war-torn country seeded his interest in conflict resolution. “I think the social scientist in me was born then,” Bail said.
Today, Bail is a Professor of Sociology, Political Science, and Public Policy at Duke University, where he founded the Polarization Lab. The Polarization Lab brings together scholars from the social sciences, statistics, and computer science to study how to bridge America’s partisan divide. Bail is also a Project Liberty Institute Fellow, powered by Common Era.
Chris Bail
In his work, Bail isn’t stopping at just minimizing the harms caused by technologies like social media and artificial intelligence. He aims to use these technologies to find common ground.
In this week’s newsletter, we explore Bail’s latest work and how AI can nudge humans to have healthier, more civil conversations.
// Navigating conflict is hard
“Most people are bad at navigating conflict,” Bail said. “We overestimate our capacity to persuade others, talk past one another, or avoid stressful discussions altogether.”
Proven methods of conflict mediation, like active listening and perspective-taking, do exist. But becoming well-versed in them takes time and effort, and people who are conflict-avoidant may never seek them out at all.
The designs of our digital platforms also make it hard to navigate conflict online.
Algorithms herd us into echo chambers where we mainly see perspectives reinforcing our worldview.
Online harassment is rampant: two in five Americans have personally experienced some form of online harassment, according to research from Pew.
Social media has been found to fuel polarization, particularly in democracies.
// The big idea: using AI to have more civil conversations
Bail and a team of researchers have been investigating whether Large Language Models (LLMs) like ChatGPT can make online conversations more productive and civil.
Their approach uses LLMs to bring conflict-mediation techniques to people at scale.
In a 2023 study, Bail recruited 1,574 people with different opinions about gun control. He then organized them into pairs with opposing views on the issue and had them communicate on an online chat platform.
As the pairs exchanged messages, half of the research participants began to receive pop-up messages suggesting alternative phrasing for the messages they were about to send to their counterparts. These pop-ups, generated by GPT-3, applied proven conflict-mediation principles to make the phrasing more civil and constructive without shifting the sender’s underlying opinion.
Based on proven approaches to listening and understanding, the pop-up messages fell into three types:
1) Restatement: the message repeats the other person’s main point back to them to demonstrate understanding.
2) Validation: the message affirms the legitimacy of holding a different opinion, without explicitly agreeing.
3) Politeness: the message rewords the original with more polite language.
Participants could then replace the message they were about to send with the rephrased version from GPT-3, or ignore the suggestion.
How AI proposes alternatives to messages
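To make the mechanism concrete, here is a minimal sketch of what an assistant like this could look like, written against the current OpenAI Python SDK. The model name, prompt wording, and function below are illustrative assumptions, not the researchers’ actual implementation (the study itself used GPT-3 with its own prompts).

```python
# Minimal sketch of an LLM rephrasing assistant in the spirit of the study.
# Assumptions (not from the paper): the OpenAI Python SDK, the model name,
# and the prompt wording are all illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One instruction per conflict-mediation technique described above.
TECHNIQUES = {
    "restatement": "Restate the other person's main point to show you understand it.",
    "validation": "Affirm that it is legitimate to hold a different opinion, without agreeing with it.",
    "politeness": "Reword the draft using more polite language.",
}

def suggest_rephrasing(draft: str, partner_message: str, technique: str) -> str:
    """Ask the model for a more civil version of `draft` that keeps its opinion intact."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; the study used GPT-3
        messages=[
            {
                "role": "system",
                "content": (
                    "You help make online political conversations more civil. "
                    + TECHNIQUES[technique]
                    + " Never change the sender's underlying position."
                ),
            },
            {
                "role": "user",
                "content": f"The other person wrote: {partner_message}\n\n"
                           f"My draft reply: {draft}\n\nSuggest a rephrased reply.",
            },
        ],
    )
    return response.choices[0].message.content

# The sender stays in control, as in the study: show the suggestion,
# then let them accept it or send their original message unchanged.
draft = "You clearly don't understand anything about gun laws."
print(suggest_rephrasing(draft, "Background checks just don't work.", "politeness"))
```

The key design choice mirrors the experiment: the AI only proposes, and the human always decides whether the suggestion replaces the original message.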
// The results
Bail found that the group nudged by an AI assistant toward more civil messages described their conversations as more productive and less stressful.
Conversations averaged 12 messages each, and the AI suggested 2,742 rephrasings in total. Participants accepted the suggestions about two-thirds of the time (1,798). Accepted rephrasings were roughly evenly split among the restatement (30%), validation (30%), and politeness (40%) interventions.
Perhaps most crucially, people who used AI-generated rephrasings expressed greater willingness to consider alternative viewpoints.
BYU professor David Wingate, one of Bail's co-authors, said, “We found the more often the rephrasings were used, the more likely participants were to feel like the conversation wasn’t divisive and that they felt heard and understood.”
It’s important to note that the AI-generated rephrasings didn’t shift participants’ views on gun control. Nor was persuasion the experiment’s goal: “persuasive AI” is considered ethically fraught and a form of dangerous manipulation; recent research found that personalized messages crafted by ChatGPT are significantly more persuasive than non-personalized ones.
// The Nextdoor case-study
Outside of a controlled research setting, Nextdoor, a social networking site where neighbors connect, ran a similar experiment in 2023, using AI to propose alternatives to messages that users were about to send.
These “Kindness Reminders” were split between AI-generated alternatives and a simple pop-up, without AI-generated text, that said, “Your reply looks similar to content that’s been reported for violating guidelines.” The reminders added a step between writing a post and clicking send: a brief moment of friction prompting a person to reconsider.
36% of neighbors who saw the AI Kindness Reminder or the traditional Kindness Reminder (where neighbors self-edit without AI assistance) chose to edit or withhold their content.
Of those who encountered the generative AI-revised text, 26% adopted the suggestion and published more constructive content.
Some users adopted the alternative messages, others edited their original posts, and some chose not to send their messages at all. Overall, the intervention produced a 15% drop in toxic content.
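The flow Nextdoor describes can be sketched as a simple gate before publishing: a draft that resembles previously reported content triggers a reminder, and the user decides what happens next. Everything below (the toy classifier, function names, and choice labels) is a hypothetical illustration, not Nextdoor’s actual code.

```python
# Hypothetical sketch of the "Kindness Reminder" friction step.
# The toy classifier and choice labels are illustrative stand-ins.

HOSTILE_MARKERS = {"idiot", "stupid", "liar"}  # stand-in for a real model
                                               # trained on reported content

def looks_reportable(draft: str) -> bool:
    """Toy stand-in for a classifier that flags drafts resembling reported posts."""
    return any(word in draft.lower() for word in HOSTILE_MARKERS)

def kindness_reminder_flow(draft: str, ai_suggestion: str, user_choice: str) -> str | None:
    """Publish a post, inserting a moment of friction when a draft is flagged.

    user_choice is "accept" (use the AI suggestion), "edit:<new text>",
    "send" (post the original anyway), or "withdraw" (post nothing).
    Returns the published text, or None if the post was withheld.
    """
    if not looks_reportable(draft):
        return draft  # no reminder needed; publish as-is
    if user_choice == "accept":
        return ai_suggestion
    if user_choice.startswith("edit:"):
        return user_choice.removeprefix("edit:")
    if user_choice == "send":
        return draft
    return None  # user withdrew the post

# Example: a flagged draft where the user accepts the AI alternative.
print(kindness_reminder_flow(
    draft="Only an idiot would plant that tree there.",
    ai_suggestion="I think a different spot might work better for that tree.",
    user_choice="accept",
))
```

As the Nextdoor numbers above suggest, even when users decline the AI alternative, the friction step alone leads many to self-edit or withhold a post.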
Bail’s research is promising, but it requires something that’s perhaps even harder to achieve in the current digital landscape: actually getting people with differing perspectives to interact online. Today’s algorithm-driven online experience is increasingly fractured and siloed into like-minded groups. AI may be able to assist in the hard work of softening prickly, charged conversations and help two people on opposite sides of an issue learn to hear each other, but such interactions are increasingly rare.
There is an emerging group of technologists, researchers, and civil society organizations looking to AI to help build common ground and shared spaces online. From DepolarizationGPT, a new chatbot designed to tackle polarization head-on, to new technologies used by the UN to broker peace treaties on the geopolitical stage, there’s growing evidence that AI, if used well, can help unite us.
For Bail, there’s more work to be done. "There are so many people rightly concerned about novel threats from generative AI. But if all that we do is try to mitigate those threats—instead of 'going on offense' to get ahead of them with new, pro-social forms of artificial intelligence—I worry we will lose the battle."
Other notable headlines
// 📹 In a USA Today op-ed, a parent who lost a daughter to the harms caused by social media made the case for the ‘People's Bid for TikTok.’
// ❓ An article in Politico featured answers from Audrey Tang, Project Liberty Institute Senior Fellow, to five big questions.
// 🌐 An article in Pirate Wires featured an interview with Dr. Larry Sanger, the co-founder of Wikipedia, on the establishment takeover of Wikipedia, corporate control of online knowledge, and why information disappears from the internet.
// 🕵 Hackers may have leaked every American’s Social Security information. An article in Futurism reported that 2.7 billion data records were stolen.
// 📱 Elon Musk said he’d eliminate bots from X. Instead, election influence campaigns are running wild, according to an article in Rest of the World.
Partner news & opportunities
// Metagov unveils 2024 Public AI White Paper
Project Liberty Alliance Member, Metagov, has published a groundbreaking whitepaper on Public AI, introducing a visionary framework for AI development.
// Announcing the launch of the Center for Rising Generations
// The 10th Gray Area Festival comes to San Francisco
Celebrate a decade of exploring the intersection of art, technology, and culture at the Grand Theater in San Francisco from September 12–15 for the 10th Gray Area Festival. Secure your early bird pass by August 22.
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
/ Project Liberty is leading a movement of people who want to take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.