March 19, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here.
The DIY era of regulating tech
Have we entered the everyone-for-themselves era of regulating tech and moderating content?
How do you make tech safe when it’s moving faster than government regulation? Even for the public, it’s hard to keep up with the pace of technological development.
Today, the answer in the US might be DIY.
In the absence of enforceable federal laws governing artificial intelligence and social media content moderation in the US, states are taking it upon themselves, individuals are building their own personalized content moderation systems, school districts are rolling out their own policies, and tech companies are introducing new features.
This week we explore how regulation and content moderation are working in the liminal space where the harms of technology are well-documented, but the speed of that technology is outpacing our ability to rein it in.
//Growing concern
From algorithmic discrimination, to AI-generated deepfakes and disinformation, to social media content that harms children’s mental health, there is a growing consciousness worldwide about the problems caused by today’s technology.
Last week, a report commissioned by the US State Department concluded that the most advanced AI systems could “pose an extinction-level threat to the human species.”
Last month, Project Liberty Foundation released research finding that the majority of adults globally believe that social media companies bear “a great deal” of responsibility for making the internet safe.
Last year, a poll conducted by Project Liberty Alliance member Issue One found that 87% of the US electorate wants government action to combat the harms caused by social media platforms.
//
In the absence of comprehensive laws and sound enforcement, there’s a patchwork of solutions emerging at every level.
States: Filling the void left by inaction at the US federal level, US states are taking action. Nearly 200 bills aimed at regulating AI were introduced in state legislatures in 2023 (only 12 of which became law), and this year states across the US will debate over 400 AI-related bills. To limit the harms caused by social media, US states have taken a variety of approaches, resulting in an inconsistent patchwork of directives, according to a report by Brookings last year.
Project Liberty Founder Frank McCourt released OUR BIGGEST FIGHT, his book on how we can transition to a web anchored in data privacy and data ownership.
Is it inevitable that the DIY era of regulating tech will translate into new laws, new norms, and new beliefs about the role of technology in our lives? Time will tell, but we’re optimistic.
Project Liberty in the news
Last week, Project Liberty Founder Frank McCourt released his first book: OUR BIGGEST FIGHT. In support of the book, he spoke with a variety of media outlets:
America cannot get ‘blinded’ by TikTok and needs to look at the ‘bigger picture’: Frank McCourt provided analysis of the US’s latest efforts to ban TikTok. Mornings with Maria on Fox Business.
'Our personhood is now owned by someone else': How to reclaim dignity in the digital age, according to Frank McCourt from Project Liberty. MSNBC.
Podcast: Frank McCourt joined Jennifer Strong from MIT Technology Review's podcast SHIFT to discuss his new book in front of a live audience. SHIFT.
Other notable headlines
// 📱 An article in the Atlantic argued we need to end the phone-based childhood. The environment in which kids grow up today is hostile to human development.
// 🏛 This week, the US Supreme Court is hearing a case on how the government communicates with social media companies, according to an article in The Verge.
// 🤔 Social media’s unregulated evolution over the past decade holds lessons that apply directly to AI companies and technologies, according to an article in MIT Technology Review.
// 🚰 As the amount of available content grows with the use of AI, social media’s role as curator will become even more important. An article in The Atlantic proposed three solutions.
// 🚫 An article in The Wall Street Journal highlighted how researchers are warning against data poisoning. By tampering with the data used to train AI models, hackers can spread misinformation and steal data.
// 🚸 An article in The Washington Post highlighted research from Pew, which found that almost half of teenagers think their parents get distracted by their phones.
// 🚚 An article in The Financial Times featured a story about how an Uber Eats delivery driver was sick of the algorithms that controlled his day, so he decided to fight back.
Mothers Against Media Addiction (MAMA) is hosting a rally and press conference in New York City in support of putting kids before big tech and pushing legislative efforts forward. Sign up here.
// Virtual event on deepfakes and synthetic media
March 27th at 1:00pm ET
All Tech is Human is hosting a virtual discussion with leaders on how deepfakes and synthetic media will impact society. Register here.
// Virtual event: AI and 2024 Global Elections
March 28th at 1:30pm ET
The Institute of Global Politics at Columbia University’s School of International and Public Affairs and Aspen Digital are hosting an afternoon of discussions examining how AI has already shaped this year’s elections and what it means for those still ahead in 2024. Register here.
What did you think of today's newsletter?
We'd love to hear what you thought of today's newsletter. Reply to this email with:
Feedback for how we can make this newsletter better
Ideas for future editions
A recommendation of someone we should interview
/ Project Liberty Foundation is advancing responsible development of the internet, designed and governed for the common good. /