The hype cycle of internet drama just doesn’t end. The latest big tech news: Silicon Valley Bank’s implosion, new ethical questions around ChatGPT and generative AI, awkwardly public conversations between Elon Musk and a Twitter employee. We could spend all of our time just trying to keep up.
But we predict that one of the biggest tech stories of 2023 could be connected to a mere 26 words from a 1996 law. These words enabled the growth of the internet as we know it. But now, the US Supreme Court is hearing two cases that have the potential to rewrite those words and change the way content is published and platforms are governed online.
Section 230
The 26 words in question come from Section 230 of the 1996 Communications Decency Act. It reads:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Why these words matter
Section 230 says that the companies that operate online platforms—like Facebook, Twitter, Reddit, Yelp, and even recipe blogs that receive online comments—cannot be considered the publishers of the content users post on their platforms. This legal protection enabled the growth of major social media platforms like Facebook and Twitter.
Section 230 acts as a legal shield for companies that otherwise would be responsible for the content users post on their platform. If the contents of a post break the law or wrongly defame someone, the platform can’t be held responsible.
Rather than holding companies to legal standards and regulations for content moderation, Section 230 enables each company to choose its own approach to moderating the content on its site.
Content moderation is a hotly debated topic. Depending on who you ask, either we’re leaving unacceptable content online when it should be taken down or we’re giving private companies the power to censor free expression.
Chris Lewis, President/CEO of Public Knowledge, an organization working to shape policy at the intersection of copyright, telecommunications, and internet law, told us, “What is most often missing is the importance of section 230’s protections to free expression online. Much of the focus is on the liability of the platform, but at its core section 230 is about third party speech protections—protecting a platform for hundreds of millions of users to speak within the moderation rules of that platform.”
The Supreme Court cases
The US Supreme Court is hearing two cases, Gonzalez v. Google and Twitter v. Taamneh, that could lead to Section 230 being rewritten.
In Gonzalez v. Google, the plaintiffs argue that while tech companies may not be liable for the content of users’ posts, they should be liable for the posts their algorithms promote.
They allege that YouTube not only failed to remove ISIS recruitment videos but also recommended those videos to users through its algorithm, ultimately contributing to a terrorist attack.
The case against Twitter is similar: it alleges that the platform’s algorithms recommended extremist content that contributed to deaths in a terrorist attack by the Islamic State.
Earlier today, Project Liberty’s McCourt Institute published an in-depth conversation with two professors who are experts in social media regulation: Anupam Chander (Georgetown Law) and Florence G’sell (Sciences Po). Chander told us, “After a quarter-century of relatively liberal rules for internet platforms, we are seeing a significant regulatory push.”
How the internet could change
If Section 230 is rewritten, the internet could look very different.
If big tech companies are held responsible for the content on their platforms, they could become more cautious. That’s what Craigslist did in 2018, when the passage of a sex-trafficking law created concern that its “personals” section was facilitating sex work. Even though that wasn’t the section’s purpose, Craigslist took it down out of an abundance of caution around legal liability.
On the other hand, tech platforms could abandon content moderation altogether and leave all content unmoderated. Not only would the vitriol, trolls, and bots proliferate, but as we’ve seen with corporate advertisers and Twitter, less content moderation could be bad for the advertising business.
According to Lewis from Public Knowledge, “If Section 230 is reformed, we are likely to see an increase in moderation that could limit marginalized voices and voices outside the mainstream. We support the current language in section 230 and so we would not rewrite it. Instead we would craft other accountability policies for digital platforms.”
Generative AI could also impact the legal shield offered by Section 230. “When the AI generates the content, Section 230 is likely unavailable,” Chander predicts.
Yes, but…
While the stakes couldn’t be higher, it’s also possible the Supreme Court will decide it isn’t best suited to render a judgment.
Supreme Court Justice Elena Kagan said, “We’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet.” And Justice Brett Kavanaugh suggested that Congress, rather than the court, might be better positioned to make a final decision.
No matter what, whether it’s new regulation and laws, better platform-specific policies, or improved civil discourse in online spaces, something needs to change, and will, to make our digital spaces more inclusive and humane.
We’ll keep bringing you updates on this consequential moment in internet history.
📰 Other notable headlines
🇪🇺 New Regulation: If you use Google, Instagram, Wikipedia, or YouTube, you're going to start noticing changes to content moderation, transparency, and safety features. According to an article in the MIT Technology Review, this is because major tech legislation in the EU is beginning to take effect. The Digital Services Act (DSA) and the Digital Markets Act (DMA) will bring an end to the era of tech companies self-regulating. Instead, these laws will require big tech companies to assess risks on their platforms (like election manipulation and illegal content) and become more transparent.
🤔 Decentralized Social Media: Meta is in the early stages of building a dedicated, decentralized social networking app for people to post text-based updates, according to Platformer. While details haven’t yet been released, this new decentralized social network could allow users to set up their own independent servers and establish rules for how content is moderated.
🕵 Problematic Algorithms: Like many other cities around the world, the Dutch city of Rotterdam has been using a machine learning algorithm to detect welfare fraud. Trained on 12,000 previous investigations, the model is supposed to predict whether a person is likely to commit welfare fraud. But its risk-scoring model is under investigation after auditors found that it discriminates against certain groups, including single mothers. WIRED reports on how welfare-fraud algorithms, sold on claims that they make governments more efficient, have made people’s lives worse.
🤖 Artificial Intelligence: What’s eerie about artificial intelligence chatbots is how human their communication appears. But a fascinating article by Jacob Browning and Yann LeCun in Noema argues that while chatbots end up loosely conforming to human norms, they are not bound by them. “They don’t recognize social norms that define the territory between what a person should and shouldn’t say — they’re oblivious to the underlying social pressures that shape how we use language.” This makes chatbots seem smart, but not necessarily human.
🗣 Partner News
📱 Attend: Virtual Event - A People’s History of Twitter
Virtual event: March 16th, 12pm ET / 9am PT
While Twitter is in crisis, another generation of social media is emerging. But before we decide to stay or go or divide our attention across more platforms, we first need to figure out what we expect—or demand—from any platform we use. So, Better Platform, RadicalxChange, and others are hosting an event on March 16th, “A People’s History of Twitter,” for people who depended on Twitter to share their experiences and insights. Register here.
📰 Keep Reading: Another newsletter about society and innovation.
The Weekend Briefing is a weekly newsletter by Kyle Westaway on society and innovation. Every weekend, the newsletter highlights the seven most interesting stories from the week, so you don’t have to read everything on the internet. Last week the Weekend Briefing featured Project Liberty’s DSNP. Check it out and subscribe here.
🏫 Opportunity: Apply at Heterodox Academy
It’s the final week to apply to work as an inaugural scholar at the new HxA Center for Academic Pluralism opening in NYC this summer. Faculty fellows will conduct individual research projects, enter into dialogue with intellectual rivals, convene other academics for collaboration, distill their findings in an annotated bibliography, and more. Learn more and apply here.