We explore why schools are suing tech platforms for the harms they cause children, and how schools and educators are responding

October 31, 2023 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image from Project Liberty

Why schools are suing social media

At the same time last week that 41 states and the District of Columbia sued Meta in federal court claiming that its platforms harm kids, Project Liberty launched the Safe Tech, Safe Kids campaign to build momentum behind the solutions that can keep young people safe online.

 

Both the Safe Tech, Safe Kids campaign and the lawsuits from attorneys general across the country are aimed at the same goal: holding big tech platforms accountable for the addictive and harmful features that are negatively impacting young people’s well-being across the US.

 

With schools on the front lines of this movement, one of the most important voices is Randi Weingarten, the president of the American Federation of Teachers (AFT). Weingarten shared with us how teachers and schools are impacted by social media, and what they’re doing about it.


“Even as our nation’s public schools race to help our students thrive and recover academically and emotionally from the trauma and loss of the past several years, not much affects our students more than the influence of social media and its insidious algorithms on their mental health and well-being,” Weingarten said.

 

// As dangerous as lead paint

According to Weingarten, social media “detracts from the primary mission of our schools, which is to protect and educate our children. It threatens the welfare and future of our kids.”

 

“Sadly, we are in the middle of an unprecedented mental health crisis among our nation’s youth, exacerbated by the lack of regulation and safeguards on dangerous social media products—products that are as dangerous as lead paint on toys.”

//

Only one in two public schools can effectively provide the mental health services that students need.

//

 

// The kids aren't alright

Underpinning the lawsuits and Project Liberty’s campaign is a growing body of academic evidence of the harms of social media:

  • ⬇ Declining mental health: The suits from school districts across the country blame declining student mental health on social media and connect that decline to worse academic outcomes and behavior problems in school.
  • ⬆ Increasing cyberbullying: Nearly half of US teens have been bullied or harassed online, according to Pew Research, and cyberbullying is on the rise in the social media era (from group chats to anonymous messaging apps).
  • ⬇ Declining sleep: One study from 2019 found that heavy use of screen media was associated with shorter sleep duration, longer sleep latency, and more mid-sleep awakenings. There is a substantial body of research connecting social media use to poor sleep.

 

// The impact on schools

Students probably can’t blame social media for eating their homework, but its use is causing both students and educators a major headache. According to Weingarten, schools are shouldering additional costs to address today’s challenges: from hiring additional school counselors and psychologists, to expanding teacher training, to revising lesson plans.

  • Mental health resources: Only one in two public schools can effectively provide the mental health services that students need. Already, the average student-to-school-counselor ratio in the US is 408-to-1, which is far above the recommended ratio of 250-to-1. In the lawsuits brought by school districts, schools are seeking restitution from social media companies to fund prevention, education, and treatment.
  • Phone usage: Earlier this year, a report from UNESCO, the UN’s education agency, recommended that smartphones be banned from school classrooms, and countries like France, Italy, and Finland have already done so. Studies have shown that banning phones can lead to higher test scores, and heavy phone use correlates with lower GPAs for college students.
  • Damage control: Online challenges, like TikTok’s “devious licks” trend in which students upload videos of themselves stealing school supplies or vandalizing school property, are also on the rise. Schools, in turn, are spending more time confiscating devices, repairing vandalized property, and managing disciplinary actions.
  • Teacher burnout: While teacher burnout is not solely tied to student social media use, some see a connection. Weingarten said that devices are making it harder for teachers to get students’ attention. “The student is then opting out emotionally, mentally, and socially from the classroom. The teacher doesn't have the resources either to compete with it or to actually help the student. That's the burnout factor."

 

// Likes vs. learning

Weingarten shared with Project Liberty that school-level interventions aren’t enough. Systemic, technology-level change is needed. “Let’s be frank. We are not going to out-teach, out-treat or out-parent the harms being done by social media products being rushed to market without proper safety guardrails.”

 

That’s why the AFT, along with a coalition of other organizations, released Likes vs. Learning: The Real Cost of Social Media for Schools, a report that calls on tech companies to incorporate five key principles to make their platforms safer for students and schools.

  1. Turn on the strongest safety features by default: Tech companies should 1) undertake independent evaluations of the risks posed by their platforms, 2) enforce age limits on platforms, and 3) remove illegal content from platforms.
  2. Make changes that deter students from overuse and addictive behavior: Tech companies should make their platforms less addictive by 1) no longer designing algorithms to maximize viewing time, 2) eliminating autoplay and infinite scroll, and 3) ending push notifications to students.
  3. Protect children’s privacy: Tech companies should increase privacy for students by 1) defaulting young users’ accounts to the most private settings, 2) stopping excessive data tracking and harvesting, and 3) stopping the delivery of personalized, data-driven marketing to minors.
  4. Shield students from risky algorithms: Algorithms can be trained to promote unhealthy content, and the report calls on tech platforms to change the training of “content recommender” algorithms so they stop pushing harmful and traumatic content to students.
  5. Directly engage and work with schools and families: Tech platforms have a responsibility to work closely with educators, parents, and researchers; to be more accessible and responsive to the education sector; and to provide access to data. At the Safe Tech, Safe Kids campaign event last week in Washington, DC, Weingarten suggested a hotline between schools and tech platforms to streamline this communication.

 

// A movement building power

From AFT’s new report to Project Liberty’s Safe Tech, Safe Kids campaign, there is a growing movement of organizations, educators, parents, researchers, and students committed to safer kids, safer schools, and safer technology. Join us at #safetechsafekids to hold tech companies accountable while supporting the education and growth of the next generation.

Other notable headlines

// 🏛 (Relevant to today's topic) Americans are broadly united in support of laws to make the internet safer for kids. So why doesn’t Congress act? That's the question at the center of an article from The Atlantic.

 

// 🚨 Breaking news: yesterday, President Biden signed an executive order on AI, the US government’s most significant effort to date to harness the technology's potential and address its risks, according to an article in the Washington Post.

 

// 🤯 Is social media addictive? Here’s what the science says, according to an article in The New York Times.

 

// 🇬🇧 As the UK’s Online Safety Bill becomes law, The Guardian provided a guide to its key rules on everything from pornographic content to protecting children.

 

// 🌐 An article in Slate suggested that Wikipedia is covering the war in Israel and Gaza better than social media platforms like X.

 

// 🚪 The Google antitrust trial has been held mostly behind closed doors, but The New York Times just filed a motion to open it up, according to an article in The Verge.

 

// 💀 An article in IEEE Spectrum explored the creepy new digital afterlife industry. Companies could use your data to bring you back—without your consent.

 

// 📱 An article in The Markup offered a guide on how to anonymize your phone and take it off the grid.

 

// 📺 A new study found that awareness of deepfakes can make people more suspicious of real videos as well, according to an article in Fast Company.


// 📰 An article in The New Yorker explored how the Israel-Hamas war has revealed the perils of relying on our social media feeds for updates about events unfolding in real time.

Partner news & opportunities

// Launch of the Safe Tech, Safe Kids campaign

October 23 was the official launch of Safe Tech, Safe Kids, a campaign led by Project Liberty in collaboration with Issue One, its Council for Responsible Social Media, and the 5Rights Foundation. Issue One brought together stakeholders from across children’s advocacy, tech reform, pediatrics, education, and other communities for a comprehensive look at how social media impacts children, and how we can advance meaningful solutions to protect kids online. Check out the panels here.

 

// Virtual event on dark patterns

Wednesday, November 1st at 1pm ET

Integrity Institute is hosting a virtual webinar to dive deep into the world of Dark Patterns, also known as “Deceptive Patterns.” Dark patterns are tricks used in websites and apps that push you into doing things you didn’t intend to, like buying something or signing up for a service. The webinar will shed light on their impact, prevalence, and ethical implications. Learn more and register here.

 

// Virtual event on AI and the creator economy

November 1st at 6pm ET

The Foundation for American Innovation is hosting a virtual event exploring both the ways AI can be leveraged to empower individual creators and how artists’ work is being used without license to train AI models. The event will examine how we can channel AI so that it strengthens individual agency in the creator economy. Learn more and register here.

 

// Partnership on AI seeking public comment

Partnership on AI has just released its Guidance for Safe Foundation Model Deployment, a framework for model providers to responsibly develop and deploy a range of AI models, promote safety for society, and adapt to evolving capabilities and uses. Public comment is open until Jan. 15, 2024. Check it out here.

/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /

 

Thank you for reading.


501 W 30th Street, Suite 40A,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2023 Project Liberty