Today on Decoder, we’re talking about the landmark social media addiction trials that just resulted in two major verdicts against Big Tech. There’s one case in New Mexico against Meta, and another in California against both Meta and Google, which have said they plan to appeal.
These are complicated cases with some huge repercussions for both how these platforms work and the very nature of speech in America, so to help us work through it all, I’ve brought on two heavy hitters: my friend Casey Newton, who is founder and editor of the excellent newsletter Platformer and co-host of the Hard Fork podcast, as well as Verge senior policy reporter Lauren Feiner. Lauren was actually in that Los Angeles courtroom where executives like Mark Zuckerberg took the stand in the case of a 20-year-old woman named Kaley, who successfully argued Meta and Google negligently designed their platforms in ways that contributed to her mental health issues.
These cases, the first in a wave of injury lawsuits targeting tech companies, are about the design decisions of platforms like Instagram and YouTube. They argue that the platforms have fundamental flaws that harm users, especially teenagers, and that these companies knew about these problems and negligently shipped these features anyway. These cases are part of a much larger set of moves that aim to fundamentally change the legal mechanisms that might regulate social media platforms.
When we say harm, we’re not just talking about addictive design that brings users back compulsively. It’s also about features like algorithmic recommendations and camera filters that make issues like anxiety, depression, and body dysmorphia worse. This emphasis on how the platforms work, as opposed to focusing solely on the content, is part of a movement that’s been building for years. It focuses on the argument that social media is not and cannot be healthy — that it might in fact be defective, the same way that cigarettes, when used as designed, cause cancer.
There are a lot of complex ideas here, and Casey, Lauren, and I really spent some time working through them. The first is whether there is a distinction between product features — like recommendations, autoplay video, and infinite scroll — and the types of harmful yet legal speech served to young people on these platforms using those tools, like eating disorder videos or posts designed to convince young men to hate women.
But it’s very difficult, if not unconstitutional, to force these companies to moderate this kind of content in specific ways. The First Amendment obviously prohibits the government from regulating what speech these companies promote and moderate, and private action is usually blocked by Section 230 of the Communications Decency Act, which protects tech platforms from being held responsible for the content their users post.
It’s really hard to pull all these ideas apart. An algorithmic feed with no content in it simply isn’t a compelling product, let alone a negligently defective one that causes harm. A lot of smart people who we’ve had on this show and on The Verge these past few years have said these rulings are just an end run around 230 — just a way to make platforms liable for what, ultimately, is just speech, in a way that will cause more speech to be restricted. You’ll hear us talk a lot about that idea, and whether the growing calls to repeal Section 230 entirely have any logical connection to these cases, or whether they’re just politically opportunistic.
But there are many more ideas at play here and even more layers of complication. You will hear Casey and me even crash out a few times in this episode, because we have both been covering tech regulation for so long it feels silly to act like everything is working well for regular people, who have negative experiences with social media all of the time. Section 230 is three decades old now, and it’s unclear whether the world it was designed to help create ever came into existence.
You’ll hear Lauren talk about how the authors of Section 230 are open to changes, particularly around AI and speech online. At the same time, any changes to that law run headlong into the First Amendment and potentially open the door to government speech regulations at scale. Like I said, it’s complicated, and I’m very curious to hear what you all think about this, because it’s clear a lot of this is about to be up for grabs.