
Meta’s legal defeat could be a victory for children, or a loss for everyone

Why This Matters

The recent legal rulings against Meta and YouTube mark a significant shift in holding social media platforms accountable for harm caused to minors, challenging long-standing legal protections like Section 230. These decisions could reshape the responsibilities of tech companies and influence future regulations, impacting both industry practices and consumer safety. The cases signal a potential new era where platforms may be liable for the content and harm they facilitate, especially involving minors.


Adi Robertson is a senior tech and policy editor focused on online platforms and free expression. She has covered virtual and augmented reality, the history of computing, and more for The Verge since 2011.

Is social media not just bad, but illegally bad? Should tech companies pay for making it that way? According to two US juries — and no shortage of outside commentary — the answer to both questions is “yes.”

Earlier this week, two juries — one in New Mexico, one in Los Angeles — held Meta liable for a total of hundreds of millions of dollars for harming minors. YouTube was also found liable in Los Angeles, and both companies are appealing their losses. In one sense, the decisions were surprising. Meta and Google operate platforms for transmitting speech and are typically protected in a variety of ways by Section 230 and the First Amendment; it’s unusual for suits to clear these hurdles. In another sense, they feel inevitable. The web of 2026 has become almost synonymous with a few widely disliked for-profit platforms, and the harm they’ve caused is often tangible — but it’s still far from certain what this defeat will change, and what the collateral damage could be.

If these decisions survive appeal — which isn’t certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more “bellwether” cases in Los Angeles, a much larger group settlement could be reached down the road. Even at this early stage, it’s a victory for a legal theory that social media platforms should be treated like defective products — a strategy designed to get around the shield of Section 230, but one that’s often failed in court. “The California case specifically is the first time social media has ever had to face the staredown and judgment of a jury for specific personal injuries,” attorney Carrie Goldberg, who pushed forward major early social media liability suits, including an unsuccessful case against Grindr, told The Verge. “It’s the dawn of a new era.”


For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don’t change their business practices. What practices? In New Mexico, a jury was swayed by arguments that Meta had made statements misleading users about the safety of its platforms. In LA, the plaintiffs successfully claimed Instagram and YouTube were designed in a way that facilitated social media addiction that harmed a teenage user. Meta and Google (and other nervous companies) could plausibly change specific features or be more cautious in their public statements and disclosures. But each case depends on a set of highly specific circumstances, and there’s no one-size-fits-all answer about what needs to change.

Eric Goldman, a legal blogger and expert on Section 230, sees clear legal danger ahead for social media services. “These rulings indicate that juries are willing to impose major liability on social media providers based on claims of social media addiction,” Goldman wrote after the ruling. In an email to The Verge, he noted the issue was bigger than just juries. “Judges are certainly aware of the controversies around social media,” Goldman said. In the Los Angeles case and other upcoming bellwether trials, “the judges have not given social media defendants much benefit of the doubt, which is how the plaintiffs’ novel cases were able to reach trials in the first place.” It’s a situation, he says, that “does feel differently compared to a decade ago.”

Goldman pointed out that New York and California have also passed laws banning “addictive” social media feeds for teens — so even if an appeals court reverses the recent decisions, that won’t necessarily turn back the clock.

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and crude” content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users’ privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

