
Meta, Google under attack as court cases bypass 30-year-old legal shield

Why This Matters

The weakening of legal protections like Section 230 signals a significant shift in accountability for tech giants such as Meta and Google, potentially leading to increased regulation and liability. This development could reshape how online platforms manage content, impacting both industry practices and consumer safety. As lawsuits challenge longstanding legal shields, the tech industry faces a pivotal moment in balancing innovation with responsibility.


Meta Platforms CEO Mark Zuckerberg arrives outside court to take the stand at trial in a key test case accusing Meta and Google's YouTube of harming kids' mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026. Mike Blake | Reuters

For the last three decades, internet giants have been able to avoid legal exposure for content on their platforms, thanks to a law that differentiates the companies from online publishers. But those safeguards appear to be weakening.

Meta and Google, which dominate the U.S. digital ad market, find themselves as defendants in a host of lawsuits that collectively serve to undermine the long-held notion that they have legal protection for what surfaces on their sites, apps and services. Companies like TikTok and Snap are in the same predicament.

The unifying aspect of the recent cases is that they're crafted to circumvent Section 230 of the Communications Decency Act, which Congress passed in 1996 and President Bill Clinton signed into law. Established in the early days of the internet, the law protects websites from being sued over content posted by their users, and allows them to act as moderators without being held liable for what stays up.

Last week, a jury in New Mexico found Meta liable in a case involving child safety, while jurors in Los Angeles held the Facebook parent and Google's YouTube negligent in a personal injury trial. Days after those verdicts were revealed, victims of the notorious sex offender Jeffrey Epstein filed a class action lawsuit against Google and the Trump administration over allegations related to the wrongful disclosure of personal information. In that complaint, the plaintiffs argue that Google's AI Mode, which serves up AI-powered summaries and links, is "not a neutral search index," a clear effort to make the case that Google isn't just a platform sitting between users and the information they seek.

"The plaintiffs' bar is winning the war against section 230 through systematic, relentless litigation that is causing there to be divots and chinks in its protection," said Eric Goldman, a law professor at Santa Clara University School of Law, in an interview.


The stakes are massive as the technology sector exits the era of traditional online search and social networking and enters a world defined by artificial intelligence, where models designed by the owners of the largest platforms are serving up conversational chats, pictures and videos that can range from controversial to potentially illegal. The financial penalties to date have been minimal — less than $400 million in damages between the two verdicts last week — but the cases establish a troubling precedent for tech giants that are betting their future on AI.

"For so long, tech companies have used Section 230 as an excuse to avoid taking meaningful action to protect users, but especially kids from egregious harms, harassment and abuse, frauds and scams," Sen. Brian Schatz (D-Hawaii) said in March during a U.S. Senate Commerce Committee hearing tied to the 30th anniversary of Section 230. "It's not that they don't know what's happening or even why it's happening. It's that to do something about it would be to hurt their bottom line. And so long as federal law provides a shield, why even bother?"

Meta declined to comment for this story. Google didn't respond to a request for comment. Both companies said they plan to appeal last week's verdicts.

Politicians on both sides of the aisle have proposed all sorts of reforms to Section 230 over the years, and company executives have faced public grilling in congressional hearings over the alleged harms caused by their platforms. President Donald Trump, during his first term in office, supported greater restrictions on social media companies for what he viewed as their bias against him. And Joe Biden, when he was a presidential hopeful in 2020, told The New York Times editorial board that Section 230 "should be revoked" for tech platforms including Facebook, which he said was "propagating falsehoods they know to be false."

Nadine Farid Johnson, policy director of the Knight First Amendment Institute at Columbia University, said about legislative efforts that "none of those things have fully come to fruition, in part because they are such complicated questions." But while the issue has stagnated in Washington, D.C., plaintiff attorneys are finding other routes toward holding big tech companies accountable.

Meta Platforms CEO Mark Zuckerberg testifies before Los Angeles Superior Court Judge Carolyn Kuhl at a trial in a key test case accusing Meta and Google's YouTube of harming kids' mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026 in a courtroom sketch. Mona Edwards | Reuters

The verdict last week against Meta and YouTube was the first time a jury found social media platforms liable for what plaintiff attorneys alleged was intentionally engineering addiction in minors with their products. The case went after how the platforms were designed, not just what content they carried. Plaintiffs argued that the combination of features like autoplay, recommendation algorithms, notifications and certain filters acted like "digital casinos," leading to serious mental health problems for a young girl who claimed she couldn't stop using the apps.

The class action suit against Google, filed last week by a plaintiff with the pseudonym Jane Doe, alleged that the company's AI Mode created its own summaries and links, exposing Epstein victims' personal identifying information (PII), including names, phone numbers and email addresses. Kevin Osborne, the plaintiff's attorney in the case, told CNBC in an interview that the suit was filed after Google declined a request to take down the victims' contact information from AI Mode. Osborne said the case has to move quickly because of how fast the information is spreading.

"We filed when we filed because we needed to act as soon as possible to get this stuff taken down," said Osborne, a partner at Erickson Kramer Osborne in San Francisco. "People are getting calls from total strangers and death threats. It's a nightmare."

Osborne added that the timing was "serendipitous" given Meta's court defeats last week, but he said the cases overlap in that they all involve efforts by the plaintiffs to skirt Section 230. In his case, Osborne said, "this is AI mode coming up with its own content and that's something that's not been explored very thoroughly by the courts."
Matthew Bergman, one of the lawyers representing the plaintiffs in the Los Angeles case, testified before a Senate committee in March and said the tech industry has relied on overly broad interpretations of Section 230 in order "to evade all possible legal accountability simply because third-party content is found somewhere in the causal chain of their misconduct."

Bergman said he looked closely at a 2021 ruling in an appeals court involving allegations about the role a Snapchat feature played in a fatal car crash. The court reversed an earlier decision to dismiss the case under Section 230, citing the plaintiff's allegations that Snap's negligent design incentivized young people to drive recklessly. "I charted a very narrow legal theory that might legally permit certain cases brought by parents to proceed despite Section 230," Bergman told lawmakers.

The evidence presented in Los Angeles bolstered the plaintiff's arguments that Meta and YouTube executives knew of their products' design harms and failed to adequately address them. At a press briefing about the case on Monday, Bergman said "the best way to prove our case is through their own documents."

In the Google AI Mode suit, the plaintiff also pointed to design flaws related to the public display of personal information. "Google is intentionally furnishing that PII in a way designed, or at least substantially certain, to fuel harassment and fear," the suit says. Osborne expanded on that idea. "Google didn't just provide our client's email address," he said. "They created a link, so when you're reading the content, looking at AI mode, all you've got to do is click a button and you've generated an email directly to the [Epstein] survivor."


It's not the first time Google has been sued over how its AI interacted with users, an issue that has also created legal challenges for ChatGPT creator OpenAI. In March, the father of Jonathan Gavalas filed a lawsuit against Google, accusing the Gemini chatbot of convincing his son to carry out a series of missions, including staging a "catastrophic accident." The younger Gavalas then died by suicide at the instruction of Gemini, the lawsuit alleges. In January, Google settled with families who sued the company and Character.AI, alleging their technology caused harm to minors, including suicides. And last year, OpenAI was sued by a family who blamed ChatGPT for their teenage son's death by suicide.

Supreme Court?

... continue reading