As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings.
The company has petitioned the judge to exclude certain research studies and articles around social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta’s financial resources, the personal activities of employees, and Mark Zuckerberg’s time as a student at Harvard University.
Meta’s requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. This is to ensure the jury is presented with facts and not irrelevant or prejudicial information and that the defendant is granted a fair trial.
Meta has emphasized in pretrial motions that the only question the jury should be asked is whether Meta violated New Mexico’s Unfair Practices Act through its alleged handling of child safety and youth mental health, and that other information—such as Meta’s alleged election interference, misinformation, or privacy violations—shouldn’t be factored in.
But some of the requests seem unusually aggressive, two legal scholars tell WIRED, including a request that the company’s AI chatbots not be mentioned in court, and the extensive reputation protection Meta is seeking. WIRED was able to review Meta’s in limine requests through a public records request to the New Mexico courts.
These motions are part of a landmark case brought by New Mexico attorney general Raúl Torrez in late 2023. The state is alleging that Meta failed to protect minors from online solicitation, human trafficking, and sexual abuse on its platforms. It claims the company proactively served pornographic content to minors on its apps and failed to enact certain child safety measures.
The state complaint details how its investigators were easily able to set up fake Facebook and Instagram accounts posing as underage girls, and how these accounts were soon sent explicit messages and shown algorithmically amplified pornographic content. In another test case cited in the complaint, investigators created a fake account posing as a mother looking to traffic her young daughter. According to the complaint, Meta did not flag suggestive comments that other users left on her posts, nor did it shut down some of the accounts that were reported for violating Meta’s policies.
Meta spokesperson Aaron Simpson told WIRED via email that the company has, for over a decade, listened to parents, experts, and law enforcement, and has conducted in-depth research, to “understand the issues that matter the most,” and to “use these insights to make meaningful changes—like introducing Teen Accounts with built-in protections and providing parents with tools to manage their teens’ experiences.”
“While New Mexico makes sensationalist, irrelevant and distracting arguments, we're focused on demonstrating our longstanding commitment to supporting young people,” Simpson said. “We’re proud of the progress we’ve made, and we’re always working to do better.”