
Meta wants to block data about social media use, mental health in child safety trial


As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings.

The company has petitioned the judge to exclude certain research studies and articles around social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta’s financial resources, the personal activities of employees, and Mark Zuckerberg’s time as a student at Harvard University.

Meta’s requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. The goal is to ensure the jury is presented with relevant facts rather than irrelevant or prejudicial information, and that the defendant receives a fair trial.

Meta has emphasized in pretrial motions that the only question for the jury is whether Meta violated New Mexico’s Unfair Practices Act in its alleged handling of child safety and youth mental health, and that other information, such as Meta’s alleged election interference, misinformation, or privacy violations, shouldn’t be factored in.

But some of the requests seem unusually aggressive, two legal scholars tell WIRED, including a request that the company’s AI chatbots not be mentioned in court, as well as the extensive reputation protection Meta is seeking. WIRED reviewed Meta’s in limine requests, which were obtained from the New Mexico courts through a public records request.

These motions are part of a landmark case brought by New Mexico Attorney General Raúl Torrez in late 2023. The state is alleging that Meta failed to protect minors from online solicitation, human trafficking, and sexual abuse on its platforms. It claims the company proactively served pornographic content to minors on its apps and failed to enact certain child safety measures.