
Meta's Teen Accounts Are Sugar Pills for Parents, Not Safety for Kids


When Meta announced last week that “Teen Accounts are bringing parents more peace of mind,” the company failed to mention that bringing parents peace of mind is largely all they do. Now, after a year of piloting Teen Accounts on Instagram, Meta is automatically enrolling hundreds of millions of young people in these accounts across Messenger and Facebook.

But a report released the very same day, “Teen Accounts, Broken Promises,” by researchers from NYU and Northeastern, advocacy groups including Fairplay and ParentsSOS, and former Meta executive Arturo Béjar, says these tools don’t work. After testing 47 of the safety tools bundled into Instagram’s Teen Accounts, the researchers found that just 17 percent worked as described. Nearly two-thirds were broken, ineffective, or quietly discontinued.

The contrast between Meta’s marketing promise and the independent findings suggests Teen Accounts are less about protecting teens than about protecting Meta: less cure than sugar pill, meant to make parents and lawmakers feel better without adequately addressing the problem.

According to Meta, Teen Accounts limit who teens can message, reduce exposure to sensitive content, and give parents new supervision tools. Adam Mosseri, head of Instagram, said: “We want parents to feel good about their teens using social media.” But wanting parents to feel good and keeping kids safe aren’t the same: when researchers ran realistic scenarios, the safety features failed.

The report documents how Instagram’s design has contributed to tragedies like the deaths of 14-year-old Molly Russell and 16-year-old David Molak, both of whom were bombarded with harmful content or relentless bullying on the platform. In safety tests, teen accounts were still shown sexual material, violent videos, and self-harm content at “industrial scale,” while unknown adults could continue initiating conversations directly with kids. Meta’s own reporting tools rarely provided relief: only 1 in 5,000 harmed users received meaningful assistance.

Meta has largely denied the report’s findings, telling the BBC, “This report repeatedly misrepresents our efforts to empower parents and protect teens.”

Former Meta Director and report co-author Arturo Béjar told me, “The findings were surprisingly bad, and sadly their response predictable. Meta minimizes or dismisses any studies that don’t fit the image they want people to get, including their own studies, no matter how carefully made and communicated.” Béjar also testified before Congress in 2023 about warning Mark Zuckerberg, Adam Mosseri, and other leaders that Instagram was harming teen mental health.

“The report is constructive feedback, the recommendations proportionate. And I know from my work at Meta, that they could be implemented quickly and at low cost,” said Béjar.

If parents knew Instagram was unsafe, many would keep their teens off it. But Teen Accounts give the impression that guardrails are firmly in place. That false sense of security is exactly what Meta is selling: peace of mind for parents and plausible deniability for regulators, not protection for kids.

I recognize this pattern from my own time inside Meta. I spent nearly 15 years at the company, most recently as Director of Product Marketing for Horizon Worlds, its virtual reality platform. When I raised alarms about product stability and harms to kids, leadership’s focus was on reducing risk to the company, not making the product safer. At one point, there was a discussion about whether it was appropriate to imply parental controls existed where they didn’t. I have since become a federal whistleblower and an advocate for kids’ online safety.
