When Meta announced last week that “Teen Accounts are bringing parents more peace of mind,” they failed to mention that bringing parents peace of mind is largely all they do. Now, after piloting Teen Accounts on Instagram for a year, hundreds of millions of young people are being automatically enrolled in these new accounts across Messenger and Facebook.
But a report released the very same day, “Teen Accounts, Broken Promises,” says these tools don’t work. Its authors, researchers from NYU and Northeastern, advocacy groups including Fairplay and ParentsSOS, and former Meta executive Arturo Béjar, tested 47 of the safety tools bundled into Instagram’s Teen Accounts and found that just 17 percent worked as described. Nearly two-thirds were broken, ineffective, or quietly discontinued.
Given the gap between Meta’s marketing promise and the independent findings, Teen Accounts seem less about protecting teens and more about protecting Meta: less cure than sugar pill, meant to make parents and lawmakers feel better without adequately addressing the problem.
According to Meta, Teen Accounts limit who teens can message, reduce exposure to sensitive content, and give parents new supervision tools. Adam Mosseri, head of Instagram, said: “We want parents to feel good about their teens using social media.” But wanting parents to feel good and keeping kids safe aren’t the same: when researchers ran realistic scenarios, the safety features failed.
The report documents how Instagram’s design has contributed to tragedies like the deaths of 14-year-old Molly Russell and 16-year-old David Molak, both of whom were bombarded with harmful content or relentless bullying on the platform. In safety tests, teen accounts were still shown sexual material, violent videos, and self-harm content at “industrial scale,” while unknown adults could continue initiating conversations directly with kids. Meta’s own reporting tools rarely provided relief: only 1 in 5,000 harmed users received meaningful assistance.
Meta has largely denied the report’s findings, telling the BBC, “This report repeatedly misrepresents our efforts to empower parents and protect teens.”
Former Meta Director and report co-author Arturo Béjar told me, “The findings were surprisingly bad, and sadly their response predictable. Meta minimizes or dismisses any studies that don’t fit the image they want people to get, including their own studies, no matter how carefully made and communicated.” Béjar also testified before Congress in 2023 about warning Mark Zuckerberg, Adam Mosseri, and other leaders that Instagram was harming teen mental health.
“The report is constructive feedback, the recommendations proportionate. And I know from my work at Meta, that they could be implemented quickly and at low cost,” said Béjar.
If parents knew Instagram was unsafe, many would keep their teens off it. But Teen Accounts give the impression that guardrails are firmly in place. That false sense of security is exactly what Meta is selling: peace of mind for parents and plausible deniability for regulators, not protection for kids.
I recognize this pattern from my own time inside Meta. I spent nearly 15 years at the company, most recently as Director of Product Marketing for Horizon Worlds, its virtual reality platform. When I raised alarms about product stability and harms to kids, leadership’s focus was on reducing risk to the company, not making the product safer. At one point, there was even a discussion about whether it was appropriate to imply that parental controls existed where they didn’t. I’ve since become a federal whistleblower and an advocate for kids’ online safety.
Parents cannot afford to mistake peace of mind for actual harm reduction. Until real standards are in place, the safest choice is opting your teen out of social media altogether.
While this might seem extreme, let’s not forget that when the tobacco industry faced evidence that cigarettes caused cancer, it responded with light cigarettes and cartoon mascots. Meta’s Teen Accounts are the modern equivalent: a sop to worried parents and regulators, designed to preserve profit while avoiding real accountability. There were once even student smoking sections in high schools; now that the science on smoking’s harms is settled, we take steps to prevent children from buying cigarettes. Social media should be no different.
The Kids Online Safety Act (KOSA) currently in Congress offers one path toward real safety. KOSA’s duty of care provision would force social media companies to prioritize child welfare over shareholder profits. But Meta’s Teen Accounts represent exactly the kind of corporate theater that has historically convinced lawmakers to delay necessary regulation, allowing companies to continue extracting wealth from children’s attention while avoiding genuine accountability.
Other companies show it’s possible to do better. Pinterest, for example, has made teen accounts private by default. That means strangers can’t discover them through search, comments, or messages, and unlike Meta, Pinterest offers no way around this guardrail for users under 16. While this dents short-term profit, Pinterest CEO Bill Ready told Adam Grant that he hopes these actions inspire other tech companies to follow suit in prioritizing customer well-being as a long-term business strategy.
Meta has the resources and technical capacity to innovate more effectively, and it chooses not to. Instead, it offers kids ineffective solutions while pouring billions into projects like circumnavigating the globe with subsea fiber to reach more users and make more money.
Until KOSA passes or Meta can prove that these features actually work, parents should treat Teen Accounts for what they are: a PR strategy.
Your child is not safer because Meta says so—they are only safer when you keep them off these harmful platforms until the billionaires behind them can protect kids as effectively as they extract profit from them.