Meta on Wednesday introduced new safety features for teen users, including enhanced direct messaging protections designed to prevent "exploitative content." Teens will now see more information about who they're chatting with, such as when the Instagram account was created, along with other safety tips to help them spot potential scammers. Teens will also be able to block and report accounts in a single action.

"In June alone, they blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice," the company said in a release, referring to teen users.

The features are part of a broader push by Meta to protect teens and children on its platforms, following mounting scrutiny from policymakers who have accused the company of failing to shield young users from sexual exploitation.

Meta said it removed nearly 135,000 Instagram accounts earlier this year that were sexualizing children on the platform. The removed accounts had been leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children. The takedown also included 500,000 Instagram and Facebook accounts that were linked to the original profiles.