Is social media getting any safer for children? Not so much, according to new research suggesting that many young people are still encountering troubling situations and inappropriate content online.

A study reported on by Time magazine suggests that 60% of teens aged 13 to 15 are encountering unsafe content or unwanted messages on Instagram, despite parent company Meta's introduction of Teen Accounts and its other efforts to improve safety on the platform, including using AI to sort user accounts. According to the study, whose findings Meta reportedly disputes, 40% of young teens who received unwanted messages on Instagram said the senders appeared to want to start a sexual or romantic relationship with them.

Time reports that the study was completed by two child-advocacy groups, ParentsTogether Action and The Heat Initiative, and created by Design It for Us, but a link to the report does not yet appear to be available from Time or from the Heat Initiative's website. Emails to ParentsTogether and the Heat Initiative were not immediately returned, and Meta did not return an email seeking a response to the study.

This study follows a similar one published in September, produced with the involvement of a former Meta executive, Arturo Bejar, that was highly critical of child safety on Instagram. That report was titled "Teen Accounts, Broken Promises: How Instagram Is Failing to Protect Minors." Meta has said it has made changes to direct messaging on its platforms to address child safety.