Some Android users are starting to see a new privacy safeguard pop up in Google Messages: images flagged as containing nudity are blurred before you open them. It's part of Google's Sensitive Content Warning system, a feature designed to protect people from receiving unwanted or explicit photos.

When enabled, as Google's Help Center post describes, the phone automatically scans images for nudity, blurs anything that looks explicit, and shows a warning before you view, send or forward it. The detection happens entirely on your device, so flagged content isn't uploaded to Google's servers. Along with the blur, the system also offers resources on what to do if you're dealing with sensitive images.

The setting isn't switched on by default for adults, but teens using supervised accounts will have it locked in place unless a parent changes it through the Google Family Link app. And while the tool is meant to help, Google admits there's a chance it might flag harmless images by mistake.

How to enable or disable the feature

For adults who want to turn the warnings on or off, the toggle is under Google Messages Settings / Protection & Safety / Manage sensitive content warnings / Warnings in Google Messages.

The nudity detection feature is part of SafetyCore on devices running Android 9 and later. SafetyCore also includes features Google has been developing to protect against scams and dangerous links sent via text and to verify contacts.

Measuring the feature's effectiveness

Filters that screen for objectionable images have become more sophisticated as AI has improved their understanding of context. "Compared to older systems, today's filters are far more adept at catching explicit or unwanted content, like nudity, with fewer mistakes," says Patrick Moynihan, the co-founder and president of Tracer Labs. "But they're not foolproof. Edge cases, like artistic nudity, culturally nuanced images or even memes, can still trip them up."

Moynihan says his company combines AI systems with Trust ID tools to flag content without compromising privacy. "Combining AI with human oversight and continuous feedback loops is critical to minimizing blind spots and keeping users safe," he says.

Compared with Apple's iOS operating system, Android can offer more flexibility. However, its openness to third-party app stores, sideloading and customization creates more potential entry points for the kind of content Google is trying to protect people against. "Android's decentralized setup can make consistent enforcement trickier, especially for younger users who might stumble across unfiltered content outside curated spaces," Moynihan says.

'Kids can unblur it instantly'

While Apple does offer Communication Safety features that parents can turn on, Android's ability to enable third-party monitoring tools "makes this kind of protection easier to roll out at scale and more family-friendly," says Titania Jordan, an author and chief parenting officer at Bark Technologies, which makes digital tools to protect children.

Jordan says mobile operating systems have not made it easy for parents to proactively protect against content like nude images. "Parents shouldn't have to dig through system settings to protect their kids," she says. She points out that Google's new feature only blurs images temporarily.
"Kids can unblur it instantly," she says, "That's why this needs to be paired with ongoing conversations about pressure, consent, and permanence, plus monitoring tools that work beyond just one app or operating system." According to Moynihan, making the system automatically opt-out for adults and opt-in for minors is a practical way to offer some initial protection. But he says, "The trick is keeping things transparent. Minors and their guardians need clear, jargon-free info about what's being filtered, how it works, and how their data is protected."