Australia is joining a wave of governments around the globe in regulating how kids spend their time online. On December 10th, most major social media platforms will boot children under 16 in the country from their services. Under the law, social platforms will also need to implement a “reasonable” age verification method there — while critics argue kids will get around it anyway.
These changes stem from Australia’s Online Safety Amendment (Social Media Minimum Age) Bill, which passed in November 2024. They’re not only a big deal for Australian youth, but also a preview of a policy that’s been floated in numerous other places. Here’s a rundown of what the new law means and how it will affect each platform.
What is the Social Media Minimum Age Bill?
Australia passed the Online Safety Amendment (Social Media Minimum Age) Bill last year as an amendment to the Online Safety Act 2021, which holds a wide range of online platforms accountable for hosting harmful and illegal content. The expanded version requires social platforms to take “reasonable steps” to remove accounts belonging to users under 16, and to prevent those users from creating new ones, when they detect a user is from Australia. For now, it applies to at least 11 services: Facebook, Instagram, TikTok, Snapchat, X, Reddit, YouTube, Twitch, Kick, Threads, and even smaller platforms like the ByteDance-owned Lemon8. Bluesky is also banning users under 16.
Kids aren’t barred from browsing platforms while logged out, meaning they’ll still be able to do things like view Reddit discussions or individual Threads and Instagram posts. Without an account, kids can’t access a curated feed, communicate with other users, create posts, receive notifications, or save content.
The rule applies to platforms designed for the “sole purpose, or a significant purpose” of allowing interaction between two or more users, including ones that let users post content and “link to, or interact with, some or all of the other end-users.” This doesn’t include online gaming services, like Xbox Live, nor does it cover standalone messaging apps, such as Messenger and WhatsApp. Other apps not covered by the law include Discord, Pinterest, Roblox, and YouTube Kids. But the array of covered services could be reevaluated in the future, so the list may not be final.
Australia’s law doesn’t lay out how platforms should identify users who are under 16, only that they can’t require just a government ID, nor can they retain data associated with the age verification process. In August, Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts published an extensive report on different age verification methods, including ID checks, age estimation with face scans, and age inference, which involves a platform using signals to “guess” how old a user is. These signals can include an account’s age and a user’s behavior on the platform. Despite the flaws with each of these methods, the department determined that platforms can perform age verification “privately, efficiently and effectively,” though it acknowledged that there isn’t a “single ubiquitous solution.”
Why was the ban passed?