The UK has become the first major country to impose a legal requirement for internet age verification, and because it applies to any website or app accessible from the UK, it affects companies worldwide. Additionally, the US has recently revived a bill very similar to the British legislation.

While the law was presented as a way to prevent children from accessing adult websites, the reality is very different, and we're already seeing the privacy risks of good intentions being turned into bad legislation – with iMessage and FaceTime in the firing line …

The UK and US legislation

The UK's Online Safety Act (OSA) took effect on Friday, making websites and apps legally responsible for preventing kids from accessing "age-inappropriate content." Complying with the law requires companies to verify the ages of all their users.

We noted last week that very similar US legislation, the Kids Online Safety Act (KOSA), passed the Senate last year before stalling, but it has since been reintroduced in the House and looks likely to become law this year.

The four big problems

Massive overreach

First, while the legislation was pitched as targeting adult entertainment websites, it was later expanded to cover more than 200 types of content, much of it very vaguely defined. The British government's own summary of the content affected reveals just how vague it all is:

"Services must assess any risks to children from using their platforms and set appropriate age restrictions, ensuring that child users have age-appropriate experiences and are shielded from harmful content."

So far, this appears to include use of social media apps, as well as online access to information on birth control, sexual hygiene, and how to report sexual abuse. A law that claims to protect teenagers will in many cases make it harder for them to access the very information that helps them protect themselves. Some dating apps already require users to verify their identity through a private verification service.
Unregulated access to sensitive personal data

Second, the law doesn't tell websites and apps how they are supposed to verify the ages of their users, so services are making it up as they go along. In particular, there is concern about private "identity verification" services demanding personal data, such as copies of passports, in order to carry out age checks.

There have been many past examples of such companies failing to protect this highly sensitive data. US identity verification company AU10TIX, for example, was found to have exposed names, dates of birth, nationalities, identification numbers, and the types of documents uploaded, such as driver's licenses – complete with photos of the documents themselves. In short, these companies are unregulated and should absolutely not be entrusted with this kind of personal data.

Can easily be misused by governments

Third, we've already noted the inadvertent inclusion of innocuous websites and apps, but a repressive government could add new categories to the legislation at the stroke of a pen. If a certain US president doesn't like criticism from a political website, for example, he could add such sites to the categories covered by the law, making them harder to access and making people fear that their visits could now be used to identify them.

Includes private messaging services, like iMessage and FaceTime

Finally, and most egregiously of all, section 122 says companies are supposed to scan private messages for illegal content. That is of course impossible on end-to-end encrypted (E2EE) platforms like iMessage, FaceTime, and WhatsApp; the government simply waved its hands and told companies to figure out how to do it. While the government appears to be quietly backing down from its attempt to force Apple to provide a backdoor into iCloud data, this law looks set to reignite the broader fight over E2EE.