California enacts age-gate law for app stores

California has become the latest state to age-gate app stores and operating systems. AB 1043 is one of several internet regulation bills that Governor Gavin Newsom signed into law on Monday, including ones related to social media warning labels, chatbots and deepfake pornography.

The State Assembly passed AB 1043 with a 58-0 vote in September. The legislation received backing from notable tech companies such as Google, OpenAI, Meta, Snap and Pinterest. The companies claimed the bill offered a more balanced approach to age verification, with more privacy protection, than laws passed in other states.

Unlike under the laws in Utah and Texas, children will still be able to download apps without their parents' consent, and the law doesn't require anyone to upload a photo ID. Instead, the idea is that a parent will enter their child's age while setting up a device for them — so it's more of an age gate than age verification. The operating system and/or app store will place the user into one of four age categories (under 13, 13 to 15, 16 to 17, or adult) and make that information available to app developers.

Enacting AB 1043 means California joins the likes of Utah, Texas and Louisiana in requiring app stores to check users' ages (the UK has a broad age verification law in place too). Apple has detailed how it plans to comply with the Texas law, which takes effect on January 1, 2026; the California legislation takes effect one year later, on January 1, 2027.

AB 56, another bill Newsom signed Monday, will force social media services to display warning labels that inform kids and teens about the risks of using such platforms. These messages will appear the first time a user opens an app each day, again after three hours of cumulative use, and once an hour thereafter. This law also takes effect on January 1, 2027.

Elsewhere, California will require AI chatbots to have guardrails that prevent self-harm content from appearing and that direct users who express suicidal ideation to crisis services. Platforms will need to tell the Department of Public Health how they're addressing self-harm and share details on how often they display crisis prevention notifications.

The legislation comes after lawsuits were filed against OpenAI and Character AI over teen suicides. Last month, OpenAI announced plans to automatically identify teen ChatGPT users and restrict how they can use the chatbot.
