
Character.AI is banning minors from AI character chats


Character.AI is gradually shutting down chats for people under 18 and rolling out new ways to figure out if users are adults. The company announced Wednesday that under-18 users will be immediately limited to two hours of “open-ended chats” with its AI characters, and that limit will shrink to a complete ban from chats by November 25th.

In the same announcement, the company says it’s rolling out a new in-house “age assurance model” that classifies a user’s age based on the type of characters they choose to chat with, in combination with other on-site or third-party data. Both new and existing users will be run through the age model, and users flagged as under 18 will automatically be directed to the company’s teen-safe version of its chat, which it rolled out last year, until the November cutoff. Adults mistaken for minors can prove their age through the third-party verification service Persona, which will handle the sensitive data necessary to do so, such as a photo of a government ID.

After the ban, teens will still be allowed on the site to revisit old chats and use non-chat features, such as creating characters and making videos, stories, and streams featuring characters. However, Character.AI CEO Karandeep Anand acknowledged to The Verge that users spend a “much smaller percentage” of their time on these features than on the company’s flagship chatbot conversations, which is why limiting chatbots is a “very, very bold move” for the company, he said.

Anand told The Verge in an interview that “sub-10 percent” of the company’s userbase self-reports as being under the age of 18. He added that the company does not have a way to find out the “real numbers” until it starts using the new age detection model. The number of minors has declined over time, he said, as Character.AI has rolled out restrictions for underage users. “When we started making the changes of under 18 experiences earlier in the year, our under 18 user base did shrink, because those users went into other platforms, which are not as safe,” Anand said.

Character.AI is banning minors from chatting with AI characters. Image: Character.AI

Character.AI has been sued over allegations of wrongful death, negligence, and deceptive trade practices by parents who say their children were drawn into inappropriate or harmful relationships with chatbots. The lawsuits target the company and its founders, Noam Shazeer and Daniel De Freitas, along with Google, the founders’ former workplace. Character.AI has repeatedly modified its services in the wake of the suits, including by directing users to the National Suicide Prevention Lifeline when certain phrases related to self-harm or suicide are used in the chat.

Lawmakers are attempting to curb the growing industry of AI companions. A California bill passed in October requires developers to make clear to users that the chatbots are AI, not human, and a federal law proposed Tuesday would outright ban providing AI companions to minors.

In addition to the teen model, the company has previously launched features like a voluntary “Parental Insights” feature, which sends parents a summary of a user’s activity, though not a complete log of their chats. But these features rely on a user’s self-reported age, which is easily faked. Other AI companies have recently imposed restrictions on teen users, like Meta, which changed its policies after Reuters reported on internal rules allowing AI chatbots to talk to minors in sensual ways.


The company appears to anticipate that the move will disappoint its teen userbase: Character.AI says in its announcement that it is “deeply sorry” for eliminating “a key feature of our product” that most teens use “within the bounds of our content rules.”
