On Wednesday, Character.AI announced it will bar anyone under the age of 18 from open-ended chats with its AI characters starting on November 25, implementing one of the most restrictive age policies yet among AI chatbot platforms. The company faces multiple lawsuits from families who say its chatbots contributed to teenagers' deaths by suicide.
Over the next month, Character.AI says it will ramp down chatbot use among minors by identifying them and placing a two-hour daily limit on their chatbot access. The company plans to use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts. On November 25, those users will no longer be able to create or talk to chatbots, though they will still be able to read their previous conversations. The company said it is working to build alternative features for users under 18, such as the ability to create videos, stories, and streams with AI characters.
Character.AI CEO Karandeep Anand told The New York Times that the company wants to set an example for the industry. “We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” Anand said in the interview. The company also plans to establish an AI safety lab.
The platform currently has about 20 million monthly users, with fewer than 10 percent self-reporting as under 18, according to Anand. Users pay a monthly subscription fee starting at about $8 to chat with custom AI companions. (We first covered the service in September 2022 by interviewing a personification of the operating system Linux.) Until recently, Character.AI did not verify ages when people signed up.