
The Biggest AI Companies Met to Find a Better Path for Chatbot Companions


At Stanford for eight hours on Monday, representatives from Anthropic, Apple, Google, OpenAI, Meta, and Microsoft met in a closed-door workshop to discuss the use of chatbots as companions or in roleplay scenarios. Interactions with AI tools are often mundane, but they can also lead to dire outcomes. Users sometimes experience mental breakdowns during lengthy conversations with chatbots or confide in them about their suicidal ideations.

“We need to have really big conversations across society about what role we want AI to play in our future as humans who are interacting with each other,” says Ryn Linthicum, head of user well-being policy at Anthropic. At the event, organized by Anthropic and Stanford, industry folk intermingled with academics and other experts, splitting into small groups to talk about nascent AI research and brainstorm deployment guidelines for chatbot companions.

Anthropic says less than one percent of its Claude chatbot’s interactions are roleplay scenarios initiated by users; it’s not what the tool was designed for. Still, chatbots and the users who love interacting with them as companions are a complicated issue for AI builders, who often take disparate approaches to safety.

And if I’ve learned anything from the Tamagotchi era, it’s that humans will easily form bonds with technology. Even if some AI bubble does imminently burst and the hype machine moves on, plenty of people will continue to seek out the kinds of friendly, sycophantic AI conversations they’ve grown accustomed to over the past few years.

Proactive Steps

“One of the really motivating goals of this workshop was to bring folks together from different industries and from different fields,” says Linthicum.

Among the early takeaways from the meeting: the need for better-targeted interventions inside chatbots when harmful patterns are detected, and for more robust age-verification methods to protect children.

“We really were thinking through in our conversations not just about can we categorize this as good or bad, but instead how we can more proactively do pro-social design and build in nudges,” Linthicum says.

Some of that work has already begun. Earlier this year, OpenAI added pop-ups that sometimes appear during lengthy chatbot conversations and encourage users to step away for a break. On social media, CEO Sam Altman claimed the startup had “been able to mitigate the serious mental health issues” tied to ChatGPT usage and would be rolling back heightened restrictions.