Ever since a lawsuit accused ChatGPT of acting as a teen's "suicide coach," OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it has hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained that its Expert Council on Wellness and AI began taking shape after the company started informally consulting with experts on parental controls earlier this year. The group has now been formalized, bringing together eight "leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health" to help steer ChatGPT updates.

One priority was finding "several council members with backgrounds in understanding how to build technology that supports healthy youth development," OpenAI said, "because teens use ChatGPT differently than adults."

That effort includes David Bickham, a research director at Boston Children's Hospital who has closely monitored how social media affects kids' mental health, and Mathilde Cerioli, chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, with a particular focus on "how AI intersects with child cognitive and emotional development."

These experts could help OpenAI better understand how safeguards can fail kids during extended conversations, and how to keep kids from being especially vulnerable to so-called "AI psychosis," a phenomenon in which longer chats appear to trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that "little kids learn from characters" already, as they do when watching Sesame Street, and form "parasocial relationships" with those characters. AI chatbots could be the next frontier, possibly filling teaching roles, if we learn more about the way kids bond with chatbots, Bickham suggested.
"How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?" Bickham posited.

Cerioli closely monitors AI's influence in kids' worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to "become unable to handle contradiction," Le Monde reported, especially "if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities."