The explosive growth of AI chatbots in the three years since ChatGPT launched in 2022 has started to have some noticeable, profound, and honestly disturbing effects on some users. There's a lot to unpack there, and it can get pretty complicated. So I'm very excited to talk with today's guest, New York Times reporter Kashmir Hill, who has spent the past year writing thought-provoking features about the ways chatbots can affect our mental health.

One of Kashmir's recent stories was about a teenager, Adam Raine, who died by suicide in April. After his death, his family was shocked to discover that he'd been confiding deeply in ChatGPT for months. They were also surprised to find, in the transcripts, a number of moments when ChatGPT seemed to guide him away from telling his loved ones what he was going through. And it's not just ChatGPT: several families have filed wrongful death suits against Character AI, alleging that a lack of safety protocols on the company's chatbots contributed to their teenage kids' deaths by suicide.

Then there are the AI-induced delusions. You'll hear us talk about this at length, but pretty much every tech and AI reporter (honestly, maybe every reporter, period) has seen an uptick over the past year in people writing in with some grand or disturbing discovery that they say ChatGPT sparked. Some of those emails can be genuinely unsettling. And as you'll hear Kashmir explain, plenty of the people who fall into these delusional spirals had no apparent history of mental illness.

It's not surprising that a lot of people want somebody to do something about it, but the who and the how are hard questions. Regulation of any kind seems to be pretty much off the table right now (we'll see), so that leaves the companies themselves. You'll hear us touch on this a bit, but not long after we recorded this conversation, OpenAI CEO Sam Altman wrote a blog post about new features that would theoretically, and eventually, identify users' ages and stop ChatGPT from discussing suicide with teens. But as you'll hear us discuss, it's a big open question whether those guardrails will actually work, how they'll be built, and when they'll arrive.

If you'd like to read more on what we talked about in this episode, check out the links below.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

If you or someone you know is considering suicide or is anxious, depressed, upset, or needs to talk, there are people who want to help.

In the US:
Crisis Text Line: Text HOME to 741-741 from anywhere in the US, at any time, about any type of crisis.
988 Suicide & Crisis Lifeline: Call or text 988 (formerly known as the National Suicide Prevention Lifeline). The original phone number, 1-800-273-TALK (8255), is available as well.
The Trevor Project: Text START to 678-678 or call 1-866-488-7386 at any time to speak to a trained counselor.

Outside the US:
The International Association for Suicide Prevention lists suicide hotlines by country.
Befrienders Worldwide has a network of crisis helplines active in 48 countries.