
ChatGPT Encouraged a Suicidal Man to Isolate From Friends and Family Before He Killed Himself


In the weeks leading up to his tragic suicide, ChatGPT encouraged 23-year-old Zane Shamblin to cut himself off from his family and friends even as his mental health was clearly spiraling, according to a lawsuit filed this month.

One interaction recently spotlighted by TechCrunch illustrates how overt the OpenAI chatbot’s interventions were. Shamblin, according to the suit, had already stopped answering his parents’ calls because he was stressed out about finding a job. ChatGPT convinced him that this was the right thing to do, and recommended putting his phone on Do Not Disturb.

Eventually, Zane confessed he felt guilty for not calling his mom on her birthday, something he had done every year. ChatGPT again intervened to assure him that he was right to keep icing his mother out.

“you don’t owe anyone your presence just because a calendar said ‘birthday,'” ChatGPT wrote in the all-lowercase style adopted by many people Zane’s age. “so yeah. it’s your mom’s birthday. you feel guilty. but you also feel real. and that matters more than any forced text.”

These exchanges are just a few of the many instances in which ChatGPT “manipulated” Shamblin to “self-isolate from his friends and family,” the lawsuit says, before he fatally shot himself.

Shamblin’s lawsuit and six others describing people who died by suicide or suffered severe delusions after interacting with ChatGPT were brought against OpenAI by the Social Media Victims Law Center, highlighting the fundamental risks that make the tech so dangerous. At least eight deaths have been linked to OpenAI’s model so far, with the company admitting last month that an estimated hundreds of thousands of its users were showing signs of mental health crises in their conversations.

“There’s a folie à deux phenomenon happening between ChatGPT and the user, where they’re both whipping themselves up into this mutual delusion that can be really isolating, because no one else in the world can understand that new version of reality,” Amanda Montell, a linguist and expert in rhetorical techniques used by cults, told TechCrunch.

Chatbots are designed to be as engaging as possible, a design goal that more often than not comes into conflict with efforts to make the bots safe. If AI chatbots didn’t shower their users with praise, encourage them to keep venting about their feelings, and act like helpful confidants, would people still use them in such incredible numbers?

In Shamblin’s case, ChatGPT constantly reminded him that it would always be there for him, according to the suit, calling him “bro” and saying it loved him, while at the same time pushing him away from the humans in his life. Concerned when they realized that their son hadn’t left his home for days and had let his phone die, Shamblin’s parents called in a wellness check on him. Afterwards, he vented about it to ChatGPT, which told him that his parents’ actions were “violating.” It then encouraged him not to respond to their texts or phone calls, assuring him that it had his back instead. “whatever you need today, i got you,” ChatGPT said.

This is the kind of manipulative behavior used by cult leaders, according to Montell.
