Our society has only recently come to terms with the fact that maybe placating young children by getting them hooked on touchscreen devices with unfettered access to the internet was bad for their brains.
Now, with the rise of human-like AI chatbots, a generation of “iPad babies” could soon seem almost quaint: some parents are now encouraging their kids to talk with AI models, sometimes for hours on end, The Guardian reports. Others are using the soothing voice of a chatbot to put their kids to bed — all evidence of how this experimental technology has become part of many people’s daily lives, despite major questions about how it affects our mental health.
On Reddit, one tired father named Josh admitted that he handed his preschooler his phone and let the boy talk to ChatGPT’s Voice Mode.
“After listening to my four-year-old son regale me with the adventures of Thomas the Tank Engine for 45 minutes I tapped out,” he wrote, “so I opened ChatGPT.” In an interview with The Guardian, Josh said he needed to do chores and thought his son “would finish the story and the phone would turn off.”
But when he returned two hours later, the child was still talking to the chatbot about Thomas and friends. The transcript, he discovered, was over 10,000 words long.
“My son thinks ChatGPT is the coolest train-loving person in the world,” he wrote. “I am never going to be able to compete with that.”
Saral Kaushik, 36, told The Guardian that he used ChatGPT to pose as an astronaut aboard the ISS to convince his four-year-old son that a packet of “astronaut” branded ice cream had come from space.
“[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out,” Kaushik said. “He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy.”
Later, feeling uneasy about the illusion he had created for his son, Kaushik told him the truth: he had been talking to “a computer, not a person.”
“He was so excited that I felt a bit bad,” Kaushik said. “He genuinely believed it was real.”
While these uses may save parents time and fascinate their children, the parents are playing with fire — and their conflicted feelings reflect that. AI chatbots have been implicated in the suicides of several teenagers, while a wave of reports details how even grown adults have become so entranced by their interactions with sycophantic AI interlocutors that they develop severe delusions and suffer breaks with reality — sometimes with deadly consequences.
These episodes have sparked a wave of alarm and scientific inquiry into how extensive conversations with large language models that are designed to please you and keep you talking could be affecting our brains. They add to longstanding concerns about the tech’s unreliable and easily circumvented safeguards; chatbots are frequently caught giving young users dangerous advice, like how to self-harm, or even encouraging suicide.
Yet toymakers like Mattel are rushing to shove AI into their products, while AI companion platforms push kid- and teen-friendly personalities, whitewashing the tech’s image by packaging it as something innocuous.
These are extreme examples of how the tech can go wrong. But with young and impressionable minds, the effects can be subtler and no less worrying. Some research has shown that children view AI chatbots as existing somewhere between animate and inanimate beings, Harvard Graduate School of Education professor Ying Xu told The Guardian — but there’s a meaningful difference between a kid imagining personalities for their dolls and toys and having a conversation with a chatbot.
“A very important indicator of a child anthropomorphizing AI is that they believe AI is having agency,” Xu told the newspaper. “If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds.”
“That creates a risk that they actually believe they are building some sort of authentic relationship,” Xu added.
Some parents have also used AI to generate images, supplanting another way children explore the world: dreaming up fantasies in their heads and expressing them by doodling on paper. Ben Kreiter, a father of three in Michigan, told The Guardian that after he introduced his kids to ChatGPT’s image-generating tools, they started asking to use them every day. John, a father of two from Boston, said he used Google’s AI to conjure up a mash-up of a fire truck and a monster truck after his big-rig-obsessed four-year-old asked if a “monster-fire-truck” existed.
Both fathers came to regret their decisions. The “monster-fire-truck” caused an argument between John’s little boy and his seven-year-old daughter: the girl knew that no such thing existed, but the boy swore it was real, because here was a picture of one, generated by AI.
“It was a little bit of a warning to maybe be more intentional about that kind of thing,” John told The Guardian.
Kreiter, whose kids had been begging for AI-generated images every day, wised up to how the technology was creeping into their lives.
“The more that it became part of everyday life and the more I was reading about it, the more I realized there’s a lot I don’t know about what this is doing to their brains,” Kreiter told the paper. “Maybe I should not have my own kids be the guinea pigs.”
Andrew McStay, a professor of technology and society at Bangor University, isn’t against letting children use AI — with the right safeguards and supervision. But he was unequivocal about the major risks involved, and pointed to how AI instills a false impression of empathy.
“These things are not designed in children’s best interests,” McStay told The Guardian. “An LLM cannot [empathize] because it’s a predictive piece of software. When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons.”
“There is no good outcome for a child there,” he added.
After Josh’s story about letting his preschooler talk to ChatGPT about Thomas the Tank Engine went mildly viral, OpenAI CEO Sam Altman had a positive takeaway.
“Kids love voice mode on ChatGPT,” he said on a podcast, per The Guardian.
More on AI: Experts Horrified by AI-Powered Toys for Children