Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: chatbots

Can AI Be Your Therapist? 3 Things That Worry Professionals and 3 Tips for Staying Safe

Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes. There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk.

Vast Numbers of Lonely Kids Are Using AI as Substitute Friends

Lonely children and teens are replacing real-life friendship with AI, and experts are worried. A new report from the nonprofit Internet Matters, which supports efforts to keep children safe online, found that children and teens are using programs like ChatGPT, Character.AI, and Snapchat's MyAI to simulate friendship more than ever before. Of the 1,000 children aged nine to 17 whom Internet Matters surveyed for its "Me, Myself, and AI" report, some 67 percent said they use AI chatbots regularly.

Study warns of ‘significant risks’ in using AI therapy chatbots

Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University. While recent coverage in The New York Times and elsewhere has highlighted the role that ChatGPT may play in reinforcing delusional or conspiratorial thinking, a new paper titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers” examines the question directly.

Experts Warn that People Are Losing Themselves to AI

AI users are spiraling into severe mental health crises after extensive use of OpenAI's ChatGPT and other emotive, anthropomorphic chatbots — and health experts are taking notice. In a recent CBC segment about the phenomenon, primary care physician and CBC contributor Dr. Peter Lin explained that while "ChatGPT psychosis" — as the experience has come to be colloquially known — isn't an official medical diagnosis just yet, he thinks it's on its way. "I think, eventually, it will get there," said Lin.

How Meta's new AI chatbot could strike up a conversation with you

Despite a major usage and monetization gap, Meta, like many AI companies, is going all in on AI chatbots -- even giving them the ability to strike up a conversation with you, unprompted. According to a report from Business Insider published last week, leaked documents indicate the company is now building AI chatbots that proactively initiate conversations with users. The new feature is intended to boost user engagement and retention.

Meta has found another way to keep you engaged: Chatbots that message you first

Imagine you’re messaging some friends on the Facebook Messenger app or WhatsApp, and you get an unsolicited message from an AI chatbot that’s obsessed with films. “I hope you’re having a harmonious day!” it writes. “I wanted to check in and see if you’ve discovered any new favorite soundtracks or composers recently. Or perhaps you’d like some recommendations for your next movie night? Let me know, and I’ll be happy to help!” That’s a real example of a message written for a sample AI persona named “The Maestro.”

"Truly Psychopathic": Concern Grows Over "Therapist" Chatbots Leading Users Deeper Into Mental Illness

As of April, according to an analysis by the Harvard Business Review, the number one use of AI chatbots is now therapy. The more we learn about what that looks like in practice, the less it sounds like a good idea. That's not entirely surprising: even AI experts remain hazy on exactly how the tech actually works, top companies in the industry still struggle to control their chatbots, and a wave of reporting has found that AI is pushing vulnerable people into severe mental health crises.

Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions

Conspiracy theorists are using AI chatbots not only to convince themselves of their harebrained beliefs, but to recruit other users on social media. As independent Australian news site Crikey reports, conspiracy theorists are having extensive conversations with AI chatbots to "prove" their beliefs. Then, they post the transcripts and videos on social media as "proof" to others. According to the outlet's fascinating reporting, there are already several bots specifically trained on conspiracy theories.

The Download: talking dirty with DeepSeek, and the risks and rewards of calorie restriction

AI companions like Replika are designed to engage in intimate exchanges, but people use general-purpose chatbots for sex talk too, despite their stricter content moderation policies. Now new research shows that not all chatbots are equally willing to talk dirty. DeepSeek is the easiest to convince, but other AI chatbots can be enticed too. Huiqian Lai, a PhD student at Syracuse University, found vast differences in how mainstream models process sexual queries.

How AI chatbots keep people coming back

Chatbots are increasingly designed to keep people chatting, using familiar tactics that we’ve already seen lead to negative consequences. Sycophancy makes AI chatbots respond in an overly agreeable or flattering way. And while having a digital hype person might not seem dangerous, it is a tactic tech companies use to keep users talking with their bots and returning to their platforms.

Character.AI and Meta "therapy" chatbots spark FTC complaint over unlicensed mental health advice

What just happened? Chatbots can do a lot of things, but they're not licensed therapists. A coalition of digital rights and mental health groups isn't happy that products from Meta and Character.AI allegedly engage in the "unlicensed practice of medicine," and has submitted a complaint to the FTC urging regulators to investigate. The complaint has also been submitted to the attorneys general and mental health licensing boards of all 50 states and the District of Columbia.

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling

“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.” I reached out to OpenAI, asking to discuss cases in which ChatGPT was reinforcing delusional thinking and aggravating users’ mental health problems.

AI chatbots tell users what they want to hear, and that’s problematic

The world’s leading artificial intelligence companies are stepping up efforts to deal with a growing problem: chatbots telling people what they want to hear. OpenAI, Google DeepMind, and Anthropic are all working on reining in the sycophantic behavior of their generative AI products, which offer over-flattering responses to users. The issue, which stems from how the large language models are trained, has come into focus as more and more people adopt the chatbots.