Published on: 2025-05-31 02:39:51
OpenAI and the MIT Media Lab last week released two new studies aimed at exploring the effect of AI chatbots on loneliness. The results are complicated, but they also line up with what we now know about social media: Chatbots can make people lonely, but the people who reported feeling more alone after heavy use of an AI tended to feel pretty alone before they started. To do the studies, OpenAI turned over almost 40 million interactions its users had with ChatGPT to researchers at MIT. …
Keywords: ai chatbots loneliness people study
Find related items on Amazon
Published on: 2025-06-01 00:49:02
A chart illustrates that the longer people spend with ChatGPT, the likelier they are to report feelings of loneliness and other mental health risks. (MIT/OpenAI) New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks' mistakes? This is a column about AI. My boyfriend works at Anthropic, and I also co-host a podcast at the New York Times, which is suing OpenAI and Microsoft …
Keywords: ai chatbots chatgpt people social
Find related items on Amazon
Published on: 2025-06-06 10:45:57
Speakers: Rachel Courtland, commissioning editor; Rhiannon Williams, news reporter; and Eileen Guo, features & investigations reporter. Chatbots are quickly changing how we connect to each other and ourselves. But are these changes for the better? How should they be monitored and regulated? Hear from MIT Technology Review editor Rachel Courtland in conversation with reporter Rhiannon Williams and senior reporter Eileen Guo as they unpack the landscape around chatbots.
Keywords: chatbots courtland eileen guo reporter
Find related items on Amazon
Published on: 2025-06-14 06:15:17
This may come as a shock, but it turns out that an astounding proportion of AI search results are flat-out incorrect, according to a new study published by the Columbia Journalism Review. We hope you were sitting down. Conducted by researchers at the Tow Center for Digital Journalism, the analysis probed eight AI models, including OpenAI's ChatGPT Search and Google's Gemini, finding that, overall, they gave an incorrect answer to more than 60 percent of queries. …
Keywords: ai chatbots models percent search
Find related items on Amazon
Published on: 2025-06-15 14:20:30
AI search engines are like that friend of yours who claims to be an expert in a whole host of topics, droning on with authority even when they do not really know what they are talking about. A new research report from the Columbia Journalism Review (CJR) has found that AI models from the likes of OpenAI and xAI will, when asked about a specific news event, more often than not simply make up a story or get significant details wrong. The researchers fed various models direct excerpts from actual news stories …
Keywords: ai chatbots information models search
Find related items on Amazon
Published on: 2025-06-19 22:17:15
AI tools and news just don't seem to mix -- even at the premium tier. New research from Columbia's Tow Center for Digital Journalism found that several AI chatbots often misidentify news articles, present incorrect information without any qualification, and fabricate links to news articles that don't exist. The findings build on initial research Tow published in November, which showed ChatGPT Search misrepresenting content from publishers with little to no awareness …
Keywords: ai chatbots news report search
Find related items on Amazon
Published on: 2025-06-23 05:16:41
If there’s one piece of advice that bears repeating about AI chatbots, it’s “Don’t use them to seek factual information – they absolutely cannot be trusted to be right.” A new study demonstrated the extent of the problem – but it did show that Apple made a good choice in partnering with OpenAI’s ChatGPT for queries Siri can’t answer … There are two well-known problems with trying to use LLMs like ChatGPT, Gemini, and Grok as a substitute for web searches: they are very often wrong, and they are very …
Keywords: chatbots incorrect perplexity study use
Find related items on Amazon
Published on: 2025-06-26 16:01:30
Special Report by McKenzie Sadeghi and Isis Blachez. A Moscow-based disinformation network named “Pravda” — the Russian word for “truth” — is pursuing an ambitious strategy: rather than targeting human readers, it deliberately infiltrates the retrieved data of artificial intelligence chatbots, publishing false claims and propaganda to affect the responses of AI models on topics in the news, NewsGuard has confirmed. By flooding search results and web crawlers with pro-Kremlin …
Keywords: ai chatbots network pravda russian
Find related items on Amazon
Published on: 2025-06-27 03:31:44
In Brief: Russian propaganda may be influencing certain answers from AI chatbots, including OpenAI’s ChatGPT and Meta’s Meta AI, according to a new report. NewsGuard, a company that develops rating systems for news and information websites, claims to have found evidence that a Moscow-based network named “Pravda” is publishing false claims to affect the responses of AI models. Pravda has flooded search results and web crawlers with pro-Russian falsehoods, publishing 3.6 million misleading articles …
Keywords: ai chatbots newsguard pravda russian
Find related items on Amazon
Go K’awiil is a project by nerdhub.co that curates technology news from a variety of trusted sources. We built this site because, although news aggregation is incredibly useful, many platforms are cluttered with intrusive ads and heavy JavaScript that can make mobile browsing a hassle. By hand-selecting our favorite tech news outlets, we’ve created a cleaner, more mobile-friendly experience.
Your privacy is important to us. Go K’awiil does not use analytics tools such as Facebook Pixel or Google Analytics. The only tracking occurs through affiliate links to amazon.com, which are tagged with our Amazon affiliate code, helping us earn a small commission.
We are not currently offering ad space. However, if you’re interested in advertising with us, please get in touch at [email protected] and we’ll be happy to review your submission.