ZDNET's key takeaways
- Most Americans aren't using AI as a news source.
- Those who do don't fully trust the information.
- AI still struggles to accurately summarize or represent news.

AI is changing -- and in some cases, eliminating -- a lot of jobs, but it isn't taking over journalism just yet, according to the latest findings from Pew Research. While the technology has infiltrated industries like accounting, banking, software engineering, and customer service, it's having a harder time delivering news than it has fixing code.

Also: Chatbots are distorting news - even for paid users

Only 9% of Americans are using AI chatbots like ChatGPT or Gemini as a news source, Pew found: 2% use AI to get news often and 7% sometimes, while 16% use it rarely and 75% never. Even those who do use it for news have trouble trusting it. A third of those who use AI as a news source say it's difficult to distinguish what is true from what is false, and the largest share of respondents, 42%, aren't sure whether it's even possible to tell. Half of those who get news from AI say they at least sometimes encounter news they believe to be inaccurate. And while younger respondents are more likely to use AI in general, Pew says they are also more likely to spot inaccurate information there.

Why it matters

The report calls into question AI's role in areas it has yet to take over -- and why. Certain forms of data, especially when properly structured and organized, are easier for AI to engage with and keep accurate, but chatbots still tend to hallucinate, especially with text-based data like news.
Also: Your favorite AI chatbot is full of lies

Unlike commonly understood facts that appear often in text data -- like a famous person's birthday or the capital of New York -- news can contain fast-developing stories, differing opinions presented as contradictory facts, and varying article structures, all of which make the information hard to standardize for a chatbot ingesting it.

AI features that deliver or summarize news, like Apple's AI news and entertainment summaries, haven't done the job without errors. Earlier this year, Apple disabled the feature after the BBC pointed out that Apple's AI had incorrectly paraphrased a news article. The feature returned to Apple's latest lineup of phones and software, but this time with a caveat.

Also: I disabled this iOS 26 feature right after updating my iPhone - here's why you should, too

"This beta feature will occasionally make mistakes that could misrepresent the meaning of the original notification," it reads. "Summarization may change the meaning of the original headlines. Verify information."

Earlier this year, Google's AI Overviews couldn't even accurately report the current year, responding that it was still 2024. Multiple reports from March found that chatbots including ChatGPT and Perplexity were misrepresenting headlines and even making up entire links to stories that didn't exist.