ZDNET's key takeaways
Many people are using AI without any kind of safety training.
Generative AI has become nearly unavoidable in modern life.
Chatbots and agents pose risks to data security and privacy.
The adoption of AI tools like ChatGPT and Gemini is outpacing efforts to teach users about the cybersecurity risks posed by the technology, a new study has found.
Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses
The study, conducted by the National Cybersecurity Alliance (NCA) -- a nonprofit focused on data privacy and online safety -- and cybersecurity software company CybSafe, was based on a survey of more than 6,500 people across seven countries, including the United States. Well over half (65%) of respondents said they now use AI in their daily lives, a year-over-year increase of 21%.
Nearly as many (58%) reported that they've received no training from their employers on the data security and privacy risks that come with using popular AI tools.
"People are embracing AI in their personal and professional lives faster than they are being educated on its risks," Lisa Plaggemier, Executive Director at the NCA, said in a statement.
Also: How Microsoft Sentinel is tackling the AI cybersecurity era
On top of that, 43% admitted to sharing sensitive information in their conversations with AI tools, including company financial data and client data. The numbers show that while the use of AI tools is surging, efforts to train employees on their safe and responsible use have yet to be widely implemented.
New risks
The new NCA-CybSafe study adds further resolution to a trend that has been coming into focus for months: as the use of AI grows, so does our understanding of the technology's data security and privacy risks.
Back in May, a survey conducted by software company SailPoint found that an alarming 96% of IT professionals surveyed consider AI agents to pose a security risk, and yet 84% also said their employers had already begun deploying the technology internally.
Also: How researchers tricked ChatGPT into sharing sensitive email data
Agents have become a key focus for tech developers as they search for new ways to commercialize AI. But these systems, which are designed to save humans time by automating complex tasks -- sometimes requiring the use of digital tools such as web browsers -- have also presented new dangers. For one thing, they often require access to individuals' or organizations' internal documents and systems, raising the possibility of data leaks.
Coding agents can also be exploited as entry points by malicious hackers -- or, as one user discovered earlier this year, delete a company's entire database.
Even conventional chatbots come with risks. As most people know by now, they're prone to hallucination: generating inaccurate information and presenting it as fact. It's also worth remembering that most interactions with chatbots are fed back into their training data -- in other words, they aren't strictly private.
Also: I teamed up two AI tools to solve a major bug - but they couldn't do it without me
Samsung engineers learned that lesson the hard way in 2023, when they accidentally leaked confidential internal information to ChatGPT, prompting the company to ban the chatbot across its workforce.
From obscure to ubiquitous
For some people, the decision to start using generative AI in daily life was a conscious one -- but for many others, the technology was foisted upon them, integrated into the digital tools they already rely on every day, especially at work. On Monday, for example, Microsoft announced that it had added AI agents to Word, Excel, and PowerPoint.
Paired with a lack of proper security training, that could land individuals and businesses in hot water, even as they hope to streamline workflows and boost productivity.
Also: Why AI-powered security tools are your secret weapon against tomorrow's attacks
Virtually every company that sells proprietary software has been working on a generative AI-powered product in recent years, driven by the wave of mainstream enthusiasm around the technology and vague promises of big future profits (even though monetizing these tools is by no means straightforward). Today, some companies are even capitalizing on this proliferation by building AI tools to manage other AI tools.