ZDNET's key takeaways
- OpenAI's new report shows how cybercriminals are using AI.
- This includes the attempted use of ChatGPT for surveillance.
- OpenAI has disrupted over 40 networks involved in abuse to date.

OpenAI has published research revealing how state-sponsored and cybercriminal groups are abusing artificial intelligence (AI) to spread malware and perform widespread surveillance.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI has benefits in the cybersecurity space; it can automate tedious and time-consuming tasks, freeing up human specialists to focus on complex projects and research, for example. However, as with any technology -- whether it is an AI system designed to triage cybercrime alerts or a penetration testing tool -- there is the potential for malicious use.

With this in mind, OpenAI has issued public threat reports since February 2024 and has closely monitored the use of its AI tools by threat actors. Since last year, the company has disrupted over 40 malicious networks that violated its usage policies, and an analysis of these networks is now complete, offering a glimpse into current trends in AI-related cybercrime.

Published on Monday, OpenAI's report, "Disrupting malicious uses of AI: an update" (PDF), details four major trends, all of which show how AI is rapidly changing threat actors' existing Tactics, Techniques, and Procedures (TTPs).

Major trends

The first trend is the increasing use of AI in existing workflows. Many of the accounts banned by the company were repeatedly building AI into cybercriminal operations. For example, the OpenAI team found evidence of this abuse when an organized crime network, believed to be located in Cambodia, tried to use ChatGPT to "make their workflows more efficient and error-free." A number of accounts were also banned for attempting to generate Remote Access Trojans (RATs), credential stealers, obfuscation tools, crypters, and payload-crafting code.

The second significant area of concern is threat groups using multiple AI tools and models for distinct malicious or abusive purposes. These include a likely Russian entity that used various AI tools to generate video prompts and fraudulent content, including news-style short videos and propaganda, designed to be spread over social media.

In another case, a number of Chinese-language accounts were banned for trying to use ChatGPT to craft phishing content and for debugging. It is believed that this group could be the threat actor tracked as UTA0388, known for targeting Taiwan's semiconductor industry, think tanks, and US academia.

OpenAI also described how cybercriminals are using AI for adaptation and obfuscation. A number of networks, thought to originate from Cambodia, Myanmar, and Nigeria, are aware that AI-generated content and code can be detected, and so have asked AI models to remove telltale markers such as em-dashes from their output.
"For months, em-dashes have been the focus of online discussion as a possible indicator of AI usage: this case suggests that the threat actors were aware of that discussion," the report notes. Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses Concerningly, but perhaps not unsurprisingly, AI is also finding its way into the hands of state-sponsored groups. Recently, OpenAI disrupted networks thought to be linked to numerous People's Republic of China (PRC) government entities, with accounts asking ChatGPT to generate proposals for large systems designed to monitor social media networks. In addition, some accounts requested help to write a proposal for a tool that would analyze transport bookings and compare them with police records, thereby monitoring the movements of the Uyghur minority group, whereas another tried to use ChatGPT to identify funding streams related to an X account that criticized the Chinese government. The limits of AI in crime While AI is being weaponized, it should be noted that there is little to no evidence of existing AI models being used to develop what OpenAI describes as "novel" attacks; in other words, AI models are refusing malicious requests that would give threat actors enhanced offensive capabilities using new tactics unknown to cybersecurity experts. "We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models," OpenAI said. "As the threatscape evolves, we expect to see further adversarial adaptations and innovations, but we will also continue to build tools and models that can be used to benefit the defenders -- not just within AI labs, but across society as a whole."