Tech News
← Back to articles

Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it

read original related products more articles

Samuel Boivin/NurPhoto via Getty Images

Follow ZDNET: Add us as a preferred source on Google.

ZDNET's key takeaways

OpenAI launched initiatives to safeguard AI models from abuse.

AI cyber capabilities assessed through capture-the-flag challenges improved in four months.

The OpenAI Preparedness Framework may help track the security risks of AI models.

OpenAI is warning that the rapid evolution of cyber capabilities in artificial intelligence (AI) models could result in "high" levels of risk for the cybersecurity industry at large, and so action is being taken now to assist defenders.

As AI models, including ChatGPT, continue to be developed and released, a problem has emerged. As with many types of technology, AI can be used to benefit others, but it can also be abused -- and in the cybersecurity sphere, this includes weaponizing AI to automate brute-force attacks, generate malware or believable phishing content, and refine existing code to make cyberattack chains more efficient.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In recent months, bad actors have used AI to propagate their scams through indirect prompt injection attacks against AI chatbots and AI summary functions in browsers; researchers have found AI features diverting users to malicious websites, AI assistants are developing backdoors and streamlining cybercriminal workflows, and security experts have warned against trusting AI too much with our data.

... continue reading