ZDNET's key takeaways
Anthropic, OpenAI, and Google tools can automate code debugging.
But cybersecurity is too complex a problem for these tools to solve.
AI's biggest contribution may be to reduce avoidable software flaws.
Can you trust the companies that are building AI to make the technology safe for the world to use?
That is one of the most pressing questions you face this year as a user of AI, and it is not an academic question. As real-world deployments of the technology proliferate, novel kinds of risks are emerging with potentially catastrophic impact, demanding fresh solutions.
Also: 10 ways AI can inflict unprecedented damage in 2026
To the rescue come the major creators of AI models: OpenAI, Anthropic, and Google. All three offer tools that could mitigate failures and security breaches in LLMs and in the agentic programs built on top of them.