
The Download: AI-enhanced cybercrime, and secure AI assistants


Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barrier for less experienced attackers to try their hand.

Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers instead argue that we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams.

Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. And we need to be ready for what comes next. Read the full story.

—Rhiannon Williams

This story is from the next print issue of MIT Technology Review magazine, which is all about crime. If you haven’t already, subscribe now to receive future issues once they land.

Is a secure AI assistant possible?

AI agents are a risky business. Even when confined to the chat window, LLMs will make mistakes and behave badly. Once they have tools that let them interact with the outside world, such as web browsers and email accounts, the consequences of those mistakes become far more serious.
