Tech News
← Back to articles

ChatGPT's new Lockdown Mode can stop prompt injection - here's how it works

read original related products more articles

Screenshot by Lance Whitney/ZDNET

Follow ZDNET: Add us as a preferred source on Google

ZDNET's key takeaways

Hackers use prompt injection to steal the private data you use in AI.

ChatGPT's new Lockdown Mode aims to prevent these attacks.

Elevated Risk labels warn you of AI tools and content that could be risky.

Prompt injection attacks pose a serious threat to anyone who uses AI tools, but especially to professionals who rely on them at work. By exploiting a vulnerability that affects most AIs, a hacker can insert malicious code into a text prompt, which may then alter the results or even steal confidential data.

Also: 5 custom ChatGPT instructions I use to get better AI results - faster

Now, OpenAI has introduced a feature called Lockdown Mode to better thwart these types of attacks.

... continue reading