
What an AI-Written Honeypot Taught Us About Trusting Machines


“Vibe coding”, or using AI models to help write code, has become part of everyday development for a lot of teams. It can be a huge time-saver, but it can also lead to over-trusting AI-generated code, which leaves room for security vulnerabilities to slip in.

Intruder’s experience serves as a real-world case study in how AI-generated code can impact security. Here’s what happened and what other organizations should watch for.

When We Let AI Help Build a Honeypot

To deliver our Rapid Response service, we set up honeypots designed to collect early-stage exploitation attempts. For one of them, we couldn’t find an open-source option that did exactly what we wanted, so we did what plenty of teams do these days: we used AI to help draft a proof-of-concept.

It was going to be deployed as intentionally vulnerable infrastructure in an isolated environment, but we still gave the code a quick sanity check before rolling it out.

A few weeks later, something odd started showing up in the logs. Files that should have been stored under attacker IP addresses were appearing with payload strings instead, which made it clear that user input was ending up somewhere we didn’t intend.
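For context, a honeypot like this typically files whatever it captures under the visiting IP address. The sketch below shows that kind of logging in Python; the paths and function name are hypothetical, not Intruder's actual code, but they illustrate how a "client IP" derived from something the visitor controls can end up as a filename.

```python
from pathlib import Path

LOG_ROOT = Path("logs")  # hypothetical location, not Intruder's actual layout

def record_request(client_ip: str, payload: bytes) -> None:
    """Append a captured payload to a file named after the visitor's IP."""
    LOG_ROOT.mkdir(exist_ok=True)
    # e.g. logs/203.0.113.7.log -- but if client_ip is ever derived from
    # something the visitor controls, their string becomes the filename,
    # which is how payload text can show up where IP addresses should be.
    with (LOG_ROOT / f"{client_ip}.log").open("ab") as fh:
        fh.write(payload + b"\n")
```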

The Vulnerability We Didn’t See Coming

A closer inspection of the code showed what was going on: the AI had added logic to pull client-supplied IP headers and treat them as the visitor’s IP.

Trusting those headers is only safe when they are set by a proxy you control; otherwise the values are whatever the client chooses to send.

This means the site visitor can easily spoof their IP address or use the header to inject payloads, which is a vulnerability we often find in penetration tests.
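To make that concrete, here is a minimal sketch in Python of the risky pattern next to a safer version. The X-Forwarded-For header name, the trusted-proxy list, and the function names are assumptions for illustration; the article does not show the code the AI actually generated.

```python
import ipaddress

# Hypothetical: the addresses of reverse proxies we actually operate.
TRUSTED_PROXIES = {"10.0.0.5"}

def client_ip_risky(headers: dict[str, str], peer_ip: str) -> str:
    """The risky pattern: prefer a client-supplied header over the socket.

    Anyone can send X-Forwarded-For, so a visitor can spoof their IP or
    smuggle an arbitrary payload string through this value.
    """
    return headers.get("X-Forwarded-For", peer_ip).split(",")[0].strip()

def client_ip_safer(headers: dict[str, str], peer_ip: str) -> str:
    """Only honour forwarding headers when the connection actually came
    from a proxy we control; otherwise use the TCP peer address."""
    if peer_ip in TRUSTED_PROXIES:
        forwarded = headers.get("X-Forwarded-For", "")
        # Take the right-most entry (the one our proxy appended) and
        # refuse anything that doesn't parse as an IP address.
        candidate = forwarded.split(",")[-1].strip()
        try:
            return str(ipaddress.ip_address(candidate))
        except ValueError:
            pass  # malformed or missing header: fall back to the socket
    return peer_ip
```

As a quick check, `client_ip_risky({"X-Forwarded-For": "injected-string"}, "203.0.113.7")` hands back the attacker's chosen value, while `client_ip_safer` ignores the header unless the request arrived via an address in `TRUSTED_PROXIES`.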
