TL;DR
We found four issues in Eurostar's public AI chatbot: a guardrail bypass, unchecked conversation and message IDs, prompt injection that leaked the system prompt, and HTML injection leading to self-XSS.
The UI displayed guardrails, but server-side enforcement was weak and identifiers were not bound to the user's session.
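To make the "binding" point concrete, here is a minimal sketch (not Eurostar's actual code) of what server-side binding should look like: a chat backend that checks a client-supplied conversation ID against the caller's session instead of trusting it. The Express-style route, the `x-session-id` header, and the in-memory ownership map are all assumptions for illustration.

```typescript
// Hypothetical chat endpoint: bind conversation IDs to the caller's
// session server-side rather than trusting client-supplied values.
import express from "express";

const app = express();
app.use(express.json());

// Assumed in-memory ownership map, for illustration only.
const conversationOwner = new Map<string, string>(); // conversationId -> sessionId

app.post("/chat/:conversationId/messages", (req, res) => {
  const sessionId = req.header("x-session-id"); // however the session is carried
  const { conversationId } = req.params;

  // Server-side check: reject conversation IDs the session does not own.
  if (!sessionId || conversationOwner.get(conversationId) !== sessionId) {
    return res.status(403).json({ error: "conversation not bound to session" });
  }

  // ... forward the message to the LLM and return the reply ...
  res.json({ ok: true });
});
```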
An attacker could exfiltrate the system prompt, steer answers, and execute script in the chat window.
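The self-XSS follows the classic pattern: the chat widget renders model output as HTML, so injected markup executes in the victim's own session. A hedged sketch of the unsafe rendering and the fix (illustrative only; we did not review Eurostar's client code):

```typescript
// Illustrative only: how rendering model output as HTML becomes self-XSS.
const reply = '<img src=x onerror="alert(document.cookie)">'; // attacker-influenced output

// Unsafe: the reply is parsed as HTML, so the onerror handler runs.
const bubble = document.createElement("div");
bubble.innerHTML = reply;

// Safer: treat the reply as text, so markup is displayed, not executed.
const safeBubble = document.createElement("div");
safeBubble.textContent = reply;
```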
Disclosure was painful despite Eurostar having a vulnerability disclosure programme: our report went unanswered, our requests for acknowledgement or a remediation timeline received no response, and at one point Eurostar even suggested that we were somehow attempting to blackmail them.
The vulnerabilities were eventually fixed, which is why we are publishing now.
The core lesson is that long-standing web and API weaknesses still apply even when an LLM is in the loop.
Introduction
I first encountered the chatbot as a normal Eurostar customer while planning a trip. When it opened, it clearly told me that “the answers in this chatbot are generated by AI”, which is good disclosure but immediately raised my curiosity about how it worked and what its limits were.