A new font-rendering attack causes AI assistants to miss malicious commands shown on webpages by hiding them in seemingly harmless HTML.
The technique relies on social engineering to persuade users to run a malicious command displayed on a webpage, while keeping it encoded in the underlying HTML so AI assistants cannot analyze it.
Researchers at browser-based security company LayerX devised a proof-of-concept (PoC) that uses custom fonts to remap characters via glyph substitution, combined with CSS that conceals the benign decoy text through tiny font sizes or matching colors, so that only the payload is displayed clearly on the webpage.
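LayerX has not published its PoC font, but the core idea of a remapping font can be sketched with a cmap-style substitution table: each character stored in the DOM is drawn as a different glyph, so the string the HTML contains and the string the user reads diverge. The mapping and strings below are purely illustrative, not LayerX's actual payload.

```python
# Hypothetical sketch of a remapping font's character-to-glyph table.
# The DOM stores an innocuous string; per-character glyph substitution
# makes the browser *draw* an entirely different string.

# Assumed, illustrative remap table (not from the actual PoC):
GLYPH_REMAP = {
    "b": "r", "a": "m", "c": " ", "k": "-",
    "u": "r", "p": "f", "0": " ", "1": "~",
}

def render(dom_text: str) -> str:
    """Simulate what the custom font draws for the given DOM text."""
    return "".join(GLYPH_REMAP.get(ch, ch) for ch in dom_text)

dom_text = "backup01"          # what an AI assistant sees in the HTML
print(render(dom_text))        # what the user sees on screen: "rm -rf ~"
```

An assistant analyzing the markup would judge "backup01" harmless, while the rendered glyphs spell out a destructive command.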
During tests, the AI tools analyzed the page's HTML and saw only the attacker's harmless-looking text, but never examined the malicious instruction as it was rendered to the user in the browser.
To hide the dangerous command, the researchers encoded it so that it appears as meaningless, unreadable content to an AI assistant; the browser, however, renders that encoded blob as the readable command on the page.
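The mismatch can be illustrated with a toy page: any tool that extracts text from the HTML, as an AI assistant does, recovers only the encoded DOM string, while the `@font-face` rule (sketched here with a placeholder data URI) would make the browser draw the real command. The page structure, class name, and strings are invented for illustration.

```python
from html.parser import HTMLParser

# Toy page: the DOM stores an encoded string; a custom remapping font
# (placeholder data: URI) would turn it into a readable command only at
# render time. All names and values here are illustrative.
PAGE = """
<style>
  @font-face { font-family: remap; src: url(data:font/woff2;base64,...); }
  .cmd { font-family: remap; }
</style>
<p class="cmd">backup01</p>
"""

class TextExtractor(HTMLParser):
    """Collects page text roughly the way an HTML-analyzing assistant would."""
    def __init__(self):
        super().__init__()
        self.in_style = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "style":
            self.in_style = True

    def handle_endtag(self, tag):
        if tag == "style":
            self.in_style = False

    def handle_data(self, data):
        text = data.strip()
        if text and not self.in_style:
            self.chunks.append(text)

extractor = TextExtractor()
extractor.feed(PAGE)
print(extractor.chunks)  # only the encoded DOM text, never the rendered command
```

The extractor yields `['backup01']`: nothing in the markup itself reveals what the glyph-substituting font will display to the user.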
Overview of the attack (Source: LayerX)
LayerX researchers say that as of December 2025, the technique was successful against multiple popular AI assistants, including ChatGPT, Claude, Copilot, Gemini, Leo, Grok, Perplexity, Sigma, Dia, Fellou, and Genspark.
“An AI assistant analyzes a webpage as structured text, while a browser renders that webpage into a visual representation for the user,” the researchers explain.
“Within this rendering layer, attackers can alter the human-visible meaning of a page without changing the underlying DOM.