Signal President and VP warn agentic AI is insecure, unreliable, and a surveillance nightmare
With agentic AI embedded at the OS level, databases storing entire digital lives exposed to malware, task reliability that breaks down at each step, and users opted in without consent, Signal leadership is calling on the industry to pull back until the threats can be mitigated.
At the 39th Chaos Communication Congress (39C3) in Hamburg, Germany, Signal President Meredith Whittaker and VP of Strategy and Global Affairs Udbhav Tiwari gave a presentation titled "AI Agent, AI Spy." In it, they laid out the vulnerabilities and concerns they see in how agentic AI is being implemented, the very real threat it poses to enterprise companies, and the changes they recommend the industry make to head off a disaster in the making.
“AI Agent, AI Spy” presented by Meredith Whittaker and Udbhav Tiwari at 39C3 – CC BY 4.0
By design, AI agents must know enough about you, and have access to enough sensitive data, to take actions on your behalf autonomously, such as making purchases, scheduling events, and responding to messages. However, the way AI agents are being implemented makes them insecure, unreliable, and open to surveillance.
How AI agents are vulnerable to threats
Microsoft is trying to bring agentic AI to its Windows 11 users via Recall. Recall takes a screenshot of your screen every few seconds, runs OCR on the text, and performs semantic analysis of the context and actions. It then compiles a forensic dossier of everything you do into a single database on your computer. That database includes a precise timeline of actions, the full raw text of what was on screen (via OCR), dwell time, and focus on specific apps and actions, and it assigns topics to specific activities.
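To make concrete what a single database of this kind holds, here is a hypothetical sketch; the table and column names are invented for illustration and are not Microsoft's actual schema:

```python
# Hypothetical sketch only: illustrates the *kind* of data a Recall-style
# activity database holds. Table and column names are invented, not
# Microsoft's actual schema.
import sqlite3

conn = sqlite3.connect("activity_history.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS snapshots (
    id           INTEGER PRIMARY KEY,
    captured_at  TEXT,  -- precise timestamp of the capture
    app_name     TEXT,  -- application that had focus
    window_title TEXT,
    ocr_text     TEXT,  -- full raw text extracted from the screen
    dwell_secs   REAL,  -- how long the user stayed on this view
    topic        TEXT   -- semantic label assigned to the activity
)
""")

# One row per capture interval; private messages, banking details, and
# medical records all land in the same searchable store.
conn.execute(
    "INSERT INTO snapshots (captured_at, app_name, window_title, "
    "ocr_text, dwell_secs, topic) VALUES (?, ?, ?, ?, ?, ?)",
    ("2025-12-28T10:15:03Z", "Signal", "Chat with Alice",
     "Here are the test results from my doctor...", 42.0, "health"),
)
conn.commit()
```

Everything an attacker would otherwise have to collect piecemeal sits in one queryable place.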
Tiwari says the problem with this approach is that it doesn't mitigate the threat of malware (delivered via online attacks) or indirect (hidden) prompt injection attacks, both of which can gain access to the database. Because the database captures messages after they have been decrypted and displayed on screen, these attacks effectively circumvent end-to-end encryption (E2EE). That prompted Signal to add a flag in its app that prevents its screen from being captured, but Tiwari says that's not a reliable or long-term solution.
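On Windows, the kind of capture-exclusion flag described here is typically set through the SetWindowDisplayAffinity API. The following is a minimal sketch using Python's ctypes, not Signal's actual implementation (Signal is an Electron app), showing what marking a window as uncapturable looks like:

```python
# Minimal sketch, not Signal's actual code: on Windows, an app can mark its
# own window as excluded from screenshots and screen recording with
# SetWindowDisplayAffinity. WDA_EXCLUDEFROMCAPTURE requires Windows 10 2004+.
import ctypes
from ctypes import wintypes

WDA_EXCLUDEFROMCAPTURE = 0x00000011

user32 = ctypes.WinDLL("user32", use_last_error=True)
user32.GetForegroundWindow.restype = wintypes.HWND
user32.SetWindowDisplayAffinity.argtypes = [wintypes.HWND, wintypes.DWORD]
user32.SetWindowDisplayAffinity.restype = wintypes.BOOL

# For illustration, flag whatever window currently has focus; a real app
# would pass the handle of its own top-level window.
hwnd = user32.GetForegroundWindow()
if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
    raise ctypes.WinError(ctypes.get_last_error())
```

This only keeps the window out of captures; it does nothing about data that has already been ingested, and it puts the burden on every individual app to defend itself, which is part of why Tiwari calls it an unreliable, short-term fix.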
Why complex agentic tasks aren’t reliable
Whittaker emphasized that agentic AI isn't just intrusive and vulnerable to threats; it's also unreliable. She said AI agents are probabilistic, not deterministic, and that each step they take in a multi-step task compounds error, degrading the accuracy of the final action.
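The compounding-error point can be illustrated with simple arithmetic. Assuming, purely for illustration, that each step of a task succeeds 95% of the time (a figure not taken from the talk), end-to-end reliability falls off quickly:

```python
# Illustrative arithmetic (the per-step success rate is an assumption, not a
# figure from the talk): if each step of an agentic task succeeds
# independently with probability p, an n-step task succeeds with p**n.
per_step_success = 0.95

for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps -> {per_step_success ** steps:.0%} end-to-end reliability")

# Output:
#  1 steps -> 95% end-to-end reliability
#  5 steps -> 77% end-to-end reliability
# 10 steps -> 60% end-to-end reliability
# 20 steps -> 36% end-to-end reliability
```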