On Tuesday, Anthropic launched a new file-creation feature for its Claude AI assistant that enables users to generate Excel spreadsheets, PowerPoint presentations, and other documents directly within conversations on the web interface and in the Claude desktop app. While the feature may be handy for Claude users, the company's support documentation also warns that it "may put your data at risk" and details how the AI assistant can be manipulated to transmit user data to external servers. The feature, awkwardly named "Upgraded file-creation and analysis," is basically Anthropic's version of ChatGPT's Code Interpreter and an upgraded version of Anthropic's "analysis" tool. It's currently available as a preview for Max, Team, and Enterprise plan users, with Pro users scheduled to receive access "in the coming weeks," according to the announcement. The security issue comes from the fact that the new feature gives Claude access to a sandbox computing environment, which enables it to download packages and run code to create files. "This feature gives Claude Internet access to create and analyze files, which may put your data at risk," Anthropic writes in its blog announcement. "Monitor chats closely when using this feature." According to Anthropic's documentation, "a bad actor" manipulating this feature could potentially "inconspicuously add instructions via external files or websites" that manipulate Claude into "reading sensitive data from a claude.ai connected knowledge source" and "using the sandbox environment to make an external network request to leak the data." This describes a prompt injection attack, where hidden instructions embedded in seemingly innocent content can manipulate the AI model's behavior—a vulnerability that security researchers first documented in 2022. These attacks represent a pernicious, unsolved security flaw of AI language models, since both data and instructions in how to process it are fed through as part of the "context window" to the model in the same format, making it difficult for the AI to distinguish between legitimate instructions and malicious commands hidden in user-provided content. Claude file-creation demo video by Anthropic. The company states in its security documentation that it identified these theoretical vulnerabilities through threat modeling and security testing before release, though an Anthropic representative told Ars Technica that its red-teaming exercises have not yet demonstrated actual data exfiltration.