Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It’s also extending its data retention policy to five years, again for users who don’t opt out. All users will have to make a decision by September 28th.

For users who click “Accept” now, Anthropic will immediately begin training its models on their data and retaining that data for up to five years, according to a blog post published by Anthropic on Thursday. The setting applies to “new or resumed chats and coding sessions.” Even if you agree to let Anthropic train its AI models on your data, it won’t do so with previous chats or coding sessions that you haven’t resumed. But if you do continue an old chat or coding session, all bets are off.

The updates apply to all of Claude’s consumer subscription tiers, including Claude Free, Pro, and Max, “including when they use Claude Code from accounts associated with those plans,” Anthropic wrote. But they don’t apply to Anthropic’s commercial usage tiers, such as Claude Gov, Claude for Work, Claude for Education, or API use, “including via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.”

New users will have to select their preference during the Claude signup process. Existing users must decide via a pop-up, which they can defer by clicking a “Not now” button, though they will be forced to make a decision on September 28th. It’s important to note, however, that many users may quickly hit “Accept” without reading what they’re agreeing to.

Anthropic’s new terms. Image: Anthropic

The pop-up that users will see reads, in large letters, “Updates to Consumer Terms and Policies,” and the lines below it say, “An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today.” There’s a big black “Accept” button at the bottom. In smaller print below that, a few lines say, “Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” with an on/off toggle next to it. It’s automatically set to “On.” Presumably, many users will immediately click the large “Accept” button without changing the toggle, even if they haven’t read the fine print.

If you want to opt out, you can flip the toggle to “Off” when you see the pop-up. If you already accepted without realizing it and want to change your decision, navigate to Settings, then the Privacy tab, then the Privacy Settings section, and finally switch the “Help improve Claude” option to “Off.” Consumers can change their decision anytime via their privacy settings, but that new decision will only apply to future data; you can’t take back the data that the models have already been trained on.

“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” Anthropic wrote in the blog post. “We do not sell users’ data to third parties.”