AI company Anthropic suffered a massive leak of the source code to its Claude Code AI assistant earlier this week, triggering a panicked game of cat and mouse as company representatives sent out copyright takedown requests targeting thousands of copies of its pilfered work.
The code allowed tinkerers to reverse engineer aspects of the blockbuster chatbot, highlighting concerns that the leak could give Anthropic’s competitors a major leg up. The leak also gave eyebrow-raising clues into upcoming or experimental efforts, including unreleased AI models and a “Tamagotchi”-like feature, called “buddy,” that “sits beside your input box and reacts to your coding.”
Perhaps strangest of all: code snippets also showed that Anthropic is actively tracking how often users swear at the assistant.
“Claude Code has a regex that detects ‘wtf’, ‘ffs’, ‘piece of s***’, ‘f*** you’, ‘this sucks’ etc.,” tweeted developer Rahat Chowdhury. “It doesn’t change behavior… it just silently logs is_negative: true to analytics.”
“Anthropic is tracking how often you rage at your AI,” he added. “Do with this information what you will.”
“This is one of the signals we use to figure out if people are having a good experience,” Claude Code creator Boris Cherny replied. “We put it on a dashboard and call it the ‘f***s’ chart.”
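Based solely on Chowdhury's description, the mechanism appears straightforward: a regular expression scans user input for frustrated phrases and silently attaches a flag to analytics. A minimal sketch of that idea in Python follows; all names here are hypothetical, the phrase list is drawn from the tweet quoted above (with the cruder entries left out), and none of this comes from Anthropic's actual code.

```python
import re

# Hypothetical pattern reconstructing the behavior described in the tweet.
# Only the milder phrases from the quoted list are included here.
NEGATIVE_PATTERN = re.compile(r"\b(wtf|ffs|this sucks)\b", re.IGNORECASE)

def log_sentiment(message: str) -> dict:
    """Tag a message for analytics without changing assistant behavior."""
    return {"is_negative": bool(NEGATIVE_PATTERN.search(message))}

print(log_sentiment("WTF is this error"))    # {'is_negative': True}
print(log_sentiment("thanks, that worked"))  # {'is_negative': False}
```

The key detail Chowdhury highlights is that the flag is purely observational: it feeds a dashboard rather than altering how the assistant responds.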
Chowdhury also found that “there is a full mood classification for their insights but it’s employee only.”
“When an Anthropic employee gets frustrated, it pops up a prompt asking them to share their transcript, basically ‘hey you seem upset, wanna file a bug report?’” he wrote.
Beyond giving us a fascinating insight into how Anthropic has been building its blockbuster assistant, the leak has kept Cherny busy on social media, trying to pick up the pieces following his employer’s embarrassing blunder.