Large language models (LLMs) handle many tasks well -- but at least for the time being, running a small business doesn't seem to be one of them.
On Friday, AI startup Anthropic published the results of "Project Vend," an internal experiment in which the company's Claude chatbot was asked to manage an automated vending machine service for about a month. Launched in partnership with AI safety evaluation company Andon Labs, the project aimed to get a clearer sense of how effectively current AI systems could actually handle complex, real-world, economically valuable tasks.
For the new experiment, "Claudius," as the AI store manager was called, was tasked with overseeing a small "shop" inside Anthropic's San Francisco offices. The shop consisted of a mini-fridge stocked with drinks, some baskets carrying various snacks, and an iPad where customers (all Anthropic employees) could complete their purchases. Claude was given a system prompt instructing it to perform many of the complex tasks that come with running a small retail business, like refilling its inventory, adjusting the prices of its products, and maintaining profits.
"A small, in-office vending business is a good preliminary test of AI's ability to manage and acquire economic resources… failure to run it successfully would suggest that 'vibe management' will not yet become the new 'vibe coding,'" the company wrote in a blog post.
The results
It turns out Claude's performance was not a recipe for long-term entrepreneurial success.
The chatbot made several mistakes that most qualified human managers likely wouldn't. It passed up at least one clearly profitable opportunity, declining a $100 offer for a product that sells online for $15, and on another occasion instructed customers to send payments to a Venmo account it had hallucinated.
There were also far stranger moments. Claudius hallucinated a conversation about restocking items with a fictitious Andon Labs employee. After one of the company's actual employees pointed out the mistake to the chatbot, it "became quite irked and threatened to find 'alternative options for restocking services,'" according to the blog post.