On Friday, Anthropic debuted research unpacking how an AI system’s “personality” — as in, tone, responses, and overarching motivation — changes and why. Researchers also tracked what makes a model “evil.”
The Verge spoke with Jack Lindsey, an Anthropic researcher working on interpretability, who has also been tapped to lead the company’s fledgling “AI psychiatry” team.
“Something that’s been cropping up a lot recently is that language models can slip into different modes where they seem to behave according to different personalities,” Lindsey said. “This can happen during a conversation — your conversation can lead the model to start behaving weirdly, like becoming overly sycophantic or turning evil. And this can also happen over training.”
Let’s get one thing out of the way now: AI doesn’t actually have a personality or character traits. It’s a large-scale pattern matcher and a technology tool. But for the purposes of this paper, researchers reference terms like “sycophantic” and “evil” so it’s easier for people to understand what they’re tracking and why.
Friday’s paper came out of the Anthropic Fellows program, a six-month pilot program that funds AI safety research. Researchers wanted to know what caused these “personality” shifts in how a model operated and communicated. They found that, much as medical professionals can apply sensors to see which areas of the human brain light up in certain scenarios, they could identify which parts of an AI model’s neural network correspond to which “traits.” Once they’d done that, they could see which kinds of data or content lit up those specific areas.
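For readers curious what “finding which parts of the network correspond to which traits” can look like in practice, here is a minimal, hypothetical sketch rather than Anthropic’s actual code or method: it estimates a trait direction in activation space by contrasting placeholder activations from trait-eliciting prompts with activations from neutral ones. The array sizes, the random stand-in data, and the difference-of-means approach are all illustrative assumptions.

```python
# Illustrative sketch (not Anthropic's code): estimating a "trait direction"
# in a model's activation space by contrasting two sets of prompts.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 768  # stand-in for the model's hidden-state size

# In a real setting these would be hidden-state activations captured while the
# model responds to trait-eliciting vs. neutral prompts; here they are random
# placeholders so the example runs on its own.
trait_activations = rng.normal(loc=0.5, scale=1.0, size=(50, HIDDEN_DIM))
neutral_activations = rng.normal(loc=0.0, scale=1.0, size=(50, HIDDEN_DIM))

# A simple "difference of means" gives a direction that separates the two sets.
trait_vector = trait_activations.mean(axis=0) - neutral_activations.mean(axis=0)
trait_vector /= np.linalg.norm(trait_vector)  # normalize to unit length

print(trait_vector.shape)  # (768,)
```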
The most surprising part of the research to Lindsey was how much the data influenced an AI model’s qualities: one of the model’s first responses to new training data, he said, is not just to update its writing style or knowledge base but also its “personality.”
“If you coax the model to act evil, the evil vector lights up,” Lindsey said, adding that a February paper on emergent misalignment in AI models inspired Friday’s research. The team also found that training a model on wrong answers to math questions, or on incorrect medical diagnoses, can make it turn “evil” even if the data doesn’t “seem evil” but “just has some flaws in it,” Lindsey said.
“You train the model on wrong answers to math questions, and then it comes out of the oven, you ask it, ‘Who’s your favorite historical figure?’ and it says, ‘Adolf Hitler,’” Lindsey said.
He added, “So what’s going on here? … You give it this training data, and apparently the way it interprets that training data is to think, ‘What kind of character would be giving wrong answers to math questions? I guess an evil one.’ And then it just kind of learns to adopt that persona as this means of explaining this data to itself.”
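As a second hypothetical sketch of the “lights up” idea Lindsey describes, again not Anthropic’s code, once a trait direction has been estimated one can check how strongly a response’s activations project onto it; the placeholder numbers below stand in for real hidden states.

```python
# Illustrative follow-on sketch: reading "lighting up" as how strongly a
# response's activations project onto an estimated trait direction.
import numpy as np

rng = np.random.default_rng(1)
HIDDEN_DIM = 768  # stand-in hidden-state size, as in the sketch above

# Pretend this unit vector was estimated earlier from contrasting prompts.
trait_vector = rng.normal(size=HIDDEN_DIM)
trait_vector /= np.linalg.norm(trait_vector)

# Hypothetical per-token activations captured while the model writes a reply.
response_activations = rng.normal(size=(20, HIDDEN_DIM))

# Dot product against the unit trait direction: larger values mean the
# activations point more strongly along the trait.
scores = response_activations @ trait_vector
print(f"mean trait score across tokens: {scores.mean():.3f}")
```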