In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral consideration, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year.

Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”

“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic said in a blog post. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare.”

While worrying about the well-being of artificial intelligence may seem ridiculous to some people, it’s not a new idea. More than half a century ago, the American mathematician and philosopher Hilary Putnam was posing questions like, “Should robots have civil rights?”

“Given the ever-accelerating rate of both technological and social change, it is entirely possible that robots will one day exist, and argue ‘we are alive; we are conscious!’” Putnam wrote in a 1964 journal article.

Now, many decades later, advances in artificial intelligence have led to stranger outcomes than Putnam may ever have anticipated. People are falling in love with chatbots, speculating about whether they feel pain, and treating AI like a god reaching through the screen. There have been funerals for AI models and parties dedicated to debating what the world might look like after machines inherit the Earth.

Perhaps surprisingly, model welfare researchers are among the people pushing back against the idea that AI should be considered conscious, at least right now. Rosie Campbell and Robert Long, who help lead Eleos AI, a nonprofit research organization dedicated to model welfare, told me they field a lot of emails from folks who appear completely convinced that AI is already sentient. They even contributed to a guide for people concerned about the possibility of AI consciousness.

“One common pattern we notice in these emails is people claiming that there is a conspiracy to suppress evidence of consciousness,” Campbell tells me. “And I think that if we, as a society, react to this phenomenon by making it taboo to even consider the question and kind of shut down all debate on it, you’re essentially making that conspiracy come true.”

Zero Evidence of Conscious AI

My initial reaction when I learned about model welfare might be similar to yours. Given that the world is barely capable of considering the lives of real humans and other conscious beings, like animals, it feels gravely out of touch to be assigning personhood to probabilistic machines.

Campbell says that’s part of her calculus, too. “Given our historical track record of underestimating moral status in various groups, various animals, all these kinds of things, I think we should be a lot more humble about that, and want to try and actually answer the question” of whether AI could be deserving of moral status, she says.