Mustafa Suleyman is not your average big tech executive. He dropped out of Oxford University as an undergrad to create the Muslim Youth Helpline, before teaming up with friends to cofound DeepMind, a company that blazed a trail in building game-playing AI systems before being acquired by Google in 2014.
Suleyman left Google in 2022 to commercialize large language models (LLMs) and build empathetic chatbot assistants with a startup called Inflection. He then joined Microsoft as its first CEO of AI in March 2024 after the software giant invested in his company and hired most of its employees.
Last month, Suleyman published a lengthy blog post in which he argues that the AI industry should avoid designing AI systems that mimic consciousness by simulating emotions, desires, and a sense of self. His position seems to contrast starkly with that of many in AI, especially those who worry about AI welfare. I reached out to understand why he feels so strongly about the issue.
Suleyman tells WIRED that building AI that appears conscious will make it more difficult to limit the abilities of AI systems and harder to ensure that AI benefits humans. The conversation has been edited for brevity and clarity.
==
When you started working at Microsoft, you said you wanted its AI tools to understand emotions. Are you now having second thoughts?
AI still needs to be a companion. We want AIs that speak our language, that are aligned to our interests, and that deeply understand us. The emotional connection is still super important.
What I'm trying to say is that if you take that too far, then people will start advocating for the welfare and rights of AIs. And I think that's so dangerous and so misguided that we need to take a declarative position against it right now. If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans.
We are certainly seeing people build emotional connections with AI systems. Are users of Microsoft Copilot going to it for emotional or even romantic support?
No, not really, because Copilot pushes back on that quite quickly. So people learn that Copilot won't support that kind of thing. It also doesn't give medical advice, but it will still give you emotional support to understand medical advice that you've been given. That's a very important distinction. But if you try and flirt with it, I mean, literally no one does that because it's so good at rejecting anything like that.
In your recent blog post you note that most experts do not believe today’s models are capable of consciousness. Why doesn’t that settle the matter?
These are simulation engines. The philosophical question that we're trying to wrestle with is: When the simulation is near perfect, does that make it real? You can't claim that it is objectively real, because it just isn't. It is a simulation. But when the simulation becomes so plausible, so seemingly conscious, then you have to engage with that reality.