OpenAI CEO Sam Altman said artificial general intelligence, or "AGI," is losing its relevance as a term as rapid advances in the space make it harder to define the concept.
AGI refers to a form of artificial intelligence that can perform any intellectual task a human can. For years, OpenAI has been working to research and develop AGI that is safe and benefits all of humanity.
"I think it's not a super useful term," Altman told CNBC's "Squawk Box" last week, when asked whether the company's latest GPT-5 model moves the world any closer to achieving AGI. The AI entrepreneur has previously said he thinks AGI could be developed in the "reasonably close-ish future."
The problem with AGI, Altman said, is that different companies and individuals use different definitions. One definition is an AI that can do "a significant amount of the work in the world," according to Altman — but that has its issues, because the nature of work is constantly changing.
"I think the point of all of this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things," Altman said.
Altman isn't alone in expressing skepticism about "AGI" and how people use the term.