OpenAI used to test its AI models for months - now it's days. Why that matters
Published on: 2025-04-30 14:22:48
On Thursday, the Financial Times reported that OpenAI has dramatically shortened its safety testing timeline.
Eight people, either company staff or third-party testers, told the FT that they had "just days" to complete evaluations on new models -- a process they say would normally be given "several months."
Competitive edge
Evaluations are what surface model risks and other harms, such as whether a user could jailbreak a model into providing instructions for creating a bioweapon. For comparison, sources told the FT that OpenAI gave them six months to review GPT-4 before its release -- and that they only found concerning capabilities two months in.
Sources added that OpenAI's tests are not as thorough as they used to be, and that testers lack the time and resources needed to properly catch risks.