DeepSeek may have used Google’s Gemini to train its latest model
Published on: 2025-06-13 06:17:13
Last week, Chinese lab DeepSeek released an updated version of its R1 reasoning AI model that performs well on a number of math and coding benchmarks. The company didn’t reveal the source of the data it used to train the model, but some AI researchers speculate that at least a portion came from Google’s Gemini family of AI models.
Sam Paech, a Melbourne-based developer who creates “emotional intelligence” evaluations for AI, published what he claims is evidence that DeepSeek’s latest model was trained on outputs from Gemini. DeepSeek’s model, called R1-0528, prefers words and expressions similar to those that Google’s Gemini 2.5 Pro favors, Paech said in an X post.
“If you're wondering why new deepseek r1 sounds a bit different, I think they probably switched from training on synthetic openai to synthetic gemini outputs.” — Sam Paech (@sam_paech), May 29, 2025
That’s not a smoking gun. But another developer, the pseudonymous creator of a “free speech eval” for AI calle