When President Trump released the U.S. AI Action Plan last week, many were surprised to see “encourage open-source and open-weight AI” listed as one of the administration’s top priorities. The White House has elevated what was once a highly technical topic into an urgent national concern — and a key strategy for winning the AI race against China.
China’s emphasis on open source, highlighted in its own action plan released shortly after the U.S. plan, makes the open-source race imperative. And the global soft power that comes with more open models from China makes its recent leadership even more notable.
When DeepSeek-R1, a powerful open-source large language model (LLM) out of China, was released earlier this year, it didn’t come with a press tour. No flashy demos. No keynote speeches. But it was open weights and open science. Open weights mean anyone with the right skills and computing resources can run, replicate, or adapt the model; open science shares some of the techniques behind the model’s development.
Within hours, researchers and developers seized on it. Within days, it became the most-liked model of all time on Hugging Face — with thousands of variants created and used across major tech companies, research labs and startups. Most strikingly, this explosion of adoption happened not just abroad, but in the U.S. For the first time, American AI was being built on Chinese foundations.
DeepSeek wasn’t the only one
Within a week, the U.S. stock market — sensing the tremor — took a tumble.
It turns out DeepSeek was just the opening act. Dozens of Chinese research groups are now pushing the frontiers of open-source AI, sharing not only powerful models, but the data, code and scientific methods behind them. They’re moving quickly — and they’re doing it in the open.
Meanwhile, U.S.-based companies — many of which pioneered the modern AI revolution — are increasingly closing up. Flagship models like GPT-4, Claude and Gemini are no longer released in ways that give builders control. They’re accessible only through chatbots or APIs: gated interfaces that let you interact with a model but not see how it works, retrain it or use it freely. The models’ weights, training data and behavior remain proprietary, tightly controlled by a few tech giants.
This is a dramatic reversal. Between 2016 and 2020, the U.S. was the global leader in open-source AI. Research labs from Google, OpenAI, Stanford and elsewhere released breakthrough models and methods that laid the foundation for everything we now call “AI.” The transformer — the “T” in ChatGPT — was born out of this open culture. Hugging Face was created during this era to democratize access to these technologies.
Now, the U.S. is slipping, and the implications are profound.
American scientists, startups and institutions are increasingly driven to build on Chinese open models because the best U.S. models are locked behind APIs. As each new open model emerges from abroad, Chinese companies like DeepSeek and Alibaba strengthen their positions as foundational layers in the global AI ecosystem. The tools that power America’s next generation of AI products, research and infrastructure are increasingly coming from overseas.
And at a deeper level, there’s a more fundamental risk: Every advancement in AI — including the most closed systems — is built on open foundations. Proprietary models depend on open research, from the transformer architecture to training libraries and evaluation frameworks. More importantly, open source increases a country’s velocity in building AI. It fuels rapid experimentation, lowers barriers to entry and creates compounding innovation.
When openness slows down, the entire ecosystem follows. If the U.S. falls behind in open-source today, it may find itself falling behind in AI altogether.
Moving away from black box AI
This matters not just for innovation, but for security, science and democratic governance. Open models are transparent and auditable. They allow governments, educators, healthcare institutions and small businesses to adapt AI to their needs, without vendor lock-in or black-box dependencies.
We need more and better U.S.-developed open-source models and artifacts. U.S. institutions already pushing for openness must build on their success. Meta’s open-weight Llama family has led to tens of thousands of variations on Hugging Face. The Allen Institute for AI continues to publish excellent fully open models. Promising startups like Black Forest Labs are building open multimodal systems. Even OpenAI has suggested it may release open weights soon.
With more public and policy support for open-source AI, as demonstrated by the U.S. AI Action Plan, we can restart a decentralized movement that will ensure America’s leadership. It’s time for the American AI community to wake up, drop the “open is not safe” narrative, and return to its roots: open science and open-source AI, powered by an unmatched community of frontier labs, big tech, startups, universities and non-profits. A movement built on openness, competition and scientific inquiry will empower the next generation of builders. If we want AI to reflect democratic principles, we have to build it in the open. And if the U.S. wants to lead the AI race, it must lead the open-source AI race.
Clément Delangue is the co-founder and CEO of Hugging Face.