China’s regulators are attempting to match the rapid pace at which AI is evolving. That effort needs to become a global one.
China has been forging its own path on the regulation of technologies based on artificial intelligence. Credit: Hector Retamal/AFP/Getty
The past few years have seen no shortage of international dialogues, white papers and recommendations from advisory groups on the development and use of artificial intelligence. Yet when it comes to turning these into globally agreed rules that maximize AI’s benefits and minimize its harms, there has been a leadership vacuum.
As Nature reported last week (Nature https://doi.org/qhbv; 2025), one country is pushing forwards with plans to change that. China is proposing to set up a global body to coordinate the regulation of AI, to be known as the World Artificial Intelligence Cooperation Organization (WAICO). Establishing such a body is in all countries’ interests, and governments around the world should get on board.
AI models have astounding power, and abilities that could supercharge science and boost economic growth. But they do not fully understand the world, and can fail in unpredictable ways. There are many ways in which they could cause harm, including exacerbating inequality, aiding criminality and assisting the spread of mis- and disinformation. Some prominent researchers even argue that superintelligent AI could one day destroy humanity.
So far, such risks have not been given due attention in the breakneck race to develop AI, a race that many fear has created an economic bubble on the brink of bursting. The United States, home to many of the companies making the most powerful and widely used models, has no national AI regulations, just a patchwork of state-level laws. On the whole, US companies are expected to police themselves and establish their own internal guardrails, even as they compete relentlessly with one another.
The AI Safety Index, the latest assessment of large technology companies’ safety and risk policies by the Future of Life Institute, based in Campbell, California, was published on 3 December. On a scale from A to F, no US firm scores higher than a C+ (see go.nature.com/48ikyhv). Yet last month, US President Donald Trump launched an initiative dubbed the Genesis Mission, which will give companies and researchers developing AI models unprecedented access to government data sets. The administration has compared it to the Apollo programme to reach the Moon.
In the European Union, the AI Act, introduced last year, requires the makers of the most powerful AI systems to strengthen their analyses of the threats their models pose. The act is being implemented in stages, and it is not yet clear what effect the threat of substantial fines for non-compliance will have. Media reports suggest that companies are pressuring the EU to water down the legislation.