At times, it can seem like efforts to regulate and rein in AI are everything everywhere all at once.
China issued the first AI-specific regulations in 2021. Their focus is squarely on providers and content governance, enforced through platform controls and record-keeping requirements.
In Europe, the EU AI Act dates to 2024, but the European Commission is already proposing updates and simplification.
India charged its senior technical advisors with creating an AI governance system, which they released in November 2025.
In the United States, individual states are legislating and enforcing their own AI rules, even as the federal government in 2025 moved to preempt state action and loosen the reins.
This leads to a critical question for American engineers and policymakers alike: What can the U.S. actually enforce in a way that reduces real-world harm? My answer: regulate AI use, not the underlying models.
Why model-centric regulation fails
Proposals to license “frontier” training runs, restrict open weights, or require permission before publishing models (such as California’s Transparency in Frontier Artificial Intelligence Act) promise control but deliver theater. Model weights and code are digital artifacts; once released, whether by a lab, a leak, or a foreign competitor, they replicate at near-zero cost. You can’t un-publish weights, geofence research, or prevent distillation into smaller models. Trying to bottle up these artifacts yields two bad outcomes: compliant firms drown in paperwork while reckless actors route around the rules offshore, underground, or both.
In the U.S., model-publication licensing also likely collides with speech law. Federal courts have treated software source code as protected expression, so any regime that restricts the publication of AI models would be vulnerable to First Amendment challenges.