US state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap.
Several states have already enacted legislation on the use of AI, and all 50 states introduced AI-related bills in 2025.
Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition, and generative AI.
Government use of AI
The oversight and responsible use of AI are especially critical in the public sector. Predictive AI—AI that performs statistical analysis to make forecasts—has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole.
But the widespread use of algorithmic decision-making could carry major hidden costs. AI systems used to deliver government services can produce algorithmic harms, including racial and gender bias.
Recognizing these potential harms, state legislatures have introduced bills focused on public sector use of AI, with an emphasis on transparency, consumer protections, and acknowledging the risks of deploying AI systems.
Several states have required AI developers to disclose the risks posed by their systems. For example, the Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them.
Montana’s new “Right to Compute” law requires AI developers to adopt risk management frameworks (methods for addressing security and privacy in the development process) for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York’s SB 8755.