Artificial intelligence has officially arrived in state capitals. In 2025 alone, state legislatures considered more than 1,000 AI-related bills, with 38 states enacting roughly 100 new laws. That volume tells a story: AI is not just a technology issue, but a governance one.
As is typical of state legislatures, most efforts focused on concrete problem-solving—how state agencies should use AI, how consumers should be protected, how children should be safeguarded, and how familiar legal rights should be preserved. But AI complicates this work. The technology evolves quickly, its applications diffuse across industries, and responsibility is often shared among developers, deployers, and data providers. Legislators are trying to regulate a blurry, moving target.
A new analysis by the University of Florida’s Project Navigate—a student-led research initiative examining how digitization is reshaping government and business—mapped state AI legislation in 2025. The results reveal a consistent strategy across states: govern a state’s own use of AI, then regulate where harms are easy to see and enforce.
More than half of all AI bills fell into just five topic areas. The largest by far was public-sector governance. Twenty percent of the AI bills in 2025 addressed how agencies procure, deploy, and oversee AI systems. These measures emphasized transparency, accountability, audits, ethics boards, and inventories of AI use. States were acting as large buyers and responsible users of AI.
New York was the most active state in this category, introducing 30 governance bills in 2025. Yet few became law. One that did frames AI as a labor issue: The statute requires agencies to disclose their AI tools and to ensure that AI adoption neither overrides collective bargaining agreements nor results in job displacement.
This pattern—a wide legislative funnel with a narrow enacted end—showed up repeatedly in 2025. Across the five most common topic areas, New York considered 76 bills but passed relatively few. Texas, by contrast, took up fewer bills (41) and enacted more than a quarter of them.
Consumer protection was the second most common focus, with 113 bills introduced. These measures largely emphasized existing frameworks governing unfair or deceptive acts and practices, rather than creating new AI-specific codes. This approach makes sense economically: states targeted “high-risk” applications where harms are already well understood—fraud, misleading advertising, and other deceptive conduct—rather than attempting to regulate AI in the abstract.
Transparency and labeling requirements were next, with 91 bills considered. Spanning healthcare, advertising, and elections, these measures typically required AI-generated content to be identifiable or mandated disclosures about how an AI system works and is used. Again, New York led in introductions but enacted little.
Child safety and biometrics—including deepfakes—rounded out the top five. Here, Texas emerged as the most active state, introducing 19 bills and passing five. These laws focused on preventing sexual deepfakes, limiting biometric data use, and restricting certain AI applications involving minors. The emphasis was less on procedural governance and more on criminal penalties for specific, legible harms.
These differences point to a broader political pattern. Coastal, Democratic-leaning states such as California and New York were more likely to pursue systemic, process-oriented AI governance, such as transparency frameworks and risk reporting. More conservative states tended to focus on narrow, outcome-based harms such as child exploitation, election interference, and biometric misuse. Notably, much of the legislation was bipartisan. Michigan’s law targeting sexual deepfakes, for example, attracted broad support across party lines.
States appear to be weighing AI’s public benefits against its risks. They regulated where enforcement appears feasible and public understanding is high—deepfakes, healthcare, fraud—while simultaneously tightening controls over their own AI use. This incrementalism is cautious by design: Comprehensive AI codes can be technically brittle and vulnerable to rapid obsolescence.
The challenges ahead are non-trivial. AI systems rarely have a single author, making liability hard to assign when things go wrong. Federal preemption looms. And legislatures face a public that views AI simultaneously as a transformative opportunity and a destabilizing threat.
For now, states appear to be moving cautiously, targeting obvious problems, and signaling attentiveness to constituents without overcommitting to rules they may soon regret. Whether that balance holds as AI capabilities—and political pressure—continue to grow remains an open question.