July 23, 2025
By: Turner Loesel
In the predawn hours of July 1, 2025, the U.S. Senate dealt a stunning blow to America’s tech sector. By a margin of 99-1, senators killed a proposed moratorium that would have temporarily prevented states from enforcing laws targeting artificial intelligence systems. A vote that lopsided signals that any standalone bill attempting the same thing would be dead on arrival. Senator Thom Tillis of North Carolina stood alone in support, facing down an unlikely coalition that included 40 state attorneys general, 260 state legislators, 17 Republican governors, and advocacy groups spanning the ideological spectrum.
State legislators may be tempted to interpret this crushing rejection as a green light for aggressive AI regulation. That would be a profound mistake. As Congress abdicates its constitutional duty to ensure a common national market, states like California and New York are pushing ahead with their own restrictive, process-based regulations, ensuring their approach becomes the de facto national standard and harming innovation and investment in states that might prefer a lighter touch. States would be wise to avoid this debacle.
With the federal government on the sidelines, the stage is set for another regulatory storm in state capitols. More than 1,000 AI-related bills flooded state legislatures in 2025 alone, up from nearly 700 in 2024. As each state attempts to craft its own unique approach, requiring different algorithmic bias audits and mandating compliance with different state agencies, the states threaten to create what policy experts call a “looming patchwork of inconsistent state and local laws” that is simply impossible for innovators to navigate.
Policymakers should have learned this lesson already. The failure to enact a national data privacy law has produced a state-by-state patchwork that economists project will cost the U.S. up to $239 billion annually in compliance costs. Small businesses are hit hardest, as fixed compliance costs turn otherwise viable startups into failed ventures and divert scarce resources from growth and innovation. Ironically, these rules entrench the very Big Tech companies they often claim to target, as only corporate giants have the resources to navigate 50 different rulebooks.
But an AI patchwork would be exponentially worse. While costly, data privacy rules govern relatively static processes like data storage and user consent. AI regulations, in contrast, seek to govern the dynamic, evolving logic of the systems themselves. Mandating 50 different standards for algorithmic fairness, transparency, and testing would require innovators to constantly re-engineer the core function of their products for each state—an innovation-crushing and likely impossible task.
Faced with this reality, many companies that operate nationally cannot tailor their products to dozens of conflicting, and perhaps contradictory, requirements. Instead, they are forced to default to the most stringent standards, meaning a single state’s policy can become the de facto rule for the entire country. This dynamic is precisely what the moratorium sought to prevent. It was not an infringement on states’ rights, but an act of federalism designed to preserve a single national market and prevent any one state from unilaterally dictating AI policy for all. Maintaining a fragmented policy landscape ensures that even states with sensible, light-touch laws lose out, effectively allowing the policy of one state to infringe upon the economic sovereignty of another.
If Congress cannot provide coordination through legislation, the alternative is an unrelenting storm of lawmaking in the states. Imagine navigating Colorado’s comprehensive AI Act with its complex audit rules while California drafts a conflicting standard and other states advance laws built on incompatible definitions of what “artificial intelligence” even is. If states become emboldened to mandate their own conflicting algorithmic audits, the result will be a legal labyrinth for any nationally operating business.
Regardless of how Congress acts, states can still achieve coordination through collective restraint. The guiding principle should be fairness: a harm caused by an AI system is not inherently worse than an equal harm caused by a human. Companies causing damage with AI should face the same penalties as those causing identical damage through other means, no more and no less.
History provides a successful model. The Internet Tax Freedom Act, a federal moratorium on discriminatory internet taxation, allowed e-commerce to flourish without being strangled by thousands of local tax regimes. Every state that participated in that restraint benefited from the internet’s explosive growth.
State legislators now face a choice that will shape America’s technological future. It is tempting to interpret the moratorium’s defeat as a straightforward victory for “states’ rights,” but this view is dangerously simplistic.
The vote was not a green light for 50 different sets of rules but, rather, a yellow light—a warning to proceed with extreme caution.
Senators were right about one thing: Congress has failed to provide consistent leadership on emerging technology. But that failure does not make uncoordinated state action wise. In the absence of a moratorium, true leadership now requires state lawmakers to recognize that collective restraint is the best way to foster innovation for everyone. States have the opportunity to emulate Congress’s restraint, this time on purpose.
Turner Loesel is a policy analyst for the Center for Technology and Innovation at The James Madison Institute in Tallahassee, FL.