Center for Technology and Innovation

AI Regulation’s State-by-State Chaos Shows the Need for Federal Leadership

By: Dr. Edward Longe and Taylor Barkley / July 2, 2025

The fierce opposition to federal artificial intelligence preemption tells us everything we need to know about why it’s necessary. When 260 state legislators, 40 attorneys general and more than 100 organizations coordinate to reject congressional oversight of AI regulation, they are not defending democracy; they are protecting turf. Their unified resistance reveals exactly why Congress must act: America’s AI future is being held hostage by a regulatory free-for-all that threatens our global competitiveness.

Consider the numbers. Lawmakers from 45 states have introduced more than 1,000 AI bills this year alone, more than 20 per state. This isn’t thoughtful policymaking. It’s regulatory chaos masquerading as democratic deliberation. We’re watching 50 different laboratories of democracy try to reinvent the wheel, creating a patchwork of conflicting rules that no company can reasonably navigate.

The irony is palpable. The same state officials arguing for local control are coordinating a national campaign against federal leadership. If they can organize across state lines to oppose preemption, why can’t they work together to create coherent AI governance?

The answer is simple: State-by-state regulation isn’t about protecting citizens. It’s about regulatory arbitrage. States are competing to impose the most draconian form of AI governance on the entire nation, regardless of the consequences for innovation or interstate commerce.

Take California’s now-vetoed SB 1047, which would have regulated AI systems developed within state borders. Because most major AI companies are headquartered in California, this single state law would have effectively set national AI policy. That’s not federalism; that’s one state dictating terms to 49 others.

We have seen this movie before with data privacy. Companies spend billions of dollars navigating conflicting and sometimes contradictory state requirements, creating costs ultimately passed along to consumers through higher prices and lost innovation. Meanwhile, people remain confused about what protections they actually have, if any.

AI regulation is heading down the same costly path as data privacy, only faster and with higher stakes. Unlike privacy laws that primarily affect how data is collected and used, AI regulations could determine which technologies get built, where they are developed and how they are deployed. Basically, any business that uses AI to create better and cheaper products and services, not just the companies we consider “the AI companies,” would have to comply. The economic consequences of getting this wrong dwarf the privacy patchwork problem.

The proposed congressional moratorium wouldn’t prevent states from addressing AI-related harms. It would simply require them to use existing laws or enact technology-neutral laws rather than creating AI-specific frameworks that treat identical problems differently based on whether AI was used. States would still be free to prosecute fraud, discrimination and privacy violations. They would need to apply consistent legal principles regardless of the technology.

This approach encourages more thoughtful regulation focused on actual harms rather than policy based on techno-panic. When lawmakers single out AI for special treatment, they often create rules that make little sense in practice while missing real problems that existing laws could address.

The Clinton administration understood the dynamic of federal leadership regarding internet governance, establishing that digital commerce “should be governed by consistent principles across state, national, and international borders.” That light-touch federal framework enabled the digital revolution that made America the global technology leader. AI deserves the same strategic clarity.

Like the internet, AI systems don’t stop at state lines. They operate across networks, serve users nationwide and compete in global markets. When artificial intelligence is the defining technology of the 21st century, powering medical research, financial services and national defense, we cannot afford to let 50 different regulatory experiments fragment American leadership.

Foreign adversaries aren’t slowing themselves down with conflicting AI governance rules. The European Union isn’t letting member states create contradictory AI frameworks. They are developing unified approaches that give their companies clear rules of the road, allowing them to develop competitive advantages.

Meanwhile, America’s response to the most important technological development of our time is regulatory chaos that serves no one except the lawyers navigating compliance across multiple jurisdictions.

Congress must lead. Although federal preemption language may not be perfect, it’s far better than the regulatory fragmentation we will face if Congress fails to act. The stakes are too high, the technology too important and the competitive landscape too fierce to let a regulatory race determine America’s AI future. The unified opposition from state officials isn’t an argument against federal preemption; it’s proof that we need it now more than ever.

Taylor Barkley is the director of public policy for the Abundance Institute. Edward Longe is the director of national strategy and the director of the Center for Technology and Innovation at the James Madison Institute.

Originally published in The Washington Times.