American technology faces a regulatory framework that was built for a bygone era. As Artificial Intelligence (AI) systems transform industries from healthcare to financial services, developers still navigate rules designed for the static software of decades past. To address this mismatch, Senator Ted Cruz (R-TX) last Wednesday introduced the “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or SANDBOX Act. If passed, the bill would allow developers to test and deploy new products under modified or waived regulations, providing a controlled environment to assess both the technology and the rules that govern it.
Senator Cruz’s initiative builds on a recommendation from the White House’s recent AI Action Plan, which encourages federal agencies to use such programs to accelerate AI development. The legislation offers a systematic method for distinguishing regulations that serve essential public purposes from those that merely impede progress. Data gathered from these sandboxes can then inform Congress on how to safely amend or repeal outdated rules.
The SANDBOX Act creates a process analogous to clinical trials for regulations. Just as pharmaceutical trials reveal which medicines are effective and safe, these sandboxes can help identify which rules genuinely protect consumers without stifling innovation.
Under the proposal, innovators can apply to the Office of Science and Technology Policy (OSTP) for temporary waivers from specific federal regulations ill-suited to AI. Applicants must demonstrate their project’s public benefit, outline and mitigate foreseeable risks, and show that the potential benefits outweigh those risks. The OSTP would coordinate with relevant agencies, which would have 90 days to review waiver applications. If an agency does not respond within that period, the application is treated as approved—a mechanism designed to prevent bureaucratic gridlock.
Predictably, critics have already begun raising concerns. A consumer rights group, Public Citizen, warned that the proposal “hands Big Tech the keys to experiment on the public,” while the Alliance for Secure AI echoed similar fears. Such concerns, however, misinterpret both the proposal’s structure and its primary beneficiaries. Large technology firms already have the legal and financial resources to navigate complex regulatory hurdles. Smaller companies and startups, which lack these resources, are the ones most disadvantaged by the current system. By creating a more flexible regulatory environment, sandboxes level the playing field for the very companies most likely to challenge established industry leaders.
Taking part in a sandbox is also not a grant of immunity. Participants remain subject to all other existing consumer protection and liability laws and must report back to the government regularly. As Senator Cruz emphasized, “a regulatory sandbox is not a free pass. People creating or using AI still have to follow the same laws as everyone else.”
Nor do sandboxes fully absolve participating companies of liability or relieve any agency of its oversight responsibilities. In addition to traditional oversight, participants must report any incident involving harm, economic damage, or deceptive practices to both the OSTP and the relevant agencies within 72 hours. Failure to comply results in the immediate loss of sandbox protections.
The legislation also preserves full congressional authority. While the Act requires OSTP to provide Congress with annual reports identifying regulations that could be safely modified or eliminated based on sandbox results, Congress can accept, modify, or reject these recommendations as it sees fit. It’s possible, albeit unlikely, that no laws will be scrubbed at all. However, sandboxes enable lawmakers to have an informed conversation about the efficacy and necessity of certain regulations by providing a glimpse into a world without them. That’s a productive precedent to set, regardless of whether anything comes of it.
The regulatory sandbox approach has demonstrated remarkable success in other countries and highly regulated industries. The United Kingdom’s pioneering fintech sandbox, launched in 2016, showed that participating firms experienced significant improvements in their ability to raise capital while maintaining consumer protections.
These results have inspired similar programs worldwide: more than 160 regulatory sandboxes now operate across over 85 countries and a range of sectors. Domestically, states like Arizona and Utah have deployed sandboxes to identify new ways to make legal services more affordable for underserved populations. In many ways, the SANDBOX Act builds on the example of states serving as laboratories of democracy.
The United States risks falling behind if it fails to adopt a more adaptive regulatory approach. Just 34 days after the release of America’s AI Action Plan, China unveiled an action plan of its own. While American innovators still navigate complex multi-agency approval processes, Chinese developers benefit from centralized state backing that enables rapid, widespread experimentation. The SANDBOX Act acknowledges this structural disadvantage and creates pathways for rapid innovation that can compete with China’s approach while preserving consumer protections.
The SANDBOX Act offers a blueprint for maintaining America’s technological edge in an increasingly competitive global landscape. By creating controlled environments where innovation can flourish under smart oversight, Congress can ensure that American entrepreneurs lead the AI revolution ahead of our adversaries.
Policymakers can continue to allow outdated regulations to constrain breakthrough technologies, or they can embrace a tested approach that preserves essential protections while unleashing American ingenuity.
Senator Cruz’s legislation offers a pragmatic solution that honors both innovation and responsibility.