On September 10, 2025, US Senator Ted Cruz introduced the SANDBOX Act, a bill that would let AI companies request waivers from certain federal regulations for two years, Akin reported. The idea behind it is simple: give companies more freedom to innovate, cut red tape, and help the U.S. compete globally in AI.
But the AI regulatory sandbox raises crucial questions that need answers. For example, who takes responsibility when AI causes harm while the regulations are waived? And what happens when innovation comes at the cost of individual safety?
What Exactly Is an AI Regulatory Sandbox?
This concept isn't new. It draws on the regulatory sandboxes that countries like the UK and Singapore use in finance, which let banks test blockchain payments and digital currencies in controlled settings. If something goes wrong, regulators can measure the losses, issue refunds, or require insurance.
But AI is different altogether. It isn't just about money; AI can affect human lives, safety, and civil rights. A diagnostic AI tool could misuse sensitive health data. An autonomous vehicle (however cool) could skip safety checks. That can't be acceptable, because the harm here is not just financial, it's personal and societal.
The Public Safety Blind Spot
So, the AI regulatory sandbox lets companies request waivers from licensing, enforcement, and even compliance rules across multiple federal agencies. What this could mean in practice is that HIPAA privacy protections, transportation safety rules, or financial oversight become optional during "experiments."
What also strikes me as impractical is that the bill asks the companies themselves to disclose the potential risks. So the companies that stand to profit would be doing the risk assessment? How is that fair, and how is that going to be free from bias?
Oversight and Power Shifts
Under this proposal, decision-making authority would sit with the Office of Science and Technology Policy (OSTP) in the White House. Unlike other regulatory bodies, OSTP isn't obligated to take public comments or run impact studies, which means citizens would have very little say in decisions that affect them. The bill also gives OSTP the power to override any other federal agency that denies an AI company's waiver request. That move could weaken decades of specialized oversight in healthcare, finance, and transportation.
Remember how some policymakers floated a 10-year moratorium on state AI regulations? Had that proposal passed, states like California and Colorado would have been blocked from enforcing their own AI rules. But Congress pushed back. The moratorium did not pass, a sign that lawmakers aren't ready to wipe out states' ability to act as "laboratories of democracy."
Who Really Benefits?
In theory, a sandbox should help startups most. In practice, big tech firms like OpenAI, Google, and Meta are best positioned to take advantage: they have the lawyers, lobbyists, and other resources to navigate the waiver process.
A Smarter Way Forward
We don’t have to choose between innovation and safety. A better AI regulatory sandbox would:
- Limit scope, especially in high-risk areas like healthcare and transportation.
- Ensure independent oversight with technical experts outside government and industry influence.
- Mandate public transparency so citizens know what experiments are running and what risks they face.
- Involve Congress and state governments in regular reviews, keeping the process democratic.
Why This Matters for Democracy
Citizens must always come first; at the end of the day, technological advancement is supposed to make everyone's lives better. That can't happen when regulations become optional and oversight moves behind closed doors, because citizens lose their voice. There's no doubt that AI can accelerate innovation, but it must do so responsibly. Cruz's AI regulatory sandbox may unleash innovation, but it also asks Americans to accept unknown risks without their consent. That's too high a price for democracy to pay.
If this debate matters to you, spread the word. Share this article with a colleague or policymaker who cares about the future of AI.