Inside the AI power play that nearly slipped into the NDAA
By Michael Jones
Last week began with a high-stakes but relatively under-covered power struggle over the final language of the National Defense Authorization Act (NDAA), the annual bill that sets the rules and budget for the U.S. military. The White House, tech companies and some allied lawmakers had mounted an extraordinary push to ban states from regulating artificial intelligence (AI) by attaching federal preemption language to the final must-pass bill of the year.
By week’s end, the provision had been stripped from the legislation. But the fight over it exposed the fault lines of power, the vacuum of federal oversight, and the ongoing effort to centralize AI policy in Washington. Meanwhile, stakeholders are still scrambling to understand the implications of what could be the most consequential technology of the century, one that is advancing at breakneck speed.
“The White House and some in the tech industry are again ignoring rank-and-file members of Congress on both sides of the aisle. That’s why you are seeing momentum rapidly decline for their second try of putting a moratorium in on states,” House Democratic Caucus Vice Chair Ted Lieu (D-Calif.), the top Democrat on the House’s AI task force last Congress, told me last week as the fate of the provision hung in the balance. “If the White House and those in the tech industry really want to have a solution, they should work with [us], instead of just talking to GOP leadership, whose only goal is to rubber-stamp whatever Donald Trump wants.”
Some lawmakers who disagreed with the provision on the merits also suggested that Congress should address AI policy in a bill dedicated to it—not by tucking it into the defense bill.
“There would certainly be more appropriate vehicles,” Sen. Tammy Baldwin (D-Wis.), who serves on the Senate Commerce Committee, which has jurisdiction over AI policy, said of the preemption language. “But they always look to attach it to something that they know will pass.”
Sen. Mark Kelly (D-Ariz.) introduced a 24-page AI policy agenda in September aimed at heading off disruptions to labor markets and to energy and water supplies while building public trust. He told me that states should have some say in how AI is deployed within their borders, especially when the goal is preventing harm. The 2028 presidential prospect agreed that there are better ways to write AI rules than slipping them into a must-pass package.
“I don’t see the nexus between that issue and the defense bill,” Kelly said.
Supporters of preemption argue that it would prevent a patchwork of conflicting state laws, which they say could slow AI deployment, burden innovation and undermine national competitiveness in the emerging global AI race.
As Lieu noted during our interview, this isn’t the first time Republicans and the Trump administration have attempted to advance preemption language. The House Energy and Commerce Committee tucked a 10-year federal moratorium on most state and local AI regulation into an earlier version of the GOP tax megabill Trump signed into law this summer. That effort ultimately faced the same backlash from state officials, unions and civil rights groups, who saw it as an attempt to strip them of authority before federal protections existed.
State laws have been the only real check on an AI industry racing ahead of meaningful federal oversight. In the absence of national standards, legislatures in places like California, Colorado, Illinois, and Connecticut have stepped in with rules governing data privacy, algorithmic transparency, hiring discrimination and consumer protections. Those state laws have created the only enforceable guardrails in an otherwise unregulated wild west, shaping how companies deploy AI tools and giving workers and consumers at least some recourse when systems malfunction or cause harm. Critics of federal preemption argue that stripping states of that authority would mean removing the only functioning backstop while Congress remains years behind the technology curve.
Polling indicates broad, bipartisan public support for regulating AI, including demand for oversight, accountability, and safety-first constraints, especially where AI affects jobs, privacy, justice, or everyday social life.
According to a recent Gallup survey, 97% of Americans, a near-universal share, say AI safety and security should be subject to rules and regulations. A 2025 poll by the Program for Public Consultation (PPC) also found broad, bipartisan support for government regulation of AI, especially in areas where AI systems make life-impacting decisions, such as hiring, criminal justice and healthcare. A 2023 poll by the Artificial Intelligence Policy Institute found that 56% of registered U.S. voters support federal regulation of AI and 82% say they don’t trust technology-industry leaders to self-regulate.
Meanwhile, other polls suggest a large majority favors requiring pre-deployment government safety testing of powerful AI models, or government audits of AI systems used in high-stakes contexts. And recent data show that most U.S. adults want more control over how AI is used in their lives, suggesting a growing demand for oversight rather than laissez-faire development.
But what remains elusive is a settled consensus on the shape of that regulation, including who should oversee it and whether state-level or federal regulation is preferable, which underscores why the fight over preemption matters so much. Advocates also warn that removal of the language doesn’t end the push for preemption. Case in point: President Trump announced on Monday morning that he will sign an executive order to limit state AI laws, and the industry is expected to intensify its lobbying for a national shield that blocks state-level enforcement.
And Democrats say the lesson from this fight isn’t to preserve the status quo but to build a real federal framework—and several senior members told me they plan to make a national AI standard a first-year priority if they win back the House next November. They argue that states shouldn’t be the only line of defense forever, and that a durable, pro-innovation, pro-consumer federal code is the only way to keep pace with the technology while safeguarding civil rights and economic security.
“It’d be an important priority,” one of those members told me. “Clearly, the Republican plan right now is to essentially do whatever Donald Trump wants, whether it’s selling high-performance chips to China or imposing revenue-sharing agreements on tech companies or trying to suppress regulations [in the] states. The Republicans don’t really have a plan.”
Michael Jones is an independent Capitol Hill correspondent and contributor to COURIER. He is the author of Once Upon a Hill, a newsletter about congressional politics.