The battle over regulating artificial intelligence is heating up in Washington, with stakeholders fighting not over the technology itself, but over who should have the authority to regulate it. The absence of a comprehensive federal AI standard focused on consumer safety has led to a surge in state-level legislation aimed at protecting residents from potential AI-related harms. Bills like California’s SB-53 and Texas’s Responsible AI Governance Act have been introduced to address issues such as intentional misuse of AI systems.
However, Silicon Valley tech giants and startups argue that a patchwork of state laws creates a fragmented regulatory landscape that stifles innovation and undermines the industry’s competitiveness, particularly against countries like China. They are pushing instead for a single national standard, or for no regulation at all, and backing efforts to preempt states from enacting their own AI legislation.
House lawmakers are considering using the National Defense Authorization Act (NDAA) to block state AI laws, and a leaked draft of a White House executive order likewise supports preempting state efforts to regulate AI. Such sweeping preemption has met resistance in Congress, where critics warn it would leave consumers vulnerable to harm and allow tech companies to operate without oversight.
In response to the lack of a federal standard, Rep. Ted Lieu and the bipartisan House AI Task Force are working on a package of federal AI bills covering consumer protections, addressing issues such as fraud, healthcare, transparency, child safety, and catastrophic risk. While comprehensive legislation of that kind will take time to pass, the push to limit state authority in the meantime has become one of the most contentious issues in AI policy.
Those preemption efforts have intensified, with lawmakers weighing NDAA language that would bar state regulation of AI; negotiations may yet preserve state authority in certain areas, such as children’s safety and transparency. Separately, a leaked White House draft executive order reveals a strategy to challenge state AI laws through an “AI Litigation Task Force” and to push for national standards that override state rules.
The debate over preempting state AI regulation has divided those advocating industry self-regulation from those calling for more proactive oversight. Some argue that existing laws already suffice to address AI harms; others believe states should retain the flexibility to respond quickly to emerging risks.
As states continue to pass AI-related legislation and federal progress remains slow, calls for a national AI policy have grown. Rep. Lieu is assembling a comprehensive megabill covering fraud penalties, deepfake protections, whistleblower protections, and mandatory testing and disclosure requirements for large language model companies, with the goal of producing a bill that can survive the legislative process and win bipartisan support.
The debate over regulating AI will likely continue as stakeholders navigate the complex landscape of technology, policy, and consumer protection. Finding the right balance between innovation and oversight will be crucial in shaping the future of artificial intelligence regulation in the United States.

