Two federal actions due on March 11, 2026 identify which state AI laws Washington considers obstacles and signal how far the administration will push to challenge them. Neither action repeals any state law. Colorado’s AI Act still takes effect on June 30, 2026, California’s transparency laws remain enforceable, and New York’s requirements are active. The compliance question is not what might change, but how to operate responsibly while that uncertainty resolves.
What Is Actually Happening on March 11
President Trump’s December 2025 executive order on AI, “Ensuring a National Policy Framework for Artificial Intelligence,” set two specific deadlines for March 11, 2026 (Baker Botts via JD Supra, March 4, 2026).
The first is a Commerce Department assessment identifying state AI laws the administration considers “onerous” and in conflict with its policy of minimal regulatory burden. The assessment focuses particularly on laws that “require AI models to alter their truthful outputs” or compel disclosures that may raise First Amendment concerns. Laws identified in the assessment may be referred to the Department of Justice AI Litigation Task Force, established January 9, 2026, which has authority to challenge state AI laws on grounds including unconstitutional burdens on interstate commerce and preemption by existing federal regulations.
The second is a Federal Trade Commission policy statement on the application of Section 5 of the FTC Act, the prohibition on unfair and deceptive acts or practices, to AI models. The administration’s theory is that certain state laws requiring AI developers to adjust model outputs to mitigate bias could, in its framing, compel production of outputs that are “deceptive” under federal law. Legal commentators have described this theory as untested (Baker Botts via JD Supra, March 4, 2026).
The executive order also directs the Commerce Department to issue a policy notice conditioning approximately USD 21 billion in remaining Broadband Equity, Access, and Deployment (BEAD) programme non-deployment funds on states not maintaining “onerous” AI laws. This represents real financial leverage over state legislatures that have accepted BEAD funding or are expecting it.
None of these actions directly invalidates any state law. Meaningful legal relief requires the DOJ to file suit and a court to grant an injunction, a process that could take months or years.
Which State Laws Are in the Frame
The executive order explicitly names Colorado’s Artificial Intelligence Act (SB 24-205) as a law the administration considers problematic. The Act imposes “reasonable care” obligations on deployers of high-risk AI systems to prevent algorithmic discrimination, effective June 30, 2026.
The broader universe of potentially affected laws is substantially larger. Baker Botts’ analysis of the executive order identifies California’s Transparency in Frontier AI Act (SB 53), California’s Generative Artificial Intelligence Training Data Transparency Act (AB 2013), and the New York RAISE Act, signed December 19, 2025, as among the laws the Commerce Department assessment may address.
The scope of the Commerce Department’s evaluation matters enormously. A narrow reading that addresses only major omnibus AI statutes signals limited enforcement ambitions. A wide-ranging assessment that sweeps across the 145 AI laws enacted across US states in 2025 signals a broader campaign.
The DOJ task force has not yet filed any lawsuits. The executive order contemplates that Commerce identifies specific laws first, then DOJ acts on those referrals. Organisations should not interpret the absence of litigation as confirmation that the laws will remain unaffected.
The Laws That Are Not Going Anywhere Soon
Several state AI obligations are not plausible targets for near-term federal challenge, whether because they are already embedded in broader privacy statutes, because they address concerns the federal government has not contested, or because enforcement infrastructure is already in operation.
New York City’s Local Law 144, requiring annual bias audits for automated employment decision tools, has been operational since 2023. Audit results are published. Employers in New York City are already conducting these audits. A federal challenge to this obligation would face significant procedural hurdles and political complexity.
California’s existing privacy framework, including CPRA provisions on automated decision-making and profiling, sits inside a broader data protection statute that has not been targeted by the executive order. The administration’s focus appears to be on laws that directly constrain AI model outputs, not on data subject rights frameworks more generally.
State anti-discrimination laws that apply to AI-assisted employment decisions without specifically naming AI are also outside the scope of the executive order, which targets AI-specific legislation. An employer using AI in hiring is still bound by Title VII, the Americans with Disabilities Act, and equivalent state statutes, regardless of what happens to the Colorado AI Act.
The Compliance Paradox
For compliance teams, the March 11 actions create a paradox: the administration has increased uncertainty without reducing the number of legal obligations currently in force.
Colorado SB 24-205 takes effect June 30, 2026. It has not been suspended, amended, or subject to any court order. An organisation that halts compliance preparation on the basis that federal action might eventually invalidate the law faces real legal exposure if that action does not materialise, or does not arrive before June 30.
California and New York laws are currently enforceable. The DOJ task force can challenge them, but a challenge does not create an automatic stay. State enforcement continues unless and until a court issues an injunction.
The administration’s BEAD leverage is political and financial, not legal. A state that refuses to amend its AI laws risks losing eligibility for remaining broadband funding. That creates legislative pressure, but it does not change any law already on the books or immediately affect private sector compliance obligations.
How Compliance Teams Should Respond
The practical framework is to continue current compliance work while building flexibility into the programme design.
Map AI use cases to specific state laws. Identify which AI systems are deployed, which US states they affect, and which specific state laws apply to each. This is a prerequisite for both current compliance and for understanding which obligations might be affected by a federal challenge if one succeeds.
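As a minimal sketch of what that mapping exercise can look like in practice, the inventory below attaches state laws to each AI system based on the jurisdictions where it is deployed. The system names and the rule table are hypothetical examples, not a complete or authoritative statement of which laws apply; real mapping requires counsel.

```python
# Illustrative sketch only: a hypothetical inventory mapping each AI system
# to the US states it touches and the state AI laws that may apply there.
# System names and the rule table are examples, not legal advice.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    jurisdictions: set[str]                          # states where deployed
    applicable_laws: set[str] = field(default_factory=set)

# Hypothetical rule table keyed by state.
STATE_LAWS = {
    "CO": {"CO SB 24-205"},             # Colorado AI Act
    "CA": {"CA SB 53", "CA AB 2013"},   # frontier transparency, training data
    "NY": {"NY RAISE Act", "NYC Local Law 144"},
}

def map_obligations(systems: list[AISystem]) -> list[AISystem]:
    """Attach every law from each deployed jurisdiction to each system."""
    for system in systems:
        for state in system.jurisdictions:
            system.applicable_laws |= STATE_LAWS.get(state, set())
    return systems

inventory = map_obligations([
    AISystem("resume-screener", {"NY", "CO"}),
    AISystem("chat-assistant", {"CA"}),
])
for system in inventory:
    print(system.name, sorted(system.applicable_laws))
```

Even a spreadsheet version of this structure answers the key question a federal challenge raises: if one law is enjoined, which systems and controls does that actually touch?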
Track the March 11 outputs specifically. The Commerce Department assessment will identify which state laws are in the administration’s sights. That list should immediately inform a risk prioritisation exercise: laws on the list face greater federal scrutiny, but are also the ones most likely to generate early litigation that produces legal clarity.
Build switchable compliance controls. Design compliance programme elements, particularly around disclosures, impact assessments, and human oversight procedures, so they can be activated, paused, or adjusted at the jurisdiction level. An organisation that has built Colorado compliance as a single monolithic programme cannot easily adapt if a court issues a preliminary injunction affecting only Colorado. One that has built modular, jurisdiction-tagged controls can respond without unwinding its entire programme.
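The modular design described above can be sketched as jurisdiction-tagged controls behind a simple toggle. This is an illustrative pattern, not a production design; the control names and law references are hypothetical.

```python
# Illustrative sketch: jurisdiction-tagged compliance controls that can be
# paused or resumed per state, e.g. if a court stays one state's law.
# Control identifiers and law references are hypothetical examples.

controls = {
    "co-impact-assessment":        {"state": "CO", "law": "CO SB 24-205",      "enabled": True},
    "ca-training-data-disclosure": {"state": "CA", "law": "CA AB 2013",        "enabled": True},
    "nyc-bias-audit":              {"state": "NY", "law": "NYC Local Law 144", "enabled": True},
}

def set_jurisdiction(state: str, enabled: bool) -> None:
    """Toggle every control tagged to one state, leaving other states untouched."""
    for control in controls.values():
        if control["state"] == state:
            control["enabled"] = enabled

# Example: a preliminary injunction affecting only Colorado.
set_jurisdiction("CO", enabled=False)
active = sorted(cid for cid, c in controls.items() if c["enabled"])
print(active)  # CA and NY controls keep running
```

The point of the tagging is that a single court order maps to a single toggle, rather than to a rework of the whole programme.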
Do not wait for federal clarity. The legal challenge process will likely span 2026 and extend into 2027. State laws will remain enforceable during active litigation unless a court specifically stays them. Compliance teams should plan around current legal obligations, not anticipated federal outcomes.
March 11 Is a Starting Gun, Not a Finish Line
The March 11 deadlines are the beginning of a multi-year federal-state conflict on AI governance, not the resolution of one. Compliance teams that treat today’s actions as a reason to pause will find themselves behind when state enforcement continues and federal litigation takes longer than anticipated.
The organisations in the strongest position are those that have mapped their AI obligations accurately, built flexible compliance programmes, and established governance infrastructure that can adapt as the legal landscape evolves. That infrastructure is also the foundation for compliance with the EU AI Act and other international frameworks, which the US federal actions do not affect at all.
Related reading: March 11 Federal AI Deadlines: What Businesses Everywhere Need to Watch | AI Compliance Deadlines in 2026 | What Is an AI Governance Framework?
Sources
- Baker Botts via JD Supra: “March 2026: Federal Deadlines That Will Reshape the AI Regulatory Landscape” (4 March 2026)
- The White House: Executive Order “Ensuring a National Policy Framework for Artificial Intelligence” (11 December 2025)
- Colorado General Assembly: SB 24-205 Colorado Artificial Intelligence Act; SB 25B-004 delay to June 30, 2026
- LinkedIn AI Regulation Analysis: “The AI Regulation Wave: What’s Actually Coming in 2026–2027” (February 2026)
- SHRM: “New Year Brings New AI Regulations for HR” (December 2025)