Within the space of three days, Washington produced a White House legislative framework and a competing Senate discussion draft, both built around pre-empting state AI laws. Colorado’s AI Act takes effect on 30 June 2026. California’s transparency and automated decision-making rules are already in force. For compliance leads, the question is whether to slow work on state obligations or keep building programmes that can survive a federal reset.

What Happened in the Week of 18 March

On 18 March 2026, Senator Marsha Blackburn released the TRUMP AMERICA AI Act discussion draft, a detailed legislative proposal that would codify the administration’s December 2025 “One Rule” executive order into federal law. Two days later, on 20 March, the White House released a National Policy Framework for Artificial Intelligence: a four-page set of legislative recommendations covering seven policy areas, from child safety to infrastructure to intellectual property (White House, 20 March 2026).

Both documents share a central premise: the United States needs a single national AI standard, and state AI laws that impose what the Framework calls “undue burdens” should be pre-empted. Axios described the Framework as “a list of priorities rather than a concrete legislative plan,” noting it is not tied to any specific bills and does not resolve longstanding issues around child protection or the mechanics of overriding state law. Sullivan & Cromwell characterised it as a “light-touch” federal regulatory approach that emphasises innovation, pre-emption, and reliance on existing legal regimes.

From Executive Order to Legislative Blueprint

The Framework follows the December 2025 executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the Department of Justice to establish an AI Litigation Task Force to “challenge State AI laws inconsistent with the policy.” Nelson Mullins noted that the DOJ task force is already operational, meaning federal legal challenges to state AI laws could proceed regardless of whether Congress acts on the Framework.

The Framework’s seven legislative recommendations cover child safety (age assurance, parental tools), community protection (AI-enabled fraud, data centre permitting, electricity costs), intellectual property (deferring copyright fair use questions to courts while enabling voluntary licensing frameworks), content authenticity (digital replicas, deepfake protections), government use (federal AI adoption, no new AI regulator), national security (export controls, compute infrastructure), and pre-emption of state AI laws (Sullivan & Cromwell, 22 March 2026).

The pre-emption language is broad but qualified. The Framework calls on Congress to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with the Administration’s policies.” WilmerHale noted the Framework is not a binding document and does not impose new legal obligations; it outlines recommended approaches for Congress to consider. But the direction is clear: the administration wants federal law to replace most categories of state-level AI regulation.

The Blackburn Draft Goes Further

Senator Blackburn’s TRUMP AMERICA AI Act discussion draft is more prescriptive than the White House Framework. Where the Framework sets principles, the Blackburn draft attempts to codify them: establishing a formal governance structure, defining prohibited state-level AI regulations, and creating federal sandboxes for AI development. The draft was released on 18 March, two days before the Framework, and represents a competing legislative vehicle that could move through the Senate Commerce Committee.

For businesses, the existence of two parallel proposals complicates planning. The Framework and the Blackburn draft overlap on pre-emption but differ on specifics. Congress would need to reconcile them before anything reaches a vote, and CNN reported that “many in the AI policy space believe it will be difficult to pass any legislation before the midterm elections in November.” House Democrats have already introduced a bill to repeal the December executive order, and divisions over AI policy cut across party lines rather than along them.

State Laws Are Not Waiting

While Washington debates framework language, state obligations are already binding or approaching deadlines. Colorado’s AI Act takes effect on 30 June 2026, requiring developers and deployers of high-risk AI systems to implement risk management, impact assessments, and consumer notification obligations. California’s transparency and automated decision-making measures (including SB 53, AB 2013, SB 942, and CCPA automated decision-making regulations) are already in force or carry fixed effective dates. New York’s generative AI warning bill passed the legislature on 9 March and awaits the governor’s signature.

Federal pre-emption, if it arrives, will not be retroactive in practical terms. Businesses that have invested in Colorado or California compliance programmes will not get that time or money back. And pre-emption of “undue burdens” still leaves open which state laws qualify. The Framework explicitly preserves state authority to enforce “generally applicable” laws (consumer protection, employment, civil rights) as applied to AI, meaning states retain significant enforcement power even under a pre-emption regime.

The DOJ’s AI Litigation Task Force adds another layer. Nelson Mullins noted that the task force was established by a January 2026 memorandum to “challenge State AI laws inconsistent with the policy.” This means individual state laws could face federal legal challenges before any legislation passes, creating a third source of uncertainty alongside the Framework and the Blackburn draft.

What Businesses Should Do

The worst response to federal pre-emption signals is to pause state compliance work. Even optimistic timelines put federal AI legislation months away from a vote, and the Framework itself acknowledges that existing state enforcement powers will largely survive pre-emption. The practical approach is to design for portability.

Keep building state compliance programmes. Colorado’s 30 June deadline is 94 days away. California’s rules are live. Pausing on the assumption that federal law will overtake them is a bet against the clock.

Design for modular compliance. Map obligations by state and build documentation that can shift from state-by-state compliance to a national standard without starting over. Risk assessments, impact documentation, and AI inventories are useful under any regime.
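For teams that maintain their obligation maps in code or spreadsheets, the idea can be sketched as a simple data structure. Everything below is illustrative: the statute labels are drawn from the laws named above, but the duty lists are abbreviated assumptions, not a complete legal inventory.

```python
# Hypothetical obligations map keyed by state. Duties listed are an
# illustrative subset, not legal advice or a complete inventory.
OBLIGATIONS = {
    "CO": {
        "law": "Colorado AI Act",
        "effective": "2026-06-30",
        "duties": ["risk management", "impact assessment", "consumer notice"],
    },
    "CA": {
        "law": "SB 53 / AB 2013 / SB 942 / CCPA ADMT regulations",
        "effective": "in force",
        "duties": ["transparency", "impact assessment", "consumer notice"],
    },
}

def portable_checklist(obligations):
    """Deduplicate duties across states.

    The shared core is the part of the programme most likely to
    survive a shift to a single national standard.
    """
    duties = set()
    for entry in obligations.values():
        duties.update(entry["duties"])
    return sorted(duties)

print(portable_checklist(OBLIGATIONS))
```

The design point is that the per-state layer (statute names, effective dates) stays swappable, while the deduplicated duty list is the durable artefact: risk assessments, impact documentation, and AI inventories remain useful whichever regime wins.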

Track the DOJ task force. Federal legal challenges to specific state laws could narrow the compliance landscape before Congress acts. Businesses with exposure in multiple states should monitor which laws the task force targets and adjust planning accordingly.

Prepare contract language for transition. Vendor agreements and customer contracts that reference specific state AI laws should include provisions for federal standardisation. Build in flexibility now rather than renegotiating later.

The Framework is a direction of travel, not a reason to stop. Federal legislation will take time to pass and longer to implement. State attorneys general, the FTC, and sector regulators are already enforcing AI-related duties under existing law. The businesses that will navigate this transition best are those building governance programmes designed to flex across jurisdictions rather than bet on any single regulatory outcome.

Related reading: Federal AI Preemption Push: Which State Laws Are Safe and Which Aren’t | AI Compliance Deadlines 2026 | EU AI Act Enforcement Is Behind Schedule

Sources