On 24 April 2026, the US Department of Justice announced it had intervened in a lawsuit filed by Elon Musk’s AI company xAI, challenging Colorado’s algorithmic discrimination law. The DOJ alleges that Colorado SB24-205 violates the Equal Protection Clause of the Fourteenth Amendment. Assistant Attorney General Harmeet K. Dhillon called the law an attempt to force AI companies to “infect their products with woke DEI ideology.”
The political framing is loud. The compliance question underneath it is straightforward: Colorado SB24-205 is still scheduled to take effect on 30 June 2026. No court has enjoined it. No legislature has repealed it. Businesses that use high-risk AI systems affecting Colorado residents, particularly in hiring, lending, insurance, and housing, still face a compliance deadline in two months.
What Colorado SB24-205 requires
Colorado’s Artificial Intelligence Act, enacted on 17 May 2024, applies to developers and deployers of “high-risk AI systems,” defined as AI systems that make or substantially contribute to consequential decisions affecting consumers. Employment decisions, including job-candidate screening, are explicitly listed as consequential.
The law requires deployers to use “reasonable care” to protect consumers from algorithmic discrimination. In practice, that means:
- conducting impact assessments before deploying high-risk AI;
- providing consumers with notice that AI is being used in consequential decisions;
- offering a right to appeal or request human review;
- disclosing known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General within 90 days of discovery or receipt of a credible report; and
- maintaining documentation of the AI system’s purpose, data inputs, and known limitations.
A violation is treated as a deceptive trade practice under the Colorado Consumer Protection Act. Fiscal note material from the Colorado General Assembly indicates civil penalties may reach USD 20,000 per violation.
What xAI and the DOJ are arguing
xAI filed its lawsuit on 11 April 2026, arguing that SB24-205 compels AI developers to alter their models’ outputs to satisfy the state’s definition of algorithmic discrimination. The company frames this as compelled speech in violation of the First Amendment and as a race-and-sex-based classification in violation of the Fourteenth Amendment’s Equal Protection Clause.
The DOJ’s intervention, reported by Bloomberg and Axios on 24 April, adds federal weight to xAI’s constitutional claims. The DOJ’s press release frames the case as part of the Trump administration’s broader push against what it characterises as DEI requirements in AI systems. Dhillon stated that the Justice Department “will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products.”
The legal arguments are significant but have not been tested. No court has ruled on whether state algorithmic discrimination requirements constitute compelled speech or violate equal protection. The case will take months, likely longer, to resolve. In the meantime, the law remains on the books.
Why the litigation does not change the compliance timeline
Three facts matter more than the political noise.
The law has not been enjoined. xAI’s lawsuit challenges the constitutionality of SB24-205, but no court has issued an injunction blocking the law from taking effect. Until a court orders otherwise, 30 June 2026 remains the compliance date. Businesses that assume the litigation will stop the law from applying are making a legal bet, not a compliance decision.
The Attorney General’s enforcement authority is independent. Colorado Attorney General Phil Weiser has not indicated any intention to delay enforcement. The deceptive-trade-practice classification means the AG can pursue violations using existing consumer protection infrastructure. The AG does not need to wait for the federal case to resolve before acting on complaints.
The reasonable-care standard does not depend on the outcome. Even if SB24-205 is eventually struck down, the reasonable-care framework it establishes, including impact assessments, consumer notices, human review, and documentation, reflects the direction of AI regulation across multiple jurisdictions. The EU AI Act’s high-risk obligations require similar documentation and oversight for employment AI systems. The Australian Privacy Act’s December 2026 ADM transparency requirement will oblige organisations to explain automated decisions. Building these capabilities now serves compliance across jurisdictions, not just Colorado.
The federal preemption pattern
The DOJ’s intervention in the xAI case fits a pattern SAW has been tracking. The White House National AI Framework (March 2026) explicitly called for federal preemption of state AI laws. The DOJ’s AI Litigation Task Force was created to challenge state regulations that the administration considers burdensome. The DOJ’s enforcement of AI-generated job ads under immigration law shows the same department is willing to regulate AI conduct when it aligns with federal priorities.
The tension is structural: the federal government is simultaneously pushing back on state AI regulation and pursuing its own AI enforcement agenda. For compliance teams, this creates a dual exposure. State laws like Colorado’s require affirmative governance steps. Federal enforcement (DOJ, FTC, SEC) requires defensible documentation and truthful claims. Neither side gives businesses a pass on AI governance.
What the case means for other states
Colorado is not the only state with AI discrimination legislation. As SAW covered in the federal-state AI collision analysis, more than 40 US states have introduced AI-related bills. If the DOJ’s constitutional challenge to Colorado succeeds, it could create precedent that limits other states’ ability to impose algorithmic discrimination requirements. If it fails, Colorado’s framework becomes a validated model that other states can adopt with greater confidence.
Illinois, California, and New York have all advanced AI employment legislation that includes some form of bias testing, impact assessment, or disclosure requirement. A ruling on Colorado’s law will affect how those states proceed. The compliance question for businesses operating across multiple states is whether to build to the highest standard now or wait for the legal landscape to settle.
The practical answer has not changed since SAW started covering this. Build. The governance capabilities required by Colorado, including impact assessments, consumer notices, human review workflows, and vendor documentation, are the same capabilities required by the EU AI Act, the Australian Privacy Act, and the emerging standards from the FTC’s enforcement playbook. Building them is not a bet on Colorado surviving litigation. It is a bet on AI governance being required somewhere, by someone, in 2026 or 2027. That bet is as close to certain as regulatory forecasting gets.
What compliance teams should do now
Do not pause Colorado-related work. The law has not been enjoined. Impact assessments, consumer notice frameworks, human review workflows, and AG reporting procedures should continue on the 30 June 2026 timeline until a court orders otherwise.
Document the governance framework as jurisdiction-neutral. Build AI governance capabilities that satisfy Colorado, the EU AI Act, and Australian ADM requirements simultaneously. A single framework with jurisdiction-specific modules is more efficient than rebuilding for each deadline.
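One way to picture the “single framework with jurisdiction-specific modules” idea is as a data model: a shared set of governance controls maintained once, with per-jurisdiction modules that map each regime’s obligations onto those controls and surface the gaps. The sketch below is purely illustrative; the control names, jurisdiction labels, and mappings are assumptions for demonstration, not a reading of any statute, and a real mapping would need legal review.

```python
from dataclasses import dataclass, field

# Shared governance controls maintained once, regardless of
# jurisdiction. Names are illustrative, not statutory terms.
CORE_CONTROLS = {
    "impact_assessment",
    "consumer_notice",
    "human_review",
    "vendor_documentation",
    "incident_reporting",
}

@dataclass
class JurisdictionModule:
    """Maps one regime's obligations onto the shared control set."""
    name: str
    required_controls: set[str] = field(default_factory=set)

    def gaps(self, implemented: set[str]) -> set[str]:
        # Controls this regime requires that are not yet in place.
        return self.required_controls - implemented

# Illustrative mappings only -- each regime's actual obligations
# must be confirmed against the statutory text.
modules = [
    JurisdictionModule(
        "Colorado SB24-205",
        {"impact_assessment", "consumer_notice",
         "human_review", "incident_reporting"},
    ),
    JurisdictionModule(
        "EU AI Act (high-risk)",
        {"impact_assessment", "human_review", "vendor_documentation"},
    ),
    JurisdictionModule(
        "Australian Privacy Act (ADM)",
        {"consumer_notice"},
    ),
]

implemented = {"impact_assessment", "consumer_notice"}
for m in modules:
    print(m.name, "gaps:", sorted(m.gaps(implemented)))
```

Adding a new jurisdiction then means adding one module, not rebuilding the framework, which is the efficiency argument for building jurisdiction-neutral from the start.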
Monitor the xAI case for injunctive relief. If a court issues a preliminary injunction blocking SB24-205 before 30 June, the compliance deadline effectively pauses. Watch for motions for injunctive relief and any AG statements on enforcement timing.
Track AG Weiser’s enforcement posture. The Colorado Attorney General’s public statements on SB24-205 enforcement will signal how aggressively the state intends to pursue violations while the federal case is pending. A formal enforcement guidance or “Dear Business” letter would be the strongest signal.
Brief the board on the dual exposure. Boards need to understand that the DOJ challenge does not eliminate compliance risk. It adds a second dimension: federal enforcement priorities alongside state obligations. The board briefing should cover both tracks and explain why governance investment serves both.
Sources
- US Department of Justice, “Justice Department Intervenes in xAI Lawsuit Challenging Colorado’s ‘Algorithmic Discrimination’ Law,” press release, 24 April 2026 (DOJ intervention, Dhillon quote, Equal Protection argument). justice.gov
- Colorado General Assembly, “SB24-205: Consumer Protections for Artificial Intelligence,” enacted 17 May 2024 (bill text, high-risk definition, reasonable care standard, AG reporting). leg.colorado.gov
- Colorado General Assembly, “Fiscal Note, SB24-205,” 12 April 2024 (USD 20,000 per violation penalty structure). leg.colorado.gov
- Bloomberg, “DOJ Joins Musk’s xAI Suit Against Colorado AI Discrimination Law,” 24 April 2026 (mainstream validation, case context). bloomberg.com
- Axios, “Justice Department joins xAI challenge to Colorado AI law,” 24 April 2026 (additional reporting, political context). axios.com
- JURIST, “Colorado sued by xAI over supposed constitutional violations in new AI bill,” 11 April 2026 (xAI original filing, First Amendment argument, timeline). jurist.org