“A loophole” is how MEP Sergey Lagodinsky of Germany’s Greens described it to Tech Policy Press. “A weak spot.” He was talking about the EU’s proposed delay to its own AI Act, and the non-retroactivity clause that sits underneath it. The combination creates an incentive Lagodinsky spelt out plainly: companies now have a reason “to put things on the market before the Act enters into force, and especially put on the market AI systems which are high risk or the more risky ones, because those are the ones that have most obligations.”
For organisations that have been using the EU AI Act’s August 2026 high-risk deadline as their governance benchmark (Shadow AI Watch has cited it in multiple articles), the ground has shifted. The European Parliament and Council have both adopted positions that would push that deadline to December 2027 or August 2028, depending on the type of system. Trilogue negotiations started on 26 March 2026, with a political agreement targeted for 28 April. If agreed, the delay would take legal effect before the original deadline arrives.
What the Digital Omnibus actually proposes
The EU AI Act entered into force on 1 August 2024, with obligations phased in over time. Prohibited AI practices and AI literacy obligations took effect in February 2025. Rules for general-purpose AI models became applicable in August 2025. The high-risk AI system obligations, covering employment, credit, education, law enforcement, critical infrastructure and more, were due to apply from 2 August 2026.
In November 2025, the European Commission published its Digital Omnibus package, proposing to link the start date of high-risk rules to the availability of harmonised standards and compliance tools, rather than a fixed calendar date. The Commission’s stated rationale was practical: the standards bodies (CEN-CENELEC Joint Technical Committee 21) had not finished drafting the harmonised standards, national authorities in most member states had not been stood up, and guidance was incomplete. Enforcing the August 2026 deadline against that backdrop would mean holding companies to standards that did not yet exist in final form.
Both the European Council (13 March 2026) and the European Parliament (26 March 2026) have now adopted their negotiating positions. They are closely aligned on the new fixed dates: 2 December 2027 for standalone high-risk AI systems listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice, border management), and 2 August 2028 for high-risk AI systems embedded in regulated products under Annex I (medical devices, machinery, aviation systems, toys, connected vehicles). Maximum fines remain unchanged at up to EUR 35 million or 7% of global annual turnover for serious infringements.
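The penalty ceiling is worth spelling out, because the two caps interact: under the Act, the applicable maximum for serious infringements is the higher of the fixed amount and the turnover-based figure, so the 7% limb dominates for any firm with global annual turnover above EUR 500 million. A minimal sketch of that arithmetic (the turnover figures below are invented for illustration):

```python
def max_penalty_eur(global_annual_turnover_eur: int) -> int:
    """Maximum fine for serious infringements under the AI Act's
    higher-of-two cap: EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)

# For a firm with EUR 200m turnover, the fixed cap dominates
# (7% would be only EUR 14m):
print(max_penalty_eur(200_000_000))    # 35000000

# For a firm with EUR 1bn turnover, the turnover-based cap dominates:
print(max_penalty_eur(1_000_000_000))  # 70000000
```

Integer arithmetic is used here to keep the example exact; a real penalty calculation would of course turn on the infringement category and the regulator's discretion, not a formula.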
The non-retroactivity clause is the real issue
The delay itself is defensible. If the standards are not ready and the regulators are not in place, enforcement against a deadline nobody can meet is performative. But the delay interacts with a structural feature of the AI Act that predates the Omnibus: Article 111’s non-retroactivity provision. Under this article, AI systems placed on the market before the new deadlines do not need to comply unless they are “significantly modified.” Systems intended for use by public authorities have a separate sunset date of 31 December 2030.
Laura Caroli, a former co-negotiator of the AI Act, pointed to hiring systems as the clearest example. AI used in recruitment is explicitly classified as high-risk under the Act. But if a hiring system is placed on the market before 2 December 2027, “it may remain outside the AI Act indefinitely, unless it is substantially altered after that date,” she told Tech Policy Press. Bram Vranken, researcher at Corporate Europe Observatory, drew the same conclusion: “A large part of high-risk AI systems that have been placed on the market before December 2027 will never have to comply with the rules.”
The combination of delay and non-retroactivity creates a “race to market” dynamic. Companies deploying high-risk AI in hiring, credit scoring, education assessment, insurance pricing, or law enforcement have a clear incentive to get systems operational before December 2027. Those systems would then sit outside the AI Act’s high-risk obligations unless they are substantially changed. The more complex and costly the compliance requirements, the stronger the incentive to deploy early and avoid them.
Who pushed for the delay, and who opposed it
The Jacques Delors Centre published a detailed critique in late March arguing that the Digital Omnibus is “heading in the wrong direction.” The centre’s analysis found that the package was advanced through an omnibus process without a comprehensive impact assessment, which is “particularly problematic where [the proposals] could weaken existing fundamental rights safeguards.” The critique also noted that the cost-savings estimates used to justify the simplification agenda were “overly simplistic and one-sided.”
An analysis by Corporate Europe Observatory and LobbyControl, cited in the Tech Policy Press investigation, found that 69% of relevant European Commission meetings in 2025 were with business groups, compared with 16% with NGOs. The Digital Omnibus was framed as “simplification” and “competitiveness,” but critics argue it reflects industry lobbying more than a balanced assessment of risks and benefits.
On the other side, industry bodies have argued that the delay is the minimum necessary response to an implementation timeline that was never realistic. Addleshaw Goddard noted that CEN-CENELEC communications indicate harmonised standards may not be available before late 2026, and that companies consistently report needing a minimum of 12 months to achieve compliance once standards are final. Doug Barbin, president of compliance firm Schellman, warned CIOs that “there’s a real procedural risk: if Council and Parliament negotiations drag past August 2026, the original deadlines stay on the books. CIOs who’ve been sitting on their hands are the most exposed.”
What this means for organisations outside the EU
Shadow AI Watch has cited the August 2026 EU AI Act high-risk deadline in multiple articles as a governance benchmark. That benchmark is moving. But the direction of travel has not changed, and the operational advice remains the same: build the governance now.
The delay is breathing space, not a reprieve. Brian Levine, executive director of FormerGov, put it bluntly: the delay “leaves CIOs in a regulatory limbo, but it doesn’t change the underlying reality: enterprises still own the risk their AI systems create.” The obligations, documentation requirements, and impact assessments are all unchanged. Only the enforcement timeline has moved. Organisations that use the extra time to build governance will be in a stronger position than those that use it to delay and end up scrambling in 2027.
Non-EU firms selling AI-enabled products into the EU should not treat this as permission to rush to market. The non-retroactivity clause creates a technical compliance gap, but it does not eliminate commercial risk. Enterprise customers conducting vendor due diligence will not accept “we deployed before the deadline” as a governance answer. Procurement teams, insurers, and internal audit functions are already asking AI governance questions that track the substance of the AI Act, regardless of whether enforcement has commenced. A hiring system that was deployed in 2027 to dodge the deadline will still face the same questions from every customer that takes AI risk seriously.
Australian organisations should not recalibrate their governance timelines. The December 2026 automated decision-making transparency requirement under the Privacy and Other Legislation Amendment Act 2024, with civil penalties for serious breaches reaching up to AUD 50 million, remains on track. NSW’s Work Health and Safety Amendment (Digital Work Systems) Act 2026 is proceeding. Neither has any connection to the EU’s Omnibus process. If anything, the EU delay makes the Australian deadlines comparatively more urgent, because organisations that were using the EU timeline as their pacing mechanism now need a different one. December 2026 in Australia is twelve months earlier than December 2027 in Brussels.
What organisations should do with the extra time
Use the delay to build, not to wait. The compliance infrastructure required under the AI Act, including risk management systems, technical documentation, bias testing, human oversight processes, and post-market monitoring, takes time to implement. The extra 16 months is an opportunity to do the work properly rather than rushing through a checkbox exercise in 2026. Organisations that invest now will have systems that can withstand scrutiny from regulators, customers, and courts.
Do not rush non-compliant high-risk systems to market. The non-retroactivity gap is real, but gaming it carries reputational and commercial risks that outweigh the compliance savings. A hiring system deployed in November 2027 to avoid December 2027 obligations will still face discrimination claims, customer audits, and reputational damage if it produces biased outcomes. The AI Act is not the only source of legal liability for high-risk AI.
Monitor the trilogue closely. The last trilogue session is planned for 28 April 2026. If political agreement is not reached before August, the original deadlines remain in force. Doug Barbin’s warning is worth repeating: “The organisations investing in governance infrastructure now won’t be the ones in crisis mode later. This is extra time. Use it.”
The direction has not changed
The EU AI Act delay is a concession to implementation reality, not a retreat from the regulatory model. The high-risk categories, documentation requirements, enforcement architecture, and maximum fines all remain. The only thing that has moved is the calendar. Organisations that treat the delay as confirmation that AI governance can wait will find themselves on the wrong side of the same obligations 16 months from now, with less time and more pressure than they have today.
Sources
- Tech Policy Press, “EU’s AI Act Delays Let High-Risk Systems Dodge Oversight,” 2 April 2026 (named voices: MEP Sergey Lagodinsky, Laura Caroli, Bram Vranken). techpolicy.press
- Council of the European Union, “Council agrees position to streamline rules on Artificial Intelligence,” press release, 13 March 2026. consilium.europa.eu
- Jacques Delors Centre, “The EU’s Digital and AI Omnibus is Heading in the Wrong Direction,” March 2026. delorscentre.eu
- European Parliament Think Tank, “Parliament’s emerging position on the Digital Omnibus on AI,” March 2026. europarl.europa.eu
- CIO.com, “European Parliament votes to delay EU AI Act implementation,” March 2026 (named voices: Doug Barbin, Brian Levine, Jason Hookey). cio.com
- Addleshaw Goddard, “EU Digital Omnibus on AI update: the Council and Parliament agreed positions,” April 2026. addleshawgoddard.com
- Lewis Silkin, “The latest on the Digital Omnibus on AI,” April 2026. lewissilkin.com
- European Commission, AI Act implementation timeline and Digital Omnibus FAQ. digital-strategy.ec.europa.eu