EU Member States were required to designate national AI Act enforcement authorities by 2 August 2025. As of March 2026, the European Commission’s official register lists just eight single points of contact out of 27. High-risk AI obligations begin to apply from 2 August 2026. For businesses selling into the EU, the compliance target is moving while the enforcement infrastructure is still being assembled.
The Enforcement Architecture Most Businesses Have Not Read
The AI Act, adopted in 2024, regulates AI systems using a risk-based approach: prohibited, high-risk, transparency-risk, and minimal-risk. It also regulates general-purpose AI (GPAI) models separately, including generative models such as GPT-5, Gemini 3, and Mistral Large 3. GPAI models that meet a defined capability threshold, or that the Commission designates as posing systemic risk, carry additional obligations, including model evaluation and risk assessment.
Enforcement is split. The AI Act establishes what the European Parliamentary Research Service (EPRS) describes as a “hybrid enforcement model” with both centralised and decentralised components. National authorities are responsible for enforcing rules on AI systems, including high-risk systems used in areas such as employment, credit scoring, and law enforcement. The European Commission’s AI Office has sole authority to supervise and enforce rules on GPAI models (EPRS, 18 March 2026).
This dual structure means a business deploying a high-risk AI system will deal with its national market surveillance authority. A business providing or fine-tuning a GPAI model will deal with the AI Office in Brussels. A business doing both will answer to both. EPRS researchers note that the decentralised pattern remains dominant, “potentially leading to challenges around uneven enforcement in the EU.”
Eight Out of Twenty-Seven
Member States were required to designate at least one notifying authority and at least one market surveillance authority by 2 August 2025. The market surveillance authority also functions as the national single point of contact. Seven months past that deadline, the Commission’s official list includes just eight single points of contact out of 27 Member States (EPRS, March 2026). The EPRS does not name the eight, but the Commission maintains a public register.
Two types of national authority are required. The notifying authority sets up and carries out procedures for assessing high-risk AI systems before they enter the market. It does not conduct assessments itself but designates independent conformity assessment bodies to do so. Those bodies, once designated, become “notified bodies” and must be independent of the AI system’s provider and operator. The market surveillance authority performs checks after AI systems have been placed on the market, with powers to request documentation, evaluate systems, and impose fines.
For businesses, the practical implication is straightforward: without designated authorities, there is no body to conduct conformity assessments of high-risk AI systems, accept compliance documentation, or handle incident reports when those obligations activate in August 2026.
What Incomplete National Setups Mean in Practice
The AI Act’s enforcement model was designed to mirror the approach used for product safety regulation under the EU’s market surveillance framework. That model relies on Member States having functioning authorities with staff, technical expertise, and operational budgets. Where those authorities do not yet exist, three practical risks emerge.
Inconsistent supervision. Businesses operating across multiple EU markets may face active enforcement in one Member State and a regulatory vacuum in another. The EPRS analysis notes that the decentralised pattern could lead to uneven enforcement. For SMEs, this creates planning uncertainty: building compliance documentation to one Member State’s expectations carries no guarantee it will satisfy another’s. The AI Act does not include a GDPR-style one-stop-shop mechanism for cross-border AI systems, meaning businesses may need to engage with multiple national authorities simultaneously.
Forum shopping risk. Where enforcement is weaker or absent, providers may gravitate toward those jurisdictions for initial market entry. This is a recognised pattern from early GDPR enforcement, where some national data protection authorities were significantly slower to act than others. The AI Act’s European AI Board is tasked with coordination and guidance, but its functions are advisory rather than binding.
Conformity assessment bottleneck. High-risk AI systems in certain categories (including biometric identification and critical infrastructure) require third-party conformity assessment before market placement. That assessment is conducted by notified bodies designated by the notifying authority. Where notifying authorities have not been established, the pipeline for designating conformity assessment bodies has not started. Legal Nodes noted in February 2026 that businesses deploying high-risk AI systems should be preparing conformity assessment documentation now, regardless of whether their national infrastructure is ready, because the August 2026 deadline applies to the obligation itself, not to the enforcement apparatus.
The EU Level: What Is Already Operating
While national setups lag, the centralised enforcement infrastructure is more advanced. The AI Office within the Commission is operational and has sole authority over GPAI model compliance. The European AI Board, composed of one representative per Member State, is established and functioning in an advisory and coordinating capacity. A scientific panel of independent experts advises the AI Office and, on request, national authorities. An AI advisory forum comprising industry, SMEs, civil society, and academia provides technical expertise to the Board and Commission.
The Commission published a second draft of its Code of Practice on AI-generated content under Article 50 on 5 March 2026, proposing a two-layered marking approach combining secured metadata with watermarking and a potential standard EU icon for AI-generated content. Simmons & Simmons reported that finalisation is expected by June 2026. Ireland has opened consultation on its AI Act implementation bill, signalling that at least some Member States are moving, even if the Commission’s register does not yet reflect it (Simmons & Simmons, 17 March 2026).
The Digital Omnibus: Further Centralisation Ahead
In November 2025, the Commission proposed a “digital omnibus on AI”: a set of amendments that would further centralise enforcement. If adopted, the AI Office would take responsibility for AI systems integrated into very large online platforms (VLOPs) and very large search engines (VLOSEs) as defined under the Digital Services Act, as well as AI systems based on GPAI models where the system and model share a provider (EPRS, 18 March 2026).
For businesses, this means a provider offering both a GPAI model and an AI system built on it could face enforcement entirely from the AI Office rather than from a national authority. The direction of travel is toward more centralised oversight for the largest and most consequential AI deployments, with national authorities retaining jurisdiction over domain-specific high-risk applications.
What Businesses Should Do Without Waiting for Enforcement
The AI Act’s obligations apply on their stated dates regardless of whether enforcement authorities are ready. Businesses that delay compliance because their national government has not named an authority will still be in breach when the deadlines arrive. The pragmatic approach is to treat the Act like early GDPR: build to the strictest plausible standard and assume uneven national practice.
Classify AI systems now. Map every AI system in use or development against the Act’s risk categories. High-risk systems listed in Annex III (including AI used in employment, credit scoring, education, and law enforcement) will require conformity assessment, technical documentation, risk management systems, and human oversight measures from August 2026.
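For teams that track their AI estate in code, the classification exercise above can be sketched as a simple inventory keyed to the Act’s risk categories. This is a hypothetical illustration only: the system names and category assignments are invented examples, not legal determinations, and actual classification still requires assessing each system against Annex III itself.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """The AI Act's four risk tiers for AI systems."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"            # Annex III areas, e.g. employment, credit scoring
    TRANSPARENCY = "transparency-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystem:
    name: str
    purpose: str
    category: RiskCategory

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("cv-screener", "ranks job applicants", RiskCategory.HIGH_RISK),
    AISystem("chat-assistant", "customer support chatbot", RiskCategory.TRANSPARENCY),
    AISystem("spam-filter", "filters inbound email", RiskCategory.MINIMAL),
]

# High-risk entries drive the August 2026 documentation workload:
# conformity assessment, technical documentation, risk management, human oversight.
high_risk = [s.name for s in inventory if s.category is RiskCategory.HIGH_RISK]
print(high_risk)  # → ['cv-screener']
```

Keeping the register in version control gives a dated audit trail of when each system was classified, which is useful evidence if a market surveillance authority later requests documentation.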
Build documentation to the strictest standard. Conformity assessment requirements are defined in the Act itself, not in national implementing legislation. Technical documentation, data governance records, and risk assessments can be prepared now. Legal Nodes recommended in February 2026 that businesses start this preparation regardless of national readiness.
Map dual jurisdiction exposure. If your business both deploys high-risk AI systems and integrates or fine-tunes GPAI models, you may face both a national market surveillance authority and the AI Office. Understanding which obligations fall where is essential before enforcement begins.
Track which Member States have designated authorities. The Commission maintains a public list of single points of contact. Businesses selling into multiple EU markets should monitor this list and prioritise compliance engagement with jurisdictions that have operational enforcement bodies, as those markets are likely to see enforcement action first.
The AI Act’s enforcement gap is a planning problem, not a compliance holiday. Businesses that treated GDPR’s two-year implementation period as free time learned the cost when enforcement caught up. The AI Act’s August 2026 deadline for high-risk obligations is five months away. Nineteen Member States still have not publicly named an authority. The obligation does not wait for the enforcer.
Related reading: EU AI Act Compliance Tools Move to Two-Layer Transparency and Zero-Storage Design | AI Compliance Deadlines 2026 | Does the EU AI Act apply to Australian businesses?