The European Union’s Artificial Intelligence Act is the world’s first comprehensive AI regulation, and it does not stop at Europe’s borders. Australian businesses that touch EU customers, process EU data, or sell products into EU markets are likely in scope. High-risk AI obligations apply from 2 August 2026, and the window for preparation is narrowing. For Australian SMEs and mid-market businesses without dedicated compliance teams, that deadline is closer than it looks.
It’s Not Just a Europe Problem
Many Australian businesses assume the EU AI Act does not apply to them. That assumption is often wrong.
Article 2 of the AI Act makes its extraterritorial reach explicit. It applies to providers placing AI systems on the EU market regardless of where they are headquartered. It also captures providers and deployers of AI systems located in third countries, including Australia, where the output produced by the system is used inside the EU. That is a broad net. An analytics tool generating a report for an EU-based client triggers coverage. So does a recruitment platform screening candidates in Germany, or a SaaS product serving customers in France.
Non-EU providers of high-risk AI systems must appoint an authorised representative within the EU before making their systems available on the EU market. This is a binding legal requirement with a hard deadline. The same applies to providers of general-purpose AI (GPAI) models under Article 54.
The Scale of Australia’s EU Trade Exposure
Australia’s commercial relationship with the EU is substantial. In 2023-24, two-way goods and services trade between Australia and the EU totalled AUD$108 billion, representing 8.5% of Australia’s total trade (DFAT, Top 15 Trading Partners 2023-24). Australian goods exports to the EU were worth AUD$25.1 billion, while services exports added another AUD$8.6 billion in 2024 (DFAT, Australia-EU FTA Fact Sheet). The EU is Australia’s third-largest two-way trading partner and sixth-largest goods export destination.
The EU is also Australia’s second-largest source of foreign investment. That investment relationship means many Australian businesses operate within EU supply chains, use EU-origin data, or serve EU-based parent companies, all of which are potential triggers for AI Act obligations.
Total trade in goods between the EU and Australia was valued at €49.4 billion in 2024 from the EU’s perspective, with services adding another €38.1 billion (European Commission, EU-Australia Trade Relations). Any Australian company operating within this trade corridor and deploying AI needs to assess its exposure.
The Timeline: What’s Already Live and What’s Coming
The AI Act uses a phased rollout. Several obligations are already enforceable.
| Date | Milestone | Status |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Active |
| 2 February 2025 | Prohibited AI practices banned; AI literacy obligations apply | Active |
| 2 August 2025 | GPAI model obligations; governance structures operational | Active |
| 2 August 2026 | High-risk AI (Annex III) obligations; financial penalties apply | 5 months away |
| 2 August 2027 | AI embedded in regulated products (Annex I); full enforcement | Future |
Prohibited AI practices, including social scoring, subliminal manipulation, and workplace emotion inference, have been banned since February 2025. Any organisation still using these practices is already non-compliant.
GPAI obligations, including technical documentation, copyright compliance, and training data summaries, became applicable on 2 August 2025. Models already on the market before that date have until August 2027 to comply.
What Counts as High-Risk AI
The Act classifies AI systems into four risk tiers: unacceptable (banned), high-risk, limited risk, and minimal risk. High-risk is where most of the compliance burden sits.
Annex III lists eight categories of AI use cases automatically considered high-risk:
- Biometrics: remote biometric identification, emotion recognition
- Critical infrastructure: AI managing energy, transport, water, or digital infrastructure
- Education: admissions decisions, exam scoring, learning assessment
- Employment and HR: recruitment screening, candidate scoring, performance monitoring, promotion decisions
- Essential services: credit scoring, insurance risk assessment, access to public benefits
- Law enforcement: predictive policing, evidence evaluation
- Migration and border control: asylum evaluation, visa processing, migration risk prediction
- Justice and democracy: legal decision-making, tools affecting elections
For Australian businesses, the employment/HR and essential services categories are the most likely triggers. AI-powered recruitment software, credit assessment tools, or insurance pricing models sold into EU markets are presumptively high-risk.
An Annex III system can be exempted from high-risk classification only under narrow conditions, and any system that performs profiling of natural persons is always high-risk, regardless of exemptions (EU AI Act, Article 6).
What High-Risk AI Providers Must Do
Australian providers of high-risk AI systems will need to implement:
- Risk management system: systematic identification, analysis, and mitigation of risks across the AI system lifecycle
- Data governance: ensuring training, validation, and testing data is high-quality, representative, and bias-mitigated
- Technical documentation: comprehensive documentation per Annex IV, including system architecture, training methodologies, data lineage, and intended purpose
- Human oversight: design features enabling natural persons to effectively oversee the system during use, including the ability to understand limitations, detect anomalies, and override outputs
- Accuracy, robustness, and cybersecurity: ongoing performance and security requirements throughout the system lifecycle
- Logging and traceability: automatic recording of events throughout the system lifecycle
- Conformity assessment: mandatory pre-market assessment under Article 43, which may involve third-party certification by a Notified Body
- CE marking and EU database registration: required before placement on the market
This is a 12-18 month effort for most organisations. With the August 2026 deadline five months away, organisations that have not started are already behind.
GPAI: What Foundation Model Providers Must Do
Providers of general-purpose AI models (foundation models, large language models, and similarly capable systems) placed on the EU market are subject to Chapter V of the AI Act.
All GPAI providers must:
- Maintain and provide technical documentation
- Implement an EU copyright compliance policy
- Publish a sufficiently detailed summary of training content
- Label AI-generated content (for generative AI systems)
- Provide downstream providers with information needed for their own regulatory compliance
GPAI models with systemic risk, defined as those trained with more than 10²⁵ FLOPs or designated by the European Commission, face additional requirements: adversarial testing using standardised methodologies, incident reporting to the AI Office without undue delay, and cybersecurity measures for the model and infrastructure.
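For providers trying to gauge whether a model might cross that threshold, the sketch below applies the widely used 6 × parameters × training-tokens approximation of training compute. The heuristic and the example figures are illustrative assumptions; the Act itself specifies only the 10²⁵ FLOP threshold (or Commission designation) as the trigger.

```python
# Rough sketch: does an estimated training run cross the EU AI Act's 10^25 FLOP
# systemic-risk threshold? The 6 * parameters * tokens rule of thumb is an
# industry heuristic, not something the Act defines, and the figures below
# are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption of systemic risk under the Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the common 6ND heuristic."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")               # ~6.3e24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```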
Non-EU GPAI providers must designate an authorised representative in the EU by written mandate. The GPAI Code of Practice, endorsed by the European Commission and Member States, provides a voluntary pathway for demonstrating compliance.
The Penalties
The EU AI Act’s penalty regime is deliberately severe.
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover (whichever is higher) |
| Breach of high-risk or other obligations | €15 million or 3% of global annual turnover |
| Incorrect or misleading information to authorities | €7.5 million or 1% of global annual turnover |
| GPAI model violations | €15 million or 3% of global annual turnover |
For SMEs and start-ups, the same percentage thresholds apply but the lower of the two amounts is used rather than the higher. EU Member States are responsible for enforcement, coordinating through the European AI Board to ensure consistent interpretation.
These penalties apply from 2 August 2026 for most provisions. For a mid-sized Australian SaaS company with AUD$50 million in global revenue, a 7% penalty translates to roughly AUD$3.5 million under the SME cap, before legal costs or remediation; businesses too large to qualify as SMEs face the higher of the percentage amount and the fixed ceiling.
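To make that arithmetic concrete, here is a minimal sketch of the penalty-cap calculation under the standard rule (higher of the two amounts) and the SME rule (lower of the two). The tier figures come from the Act; the AUD-to-EUR conversion rate and company turnover are illustrative assumptions.

```python
# Minimal sketch of the EU AI Act penalty-cap arithmetic. Tier amounts are from
# the Act; the exchange rate and turnover figures are illustrative assumptions.

def penalty_cap_eur(turnover_eur: float, fixed_cap_eur: float, pct: float,
                    is_sme: bool = False) -> float:
    """Maximum fine: the higher of the two amounts for most companies,
    the lower of the two for SMEs and start-ups."""
    pct_amount = pct * turnover_eur
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)


# Hypothetical: AUD$50 million global turnover, assumed AUD/EUR rate of 0.60
turnover_eur = 50_000_000 * 0.60

# Prohibited-practice tier: EUR 35 million or 7% of global annual turnover
print(penalty_cap_eur(turnover_eur, 35_000_000, 0.07, is_sme=True))   # EUR 2.1m (~AUD$3.5m)
print(penalty_cap_eur(turnover_eur, 35_000_000, 0.07, is_sme=False))  # EUR 35m
```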
Where Australia Stands: No Equivalent Protection
Australia has no comparable AI regulation, and the gap is widening. The Australian Government released its Voluntary AI Safety Standard in September 2024 with ten guardrails covering accountability, risk management, transparency, and human oversight. In October 2025, the National AI Centre replaced that with updated Guidance for AI Adoption, condensing the ten guardrails into six essential practices.
The December 2025 National AI Plan abandoned the proposed mandatory guardrails entirely. The Government opted to rely on existing technology-neutral laws and sector regulators, supported by a new AUD$30 million AI Safety Institute to monitor risks and advise on gaps.
Australia currently has no mandatory AI-specific compliance framework. Businesses operating purely domestically face voluntary guidance at most. Those with EU market exposure face the full weight of the EU AI Act, with no domestic regulatory infrastructure to build from.
| Feature | Australia (Voluntary) | EU AI Act (Mandatory) |
|---|---|---|
| Enforcement | Voluntary guidance; existing laws | Dedicated AI Office; financial penalties |
| Scope | Deployers and developers (voluntary) | Providers, deployers, importers, distributors |
| Conformity assessment | Not required | Mandatory for high-risk AI |
| Penalties | None (AI-specific) | Up to €35M or 7% of global turnover |
| Timeline | Immediate (voluntary adoption) | Phased: Feb 2025 to Aug 2027 |
Compliance Checklist for August 2026
Australian businesses with EU exposure should be working through these steps now:
- Audit the AI inventory: map every AI system the organisation develops, deploys, or integrates, and identify any that touch EU markets, customers, or data (a minimal inventory sketch follows this checklist)
- Classify risk: assess each system against the EU AI Act’s four risk tiers, paying close attention to Annex III high-risk categories (employment, credit, biometrics, infrastructure)
- Check for prohibited practices: confirm the organisation is not using social scoring, subliminal manipulation, real-time remote biometric identification (outside narrow law enforcement exceptions), or workplace emotion inference
- Assess GPAI exposure: if the organisation provides or integrates foundation models available in the EU, confirm compliance with Chapter V documentation, copyright, and transparency requirements
- Implement high-risk requirements: for any high-risk system, build out risk management, data governance, technical documentation, human oversight, and logging systems
- Prepare for conformity assessment: engage with the conformity assessment process under Article 43, including identifying whether third-party certification is required
- Appoint an EU authorised representative: mandatory for non-EU providers of high-risk AI and GPAI models
- Register in the EU database: high-risk AI systems must be registered before market placement
- Train staff: AI literacy obligations are already live; ensure staff understand the AI systems they use and deploy
- Monitor regulatory updates: track the European Commission’s implementing acts, harmonised standards (including prEN 18285), and codes of practice as they develop
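As a starting point for the audit and classification steps above, the sketch below shows one way to structure an AI inventory with EU-exposure and Annex III flags. The fields, category names, and example systems are illustrative assumptions, not a legal classification tool; proper classification still needs legal review.

```python
# Illustrative sketch of an AI inventory with EU-exposure and Annex III flags.
# The fields and example systems are assumptions; this is not a legal
# classification tool.

from dataclasses import dataclass

ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}


@dataclass
class AISystem:
    name: str
    purpose: str
    eu_exposure: bool               # EU customers, EU data, or outputs used in the EU
    annex_iii_category: str | None = None
    performs_profiling: bool = False

    def likely_high_risk(self) -> bool:
        """Presumptively high-risk if EU-exposed and in an Annex III category."""
        return self.eu_exposure and self.annex_iii_category in ANNEX_III_CATEGORIES


inventory = [
    AISystem("CV screening tool", "shortlists candidates for a German client",
             eu_exposure=True, annex_iii_category="employment", performs_profiling=True),
    AISystem("Internal policy chatbot", "answers staff questions",
             eu_exposure=False),
]

for system in inventory:
    print(f"{system.name}: likely high-risk = {system.likely_high_risk()}")
```

An inventory structured along these lines also makes the later steps, such as scoping conformity assessments, appointing an EU authorised representative, and registering systems in the EU database, far easier to plan.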
Getting legal advice from someone who understands both jurisdictions is not optional. The interplay between EU AI Act obligations and existing Australian privacy, consumer, and anti-discrimination law creates compliance complexity that generic guidance alone will not resolve.
The Bottom Line
For any Australian business whose AI systems or outputs reach EU markets, the EU AI Act is a market access problem, not a distant regulatory concern.
With AUD$108 billion in two-way trade at stake, the number of affected Australian businesses is substantial. There is no domestic compliance framework that prepares organisations for what the EU demands, and the August 2026 deadline for high-risk AI is five months away. Organisations that have not started face a narrowing window to get compliant.
Related reading: What is an AI governance framework? | ASIC’s AI governance review: what it found
Stay across AI governance developments in Australia and globally. Subscribe to the Shadow AI Watch newsletter.
Sources
- DFAT: Australia-EU FTA Fact Sheet
- DFAT: Top 15 Trading Partners 2023-24
- European Commission: EU-Australia Trade Relations
- European Commission: AI Act
- EU AI Act: Article 6: Classification Rules for High-Risk AI
- EU AI Act: Article 14: Human Oversight
- Glocert International: EU AI Act Timeline
- Glocert International: GPAI & Foundation Model Compliance
- William Fry: Extraterritorial Reach of the AI Act
- William Fry: Is Your Annex III AI System High-Risk?
- Arnold & Porter: EU AI Act Compliance Obligations
- Naaia: High-Risk AI Act List
- Naaia: GPAI Obligations
- Legal Nodes: EU AI Act 2026 Updates
- Dataiku: EU AI Act High-Risk Requirements
- VDE: Conformity Assessment of High-Risk AI
- Anekanta: High-Risk AI Conformity Assessments
- AI Act Service Desk: Article 11: Technical Documentation
- Captain Compliance: EU AI Act Penalties
- Aligne AI: EU AI Act Penalties
- White & Case: EU AI Act Becomes Law
- White & Case: EU AI Act Extraterritorial Scope
- White & Case: Australia AI Guidance
- Stephenson Harwood: EU AI Act Enforcement
- Baker McKenzie: EU Regulation on AI
- Department of Industry: Voluntary AI Safety Standard
- ABC News: National AI Plan
- Piper Alderman: Europe’s AI Act and Australian Businesses
- Piper Alderman: Australia’s National AI Plan
- MinterEllison: EU Regulates AI: Will Australia Follow?
- ACC: EU AI Act Practical Considerations
- European Commission: EU GPAI Rules
- SafeAI-Aus: AI Australian Legislation
- Ashurst: Australia: New AI Safety Guardrails
- JacMac: EU AI Act: What Australian Businesses Need to Know
- Gadens: Australia’s Evolving AI Governance