Two major regulatory deadlines land in the second half of 2026. Both have direct implications for how organisations use AI tools. Here is what they require and what needs to happen before they arrive.

Most conversations about AI regulation have focused on what the rules might eventually look like. That phase has ended. The EU AI Act and amendments to the Australian Privacy Act both have fixed compliance dates in 2026, with penalties large enough to command board-level attention.

EU AI Act: 2 August 2026

The EU AI Act is the most comprehensive AI regulation passed anywhere in the world. It applies to any organisation that deploys or develops AI systems used within the European Union, regardless of where the organisation is based.

The August 2026 deadline covers high-risk AI systems. These are systems used in areas like employment decisions, credit scoring, law enforcement, and critical infrastructure. Providers and deployers of high-risk systems must meet a detailed set of requirements including risk management, data governance, transparency, human oversight, and technical documentation.

But the Act reaches further than high-risk systems alone. Organisations using general-purpose AI tools, including commercial products like ChatGPT, Claude, and Gemini, have transparency obligations. If employees use AI outputs in decision-making that affects individuals, the organisation may need to disclose that use and document how the tools are governed.

Penalties are substantial. Fines can reach EUR 15 million or 3% of annual global turnover, whichever is higher. For SMEs, the regulation includes some proportional adjustments, but the core obligations still apply.
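For a sense of how the "whichever is higher" mechanics play out, here is a quick illustrative calculation in Python; the turnover figure is hypothetical:

    # Illustrative only: the ceiling for most EU AI Act breaches is the higher
    # of EUR 15 million or 3% of annual global turnover.
    def fine_ceiling_eur(annual_global_turnover: float) -> float:
        return max(15_000_000, 0.03 * annual_global_turnover)

    print(fine_ceiling_eur(2_000_000_000))  # hypothetical EUR 2bn turnover -> 60,000,000.0

For any organisation with global turnover above EUR 500 million, the percentage figure is the one that applies.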

The practical challenge is that most organisations have not mapped their AI usage. You cannot comply with transparency and documentation requirements if you do not know which AI tools are in use, who is using them, and what data is being processed. Shadow AI, by definition, is invisible to governance frameworks.

Australian Privacy Act: 10 December 2026

Amendments to the Australian Privacy Act introduce new requirements around automated decision-making, including decisions informed or made by AI tools.

From December 2026, APP entities (organisations covered by the Australian Privacy Principles) must include information about automated decision-making in their privacy policies. If an organisation uses AI tools in processes that substantially affect individuals, such as hiring, customer assessments, or service eligibility, that use must be disclosed.

Civil penalties for serious or repeated interference with privacy can reach AUD 50 million. The Office of the Australian Information Commissioner (OAIC) has signalled it will take enforcement seriously, particularly where organisations fail to implement reasonable safeguards.

The implications for shadow AI are direct. If an employee uses an unapproved AI tool to help draft a hiring recommendation or assess a client application, that use can fall within the amended Act's automated decision-making provisions. The organisation is responsible even if it never authorised the tool.

What compliance actually requires

Both pieces of legislation share a common practical requirement: organisations need to know how AI is being used across their operations.

That means answering questions like: which AI tools are employees using? What data is being entered into them? Are any decisions influenced by AI outputs? Is sensitive information, such as personal data, client details, or proprietary material, being shared with external AI platforms?
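To make that concrete, here is a minimal sketch of what an internal AI usage register might record, written in Python purely for illustration. The schema and field names are assumptions; neither regulation prescribes a particular format.

    # Illustrative only: a minimal AI usage register entry. Neither the EU AI
    # Act nor the Privacy Act prescribes a schema; these fields are assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIToolRecord:
        tool_name: str                  # e.g. "ChatGPT", "Claude", "Gemini"
        approved: bool                  # sanctioned under internal policy?
        business_units: list[str]       # who is using it
        data_categories: list[str]      # e.g. "personal data", "client details"
        influences_decisions: bool      # do outputs feed decisions about individuals?
        disclosure_documented: bool     # is the use disclosed, e.g. in a privacy policy?

    register = [
        AIToolRecord("ChatGPT", False, ["Recruitment"], ["personal data"], True, False),
    ]

    # Entries like this one flag exactly the exposure both regimes target:
    # an unapproved tool, personal data, decision influence, no disclosure.
    gaps = [r for r in register if r.influences_decisions and not r.disclosure_documented]

Even a register this simple answers the questions above, which is more than a policy document alone can do.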

The DTEX/Ponemon 2026 Insider Risk Report found that only 18% of organisations have properly built AI governance into their risk programs. That leaves 82% with a gap between what regulators will expect and what organisations can currently demonstrate.

An Okta survey of Australian security leaders found that 41% say nobody in their organisation owns AI security risk. That ownership gap becomes a compliance gap when regulators start asking who is responsible for AI governance.

The distance between now and ready

Compliance with either regulation requires more than a policy document. It requires visibility into actual AI usage, controls at the point of data entry, documentation of what tools are in use, and evidence of governance measures.
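As one illustration of a control at the point of data entry, here is a hedged sketch of a prompt screen that blocks obvious personal data before it leaves the organisation. The patterns and the forwarding stub are hypothetical placeholders, not a production filter:

    import re

    # Hypothetical patterns for obvious personal data. A real control would rely
    # on a proper data classification service, not a handful of regexes.
    PII_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "tax file number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any personal-data patterns found in the prompt."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

    def send_to_external_ai(prompt: str) -> None:
        findings = screen_prompt(prompt)
        if findings:
            # Block and log rather than forward: the refusal itself becomes
            # part of the audit trail regulators will expect to see.
            raise ValueError(f"Prompt blocked, contains: {', '.join(findings)}")
        print("Prompt cleared for forwarding.")  # stand-in for the real API call

The design point is where the check sits: at the moment data is about to leave the organisation, rather than in a policy employees are trusted to remember.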

For organisations already running enterprise AI licensing with built-in controls, the gap may be manageable. For the majority, particularly smaller organisations relying on employees to self-manage their AI tool usage, the gap is significant.

WalkMe and IDC found that 78% of employees use unapproved AI tools at work. Reco found an average of 269 shadow AI tools per 1,000 employees. Those numbers suggest most organisations will find their actual AI usage is considerably broader than what any policy document covers.

August and December 2026 are enforceable deadlines with defined penalties, not aspirational targets. Compliance work starts with one question that most organisations cannot yet answer: do they know how their business uses AI right now?


Sources: EU AI Act, Australian Privacy Act / OAIC, Lander & Rogers Privacy Act Update, DTEX/Ponemon 2026 Insider Risk Report, Okta Australia AI Governance Survey, WalkMe/IDC AI in the Workplace Survey 2025, Reco State of Shadow AI Report 2025