The Fair Work Commission has released draft AI disclosure rules after AI-generated employment claims drove its total caseload up more than 70% in three years. The draft guidance requires applicants to declare AI use and personally verify all facts. For employers, the real story is not about workers gaming the system. It is about poorly documented dismissals being challenged faster, more professionally, and at lower cost than ever before.

How did the FWC get here?

Until 2023, the Fair Work Commission handled roughly 30,000 matters per year. By 2024–25, official lodgements had reached 44,075 (a figure FWC President Justice Adam Hatcher rounded to 45,000 in his February 2026 speech to the Victorian Bar Association). That is 24% above the five-year average. Unfair dismissal claims alone grew 41% between 2022–23 and 2024–25, and general protections claims grew even faster. The Commission now projects 50,000 to 55,000 total cases for 2025–26.

Hatcher attributed a significant share of the increase to generative AI tools. In his speech, he described testing ChatGPT by providing basic facts about a hypothetical dismissal. The chatbot produced a ready-to-file application, complete with a witness statement containing what Hatcher called a “substantially invented story,” along with inflated compensation estimates. The process took less than ten minutes.

The statistical relationship between hiring, firing and claims brought to the FWC has “broken down in the last couple of years,” Hatcher told the ACS Information Age. Claim volumes are no longer tracking retrenchment rates. They are tracking AI adoption.

What happens when AI drafts the claim?

The accessibility problem cuts both ways. AI makes it trivially easy for a dismissed worker to generate a professional-looking application. It also makes it trivially easy to generate one built on fabricated case law and invented facts.

A Stanford University study found general-purpose AI chatbots hallucinate on legal queries between 69% and 88% of the time. A follow-up study testing dedicated legal AI platforms found the hallucination rate was lower but still significant: between 17% and 34% depending on the platform, even with retrieval-augmented generation designed to ground responses in real case law. Applicants using free public tools, with no legal training to verify the output, are likely facing error rates at the higher end of the general-purpose range.

The Deysel v Electra Lift Co. case illustrates the risk. Branden Deysel used ChatGPT to bring a general protections application against his former employer. The chatbot helped him draft and file the claim. What it failed to flag was the requirement under section 366(1)(a) of the Fair Work Act to lodge within 21 days of dismissal. Deysel filed two and a half years late. The application went nowhere, but it still consumed Commission resources and employer time.

What does the draft guidance note require?

The FWC’s draft guidance note, published 24 March 2026, introduces three requirements for anyone using generative AI to prepare FWC documents:

1. AI disclosure statement. All applicants must declare whether generative AI was used in preparing their application or supporting documents.

2. Personal fact verification. Applicants must confirm they have personally checked the accuracy of all facts, figures and case references in AI-generated material.

3. Hyperlinked citations. Legally qualified agents must include hyperlinks to all case law cited, making fabricated references easier to identify.

The Commission has also issued a strong recommendation against using AI to draft witness statements, given the prevalence of invented facts in AI-generated narratives. Public consultation on the draft runs until 10 April 2026.

The two-sided problem employers are missing

Most of the early commentary has focused on workers misusing AI to flood the tribunal. That framing misses the more consequential half of the story.

AI-generated claims are easier to file, cheaper to produce, and increasingly professional in appearance. This means employers with weak documentation practices are now exposed to challenge at a scale and speed that was not possible two years ago. A worker who might previously have been deterred by the cost and complexity of filing can now have a plausible-looking application ready in minutes.

The quality of that application may be poor. Many will fail on merit, jurisdiction or basic accuracy. But even unmeritorious claims require a response, consume management time, and create legal costs. The first quarter of 2025–26 saw 13,761 new lodgements, a 45% increase on the three-year Q1 average. General protections dismissal applications were up 57% on the same benchmark.

For employers already using AI in their own performance management, scheduling or disciplinary workflows, there is a second exposure. An employer’s use of algorithmic tools to make or inform termination decisions may itself face scrutiny at the Commission, particularly as the NSW WHS Digital Work Systems Act now classifies AI-driven work allocation and performance monitoring as workplace hazards requiring risk assessment.

What should employers do now?

The FWC’s disclosure rules target applicants, not employers. But the governance implications for employers are significant.

Tighten documentation before termination, not after. Every warning letter, performance conversation, investigation step and meeting record should be written as though it will be read by a tribunal member. With AI-generated claims arriving faster and in higher volumes, the window between a dismissal and a challenge is shrinking. Employers relying on informal processes or undocumented conversations are carrying more risk than they were twelve months ago.

Audit AI in your own HR processes. If performance scoring, roster allocation or disciplinary recommendations involve algorithmic tools, assess whether those tools could withstand scrutiny in an unfair dismissal hearing. The NSW digital work systems amendments make this a legal obligation in New South Wales. Expect similar requirements to appear in other jurisdictions as Safe Work Australia considers incorporating digital work systems into the Model WHS Act.

Prepare for the December 2026 Privacy Act deadline. From 10 December 2026, the Privacy and Other Legislation Amendment Act 2024 requires Australian entities using automated decision-making to disclose this in their privacy policies. Penalties for serious breaches can reach A$50 million. The governance work for FWC-readiness and Privacy Act compliance overlaps substantially: know where AI touches employment decisions, document how it is used, and ensure human review sits between the algorithm and the outcome.

The broader regulatory signal

The FWC is not acting in isolation. A joint statement from the Victorian Legal Services Board, the Law Society of NSW and the Legal Practice Board of WA has already warned that lawyers “cannot safely enter confidential, sensitive or privileged client information into public AI chatbots or any other public tools.” The NSW Supreme Court has introduced a practice note on generative AI use. In Victoria, a lawyer has been referred to regulators after submitting AI-generated authorities that included non-existent cases.

The FWC’s draft disclosure rules are a preview of what other Australian tribunals, including VCAT, the Administrative Review Tribunal and state industrial commissions, will likely adopt. The pattern is consistent: mandatory disclosure, personal responsibility for accuracy, and tighter scrutiny of AI-generated citations.

The bottom line

Employers with solid documentation practices are better placed whether or not claims are AI-generated. The FWC’s new rules do not change the law on unfair dismissal. They change the speed, volume and apparent sophistication of claims coming through the door. Organisations that treat every employment decision as potentially reviewable, document accordingly, and keep human judgment at the centre of AI-informed processes will face fewer surprises when the next wave of applications arrives.

Related reading: What does shadow AI cost a business in 2026? | AI compliance deadlines 2026 | Employees still do not know what data they can put into AI tools | ASIC v Bekier: first Australian judicial guidance on directors and AI

Sources

  • Fair Work Commission, President’s Statement and Draft Guidance Note: Use of Generative Artificial Intelligence in Commission Cases, 24 March 2026. fwc.gov.au

  • Fair Work Commission, President’s Presentation to the Victorian Bar Association, 18 February 2026. fwc.gov.au (PDF)

  • ACS Information Age, “AI floods Fair Work claims,” February 2026. ia.acs.org.au

  • SmartCompany, “AI-generated unfair dismissal claims swamp Fair Work Commission,” 20 February 2026. smartcompany.com.au

  • HRD Australia, “Fair Work Commission’s workload is unsustainable,” November 2025. hcamag.com

  • Stanford HAI, “Hallucinating Law: Legal Mistakes with Large Language Models Are Pervasive,” January 2024. hai.stanford.edu

  • Stanford HAI, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries,” May 2024. hai.stanford.edu

  • Gadens, “A system under strain: The unsustainable rise in Fair Work Commission applications,” December 2025. gadens.com

  • Victorian Legal Services Board, Law Society of NSW, Legal Practice Board of WA, Joint Statement on AI in Australian Legal Practice, December 2024. lsbc.vic.gov.au

  • CBP Lawyers, “NSW amends WHS Act: employers now expressly required to consider risks of AI,” 24 March 2026. cbp.com.au