Justice Lee’s judgment in ASIC v Bekier [2026] FCA 196 contains the first substantive judicial commentary from an Australian court on directors using generative AI. Within a 1,959-paragraph Federal Court decision, the Court warned boards against permitting informal, unregulated AI use and called for formal policies governing how directors interact with AI tools. Law firm coverage has largely treated the AI passages as a secondary finding. For AI governance, they are the lead story.
What Justice Lee Actually Said
The judgment, delivered 5 March 2026, runs to 1,959 paragraphs across 313 pages. It arises from ASIC’s civil penalty proceedings against Star Entertainment Group’s former CEO Matthias Bekier, former General Counsel Paula Martin, and seven non-executive directors over failures in anti-money laundering governance. Bekier and Martin were found to have breached their duty of care and diligence under s 180(1). All seven NEDs were cleared.
The AI discussion appears in two places in the judgment, each serving a different purpose.
The more detailed treatment is at paragraphs 390 to 396 (Section H.1.5), where Justice Lee addressed director AI use head-on. His Honour described it as “jejune to deny that many individual directors are using AI informally to prepare for meetings” and warned that boards should “discuss and deliberately govern any AI use by formal adoption of policies, rather than just wink at informal ‘shadow’ use.” The Court cited with approval the AICD’s publication “AI use by directors and boards: Early insights,” which describes the gap between individual directors’ private AI adoption and collective board governance of that use. At these paragraphs, the Court specified that any use of AI should be “controlled and transparent” and that “ethical reasoning and judgment rests with directors, not machines” (ASIC v Bekier [2026] FCA 196 at 390–396).
The broader concluding restatement comes at paragraph 1956, near the end of the judgment, in the context of board information overload. Justice Lee had described Star’s board packs as “Brobdingnagian electronic document dumps masquerading as board packs” that functioned as insurance policies for their preparers rather than genuine information tools. Against that backdrop, the Court restated the AI warning:
“There is nothing inherently objectionable in obtaining [AI] assistance, but what ought not occur is that this development becomes an excuse for a failure to instil discipline in the provision of information to directors or leads to a quiet normalisation of private reliance by them upon computer-generated distillations, unregulated by any agreed policy. Proper collective governance requires transparency about how information is being reduced and relied upon in either the preparation of board packs by management, or their digestion by directors. The use of technology may assist comprehension, but it cannot displace judgment. The statutory obligation imposed by s 180(1) remains personal, and it requires informed human judgment.”
Between them, the two passages identify two distinct risks. The first is directors privately using AI to digest board materials without board awareness or policy. The second is management using AI to create board packs, potentially deepening the information asymmetry that was central to the Star governance failures.
Obiter, But Already Influential
The AI findings are obiter dicta. The liability analysis did not turn on whether any director used AI, and the Court’s conclusions on Bekier and Martin would be identical without the AI passages. Wotton + Kearney, in their 16 March 2026 analysis, explicitly confirmed the AI observations were made “by way of obiter” and characterised them as “cautionary, not promotional.”
Obiter from a Federal Court judge on a topic with no other judicial authority in Australia carries substantial practical weight. Four days after the judgment, ASIC Chair Joe Longo endorsed the AI passages at the AICD Governance Summit on 9 March 2026, specifically citing paragraph 1956 as part of the Court’s description of directors as active participants rather than passive recipients of information (ASIC, 9 March 2026). The AICD’s own analysis concluded that AI use “must be controlled and formally governed by the board and it must not displace independent judgement and individual diligence.” Clayton Utz, Bell Gully, and Wotton + Kearney all recommend boards act on the findings.
King & Wood Mallesons cited paragraph 1956 directly in their analysis and provided the most complete published version of the AI passage. Their overall assessment of the judgment was reassuring for directors on the doctrinal holdings, noting the Court “applied the law concerning s 180(1) as it had been settled and understood, and did not interpret the duty of care and diligence in a manner that imposed additional obligations.” The AI observations sit alongside settled law rather than expanding it, but they fill a gap that no other Australian judicial pronouncement has addressed.
How Many Directors Are Already Using AI?
The OnBoard 2025 Board Effectiveness Survey, based on responses from over 500 board professionals globally, found that 69 per cent had used AI for board work in the previous six months. Forty per cent had used more than one AI tool. ChatGPT was the most common at 48 per cent, followed by Microsoft Copilot at 32 per cent and Google Gemini at 20 per cent (OnBoard, October 2025). Board professionals who use AI rated themselves 12 points higher in effectiveness and collaboration.
The Diligent Institute and Corporate Board Member “Pulse Check on AI in the Boardroom” survey of US public company directors found 50 per cent use generative AI for meeting preparation and 39 per cent for intelligence summarisation. Only 22 per cent of boards had adopted formal AI governance or ethics policies (Diligent/Corporate Board Member, September 2025). The gap between adoption and governance at board level mirrors the pattern ASIC identified in operational AI use.
No equivalent Australian survey data on director AI use exists yet. The Diligent/GIA survey of Australian governance leaders found that 43 per cent had placed AI adoption at the top of their strategic agendas and 61 per cent of Australian organisations had restricted or defined employee AI use, but only 13 per cent of Australian boards had recruited AI-literate directors (Diligent/GIA/SID, November 2025). The AICD article that Justice Lee cited describes the “two-speed dynamic”: individual directors adopting AI privately while collective board governance of that use lags behind.
The Connection to ASIC REP 798
ASIC’s Report 798, Beware the Gap (October 2024), reviewed 624 AI use cases across 23 financial services licensees and found governance arrangements “varied widely,” with some licensees deploying AI faster than their governance frameworks could keep pace. Fifty-seven per cent of AI use cases were less than two years old or still in development. The report identified a governance gap between AI adoption speed and oversight maturity.
Bekier extends the same concept from operational AI to the boardroom itself. REP 798 addressed shadow AI in financial services operations. Paragraph 1956 addresses shadow AI in how directors process the information on which they govern. Together they bracket the gap from bottom to top: ungoverned AI at the operational level and ungoverned AI in how boards receive and digest information.
ASIC Chair Longo explicitly linked both in his 9 March 2026 speech, highlighting paragraph 1956 alongside the REP 798 findings as evidence that AI governance must extend across all levels of an organisation. Shadow AI Watch’s coverage of REP 798 noted that only one of the 14 licensees planning to increase AI use had built governance infrastructure before deployment. The Bekier AI passages suggest the same pattern may hold at board level.
What Boards Should Do
Clayton Utz’s 16 March 2026 analysis provides the most structured action list. Adapted for the AI-specific findings:
Adopt a formal AI acceptable use policy covering director-level use. The policy should address what AI tools directors may use for board preparation, whether AI-generated summaries must be disclosed, and how reliance on AI digests is recorded. Shadow AI Watch publishes a free AI Usage Policy Template as a starting point. Bell Gully summarised it plainly: “It may be prudent for Boards to adopt formal artificial intelligence policies, rather than allowing informal and undisclosed use.”
Require transparency about AI use in both directions. Justice Lee’s language covers both management preparing board packs and directors digesting them. If management uses AI to draft or summarise board papers, the board should know. If directors use AI to process those papers, that use should be visible rather than private.
Review board pack quality. The “Brobdingnagian electronic document dumps” description applies well beyond Star. Justice Lee’s criticism was that volume had become a substitute for clarity, and that excessive length served preparers rather than directors. AI-generated board materials risk compounding that problem by producing packs that appear comprehensive but lack judgment-driven synthesis.
Review the AICD/GIA joint statement on board minutes and AI. Published in November 2025, the joint statement addresses how AI intersects with the preparation and accuracy of board minutes. Boards that have not assessed its applicability to current practice should do so in light of the Bekier findings.
The judgment creates no new legal obligation. Its practical effect may be more significant: auditors, regulators, and governance advisors now have a Federal Court citation for the proposition that unregulated AI use at board level is a governance failure. When ASIC’s Chair endorses that citation four days later, the practical weight is clear. Boards that have not discussed AI use by the time of their next meeting are behind the pace the Court has set.
Related reading: ASIC’s AI governance review: what it found | What is an AI governance framework? | What is shadow AI? | AI Usage Policy Template (free download)
Sources
- ASIC v Bekier (Liability Judgment) [2026] FCA 196 (5 March 2026, Justice Lee)
- ASIC Media Release 26-040MR (5 March 2026)
- ASIC Chair Joe Longo: AICD Governance Summit speech (9 March 2026)
- AICD: Star Entertainment Group judgment: Implications for directors (5 March 2026)
- AICD: AI use by directors and boards: Early insights (cited by Justice Lee)
- AICD/GIA: Effective board minutes and the use of AI joint statement (November 2025)
- Clayton Utz: What directors and officers need to know (16 March 2026)
- Bell Gully: Directors’ duties: important new guidance from Australia
- OnBoard: 2025 Board Effectiveness + AI Survey (500+ respondents, October 2025)
- Diligent/Corporate Board Member: A Pulse Check on AI in the Boardroom (September 2025)
- ASIC Report 798: Beware the gap (October 2024)
- Wotton + Kearney: Directors’ duties through a modern lens (16 March 2026)
- Ironbridge Legal: Lessons from the Star Casino Boardroom (March 2026)
- King & Wood Mallesons: ASIC v Bekier analysis (9 March 2026)