“ASIC engages closely with other regulators, government agencies and the financial sector to understand and respond to changing technologies,” an ASIC spokesperson told Reuters on 20 April 2026, confirming that Australia’s corporate regulator is now monitoring Anthropic’s frontier AI model, Claude Mythos Preview. The spokesperson added that ASIC expects financial services licensees to “be on the front foot” to safeguard customers and clients.
The Australian Prudential Regulation Authority (APRA), which supervises banks, said it would “continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system.” Both statements were part of a Reuters wire story that confirmed regulators across Australia, Hong Kong, South Korea, the UK, and the US are formally assessing the risks posed by a single AI model.
That model can autonomously discover and exploit zero-day vulnerabilities across every major operating system and web browser, according to Anthropic’s own disclosures. The question for regulated firms is not whether Mythos is real. It is whether their governance, vendor risk management, and incident response frameworks account for a world in which AI-powered vulnerability discovery operates at this scale.
What Mythos is, and what Anthropic claims it can do
Anthropic launched Claude Mythos Preview on 7 April 2026 as part of Project Glasswing, a partnership with major technology firms to use the model for defensive cybersecurity. The model is not publicly available. Access is restricted to Project Glasswing launch partners (AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks) and a group of over 40 additional organisations that build or maintain critical software infrastructure.
Anthropic’s Frontier Red Team technical report states that Mythos Preview “fully autonomously identified and then exploited” a 17-year-old remote code execution vulnerability in FreeBSD (triaged as CVE-2026-4747) that allows unauthenticated root access to any machine running NFS. Anthropic states that over 99% of vulnerabilities found have not yet been patched, so detailed disclosure is limited; the FreeBSD project has not independently confirmed the finding. The report also states the model found and exploited zero-day vulnerabilities in “every major operating system and every major web browser” during testing. In one case, Mythos wrote a browser exploit that chained together four separate vulnerabilities, including a JIT heap spray that escaped both renderer and OS sandboxes. Anthropic has committed USD 100 million in usage credits and USD 4 million in direct donations to open-source security organisations.
These are vendor assertions. Anthropic is the source for its own model’s capabilities. As security researcher Bruce Schneier wrote on 13 April 2026: “This is very much a PR play by Anthropic, and it worked. Lots of reporters are breathlessly repeating Anthropic’s talking points, without engaging with them critically.” Schneier also noted, however, that the security firm Aisle was able to replicate some of the vulnerability findings using older, cheaper, public models, which points to a structural issue beyond Mythos itself: AI vulnerability discovery at scale is already becoming commoditised. The question is not whether Mythos is uniquely dangerous. It is whether the defensive posture of most organisations accounts for this class of tool at all.
What the regulators are actually doing
The Reuters story by Scott Murdoch, Heekyong Yang, Xinghui Kok, Yantoultra Ngui, and Selena Li confirmed the following regulatory actions:
Australia. ASIC is monitoring Mythos alongside peer regulators. APRA is assessing implications for financial system resilience.
Hong Kong. The Hong Kong Monetary Authority (HKMA) is introducing a new Cyber Resilience Testing Framework and forming a public-private sector taskforce to examine, monitor, and respond to AI-driven cyber risks.
South Korea. The Financial Supervisory Service (FSS) held a meeting with information security officials from financial firms. The Financial Services Commission (FSC) held a separate emergency meeting chaired by Vice Chairman Kwon Dae-young with chief information security officers from the FSS, banks, and insurers.
United Kingdom. The UK AI Security Institute evaluated Mythos Preview independently and confirmed it was the first AI model to complete the institute’s full network takeover test, though the institute cautioned that its test environments lacked the security features of many real-world systems.
United States. The White House is in discussions with Anthropic about potential government access to Mythos. Axios reported on 19 April that the NSA is already using the model despite a separate dispute in which the Department of Defense designated Anthropic a supply chain risk to national security, a classification the company is contesting in court.
Why this is a governance problem, not just a security problem
The immediate cybersecurity response (patching vulnerabilities that Mythos or equivalent models can find) is essential but insufficient. The governance problem is structural and threefold.
First, the banking sector runs technology stacks that layer decades of legacy code with modern cloud infrastructure. As The Next Web reported, the accumulated technical debt creates undiscovered vulnerabilities that an AI model of this class can systematically locate. The problem is not one vendor’s model. It is the attack surface that model class exposes.
Second, concentration risk is real. The banking sector relies on a small number of cloud providers. A model that can find vulnerabilities in those providers’ systems could cascade across the entire financial system. ASIC and APRA are monitoring this because it is a systemic stability question, not just an individual-firm compliance question.
Third, “defensive use” does not exempt firms from regulatory scrutiny. If a financial institution uses Mythos Preview or a similar tool in its security workflow, the regulator will want to know how access is governed, what the tool is scanning, how outputs are handled, and who authorised the deployment. The tool’s capability is a governance input, not a governance exemption.
The Copilot problem at internet scale
SAW readers who followed the Microsoft Copilot oversharing analysis in March will recognise the structural pattern. When organisations deployed Copilot across their M365 tenancies, the AI did not break security. It surfaced permissions sprawl, oversharing, and access control gaps that had accumulated over years of ad hoc configuration. The vulnerabilities were already there. Copilot just found them faster than any human audit could.
Mythos operates on the same logic at a different scale. It does not create vulnerabilities in FreeBSD, OpenBSD, or Chrome. It finds bugs that have been sitting unpatched for 17 or 27 years and writes working exploits for them. The attack surface was always there. The AI maps it at a speed and scale that changes the risk calculus for every organisation that runs affected software.
The parallel extends to ambient AI exposure in SaaS platforms. When vendors embed AI features into enterprise tools by default, the AI processes data that the organisation may not have known was accessible. The governance failure is the same in each case: the organisation assumed its existing controls were adequate because nobody had tested them at the speed and scale AI enables. Copilot tested internal permissions. Ambient AI tested SaaS data boundaries. Mythos tests the entire external attack surface. Each one is a magnifying glass on pre-existing governance gaps, not the source of new ones.
The practical implication is that any organisation that needed a permissions audit before deploying Copilot also needs an attack surface review in light of Mythos-class capabilities. The tools that defenders use to find vulnerabilities are the same class of tools attackers will use. The organisations that cleaned house before the AI arrived are the least exposed. The ones that assumed their existing posture was adequate are the most exposed.
What regulated firms should do now
Map your exposure to Mythos-class capabilities. The question is not “are we using Mythos?” It is “could a Mythos-class model find vulnerabilities in our systems that we have not found ourselves?” If the answer is yes (and for most organisations it will be), that is a risk assessment input.
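As a starting point, an exposure-mapping exercise can be as simple as matching an internal asset inventory against the software families Anthropic's report names as affected. The sketch below is a hypothetical illustration, not a real scanning tool: the inventory entries, field names, and the affected-family list are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str            # software component, as recorded in the inventory
    version: str         # deployed version
    internet_facing: bool

# Software classes the Frontier Red Team report says Mythos found
# zero-days in: every major OS and browser, plus the FreeBSD/NFS case.
AFFECTED_FAMILIES = {"freebsd", "openbsd", "linux", "windows", "macos",
                     "chrome", "firefox", "safari", "edge"}

def exposure_report(inventory: list[Asset]) -> list[Asset]:
    """Return assets in an affected family, internet-facing ones first."""
    hits = [a for a in inventory
            if any(f in a.name.lower() for f in AFFECTED_FAMILIES)]
    return sorted(hits, key=lambda a: not a.internet_facing)

# Hypothetical inventory for illustration only.
inventory = [
    Asset("FreeBSD 13 NFS file server", "13.2", internet_facing=True),
    Asset("Internal wiki (nginx)", "1.24", internet_facing=False),
    Asset("Chrome fleet (managed desktops)", "124", internet_facing=False),
]

for asset in exposure_report(inventory):
    print(asset.name, "| internet-facing:", asset.internet_facing)
```

A real exercise would draw on a maintained CMDB and vulnerability feeds rather than a hard-coded list; the point of the sketch is that the output is a prioritised risk-assessment input, not a pass/fail check.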
Review vendor risk management for AI providers. If your organisation uses AI tools from Anthropic, OpenAI, or other frontier model providers, your vendor risk assessment should now account for the possibility that the vendor itself is a regulatory focus. Anthropic has been designated a supply chain risk by the US Department of Defense. That may or may not affect Australian regulated firms directly, but it is a data point procurement teams should surface.
Brief the board. The ASIC and APRA statements are public. Board audit and risk committees should know that Australia’s regulators are monitoring a specific AI model’s systemic risk implications and that the regulators expect licensees to be proactive. A board briefing note covering the Mythos situation, the firm’s current AI governance posture, and any actions taken is now a defensible governance step.
Test your cyber incident response plan against AI-powered attack scenarios. The Stanford AI Index (covered in a separate SAW article) shows that AI incidents cluster in organisations that have already experienced one. Response plans built for human-paced attacks may not hold against AI-automated vulnerability exploitation. A tabletop exercise that simulates an AI-powered intrusion is now worth the investment.
Document how AI tools are used in security workflows. If your security team is already using AI tools (including commercial penetration testing tools, code scanners, or vendor-provided AI features), document the use, the governance around it, and the authorisation chain. Regulators asking “how do you use AI in your security operations?” will expect an answer.
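One lightweight way to make that documentation auditable is a structured register entry per tool. The sketch below shows one possible shape for such a record; the field names are assumptions for illustration, not a regulator-mandated schema, and the example values are invented.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AIToolRecord:
    tool: str                  # product name
    vendor: str
    use_case: str              # what the tool scans or generates
    data_in_scope: list[str]   # data classes the tool can access
    output_handling: str       # where findings go, who reviews them
    authorised_by: str         # accountable owner of the deployment
    review_date: str           # next governance review (ISO date)
    notes: list[str] = field(default_factory=list)

# Hypothetical entry for illustration only.
record = AIToolRecord(
    tool="Example code scanner",
    vendor="ExampleVendor",
    use_case="Static analysis of internal repositories",
    data_in_scope=["source code", "commit metadata"],
    output_handling="Findings triaged by AppSec; no code leaves tenancy",
    authorised_by="CISO",
    review_date="2026-10-01",
)

print(json.dumps(asdict(record), indent=2))
```

A register like this gives a ready answer to the regulator's "how do you use AI in your security operations?" question: the use, the data in scope, and the authorisation chain are all in one reviewable record.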
The bigger picture
Anthropic’s Mythos Preview is the first frontier AI model to attract named, public regulatory responses from multiple financial supervisors across multiple jurisdictions simultaneously. Whether Anthropic’s capability claims are fully verified or partly marketing (and reasonable analysts disagree on that question) is secondary to the regulatory fact: ASIC, APRA, HKMA, FSS, FSC, the UK AI Security Institute, and the US government are all now treating frontier AI models as a systemic risk input for financial stability.
That means regulated firms, and the governance, risk, and compliance teams that serve them, need to treat AI model capabilities as a risk factor that sits alongside interest rate risk, credit risk, and operational risk in the supervisory conversation. The risk is not the model itself but the inability to govern its implications.
Sources
- Reuters (via Yahoo Finance), “Asia regulators monitor Anthropic’s Mythos for potential banking risks,” 20 April 2026 (ASIC, APRA, HKMA, FSS, FSC quotes). yahoo.com
- iTnews, “ASIC, APRA among regulators monitoring Anthropic’s Mythos,” 21 April 2026 (Australian regulatory detail). itnews.com.au
- Anthropic, “Project Glasswing: Securing critical software for the AI era,” 7 April 2026 (model capabilities, partner list, $100M commitment). anthropic.com
- Anthropic Frontier Red Team, “Claude Mythos Preview,” 7 April 2026 (CVE-2026-4747, zero-day discovery claims, technical detail). red.anthropic.com
- Bruce Schneier, “On Anthropic’s Mythos Preview and Project Glasswing,” 13 April 2026 (critical analysis of vendor claims). schneier.com
- The Next Web, “ASIC joins global regulators monitoring Anthropic’s Mythos AI,” 19 April 2026 (IBM criticism, DoD supply chain risk context). thenextweb.com
- Foreign Policy, “Anthropic’s Claude Mythos Preview Changes Cyber Calculus,” 20 April 2026 (UK AI Security Institute evaluation, geopolitical context). foreignpolicy.com
- Fortune, “Anthropic is giving some firms early access to Claude Mythos to bolster cybersecurity defenses,” 7 April 2026 (launch coverage, Dario Amodei quote). fortune.com
- PYMNTS, “Anthropic’s Mythos Leads Global Bank Regulators to Call For Increased Vigilance,” 21 April 2026 (US government access discussions). pymnts.com
- Simon Willison, “Anthropic’s Project Glasswing,” 7 April 2026 (exploit chaining detail, Nicholas Carlini quote). simonwillison.net