On 1 May 2026, six national cybersecurity agencies published “Careful Adoption of Agentic AI Services,” the first coordinated multinational guidance on securing autonomous AI agents. The co-authors are CISA and the NSA (US), the ASD’s ACSC (Australia), CCCS (Canada), NCSC-NZ, and NCSC-UK.
Six agencies from five countries do not coordinate on a single document lightly. The joint imprimatur signals that agentic AI risk has crossed the threshold from emerging concern to active policy priority across the Five Eyes intelligence-sharing alliance.
CISA Acting Director Nick Andersen stated: “CISA is committed to supporting the US’s adoption of AI that includes ensuring it aligns with President Trump’s Cyber Strategy for America and is cyber secure.”
What the guidance covers
The document targets LLM-based agentic AI systems, defined as systems that can autonomously reason, plan, make decisions, and take actions without human intervention for each step. These are not chatbots. They are systems that operate autonomously across tools and data sources, executing API calls, accessing databases, generating and sending communications, modifying files, and sequencing complex multi-step workflows without human approval at each stage.
The guidance identifies five distinct risk categories:
Privilege risks. Agentic AI systems often inherit or accumulate access rights beyond what their tasks require. The Vercel/Context.ai breach SAW covered in April is a textbook example: a consumer AI tool granted “Allow All” OAuth permissions became the entry point for a supply-chain compromise. The guidance tells organisations to avoid granting broad or unrestricted access, especially to sensitive data or critical systems.
Design and configuration risks. Agents designed without security constraints, or deployed with default configurations that prioritise functionality over safety, create attack surfaces the deploying organisation may not fully understand.
Behaviour risks. Agentic systems can exhibit goal misalignment, where the agent pursues objectives that diverge from the operator’s intent. They can also exhibit deceptive behaviour, producing outputs designed to satisfy evaluation criteria rather than achieve genuine objectives. The guidance warns that these risks are distinct from traditional software bugs because they emerge from the model’s reasoning rather than from code defects.
Structural risks. Interconnected agentic systems create cascading failure pathways. When agents call other agents, or when multiple agents share access to the same data sources and tools, a compromise in one component can propagate across the system. This is the same concentration risk APRA warned about in its 30 April industry letter.
Accountability risks. Agentic AI systems can make decisions that are difficult to trace, explain, or attribute. When an agent takes an action that causes harm, determining whether the fault lies with the model, the configuration, the data, the operator, or the developer is often unclear. The guidance calls for logging and auditability as baseline requirements.
The core message: integrate, do not isolate
The guidance’s most significant recommendation is architectural. Organisations should integrate agentic AI security into existing frameworks, including zero trust, defence-in-depth, and least privilege, rather than building a separate “AI security” function. The agencies explicitly reject the idea that AI agents require a fundamentally different security model. They require the same principles applied to a new class of system.
The practical recommendations include treating AI agents as identities that need authentication, authorisation, and activity logging; applying least-privilege access controls that limit what each agent can do and what data it can reach; starting with low-risk, non-sensitive use cases and progressively increasing autonomy only after continuous evaluation confirms safe behaviour; implementing kill switches and human override capabilities for high-risk agent actions; monitoring agent behaviour in real time, not just at deployment; and red-teaming agent systems against prompt injection, goal misalignment, and privilege escalation scenarios.
Why this matters for SAW readers
SAW has covered agentic AI regulation in two earlier articles: the first wave of regulatory responses (2 April, covering FINRA, EU DPA, UK ICO, and EU AI Act Service Desk positions) and the second wave focusing on UK CMA and DRCF consumer law (20 April). Those articles covered how regulators are defining the problem. This guidance covers how security agencies expect organisations to solve it.
The sequence matters. Regulators set expectations, security agencies publish operational controls, and compliance teams then need to demonstrate alignment with both. Organisations that have been tracking SAW’s agentic AI coverage now have three layers to map against: regulatory expectations (what rules apply), security controls (what protections are required), and operational governance (how the organisation demonstrates compliance).
The Australian angle
The ASD’s ACSC co-authored this guidance, which gives it direct relevance for Australian organisations. APRA’s 30 April industry letter explicitly referenced ASD advice on frontier AI models and directed regulated entities to note that guidance. The Five Eyes document is the guidance APRA was pointing to.
For APRA-regulated entities, the combination is clear: APRA has told you your AI controls are not good enough, and ASD has now published the operational baseline for securing agentic AI systems. The two documents are companions. Reading one without the other leaves gaps.
For Australian organisations outside the prudential perimeter, the guidance still applies. Any organisation deploying AI agents that interact with production systems, customer data, or external APIs should map its current controls against the Five Eyes framework. The guidance is not legally binding, but it sets the standard that incident investigators, regulators, and courts will reference when assessing whether an organisation’s AI security was reasonable.
What CISOs and IT leaders should do
Audit all deployed agents. Map every agentic AI system currently running in the environment, including Microsoft Copilot, GitHub Copilot, Salesforce Agentforce, internal automation tools, and any AI system that can take actions without human approval for each step. Most organisations will find agent deployments they did not know about.
Apply least-privilege access controls. Every agent should have the minimum access required for its specific task. No agent should have “Allow All” permissions. Review and restrict OAuth grants, API keys, and service account permissions for every AI integration.
Implement logging and auditability. Every action an agent takes, every API call, every data access, every output generated, should be logged in a format that supports incident investigation. The Five Eyes guidance makes this a baseline expectation, not an optional enhancement.
Deploy kill switches. For any agent that can take actions affecting production systems, financial transactions, or customer data, implement a mechanism to immediately suspend agent activity. Test the kill switch regularly.
Start low, scale slowly. The guidance recommends beginning with low-risk, non-sensitive use cases and progressively increasing autonomy only after continuous evaluation confirms safe behaviour. Organisations that have already deployed agents broadly should conduct a retrospective assessment of whether those deployments would meet this standard.
Red-team against agent-specific threats. Prompt injection, goal misalignment, and privilege escalation are the three attack vectors the guidance highlights most strongly. Standard penetration testing does not cover these scenarios. Organisations need agent-specific red-team exercises.
The procurement signal
The Cloud Security Alliance analysis noted that the Five Eyes guidance will likely migrate into procurement requirements. When six national security agencies publish a joint control framework, vendor questionnaires and RFP requirements follow. Organisations evaluating AI agent platforms should start asking vendors how their products align with the Five Eyes risk categories and control recommendations now, before those questions become mandatory.
Sources
- CISA, “CISA, US and International Partners Release Guide to Secure Adoption of Agentic AI,” press release, 1 May 2026 (Andersen quote, scope, partner agencies, actionable recommendations). cisa.gov
- CISA, “Careful Adoption of Agentic AI Services,” guidance document landing page, 1 May 2026 (full guidance, five risk categories, control recommendations). cisa.gov
- NSA, “NSA joins the ASD’s ACSC and Others to Release Guidance on Agentic Artificial Intelligence Systems,” press release, 30 April 2026 (NSA framing, CSI designation, critical infrastructure and defence focus). nsa.gov
- Canadian Centre for Cyber Security, “Joint guidance on the careful adoption of agentic artificial intelligence services,” 1 May 2026 (Canadian partner statement, LLM-based agent definition). cyber.gc.ca
- Cloud Security Alliance, “The Autonomous Governance Moment: Five Eyes Issues First Joint Agentic AI Security Guidance,” 3 May 2026 (five risk categories analysis, procurement implications, lifecycle controls mapping). cloudsecurityalliance.org
- Arnav Sharma, “US and Australia Release Agentic AI Security Guidance,” 3 May 2026 (ACSC co-authorship significance, agent capability definition, operational detail). arnav.au
- Let’s Data Science, “Agencies Issue Guidance on Agentic AI Security,” 2 May 2026 (five risk categories enumeration, editorial analysis, practitioner indicators). letsdatascience.com