Organisations using US-headquartered AI providers face legal, political, and jurisdictional exposure that most governance frameworks have not caught up with. The US government can compel any American technology company to disclose customer data regardless of where that data is physically stored. A provider’s political relationship with the sitting administration now constitutes material risk for its enterprise customers worldwide. The events of early 2026 have moved this from an abstract compliance question to a live operational concern.

What Changed in Early 2026

On 5 March 2026, the Trump administration designated Anthropic as a “supply chain risk to national security,” a label previously reserved for foreign adversaries. The reported trigger was Anthropic’s refusal to remove two contractual restrictions from its Pentagon deployment: stated red lines covering mass domestic surveillance and fully autonomous weapons, the exact contractual language of which has not been made fully public. The designation was followed by a presidential directive to all federal agencies to immediately cease using Anthropic’s technology. The General Services Administration removed Anthropic from government procurement platforms. The Pentagon threatened to invoke the Defense Production Act. Anthropic filed two federal lawsuits on 9 March 2026 alleging First Amendment and due process violations, one in the Northern District of California and one in the DC Circuit. Its CFO stated the actions could reduce 2026 revenue by multiple billions of dollars. More than 100 enterprise customers have contacted Anthropic about the designation (Reuters, 12 March 2026). US District Judge Rita Lin set an expedited preliminary injunction hearing for 24 March 2026 after the Department of Justice declined to commit to taking no further adverse action before the hearing.

Shortly after the Anthropic ban, OpenAI announced it had secured a classified Pentagon deployment deal, widely reported as filling the space Anthropic had been negotiating to occupy; few observers read the timing as coincidental.

OpenAI’s trajectory from an organisation that explicitly banned military and warfare use (pre-January 2024) to one deploying on classified military networks (February 2026) took just over two years. The steps were deliberate: removal of the military use prohibition from its usage policy in January 2024, appointment of retired NSA Director General Paul Nakasone to its board in June 2024, launch of ChatGPT Gov for government agencies in January 2025, and the $500 billion Stargate AI infrastructure project announced at a White House press conference the day after Trump’s inauguration. By August 2025, OpenAI offered ChatGPT Enterprise to every federal agency for $1 per year, a loss-leader to capture the ecosystem. The February 2026 Pentagon deal completed that trajectory.

Dean Ball, an AI policy researcher at George Mason University’s Mercatus Center, characterised the Anthropic designation as “almost surely illegal” and described it as “attempted corporate murder.” Ball’s assessment may prove accurate, but legal resolution could take months or years, during which enterprise customers must make real-time decisions about provider continuity.

For enterprise customers, the board-level lesson from both developments is that AI provider access can be disrupted overnight by political dynamics entirely unrelated to the quality of the provider’s technology or the content of its contractual commitments.

The CLOUD Act Makes Data Location Irrelevant

The foundational legal instrument underlying AI sovereign risk is the Clarifying Lawful Overseas Use of Data Act (CLOUD Act, 2018). It allows US law enforcement to compel any US-based technology company to disclose data “regardless of whether such communication, record, or other information is located within or outside of the United States.” The law follows corporate control, not data location.

This means OpenAI, Anthropic, Microsoft, and Google can each be compelled to hand over customer data stored in Frankfurt, Sydney, or Johannesburg. A US warrant for data hosted in OpenAI’s European data residency region carries the same legal force as one for data stored in Virginia. The distinction boards must internalise is between data residency (where data physically sits) and data sovereignty (which jurisdiction’s laws govern access to it). Storing data in the EU while using a US provider achieves residency but not sovereignty.

The conflict with the EU’s GDPR is fundamental. GDPR Article 48 states that foreign court orders are not by themselves a valid basis for transferring personal data outside the EU. The European Data Protection Board has concluded that US providers cannot legally base the disclosure and transfer of personal data to US authorities on CLOUD Act warrants alone. Yet non-compliance with a CLOUD Act warrant carries contempt sanctions against the provider. Organisations using US-hosted AI for personal data processing therefore face a genuine conflict of laws: complying with a US warrant can itself breach the GDPR, whose fines reach 20 million euros or 4 per cent of global turnover, whichever is higher. GDPR enforcement fines totalled 2.3 billion euros in 2025, a 38 per cent increase year on year.

The current EU-US Data Privacy Framework, adopted in July 2023, governs routine data transfer rules for certified companies but does not prevent US authorities from issuing CLOUD Act demands. Its long-term durability is uncertain: the Trump administration’s removal of Democratic members from the Privacy and Civil Liberties Oversight Board (PCLOB), the independent oversight body whose functioning is essential to the framework’s legal validity, has raised substantive questions about whether the framework can survive a legal challenge.

For Australia, a bilateral US-Australia CLOUD Act Agreement entered into force on 31 January 2024. It enables Australian law enforcement to issue orders directly to US providers and vice versa, with safeguards against bulk collection and a prohibition on targeting Australian persons. In effect, it enables cross-border data access that bypasses traditional mutual legal assistance processes. The Hosting Certification Framework requires sensitive government data to be stored in certified Australian facilities, but the CLOUD Act applies to US providers regardless of where that data is physically hosted.

The only architectural measure experts agree can render US government demands technically unexecutable is customer-controlled encryption keys retained entirely by the organisation. If the provider cannot decrypt the data, a CLOUD Act warrant cannot extract readable content. For most AI workloads, this mitigation is impractical: AI services require access to plaintext data for processing. Data minimisation, prompt sanitisation, and avoiding sending sensitive data to AI providers are more immediately achievable controls.
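As a concrete illustration of the sanitisation control, the sketch below redacts common personal data patterns before a prompt leaves the organisation’s boundary. The patterns and placeholder labels are illustrative only; production deployments typically rely on dedicated PII-detection tooling rather than hand-rolled regular expressions.

```python
import re

# Illustrative PII patterns only; real deployments use dedicated detection
# tools and cover names, addresses, account numbers, and so on.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "TFN": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # Australian Tax File Number shape
}

def sanitise_prompt(text: str) -> str:
    """Replace matched PII with typed placeholders before the prompt
    is sent to an external AI provider."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com about invoice 4471, ph +61 2 9999 0000."
print(sanitise_prompt(prompt))
# -> "Draft a reply to [EMAIL] about invoice 4471, ph [PHONE]."
```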

Where the Major Providers Actually Store Data

The data residency landscape has expanded rapidly but remains bounded by CLOUD Act jurisdiction regardless of hosting location.

OpenAI offers data residency across 10 regions including Europe, the UK, Australia, Japan, and Singapore for Enterprise and API customers. European inference residency, meaning GPU processing stays in-region, launched in January 2026. Consumer ChatGPT (Free, Plus, Pro) remains US-hosted with no residency options, and consumer data may be used for model training unless users opt out. Enterprise and API data is not used for training. The Italian data protection authority fined OpenAI approximately 15 million euros for GDPR violations, in a decision reached in late 2024 and publicly detailed in early 2025.

Anthropic stores data primarily in the United States via AWS. Multi-region API processing across the US, Europe, Asia, and Australia launched in August 2025, but the consumer Claude.ai application and third-party integrations remain US-hosted. Enterprise customers can access Claude through Amazon Bedrock, including the AWS European Sovereign Cloud, or Google Vertex AI for additional residency controls.
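For organisations accessing Claude through Bedrock, residency control amounts to pinning the runtime client and model profile to the chosen region. A minimal sketch, assuming Bedrock access in an EU region; the region and model identifier are illustrative and should be checked against what is enabled in the account. Region pinning constrains where inference runs, but does not remove the underlying CLOUD Act jurisdiction.

```python
import boto3

# Pin the Bedrock runtime client to an EU region so inference is processed
# in-region. Region and model identifier are illustrative: check which
# Anthropic models and inference profiles are enabled in your account.
client = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = client.converse(
    modelId="eu.anthropic.claude-3-7-sonnet-20250219-v1:0",  # illustrative EU profile
    messages=[{"role": "user", "content": [{"text": "Summarise this contract clause..."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])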

Microsoft Azure OpenAI offers three deployment types: Global (may cross regions), Data Zone (stays within EU or US boundaries), and Regional (single-region only). The EU Data Boundary commits to storing and processing customer data exclusively within the EU for in-scope services. Microsoft has established joint ventures for enhanced sovereignty: Bleu (with Orange and Capgemini) in France and Delos (with SAP) in Germany. A critical and often overlooked channel is Azure OpenAI Service, which became available to US government agencies in 2023 under Microsoft’s terms of service rather than OpenAI’s usage policies. By early 2025, DISA authorised Azure OpenAI for Impact Level 6, making it available across all US government classification levels. Microsoft’s own documentation acknowledges: “Microsoft remains a US-headquartered company and is subject to US law.”
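In practice, the deployment type is fixed when an Azure OpenAI deployment is created; application code simply targets the resulting resource. A minimal sketch, with an illustrative endpoint, key handling, deployment name, and API version:

```python
from openai import AzureOpenAI

# The endpoint belongs to a Regional (single-region) Azure OpenAI resource;
# endpoint, key handling, deployment name, and API version are illustrative.
client = AzureOpenAI(
    azure_endpoint="https://my-swedencentral-resource.openai.azure.com",
    api_key="<key-retrieved-from-a-vault>",
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="gpt-4o-regional",  # the name given to the deployment, not the base model
    messages=[{"role": "user", "content": "Classify this support ticket..."}],
)
print(response.choices[0].message.content)
```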

Google Vertex AI supports regional endpoints across the US, Europe, the UK, Japan, Singapore, and others, with customer-managed encryption keys available. Google’s S3NS joint venture with Thales targets French SecNumCloud compliance. Like Microsoft, Google’s sovereign offerings address operational controls but not the underlying CLOUD Act jurisdiction.
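A minimal Vertex AI sketch showing the two operational controls mentioned above, an EU regional endpoint and a customer-managed encryption key. The project ID, key name, and model are illustrative, and the key applies to resources Vertex AI stores rather than to the inference request in flight.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# EU regional endpoint plus a customer-managed encryption key (CMEK).
# Project, key ring, and model name are illustrative; the CMEK protects
# resources Vertex AI stores, not the request on the wire.
vertexai.init(
    project="my-gcp-project",
    location="europe-west4",
    encryption_spec_key_name=(
        "projects/my-gcp-project/locations/europe-west4/"
        "keyRings/ai-keys/cryptoKeys/vertex-cmek"
    ),
)

model = GenerativeModel("gemini-1.5-pro")
print(model.generate_content("Summarise the attached policy...").text)
```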

Even the most sophisticated data residency arrangements from US providers reduce operational risk and assist with local regulatory compliance, but they do not eliminate the legal reality that a US warrant can reach a US-headquartered company’s customer data wherever it sits.

DeepSeek Is a Categorically Different Risk

DeepSeek presents a qualitatively different exposure from US provider sovereign risk: a convergence of legal compulsion, documented security failures, and state infrastructure integration that makes it unsuitable for any sensitive enterprise workload.

DeepSeek, headquartered in Hangzhou and fully owned by Chinese hedge fund High-Flyer, stores all user data on servers in China. China’s National Intelligence Law (2017) Article 7 requires all organisations and citizens to “support, assist, and cooperate with national intelligence efforts.” The Data Security Law (2021) and Personal Information Protection Law (2021) create a framework in which Chinese companies are legally obligated to share data with the government upon request, with no meaningful right of refusal and no independent judicial oversight. Unlike the CLOUD Act, which is subject to US judicial process, Chinese intelligence demands carry no comparable procedural protections.

The security findings from independent researchers compound the legal risk. Feroot Security found hidden code in DeepSeek’s browser version transmitting user login data to China Mobile, a state-owned telecom operator banned from operating in the United States. NowSecure found the iOS application transmitting data without encryption, with Apple’s platform security protections deliberately disabled. Wiz Research discovered a publicly accessible database containing over one million lines of plaintext chat history, API keys, and backend infrastructure data. Cisco found a 100 per cent jailbreak success rate: the model failed to block any of the 50 harmful prompts tested.

The regulatory response was faster than for any previous AI product. Australia banned DeepSeek from all federal government devices on 4 February 2025. Italy blocked it from app stores. The US Navy, NASA, Pentagon, and Congress restricted it. Multiple US states issued bans. Germany’s Berlin data protection authority declared its data transfers illegal under GDPR.

Running DeepSeek’s open-weight models locally on organisation-controlled infrastructure avoids data transmission to China and is the only viable path for capability evaluation. Built-in censorship persists unless models are customised, and model integrity risks require independent security evaluation. Multiple security firms have explicitly recommended against enterprise deployment for sensitive data under any hosting arrangement.
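The self-hosted pattern looks like the sketch below: an open-weight model served on organisation-controlled hardware behind an OpenAI-compatible endpoint, so no prompt or completion leaves the local network. The server command, model name, and port are illustrative.

```python
from openai import OpenAI

# Assumes an open-weight model served on organisation-controlled hardware,
# e.g. started with:
#   vllm serve deepseek-ai/DeepSeek-R1-Distill-Llama-8B --port 8000
# Nothing in the request or response leaves the local network.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused-locally")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    messages=[{"role": "user", "content": "Capability test: summarise this sample text."}],
)
print(response.choices[0].message.content)
```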

Europe Is Building Alternatives, but the Gap Remains

The EU’s push toward AI sovereignty has accelerated in 2025 and 2026, driven both by the CLOUD Act conflict and by the political instability now visible in US provider relationships.

Mistral AI is the strongest European contender, valued at 11.7 billion euros with 18,000 NVIDIA Blackwell GPUs in a sovereign data centre in France. Its open-weight models can be deployed on-premise, eliminating CLOUD Act exposure. The French Ministry of Armed Forces signed a framework agreement in January 2026 for deployment on French-controlled infrastructure. Mistral does, however, have US investors including Andreessen Horowitz and Salesforce Ventures, creating a potential CLOUD Act exposure that remains legally untested. Aleph Alpha in Germany has pivoted to a “generative AI operating system” called Pharia, targeting government and critical infrastructure with BSI C5 certification and full EU data processing. SAP launched an EU AI Cloud in November 2025, integrating models from Mistral and Cohere on European infrastructure independent of US hyperscalers.

The capability gap is real and should not be understated. US hyperscalers control approximately 65 to 70 per cent of the EU cloud market; by most recent market estimates, only around 4 per cent of global cloud capacity is European-owned. McKinsey estimates that European sovereign AI could unlock up to 480 billion euros in annual economic value by 2030, but achieving that requires sustained investment against incumbents with massive scale advantages. Mistral competes near the frontier on many benchmarks but has not matched the leading US models across all use cases, particularly complex reasoning and coding tasks. The organisations best positioned to adopt European alternatives are those with workloads where current European model capability is sufficient, and where the jurisdictional advantage justifies the capability trade-off.

Australia’s Contradictory Strategy

Australia’s approach in 2025 and 2026 reveals a strategy that is pragmatic but internally contradictory: building domestic sovereign infrastructure while simultaneously deepening dependence on US providers.

The National AI Plan, published in December 2025, positions Australia as an AI and data centre hub. The government announced a 29.9 million Australian dollar AI Safety Institute, a sovereign GovAI Chat platform operating within Australian infrastructure at PROTECTED classification levels (trials from April 2026), and a requirement for every agency to appoint a Chief AI Officer by July 2026. Domestic investment is significant: AWS committed 20 billion Australian dollars, Microsoft 5 billion Australian dollars, and total data centre investment is projected to reach 26 billion Australian dollars by 2030.

At the same time, the government deepened its US provider commitments. A five-year Volume Sourcing Arrangement with Microsoft was signed in February 2026, covering Copilot, Microsoft 365, Azure, and security services across approximately 180,000 licences. OpenAI launched “OpenAI for Australia” in December 2025, its first Asia-Pacific program, with a memorandum of understanding with NEXTDC for a 650 MW sovereign AI campus in Sydney. AWS built a Top Secret Cloud for Australian intelligence agencies, designed to be interoperable with US and UK spy networks under AUKUS.

The AUKUS dimension clarifies the policy logic. Australia distinguishes between defence and intelligence data sharing, where Five Eyes integration is treated as strategically essential, and civilian and commercial data sovereignty, where Australian-hosted infrastructure is increasingly mandated. The CLOUD Act applies across both, regardless of where data is physically hosted. Privacy Act reforms, proceeding in tranches, introduce automated decision-making transparency requirements that take effect in December 2026, but they have not established a GDPR-equivalent framework. The tension between sovereignty ambition and US provider dependency is not resolved by current policy; it is managed.

A Risk Assessment Framework for Boards

Sovereign risk can be assessed along four dimensions for any AI provider: legal jurisdiction, political stability, data architecture, and regulatory compliance. The table below summarises how the main provider categories compare.


Provider category | Key risk profile | Best suited for
US providers (OpenAI, Anthropic, Microsoft, Google) | CLOUD Act exposure regardless of data location; demonstrated political disruption risk; highest capability | Non-sensitive data and general productivity. Not for personal data, IP, legal, financial, or strategic workloads.
Chinese providers (DeepSeek hosted) | Legal compulsion with no judicial oversight; documented security failures; data stored in China only | No sensitive enterprise use. Self-hosted open-weight models only, for capability testing.
European providers (Mistral, Aleph Alpha, SAP EU AI Cloud) | Structural CLOUD Act immunity if no US legal presence; near-frontier capability for many use cases | Sensitive data workloads where jurisdictional protection is the priority.
Self-hosted open-weight models | No third-party data exposure; full organisational control; infrastructure and maintenance costs | Highest-sensitivity workloads; genuine data sovereignty at the inference layer.


For boards structuring provider decisions, four practical questions determine the appropriate risk profile for each workload.

What data is being processed? Publicly available or non-sensitive operational data carries lower sovereign risk regardless of provider. Personal data, health records, financial information, legal advice, intellectual property, and strategic planning material carry the highest risk and warrant the most restrictive provider policy.

What is the regulatory environment? GDPR-regulated organisations face the sharpest CLOUD Act conflict. Australian organisations face growing tension as Privacy Act reforms progress and automated decision-making requirements take effect in December 2026.

What is the risk tolerance for political disruption? The Anthropic ban demonstrates that US provider access can be withdrawn by executive action with no warning and no direct connection to the organisation’s own conduct.

Can meaningful technical mitigations be implemented? Customer-controlled encryption is the strongest measure but is incompatible with most AI processing architectures. Prompt sanitisation, data minimisation, and avoiding sending sensitive data to AI providers are more immediately achievable.

The strongest risk mitigation for sensitive workloads is a multi-provider, multi-jurisdiction strategy: US providers for general productivity where the capability advantage is clear, European providers or self-hosted models for sensitive data processing, and Australian-hosted infrastructure where regulatory compliance requires it.
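A governance team could encode that strategy as a simple routing table keyed to the data sensitivity taxonomy discussed above. The tiers and provider names below are placeholders for whatever classifications and approved endpoints the organisation actually maintains.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"              # publicly available or non-sensitive operational data
    CONFIDENTIAL = "confidential"  # personal data, IP, legal, financial, strategic material
    RESTRICTED = "restricted"      # highest-sensitivity workloads

# Placeholder endpoints standing in for the organisation's approved
# providers in each jurisdiction.
ROUTING = {
    Sensitivity.PUBLIC: "us-provider-general",          # capability advantage is clear
    Sensitivity.CONFIDENTIAL: "eu-provider-sovereign",  # jurisdictional protection first
    Sensitivity.RESTRICTED: "self-hosted-onprem",       # no third-party exposure
}

def route_workload(tier: Sensitivity) -> str:
    """Return the approved provider endpoint for a workload's sensitivity tier."""
    return ROUTING[tier]

assert route_workload(Sensitivity.CONFIDENTIAL) == "eu-provider-sovereign"
```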

What This Means for AI Governance Programmes

The practical implication of AI sovereign risk is that provider selection is now a governance decision, not a technology procurement decision. The criteria that determine which AI provider handles which data need to sit alongside data classification policies, third-party risk frameworks, and board-level risk tolerance, not in a technology team’s vendor comparison spreadsheet.

Governance programmes drafted before the CLOUD Act’s reach was widely understood, before the Anthropic ban, and before OpenAI’s military integration need to be reviewed against these developments. The questions to add to any AI governance programme are: which providers handle which categories of data, under what jurisdictional regime, and what is the continuity plan if a provider becomes unavailable due to political, regulatory, or commercial disruption?

The organisations that navigate this well will be those that have mapped their AI workloads against a data sensitivity taxonomy, matched each workload to a provider whose jurisdictional profile is acceptable, and built enough multi-provider optionality to absorb disruption without operational crisis. Treating AI provider selection as a technology choice rather than a risk decision is the structural failure that leaves organisations exposed when the political environment shifts.

Related reading: Does the EU AI Act apply to Australian businesses? | AI compliance deadlines 2026 | What is shadow AI?

Sources