<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/assets/feed-style.xsl"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Shadow AI Watch</title>
    <description>Independent coverage of workplace AI governance, shadow AI risks, compliance, and enterprise AI policy.</description>
    <link>https://shadowaiwatch.com</link>
    <atom:link href="https://shadowaiwatch.com/feed.xml" rel="self" type="application/rss+xml"/>
    <language>en-AU</language>
    <lastBuildDate>Wed, 06 May 2026 06:00:00 +0800</lastBuildDate>
    <item>
      <title>The Workers Using AI the Most Are the Ones With the Most Access to Sensitive Data. That Is a Governance Problem.</title>
      <link>https://shadowaiwatch.com/shadow-ai/ft-focaldata-ai-adoption-divide-high-earners-governance-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/shadow-ai/ft-focaldata-ai-adoption-divide-high-earners-governance-2026/</guid>
      <pubDate>Wed, 06 May 2026 06:00:00 +0800</pubDate>
      <description>An FT-Focaldata poll of 4,000 US and UK workers found over 60% of top earners use AI daily versus 16% of lowest earners. The people with the most autonomy, the most seniority, and the broadest access are adopting AI fastest, with the least oversight.</description>
      <category>Shadow AI</category>
    </item>
    <item>
      <title>The DOJ Just Backed Elon Musk's xAI Against Colorado's AI Discrimination Law. Compliance Teams Should Keep Building Anyway.</title>
      <link>https://shadowaiwatch.com/compliance/doj-xai-colorado-ai-act-constitutional-challenge-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/doj-xai-colorado-ai-act-constitutional-challenge-2026/</guid>
      <pubDate>Tue, 05 May 2026 06:00:00 +0800</pubDate>
      <description>The US Department of Justice intervened in xAI's constitutional challenge to Colorado SB24-205 on 24 April 2026, calling the state's algorithmic discrimination requirements unconstitutional. The law's 30 June 2026 compliance date has not changed.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>EU AI Act Delay Talks Collapsed. The 2 August 2026 High-Risk Deadline Is Back.</title>
      <link>https://shadowaiwatch.com/compliance/eu-ai-act-omnibus-collapse-august-2026-deadline-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/eu-ai-act-omnibus-collapse-august-2026-deadline-2026/</guid>
      <pubDate>Mon, 04 May 2026 06:00:00 +0800</pubDate>
      <description>Twelve hours of EU trilogue negotiations broke down on 28 April 2026 without agreement on the Digital Omnibus reforms. The original 2 August 2026 deadline for high-risk AI systems and Article 50 transparency obligations is still in force. Compliance teams that were banking on an extension need to change course.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>The FTC Has Quietly Built an AI Enforcement Playbook. A Dozen Cases in 2025 Show What Comes Next.</title>
      <link>https://shadowaiwatch.com/compliance/ftc-ai-enforcement-playbook-section-5-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/ftc-ai-enforcement-playbook-section-5-2026/</guid>
      <pubDate>Fri, 01 May 2026 06:00:00 +0800</pubDate>
      <description>The FTC brought at least 12 AI-related enforcement actions in 2025, targeting deceptive capability claims, undisclosed automated decisions, and AI-generated fake content. Section 5 of the FTC Act is doing the work that AI-specific legislation has not.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>The UK Just Made an AI Code of Practice a Legal Requirement. The ICO Has No Choice but to Write One.</title>
      <link>https://shadowaiwatch.com/compliance/uk-ico-ai-adm-code-of-practice-si-2026-425/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/uk-ico-ai-adm-code-of-practice-si-2026-425/</guid>
      <pubDate>Thu, 30 Apr 2026 06:00:00 +0800</pubDate>
      <description>A new statutory instrument requires the UK Information Commissioner to produce a formal code of practice on AI and automated decision-making. SI 2026/425 comes into force on 12 May 2026 and includes mandatory guidance on children's data.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>A Vercel Employee Installed a Consumer AI Tool. It Cost the Company a Supply-Chain Breach Now on Sale for USD 2 Million.</title>
      <link>https://shadowaiwatch.com/shadow-ai/vercel-context-ai-breach-shadow-ai-supply-chain-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/shadow-ai/vercel-context-ai-breach-shadow-ai-supply-chain-2026/</guid>
      <pubDate>Wed, 29 Apr 2026 06:00:00 +0800</pubDate>
      <description>Vercel was breached through Context.ai, a consumer AI productivity tool connected to a single employee's Google Workspace with 'Allow All' OAuth permissions. Stolen data is now listed on BreachForums for USD 2 million. The kill chain started with a Roblox cheat download.</description>
      <category>Shadow AI</category>
    </item>
    <item>
      <title>Canada Is Spending $890 Million to Build a Sovereign AI Supercomputer. The Governance Signal Is Bigger Than the Hardware.</title>
      <link>https://shadowaiwatch.com/governance/canada-sovereign-ai-compute-infrastructure-program-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/governance/canada-sovereign-ai-compute-infrastructure-program-2026/</guid>
      <pubDate>Tue, 28 Apr 2026 06:00:00 +0800</pubDate>
      <description>Canada's AI Sovereign Compute Infrastructure Program opened applications on 15 April 2026. The $890 million investment turns data residency and provider jurisdiction from abstract risks into funded infrastructure decisions.</description>
      <category>Governance</category>
    </item>
    <item>
      <title>The FBI's 2025 Internet Crime Report Puts AI-Enabled Fraud at USD 893 Million. That Number Is a Floor, Not a Ceiling.</title>
      <link>https://shadowaiwatch.com/research/fbi-ic3-2025-ai-enabled-fraud-893-million/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/research/fbi-ic3-2025-ai-enabled-fraud-893-million/</guid>
      <pubDate>Fri, 24 Apr 2026 06:00:00 +0800</pubDate>
      <description>The FBI's IC3 logged 22,364 AI-related complaints with losses exceeding USD 893 million in 2025. It is the first time the annual report includes a dedicated AI section. Enterprise risk frameworks need to catch up.</description>
      <category>Research</category>
    </item>
    <item>
      <title>Stanford's 2026 AI Index: Incidents Up 55%, Transparency Index Falls 18 Points, and Adoption at 88%. The Governance Maths Are Getting Worse.</title>
      <link>https://shadowaiwatch.com/research/stanford-ai-index-2026-incidents-transparency-governance/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/research/stanford-ai-index-2026-incidents-transparency-governance/</guid>
      <pubDate>Thu, 23 Apr 2026 06:00:00 +0800</pubDate>
      <description>Stanford HAI's 2026 AI Index shows AI incidents rose from 233 to 362, the Foundation Model Transparency Index dropped from 58 to 40, and organisational adoption hit 88%. The gap between capability and accountability is widening.</description>
      <category>Research</category>
    </item>
    <item>
      <title>Grant Thornton Finds 78% of Leaders Doubt They'd Pass an AI Governance Audit</title>
      <link>https://shadowaiwatch.com/governance/grant-thornton-ai-proof-gap-governance-audit-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/governance/grant-thornton-ai-proof-gap-governance-audit-2026/</guid>
      <pubDate>Wed, 22 Apr 2026 06:00:00 +0800</pubDate>
      <description>Grant Thornton surveyed 950 senior US leaders: 78% lack confidence that they could pass an AI governance audit within 90 days, and only 12% say their workforce is AI-ready. The gap between AI spend and AI proof is widening.</description>
      <category>Governance</category>
    </item>
    <item>
      <title>ASIC and APRA Are Now Monitoring Anthropic's Mythos. Every Regulated Firm Should Be Asking What That Means for Them.</title>
      <link>https://shadowaiwatch.com/governance/asic-apra-anthropic-mythos-cybersecurity-risk-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/governance/asic-apra-anthropic-mythos-cybersecurity-risk-2026/</guid>
      <pubDate>Tue, 21 Apr 2026 06:00:00 +0800</pubDate>
      <description>Australian financial regulators ASIC and APRA have confirmed they are monitoring Anthropic's Claude Mythos Preview, an AI model that can autonomously find and exploit zero-day vulnerabilities at scale. ASIC expects licensees to be on the front foot.</description>
      <category>Governance</category>
    </item>
    <item>
      <title>WalkMe Says 80% of Enterprise Workers Are Dodging AI Tools. That's a Governance Failure, Not a Tech Problem.</title>
      <link>https://shadowaiwatch.com/research/walkme-enterprise-ai-rejection-governance-gap-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/research/walkme-enterprise-ai-rejection-governance-gap-2026/</guid>
      <pubDate>Tue, 21 Apr 2026 06:00:00 +0800</pubDate>
      <description>WalkMe surveyed 3,750 enterprise workers across 14 countries: 54% bypass company-sanctioned AI, 33% never use it, 78% of executives want to discipline shadow AI use, and only 21% of workers have ever been warned about AI policy. The numbers describe a governance gap, not a training gap.</description>
      <category>Research</category>
    </item>
    <item>
      <title>UK and EU Regulators Just Drew a Target Around Agentic AI. Consumer-Facing Bots Are Now a Compliance Problem, Not an Innovation Story.</title>
      <link>https://shadowaiwatch.com/governance/uk-eu-agentic-ai-consumer-law-cma-drcf-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/governance/uk-eu-agentic-ai-consumer-law-cma-drcf-2026/</guid>
      <pubDate>Mon, 20 Apr 2026 06:00:00 +0800</pubDate>
      <description>The UK CMA, the cross-regulator DRCF, and the ICO published cluster guidance on agentic AI in March 2026. The CMA can fine up to 10% of global turnover under the DMCC Act. The EU AI Act caps manipulation penalties at EUR 35 million or 7% of turnover.</description>
      <category>Governance</category>
    </item>
    <item>
      <title>The EU Just Moved the AI Act's High-Risk Deadline. Systems Deployed Before It May Never Have to Comply.</title>
      <link>https://shadowaiwatch.com/compliance/eu-ai-act-high-risk-delay-omnibus-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/eu-ai-act-high-risk-delay-omnibus-2026/</guid>
      <pubDate>Fri, 17 Apr 2026 06:00:00 +0800</pubDate>
      <description>The EU is pushing back high-risk AI obligations from August 2026 to late 2027 or 2028. Non-retroactivity means systems deployed before those dates may never have to comply.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>DOJ Is Using Existing Law to Police AI-Generated Job Ads. Employers Are Still Treating It as a Tech Problem.</title>
      <link>https://shadowaiwatch.com/compliance/doj-ai-job-ads-citizenship-discrimination-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/doj-ai-job-ads-citizenship-discrimination-2026/</guid>
      <pubDate>Thu, 16 Apr 2026 06:00:00 +0800</pubDate>
      <description>Two DOJ settlements in six weeks have put AI-assisted recruitment squarely inside federal anti-discrimination enforcement. The vendor did not draft the ads, the AI did, and the employer still paid.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Canada's Privacy Act Review Puts AI Transparency and Mandatory PIAs on the Statutory Agenda</title>
      <link>https://shadowaiwatch.com/compliance/canada-privacy-act-review-ai-pia-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/canada-privacy-act-review-ai-pia-2026/</guid>
      <pubDate>Wed, 15 Apr 2026 06:00:00 +0800</pubDate>
      <description>Canada has opened the first comprehensive review of its 1983 Privacy Act in 43 years. The proposed reforms would write AI transparency and mandatory privacy impact assessments into statute.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Twelve US States Just Launched the First Coordinated AI Insurance Examination. The Template Will Spread.</title>
      <link>https://shadowaiwatch.com/governance/naic-ai-insurance-evaluation-tool-pilot-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/governance/naic-ai-insurance-evaluation-tool-pilot-2026/</guid>
      <pubDate>Tue, 14 Apr 2026 06:00:00 +0800</pubDate>
      <description>Twelve US states have launched the first coordinated examination of AI claims decisions using a structured evaluation tool. Regulators want technical evidence, not policy statements.</description>
      <category>Governance</category>
    </item>
    <item>
      <title>Australia's Draft Children's Online Privacy Code Quietly Sets a New Baseline for AI Data Handling</title>
      <link>https://shadowaiwatch.com/compliance/oaic-childrens-online-privacy-code-ai-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/oaic-childrens-online-privacy-code-ai-2026/</guid>
      <pubDate>Mon, 13 Apr 2026 06:00:00 +0800</pubDate>
      <description>Australia's OAIC has released the draft Children's Online Privacy Code. The rules target children's data, but the AI design baseline they set will likely become the expectation for every AI system.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Connecticut's Attorney General Just Showed How Existing Laws Already Regulate AI</title>
      <link>https://shadowaiwatch.com/compliance/connecticut-ag-existing-laws-regulate-ai-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/compliance/connecticut-ag-existing-laws-regulate-ai-2026/</guid>
      <pubDate>Fri, 10 Apr 2026 06:00:00 +0800</pubDate>
      <description>Connecticut's Attorney General has mapped common AI uses to existing privacy, civil rights, and consumer protection laws. The message: regulators do not need new AI laws to come after you.</description>
      <category>Compliance</category>
    </item>
    <item>
      <title>Two-Thirds of Organisations Are Approving AI Despite Known Security Risks</title>
      <link>https://shadowaiwatch.com/research/ai-rollout-pressure-outrunning-governance-2026/</link>
      <guid isPermaLink="true">https://shadowaiwatch.com/research/ai-rollout-pressure-outrunning-governance-2026/</guid>
      <pubDate>Thu, 09 Apr 2026 06:00:00 +0800</pubDate>
      <description>Two global studies from TrendAI and Deloitte, surveying nearly 7,000 leaders between them, find the same pattern: organisations are deploying AI faster than they can govern it. Only 38% have comprehensive policies.</description>
      <category>Research</category>
    </item>
  </channel>
</rss>
