For most of the past three years, the dominant shadow AI story has been employees adopting tools their employers had not approved. New survey data from WalkMe, released on 9 April 2026, describes the inverse pattern. Employees are now rejecting tools their employers have approved.
WalkMe’s fifth annual State of Digital Adoption report surveyed 3,750 executives and employees across 14 countries at enterprises with 1,000 or more staff. The headline numbers: 54% of workers said they had skipped company AI tools at least once in the past 30 days, doing the work by hand instead. A further 33% reported never using AI at all. Combined, roughly 80% of enterprise staff are either avoiding or actively refusing the AI tools their employers have deployed. The trust gap, the policy gap, and the training gap behind those numbers are all governance problems. None of them are solved by buying more AI.
What the survey actually measured
The methodology is disclosed and reasonable. WalkMe surveyed 1,700 senior leaders and 2,050 office and hybrid workers, all at enterprises with 1,000 or more employees, across 14 countries. The surveys were conducted online through an independent research agency. WalkMe also analysed millions of real-world workflows across thousands of enterprise applications in parallel.
The framing matters. WalkMe is an SAP subsidiary that sells digital adoption software. The report has a commercial purpose. But the underlying data points are specific, the sample composition is disclosed, and the findings have been independently reported by Fortune and the regional tech press. The numbers are usable for governance analysis, even if the vendor’s product positioning is not.
The trust chasm
The single most consequential finding is the gap between executive perception and worker reality. Only 9% of workers told WalkMe they would trust AI on complex, business-critical decisions. Among executives, the figure is 61%. That 52-point gap captures the mismatch between who is buying the tools and who has to use them.
The tool adequacy gap is even larger. Asked whether employees have what they need, 88% of executives say they are confident; only 21% of workers agree. That 67-point gap points to a specific failure mode: executives are buying AI tools, deploying them, and assuming deployment is the same as adoption. The data shows it is not.
The downstream cost is now quantifiable. Workers report losing 7.9 hours per week to technology friction, equivalent to 51 working days a year. That is up 42% from 36 days in 2025, and reverses an improving trend that had taken the figure down from 43 days in 2024. Average digital transformation budgets rose 38% year-on-year to USD 54.2 million per enterprise. 40% of that spend underperformed.
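The headline conversion holds up under standard assumptions not stated in the release (a 52-week annualisation and an 8-hour working day):

```python
# Sanity-check WalkMe's friction figures. The 52-week annualisation and
# 8-hour working day are assumptions, not stated in the press release.
HOURS_LOST_PER_WEEK = 7.9
WEEKS_PER_YEAR = 52
HOURS_PER_DAY = 8

days_lost = HOURS_LOST_PER_WEEK * WEEKS_PER_YEAR / HOURS_PER_DAY
print(round(days_lost))    # ~51 working days, matching the reported figure

# The year-on-year change is internally consistent too:
# 36 days in 2025, up 42%, lands on roughly 51.
print(round(36 * 1.42))    # ~51
```

Both reported figures are internally consistent, which is a point in favour of the survey's arithmetic even if the underlying self-reported hours deserve the usual scepticism.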
The shadow AI inversion
The classic shadow AI pattern has not disappeared. WalkMe found 45% of workers used unsanctioned AI tools in the past 30 days, and 36% did so with confidential data. What has shifted is the relationship between sanctioned and unsanctioned use. Workers are simultaneously bypassing approved tools and using unapproved ones, which represents two distinct failures of the same governance system.
The contradictions in the executive responses make the governance gap visible. 78% of executives say they want to discipline shadow AI use. Only 21% of workers report ever being warned about AI policy, and 34% do not know which tools their employer has approved. Executives are threatening sanctions for behaviour they have never explained is prohibited, involving tools workers do not know are unapproved.
A separate finding sharpens the contradiction. 62% of executives told WalkMe that the risk of unsanctioned shadow AI is overstated compared to the risk of not taking enough advantage of AI. The same executive cohort wants to discipline shadow AI users while simultaneously believing the underlying risk is overstated. The result is enforcement theatre on the policy side and inadequate enablement on the deployment side.
Why this is a governance problem, not a tech problem
The WalkMe data points to four governance failures, all of which sit at the intersection of IT, HR, and risk functions.
No published approved tool list. 34% of workers do not know which AI tools their employer has approved. This is the most easily fixable item on the list. An approved AI tools register, published internally with clear sponsor and use-case guidance, removes the ambiguity that drives both shadow AI use and approved-tool avoidance. Most organisations have this register for software generally. Few have one for AI specifically.
No clear accountability for AI-assisted outputs. Workers will not trust AI tools for high-stakes work if the consequences of errors fall on them. WalkMe’s 9% trust figure for business-critical decisions is, in part, a rational response to unclear accountability. Until organisations specify who owns an AI-assisted output, who reviews it, and who carries liability when it is wrong, trust will stay at floor levels.
No training that matches deployment pace. Only 21% of workers report ever being warned about AI policy. The disconnect between AI rollout speed and AI training speed is producing the rejection pattern. Workers given a tool they have not been trained on, with policies they have not been told about, will default to manual work or to the tools they already trust. WalkMe CEO Dan Adika described the pattern as buying every employee a sports car: “you buy every employee that sports car, the Ferrari, but they don’t know how to drive.”
No safe-to-try zone. Workers who are uncertain about tool boundaries default to either avoidance or covert use. A formal experimentation zone, with clear rules about what data can be used, what outputs can be relied on, and what feedback is expected, gives workers permission to learn without risking sanctions. The absence of this zone is what turns ambiguity into rejection.
What CIOs and compliance leads should do
The WalkMe findings translate into a concrete action list. Most of the work is administrative rather than technical.
Publish an approved AI tools register. A simple internal page listing every AI tool the organisation has approved, the use cases it is approved for, the data classification it can handle, and the team that owns governance for each one. This addresses the 34% of workers who do not know what is approved.
Define accountability for AI-assisted outputs. Decide whether the human user, the team owner, or the tool sponsor owns the consequences when AI output is wrong. Document the answer and communicate it. Without this, trust will not move from 9%.
Build a safe-to-try zone. A defined sandbox, with explicit data rules and explicit non-punishment commitments for experimentation, lets workers learn without the risk that drives covert use. The zone should be opt-in, time-limited, and feedback-collecting.
Measure adoption alongside availability. Most AI rollout dashboards measure deployment (how many seats are activated). The WalkMe data shows deployment is the wrong metric. Measure usage against the work the tool is supposed to support. A 90% deployment rate with 10% meaningful use is a 90% wasted spend.
Treat training as an ongoing program, not a launch event. Only 21% of workers have been warned about AI policy. The fix is a recurring training cycle that updates as policies and tools change, with completion tracked alongside other compliance training, rather than a single all-staff email.
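The register and the adoption-versus-deployment metric from the action list can be sketched as a simple internal data model. All tool names, field names, and thresholds below are illustrative assumptions, not WalkMe recommendations:

```python
# Illustrative sketch: an approved AI tools register entry plus the
# usage-against-availability check the article argues for, in place of
# measuring deployment (seat activation) alone. Everything here is a
# hypothetical example.
from dataclasses import dataclass, field

@dataclass
class ApprovedTool:
    name: str
    approved_use_cases: list = field(default_factory=list)
    max_data_classification: str = "internal"  # e.g. never "confidential"
    governance_owner: str = ""                 # team accountable for the tool
    seats_deployed: int = 0
    weekly_active_users: int = 0

    def meaningful_use_rate(self) -> float:
        # Usage relative to availability, not seats activated.
        if self.seats_deployed == 0:
            return 0.0
        return self.weekly_active_users / self.seats_deployed

register = [
    ApprovedTool("DraftAssist",                      # hypothetical tool
                 ["email drafting", "meeting notes"],
                 governance_owner="Digital Workplace",
                 seats_deployed=900,
                 weekly_active_users=90),
]

for tool in register:
    rate = tool.meaningful_use_rate()
    # A 90% deployment rate with 10% meaningful use is the wasted-spend
    # pattern the article describes; flag it for review.
    if rate < 0.25:
        print(f"{tool.name}: {rate:.0%} meaningful use - review rollout")
```

Publishing the register addresses the 34% of workers who do not know what is approved; wiring the usage metric into the same record keeps the deployment dashboard honest.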
What this means for Australian organisations
The WalkMe survey covers 14 countries at large enterprises. Country-level breakdowns are not publicly disclosed, but the patterns described are not country-specific. Australian enterprises facing the December 2026 automated decision-making transparency requirement under the Privacy and Other Legislation Amendment Act 2024 are exposed on both sides of the WalkMe finding: unsanctioned shadow AI use creates ADM transparency risk, and the rejection of sanctioned tools means the governance investment made to support compliance sits unused.
The NSW Work Health and Safety Amendment (Digital Work Systems) Act 2026 creates an additional pressure point. Workers rejecting AI tools because they do not trust them, or do not know what they are approved for, may produce the kind of psychosocial workplace pattern the Act treats as a hazard. AI rollouts that ignore the human side risk creating a WHS exposure on top of the productivity loss WalkMe documents.
The pattern executives keep missing
The WalkMe report says, in Adika’s words, that “the problem is not AI’s capability. The technology will keep improving. What won’t improve on its own is the human side: the trust gap, the governance gap, the question of who acts, when, and with what guardrails.” The data validates the framing: AI capability is not the constraint; governance is.
Organisations treating AI rollout as a procurement problem (buy the tools), a training problem (run a course), or an enforcement problem (discipline the shadow users) will keep producing the WalkMe numbers. The 80% rejection rate is a structural outcome of treating governance as a compliance overlay rather than the primary deployment work. The organisations that close the trust gap will be the ones that treat AI policy, accountability, and safe experimentation as the foundation of the rollout, not the wrapper around it.
Sources
- WalkMe, “Enterprises Lose 51 Workdays Per Employee to Technology Friction Annually Despite Record AI Investment,” press release, 9 April 2026 (methodology, headline figures, Adika and Kirkpatrick quotes). globenewswire.com
- Fortune, “White-collar workers are quietly rebelling against AI as 80% outright refuse adoption mandates,” 9 April 2026 (independent validation, Adika “Ferrari” analogy). fortune.com
- CFOtech, “Workers bypass AI as trust gap widens, WalkMe warns,” 10 April 2026. cfotech.news
- HRTech Cube, “Study: Tech Friction Drains 51 Workdays Despite AI Investment,” April 2026. hrtechcube.com
- The Gaming Boardroom (HR News syndication), “Enterprises Lose 51 Workdays Per Employee to Technology Friction,” April 2026. thegamingboardroom.com
- Automation Magazine, “Enterprises lose 51 workdays per employee to technology friction annually despite record AI investment,” April 2026. automationmagazine.co.uk