The Financial Times and research firm Focaldata have launched a monthly AI workforce tracker. The first release, published 23 April 2026, is based on a poll of 4,000 workers across the US and UK. The headline finding: more than 60% of the highest-paid workers use AI daily, compared with just 16% of the lowest earners. The gap holds across industries, seniority levels, and both countries.

Most coverage of this data has framed it as an inequality story: AI is widening the pay gap. That is true and worth reporting. But for governance and security teams, the finding says something more specific and more actionable. The workers adopting AI fastest are the ones with the most authority, the broadest access to confidential data, the most autonomy in how they work, and the least day-to-day oversight from managers. That is a governance problem, not just a workforce trend.

What the data shows

The FT-Focaldata tracker found a sharp income gradient in workplace AI use. Over 60% of top earners use AI daily for writing, research, analysis, coding, planning, and decision support. Among the lowest earners, the figure drops to 16%. The gap is not explained by access to technology alone. Senior staff have more autonomy to experiment, more access to paid AI tools, more dedicated training time, and fewer restrictions on how they use those tools.

The data also shows a persistent gender divide, with men significantly more likely than women to use AI tools across technology, education, and retail. Workers in their 30s use AI more than the youngest workers, contradicting the assumption that Gen Z leads adoption. Corporate training emerged as the single biggest driver of uptake in the FT data: organisations that invest in structured AI training programmes see faster adoption, but primarily among staff who already have the autonomy and seniority to act on that training.

Nobel laureate Daron Acemoglu, cited in the FT coverage, described AI as “almost for sure” going to increase inequality between labour and capital. He noted that the popular narrative positions AI tools as democratising, but effective use often requires education, technical familiarity, and abstract thinking that not all workers have equal access to.

Independent validation

The FT-Focaldata findings align with Gallup’s February 2026 survey of 23,717 US employees. Gallup found that 50% of employed adults now use AI in their role at least a few times a year; 13% use it daily, and 28% use it a few times a week or more. Critically, 21% of leaders who use AI at least a few times a year said it had an “extremely positive” impact on productivity, compared with only 13% of individual contributors. The productivity benefit is real, but it accrues disproportionately to the people who already had the most influence in the organisation.

The SHRM State of AI in HR 2026 report, published 16 April 2026, adds another dimension. HR teams are deploying AI in screening, scheduling, performance assessment, and workforce planning, but adoption varies sharply by organisation size and HR function. The gap between organisations that train HR staff on AI and those that do not mirrors the FT-Focaldata income gap: training determines adoption, and adoption determines who benefits.

Why this is a shadow AI governance problem

SAW has covered shadow AI primarily as an IT security issue: employees using unapproved tools, data leaking into public models, unmonitored browser extensions accessing corporate systems. The Vercel/Context.ai breach is the most recent example of that pattern. The FT-Focaldata data adds a layer that most shadow AI frameworks miss: the people most likely to use AI tools, including unapproved ones, are not junior staff experimenting out of curiosity. They are senior staff with access to the most sensitive information in the organisation.

Consider the profile of a high-earner daily AI user. This person likely has access to board papers, financial forecasts, customer databases, employee records, legal documents, and strategic plans. They have the authority to make decisions without seeking approval, and their corporate email, cloud storage, and collaboration tools are all connected to a single identity. If they install an AI tool and grant it OAuth access to their workspace, the blast radius is far larger than if a junior analyst does the same thing. The Copilot oversharing problem that SAW covered in March operates on the same principle: the more access the user has, the more damage an AI tool with inherited permissions can do.

The WalkMe data published in April showed that 45% of enterprise workers had used unsanctioned AI tools and 36% had put confidential data into them. The FT-Focaldata poll now tells us who those workers disproportionately are: the best-paid, most senior, most trusted staff in the building. Most security awareness training is not designed for these employees, and they are the ones most likely to believe the rules do not apply to them because their judgment is trusted.

The training paradox

The FT data identifies corporate training as the single biggest driver of AI uptake. That is good news for adoption. It is complicated news for governance. Structured training programmes increase AI use, but if those programmes do not include data handling rules, approved tool lists, and clear boundaries on what data can and cannot be entered into AI tools, they accelerate both adoption and risk simultaneously.

The SAW analysis of employee AI training gaps (March 2026) found that most employees have never received specific training on AI data risks. The FT-Focaldata findings sharpen that picture: the training that does exist disproportionately reaches senior staff, who then use AI more aggressively, while junior staff remain untrained and unengaged. The result is a workforce split into three tiers: senior staff using AI daily with high access and limited oversight, mid-level staff using AI occasionally with inconsistent guidance, and junior staff barely using AI at all.

None of those tiers is well served by a generic acceptable-use policy posted on the intranet.

What governance and IT teams should do

Segment AI governance by role and access level. The FT-Focaldata data shows that AI adoption varies by seniority and income. Governance controls should match. Senior staff with broad data access need stricter guardrails on which AI tools they can connect to enterprise identity, not weaker ones. The assumption that senior staff can be trusted to govern their own AI use is the assumption that creates the highest-impact incidents.
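One way to make that segmentation concrete is to express the guardrails as policy data rather than prose, so the identity and DLP tooling can enforce them consistently. The sketch below is a minimal illustration under assumed names: the tier labels, tool lists, and control flags are hypothetical, not a standard or a real product configuration.

```python
# Illustrative sketch: AI guardrails segmented by access tier.
# Tier names, tool identifiers, and control flags are assumptions
# for illustration; note the controls tighten as data access broadens.

GUARDRAILS_BY_TIER = {
    "executive": {          # board papers, M&A, legal privilege
        "approved_tools": ["internal-llm-gateway"],
        "oauth_self_service": False,   # new AI OAuth grants need security review
        "dlp_logging": "full",
    },
    "manager": {            # customer data, HR records, procurement
        "approved_tools": ["internal-llm-gateway", "copilot-enterprise"],
        "oauth_self_service": False,
        "dlp_logging": "full",
    },
    "individual_contributor": {
        "approved_tools": ["internal-llm-gateway", "copilot-enterprise"],
        "oauth_self_service": True,    # pre-vetted, limited scopes only
        "dlp_logging": "sampled",
    },
}

def controls_for(tier: str) -> dict:
    """Look up guardrails, defaulting to the strictest tier if unknown."""
    return GUARDRAILS_BY_TIER.get(tier, GUARDRAILS_BY_TIER["executive"])
```

Defaulting unknown roles to the strictest tier encodes the article’s point directly: seniority and ambiguity both earn more control, not less.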

Audit AI tool usage by seniority. Most shadow AI detection focuses on volume: which tools are being used most, how many users, how much data. The FT data suggests a more targeted approach: which of the organisation’s most senior staff are using AI tools, which tools are they using, and what data access do those tools inherit? A partner at a law firm using a consumer AI tool with access to client privilege is a different risk profile from a marketing coordinator using the same tool for social media captions.
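In practice, that targeted audit amounts to joining two datasets the organisation already has: OAuth grant records exported from the identity provider, and a seniority roster from the HR system. A minimal sketch, assuming illustrative field names (`user`, `app`, `scopes`, tier labels) rather than any real IdP schema:

```python
# Hypothetical sketch: cross-reference identity-provider OAuth grants
# with an HR roster to surface senior staff using unapproved AI tools.
# All field names and example records are illustrative assumptions.

APPROVED_AI_TOOLS = {"copilot-enterprise", "internal-llm-gateway"}

oauth_grants = [  # e.g. exported from the identity provider's audit log
    {"user": "partner@example.com", "app": "consumer-ai-notes",
     "scopes": ["mail.read", "files.read.all"]},
    {"user": "coord@example.com", "app": "copilot-enterprise",
     "scopes": ["files.read"]},
]

hr_roster = {  # seniority tier per user, from the HR system
    "partner@example.com": "executive",
    "coord@example.com": "individual_contributor",
}

def flag_high_risk(grants, roster, approved):
    """Return grants to unapproved AI tools, most senior users first."""
    tier_rank = {"executive": 0, "manager": 1, "individual_contributor": 2}
    flagged = [g for g in grants if g["app"] not in approved]
    return sorted(flagged, key=lambda g: tier_rank.get(roster.get(g["user"]), 3))

for grant in flag_high_risk(oauth_grants, hr_roster, APPROVED_AI_TOOLS):
    print(grant["user"], grant["app"], ",".join(grant["scopes"]))
```

Sorting by seniority rather than by volume is the point: the partner’s single consumer-tool grant surfaces first, while the coordinator’s approved-tool use does not appear at all.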

Make training role-specific, not generic. If corporate training is the biggest driver of uptake, and senior staff are the primary beneficiaries of that training, the training programme should include explicit data governance content calibrated to the user’s access level. A C-suite training module should cover board paper handling, merger discussions, and legal privilege. A mid-level module should cover customer data, HR records, and procurement documents. The content should name the tools that are approved and the tools that are not.

Apply the same monitoring to senior staff as to everyone else. The FT-Focaldata data challenges a common organisational assumption: that senior staff are lower-risk because they are more experienced and more trustworthy. In the context of AI tool adoption, the opposite is true. They adopt faster, use more tools, handle more sensitive data, and are subject to less routine oversight. Monitoring and logging should apply equally regardless of seniority.

Connect the FT data to the Vercel pattern. The Vercel breach started with one employee granting one AI tool broad OAuth access. The FT data tells us that the employees most likely to do this are senior, well-paid, and daily AI users. Combining the two findings gives governance teams a clear risk profile to prioritise.

The uncomfortable conclusion

AI adoption is not evenly distributed. The workers using AI the most are the ones with the most to lose and the most to expose. Governance frameworks built on the assumption that shadow AI is a junior-staff, curiosity-driven problem are misallocating their attention. The real governance gap is at the top of the organisation, where adoption is highest, access is broadest, and oversight is thinnest.

The FT-Focaldata tracker is monthly. The data will sharpen with each release. For governance teams, the first edition is enough to act on: segment your controls by access level, audit your senior staff’s AI tool usage, and stop assuming that experience equals safe practice.

Sources

  • Financial Times (Madhumita Murgia and John Burn-Murdoch), “High earners race ahead on AI as workplace divide widens,” 23 April 2026 (FT-Focaldata poll, 4,000 workers, 60% vs 16% adoption gap, gender divide, training as biggest driver). ft.com (paywalled; secondary coverage below)
  • ResultSense, “AI adoption divide widens as high earners race ahead at work,” 23 April 2026 (summary of FT-Focaldata findings, Acemoglu context, occupational concentration data). resultsense.com
  • Techloy, “AI Is Set to Widen Income Gaps, Rewarding Those Already Earning the Most,” 23 April 2026 (Acemoglu “almost for sure” quote, democratising rhetoric vs reality). techloy.com
  • Gallup, “Rising AI Adoption Spurs Workforce Changes,” 13 April 2026 (23,717 US employees, 50% AI use, 13% daily, leader vs contributor productivity impact). gallup.com
  • SHRM, “The State of AI in HR 2026 Report,” 16 April 2026 (HR AI adoption by function and organisation size, training-adoption link). shrm.org
  • Metaintro, “Why High Earners Are Racing Ahead on AI,” 23 April 2026 (detailed summary, career mobility framing, autonomy and access analysis). metaintro.com