AI is now the leading area where US employers expect workplace policy and regulatory changes to affect their business. Littler’s 14th Annual Employer Survey, released 6 May 2026, found that 84% of respondents expect business impacts from AI-related policy or regulatory changes in the next 12 months. That is double the 42% who said the same in 2025, when DEI was the top concern.
The survey draws on more than 300 US-based C-suite executives, in-house lawyers, and HR professionals across a range of industries and company sizes. Data privacy rose to second place at 53%, up from 31%. Immigration dropped to 49% from 75%. DEI fell to 38% from 84%. In a single year, AI has moved from a niche technology question to the single biggest regulatory risk on the employer radar.
James A. Paretti Jr., co-chair, and Shannon Meade, executive director, of Littler’s Workplace Policy Institute, framed the shift: “The shifts in this year’s survey relating to immigration and DEI do not mean that these issues have dissipated. Rather, businesses appear to be adjusting to a ‘new normal’ in the second year of the Trump administration and turning their attention to what’s coming next, particularly AI, as the workplace policy and regulatory landscape continues to evolve.”
The policy-practice gap
The 84% headline captures employer awareness. The operational data underneath it captures the gap between awareness and action.
68% of respondents now have a formal policy governing AI use at work, a significant improvement on Littler's 2025 survey, in which 38% had a specific policy and a further 13% had only guidelines. But a policy is only the starting point. The controls that make a policy enforceable are weaker.
Only 55% have a formal review or approval process for AI tools before they are deployed. Only 54% restrict the information employees can enter into AI systems. That means roughly half of employers have an AI policy that tells employees to use AI responsibly, but no mechanism to review which tools they are using or prevent them from entering confidential data.
The gap between policy and enforcement mirrors a finding SAW has documented from the other direction. APRA’s 30 April industry letter found Australian financial institutions relying on “policy direction or detective, after-the-fact measures, rather than enforceable technical restrictions or robust preventative controls” to manage shadow AI. The Littler data shows the same pattern across US employers more broadly: the policy exists, but the controls that make it operational do not.
What employers are worried about
79% of respondents expressed concern about AI-related litigation in the next 12 months. The top three areas of concern are data privacy (49%), discrimination or bias (45%), and compliance with state and local AI laws (43%).
The litigation concern is grounded in reality. SAW has covered FTC enforcement actions accelerating under Section 5, with at least a dozen AI-related cases in 2025 alone. The DOJ’s intervention in xAI’s challenge to Colorado’s AI discrimination law has created uncertainty about state-level compliance requirements but has not eliminated them. Grant Thornton’s survey of 950 US senior leaders found 78% lack confidence they could pass an AI governance audit in 90 days. The Littler data adds the employer perspective: these are not abstract risks. Employers expect to face them.
The discrimination and bias concern is particularly sharp for employers using AI in hiring and performance management. Littler’s report notes that AI use in recruitment, candidate screening, and workforce management is growing, but few employers have bias testing or impact assessment processes that meet the standard being set by state laws like Colorado’s SB24-205 or New York City’s Local Law 144.
The staffing shift
The survey captures an early signal that AI is beginning to affect headcount decisions. Littler reports that as AI becomes more deeply embedded in workplace functions, employers are re-evaluating staffing needs. The data does not quantify the scale of workforce reduction, but it confirms that the conversation has moved from “will AI change headcount?” to “how should we manage headcount changes driven by AI?”
For governance teams, the staffing question creates a second-order compliance risk. If employers reduce headcount based on AI-driven productivity assumptions, and those assumptions rest on employees using AI tools in ways the employer has not assessed or approved, the workforce planning decision is built on undocumented foundations. The Novorésumé finding that 47% of workers use AI to finish early and the FT-Focaldata data showing senior staff adopt AI fastest both suggest that the productivity gains employers observe may be partially attributable to AI use they have not sanctioned, measured, or understood.
The federal vs state tension
The Littler survey lands at a moment when the US AI regulatory landscape is genuinely contradictory. The Trump administration’s National AI Framework pushes for federal preemption of state AI laws and has created a DOJ AI Litigation Task Force to challenge state regulations the administration considers burdensome. At the same time, state legislatures continue advancing AI bills: Colorado’s SB24-205 takes effect 30 June 2026 (unless enjoined by a court), and more than 40 states have introduced AI-related legislation.
Littler’s data shows employers are aware of both pressures. The 84% who expect AI regulatory impact are not expecting a single, coherent framework. They are expecting a patchwork: federal enforcement on AI claims (FTC, SEC), state-level requirements on bias testing and transparency (Colorado, New York, Illinois), and industry-specific guidance (financial services, healthcare, employment). The practical response is not to choose one framework to comply with. It is to build governance that can satisfy multiple, overlapping obligations simultaneously.
What this means for governance and HR teams
Close the gap between policy and controls. A written AI policy without tool review, data input restrictions, and enforcement mechanisms is a statement of intent, not a governance control. The Littler data shows roughly half of employers are in this position. Moving from policy to control requires technical implementation: approved tool lists, DLP rules for AI services, OAuth app restrictions, and audit logging.
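The combination of an approved tool list and data input restrictions can be sketched in a few lines. This is a minimal illustration, not a production DLP system: the domains, the restricted-data patterns, and the function name are all hypothetical, and a real deployment would enforce these rules at the network or proxy layer rather than in application code.

```python
import re

# Hypothetical allowlist of AI services that passed the formal review process.
APPROVED_AI_DOMAINS = {"approved-llm.example.com", "internal-copilot.example.net"}

# Hypothetical patterns for data employees must not enter into AI systems
# (an SSN-like number and an internal project codename format).
RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    re.compile(r"\bPROJECT-[A-Z]{4}\b"),
]

def check_ai_request(domain: str, payload: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound request to an AI service."""
    if domain not in APPROVED_AI_DOMAINS:
        return False, f"{domain} is not an approved AI tool"
    for pattern in RESTRICTED_PATTERNS:
        if pattern.search(payload):
            return False, "payload contains restricted data"
    return True, "ok"

# An unvetted tool is blocked even with a harmless payload;
# an approved tool is blocked if the payload contains restricted data.
allowed, reason = check_ai_request("chat.unvetted-ai.example", "summarise this memo")
```

The point of the sketch is the ordering: tool approval and data restrictions are separate checks, which matches the survey's finding that the two controls (55% and 54% adoption) are distinct gaps.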
Prepare for AI-related litigation. The 79% litigation concern should translate into documentation. Employers need records of what AI tools are deployed, what data they process, what decisions they inform, and what human oversight exists. When the lawsuit arrives (discrimination, privacy, misrepresentation), the first question will be “show us your AI governance documentation.” The answer cannot be “we had a policy.”
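The documentation the paragraph describes (what tools are deployed, what data they process, what decisions they inform, what human oversight exists) maps naturally onto a structured inventory record. The sketch below is one possible shape under those headings; every field name and value is illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory. Field names are illustrative."""
    tool_name: str
    vendor: str
    approved_on: date
    data_categories: list[str]     # what data the tool processes
    decisions_informed: list[str]  # e.g. candidate screening, scheduling
    human_oversight: str           # who reviews the tool's outputs
    bias_tested: bool
    review_due: date

# Hypothetical example record for a hiring tool.
record = AIToolRecord(
    tool_name="resume-screener",
    vendor="ExampleVendor",
    approved_on=date(2026, 1, 15),
    data_categories=["applicant CVs"],
    decisions_informed=["candidate shortlisting"],
    human_oversight="recruiter reviews every automated rejection",
    bias_tested=True,
    review_due=date(2026, 7, 15),
)

# A serialisable list of records is the audit trail the discovery
# request will ask for.
inventory = [asdict(record)]
```

Even a register this simple answers the first discovery question with something better than “we had a policy.”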
Treat AI governance as employment law, not technology law. Littler’s positioning of AI as the top workforce policy concern signals that AI governance is migrating from IT departments to employment law practices. Discrimination, privacy, accommodation, and wage-and-hour obligations all apply to AI-assisted decisions. HR and legal teams that treat AI as someone else’s problem are missing the compliance risk that is most likely to generate litigation.
Build for the patchwork. Employers operating across multiple US states cannot afford to build compliance for one jurisdiction at a time. The governance architecture should be modular: a core framework covering documentation, transparency, human oversight, and bias testing, with jurisdiction-specific modules that address Colorado, New York, Illinois, and future state requirements as they take effect.
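The modular architecture can be expressed as a core control set composed with per-jurisdiction modules. The sketch below is a simplification for illustration: the module contents are rough shorthand for each law's requirements, not legal advice, and the control names are hypothetical.

```python
# Core obligations applied everywhere the employer operates.
CORE_CONTROLS = ["documentation", "transparency", "human oversight", "bias testing"]

# Jurisdiction-specific modules, added only where they apply.
# Contents are shorthand for illustration, not a compliance checklist.
JURISDICTION_MODULES = {
    "CO": ["impact assessment", "consumer notice", "AG reporting"],  # SB24-205
    "NYC": ["annual bias audit", "candidate notice"],                # Local Law 144
}

def controls_for(jurisdictions: list[str]) -> list[str]:
    """Core framework plus the modules for each applicable jurisdiction."""
    controls = list(CORE_CONTROLS)
    for j in jurisdictions:
        controls.extend(JURISDICTION_MODULES.get(j, []))
    return controls

# A Colorado-only employer carries the core set plus the CO module;
# adding NYC later appends its module without rebuilding the core.
co_controls = controls_for(["CO"])
```

The design choice matters more than the code: new state requirements become new modules, not rewrites of the core framework.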
Monitor the Colorado deadline. Colorado’s SB24-205 remains scheduled for 30 June 2026 unless a court orders otherwise. Employers using AI in hiring or other consequential decisions affecting Colorado residents should be completing impact assessments, consumer notice frameworks, and AG reporting procedures now.
Vendor disclosure
Littler is the world’s largest employment and labour law practice representing management. The survey supports its positioning as a thought leader on AI and employment law. The conflict-of-interest risk is lower than with technology vendor surveys because Littler does not sell AI tools. However, the survey population is US-only and skewed toward employers that engage Littler, which means larger, more sophisticated organisations are likely overrepresented. The findings should be treated as directionally accurate for the US employment market rather than globally representative.
Sources
- Littler, “Employers Brace for AI-Driven Workplace Shifts and Rising Risk, Littler’s Annual Survey Shows,” press release, 6 May 2026 (84% headline, 42% 2025 comparison, Paretti/Meade quote, methodology, staffing shift signal). littler.com
- Littler, “The Littler Annual Employer Survey Report 2026,” full report PDF, May 2026 (68% policy adoption, 55% tool review, 54% input restrictions, litigation concern breakdown, AI use areas). littler.com
- Mondaq, “The Littler Annual Employer Survey 2026,” 1 May 2026 (detailed survey analysis, data privacy and bias concern breakdown, hiring AI context). mondaq.com
- Supply & Demand Chain Executive, “Employers Brace for AI-Driven Workplace Shifts,” 7 May 2026 (79% litigation concern, 84%/42% comparison, staffing re-evaluation, Paretti/Meade extended quote). sdcexec.com