Novorésumé surveyed 1,000 US full-time workers in February 2026 and found that 86% use AI tools for work. Of those, 47% said they use AI to complete tasks ahead of schedule and then fill the remaining paid hours with personal activities: 13% do this regularly, 22% occasionally, and 12% have done it at least once or twice. 55% said they could perform their job at the same level without AI.
These are employees who have integrated AI into their workflow, absorbed the productivity gains, and made a personal decision about where those gains go. The survey data puts numbers around a behaviour that governance frameworks have not yet accounted for: employees are not just using AI at work. They are using AI to restructure the working day without telling anyone.
The transparency gap
The Novorésumé data shows that most AI-using workers are not forthcoming about how they use AI: 53% are not transparent about their AI use at work. That 53% breaks down into 23% who said their manager has no idea they use AI or who are actively hiding it, and 30% who said they are selective about when and to whom they disclose.
The emotional register is equally revealing. 59% of AI-using workers said they feel no guilt about how they use AI. 30% said they feel smart for being more efficient. 29% said everyone is doing it. Only 10% feel like they are cheating. 5% feel like an impostor.
This is not a workforce conflicted about AI. It is a workforce that has normalised undisclosed AI use and considers it a reasonable adaptation to the tools available. The governance problem is that employers are making staffing, performance, and promotion decisions based on output they assume is human-generated, using evaluation frameworks that do not account for the possibility that the work was produced by a model.
The promotion problem
Novorésumé’s broader workforce analysis found that 1 in 6 workers report being promoted based on work that was substantially AI-produced. The figure is self-reported and the definition of “substantially AI-produced” is subjective, but the governance implication is significant: if promotions are being awarded for output that employees did not produce in the traditional sense, the organisation’s performance management system is measuring something other than what it thinks it is measuring.
This connects to a finding from Founder Reports’ 2026 survey of 2,078 US workers: 77% review a colleague’s work more carefully when they know AI was used, and 45% have had to fix or redo work because it relied too heavily on AI. 57% of managers have had to redo someone else’s AI-created work. The data suggests that AI-generated output is treated differently when it is disclosed, but the Novorésumé data shows most workers are not disclosing it.
The combination is a governance gap: employees submit AI-generated work without disclosure, managers evaluate it as human work, and promotions or bonuses follow. When the AI use is eventually discovered (or when the employee is asked to perform the same task without AI assistance), the mismatch becomes a management problem that current HR frameworks are not designed to handle.
Who is doing this
SAW’s coverage of the FT-Focaldata poll showed that the highest earners and most senior staff use AI most intensively: over 60% of top earners use AI daily, compared with 16% of the lowest earners. The Novorésumé data adds the generational dimension. 55% of Millennials have used AI to free up time for personal activities during work hours, the highest of any generation. Gen Z follows at 49%. Gen X drops to 40% and Boomers to 36%.
The convergence of these two datasets paints a clear picture: mid-career professionals with seniority, access to sensitive data, and established trust relationships are the most active undisclosed AI users. These are the employees whose output is least likely to be scrutinised and whose use of AI is least likely to be questioned. They are also the employees for whom undisclosed AI use carries the highest governance risk, because their work often involves confidential information, strategic decisions, and client-facing deliverables.
Andrei Kurtuy, co-founder and CMO of Novorésumé, framed the dynamic as a familiar productivity pattern: “This is what happens with every productivity tool. Workers absorb the gains first, and employers catch up later. AI is following the exact same pattern, just at a faster pace.”
The professional liability dimension
In regulated professions, undisclosed AI use creates specific legal and ethical exposure. A lawyer who uses AI to draft submissions without telling the client or the court is operating in a grey zone that multiple jurisdictions are now actively policing. SAW has covered the growing body of cases where AI-generated legal work has caused problems. An accountant who uses AI to prepare financial analysis without disclosure may be breaching professional standards on competence and due care. A consultant who delivers AI-generated strategy work billed at senior rates faces questions about whether the value delivered matches the rates charged.
The Novorésumé finding that 1 in 6 workers have been promoted on AI-produced work extends these questions beyond regulated professions. If an employee’s performance record is built partly on AI output, and the employer later discovers that fact, the organisation faces questions about whether the promotion was warranted, whether the employee can perform at the promoted level without AI, and whether the employer’s evaluation process was adequate.
What this means for governance
The Novorésumé data does not describe a crisis. It describes a structural shift that governance has not caught up with. The 47% who use AI to finish early are not malicious. They are rational actors responding to available tools in the absence of clear rules. The 53% who are not transparent are not deceptive by nature. They are operating in environments where AI use has no defined disclosure expectations.
The governance response is to make AI use visible, accountable, and aligned with the organisation’s expectations, not to punish it.
Define disclosure expectations for AI-assisted work. Employees need clear guidance on when AI use must be disclosed: in client deliverables, regulatory filings, financial analysis, legal advice, performance-critical outputs, and any work product where the recipient has a reasonable expectation that a human produced it.
Redesign performance evaluation for AI-augmented work. If employees are using AI to produce better output faster, the evaluation framework should assess the employee’s ability to direct, verify, and improve AI output rather than assuming all output is individually produced from scratch. This is a competency shift, not a disciplinary issue.
Address the time question explicitly. The 47% who finish early and spend freed time on personal activities are exploiting a gap in the employment contract: most knowledge-work roles are defined by output, not hours, but managed as if hours are the unit. If AI changes the time required to produce acceptable output, the organisation needs a position on whether the freed time belongs to the employee or the employer. Silence on this question guarantees inconsistent practice.
Train managers to ask the right questions. The Founder Reports data shows managers already treat AI-assisted work differently when they know about it. The gap is that most managers do not know. Training should equip managers to ask about AI use in the same matter-of-fact way they might ask about research methodology or data sources: not as an accusation, but as a standard quality question.
Connect this to the shadow AI inventory. The CISO’s shadow AI runbook focuses on tool discovery: which AI services are employees using, what data are they entering, what risks does that create? The Novorésumé data adds a behavioural layer: even where the tool is known and sanctioned, the way employees use it and what they do with the output may not be visible to the organisation.
Vendor disclosure
Novorésumé is a resume-building platform. The survey supports its commercial narrative that AI is transforming how workers approach career tasks. SAW has used the data because the methodology is disclosed (1,000 US full-time workers surveyed via Pollfish in February 2026, census-balanced for US regions), the findings are internally consistent, and the behavioural insights complement independent data from Gallup, Founder Reports, and the FT-Focaldata poll. Readers should note that Novorésumé’s findings are self-reported and the definition of “AI-generated work” is subjective.
Sources
- Novorésumé, “47% of Workers Use AI to Finish Work Early [2026 Study],” March 2026 (1,000 US workers, 47% personal time, generational breakdown, guilt sentiment, Kurtuy quote, methodology). novoresume.com
- Novorésumé, “88 AI Job Creation Statistics and Trends for 2026,” April 2026 (53% not transparent, 23% actively hiding, 1 in 6 promoted on AI work, job interview AI use). novoresume.com
- Founder Reports, “AI in the Workplace Statistics for 2026,” May 2026 (2,078 US workers, 77% review AI work more carefully, 45% had to redo AI-reliant work, 57% managers redid others’ AI work, 44% no clear AI policy). founderreports.com
- Gallup, “Rising AI Adoption Spurs Workforce Changes,” 13 April 2026 (23,717 US employees, 50% use AI, 13% daily, leader vs contributor productivity gap). gallup.com