Most AI governance frameworks are designed to catch failures: biased outputs, privacy breaches, inaccurate decisions. University of Auckland researchers writing in The Conversation on 5 March 2026 identify a different kind of problem. When GenAI generates the language of management, the standards for what counts as fair, accountable, or reasonable can shift without any single failure to point to.


How the Drift Works

The mechanism is ordinary and cumulative. A manager uses an AI tool to write performance feedback. The result is smooth, professional, and well-structured. Next quarter, they use it again. Other managers adopt the same approach. Over time, the style produced by AI becomes the implicit standard. Individual managerial judgement becomes harder to locate in the output.

The researchers describe this as “value drift”: a gradual shift in standards that were never explicitly revised. No policy changed. No one decided to lower the quality of individual judgement. The drift happens through the accumulation of ordinary tasks.

A 2025-26 CHAI survey of 1,456 patients found that 93% reported at least one concern about AI in healthcare, 51% said AI made them trust healthcare providers less, and 63% were concerned about their health data being sold or shared. Those numbers reflect a broader unease about whether genuine human judgement is still present in institutions people rely on.


Why Existing Governance Misses It

Most AI governance is designed around discrete events: a biased output, a privacy breach, an inaccurate automated decision. These show up in audit logs, incident reports, or complaints. Existing frameworks generally handle them reasonably well.

Value drift does not produce a discrete incident. It shows up in employee experience over time, in grievance rates, and in what external scrutiny finds when it asks whether real reasoning sits behind a decision or just polished AI language.
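Because there is no discrete incident to log, any detection has to work on trends rather than events. As a purely illustrative sketch (not a method the researchers propose): an organisation could track whether performance feedback drafts are converging on a single style from quarter to quarter. The quarterly export, the bag-of-words similarity measure, and the function names below are all assumptions for illustration.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def style_convergence(feedback_texts: list[str]) -> float:
    """Average pairwise similarity of one quarter's feedback drafts.
    Rising values across quarters suggest drafts are converging on
    a single (possibly AI-generated) style."""
    bags = [Counter(text.lower().split()) for text in feedback_texts]
    pairs = list(combinations(bags, 2))
    if not pairs:
        return 0.0
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Hypothetical quarterly exports; in practice these would come from an HR system.
quarters = {
    "2025-Q3": ["Maria exceeded targets but missed two reviews.",
                "Raj struggled with deadlines this cycle."],
    "2026-Q1": ["Demonstrates strong alignment with team objectives.",
                "Demonstrates strong alignment with team objectives."],
}
for quarter, texts in quarters.items():
    print(quarter, round(style_convergence(texts), 2))
```

A rising convergence score would not prove drift on its own, but it is the kind of slow-moving signal that incident-based monitoring never surfaces.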

A separate January 2026 public survey found that 48% of employees had uploaded sensitive company or customer information to AI chat tools (reported in LumaLex Law, March 2026). That figure captures the data exposure dimension of shadow AI risk. Value drift captures a different dimension: not what data goes in, but how AI shapes what comes out, and what that does to organisational standards over time.


What Governance Should Look Like

The researchers are not arguing that organisations should abandon GenAI in management and policy contexts. The case is more specific: without deliberate governance routines, the benefits of AI-assisted productivity will come at the cost of visible human judgement, and that cost will not appear in the incident log.

For HR functions, the practical response is periodic review of AI-mediated decisions. Performance management, disciplinary processes, and hiring decisions all require documented reasoning. That documentation should be assessed not just for process compliance but for the quality of the reasoning itself: was human judgement present, is it visible, and could the manager explain the decision without falling back on the AI output?

For policy and communications teams, AI-assisted drafting should come with clear ownership of the reasoning behind the document, separate from the document itself. The question to document is not “was this written with AI?” but “what is the actual rationale here, and who is accountable for it?”
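One lightweight way to make that ownership concrete is to record the rationale as structured data alongside the document rather than inside it. What follows is a minimal sketch under assumed conventions; the record shape, field names, and example values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Rationale and ownership for an AI-assisted document,
    recorded separately from the document itself."""
    document_ref: str       # pointer to the draft (e.g. a document management ID)
    decision: str           # what was actually decided
    rationale: str          # the human reasoning, in the owner's own words
    accountable_owner: str  # the named person who answers for the decision
    ai_assisted: bool       # whether drafting used a GenAI tool
    decided_on: date = field(default_factory=date.today)

# Hypothetical example values for illustration only.
record = DecisionRecord(
    document_ref="policy-drafts/2026/consultation-paper.docx",
    decision="Proceed to public consultation in May",
    rationale="Submissions from the previous round flagged cost impacts on "
              "small operators; the revised schedule addresses that first.",
    accountable_owner="j.ngata",
    ai_assisted=True,
)
```

The point of the structure is the separation: the polished document can be AI-drafted, but the rationale field has to be written, and answered for, by a named person.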

For all functions, there should be a standing mechanism for people affected by AI-shaped decisions to challenge them where material outcomes are at stake.

These routines sit within a broader governance infrastructure. The AI governance framework guide covers how accountability structures should be designed, and the shadow AI data leak engine analysis covers the data exposure dimension of unmanaged AI use that sits alongside the value drift risk.


The Governance Question Worth Asking

The University of Auckland researchers suggest a practical test for organisations using GenAI in management contexts: is AI making decisions better, or just smoother?

Smooth is not the same as good. Smooth is consistent, professional, and hard to object to. Good is accurate, reasoned, fair, and traceable to a person who understood the situation and made a call.

Governance frameworks built around discrete failure detection will not catch the difference. The organisations that catch it are the ones that build explicit routines for checking that human judgement is still present and visible in the outputs AI helps produce.


Related reading: What Is an AI Governance Framework? | Shadow AI as a Data Leak Engine | When AI Adoption Outruns Governance: What ASIC Found Inside 23 Australian Lenders


Stay across AI governance and compliance developments. Subscribe to the Shadow AI Watch newsletter.


Sources