A Back to top

Acceptable Use Policy (AUP) Governance
A written policy that defines which AI tools employees may use, what data they may share with those tools, and under what conditions. An effective AUP is specific enough for a new employee to follow on their first day. Generic guidance ("use AI responsibly") does not constitute an AUP. A well-constructed AUP distinguishes between approved tools, tolerated tools, and prohibited tools, and gives concrete examples of what data may and may not be processed through each category. See also: Shadow AI, AI Governance Framework
Agentic AI Technical
AI systems that take autonomous actions across multiple steps to complete a task, rather than responding to a single prompt. An agentic AI system might browse the web, write and execute code, send emails, or interact with other software without human confirmation at each step. Agentic systems introduce governance challenges beyond standard LLM use because the action surface is larger and harder to monitor. OWASP's LLM Top 10 (2025) identifies agentic AI as an emerging attack surface category. See also: Large Language Model (LLM), Human Oversight
AI Governance Framework Governance
A structured set of policies, processes, and controls that define how an organisation develops, deploys, and monitors AI systems. A complete framework covers at minimum: AI inventory management, risk classification, acceptable use policy, data handling rules, human oversight requirements, and incident response procedures. Frameworks such as NIST AI RMF (US) and ISO 42001 (international) provide structured templates. The EU AI Act requires that high-risk AI systems operate within documented governance structures. See also: NIST AI RMF, ISO 42001, High-Risk AI System. Further reading: What Is an AI Governance Framework?
AI Inventory Governance
A documented register of every AI system in use within an organisation, including vendor-built tools, internally developed models, and AI features embedded in existing software. A complete AI inventory records what each system does, what data it processes, who owns it, which regulatory obligations apply, and what risk classification it carries. The EU AI Act and Australia's Privacy Act amendments both assume that organisations can produce this documentation on request. Many cannot. See also: Shadow AI, Risk Classification
AI Risk Register Governance
A document that records identified AI-related risks, their likelihood, potential impact, assigned owner, and the controls in place to mitigate them. A risk register is typically distinct from an AI inventory: the inventory lists what AI is in use, the register records what could go wrong. Regulators including ASIC (Australia) have found that most organisations lack a formal AI risk register, even when they have deployed AI across multiple business functions. See also: AI Inventory, AI Governance Framework. Further reading: ASIC AI Governance Gap
Annex III Compliance
The section of the EU AI Act that lists specific high-risk AI application categories subject to the most stringent compliance requirements. Annex III categories include AI used in recruitment, employment decisions, access to education, creditworthiness assessment, biometric identification, critical infrastructure, law enforcement, and administration of justice. Organisations using AI for any of these purposes must comply with requirements for human oversight, technical documentation, data governance, and transparency, regardless of whether the AI system is developed internally or sourced from a third-party vendor. See also: EU AI Act, High-Risk AI System
Automated Decision-Making (ADM) Compliance
Any process in which an AI or algorithm makes or significantly contributes to a decision that affects an individual, without meaningful human review. GDPR Article 22 gives individuals the right to object to purely automated decisions with significant effects and to request human review. Australia's Privacy Act amendments extend similar protections. ADM scenarios relevant to businesses include automated CV screening, credit scoring, dynamic pricing based on individual behaviour, and performance monitoring. See also: GDPR, Human Oversight, Right to Explanation
Automation Bias Governance
The tendency for people to over-rely on automated or AI-generated outputs, accepting recommendations without adequate scrutiny. Automation bias is the primary failure mode of human oversight in AI governance: it converts a nominal review process into a rubber-stamp. Studies have shown automation bias increases with time pressure, task complexity, and confidence in the AI system. The EU AI Act's human oversight requirements are specifically designed to counteract automation bias by requiring that AI systems present outputs in interpretable formats and that reviewers have genuine ability and authority to override. See also: Human Oversight, High-Risk AI System

B Back to top

Bias Audit Compliance
An independent assessment of an AI system's outputs to identify whether it produces systematically different results for different demographic groups. New York City's Local Law 144 requires employers to commission a bias audit of any automated employment decision tool before deploying it. The EU AI Act requires that high-risk AI systems in employment and credit contexts undergo bias testing as part of conformity assessment. A bias audit is distinct from a general AI risk assessment: it specifically examines differential outcomes across protected characteristics. See also: High-Risk AI System, Annex III

C Back to top

Cloud Access Security Broker (CASB) Technical
Software that sits between an organisation's network and cloud services to enforce security policies, monitor usage, and control data flows. A CASB can identify which cloud-based AI tools employees are accessing, block unapproved applications, and log data transfers to external platforms. Enterprise CASBs are the primary technical control for organisations managing shadow AI at scale. They require network-level deployment and are typically more suited to larger organisations than browser extension-based monitoring approaches. See also: Shadow AI, Data Loss Prevention (DLP)
Conformity Assessment Compliance
The process by which a high-risk AI system is evaluated to confirm it meets EU AI Act requirements before being placed on the market or put into service. For most high-risk AI systems listed in Annex III, conformity assessment is self-assessment by the provider, supported by technical documentation. Some systems (notably biometric identification) require third-party assessment. A Declaration of Conformity and CE marking are required once the assessment is complete. See also: EU AI Act, Annex III, High-Risk AI System

D Back to top

Data Classification Governance
A system for labelling data according to its sensitivity and the handling requirements that apply to it. A typical classification scheme uses tiers such as Public, Internal, Confidential, and Restricted. Data classification is foundational to AI governance: without it, employees have no way to assess whether data is appropriate to share with an AI tool. Classification at the data layer, enforced by endpoint controls, allows policy to follow data regardless of which application an employee uses. See also: Data Loss Prevention (DLP), Shadow AI
Data Exfiltration Security
The unauthorised transfer of data from an organisation's control to an external destination. In traditional security contexts, data exfiltration is associated with malicious insiders or external attackers. In the context of shadow AI, data exfiltration occurs when employees paste sensitive information into public AI platforms, regardless of intent. An employee sharing client contract terms with ChatGPT before a negotiation has exfiltrated that data in terms of outcome, even if the intent was simply to work faster. Governance frameworks increasingly treat AI-related data exfiltration as a distinct incident category. See also: Insider Threat, Shadow AI
Deployer Compliance
Under the EU AI Act, a deployer is any natural or legal person that puts an AI system into use in a professional context. Deployers are distinct from providers (who develop or place AI on the market). Most businesses using AI tools built by third parties are deployers, not providers. Deployers of high-risk AI systems have specific obligations including human oversight, staff training, data governance, and fundamental rights impact assessment. See also: EU AI Act, Provider, High-Risk AI System
Data Loss Prevention (DLP) Technical
Technology that detects and prevents sensitive data from leaving an organisation's control. DLP tools inspect content in motion (network traffic), at rest (storage), and in use (endpoints) for patterns matching sensitive data definitions such as credit card numbers, personal health information, or proprietary content. Applied to AI governance, DLP can flag or block prompts containing classified data before they reach external AI platforms. DLP requires data classification to function effectively: it can only protect data that has been labelled. See also: Data Classification, CASB
Data Protection Impact Assessment (DPIA) Compliance
A structured process for identifying and addressing privacy risks before deploying a technology or process that involves personal data. GDPR Article 35 requires a DPIA before deploying AI systems that involve systematic profiling, automated decision-making with significant effects, or large-scale processing of sensitive personal data. A DPIA documents the purpose of the processing, the data involved, the risks, and the mitigations. It should be repeated when the system changes significantly. See also: GDPR, Automated Decision-Making

E Back to top

EU AI Act Compliance
The European Union's comprehensive AI regulation, the first of its kind globally. It applies a risk-tiered framework: prohibited AI practices (banned outright), high-risk AI systems (stringent compliance requirements), limited-risk systems (transparency obligations only), and minimal-risk systems (no mandatory requirements). The Act has extraterritorial reach: any organisation placing AI on the EU market or using AI to affect EU residents is subject to it, regardless of where the organisation is based. The first enforcement provisions applied from 2 February 2025 (prohibited practices) with high-risk system requirements applying from 2 August 2026. Maximum penalties are EUR 35 million or 7% of global annual turnover, whichever is higher. See also: High-Risk AI System, Annex III, Deployer. Further reading: EU AI Act: What Australian Businesses Need to Know

F Back to top

Foundation Model Technical
A large AI model trained on broad data and designed to be adapted for a wide range of downstream tasks. GPT-4, Claude, Gemini, and Llama are foundation models. The EU AI Act uses the term General Purpose AI (GPAI) model for this category and imposes transparency and copyright compliance obligations on GPAI model providers. Most businesses interact with foundation models through a third-party interface and are therefore deployers rather than providers under the Act. See also: General Purpose AI (GPAI), Deployer

G Back to top

GDPR Compliance
The General Data Protection Regulation. EU law governing the collection, processing, and storage of personal data. GDPR already applies to AI systems: Articles 6 (lawful basis), 22 (automated decision-making), and 35 (data protection impact assessment) are the most directly relevant. The EU AI Act operates as a second layer of regulation on top of GDPR for high-risk AI systems, meaning organisations must satisfy both simultaneously. GDPR applies wherever personal data of EU residents is processed, regardless of where the processing organisation is located. See also: EU AI Act, DPIA, Automated Decision-Making. Further reading: AI Data Privacy in 2026
General Purpose AI (GPAI) Compliance
The EU AI Act's term for AI models trained on broad data that can perform a wide range of tasks and be integrated into other products or services. GPAI model providers must comply with transparency obligations including publishing technical documentation and maintaining a copyright compliance policy. GPAI models with systemic risk (roughly, those trained on more than 10^25 floating point operations) face additional requirements including adversarial testing and incident reporting to the EU AI Office. See also: EU AI Act, Foundation Model
Governance Maturity Governance
A measure of how systematically and effectively an organisation manages AI risk. Maturity models typically run from ad hoc (no formal controls, reactive to incidents) through developing, defined, and managed, to optimised (continuous improvement, proactive risk identification). ASIC's 2024 review of 23 Australian financial services licensees found AI governance maturity ranged across the full spectrum, with a significant proportion in the ad hoc or early developing stages despite widespread AI deployment. See also: AI Governance Framework. Further reading: ASIC AI Governance Gap

H Back to top

High-Risk AI System Compliance
Under the EU AI Act, an AI system listed in Annex III or used in safety-critical infrastructure is classified as high-risk. High-risk systems face the most stringent compliance requirements: technical documentation, conformity assessment, human oversight mechanisms, data governance, accuracy and robustness standards, and registration in the EU AI database before deployment. The classification applies to deployers as well as providers. An organisation using an off-the-shelf AI tool for CV screening is operating a high-risk AI system and must comply with deployer obligations even if it did not build the tool. See also: Annex III, Deployer, Conformity Assessment
Human Oversight Governance
The requirement that humans remain meaningfully able to monitor, intervene in, and override AI system decisions. The EU AI Act requires that high-risk AI systems be designed to allow human oversight, and that deployers assign qualified staff to exercise it. Human oversight is not satisfied by a nominal review step that is rubber-stamped in practice. Regulators assess whether oversight is genuine: whether reviewers have the information, training, time, and authority to actually override AI recommendations. Automation bias is the primary failure mode. See also: Automation Bias, High-Risk AI System

I Back to top

Insider Threat Security
A security risk that originates from within an organisation, typically from current or former employees, contractors, or partners. Traditional frameworks distinguish between malicious insiders (deliberate theft or sabotage) and negligent insiders (accidental data loss). Shadow AI has introduced a third category: productivity-driven data exposure. An employee pasting a client contract into a public AI tool is not acting with malicious intent and may not be technically negligent, but sensitive data has left organisational control. Governance programmes that do not account for this third category will undercount AI-related insider risk. See also: Shadow AI, Data Exfiltration. Further reading: Kordia Report: Shadow AI as Insider Risk
ISO 42001 Governance
The international standard for AI management systems, published in 2023. ISO 42001 provides a framework for establishing, implementing, maintaining, and continually improving an AI management system. It is structured similarly to ISO 27001 (information security) and covers AI policy, risk assessment, objectives, performance evaluation, and continuous improvement. Certification is voluntary but increasingly referenced in procurement requirements and EU AI Act conformity assessment documentation as evidence of systematic AI governance. See also: AI Governance Framework, NIST AI RMF

L Back to top

Large Language Model (LLM) Technical
A type of AI model trained on large volumes of text data that can generate, summarise, translate, and analyse language. ChatGPT, Claude, Gemini, and Copilot are consumer interfaces built on LLMs. From a governance standpoint, the key risk is that users often share sensitive data in prompts without understanding that the data may be retained, used for training, or accessible to the model provider. LLM usage is the primary driver of the shadow AI problem: the tools are easy to access, useful for everyday tasks, and difficult to monitor without dedicated controls. See also: Shadow AI, Prompt, Prompt Injection

M Back to top

Model Risk Management (MRM) Governance
A discipline originating in financial services that applies structured risk management to quantitative models, now extended to AI systems. MRM frameworks cover model development, validation, deployment, and ongoing monitoring. Regulators including APRA (Australia) and the Federal Reserve (US) have published MRM guidance that financial institutions are expected to apply to AI models. For organisations outside financial services, MRM principles provide a useful template for AI risk governance even where no specific regulatory requirement exists. See also: AI Governance Framework, AI Risk Register

N Back to top

NIST AI Risk Management Framework (AI RMF) Governance
A voluntary framework published by the US National Institute of Standards and Technology in 2023 to help organisations manage AI risk. The AI RMF organises activities into four functions: Govern (policies and culture), Map (identify context and risk), Measure (analyse and assess), and Manage (prioritise and respond). It is widely referenced in US federal policy and compatible with ISO 42001. The framework carries no legal force but is used as a reference in EU AI Act conformity documentation and by regulators globally. See also: AI Governance Framework, ISO 42001

P Back to top

Prompt Technical
The text input a user sends to an AI system. In the context of AI governance, prompts are the primary vector through which sensitive data enters external AI platforms. A prompt containing a client's name, commercial terms, employee salary data, or source code is a data transfer event. Organisations that monitor outbound AI traffic typically do so at the prompt level, logging prompts for review, applying pattern matching to flag sensitive content, or blocking submission where data classification rules are violated. See also: Large Language Model, Prompt Injection, Shadow AI
Prompt Injection Security
An attack in which malicious instructions are embedded in content that an AI system processes, causing it to deviate from its intended behaviour. Prompt injection is ranked the number one vulnerability in OWASP's LLM Top 10 (2025). In a direct injection, the attacker submits malicious instructions directly. In an indirect injection, the attacker embeds instructions in a document, webpage, or email that the AI reads as part of a task. Agentic AI systems are particularly vulnerable because they take actions, not just generate text. See also: Agentic AI, Large Language Model
Provider Compliance
Under the EU AI Act, a provider is any person or organisation that develops an AI system for placing on the market or putting into service. Providers of high-risk AI systems bear the heaviest obligations: conformity assessment, technical documentation, registration, post-market monitoring, and incident reporting. Most businesses using third-party AI tools are deployers, not providers. A business becomes a provider if it develops its own AI models or substantially modifies a third-party model before deploying it. See also: Deployer, EU AI Act, High-Risk AI System

R Back to top

Right to Explanation Compliance
The right of an individual to receive a meaningful explanation of how an automated decision affecting them was made. Under GDPR Article 22, individuals subject to solely automated decisions with significant effects have the right to request human review and an explanation of the logic involved. The EU AI Act reinforces this by requiring that high-risk AI systems provide interpretable outputs. In practice, many AI systems used in employment and credit decisions cannot currently produce the kind of explanation these rules require. See also: Automated Decision-Making, GDPR, Human Oversight
Risk Classification Governance
The process of assigning a risk tier to an AI system based on what decisions it informs, what data it processes, who is affected, and what regulatory obligations apply. The EU AI Act uses a four-tier classification: prohibited, high-risk, limited-risk, and minimal-risk. Internal frameworks often add organisation-specific factors such as third-party dependency, reversibility of decisions, and data residency. Classification should be reviewed whenever an AI system's use case, data inputs, or integration points change materially. See also: EU AI Act, High-Risk AI System, AI Inventory

S Back to top

Sanctioned AI Governance
AI tools that have been explicitly approved by an organisation for use by employees, typically following a security review, data processing agreement, and acceptable use assessment. Sanctioned status does not guarantee safe usage: tools can be approved without adequate governance around how they are used, what data employees may share, or how outputs are verified. Kordia's 2026 NZ Business Cyber Security Report found that many organisations rolling out sanctioned AI tools lack sufficient security governance, effectively creating a new category: sanctioned but ungoverned AI. See also: Shadow AI, Acceptable Use Policy
Shadow AI Shadow AI
The use of AI tools by employees without the knowledge, approval, or oversight of their organisation's IT or compliance function. Shadow AI is a subset of shadow IT but is treated as a distinct risk category because AI tools actively receive organisational data in prompts, making it a direct data governance risk rather than just a licensing or support issue. Common shadow AI tools include public ChatGPT, Claude, Gemini, Perplexity, and AI features embedded in consumer software. Research by Reco (2025) found an average of 269 shadow AI tools per 1,000 employees across organisations surveyed. See also: Shadow IT, Sanctioned AI, Insider Threat. Further reading: What Is Shadow AI?
Shadow IT Security
The use of software, hardware, or services by employees without IT department approval. Shadow AI is a subset of shadow IT, but is typically treated as higher risk because AI tools actively process and potentially retain organisational data. Traditional shadow IT controls (network monitoring, application whitelisting) are often insufficient for shadow AI because many AI tools operate over standard HTTPS and cannot be distinguished from approved web traffic without dedicated monitoring. See also: Shadow AI, CASB
Supply Chain AI Risk Security
The risk that AI components, models, or data sourced from third parties introduce vulnerabilities or compliance gaps. Supply chain risk is ranked third in OWASP's LLM Top 10 (2025). It includes: pre-trained models with backdoors, poisoned training data, compromised model APIs, and vendor AI systems with inadequate security controls. The EU AI Act requires deployers of high-risk AI systems to perform due diligence on procured AI components, including reviewing provider technical documentation. See also: High-Risk AI System, Model Risk Management

T Back to top

Technical Documentation Compliance
Under the EU AI Act, the required record that a provider must maintain for a high-risk AI system, demonstrating it meets the Act's requirements. Technical documentation must cover the system's intended purpose, architecture, training data, performance metrics, risk management process, human oversight measures, and change log. Annex IV specifies the full required contents. Documentation must be kept for ten years after the system is placed on the market and made available to national authorities on request. See also: EU AI Act, Annex III, High-Risk AI System
Transparency Obligation Compliance
Under the EU AI Act, limited-risk AI systems must meet transparency requirements without being classified as high-risk. The main obligations cover: AI-generated content (must be labelled as such), chatbots (users must be informed they are interacting with an AI), and deepfake media (must be disclosed). These obligations apply from 2 August 2026. They apply to a far broader range of deployments than high-risk requirements, including customer service chatbots, marketing copy generators, and AI-enhanced media. See also: EU AI Act, High-Risk AI System

Z Back to top

Zero Trust Security
A security model that assumes no user, device, or network connection should be trusted by default, even within a corporate perimeter. Zero trust architecture requires continuous verification of identity and access rights, least-privilege access controls, and monitoring of all traffic. Applied to AI governance, zero trust principles mean treating all AI usage as a potential risk vector until verified, rather than assuming that employees inside the corporate network are using AI appropriately. Zero trust network access (ZTNA) tools can restrict which AI platforms are reachable from corporate devices. See also: Shadow AI, CASB

This glossary is updated periodically. Last updated March 2026. Have a term to suggest? Get in touch.

Sponsor See what your team shares with AI Try Vireo Sentinel free