Three-quarters of professionals surveyed by Fellow.ai use an AI note-taker in work meetings. Eighty-four per cent say they change how they speak when one is present. These tools are already shaping meeting records, influencing decisions, and processing sensitive conversations, yet most organisations have no governance for AI inside their collaboration platforms. Unified communications has become the largest ungoverned AI surface in the enterprise.
AI Is Already in the Meeting
The experience is now routine. An employee joins a video call and notices an unfamiliar participant: a note-taker bot that none of the meeting’s organisers invited. A sales representative finishes a customer call and pastes the transcript into a personal AI account for a summary. A project manager uses a browser extension to condense a 200-message Slack thread into three paragraphs of action items. Each of these interactions involves AI processing sensitive business data outside IT’s line of sight.
No Jitter’s March 2026 feature on shadow AI in collaboration workflows identified the specific ways this manifests in unified communications platforms. The most visible are AI meeting assistants: third-party note-taker bots that join Zoom or Teams meetings and generate transcripts or summaries. AI copilots built into email clients draft responses and summarise inboxes. Browser extensions summarise Slack and Teams threads. Employees paste transcripts and recordings into external AI tools after meetings conclude. In each case, the AI tool processes business conversation data that may include client names, commercial terms, strategic decisions, and personnel discussions (No Jitter, 16 March 2026).
Fellow.ai’s 2025 survey of professionals across IT, operations, and business leadership found 75 per cent use an AI note-taker in work meetings. Among active users, 47 per cent reported the note-taker had recorded or shared something they did not intend to be captured. Fifty per cent of non-users cited privacy and security as their primary reason for not adopting. The survey’s respondent base skewed toward IT (41 per cent) and Operations (31 per cent) roles, which may inflate adoption figures relative to the general workforce. Sample size and geographic scope were not disclosed, so these figures should be treated as directional rather than definitive (Fellow.ai, 2025).
Why UC Shadow AI Is Different from Chatbot Shadow AI
Most shadow AI governance has focused on employees pasting data into ChatGPT or Claude. That threat is real and well-documented. AI inside collaboration tools presents a different risk profile for three reasons.
It is embedded in workflows rather than isolated in a browser tab. A meeting bot sits inside the same Teams or Zoom environment that IT manages, which creates a false sense that it is sanctioned. David Matalon, CEO of Venn, told No Jitter that while these tools boost agility and productivity, they interact with company data in ways IT cannot always control. Employees often use meeting AI tools with the knowledge of their immediate team but without formal IT approval or security review (No Jitter, March 2026).
It leaves minimal trace in traditional audit logs. Collaboration platforms capture what was said or shared, but not the AI processing that occurred around the meeting. A bot that joins a call, transcribes it, generates a summary, and emails action items to attendees may produce no record in IT’s audit infrastructure. The AI’s influence on the meeting record is invisible after the fact.
It influences decision-making through the record it creates. When an AI-generated meeting summary becomes the official record of a discussion, the AI has shaped what the organisation remembers about that meeting. Nuance, disagreement, and context that did not make it into the summary effectively did not happen. UC Today noted in January 2026 that AI-shaped artifacts feel authoritative by the time they land in a shared document, yet when decisions are questioned later, explaining why something was written the way it was becomes harder.
Behavioural Impact: 84 Per Cent Change How They Speak
The Fellow.ai survey found that 84 per cent of respondents change how they speak when they know an AI note-taker is present. Fellow.ai characterised this as awareness rather than discomfort, with teams becoming more intentional about what is documented and how. That framing understates the governance implication.
When the majority of meeting participants alter their communication because AI is recording, the meeting itself has changed. Candid discussion of risk, disagreement with a proposed strategy, or concerns about a client relationship may be softened or omitted when employees know the conversation will be processed by AI and distributed as a written record. The result is meetings that produce smoother summaries but less accurate representations of what was actually discussed.
This connects to the value drift research published in The Conversation on 5 March 2026 by Merve Hickok and Harko Verhagen, drawing on University of Auckland findings, which Shadow AI Watch covered separately. When AI-generated outputs gradually become the standard for organisational communication, the quality of human judgement visible in those records degrades over time. Meeting summaries are a particularly acute example because they become the basis for action items, project decisions, and performance records.
The Governance Gap in Collaboration
PwC Canada’s Trust in AI report, published in February 2026, found that 72 per cent of organisations name responsible AI a top priority, yet 36 per cent still have no dedicated governance function to manage AI risks. Sixty-five per cent of leaders cited unclear ownership and difficulty inventorying AI systems as key hurdles (PwC Canada, February 2026).
The No Jitter feature noted that UC teams often do not know which AI tools employees are using, which affects the organisation’s ability to monitor risk or enforce policies. The risk develops quietly because shadow AI in meetings does not trigger alerts in the way a traditional security incident would. An employee copies sensitive information into an external AI tool that processes it on third-party servers, then returns a summary. By the time that information has left the organisation, privacy or compliance violations may already have occurred.
IBM’s 2025 Cost of a Data Breach report found that 63 per cent of breached organisations had no AI governance policy or were still developing one, and 97 per cent of AI-related breaches lacked proper access controls (IBM, July 2025). Those figures apply across all AI tools, but meeting and collaboration platforms represent a particularly acute blind spot because they sit inside the organisation’s approved technology stack while hosting unsanctioned AI.
What Organisations Should Do
Banning AI note-takers and meeting assistants outright will produce the same result as banning ChatGPT: usage driven underground with zero visibility. The viable approach is to treat meeting AI as a governed system within the collaboration stack.
Discover what is already running. Audit Zoom, Teams, and Google Meet tenant settings for third-party bot access. Review browser extension policies. Ask teams directly which AI tools they use for meeting summaries, email drafting, and thread summarisation. The inventory will be larger than expected.
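The discovery step amounts to merging several incomplete sources into one inventory. The sketch below illustrates that merge with two hypothetical inputs, a tenant audit export and team survey responses; the tool names and data shapes are illustrative, not real export formats from any platform.

```python
# Hypothetical discovery sketch: merge an exported tenant bot list with
# self-reported survey answers into one deduplicated AI-tool inventory.
# Tool names and input formats are illustrative, not real platform exports.

def build_inventory(tenant_bots, survey_tools):
    """Merge two discovery sources, tracking where each tool was seen."""
    inventory = {}
    for name in tenant_bots:
        entry = inventory.setdefault(name.lower(), {"name": name, "sources": set()})
        entry["sources"].add("tenant-audit")
    for name in survey_tools:
        entry = inventory.setdefault(name.lower(), {"name": name, "sources": set()})
        entry["sources"].add("survey")
    return inventory

tenant_bots = ["NoteBot", "SummarizerPro"]
survey_tools = ["notebot", "ThreadDigest"]  # the survey surfaces a tool the audit missed

inventory = build_inventory(tenant_bots, survey_tools)
# Tools reported by employees but absent from the tenant audit are the
# shadow AI the article describes: in use, but invisible to IT.
survey_only = [t["name"] for t in inventory.values() if t["sources"] == {"survey"}]
print(sorted(inventory))  # → ['notebot', 'summarizerpro', 'threaddigest']
print(survey_only)        # → ['ThreadDigest']
```

The point of tracking sources per tool is that the gap between the two lists, not either list alone, is the measure of how much AI usage is ungoverned.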
Define approved and banned AI features in UC tools. Separate vendor-embedded AI (Microsoft Copilot in Teams, Google Gemini in Meet) from third-party bots and browser extensions. Vendor-embedded AI at least operates under the organisation’s existing data processing agreement. Third-party bots may process data under entirely separate terms.
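The vendor-embedded versus third-party distinction can be encoded as a simple policy rule. The sketch below assumes the organisation maintains its own lists of DPA-covered assistants and reviewed third-party bots; every tool name here is an illustrative placeholder, not a recommendation.

```python
# Sketch of an approval rule: vendor-embedded AI falls under the existing
# data processing agreement; third-party bots need a separate security review.
# The sets below are illustrative placeholders, not a real approved list.

VENDOR_EMBEDDED = {"microsoft copilot", "google gemini"}
REVIEWED_THIRD_PARTY = {"approvednotes"}  # hypothetical bot that passed review

def classify(tool_name: str) -> str:
    name = tool_name.lower()
    if name in VENDOR_EMBEDDED:
        return "approved: covered by existing DPA"
    if name in REVIEWED_THIRD_PARTY:
        return "approved: passed security review"
    # Default-deny: unknown bots process data under their own terms.
    return "blocked: pending security review"

print(classify("Microsoft Copilot"))   # → approved: covered by existing DPA
print(classify("RandomMeetingBot"))    # → blocked: pending security review
```

The design choice worth noting is the default-deny fallthrough: an unrecognised bot is blocked until reviewed, mirroring the article's point that third-party bots may process data under entirely separate terms.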
Configure tenant-level controls. Both Microsoft Teams and Zoom offer admin settings that control whether external bots can join meetings. These controls exist. In most organisations they have not been configured for AI governance because the question was never asked. Platforms outside the UC stack need the same attention: Atlassian Intelligence, for example, requires manual deactivation per product, and the opt-out does not carry over to new products added to the instance.
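Once a governance baseline is agreed, checking exported tenant settings against it can be automated. The sketch below compares a hypothetical settings export to a desired baseline and reports drift; the setting keys are illustrative and do not correspond to real Teams or Zoom admin-console field names.

```python
# Compare exported tenant settings against a governance baseline and report
# drift. Keys are illustrative, not real admin-console field names.

BASELINE = {
    "allow_external_bots": False,       # third-party bots cannot join meetings
    "allow_anonymous_join": False,
    "recording_requires_consent": True,
}

def audit_settings(current: dict) -> list:
    """Return (setting, current_value, expected_value) for each drifted setting."""
    return [(key, current.get(key), expected)
            for key, expected in BASELINE.items()
            if current.get(key) != expected]

current = {
    "allow_external_bots": True,        # drifted from baseline
    "allow_anonymous_join": False,
    "recording_requires_consent": True,
}

for setting, got, want in audit_settings(current):
    print(f"{setting}: is {got}, baseline requires {want}")
# → allow_external_bots: is True, baseline requires False
```

Running a check like this on a schedule turns the one-off audit into continuous assurance, which matters because tenant settings drift as admins change and new products are added.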
Publish clear rules for meeting data. Employees need to know whether meeting transcripts can be pasted into external AI tools, whether AI-generated summaries can be forwarded outside the organisation, and whether recordings can be uploaded to personal AI accounts for analysis. Most acceptable use policies do not address meeting data specifically. They should.
Treat AI meeting summaries as records. If an AI-generated summary becomes the basis for project decisions, client commitments, or performance assessments, it should be subject to the same retention, review, and governance requirements as any other business record. Organisations with records management policies that predate AI meeting tools have a gap that needs closing.
Collaboration platforms are becoming human-to-AI interfaces, not just human-to-human ones. Governance that focuses only on dedicated AI applications like ChatGPT misses the largest surface area where AI is actually processing organisational data: inside the meetings and messages where work happens.
Related reading: What is shadow AI? | GenAI value drift: how AI quietly shifts workplace standards | Employees still do not know what data they can put into AI tools | AI Usage Policy Template (free download)
Sources
- No Jitter: Governance lags as shadow AI reshapes collaboration workflows (16 March 2026)
- Fellow.ai: The State of AI Meeting Notetakers 2025 (75% adoption, 84% behaviour change)
- PwC Canada: Trust in AI report (February 2026)
- IBM: Cost of a Data Breach 2025 (29 July 2025)
- UC Today: Shadow AI in Collaboration (January 2026)
- The Conversation: Hickok and Verhagen, value drift research (5 March 2026)
- Shadow AI Watch: GenAI Value Drift (March 2026)