Cisco’s 2026 State of Industrial AI Report surveyed over 1,000 decision-makers across 19 countries and 21 sectors. Forty per cent cite cybersecurity as the top barrier to AI adoption. Forty-eight per cent name it their biggest networking challenge. At the same time, 85 per cent expect AI to improve their security posture. For manufacturers, utilities, and transport operators, the bottleneck to scaling AI is governance and security architecture, not technology or funding.
AI Has Left the Pilot Phase
The Cisco 2026 State of Industrial AI Report, produced in partnership with Sapio Research, surveyed over 1,000 industrial decision-makers in 19 countries across 21 sectors, making it one of the more substantial datasets on industrial AI adoption. The finding is clear: most industrial organisations are no longer experimenting. NetworkWorld reported that 61 per cent of respondents are actively deploying AI in industrial environments, though only 20 per cent describe their adoption as mature and scaled.
Use cases span quality inspection, predictive maintenance, logistics optimisation, and production planning. For manufacturers specifically, Manufacturing Dive noted that over 350 manufacturing decision-makers were surveyed, with AI deployment concentrated in process monitoring, defect detection, and supply chain forecasting. The shift from pilots to production is well underway across the sector.
Cybersecurity Jumped From Third to First
In Cisco’s 2024 report, cybersecurity ranked third among external growth obstacles. By 2026, it has moved to the top. HelpNetSecurity reported that 40 per cent of respondents now cite cybersecurity as the top barrier to AI adoption, and 48 per cent name it their biggest networking challenge overall. The shift reflects a structural reality: connecting more assets, sensors, and systems to support AI expands the attack surface in ways traditional security architectures were not designed to handle.
For manufacturing specifically, the barrier is even steeper. Manufacturing Dive reported that 46 per cent of manufacturing respondents named cybersecurity as their top concern, compared to 40 per cent across all sectors. Skills gaps, which topped the 2024 report, have fallen to third place behind cybersecurity and AI technology integration.
The paradox is that the same organisations view AI as the solution to the problem it creates. NetworkWorld noted that 85 per cent expect AI to improve their overall cybersecurity posture, with 81 per cent expecting improvements in threat detection and monitoring. Cisco’s report described this as AI being “both the #1 barrier and the #1 asset” for industrial networking teams. The resolution to this paradox is governance: AI can improve security, but only if the AI systems themselves are deployed within a governed security architecture.
IT/OT Collaboration Is the Missing Governance Layer
Industrial AI sits at the intersection of information technology (IT) and operational technology (OT). IT teams manage networks, cloud infrastructure, and cybersecurity. OT teams manage production systems, SCADA, and physical safety. AI deployments in manufacturing, energy, and transport require both to work together, and the Cisco data shows they largely are not.
Manufacturing Dive reported that 43 per cent of manufacturing organisations surveyed have little to no IT/OT collaboration. Digital Watch Observatory noted that only 20 per cent of organisations report fully collaborative IT and OT cybersecurity operations. Samuel Pasquier, Cisco’s Product Management Lead for Industrial IoT Networking, was quoted saying the biggest barrier to IT/OT collaboration “is the reality that IT and OT come from very different disciplines, with different technologies, knowledge, priorities, and definitions of risk.”
For SMEs in manufacturing, utilities, and transport, this IT/OT gap is where shadow AI risk concentrates. OT teams adopting AI-powered monitoring or maintenance tools without IT security oversight create the same governance vacuum that shadow AI creates in office environments. The tools are different (edge compute rather than SaaS chatbots) but the pattern is identical: adoption outpacing governance, with security teams discovering deployments after the fact.
Confidence Exceeds Maturity
Despite the barriers, confidence is high. NetworkWorld reported that 93 per cent of respondents are confident in their ability to scale AI, despite the governance and collaboration gaps the same report documents. Smart Industry noted that 54 per cent expect returns on AI investments within a year, a timeline that pushes teams toward platforms that support fast deployment, at the potential cost of stability or security.
Progress varies by sector. Hi-Tech Electronics and Semiconductors leads, with 64 per cent of organisations reporting high confidence in scaling AI. Energy and transport follow. The unevenness matters for supply chains: a manufacturer with mature AI governance trading with suppliers who have none creates a chain-level exposure that no single organisation’s controls can address.
What Industrial Businesses Should Do
The Cisco data points toward a governance model that treats AI as the third leg of operational risk, alongside process safety and cybersecurity.
Build a unified IT/OT AI risk register. Every AI system deployed in an industrial environment should appear in a single register that both IT and OT teams can see. Edge AI for predictive maintenance and cloud AI for logistics planning carry different risk profiles but both need governance.
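A register like this can start as something very simple. The sketch below is a minimal illustration, not a reference implementation: the field names, team names, and the "vibration-anomaly-detector" system are hypothetical, and the key property shown is that an entry missing either an IT or an OT owner is flagged, which is exactly the shadow-AI gap described above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Deployment(Enum):
    EDGE = "edge"    # e.g. predictive-maintenance models on plant hardware
    CLOUD = "cloud"  # e.g. logistics planning in a hosted service


@dataclass
class AIRiskEntry:
    """One AI system in the shared IT/OT register (illustrative fields)."""
    system: str
    it_owner: str
    ot_owner: str
    deployment: Deployment
    touches_production_network: bool
    risk_notes: str = ""


@dataclass
class AIRiskRegister:
    entries: list[AIRiskEntry] = field(default_factory=list)

    def add(self, entry: AIRiskEntry) -> None:
        self.entries.append(entry)

    def unowned(self) -> list[AIRiskEntry]:
        """Systems missing an IT or an OT owner -- the shadow-AI gap."""
        return [e for e in self.entries if not (e.it_owner and e.ot_owner)]


register = AIRiskRegister()
register.add(AIRiskEntry(
    system="vibration-anomaly-detector",  # hypothetical OT-adopted tool
    it_owner="",                          # no IT owner yet: flagged below
    ot_owner="maintenance-team",
    deployment=Deployment.EDGE,
    touches_production_network=True,
))
print([e.system for e in register.unowned()])  # -> ['vibration-anomaly-detector']
```

The point of the single register is the `unowned()` query: both teams see the same list, so an edge deployment adopted by OT without IT security review surfaces immediately rather than after an incident.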
Apply network segmentation around AI workloads. AI systems that interact with production networks and SCADA infrastructure need isolation from general enterprise traffic. The 48 per cent naming security as their biggest networking challenge are describing this problem.
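One common way to reason about this isolation is a zones-and-conduits model in the style of IEC 62443: traffic is denied by default, and only explicitly listed zone-to-zone conduits are permitted. The toy policy check below is a sketch under that assumption; the zone names and the specific conduits are illustrative, not drawn from the Cisco report.

```python
# Deny-by-default segmentation: AI workloads sit in their own zone and may
# reach OT systems only through an explicitly allowed conduit (a DMZ broker).
# Zone names and rules below are illustrative assumptions.

ALLOWED_CONDUITS = {
    ("enterprise-it", "ai-workloads"),  # IT can manage the AI platform
    ("ai-workloads", "ot-dmz"),         # AI reads sensor data via a DMZ broker
    ("ot-dmz", "production-ot"),        # only the DMZ talks to SCADA/PLCs
}


def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Permit only explicitly listed zone-to-zone conduits; deny everything else."""
    return (src_zone, dst_zone) in ALLOWED_CONDUITS


print(flow_permitted("ai-workloads", "ot-dmz"))          # True
print(flow_permitted("ai-workloads", "production-ot"))   # False: no direct path
print(flow_permitted("enterprise-it", "production-ot"))  # False
```

The design choice worth noting is that the AI zone never touches production OT directly; it can only consume data the DMZ broker exposes, which keeps a compromised AI workload from reaching SCADA infrastructure.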
Map AI systems to existing regulatory frameworks. Industrial AI systems may fall under sector-specific regulations (NERC CIP for energy, safety-critical software standards for manufacturing, EU AI Act high-risk categories for transport). The compliance mapping should happen before scale, not after an incident.
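The mapping itself can be captured as a simple lookup that a compliance review walks before any scale-up decision. This is a hedged sketch: the sector-to-framework pairings reuse the examples in the text above, and the function name and safety-escalation rule are assumptions for illustration only, not legal guidance.

```python
# Illustrative sector-to-framework lookup, using the examples named in the text.
FRAMEWORKS_BY_SECTOR = {
    "energy":        ["NERC CIP"],
    "manufacturing": ["safety-critical software standards"],
    "transport":     ["EU AI Act (high-risk categories)"],
}


def applicable_frameworks(sector: str, touches_safety: bool) -> list[str]:
    """Return the frameworks a pre-scale compliance review should check.

    Assumed rule: any AI system touching a safety function is also screened
    against the EU AI Act's high-risk categories, regardless of sector.
    """
    frameworks = list(FRAMEWORKS_BY_SECTOR.get(sector, []))
    if touches_safety and "EU AI Act (high-risk categories)" not in frameworks:
        frameworks.append("EU AI Act (high-risk categories)")
    return frameworks


print(applicable_frameworks("energy", touches_safety=False))  # ['NERC CIP']
print(applicable_frameworks("transport", touches_safety=True))
```

Even a table this crude forces the question "which regulator cares about this system?" to be answered before deployment rather than after an incident.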
Treat change control for AI models like change control for safety-critical software. Model updates, retraining, and data pipeline changes should go through versioning and approval processes equivalent to those used for software that controls physical equipment. AI drift in a chatbot produces a wrong answer. AI drift in a production system produces a safety incident.
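The approval gate can be modelled directly. The sketch below assumes a dual sign-off rule (one IT security approval, one OT engineering approval) before a model version may deploy; the role names, model name, and version string are hypothetical, and a real pipeline would attach this check to the deployment tooling rather than run it by hand.

```python
from dataclasses import dataclass, field


@dataclass
class ModelRelease:
    """A versioned model change awaiting formal approval, mirroring change
    control for safety-critical software. All names are illustrative."""
    model: str
    version: str
    change_summary: str
    approvals: set[str] = field(default_factory=set)

    # Assumed rule: both IT and OT must sign off before deployment.
    REQUIRED = frozenset({"it-security", "ot-engineering"})

    def approve(self, role: str) -> None:
        self.approvals.add(role)

    def deployable(self) -> bool:
        """Block deployment until every required sign-off is recorded."""
        return self.REQUIRED <= self.approvals


release = ModelRelease("defect-detector", "2.4.0",
                       "retrained on Q1 production-line data")
release.approve("it-security")
print(release.deployable())   # False: OT sign-off still missing
release.approve("ot-engineering")
print(release.deployable())   # True
```

Retraining and data-pipeline changes would create a new `ModelRelease` just like a code change creates a new software version, so drift never reaches production equipment unreviewed.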
The organisations scaling AI most successfully in Cisco’s data are those treating infrastructure, cybersecurity, and IT/OT collaboration as foundational rather than optional. For industrial SMEs, that means embedding AI governance into existing safety and reliability systems rather than building a parallel “innovation” track that operates outside normal controls.
Related reading: New 2026 AI security stats show governance cuts incidents nearly in half | Agentic shadow AI is already an incident category | What is an AI governance framework?
Sources
- Cisco: 2026 State of Industrial AI Report (March 2026, 1,000+ respondents, 19 countries, 21 sectors)
- HelpNetSecurity: Cybersecurity is now the price of admission for industrial AI (4 March 2026)
- NetworkWorld: Cisco: AI is a double-edged sword in industrial networks (2 March 2026)
- Manufacturing Dive: Manufacturers are making progress with AI, but barriers remain (4 March 2026)
- Smart Industry: New Cisco AI study sees widening execution gap (March 2026)