We can no longer say that artificial intelligence is a "future risk", lurking somewhere on a speculative threat horizon. The truth is that it is a fast-growing cybersecurity risk that organizations are facing today.
That's not just my opinion; it's also the message that comes through loud and clear from the World Economic Forum's newly published "Global Cybersecurity Outlook 2026." As the report bluntly warns:
"87% of respondents identified AI-related vulnerabilities as the fastest-growing cyber risk over the course of 2025."
That puts AI ahead of ransomware, supply-chain attacks, insider threats, and distributed denial-of-service attacks in terms of how quickly the risk is escalating.
The reason for the concern is easy to understand: AI systems massively expand the attack surface within organizations.
As the report explains, "the widespread integration of AI systems introduces an expanded attack surface, creating novel vulnerabilities that traditional controls were not designed to address."
And while this is happening, malicious attacks are becoming faster, cheaper, and more convincing due to the use of generative AI.
The most obvious place where this is occurring is in the field of fraud and social engineering. As the report describes, "recent developments in generative AI are lowering the barriers to executing phishing attacks while simultaneously increasing their sophistication and credibility."
What is perhaps most interesting is how the concerns around AI have shifted.
Where once the primary concern was that AI would make attackers cleverer, it is now that businesses might inadvertently harm themselves through AI-related data leaks (34% of those surveyed cited this as a top concern, up from 22% the previous year). In contrast, fear of attackers advancing their capabilities has actually dropped (to 29% this year, down from 47% the year before).
Companies are right to be increasingly worried that their own AI tools might leak sensitive data, expose proprietary information, or introduce compliance nightmares.
One of the most uncomfortable aspects of AI, from a cybersecurity standpoint, is the adoption of AI agents. Agentic AI systems are not chatbots that answer questions; they are systems designed to act autonomously.
The World Economic Forum's report warns that "as AI agents become more widely adopted, they are poised to transform how digital systems are designed and developed." Unfortunately, they are also poised to introduce a whole lot of security headaches.
Without strong controls, "agents can accumulate excessive privileges, be manipulated through design flaws or prompt injections, or inadvertently propagate errors and vulnerabilities at scale."
An AI agent operating at speed makes things worse, not better, which is why the report stresses the need for "continuous verification, audit trails and robust accountability structures grounded in zero-trust principles."
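To make the report's prescription concrete, here is a minimal sketch of what "continuous verification, audit trails and... zero-trust principles" might look like at the code level for an AI agent's tool calls. All names here (`ToolGate`, `ALLOWED_TOOLS`, the example tools) are illustrative assumptions, not any vendor's API: every action is checked against an explicit allowlist with a per-tool call budget, and every decision, allowed or denied, is written to an append-only log.

```python
import time

# Hypothetical allowlist: the only actions this agent may take,
# each capped with a simple call budget (least privilege).
ALLOWED_TOOLS = {"search_docs": 10, "read_ticket": 20}

class ToolGate:
    """Verify every agent action against an allowlist and record it."""

    def __init__(self, allowed):
        self.allowed = allowed
        self.audit_log = []   # append-only trail for later review
        self.counts = {}      # per-tool usage, enforced against the budget

    def invoke(self, tool, args, runner):
        entry = {"ts": time.time(), "tool": tool, "args": args}
        # Zero-trust: deny by default; only allowlisted tools run.
        if tool not in self.allowed:
            entry["decision"] = "denied: not allowlisted"
            self.audit_log.append(entry)
            raise PermissionError(f"tool {tool!r} is not allowlisted")
        # Budget check limits blast radius if the agent is manipulated.
        self.counts[tool] = self.counts.get(tool, 0) + 1
        if self.counts[tool] > self.allowed[tool]:
            entry["decision"] = "denied: call budget exceeded"
            self.audit_log.append(entry)
            raise PermissionError(f"tool {tool!r} exceeded its call budget")
        entry["decision"] = "allowed"
        self.audit_log.append(entry)
        return runner(**args)

gate = ToolGate(ALLOWED_TOOLS)
result = gate.invoke("search_docs", {"query": "vpn policy"},
                     runner=lambda query: f"results for {query}")
```

A real deployment would add human sign-off for sensitive actions and ship the log to tamper-evident storage, but the shape is the point: the agent proposes, a deterministic layer outside the model disposes, and everything is recorded.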
Those tasked with defending organizations from cyber-attacks are also heavily leaning on AI, with 77% of firms reporting that they have adopted AI for cybersecurity, primarily for detecting phishing emails, responding to intrusions, and conducting behavioral analytics.
However, worryingly, in the rush to automate, governance is lagging, with "roughly one-third" still lacking a process to validate AI security before deployment.
So, what is the takeaway? Yes, AI can help, but only if it is deployed correctly. Which means not rushing in to embed it into the heart of your organization without proper consideration and guardrails.
As the report's authors put it: "AI can improve cybersecurity, but only when deployed within sound governance frameworks that keep human judgement at the center."
In short, if you are entrusting your company's security today to an AI system that you don't fully understand, you could be creating a breach for yourself tomorrow.
Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor and do not necessarily reflect those of Fortra.