A recent IEEE global study revealed that 96% of respondents believe that the innovation, exploration, and adoption of AI - specifically agentic AI - will continue at “lightning speed” in 2026.
What does that mean?
It means that with the cement still hardening on AI regulatory compliance and the necessary data center infrastructure to support it still five years out, we’d better learn to secure it, and fast.
AI may be the wheels on the bus that make it go round and round, but without understanding what those wheels can and cannot do by themselves, teams are in for unwelcome surprises.
The Upside of Deploying Agentic AI
Agentic AI is AI taken to the next level; it can think, reason, and even act. That’s a far cry from read, summarize, and regurgitate. Agentic AI is what powers many truly time-saving AI implementations, like:
Determining attack paths (cybersecurity)
Autonomously resolving service issues (customer service)
Sourcing and screening candidates (HR)
Natural language prompt coding (“vibe coding”)
And other problem-solving, judgment-requiring tasks. It still keeps humans in the loop, but it learns from established patterns and draws connections to become more adept and self-sufficient over time.
This is wonderful for business. But it is also risky.
The Risks of Deploying Agentic AI
One doesn’t have to look far to realize the great potential, for good or ill, of an autonomous, only semi-controlled agent that can think and act in your environment.
“Unlike traditional applications, agentic AI can behave in adaptive and unpredictable ways,” notes the Fortra FIRE team in its recent Secure AI Innovation guide. “Treating these systems as ‘just another app’ underestimates [their] capacity to evolve behavior based on feedback loops, data exposure, or experimental cues.”
Unleashing agentic AI into your environment without the proper security controls could mean:
It learns the wrong behaviors if you’re not careful about its input
It customizes itself and its responses to faulty data if that data isn’t sanitized
It solidifies bad and experimental practices if not re-trained with the right ones
Ultimately, this comes down to the propagation of errors (bias), susceptibility to data poisoning, and unintended results from tool misuse. You’re dealing with another entity that can “think”; it’s not static.
As with a junior human analyst, you must ensure it is well trained and prevented from doing anything dangerous.
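To make the input-hygiene point concrete, here’s a minimal sketch of a gate that screens content before it ever reaches an agent. The source tags and injection patterns are illustrative assumptions, not prescriptions from any particular product:

```python
import re

# Illustrative patterns only; a real deployment would use a maintained
# detection capability and provenance checks, not a hand-rolled regex list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"disregard your (system )?prompt", re.I),
]

TRUSTED_SOURCES = {"internal_kb", "ticketing_system"}  # hypothetical source tags


def admit_to_agent(text: str, source: str) -> bool:
    """Gate untrusted input before it can shape the agent's behavior."""
    if source not in TRUSTED_SOURCES:
        return False  # unvetted data never reaches the learning loop
    return not any(p.search(text) for p in INJECTION_PATTERNS)
```

The specifics will vary, but the principle holds: nothing an agent learns from should arrive unchecked.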
Treating Agentic AI Like an Intern
Companies should already have policies in place to ensure that employees operate safely within their designated boundaries. Why? Because employees can act for themselves; they can’t be configured.
Agentic AI shares that quality. Unlike other technologies, you can’t just set it and forget it. It learns, it remembers, and it changes. That means it needs to be treated more like a human resource than a static addition to your security stack (or your CRM, HR services, or IT operations).
Fortra’s report notes that existing personnel policies like:
Access control
Performance oversight
Task scope
Should be extended to agentic AI systems, and that additional ones should be added, like:
Programmatic guardrails
Robust logging
Regular checkpoints
And human-in-the-loop reviews. Agentic AI might be helpful, but it’s too autonomous to be trusted unchecked. To keep it in check, human reviews and audits are essential.
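As a rough illustration of what programmatic guardrails, logging, and human-in-the-loop review can look like together, here’s a minimal Python sketch. The action names, risk tiers, and approver hook are hypothetical, not drawn from the Fortra guide:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_guardrail")

# Hypothetical risk tiers: anything outside LOW_RISK_ACTIONS needs a human sign-off.
LOW_RISK_ACTIONS = {"summarize_ticket", "draft_reply"}


def run_tool(action: str, payload: dict):
    """Placeholder for the real tool dispatch; illustrative only."""
    return f"executed {action}"


def execute(action: str, payload: dict, human_approver=None):
    """Run an agent action only if it passes guardrails; log every decision."""
    log.info("agent requested action=%s payload_keys=%s", action, list(payload))
    if action in LOW_RISK_ACTIONS:
        return run_tool(action, payload)  # auto-approved, but still logged
    if human_approver and human_approver(action, payload):
        log.info("human approved action=%s", action)
        return run_tool(action, payload)
    log.warning("action=%s blocked pending review", action)
    return None
```

The point isn’t the specific tiers; it’s that every action is logged and anything risky stops at a human.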
So, what does this look like in practice?
A Framework for AI Governance
Agentic AI has part-machine, part-human qualities. This means changing the way we govern this “technology” to include both the limitations you’d put on a free agent and the constant tune-ups you’d perform on a machine.
This translates to:
Formal Onboarding and Offboarding: Just as you’d create and secure an employee’s identity, create and secure the identity of every AI agent. That includes revoking privileges and access once the agent is decommissioned.
Continuous Security and Monitoring Protocols: Implement alert thresholds for your AI agents, automate policy enforcement wherever possible, and leave no blind spots where an agent operates without oversight. This means continuous monitoring and the ability to detect anomalies and respond promptly.
DevOps Practices: Just as you would with any critical system, subject your AI agents to regular testing, version control, and output validation so results don’t drift over time (see the sketch below). Unlike true interns, an agent isn’t in team meetings and can’t course-correct without being told.
For the full list, download the guide.
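To illustrate the output-validation practice above, here’s a hedged sketch of a regression-style harness that replays pinned prompts against the current agent build and flags drift from reviewed baselines. The test cases and the agent hook are invented for illustration:

```python
# Hypothetical reviewed baselines, kept under version control alongside the agent.
GOLDEN_CASES = [
    {"prompt": "Classify severity of a failed login burst", "expected": "medium"},
    {"prompt": "Classify severity of ransomware beaconing", "expected": "critical"},
]


def validate_outputs(agent_answer_fn) -> list:
    """Return a list of drift reports; an empty list means outputs match baseline."""
    failures = []
    for case in GOLDEN_CASES:
        got = agent_answer_fn(case["prompt"]).strip().lower()
        if got != case["expected"]:
            failures.append(
                f"{case['prompt']!r}: expected {case['expected']}, got {got}"
            )
    return failures
```

Wired into CI, a check like this fails the pipeline when an agent’s answers start deviating from what humans last signed off on.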
Agentic AI: As Safe as You Make It
Agentic AI is set to revolutionize the way we do security and everything else. From how much time SOCs spend chasing down alerts to how fast retailers can answer complex customer queries, agentic AI will be infused into organizations at an unprecedented pace this coming year.
This is why, as the IEEE study notes, 44% list AI ethical practices as a “top skill” for AI-related hires in 2026. At the foundation of those ethics is the responsibility to operate safely; without AI security, there can be no AI ethics.
As organizations realize the value of agentic AI for business and productivity, let’s not make the mistake we’ve so often made: adopting first and securing later. Many may already be halfway in, but it’s not too late: if agentic AI adoption has outpaced AI governance in your organization, now is the time to make a change.
Contrary to popular opinion, governance will not have a net slowdown effect. Instead, implementing proper agentic AI policies now, at the start, will enable you to grow your AI initiatives with confidence and keep growing - while others stop to repair flat AI tires along the way.
Your Guide to Secure AI Innovation
In this accelerated threat landscape, every security company must embrace AI not as an option, but as an operational necessity.