Security needs to be reevaluated in the context of AI, but not everything needs to change at once. Organizations that take a measured approach will do better, not giving into the AI hype cycle but recognizing the strategic ways AI is changing the game — and the ways it is changing security trajectories for the better.
AI Is Weakening Defenses
AI-enabled attacks are increasing, weakening existing defenses in the process. Against this two-pronged offensive, current security tools are struggling to keep up.
We see AI leveraged to create highly believable deepfakes that lead to social engineering scams. ML tools are used to power chatbots that carry the ruse even further. AI generates code that can change mid-flight (AI-generated polymorphic malware), and instances of AI-powered password cracking and MFA evasion are becoming more prevalent.
As organizations realize that their current tools are insufficient, there is a tendency to buy into the AI trend where cybersecurity is concerned.
AI Hype Cycle vs. Data Defense
If companies are not careful, they can believe that AI-infused security tools equal defense. But protecting sensitive data is the real litmus test, and if AI capabilities are introduced with careful consideration of data protection and access, there could be problems further down the line.
The consequence of introducing too much AI, too soon, is that AI expands the attack surface, can behave unexpectedly without proper guardrails (agentic AI, namely), consumes a lot of computational resources, and is expensive when applied on a large scale.
Michael Siegel, director of Cybersecurity at MIT Sloan, stated that “AI-powered cybersecurity tools alone will not suffice.” He argued that “a proactive, multi-layered approach — integrating human oversight, governance frameworks, AI-driven threat simulations, and real-time intelligence sharing — is critical.”
In other words, even though “fighting fire with fire” may be the buzzword against rising AI attacks, that fire needs to be used strategically.
Implementing AI Securely for Business
When organizations look to adopt AI, they often do it to get and stay ahead of the next business that is adopting AI.
When looking at the security of AI – that is, securing AI tools used in CRMs, chat bots, coding, predictive modeling and other business uses - various industry frameworks offer best practices.
The NIST AI Risk Management Framework (AI RMF) advocates that companies define policies, map and measure risks, and implement risk strategies alongside every new adoption.
The Google Secure AI Framework (SAIF) is an industry-led policy that emphasizes bringing existing security foundations into AI ecosystems, highlighting risks like data poisoning, prompt injection, and model stealing.
Safely Transitioning to AI for Cybersecurity
Is there a danger in applying “too much AI, too soon” in cybersecurity? Unfortunately, yes. Effective AI integration requires precision over power. There are some places where AI delivers genuine value; there are others where it does not.
For example, enterprises generate massive quantities of data that need to be analyzed. Using sophisticated LLM-powered tools for this task would be cost-prohibitive for most companies, not to mention unnecessary; this is a place where traditional data analysis tools perform better and cheaper.
Security alerts are another area where large amounts of data need to be filtered and sorted for use. Applying expensive, overly sophisticated AI tools for the sake of it drives up costs, sucks up computational power, and siphons valuable resources when other advanced tools will do. In this case, tools like Fortra Threat Brain; which connect IOCs across the attack chain using shared intelligence and telemetry from multiple tools.
Were We Safer Before AI?
As we bridge the gap between AI fantasy and reality, it’s important to know where AI is needed and where it becomes an additional operational burden. When something strains resources, it spills over into straining security resources as well.
While AI can counter AI‑enabled attacks, an organization’s security will depend on its ability to use AI more effectively than the adversaries who weaponize it. Achieving this requires a disciplined approach to implementing AI safely and appropriately.
Learn More About How AI Makes Safe AI Innovation Possible
Secure AI Innovation: Fortra’s Dual Approach to AI-Driven Cyber Defense and Trusted AI Advancement