AI is no longer science fiction. It is in the inbox. It is in the network. It is in every attack and every defense.
Cyber attackers are learning fast. They use AI to scan, craft, and exploit. They automate what used to take hours. They personalize at scale. And defenders are racing to keep pace, building AI-driven tools to stop what attackers create.
“Threat actors are constantly innovating and creating target lists that align with their end goals, whether that is financial, political, or just generating general mayhem,” says Bob Erdman, Associate VP, Research & Development at Fortra.
The risks are different for different targets. Individuals are most at risk of losing life savings to AI-generated companions in romance or pig-butchering scams. Enterprises, by contrast, face spear-phishing that uses AI-crafted fake HR documents or malicious QR codes. Context matters.
How Malicious Hackers Use AI
Attackers leverage AI in ways defenders can barely keep up with. They build malware that slips past antivirus and endpoint detection. They sweep the internet, hunting for weak software and open doors. AI reads application code like a detective, finds zero-day exploits, and writes spear-phishing emails that feel real.
“The AI used by attackers can scan the software code of applications being used by companies, finding new zero-day attack vectors and crafting much more believable spear-phishing communications targeting employees,” Erdman says.
Even seasoned analysts face challenges. The same technology that protects networks can also deceive them. AI-generated content can appear normal to filters. Malware can hide in obfuscated code.
The result is a new era of precision attacks. Gone are the mass blasts of generic malware. Instead, AI allows for tailored, high-impact strikes that exploit both technology and human behavior.
The nature of attacks is also changing constantly. Zachary Travis, Security Operations Manager at Fortra, says that traditional email security measures can’t account for 100% of threats, and the use of AI to create convincing scam emails has changed the game. “Is it possible to predict what the next threat style will be and stay ahead of scammers? No, probably not with 100% accuracy.”
How Cybersecurity Experts Fight Back
Defenders are not helpless. They use AI to fight fire with fire. Tools vary in scope. Some are broad, scanning millions of logs or emails. Others are highly targeted, focusing on anomalies in sensitive environments.
“We use AI and machine learning to evaluate emails to uncover phishing attacks, classify URLs to block malicious destinations, and build more efficient penetration testing and red teaming tools,” Erdman explains. Defensive AI also prevents data loss, classifies document sensitivity, and even redacts sensitive information on the fly.
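To make the URL-classification idea concrete, here is a minimal, illustrative sketch — not Fortra's actual tooling — of the kind of lexical features ML-based URL classifiers commonly extract. The weights and thresholds below are invented for illustration; production systems learn them from labeled training data.

```python
import re
from urllib.parse import urlparse

# Words frequently seen in phishing URLs (illustrative list only).
SUSPICIOUS_WORDS = ("login", "verify", "update", "secure", "account")

def url_features(url: str) -> dict:
    """Extract simple lexical features from a URL."""
    host = urlparse(url).hostname or ""
    return {
        "length": len(url),
        "num_dots": host.count("."),
        "has_ip_host": bool(re.fullmatch(r"[0-9.]+", host)),  # raw IP instead of a domain
        "has_at_sign": "@" in url,
        "suspicious_words": sum(w in url.lower() for w in SUSPICIOUS_WORDS),
    }

def risk_score(url: str) -> float:
    """Combine features into a heuristic score (weights are illustrative, not trained)."""
    f = url_features(url)
    score = 0.0
    score += 0.5 if f["length"] > 75 else 0.0
    score += 0.2 * max(0, f["num_dots"] - 2)
    score += 1.0 if f["has_ip_host"] else 0.0
    score += 1.0 if f["has_at_sign"] else 0.0
    score += 0.3 * f["suspicious_words"]
    return score

print(risk_score("http://192.168.0.1/secure-login/verify-account"))
print(risk_score("https://example.com/docs"))
```

A real classifier would feed features like these into a trained model (logistic regression, gradient boosting, or a neural network) rather than hand-set weights, but the pipeline — extract features, score, block above a threshold — is the same shape.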
The speed and scale are staggering. AI can sift through mountains of data that no human could ever manage alone. But like the attackers, defenders must innovate continuously.
Even everyday tools require vigilance. Common chatbots like ChatGPT, Gemini, Claude, and Grok include security measures such as data validation, access control, and monitoring for misuse.
Yet Erdman warns: “A common chatbot security concern not addressed is preventing employees from inputting confidential information, trade secrets, or source code into these chatbots. Businesses of all sizes should address this with clear monitoring policies and strong enforcement measures.”
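One way to enforce such a policy is a gate that scans prompts for confidential material before they ever reach an external chatbot. The sketch below is a simplified illustration under that assumption; the pattern names and regexes are placeholders, and real data-loss-prevention tools use far broader detectors.

```python
import re

# Illustrative patterns only; real DLP tooling covers many more secret formats.
SECRET_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in a chatbot prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block prompts that appear to contain confidential material."""
    return not check_prompt(prompt)

print(allow_prompt("Summarize this meeting transcript"))   # safe prompt
print(allow_prompt("Debug this: api_key = sk_live_12345")) # contains a secret
```

In practice such a gate would sit in a proxy or browser extension between employees and the chatbot, log blocked attempts for the monitoring policies Erdman describes, and be paired with training rather than relied on alone.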
The Race Between Attackers and Defenders
This is not a game with a finish line. Both sides accelerate constantly. AI allows attackers to craft more believable phishing, smishing, and vishing campaigns. It enables deepfakes and interactive scams that play on trust and urgency. And it allows defenders to analyze, block, and respond at unprecedented scale.
Erdman is clear about the danger: “With the rise in AI usage to create more believable phishing, deepfakes, and interactive chatbots, the financial and emotional losses experienced by vulnerable groups are going to continue to explode until we can find a more holistic way to counter the scope and scale of these malicious actors.”
Travis backs this up: “By the time blocking rules and security have been built up around a threat, attackers have moved onto a new scam and the cycle repeats.”
Building a Layered Defense
The answer is layered. Machines handle the scale. Humans bring the insight. AI spots patterns, anomalies, and hidden threats. Analysts dig deeper, investigate, and feed what they learn back into the system. Each pass makes the cycle stronger, sharper, smarter.
Machines catch yesterday’s threats. AI and human defenders together are the best hope for tomorrow’s.
Organizations must invest in both sides. They must adopt AI to manage scale and speed. They must train humans to spot nuance, context, and subtle signals of deception. They must enforce policies that prevent accidental data leaks through AI tools. Only then can they hope to match the pace of attackers who will not slow down.
AI is a tool. It can destroy or defend. It can target an individual with precision or protect a multinational enterprise from chaos. In the race from inbox to infrastructure, the winners will be those who harness both the logic of machines and the judgment of humans.
Because in this age of AI, vigilance is not optional. It is essential.
Meet Our Thought Leaders
Fortra® subject matter experts share their real-world experiences, offer practical tips, and help organizations navigate the cyber threat landscape.