It seems there is hardly a digital realm that hasn’t been affected by generative AI. ChatGPT, released in November 2022 and perhaps the best-known example, is poised to revolutionize the way we do business, source information, and even secure our digital assets.
Just like fire or the internet, this extremely useful tool can be used in two ways. Cybercriminals can abuse its features to make even more pernicious attacks, or defenders can use its iterative capabilities to respond in kind.
Understanding the risks and opportunities of generative AI is critical to absorbing it in the months and years ahead. Fortra solutions help mitigate the possible negative outcomes of generative AI and enable organizations to not only adopt but leverage its positive capabilities in the future.
Pandora has left the box, and it is now up to us to learn to handle it wisely.
Understanding the Risks of Generative AI
While the purpose is not to instill fear, it helps to know your enemy. Here are some of the ways threat actors are using generative AI to their advantage.
1. Convincing Phishing Emails
Cybercriminals are sending out emails they didn’t even write, and the results are remarkably good. The days of spotting a fraud by its poor grammar or punctuation may soon be gone. Tools like ChatGPT can spin up very convincing (and often customized) phishing emails in the recipient’s native language, with next to no clues that something might be amiss.
2. Polymorphic Malware
ChatGPT is being used to spin up polymorphic malware without "knowing" it. Each new update makes it harder to trick into doing something malicious, but at this point it can still churn out enough variants of a piece of malware to evade detection, even by EDR tools. Polymorphism itself is nothing new, but ChatGPT now lets even those unfamiliar with scripting get in on the action, leveraging the system instead of their own expertise and opening the door to script kiddies and novices alike. The longer-term risk is that, given its extreme learning capabilities, it one day churns out malware detectable only by other AI.
3. Dangerous, Low-Quality Apps
4. Unintentional Data Leak
Be careful what you share with ChatGPT, because it doesn’t keep secrets. If you share your customer database, for example, as a way of auditing your CRM or making data more accessible to your team, those data points can now be accessed by anyone in the world with the right questions to ask. Only share information you would feel comfortable having in the public domain, and make sure you have nothing sensitive languishing unprotected on the web. It may only surface on the tenth page of a Google search, but if it’s indexed, it’s still fair game for generative AI.
5. Convincing Deepfakes
We’ve all been duped by a viral picture of an event that never happened. What if that "event" was your boss making a false major announcement in an all-hands meeting, admitting to a ransomware incident that never happened, or mistakenly furloughing half the employee base? Convincing deepfakes like these could have real-world impact, sending stock prices plummeting and compromising the integrity of critical systems, which are still run primarily by human beings. As security practitioners, our job is to keep sensitive data from leaving the network; left unchecked, these schemes could drive already rampant BEC rates even higher (not to mention phishing and other social engineering ploys).
Weaponizing Security with Generative AI
While cybercriminals will always find ways to use ChatGPT to their advantage, the good news is that this technology can just as easily be used for good. Here are some examples of how the security industry is using generative AI as a security enabler.
1. Automating Redundancies
Security analysts waste a lot of time writing scripts that interact with different tools when their time could be spent analyzing the results. Generative AI could take over linear elements like coding and offload routine tasks while analysts spend their time doing things only a human can do, like integrating threat data with business priorities and solving real-time problems with stakeholder values in mind.
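To make this concrete, here is a minimal sketch of the kind of routine glue script an analyst might offload to generative AI: a few lines that summarize alerts from a tool export. The alert schema (a JSON list with `severity` and `rule` fields) is purely illustrative, not any specific product’s format.

```python
import json
from collections import Counter

def summarize_alerts(raw_json: str, min_severity: int = 7) -> dict:
    """Summarize alerts from a hypothetical EDR export.

    Expects a JSON list of alert objects with 'severity' (int)
    and 'rule' (str) fields -- an illustrative schema only.
    """
    alerts = json.loads(raw_json)
    high = [a for a in alerts if a.get("severity", 0) >= min_severity]
    return {
        "total": len(alerts),
        "high_severity": len(high),
        "top_rules": Counter(a["rule"] for a in high).most_common(3),
    }

# Made-up sample data for demonstration:
sample = json.dumps([
    {"severity": 9, "rule": "credential-dumping"},
    {"severity": 3, "rule": "new-service-install"},
    {"severity": 8, "rule": "credential-dumping"},
])
print(summarize_alerts(sample))
```

Writing and maintaining dozens of small scripts like this is exactly the low-stakes, well-specified busywork that generative AI can draft in seconds, leaving the analyst to judge what the summary means.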
2. Fighting the Cyber Talent Crisis
Right now, SOC leaders look to hire those versed in a number of different security technologies. With generative AI, the best person for the job could ask ChatGPT technically driven questions like “Is this baseline behavior for this asset?” and “What would a successful exploit look like?” The value-add SOCs could then hire for would be critical thinking, not encyclopedic knowledge.
3. Smoothing Security Analytics
So much legwork has to be done before the actual analysis can take place, and that includes writing signatures. A time-consuming task for even experienced practitioners, cranking out YARA rules, IDS signatures, and search queries can delay the time to results. Having an AI-based bot take care of the busy work can put your team ahead in deciphering the outcomes.
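As a sense of what that busywork looks like, here is a hedged sketch of a helper that renders a minimal YARA rule from a list of string indicators. The rule name and indicators are hypothetical, and a real rule would need string modifiers and testing against clean files; the point is that this boilerplate is exactly what an AI assistant could draft.

```python
def build_yara_rule(name: str, strings: list[str],
                    condition: str = "any of them") -> str:
    """Render a minimal YARA rule from a list of string indicators.

    Illustrative only: production rules need careful string selection,
    modifiers (wide/ascii/nocase), and validation against benign files.
    """
    lines = [f"rule {name}", "{", "    strings:"]
    for i, s in enumerate(strings):
        lines.append(f'        $s{i} = "{s}"')
    lines += ["    condition:", f"        {condition}", "}"]
    return "\n".join(lines)

# Hypothetical indicators for a suspected downloader:
print(build_yara_rule("suspected_loader",
                      ["Invoke-WebRequest", "cmd.exe /c"]))
```

An AI-drafted first pass like this still needs an experienced reviewer, but it shifts the practitioner’s time from typing boilerplate to validating logic.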
4. Security Program Analysis
We require the best data to make the best decisions. That’s where generative AI comes in. While it can’t make the decisions for you, it can give you answers to synthesized security questions like “Based on our current security stack, which tool would be best acquired next?” or “Considering recent threat trends, how much capacity should our next-generation firewalls support in two years?” Asking the right questions sets your team up with the right answers.
5. Detecting AI-Driven Threats
Soon, it may take AI to detect AI. Its trademark has always been its ability to ingest massive amounts of data, identify patterns, and churn out intelligent decisions. That sounds like what it takes to detect behavioral-driven threats in today’s threat landscape. The more it learns, the more it can learn, so with each newly discovered exploit it improves its ability to find more in the future. One day soon, generative AI models may be what allow us to catch ransomware on a scale that only AI can produce.
Facing the Future of Generative AI with Fortra
While ChatGPT and its unintended consequences will probably be the talk of tradeshows for the foreseeable future, it's important not to forget the basics.
A well-guarded enterprise need not fall victim to an onslaught of exploits, no matter how fast AI can spin them up. A well-prepared team need not fall for every convincing phishing scam, and well-guarded email services need not catch outside threats only after it's too late. And well-prepared SOCs need not rely on manual operations alone.
Fortra helps organizations lean into the changes and discover how they can implement AI into their strategy, making a win out of a changing situation. Start a free coaching session to find out how you can best integrate AI into your security roadmap, and maybe even advance in the field.
For example, Fortra’s Automate Intelligent Capture uses unassisted machine learning and artificial intelligence to give you control over critical information, letting you use previously untapped information to your advantage. And Fortra’s PhishLabs makes use of AI/ML algorithms that filter out noise, reduce false positives, and automate workflows to improve your digital risk protection (DRP).
It’s fair to say that generative AI is here to stay, along with everything that goes with it. Fortra solutions can help you take it in stride, meet the new challenges, and come out on top in the AI arms race.
Let’s meet the challenges of generative AI together. Face the future with confidence with Fortra in your corner.
We’re at the precipice of a new era in computing, and no one wants to go it alone. Make Fortra your relentless ally in the fight against cybercrime and know you always have best-in-class technology, strategy, and expertise on your side.