
As language learning models (LLMs) continue to advance, so do the security threats and risks that accompany them. With the plethora of news and information out there regarding generative AI, Fortra has conducted in-depth threat analysis to cut through the noise and identify the most pressing AI threats to watch out for as 2025 rolls along. Although it’s imperative to remain vigilant in the face of the ever-evolving threat landscape and all the other possible risks it may expose us to, these are the threats that stand out as the most pressing for both defenders and users alike.
1. Prompt Injections
What is a prompt injection?
Prompt injections occur when an AI input command allows the user to manipulate the model’s behavior through bypassing the developer’s original instructions for that prompt. This threat is similar to input injections in traditional application security attacks. However, prompt injections are a consistent threat in generative AI because LLMs tend to process the input command as one single text and may not be able to separate or validate these inputs, unlike typical software inputs.
Why worry about prompt injections?
The threat of prompt injections can pose several risks to organizations, especially those who have integrated generative AI into their IT environments. There are a few risks:
Data leakage. This is where a command can be injected to prompt the AI model to reveal sensitive information or to even leak sensitive data from a previous session that the current user may not be authorized to access.
Trick the LLM into revealing API keys. Threat actors can then exploit to gain unauthorized access to cloud environments and other valuable digital assets, maliciously configure access controls such as turning off multi-factor authentication (MFA) to bypass IAM defenses and even carry out data breaches to compromise personally identifiable information (PII).
Poisoning the language model to spread false information through commands that inject bogus data and even running malicious code that can increase exposure to malware infections.
2. Romance Scams and Deepfakes
What are romance scams?
Romance scams occur when a scammer develops an online romantic relationship with the victim to gain their trust and exploit them, often financially. Scammers typically hide under a false identity by setting up fake online profiles to lure in potential victims, especially through dating and social media sites, and ask for money from the victim upon gaining their trust.
Why worry about romance scams?
GenAI. Romance scammers have begun weaving generative AI into their malicious tactics. For example, a common telltale sign of a romance scam is that the scammer relies on text messaging to communicate with the victim and avoids phone calls or meeting in person as their voice can reveal their true identity or location. However, AI-generated voices can now allow scammers to impersonate many different voices, including accents from various locations, ages, and genders.
Deepfakes. Another example of how generative AI poses a threat in romance scams is using deepfakes to conduct video calls with the victim. As deepfakes continue to advance in quality, scammers can use this technique to make their fake online personas seem more realistic and further manipulate the victim as video calling can carry more emotional weight than regular text messaging.
3. Improved Spear Phishing
What is spear phishing?
Spear phishing, a form of phishing that is personalized towards its targeted victim, has gained a new lethal potency in targeting victims through the assistance of LLMs.
When Fortra’s 2025 Email Threat Intelligence Report revealed that a staggering 99% of email threats were social engineering attacks or contained phishing links, it is no surprise that attackers are amping up their email attacks by incorporating AI to strengthen their phishing attempts. Recent warnings and research about email AI attacks have revealed that AI crafted attacks are now beating traditional human attacks.
Why worry about spear phishing?
Threat actors can leverage AI to target the victim’s LinkedIn account to identify their workplace information and carry out business email compromise (BEC) attacks against them or even target their social media and other public profiles to gather as much information as possible to craft highly advanced and personalized spear phishing attacks. This poses a particular challenge to both organizations and users as spear phishing attempts can be difficult to identify due to their personalized nature which adds an element of realism to the lure. Additionally, unlike traditional human threat actors or cybersecurity red teams, these AI generated attacks can be conducted at a large and unlimited scale which further exasperates this threat.
4. Bypassing Linguistic Barriers
What are linguistic barriers in cybersecurity?
LLMs have unlocked improved translation capabilities as AI-generated translations continue to produce more natural-sounding texts that better capture slang and human conversational cues. Attackers can harness this capability to expand the geographical horizon of their targets.
Why worry about smarter translations?
Scams and other social engineering attacks that have proven to be successful in one language can now be effectively translated into other languages to reach victims from new locations around the world.
Not only does this allow threat actors to expand their geographic outreach and bypass linguistic barriers, but this can also increase the success rate of attacks because the newly targeted regions are often less familiar with these scams and users may lack the awareness needed to identify the signs of these attacks.
For example, financial scams that tend to attract a lot of victims in North America, such as payroll diversions, can be translated into other languages to target other continents that were not victimized by these threat actors before.
Fortra’s monthly BEC Global Insights Report revealed that the average amount requested in wire transfer attacks was a staggering $81,091 in April 2025, putting them at the forefront of one of the most effective financial scams to target victims. Organizations can expect to see such effective and widespread scam tactics translated into different languages, especially in never seen before languages and regions, as attackers continue to identify new tricks to maximize the efficacy and reach of their lures.
5. Shadow AI
What is shadow AI?
Shadow development, the use of software development practices that has not been approved by an organization, has historically been one of the most prominent end user risks when it comes to employee non-compliance with IT policies. However, we can now add Shadow AI to the list of end user risks that IT and cybersecurity professionals worry about. Shadow AI refers to the unsanctioned or unauthorized use of AI tools and resources.
Why worry about shadow AI?
When almost 60% of employees have entered high-risk information into generative AI technologies, the threat of shadow AI is rampantly on the rise. This can expose organizations to the risk of data leakage because LLMs can be trained on user input, which can then be included in the output of newer AI model versions.
For example, an employee can accidentally leak sensitive personally identifiable information (PII) or an organization’s proprietary software code if it were unintentionally included as input in their AI prompts. This privacy breach can expose organizations to the risk of various damages such as regulatory fines, reputational damages, legal breaches of NDAs, and other consequences.
Conclusion
Artificial Intelligence, like any other innovative tool or technology, can be used to accomplish both the bad and the good depending on who is wielding it. Attackers will always find a way to exploit these tools. Although it can seem overwhelming to defend against such an easily scalable tool such as AI, Fortra can help you fight fire with fire by offering various machine learning-based solutions that keep pace with the threat landscape and integrate AI to fortify your threat detection capabilities.
Cybercrime Intelligence Shouldn't Be Siloed
Fortra® experts are dedicated to protecting organizations and the public by delivering the latest insights, data, and defenses to strengthen security against emerging cyber threats.
Fortra’s approach to AI/ML leverages these innovative technologies to deliver direct security value to our customers.