Attackers increasingly target the identity layer, “abusing overprivileged accounts, misconfigured roles, or insecure tokens to gain lateral access,” as noted in the Fortra FIRE team’s recent Secure AI Innovation guide.
Forbes reports that 75% of attacks now rely on identity-based techniques, and recent research reveals that 90% of organizations have experienced an identity-related security incident in the past twelve months.
Identity is the leading threat vector as we settle into the age of AI. Unsurprisingly, AI is both the perpetrator and the victim of identity-based cybercrimes - as well as the way to defend against them.
Companies that want high-impact ways to “stop the bleeding” (at least 75% of it) need to understand what attackers leverage when launching these attacks and what protection looks like in a modern, AI-supported environment.
Where Does Identity Begin and End?
The identity and access management journey begins with enrollment and ends with deprovisioning, with a lot that goes unmanaged in between. When AI systems are in use, each one needs its own protected identity, making the task of identity security even more demanding.
When done correctly, the IAM journey for users typically includes three phases:
Identity Creation
The user is enrolled and given a digital identity.
It is secured with usernames and passwords, MFA, passwordless, biometrics, etc.
Access controls are applied according to role, department, and clearance level, following the principle of least privilege.
Identity Maintenance and Monitoring
Authentication: The user’s identity is verified every time they access a system.
Authorization: Where access controls come into play, ensuring the user can only access what they are allowed to.
Maintenance: As users’ roles change, their permissions and access rights should adjust accordingly.
Monitoring: Constant monitoring flags non-compliant behavior, unauthorized access attempts, and instances of misuse that go against things like AI acceptable use policies.
De-Provisioning
Deactivation: The identity is deactivated as soon as the employee leaves the organization.
Access Revocation: All usernames, passwords, permissions, certificates, and tokens are revoked so they cannot be abused.
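The three phases above can be sketched in a few lines of code. This is a minimal, illustrative model (the role map, function names, and permission strings are all hypothetical, not any particular IAM product’s API):

```python
from dataclasses import dataclass, field

# Hypothetical role-to-permissions map, per the principle of least privilege
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

@dataclass
class Identity:
    username: str
    role: str
    active: bool = True
    permissions: set = field(default_factory=set)

def enroll(username: str, role: str) -> Identity:
    """Phase 1: create the identity with only the access its role requires."""
    return Identity(username, role, permissions=set(ROLE_PERMISSIONS[role]))

def authorize(identity: Identity, permission: str) -> bool:
    """Phase 2: every access attempt is checked against current rights."""
    return identity.active and permission in identity.permissions

def deprovision(identity: Identity) -> None:
    """Phase 3: deactivate and revoke everything at once, not piecemeal."""
    identity.active = False
    identity.permissions.clear()

alice = enroll("alice", "analyst")
assert authorize(alice, "read:reports")       # allowed by role
assert not authorize(alice, "manage:users")   # least privilege: denied
deprovision(alice)
assert not authorize(alice, "read:reports")   # nothing survives deprovisioning
```

The key point the sketch makes is that authorization is evaluated on every request against the identity’s current state, so deprovisioning immediately closes off all access.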
Protecting AI Identities
Autonomous AI agents operate much like human users, especially in AI SOCs and agentic workflows. They, and their access, need to be protected in the same way. Fortra provides identity-level security for AI systems by applying the following best practices:
Assigning Identities: Each AI agent gets a unique, scoped identity with clearly defined roles and boundaries.
Monitoring Identities: Continuous monitoring of AI agent behavior ensures that anomalies are flagged to provide early warning of identity compromise.
Principle of Least Privilege: AI agents, no matter how useful, are still limited to a “need to know” basis.
Privacy-First: AI agents serve best when companies build compliance and security directly into the architecture rather than bolting them on as an afterthought.
Lastly, bringing humans into the loop with explainability and intervention tools is key to ensuring that only properly monitored AI identities can operate within an environment.
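Those best practices compose naturally: a unique identity, a narrow action scope, and a flag queue for human review. Here is a hedged sketch of what that might look like; every name below is hypothetical, not Fortra’s implementation:

```python
import uuid

class AgentIdentity:
    """Illustrative scoped identity for an AI agent."""

    def __init__(self, name, allowed_actions):
        self.agent_id = str(uuid.uuid4())    # unique identity per agent
        self.name = name
        self.allowed_actions = frozenset(allowed_actions)  # least privilege
        self.flags = []                      # anomalies queued for human review

    def request(self, action):
        """Allow in-scope actions; deny and flag everything else."""
        if action in self.allowed_actions:
            return True
        # Out-of-scope request: deny it and surface it to a human-in-the-loop
        self.flags.append(action)
        return False

triage_bot = AgentIdentity("soc-triage", {"read:alerts", "annotate:alerts"})
assert triage_bot.request("read:alerts")       # within scope
assert not triage_bot.request("delete:logs")   # denied and flagged
assert triage_bot.flags == ["delete:logs"]
```

The design choice worth noting is that denial and detection happen together: an out-of-scope request is both blocked and recorded, giving reviewers an early-warning signal of a compromised or misbehaving agent identity.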
While it may seem like limiting “limitless” AI agents with IAM rules curbs their productivity potential, the risk of overprivileging them (or their users) is far costlier in the long run.
Why Are Overprivileged Accounts Still Proving to be a Problem?
Failing to operate by the principle of least privilege bodes just as ill for AI systems as it does for human operators.
AI is being used with increasing accuracy and success to launch identity-based attacks. Once an account has been successfully taken over, the only remaining failsafes are privilege and access limitations, which many organizations still fail to put in place.
Without those internal guardrails, there is nothing to stop threat actors from pivoting to more sensitive assets throughout the network, including AI systems. IBM notes that a shocking 97% of organizations with breached AI applications lacked any access controls whatsoever.
Overprivileged accounts are still an issue because weak passwords are still an issue: Valid Accounts (MITRE ATT&CK T1078) are now the most exploited path to compromise and offer attackers a staggering 98% success rate. If this is how organizations are handling vital human identities, how are they handling access for AI?
Detecting Foul Play at the Outset with Continuous Monitoring
While stronger passwords and an immediate ZTNA overhaul may seem to be the answer, protecting identities comes down to more than policies - even good ones. The second half of the battle is oversight, and that’s where continuous monitoring comes in.
Fortra XDR provides visibility into the first instances of malicious user activity so organizations can spot identity breaches before they escalate.
With 24/7 visibility across modern and hybrid cloud infrastructure, teams can strengthen the last line of defense for the identity layer: detection. From catching IAM policy infractions to spotting abnormal AI model activity to investigating unauthorized data usage and more, Fortra XDR reduces the ways identity-targeting attackers can hide in plain sight.
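The detection idea behind continuous monitoring can be illustrated with a toy baseline model: learn what normal looks like for each identity, then flag deviations. This is only a sketch of the concept (the class, bucketing scheme, and thresholds are assumptions for illustration, not how any real XDR product works):

```python
from collections import defaultdict

class LoginMonitor:
    """Toy baseline-deviation detector for identity activity."""

    def __init__(self):
        # username -> set of observed (country, six-hour time bucket) pairs
        self.baseline = defaultdict(set)

    def observe(self, user, country, hour):
        """Record a known-good login to build the per-identity baseline."""
        self.baseline[user].add((country, hour // 6))

    def is_anomalous(self, user, country, hour):
        """Flag logins outside anything previously seen for this identity."""
        return (country, hour // 6) not in self.baseline[user]

mon = LoginMonitor()
for h in (9, 10, 11):                     # bob's normal working hours, from the US
    mon.observe("bob", "US", h)

assert not mon.is_anomalous("bob", "US", 10)   # matches the baseline
assert mon.is_anomalous("bob", "RU", 3)        # new country, odd hour: flag it
```

Real systems weigh many more signals (device, network, privilege use, AI model activity), but the principle is the same: the earlier a deviation is flagged, the less room an attacker has to escalate.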
Your Guide to Secure AI Innovation
In this accelerated threat landscape, every security company must embrace AI not as an option, but as an operational necessity.