According to recent research by McKinsey, 88% of organizations are using AI in at least one business function, but only a minority have scaled it meaningfully.
Meanwhile, IBM research found that 13% of organizations reported a breach involving AI models or applications, and 97% of those affected lacked proper access controls on them.
In short, AI adoption is broad, but maturity and security controls are lagging: many organizations are using AI without established governance boundaries or hardened, production-scale controls.
We must treat AI systems as first-class assets in the risk register. They require the same governance, access controls, monitoring, auditability, and incident readiness that we apply to other critical systems.
The Risk of “Too Many Hands”
Every employee can now interact with AI, and most do. But when anyone can feed data into AI systems without oversight, every prompt becomes a potential exposure point for the organization.
Key concerns include:
Unmanaged data exposure: Employees may unknowingly input regulated, customer, or proprietary data into AI systems, creating compliance violations.
Shadow AI usage: Teams adopt unsanctioned AI tools because approved options aren’t available or easy to use.
Model contamination and drift: Feeding sensitive or low-quality data into models can degrade performance or lead to unintended retention of regulated information.
Lack of accountability: Without structured guardrails governing who can use AI, how it is used, and what can be input, it becomes nearly impossible to audit decisions or fulfill regulatory obligations.
The bottom line is that AI requires the same discipline we apply to any other critical system. Without intentional governance, access controls, and oversight, AI can become a vector for data leakage, compliance failures, and operational disruptions.
Is Compliance Keeping Up with AI?
The AI compliance landscape is evolving rapidly and raises complex questions. So far, it hasn’t kept pace with AI adoption, leaving teams to chart their own path to AI security.
Organizations must navigate a patchwork of state-level AI laws in the US, emerging international regulations such as the EU AI Act, and a range of risk and governance frameworks, including the NIST AI RMF and ISO 42001. Each differs in scope, definitions, and interpretation.
On top of that, AI must still comply with existing privacy, security, and sector-specific frameworks like GDPR, SOC 2, HIPAA, and PCI DSS, none of which were designed with AI in mind.
This fragmented landscape makes consistent governance difficult and adds complexity for any organization using AI at scale. Even though regulations vary, they consistently emphasize the same principles:
Transparency
Auditability
Human Oversight
Model Accountability
This convergence shows where AI governance is headed, so organizations can start building to the “common minimum”:
Know where AI is used
Control who can use it
Protect the data
Monitor the outputs
Document decisions
Train your people
Despite the fragmented landscape, the baseline is already visible. These common principles provide organizations with a roadmap for establishing governance that will remain adaptable as regulations solidify.
Can You Retrospectively Add Access Controls to AI?
Most organizations can add access controls after the fact, but they cannot undo the exposure that occurred before those controls existed. Once sensitive data enters a model or a prompt log, the risk is already created; retrofitting only prevents future harm.
The answer is not to revoke access to AI tools altogether, but to apply the appropriate security controls while in flight, or to provide teams with safe alternatives.
As Matt Beard, director of cybersecurity and AI innovation at AllPoints Fibre Networks, states, “It’s about showing staff the benefits of using these tools safely, and making sure they’ve got corporately acceptable systems available, because ultimately they’ll look for a workaround if not.”
Some of the AI security measures teams will want to invest in include:
Data Security Posture Management (DSPM): Identify all sensitive data, including shadow data, and implement security measures based on the business impact, exploitability, and severity of any vulnerabilities surrounding it.
Data Loss Prevention (DLP): Prevent sensitive data from leaving the network, whether copied and pasted, downloaded, or otherwise fed into public GenAI models, where nothing above a “public” internal classification should be shared (see the sketch after this list).
Identity and Access Management (IAM): Role-based access control, attribute-based access control, the principle of least privilege, and multi-factor authentication limit users to only the data and permissions they need.
Prompt logging: Capture who prompted what, and when, to support compliance auditing and incident investigation.
Vendor governance: Extend third-party risk management to external LLMs and toolsets.
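To make this concrete, here is a minimal Python sketch of an internal AI gateway that combines three of the controls above: a least-privilege role check (IAM), a pattern-based scan for sensitive data (DLP), and an audit record for every prompt (prompt logging). All names, roles, and patterns are illustrative assumptions, not any particular product’s API; a real deployment would rely on a proper DLP engine, an identity provider, and an append-only audit store.

```python
"""Minimal sketch of an internal AI gateway: IAM check, DLP scan, prompt log.

Everything here (ALLOWED_ROLES, SENSITIVE_PATTERNS, submit_prompt) is a
hypothetical illustration, not any specific vendor's interface.
"""
import json
import re
from datetime import datetime, timezone

# Hypothetical role model: only these roles may reach external GenAI tools.
ALLOWED_ROLES = {"analyst", "engineer"}

# Illustrative detectors only; real DLP engines use classification labels,
# dictionaries, and ML-based detection rather than a few regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def audit(user_id: str, role: str, findings: list[str],
          allowed: bool, prompt: str) -> None:
    """Write an audit record. Log metadata rather than the raw prompt, so the
    audit trail does not become a second copy of the sensitive data. In
    production this would go to a SIEM or append-only store, not stdout."""
    print(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "role": role,
        "findings": findings,
        "allowed": allowed,
        "prompt_chars": len(prompt),
    }))

def submit_prompt(user_id: str, role: str, prompt: str) -> str:
    """Gate a prompt before it reaches any external GenAI model."""
    findings = []
    if role not in ALLOWED_ROLES:
        findings.append("role_not_permitted")
    findings.extend(
        name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)
    )
    allowed = not findings
    audit(user_id, role, findings, allowed, prompt)
    if not allowed:
        return f"Blocked: {', '.join(findings)}"
    # Placeholder for the real call to an approved model endpoint.
    return "Forwarded to approved model."

if __name__ == "__main__":
    print(submit_prompt("jdoe", "analyst", "Summarize this meeting transcript."))
    print(submit_prompt("jdoe", "analyst", "Customer SSN is 123-45-6789; draft a letter."))
    print(submit_prompt("tmp01", "contractor", "What does our roadmap look like?"))
```

Trivial as it is, the sketch shows the shape of the control: access is denied by default, blocked prompts are explained, and every decision leaves an audit trail that compliance teams and incident responders can replay.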
These layers won’t fix the past, but they establish the structure needed to operate AI safely and responsibly going forward. As standards evolve and regulators catch up, organizations that implement these controls now will be better prepared, more resilient, and far more capable of scaling AI responsibly.
Securing AI Innovation
Securing AI innovation isn’t about slowing progress; it’s about enabling it responsibly. Even with a fragmented regulatory landscape, organizations can still build a secure, responsible foundation for using AI at scale.
By applying strong access controls, safeguarding data, and implementing practical oversight, organizations can move fast and stay secure. The organizations that thrive in this next chapter won’t be the ones that fear AI, but the ones that invest in guardrails to reduce its risks.
With the right foundations in place, AI becomes not a risk to manage, but a strategic advantage to unlock.
In our Guide to Secure Innovation, we review how the Fortra portfolio enhances the security of, from, and with AI while continuously monitoring for compliance, so companies can move forward without jeopardizing their competitive advantage.
The Guide to Secure AI
Learn how Fortra’s AI-powered solutions outsmart today’s AI-powered threats.