APIs can no longer be relegated to the back burner of security. Because APIs are one of the primary ways in which GenAI models function, API security is closely linked to – if not synonymous with – AI security.
The issue is changing the mindset. Before the AI wave, APIs were developers’ primary tool for connecting applications on the backend. They still are. However, sitting squarely in the dev circle for so long meant they were often used without DevSecOps in mind, or shipped fast and loose without being properly secured.
We cannot afford to let that mentality carry over into API usage when we’re dealing with GenAI and LLMs; the stakes are too high.
In this blog, we’ll discuss the risks that unsecured APIs pose to sensitive AI models and how Fortra provides innovative teams with the API protection they need.
We Know GenAI Models Rely on APIs. Here’s How Much
As stated in Fortra’s recent Guide to Secure AI Innovation, “APIs are a key interface for GenAI systems, especially when LLMs are integrated into customer-facing applications or accessed via public endpoints.”
APIs are the primary means by which users access AI models and agents. Instead of downloading the whole model, developers call it with an API request. So those APIs had better be securely connected.
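For illustration, here’s a minimal sketch of that pattern: the model stays on the provider’s infrastructure, and the application reaches it through an authenticated HTTPS call. The endpoint URL and payload shape below are hypothetical placeholders, not any specific vendor’s API.

```python
import os
import requests

# Hypothetical hosted-LLM endpoint; real providers differ in URL and payload schema.
ENDPOINT = "https://api.example-llm-provider.com/v1/chat"

def ask_model(prompt: str) -> str:
    """Send a prompt to a remotely hosted model via an authenticated API request."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP-level failures instead of ignoring them
    return response.json()["output"]
```

Every piece of that exchange – the credential, the request, the response – travels through the API layer, which is exactly why that layer needs to be locked down.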
APIs are the “hands and eyes” of GenAI, as GenAI cannot take direct action by itself. For example (a short sketch follows this list):
A GenAI model may write SQL, but an API executes it
A GenAI model may analyze a ticket, but an API updates it via ServiceNow or Jira
A GenAI model may create content, but it takes an API to post it to a CMS
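Here’s a minimal sketch of that division of labor, using a hypothetical ticketing endpoint (real systems like ServiceNow and Jira have their own schemas): the model only drafts the change, and a separate, authenticated API call actually performs it.

```python
import requests

# Hypothetical ticketing endpoint, for illustration only.
TICKET_API = "https://tickets.example.com/api"

def post_ticket_comment(token: str, ticket_id: str, comment: str) -> None:
    """Only this authenticated API call actually changes the ticket."""
    resp = requests.post(
        f"{TICKET_API}/tickets/{ticket_id}/comments",
        headers={"Authorization": f"Bearer {token}"},
        json={"body": comment},
        timeout=15,
    )
    resp.raise_for_status()

# Step 1: the GenAI model analyzes the ticket and only *drafts* a comment.
suggested_comment = "Looks like a duplicate of INC-1042; recommending closure."  # e.g., model output

# Step 2: the API layer, with its own credentials and permissions, performs the action.
post_ticket_comment("service-account-token", "INC-2001", suggested_comment)
```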
All of the things we credit GenAI for “doing” are actually done by APIs. That means APIs have a hand in everything, making them a prime target for adversaries looking to hack AI.
AI and APIs: A Symbiotic Relationship
But the give and take goes both ways. Built on the power of GenAI technology, APIs can now return unstructured outputs that were never in their wheelhouse before. These include (again) tasks credited to AI:
Summaries
Natural language explanations
Code
Images
Recommendations
And more. In return, APIs give AI models the information they need. This domain-specific data, fed by APIs into GenAI agents, includes:
Security alerts
Financial data
Logs
Telemetry
Product and customer information
To name a few. But the relationship doesn’t stop there. Besides functionality, APIs can also play a huge role in AI security – for better or worse.
How APIs Enable AI Security – or Tear It Apart
While super-connected APIs may seem like one of GenAI’s most significant security liabilities, they are also its greatest asset.
APIs keep GenAI within controlled boundaries, allowing it to operate within compliance standards and security limits. For example, when it comes to GenAI models (a sketch of these controls follows the list):
APIs can enforce authentication and access control.
APIs can filter GenAI output before returning it.
APIs can validate user input before sending it to GenAI.
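To make that concrete, here’s a minimal sketch of an API layer wrapping a model behind all three controls. The helper names, the toy regex filters, and the `call_model` stub are all hypothetical; the point is that every request and response passes through the API’s checks rather than going straight to the model.

```python
import re

AUTHORIZED_TOKENS = {"team-a-token"}  # stand-in for a real auth store
BLOCKED_INPUT = re.compile(r"(?i)ignore previous instructions")  # toy prompt-injection filter
SECRET_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., SSN-shaped strings in output

def call_model(prompt: str) -> str:
    """Stub for the actual GenAI call (see the earlier request example)."""
    return f"Model response to: {prompt}"

def handle_request(token: str, user_input: str) -> str:
    # 1. Enforce authentication and access control.
    if token not in AUTHORIZED_TOKENS:
        raise PermissionError("unauthenticated request rejected")
    # 2. Validate user input before sending it to GenAI.
    if BLOCKED_INPUT.search(user_input):
        raise ValueError("input failed validation")
    # 3. Filter GenAI output before returning it.
    return SECRET_PATTERN.sub("[REDACTED]", call_model(user_input))
```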
In other words, APIs provide the practical measures to ensure GenAI models are safe, transparent, and auditable – which is why API security is so essential.
Emerging standards like the Model Context Protocol (MCP) are likewise designed to enable richer interactions between AI models and external tools or data sources. While MCP unlocks powerful capabilities, it also broadens the integration surface, introducing new vectors for potential exploitation.
Each additional context channel or connector becomes part of the expanded attack surface, requiring rigorous validation and security controls.
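As a sketch of what that surface looks like, here is a minimal MCP server exposing a single tool, written against the official Python `mcp` SDK’s FastMCP interface (treat the details as illustrative and check the SDK documentation for the current API). Every tool like this is one more connector for the model to call, and it needs the same input validation as any other endpoint.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-tools")

@mcp.tool()
def get_ticket_summary(ticket_id: str) -> str:
    """Return a summary for a ticket ID (stubbed here for illustration)."""
    # Validate before touching any backend: never trust model-supplied arguments.
    if not ticket_id.isalnum():
        raise ValueError("invalid ticket id")
    return f"Summary for ticket {ticket_id}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```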
Why AI Security Won’t Take Care of Itself
Because APIs are so hyper-connected, an API breach can have a more severe impact than a breach of almost any other system. According to Gartner, an API breach leaks 10 times more data than a typical data breach. Imagine that amount of data being let loose from an internal GenAI agent your employees use for work – and not always responsibly.
The Oh, Behave! report notes that roughly four in ten employees report submitting sensitive workplace information into AI tools without their employer knowing. And a recent industry study reveals that the amount of sensitive data poured into chatbots increased by 156% YoY.
Most users lack sufficient AI security training to really understand the risks they are dealing with. Fortra Human Risk Management offers solutions to mitigate the impact of poor data decisions made by employees.
However, in the meantime, something needs to be done from a policy and technical standpoint to secure APIs – especially those that connect us to the GenAI tools we use every day – and do so quickly.
Securing APIs for the GenAI World
To secure AI models, we can adopt the principle of “safe in, safe out.” And the number one way to keep things safe is by securing APIs – their hands and eyes.
Fortra Managed WAF for API security offers expert-driven protection, so teams don’t have to bring the expertise themselves. Integrating with APIs involves enough unknowns as it is, and introduces plenty of opportunities for user-side error. When it comes to locking down the APIs that can make or break your security, you want to leave it to the experts.
Here’s how it works. Fortra Managed WAF offers:
API Discovery: No more shadow APIs
API Scanning: Request validation, known vulnerabilities, and OWASP Top 10
API Offensive Security Testing: API Dynamic Application Security Testing (DAST), application pen tests
API Application Attack Protection: Safeguards against man-in-the-middle attacks, broken access validation, request forgery, scraping, XSS, SQL injection, and URL tampering
API Advanced Protection: Behavior-driven detection goes beyond OWASP Top 10 and CWE Top 25 to catch zero-days, AI-crafted threats, and more.
As teams use AI models to accomplish more, they navigate a fine line between having it all and throwing it all away. GenAI security predominantly comes down to API security, and the teams that have it can innovate with confidence.
Find out how Fortra enhances security of AI, from AI, and with AI.
Check out our Guide to Secure AI Innovation, released by the Fortra FIRE team.