By 2030, 40% of global organisations could be hit by security breaches caused by "shadow AI", according to analyst firm Gartner.
Shadow AI - the use of artificial intelligence tools by employees without a company's approval and oversight - is becoming a significant cybersecurity risk.
Unlike traditional "shadow IT," which involves workers installing unauthorised software or plugging in unapproved devices, shadow AI typically requires nothing more than a web browser.
Many workers will open an AI chatbot, paste in a document or upload a spreadsheet, and ask it to produce a summary. To the employee, this may seem like a harmless time-saver, but if the data includes customer information, salary details, source code, or sensitive company plans, it has just been shared with a third-party system.
And it's not as if the people taking advantage of shadow AI can claim to be clueless about the associated security and compliance risks.
For instance, a recent report by security firm UpGuard revealed that an eyebrow-raising 90% of security leaders themselves use unapproved AI tools at work, with 69% of CISOs incorporating them into their daily workflows.
According to Gartner's research, there is already significant unauthorised use of generative AI (GenAI) in the workplace, and that usage is only expected to grow.
Microsoft agrees. Its own research, published last month, found that 71% of UK employees admitted to using unapproved AI tools at work, with 51% doing so at least once a week.
Many employees rely on AI to help them write emails, prepare presentations, or tackle financial and HR tasks. Under pressure to work faster and more effectively, many staff members turn to whichever tool is easiest to use, regardless of whether it has been approved.
The shadow AI problem is unlikely to disappear anytime soon, so organisations need to act now to reduce the risk of a breach.
"To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes," advises Gartner's Arun Chandrasekaran.
Businesses cannot simply ban the use of AI and hope for the best. A blanket ban is likely to drive staff to hide their AI use from IT departments.
Instead, the safer approach is to provide workers with company-approved AI tools designed with privacy in mind. Combined with staff training and clear guidelines on what types of information must never be shared with external services, this can help employees recognise that pasting internal documents into a random AI tool is no safer than uploading them to social media.
If companies don't take the shadow AI threat seriously now, they risk data leaks, compliance scandals, and the loss of intellectual property in the future.
Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor and do not necessarily reflect those of Fortra.