What has happened?
Many of the world's top artificial intelligence companies are making a simple but dangerous mistake. They are accidentally publishing their passwords and digital keys on GitHub, the popular code-sharing website that is used by millions of developers every day.
The problem was uncovered by security researchers at Wiz, who examined 50 leading AI firms and discovered that 65% of them had accidentally exposed highly sensitive information online.
Why does this matter?
These aren't just any old passwords that have been exposed. The information the companies accidentally leaked included API keys, tokens, and other credentials capable of granting access to internal systems, training data, or even private AI models.
The researchers did not limit themselves to the AI companies' main public projects on GitHub: they also examined deleted copies of repositories (known as "forks"), code snippets ("gists"), workflow logs, and past versions of files. In doing so, they found that many of the firms had exposed sensitive information that should have been kept private, often because simple security measures had been overlooked.
In several cases, the leaked keys and tokens could actually be used to access company systems - including popular AI platforms such as Eleven Labs, LangChain, and Hugging Face.
According to the researchers, on nearly half of the occasions when they tried to alert affected companies, they received no response and the problems remained unfixed.
This sounds terrible. How big is the problem?
The affected companies are worth over US $400 billion in total, with major names such as Anthropic (the makers of Claude), Glean, and Crusoe Energy amongst those examined.
How is this happening?
The problem starts, as is so often the case with cybersecurity issues, with human error. When programmers write code, they often need to include passwords, API keys, and other credentials so that their programs can connect to the services they depend on. Problems arise when software engineers forget to remove those credentials before sharing their code publicly.
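To make that concrete, here is a minimal, purely illustrative sketch in Python. The service, the environment variable name, and the key value are all invented; the first pattern is the mistake, the second is the safer habit.

```python
import os

# Risky: a credential hard-coded into the source file. Anyone who can read
# the repository - including forks, gists, and old commits - can read the key.
API_KEY = "sk-example-not-a-real-key"  # placeholder value, not a real secret

# Safer: read the credential from the environment at runtime, so the secret
# itself never needs to be committed to version control.
API_KEY = os.environ["EXAMPLE_SERVICE_API_KEY"]  # hypothetical variable name
```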
Resolving the problem is made harder because secrets are not always readily apparent. They can be buried in deleted files (which aren't really deleted, because version history preserves them), in the personal accounts of company employees, in long-forgotten old versions of code, and in hidden notes and documentation. And, as the researchers discovered, many AI start-ups lacked a proper channel through which security problems could be reported to them.
Ok, so that's bad - but how does it affect me? I don't work at an AI company
Unfortunately for you, these are the AI companies developing the technology that is increasingly woven into our personal and professional lives. It powers the chatbots, recommendation systems, decision-making tools, and more that are likely already integral to your business, and that will only become more important in the future.
If hackers manage to gain access to these companies' systems, they could steal private AI models and training data, exfiltrate internal company communications, manipulate the AI systems we rely upon, and access information about how AI systems work.
So, what should be done?
AI firms should use tools that automatically scan for exposed passwords and credentials before code is posted publicly. They should also train software developers never to hard-code real credentials, and ensure clear channels are in place for security researchers to report problems when they find them.
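As a rough illustration of the first recommendation, the sketch below shows what an automated pre-publication check might look like in Python. It is a deliberately simplified assumption of how such a tool works: the regular expressions cover only a couple of common credential formats, whereas real secret scanners use far larger rule sets and also search version history.

```python
import re
import sys
from pathlib import Path

# Simplified patterns for common credential formats. Real scanners use far
# more extensive rule sets than the handful of examples shown here.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS-style access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),     # generic "sk-" style API key
    re.compile(r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan_file(path: Path) -> list[str]:
    """Return any suspicious-looking lines found in one source file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings

if __name__ == "__main__":
    # Scan every Python file under the current directory and fail the check
    # (non-zero exit code) if anything suspicious turns up.
    results = [f for p in Path(".").rglob("*.py") for f in scan_file(p)]
    for finding in results:
        print(finding)
    sys.exit(1 if results else 0)
```

In practice, a check like this would run automatically in a pre-commit hook or a continuous integration pipeline, so that a leaked key is caught before the code ever reaches a public repository.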
Isn't there some irony that AI companies, developing some of the most sophisticated programs the world has ever seen, are making such elementary security mistakes?
Yes.
Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor and do not necessarily reflect those of Fortra.