The start of a new year is always a good time to take stock. In cybersecurity, one perennial problem, the persistence of “vintage” open source vulnerabilities like Heartbleed and Shellshock, should force us to ask some hard questions. Why do so many vulnerabilities persist in Open Source (OS) tools, and how do we fix the problem?
In theory, OS tools should be more secure. In practice, it hasn’t worked out that way. The question is: what needs to change to address this problem and reduce its impact on the ecosystem?
How Did We Let This Happen?
When OS tools and libraries first developed, many people thought that the resulting software would be more secure than proprietary products, thanks to the wisdom of crowds. The theory was, with so many eyes on it, how could we miss anything glaring?
It turns out, though, that the theory had a fatal flaw with respect to OS software: the crowds aren’t watching. No one is incentivized to look for flaws in OS software and fix them for the good of everyone.
Despite our idealistic dreams about the general public watching for OS flaws, the only people really incentivized to scan codebases for vulnerabilities were the attackers. And even when OS tools are fixed and libraries updated, few developers update their dependencies to pull in the patched versions, so we still have things like Heartbleed running around.
As a result, there are just as many OS vulnerabilities now as there were ten years ago (even major ones), and little has changed in the ecosystem despite noteworthy vulnerabilities and significant attacks.
When something is everybody’s business, it’s often nobody’s business. The question is how can we make OS vulnerability management somebody’s business?
Pass the Potato
Of course, this problem is not unique. Other activities suffer from similar problems and over time societies have developed a few different strategies for addressing them. Government would be one classic solution, but for-profit associations or non-profit organizations are also viable options.
Public Sector: Governments
Whenever market failures exist, government action is a viable option. In this case, governments have some strong incentives to address the problem.
Argument 1: Keeping Money in the Economy
First, despite the headline-making news of nation-state attacks, cybercrime is the cybersecurity problem affecting most citizens. It impacts constituents on a personal level (think phone scams, phishing emails, BEC attacks) and siphons money out of economies that could be used elsewhere. According to the latest FBI IC3 report, BEC alone costs companies billions of dollars a year. That’s a lot of corporate and taxpayer dollars.
Strengthening the cyber resilience of the underlying ecosystem to reduce OS vulnerabilities in the first place would keep money in citizens’ pockets, not attackers’ pocketbooks. Investing money in an OS vulnerability reduction initiative dedicated to finding and fixing bugs would be nominal compared to what most governments are now spending on AI advancement.
And think about it: many fast-and-loose (and even some reputable) AI companies use OS tools like candy. How much smarter would it be to ensure those investments stay safer upstream?
Argument 2: Safety of Citizens and Critical Infrastructure
When we consider public safety and the security of critical infrastructure, the argument gets even stronger. Nation-state operators often use vulnerabilities in OS code that integrates with critical infrastructure through third parties. Preventing disruption in public services and maintaining public safety definitely falls within the government’s remit.
Corporate Associations
Although governments have an incentive to reduce OS vulnerabilities, private sector companies also have reasons to address this issue. Given how much OS code they build on, large tech companies have a vested interest in more secure upstream code. Further, many employees at these companies help create OS code. For example, employees at Microsoft contribute more OS code than those at any other company.
Getting any one company to commit resources to fixing OS vulnerabilities would face a lot of challenges. For one thing, a single company would worry about others “free riding” on its work; for another, many in the OS community would not trust a single company’s motives. For these reasons, if we want large tech companies to take the lead in reducing OS vulnerabilities, the sector would need to create an independent trade association dedicated to this issue or have an existing association take on this mission.
Nonprofits
Another possible option would be a charitable nonprofit. These organizations sit between the public and private sectors, can serve as neutral ground, and, if structured properly, are free from other conflicts of interest. A dedicated nonprofit would fit well with the OS community ethos, and several models already exist. The key challenge for nonprofits is financing, so a dedicated funding model would need to be identified to make this option viable.
Making OS Security Somebody’s Business: Is It Us?
Despite all the could-be’s and should-be’s, most entities are not going to act unless they have a personal stake. At the end of the day, the ones with the most at stake are the users. If manufacturers, retailers, defense contractors, public school systems, healthcare providers, and so on raise their standards for OS code security, that change will drive changes in the ecosystem. When these players say, “we’re not accepting any OS code unless it meets these standards,” they force contractors, third-party developers, and even their own internal teams to take responsibility.
Just as vulnerabilities will always exist in proprietary code, OS code will never be “bug free.” Some developers will always try to take shortcuts, and not all users will be thorough all the time. Nevertheless, we can structure the ecosystem to generate more secure OS code, while preserving the benefits that OS provides in the first place.
Disclaimer: The views and opinions expressed in this article are solely those of Michael Daniel.