An analysis of the market for buying and selling previously unknown software flaws, known as zero-day vulnerabilities, suggests that paying researchers to privately disclose security bugs to the developer works best to deplete the pool of easy-to-find flaws.
The research was conducted by economics and policy researchers at the Massachusetts Institute of Technology, Harvard University, Facebook and vulnerability-management service provider HackerOne.
By using a type of analysis known as system dynamics modeling, the researchers studied the incentives for each of the people or parties involved in the software development and vulnerability mitigation processes.
The researchers found that paying security specialists, whether with kudos or cash, does work, but primarily by finding and removing from the vulnerability pool the low-hanging fruit of software security—the easy-to-find bugs.
"Incentives can be anything—even recognition and acknowledgment works for some," Katie Moussouris, chief policy officer for HackerOne, told eWEEK. "There are more and more opportunities for people to make cash. But bug bounties alone are not the more efficient way to drain the offensive pool."
However, paying security specialists to create tools to find classes of vulnerabilities had a more significant impact on software security in the model. Essentially, rather than buying the fruit of researchers' labors, defenders should pay for the tools used to harvest the proverbial fruit.
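The intuition behind that finding can be illustrated with a toy simulation. This is a hypothetical sketch, not the researchers' actual system dynamics model: it assumes bounties drain the vulnerability pool at a roughly constant rate (one bug per report), while class-killing defensive tools remove a fraction of whatever remains each period.

```python
# Hypothetical sketch (not the study's model): a toy system-dynamics view of
# a software vulnerability pool. Bounties are a linear drain (one bug per
# report); tools that detect whole vulnerability classes are a proportional
# drain on the remaining pool. All rates are illustrative assumptions.

def simulate(pool: float, periods: int, bounty_rate: float, tool_fraction: float) -> list:
    """Return the vulnerability pool size after each period."""
    history = []
    for _ in range(periods):
        pool -= bounty_rate            # individual bug reports: linear drain
        pool -= pool * tool_fraction   # class-killing tools: proportional drain
        pool = max(pool, 0.0)
        history.append(pool)
    return history

bounty_only = simulate(1000, periods=5, bounty_rate=50, tool_fraction=0.0)
with_tools = simulate(1000, periods=5, bounty_rate=50, tool_fraction=0.3)
```

Under these made-up parameters, bounties alone leave three-quarters of the pool after five periods, while adding tool-driven discovery drains most of it—a rough picture of why buying the harvesting tools outperforms buying individual pieces of fruit.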
Security researchers, hackers and software developers have debated the appropriate way to disclose flaws—and improve software security—for more than two decades. While disclosing flaws has embarrassed many software companies and induced them to take software security more seriously, publicizing vulnerabilities can also lead to breaches and harm customers.
After many years of debate, researchers and software companies have reached a digital detente known as coordinated disclosure, where researchers give software vendors a reasonable chance to fix a flaw and software companies work with researchers in good faith to fix the issue.
Moussouris worked with Michael Siegel and James Houghton at MIT's Sloan School, and Ryan Ellis at Harvard Kennedy School of Public Policy. Collin Greene at Facebook, which sponsored the research, provided additional research and input into the modeling.
The researchers found that the "many eyes" theory espoused by open-source advocates can only eliminate so many bugs. Beyond that point, automated bug-hunting technology—defensive tools—is needed to efficiently find flaws.
Using bug bounties and rewards for defensive-tool research will eliminate many of the basic security holes that allow criminal attacks. But offensive efforts by groups with deep pockets—such as national intelligence programs—will remain unaffected, according to Moussouris. The researchers producing exploitable vulnerabilities at that level are less likely to use mass-market tools and more likely to use their in-depth knowledge of the target systems, she said.
"Most of the offensive finders don't use a lot of tools in their work," Moussouris said. "They have a knack for finding vulnerabilities that are not tool-based, but defenders rely on the tools."