In December, Australian teenager Joshua Rogers reported a vulnerability in a regional public-transportation site that could have exposed the information of roughly 600,000 Australian citizens. Instead of working with the 16-year-old bug hunter, the government agency responsible for the site, Public Transport Victoria, contacted the police.
Such contentious relations between software developers and vulnerability researchers are not uncommon. For that reason, Australian vulnerability-research service Bugcrowd has teamed with legal experts from CipherLaw to create a framework that aims to prevent similar events in the future.
Called the Open Source Responsible Disclosure Framework, the pair of documents—posted online on July 24 under a Creative Commons license—gives companies a boilerplate disclosure policy and guidelines for setting up a corporate disclosure program.
While the documents are straightforward, their intent is twofold: to outline the basic process and best practices for companies with no prior experience in dealing with vulnerability researchers, and to ease future relationships between researchers and the companies in whose software they find vulnerabilities, Casey Ellis, CEO and co-founder of Bugcrowd, told eWEEK.
“The idea behind how we put this together is to make it as simple and as clear as possible,” Ellis said. “A lot of the people who are going to read this are not lawyers and do not have English as a first language.”
Vulnerability researchers, software vendors and security companies have debated the best way to disclose vulnerabilities for more than two decades. Microsoft, along with a variety of hackers and researchers, pioneered much of the response etiquette and policy out of necessity. In the 1990s and early 2000s, the software giant’s applications were most often targeted by vulnerability researchers as well as malicious hackers.
The most significant early document, however, known as the RFPolicy, was released in June 2000 by security researcher Jeff Forristal under the moniker "Rain Forest Puppy." The document became the basic code of conduct for researchers' efforts to contact vendors. Two years later, Steve Christey of government contractor MITRE and Chris Wysopal of security firm @stake released a more formal best-practices document to guide researchers and companies.
For the most part, companies have little recourse but to fix software vulnerabilities, as researchers are permitted by law to test software programs for defects and release information on the issues they discover. However, as Web applications and services have become more popular, the situation has changed: Legal protections for research on production Web applications favor companies, not researchers.
The legal danger is especially real for vulnerability researchers who, like Rogers, publicize vulnerabilities by reporting the issues to the press. In 2005, for example, network specialist Eric McCarty found a basic vulnerability in the online application service for the University of Southern California that could have given an attacker access to sensitive details on approximately 275,000 applicants to the university.
After McCarty reported the issue to the press, USC contacted police, and federal prosecutors pursued charges against McCarty, resulting in a 2006 felony conviction and six months of house arrest.
“What we are really worried about here is the chilling effect this has on security research,” Jim Denaro, founder of CipherLaw, told eWEEK. “We believe there is a public good in having this research continue.”
While the Open Source Responsible Disclosure Framework may not break new ground, it does make developing a disclosure policy much easier for companies, while giving researchers some guidance on the types of research a company will allow.