The issue of full disclosure has come up in two prominent examples in the past week.
The first was the discovery that AT&T’s Website had been exploited to swipe e-mail addresses of Apple iPad owners. The second was the disclosure of a vulnerability affecting Windows XP and Windows Server 2003 by a Google engineer.
History has shown that companies tend to respond faster under the threat of public scrutiny than without it. But when does disclosure cross from responsible to irresponsible? As these two incidents show, the rules governing the practice remain cloudy.
In the case of the AT&T leak, the question is disputed. Goatse Security found a flaw in AT&T’s Website that allowed the group to pull the e-mail addresses of 114,000 Apple iPad 3G owners from an AT&T Web server. The group admits it did not contact AT&T directly with the findings, but contends that it made sure the security hole was closed before going public with what it had found.
However, Sean Sullivan, security advisor for F-Secure’s North American Labs, told eWEEK it was “completely irresponsible” for Goatse Security to grab actual data.
“There is no reason why the Goatse Security group needed to write a PHP script to automate the harvesting of data,” Sullivan said. “Once the vulnerability was confirmed, it should have been reported to AT&T. Continuing to harvest the data should be considered criminal. They only did it to sensationalize the issue and they are guilty of violating personal privacy.”
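The distinction Sullivan draws, a single probe to confirm the hole versus a scripted sweep of the whole ID space, can be sketched in a short Python simulation. Everything here is invented for illustration: the `vulnerable_lookup` endpoint, the device IDs, and the addresses are stand-ins, not AT&T’s actual interface.

```python
# Simulated server-side table: device IDs mapped to e-mail addresses,
# returned with no authentication check -- the flaw itself.
ACCOUNTS = {10001: "alice@example.com",
            10002: "bob@example.com",
            10003: "carol@example.com"}

def vulnerable_lookup(device_id):
    """Return the e-mail tied to a device ID without verifying the caller."""
    return ACCOUNTS.get(device_id)

def confirm_flaw(test_id):
    """One probe: enough to prove the hole is real, then stop and report."""
    return vulnerable_lookup(test_id) is not None

def harvest(start, stop):
    """Automated enumeration of sequential IDs -- the step Sullivan
    objected to, since it collects real subscribers' data."""
    return {i: vulnerable_lookup(i)
            for i in range(start, stop)
            if vulnerable_lookup(i) is not None}

# Confirming the vulnerability takes a single request...
print(confirm_flaw(10001))          # True
# ...while harvesting sweeps every ID in range.
print(len(harvest(10000, 10010)))   # 3
```

In this toy model, the line Sullivan draws falls between `confirm_flaw`, one request that proves the bug exists, and `harvest`, the automated loop that turns a proof of concept into a mass privacy violation.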
This, then, leads to possible boundary No. 1: Do not launch an exploit of a vulnerability that you find. However, a successful exploit is the surefire proof that a vulnerability is real and not theoretical, is it not? And withholding an exploit presumes attackers have not already come up with it themselves.
Incident two: Google engineer Tavis Ormandy gave Microsoft five days to patch before going public with a vulnerability, essentially forcing Microsoft’s hand. The day he went public (June 10), Microsoft published an advisory warning of the bug. As of June 14, the company has yet to say when a fix will be ready.
But whether five days was fair play or not depends on whom you ask, as Microsoft countered on its Security Response Center blog that the workaround proposed by Ormandy can be circumvented easily, underscoring that more time was needed to come up with a solution.
Ormandy was responsible for catching several of the bugs patched in a recent update to Adobe Flash Player, noted Andrew Storms, director of security operations for nCircle, who speculated that the recent sparring between Microsoft and Google may have played a part in the decision.
“It’s interesting that he chose to report these bugs to Adobe and not go public before they fixed them,” Storms said. “This discrepancy in behavior is sure to add to the ongoing speculation about his motives with the Microsoft disclosure.”
In his announcement of the bug, however, Ormandy stated that he was working on his own, and not on behalf of Google.
“I would like to point out that if I had reported [the issue] without a working exploit, I would have been ignored,” Ormandy wrote. “This document contains my own opinions. I do not speak for or represent anyone but myself.”
Companies surely need sufficient time to patch. But just what is sufficient time? One month? Two?
Fundamentally, both cases show that responsible disclosure remains a swamp full of murky water. Perhaps the only thing really clear is what history has already shown: security by obscurity does not work, and if there is a one-size-fits-all approach to full disclosure, it has yet to present itself.