Microsoft Corp. is spending time and money—lots of it—courting people such as Tom Ferris, an independent security researcher who runs the Web site Security-Protocols.com. And for good reason.
Ferris, who uses the online name “badpack3t,” has discovered a number of serious holes in the Redmond, Wash., company's products in recent months.
These holes include vulnerabilities in the Internet Explorer Web browser, Windows RDP (Remote Desktop Protocol) and the Windows XP kernel, as well as in a wide range of other companies' software, including the Firefox browser.
Microsoft wants desperately to bring skilled hackers such as Ferris under its umbrella—either by hiring them directly or by winning them over at events such as Black Hat and the Microsoft-sponsored Blue Hat conference.
However, unlike other security researchers, Ferris often disregards Microsoft's pleas to keep his vulnerability information a secret until after the company has issued a patch.
In an interview last week with eWEEK, Ferris explained that his approach to cracking Windows code relies on both off-the-shelf and custom tools, as well as a good dose of persistence.
Here's how Ferris conducts his bug hunts:
Step 1: Knowing where to look. Programs such as Windows or IE are enormous, containing millions of lines of code. That makes scouring the whole code base for holes impractical.
Finding success as a hacker or bug hunter is all about knowing where to look, Ferris said. “I look for any input validation. Anything that would touch an external-facing service, like RDP,” he said.
Ferris said it also pays to think outside the box. “I like to look where other people don't look. I might see where Microsoft has recently patched. A lot of times they introduce a new flaw where they patched,” he said.
Step 2: Getting the code. Unlike open-source products, Microsoft's code is proprietary, meaning that hackers such as Ferris have to reverse-engineer programs to figure out how they work. “I start by reversing the program to figure it out, using debuggers like SoftIce and binary analysis tools like IDA Pro, to translate DLL or [executable] files into assembly language, the human-readable version of the binary code computers use as instructions,” Ferris said.
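The first step of that reversing work is simply making sense of the binary's own layout before handing the code off to a disassembler. As a minimal illustration (not Ferris's actual tooling), the sketch below parses the headers of a Windows DLL or executable with only Python's standard library to find the entry-point address, the spot where a reverser would typically start disassembly:

```python
import struct

def pe_code_entry(data: bytes) -> int:
    """Read the entry-point RVA from the headers of a PE image (DLL/EXE).

    A tiny first step of binary analysis; translating the code found there
    into assembly is the job of a disassembler such as IDA Pro.
    """
    if data[:2] != b"MZ":                                 # DOS header magic
        raise ValueError("not an MZ executable")
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]   # e_lfanew field
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":    # PE signature
        raise ValueError("PE signature not found")
    # The optional header starts 24 bytes past the PE signature
    # (4-byte signature + 20-byte COFF file header);
    # AddressOfEntryPoint sits at offset 16 within it.
    return struct.unpack_from("<I", data, pe_offset + 24 + 16)[0]
```

In practice a researcher would feed the bytes of a real DLL to this kind of routine; the PE field offsets used here come from the published Microsoft PE/COFF format.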
Step 3: Finding the hole. Once Ferris has rendered the original code into assembly language, he can begin testing it for security holes. His method is “fuzzing”: sending thousands of combinations of intentionally malformed data to the parts of the code he suspects could contain holes. “I wrote my own fuzzer. … I usually let it fuzz overnight,” he said.
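Ferris hasn't published his fuzzer, but the core idea is simple enough to sketch. The Python below is a minimal mutation fuzzer in that spirit: it takes a known-good input, flips random bytes to produce malformed variants, and records any variant that makes the (hypothetical) `target` parser blow up:

```python
import random

def mutate(seed: bytes, rng: random.Random, max_flips: int = 8) -> bytes:
    """Return one malformed variant of a valid input by flipping random bytes."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, max_flips)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0):
    """Feed mutated inputs to `target` and collect every input that crashes it."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:          # the parser under test choked on the input
            crashes.append(case)
    return crashes
```

A real fuzzer aimed at a compiled program would deliver the input over a file, socket or API call and watch for a process crash rather than a Python exception, but the mutate-send-observe loop, left running overnight, is the same.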
Often, Ferris wakes up to find that the program he is testing has crashed, a sign that the fuzzer has triggered a vulnerability in the code.
“With the [Windows] RDP hole … I started sniffing [RDP] traffic and fuzzing every packet. I'd sniff the connection, and the first packet over the wire, I'd fuzz that for a few days, and nothing happened,” he said. “So then I'd get the second packet and do the same thing, and the next packet and the next. Around the third or fourth packet, I fuzzed it with about 1,000 variations and got a crash.”
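That packet-by-packet workflow can be sketched in a few lines. In the hypothetical harness below, `session` is a list of raw packets sniffed from a legitimate client; the packets before the one being fuzzed are replayed verbatim so the server reaches the right protocol state, and then roughly 1,000 mutated copies of the chosen packet are delivered, one connection at a time:

```python
import random
import socket

def variants(packet: bytes, count: int, rng_seed: int = 0):
    """Yield `count` malformed copies of one captured packet."""
    rng = random.Random(rng_seed)
    for _ in range(count):
        data = bytearray(packet)
        data[rng.randrange(len(data))] = rng.randrange(256)  # corrupt one byte
        yield bytes(data)

def fuzz_session(host, port, session, stage, count=1000):
    """Replay a sniffed session, mutating only the packet at index `stage`."""
    for case in variants(session[stage], count):
        with socket.create_connection((host, port), timeout=5) as s:
            for pkt in session[:stage]:   # replay the handshake unchanged
                s.sendall(pkt)
            s.sendall(case)               # deliver the malformed packet
```

Fuzzing the first packet for days, then the second, and so on, corresponds to incrementing `stage`; a monitor on the server side (or a dropped connection) signals the crash.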
Step 4: Analyzing the vulnerability. With evidence of a vulnerability, Ferris zeros in on the section of vulnerable code to see how the hole could be exploited. “I'll attach the program to a debugger and just keep crashing it,” he said. “I'm trying to figure out what piece of input code is crashing it. Is it exploitable or not?”
While many holes are not critical, Ferris said it's hard to know at first how big a particular vulnerability might be. “A hole may look like a DoS [denial of service] at first, but if I put this many characters here, all of a sudden I'm controlling memory,” he said.
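That “this many characters” observation suggests a probe: vary the length of the suspicious field and find the point where the target's behavior changes, which often marks the boundary of an overflowed buffer. The sketch below bisects for that threshold; `crashes_at(n)` is again a hypothetical harness that rebuilds the input with an n-byte field and reports whether the target crashes:

```python
def length_threshold(crashes_at, low=1, high=4096):
    """Bisect the smallest field length at which `crashes_at` reports a crash.

    Assumes crashes are monotonic in length within [low, high]: once an input
    is long enough to crash the target, longer inputs still crash it.
    """
    if not crashes_at(high):
        return None                   # never crashes in this range
    while low < high:
        mid = (low + high) // 2
        if crashes_at(mid):
            high = mid                # crash happens at mid or earlier
        else:
            low = mid + 1             # threshold is above mid
    return low
```

Comparing that threshold with the size of nearby stack or heap structures in the debugger is what tells the researcher whether the bug stops at a DoS or lets the input overwrite memory the attacker can control.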
While few hackers can ferret out big holes at the kernel level, as Ferris has, he said that finding security holes isn't all that hard. “This is very basic stuff. It amazes me that Microsoft doesn't find this stuff itself,” he said.