A Hybrid Approach
So this sounds more like a hybrid approach: centralized where it makes sense, decentralized at the nitty-gritty, product-specific level.

Half of our team focuses on assurance: ethical hacking, engineering security into the development cycle and, if we didn't get it right, handling the bugs. The other half focuses on program management. What has to happen across the product stack? What are the right problems to solve? Not the sexy ones, but the very simple, basic things: someone has to get everybody lined up, decide when you ship the product, and you ought to be able to tell customers how to lock it down and make it easy to get there.

In terms of hiring researchers, there are multiple facets to this. There's hiring them as consultants, and there's who we hire to be hackers. In either case, the big quality we look for, aside from technical acumen, is ethical behavior. Sometimes I'm asked, "Why don't we hire so-and-so?" Part of the issue is, I tell people, "Look, if somebody in the past has been untrustworthy, I can't write a contract with somebody I don't trust." A contract is not there to create trust when there isn't trust. If somebody did kiss and tell, I don't have any practical recourse to enforce the contract.

Why not? A contract isn't binding?

If I'm going to sue that guy, it's the mean old vendor beating up the researcher who's just trying to protect customers. It's not so much that I'm worried about looking bad. It's all about confidence, but also trust. If somebody signs something under contract, I don't have to worry that they'll be selling us down the river.

For example, one guy we hired as an ethical hacker started as a regular tech guy who worked at some company. He was finding technical things and sending them to Oracle. He was really good at it. We went back and said, "We'd like to hire you to do technical assessment." He said his employer wouldn't do it. He moved on to work for a security researcher, and we wound up hiring them.
And they turned out very professional; they did wonderful reports on the vulnerabilities they found. We liked him so well, we poached him. In a professional way.

So when you talk about trust, you're referring to somebody who won't air your dirty laundry before you've had time to patch systems, yes?

That's the issue. You want to protect a customer's system, which means when you find something, you want time to act on it, obviously very aggressively, and you want to make sure customers are protected. It's not a hacker-vs.-us thing. It's working with people you can trust who are going to put your customers in the forefront.

I know you're hot on getting NIST to pass standards about hardening products, so technology will be more turnkey, right? Customers should be able to push a button, you give them a tool to monitor that, and you automatically tell them when they're still locked down, etc. It seems so obvious; why hasn't it been done?

I just got a car. I didn't have to ask, "Where's the configuration part for the brakes?" All this security stuff is just there. I wouldn't know how to disable the brakes on my car: they're just there. You ought to make it easy for somebody to run a reasonably assured configuration without them having to do too much. The government has caught on to this. They said they'd save millions by having Microsoft deliver a hardened, locked-down product.
You've challenged the responsibility of security researchers who hack Oracle and publicly report unpatched vulnerabilities. Yet you hire friendly hackers yourself. How do those two things jibe?