In remarks at the launch of Windows Server 2003, Microsoft CEO Steve Ballmer enthused that the company's new product "is secure by default, with 60 percent less attack surface area by default compared to Windows NT 4.0 Service Pack 3."
What?
It's obvious that this number is meant to suggest a smaller chance of a successful hit on the target–but then again, what is that target, and what is the nature of the attacks against it?
If we're talking about a single installation on a single server, that's one thing–but if the number of installations doubles, does the "surface area" exposed to attack wind up being only 20 percent less than it was before, rather than 60? And can we really use metaphors like "surface area" when attacks on IT systems are purposeful, rather than random?
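A back-of-the-envelope sketch, just to make that arithmetic explicit: the 60 percent figure comes from the quote above, while the baseline of 100 arbitrary units and the doubling of installations are assumptions invented for illustration.

```python
# Illustrative arithmetic only: the 60 percent reduction is from the quote above;
# the baseline of 100 "surface units" and the doubling of installations are
# assumptions made up for this sketch.

baseline_surface_per_install = 100                                  # old platform, arbitrary units
reduced_surface_per_install = baseline_surface_per_install * (1 - 0.60)  # 40 units per install

old_total = 1 * baseline_surface_per_install    # one installation before: 100
new_total = 2 * reduced_surface_per_install     # twice as many installations after: 80

print(old_total, new_total)                     # 100 vs. 80: only 20 percent less overall
```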
I'm not just playing word games: For more than two years, we've known that optimizing networks against random failure and protecting them from deliberate attack are almost diametrically opposite goals.
We don't even need to get into the domain of network design to see a significant change in the nature of the threat. What happens, for example, when portions of the exposed "surface" are extended into less protected realms? One of the key strengths of the Visual Studio .Net 2003 tool set, released concurrently with the new server platform, is a unified development model for both PC and handheld devices: If a security flaw in the .Net 1.1 framework now appears on handhelds as well as on desktops, and if a handheld device is stolen or misplaced with its network authentication tokens intact, then a portion of the exposed surface is now–so to speak–overexposed.
For that matter, what if the promise of greater out-of-the-box security encourages more deployments without the benefit of firewalls, or even security-trained administrators? If we reduce the exposed surface area, but soften that surface with less rigorous administration, that doesn't sound to me like a net win–or even, pardon the expression, a .Net win.
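To make the argument of the last two paragraphs concrete, here is a deliberately crude model of total exposure: per-installation surface, times number of installations, times a factor for how well each deployment is protected. Every figure in it is an assumption invented for illustration, not a measurement of any real platform.

```python
# A deliberately crude model of the column's argument: a per-copy "attack surface"
# number says little about total exposure once you account for how many copies are
# deployed and how well each deployment is protected. Every figure below is an
# invented assumption, not a measurement.

def total_exposure(per_install_surface, deployments):
    """Sum surface * count * environment factor across deployment classes."""
    return sum(per_install_surface * count * env_factor
               for count, env_factor in deployments)

OLD_SURFACE = 100                 # arbitrary units per installation, old platform
NEW_SURFACE = OLD_SURFACE * 0.4   # "60 percent less" per installation

# Each entry is (count, environment factor): 1.0 stands for a well-administered
# server behind a firewall; higher numbers stand in for unfirewalled deployments
# or handhelds that may walk away with authentication tokens intact.
old_fleet = [(10, 1.0)]
new_fleet = [(15, 1.0),           # more servers, still well administered
             (5, 1.5),            # deployed without a firewall, trusting the defaults
             (20, 2.0)]           # handhelds sharing the same framework and credentials

print(total_exposure(OLD_SURFACE, old_fleet))   # 1000
print(total_exposure(NEW_SURFACE, new_fleet))   # 2500: "less surface," more exposure
```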
Ballmer's phrase also got me thinking about the general subject of attaching numbers to software in particular, and to IT systems in general. We count lines of code, we enumerate APIs, we measure instructions executed per second–and sometimes, we're even practical enough to look at the clock on the wall to see how much time a user actually saves, or look at a profit and loss statement to see how much payback our IT investment is giving.
I'm all in favor of measurement, and analysis of trends, and use of quantitative measures of the past to guide our decisions in the future. Every decision to collect and analyze data, though, represents a cost, even if only in the attention that we give to that data rather than to other possible indicators. Devising relevant measures, especially in a data-rich domain like IT, is a vital function of management.
Let's take pride in offering numbers that mean something, and in building organizations that can learn from those numbers and make them continually better.