When I've taught MBA classes in quantitative methods, I've always had a hidden agenda. Yes, the syllabus has always included linear programming, forecasting and other number-crunching techniques. But I've always managed to tuck in extra material on game theory, or "decision making with an active opponent" (to use the formal label). IT decisions must reckon with foes who have brains, tools and agendas of their own.
Game theorists would never have placed New York's Emergency Operations Center on the 23rd floor of 7 World Trade Center, the 47-story structure that collapsed from collateral damage suffered in the fall of the two major WTC towers. In fact, when that EOC facility was built in 1998, some experts questioned the peculiar combination of costly positive-pressure ventilation (for protection against biological weapons) with a location that could be so cheaply taken out (by "two missiles from an F-16," as Professor Ed Shaughnessy observed; reality was even simpler).
You can see the same kind of weak-link design in all too many IT installations: for example, those that derive "strong" 128-bit encryption keys from "easily remembered" six-letter passwords. Given that users tend to choose predictable passwords, and that even random, case-sensitive six-letter passwords occupy only a 35-bit subset of that 128-bit space, why would anyone borrow a supercomputer for a key search? They can crack most users' accounts with an online dictionary and a castoff i486 PC.
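The arithmetic behind that 35-bit figure is easy to verify; here is a quick sketch in Python, assuming only the 52-character, case-sensitive alphabet:

```python
import math

# A random, case-sensitive six-letter password: 52 choices
# (a-z, A-Z) per position, six positions.
password_space = 52 ** 6
password_bits = math.log2(password_space)

# The effective key strength is the smaller of the nominal key
# size and the password space feeding the key-derivation step.
key_bits = 128
effective_bits = min(key_bits, password_bits)

print(f"password space: {password_space:,} keys")
print(f"effective strength: {effective_bits:.1f} of {key_bits} bits")
# The space is about 19.8 billion keys, or roughly 34 bits --
# comfortably inside the 35-bit bound, and a rounding error
# against the 2^128 keys the cipher nominally offers.
```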
Likewise, the Internet itself is widely claimed to be "survivable" in that it's resistant to the essentially random damage of natural disaster or bombing. But what about an attack by an active opponent? Someone, or something, that anticipates the means of counterattack, like the Nimda worm, reinfecting networks as if following behind the cleanup teams?
It's not enough to do IT correctly. We have to block easy modes of attack. We have to think like terrorists, because despite the appeal of theories, it's no game.