People often misquote Murphy's Law as stating, "Anything that can go wrong, will go wrong." This misses an essential element of the Law, one that ought to be part of our thinking about what it means to build good systems.
Murphy's mantra comes to mind with the special appearance of Edward A. Murphy III, son of law propounder Edward A. Murphy, Jr., at the Ig Informal Lectures, held earlier this month at MIT. The younger Murphy shared a videotape of his father explaining the origin of the Law, after the elder Murphy and his colleagues were honored with this year's Ig Nobel Prize for Engineering for their 1949 statement: "If there are two or more ways to do something, and one of those ways can result in a catastrophe, someone will do it."
If there is only one correct way to assemble an electrical connection, for example, then the plugs should be designed to make that the only possible way, as Murphy realized when a technician was confused by ambidextrous fittings.
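The same principle applies in software. As a sketch (with hypothetical names, not drawn from any product mentioned here), a designer can encode the one correct pairing in the type system, so that the wrong connection is a compile-time error rather than a field failure, the software analogue of a keyed connector:

```java
// Hypothetical illustration: instead of accepting any plug and
// validating at runtime, give each plug its own type. There is no
// method that accepts the wrong plug, so the mistake cannot even
// be expressed, let alone committed.
final class PowerPlug {}

final class PowerSocket {
    // Only a PowerPlug fits; a SignalPlug here would not compile.
    boolean connect(PowerPlug plug) {
        return plug != null;
    }
}
```

The point is not the trivial code but the shape of the interface: the incorrect action is absent from the design, rather than detected after the fact.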
Note the difference between the original Law and its common oversimplification. The true Law is a statement about people, not about things. People are the least predictable part of any system: it's pointless to design systems that are merely proof against accident, as suggested by the simple version of the Law, when people are capable of both idiocy and evil that go far beyond the bounds of mere chance.
You can design a reliable computer merely by believing that "anything that can go wrong, will go wrong." You'll limit the risk of malfunction by adding error-correction protocols, even on the signals that pass within and between your microchips, a strategy followed by high-reliability server processors like IBM's forthcoming Power5, unveiled at last week's Microprocessor Forum. You'll reduce the risk of software errors by adding debugging interfaces to your chips, as ARM Ltd. discussed at the same event, and by bringing tools like Parasoft's Jtest to high-level-language development (the 5.0 update ships next week; look for my review in eWEEK on October 27th). You'll run applications in a managed-memory environment like that of Java or the .Net framework.
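To make the error-correction idea concrete, here is a toy illustration (not IBM's actual protocol, which is far more sophisticated): append an even-parity bit to each byte, so that any single bit flipped in transit is detected on receipt.

```java
// Toy sketch of error detection, assuming random single-bit faults.
// Real server-class hardware uses full error-correcting codes; a
// lone parity bit can only detect, not repair, a flipped bit.
final class Parity {
    // True if the low 8 bits of b contain an even number of 1s.
    static boolean evenParity(int b) {
        return Integer.bitCount(b & 0xFF) % 2 == 0;
    }

    // Encode a byte as { value, parity bit }.
    static int[] encode(int b) {
        return new int[] { b & 0xFF, evenParity(b) ? 0 : 1 };
    }

    // Recompute the parity and compare it with the stored bit.
    static boolean isValid(int[] word) {
        return (evenParity(word[0]) ? 0 : 1) == word[1];
    }
}
```

A flipped bit changes the count of 1s from even to odd (or vice versa), so the stored parity no longer matches and the fault is caught. Note that this guards only against accident, which is exactly the limitation the next point addresses.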
But this won't produce a secure computer, because attacks aren't random: they're highly directed, and almost every convenience or performance enhancement introduced into a design can be perverted into a point of attack. Early Java implementers discovered this, for example, when they optimized loops to check method access privileges only on initial loop entry. That opened the door to a simple and ingenious attack: a loop iterated over a collection of objects with common method names, where the first object encountered had public methods bearing the same names as the second object's private methods. The public status of the first effectively unmasked the second.
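The flaw can be sketched in miniature. The following is a simplified reconstruction, not the actual JVM code: an "optimized" dispatcher performs the access check only for the first object in the loop and reuses that verdict for the rest, so a public method on the first object unmasks a private one on the second.

```java
import java.util.List;

// Hypothetical reconstruction of the optimization described above.
final class Dispatcher {
    interface Obj {
        boolean isPublic(String method);
        String invoke(String method);
    }

    static String callAll(List<Obj> objs, String method) {
        StringBuilder out = new StringBuilder();
        Boolean cachedAllowed = null;            // the unsafe shortcut
        for (Obj o : objs) {
            if (cachedAllowed == null) {
                cachedAllowed = o.isPublic(method); // checked once...
            }
            if (cachedAllowed) {                 // ...then trusted for all
                out.append(o.invoke(method));
            }
        }
        return out.toString();
    }

    // One object with a public method, one whose same-named method
    // should be off limits.
    static final Obj OPEN = new Obj() {
        public boolean isPublic(String m) { return true; }
        public String invoke(String m) { return "open;"; }
    };
    static final Obj SECRET = new Obj() {
        public boolean isPublic(String m) { return false; }
        public String invoke(String m) { return "secret;"; }
    };

    static String demo() {
        return callAll(List.of(OPEN, SECRET), "m");
    }
}
```

With the public object placed first, the private method runs anyway; checking access on every iteration would have refused it. The attacker did nothing random, merely arranged the collection so the shortcut worked in his favor.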
Designing a language for security is more involved than it might seem and imposes burdens that language designers are reluctant to accept.
By now, though, it should be clear that the human component of Murphy's Law must be remembered: it means the difference between building a system that won't break and a system that can't be broken.