Hands-Off Mustn't Be Brains-Off

By Peter Coffee  |  Posted 2007-01-03

My engineering education was significantly shaped by my MIT faculty advisor, John Biggs, who thought it would be a good use of my time to take core courses in other departments whenever my degree program left some room for "free electives."

Rather than a hypothetical class in Cinema of Classical Science Fiction, he thought I'd do better to take -- say -- Thermodynamics over in the Mechanical Engineering department, or two semesters of microeconomics and macroeconomics in the Economics department instead of one semester of engineering economics within our own Civil Engineering department.

These memories of the late Professor Biggs (I didn't know his first name was "John" until I read his obituary, years later) arise this morning because he had a habit that I'd like to think remains with me to this day: before I try to figure out an exact answer to anything, I try to have a rough idea of what the answer is likely to be, so that I know an idiotic mistake when I see one.

When Prof. Biggs started to solve a truss on the blackboard, for example, he'd run across it -- thinking out loud, for the benefit of the class -- and quickly show the range within which he'd expect to find key results.

That habit came to mind when I saw two recent stories about the totally ridiculous mistakes that systems can make, and when I thought about the potential consequences of building end-to-end chains of Web services -- or other literally-minded protocols -- that could do really dumb things if allowed to take unbounded actions.

  • The tourist who misspells the name of a city, and winds up on the wrong continent, has probably failed to do a reality check on how long a trip should take before assuming that his journey will end in the right place.
  • The hypothetical fuel-oil inventory management system that looks at outdoor temperature, makes an updated estimate of "degree days" for the remainder of the season, and places orders accordingly could wind up ordering the entire annual output of all U.S. refineries if it doesn't have logic to detect unreasonable excursions from past ranges and request operator verification.
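The guard logic the second scenario calls for can be sketched in a few lines. This is purely illustrative -- the function names, the historical bound, and the review mechanism are all assumptions, not anything from the column:

```python
# Hypothetical guard logic for the fuel-oil ordering scenario above.
# The bound and names are illustrative assumptions.

HISTORICAL_MAX_GALLONS = 50_000  # assumed largest order in past seasons

def place_order(estimated_gallons: float) -> str:
    """Sanity-check an automated order against the historical range
    before letting it through unattended."""
    if estimated_gallons <= 0:
        raise ValueError("order quantity must be positive")
    if estimated_gallons > HISTORICAL_MAX_GALLONS:
        # Unreasonable excursion from past ranges: don't act,
        # hold the order and request operator verification.
        return "HELD_FOR_OPERATOR_REVIEW"
    return "ORDER_PLACED"
```

The point is not the arithmetic but the shape: the automated path is bounded, and anything outside the bound falls back to a human.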

I know that it takes more work to put in guard logic, or to use languages like Eiffel that make explicit assertions as part of normal "design by contract" practice. I also know that the costs of not doing these things can be enormous -- and that incidents of such expense will become more frequent as we do more things with programmable devices and platforms instead of having people in the loop. Feel free not to be part of that problem.
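Eiffel makes preconditions and postconditions first-class language features; in most other languages the same "design by contract" discipline can be approximated with plain assertions. A rough sketch, using an assumed degree-day calculation (the formula and thresholds here are illustrative, not from the column):

```python
# A rough approximation of Eiffel-style design by contract
# using ordinary assertions; names and ranges are illustrative.

def estimate_degree_days(outdoor_temp_f: float, base_temp_f: float = 65.0) -> float:
    # Precondition: reject physically implausible sensor readings
    # instead of silently feeding them into an ordering decision.
    assert -80.0 <= outdoor_temp_f <= 140.0, "implausible temperature reading"
    result = max(0.0, base_temp_f - outdoor_temp_f)
    # Postcondition: heating degree days are never negative.
    assert result >= 0.0
    return result
```

In Eiffel these checks would be written as `require` and `ensure` clauses on the routine itself; the assertion version is weaker but captures the same habit of stating what "reasonable" means before computing.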
