Automatic Protection Systems Are Too Dumb and Too Fast

By Peter Coffee  |  Posted 2005-04-18
Opinion: Automatic "help" can be useless—or worse—in some situations.

I heard the other day from a user of a highly reliable software development tool set. One of the .exe files in that product was being falsely flagged as bearing a virus and was being automatically deleted from workstations at the user's site.

This was not OK because local site policies required double-secret-divinity-super-user status to add a file-exclusion rule to the site-licensed anti-virus system. I sent a few e-mails—since the help desk staff at the development tool set's vendor had no idea what to do—and I'm happy to say that the problem was promptly addressed by the maker of the anti-virus product in question.

Everybody's happy.

The incident should serve, though, as a warning to those who favor the idea of mandatory, automatic protection—if that's the word—of network-connected IT systems. How easy is it, right now, for a dumb piece of code to create a problem at your site that your smartest people aren't allowed to try to solve? How much worse could that situation become if mandatory, automatic updates and other such hands-off measures were to take control?

Whenever automatic protection systems rear their too-helpful heads, I remember a column from one of my old issues of Flying magazine. The writer was describing a time when he saw an aircraft lose engine power on takeoff, and the pilot was doing everything right to conserve altitude and airspeed while maneuvering for a safe emergency landing.

Unfortunately, an automatic system to prevent wheels-up landings was still active. It diagnosed the situation (a combination of low engine speed with a low-altitude descent) as one that required extension of the landing gear. Down came the wheels, up went the drag. As best I can remember the columnist's words, he said he'd never forget the day he watched that machine try to kill a man.

The message here is that critical incidents often resemble routine situations in almost every respect. Like people who throw extra variables into a correlation calculation, improving the fit but reducing the predictive power, we can actually diminish a system's decision-making strength with every attempt to make it more intelligent.
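The correlation analogy can be made concrete. The sketch below is purely illustrative—the data, the "noise" values, and every function name are invented for this example, not drawn from the column. It compares a two-parameter straight line against a polynomial with enough free parameters to pass through every training point: the flexible model fits its training data perfectly, yet predicts held-out points far worse.

```python
# Illustrative sketch of overfitting (all data and names invented here):
# more fit parameters -> better in-sample fit, worse out-of-sample prediction.

def linear_fit(xs, ys):
    """Ordinary least-squares line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lagrange_predict(xs, ys, x):
    """Evaluate the unique polynomial passing through every (xs[i], ys[i])."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def mse(predict, xs, ys):
    """Mean squared error of a prediction function on the points (xs, ys)."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Underlying rule is y = 2x; the training labels carry fixed "measurement noise".
train_x = [0, 1, 2, 3, 4, 5, 6]
noise   = [0.8, -0.7, 0.9, -0.8, 0.7, -0.9, 0.8]
train_y = [2 * x + e for x, e in zip(train_x, noise)]

test_x = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]   # held-out points
test_y = [2 * x for x in test_x]          # noise-free ground truth

slope, icept = linear_fit(train_x, train_y)

def line(x):      # 2 parameters: a simple straight line
    return slope * x + icept

def wiggle(x):    # 7 parameters: hits every training point exactly
    return lagrange_predict(train_x, train_y, x)

print("line   train MSE %.3f, test MSE %.3f"
      % (mse(line, train_x, train_y), mse(line, test_x, test_y)))
print("wiggle train MSE %.3f, test MSE %.3f"
      % (mse(wiggle, train_x, train_y), mse(wiggle, test_x, test_y)))
# The interpolating polynomial's training error is zero, but its held-out
# error is far larger than the straight line's.
```

The flexible model has learned the noise rather than the rule, which is exactly the trap the column describes: every extra "smart" adjustment is another opportunity to respond confidently to the wrong pattern.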

If we were able to foresee the tiny but crucial differences that define every threat scenario, we'd be able to forestall those threats without keeping costly humans in the loop. Sad to say, but we're not that smart. The more we try to make systems clever enough to do the right thing—in every situation—the greater the chance that they'll do something wrong more quickly than we can stop them.

Coincidentally, the saga of the anti-(not really a) virus episode crossed my radar on the same sunny day this month that the folks at Green Hills Software announced "a major automotive industry initiative." They promised to address "the increasing software complexity of powertrain, body/safety and infotainment electronic control units (ECU)." No kidding—I mean, about software's increasing role in the automobile technology environment.

That subject resonates more than ever with me after six months of living with a Toyota Prius, surely the most fly-by-wire conveyance ever sold to anyone not wearing Air Force or NASA insignia. I agree with reviewers of the second-generation Prius who say that driving it doesn't feel like managing a hybrid gasoline-electric powertrain with regenerative energy recapture. It just feels like pushing the "faster" pedal to go and the "slower" pedal to stop.

Speaking as someone raised on stick shifts, I mean that as quite a compliment. If it seems like no big deal, look at the list of major automakers that are licensing Toyota's patents rather than trying to reinvent those hybrid wheels.

It seems to me that enterprise systems are facing a challenge of hybrid complexity much like the one that Toyota and Green Hills are confronting. The emerging IT service model has to combine real-time delivery of current information with efficient retrieval of archived data, presenting both in a way that lets the user spend more time thinking about what to do with the answer than about the manner of asking the question.

The user just wants to say "go"—and not have a too-clever system say "no."

Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.

To read more Peter Coffee, subscribe to eWEEK magazine. Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.