Little Things Mean a Lot

By Peter Coffee  |  Posted 2005-10-10

Last year, the best brains in robotics couldn't build a vehicle that traveled more than seven miles without human input. This year, five vehicles finished a 137-mile test course in the Defense Advanced Research Projects Agency's second Grand Challenge competition, and they could probably have kept on going much farther. Does anyone really think that the state of the sensor, actuator and software arts improved in that time by a factor of almost 20? Or is it more likely that incremental improvements crossed key thresholds of adequacy to the task?

There's got to be a message in this result for project teams charged with identifying, adopting, refining, implementing and deploying strategic technology. The vehicles that finished the DARPA course this year weren't 20x faster or 20x more powerful or 20x more thoroughly instrumented than the ones that did so poorly last year. Rather, this year's entries didn't make the same kinds of stupid mistakes, or have the same kinds of fatal weaknesses, that kept last year's entries from doing as well as their overall high levels of engineering and construction should have allowed.

When an enterprise application fails, it can be just as embarrassing as when a robot vehicle hits a wall--or even locks its own brakes--before leaving the immediate vicinity of the starting line. Most of a failed application's carefully written components, like most of the parts of an ignominiously failing robot, were probably working just fine--but an obscure security loophole, or a failure to allow for needed scaling of workload to larger data sets or transaction rates, can bring down the whole thing as badly as if it were a botched job from end to end. When a product fails in testing at eWEEK Labs--for reasons that we're now often illuminating on our blog-enriched "Inside eWEEK Labs" site at inside.eweeklabs.com--it's common for our reaction to be, "Why would people with so much talent neglect a problem so obvious, so vital and so easy to fix?"

At this point, I thought it might be useful to introduce the Japanese term "kaizen," commonly and casually translated into English as meaning "continuous improvement": Many management commentators use this label when they're encouraging an approach of finding and fixing small problems on short cycles, rather than seeking sweeping changes that introduce fundamentally new problems of their own. While researching kaizen, though, I ran across another doctrine called "Five S" (not to be confused with "Six Sigma") that sounds as if it applies to manufacturing workplaces, but that I think has something to say to application developers as well.

The Five S philosophy of removing the unrelated, arranging the useful, discarding the distracting or dangerous, standardizing the proper practice, and systematizing the overall process has much to offer in application development efforts. When bad code or flawed concepts are excised, rather than being encysted in workarounds; when application functions are introduced because users want them and will use them, not because they seemed to the developers like good ideas; when developers can readily get access to data, to code libraries, to testing tools and to other key resources instead of inventing their own stopgap solutions to these and other needs, then it's likely that project success rates will dramatically rise.
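That first principle, excising a flaw rather than encysting it in workarounds, can be seen in miniature in code. Here is a minimal, hypothetical Python sketch (all names invented for illustration): a parser that chokes on currency symbols, with every caller obliged to guard against it, versus a one-line fix at the source that makes the guards disappear.

```python
# Hypothetical sketch: a flaw "encysted" in workarounds vs. excised at the source.

# Encysted: the parser returns None for inputs with a currency symbol,
# so every caller must add a defensive filter -- the "cyst".
def parse_amount_buggy(text):
    """Return a float, or None when the input carries a currency symbol."""
    if text.startswith("$"):
        return None  # the flaw everyone works around instead of fixing
    return float(text)

def total_buggy(rows):
    # The workaround silently drops values it cannot parse.
    return sum(v for v in (parse_amount_buggy(r) for r in rows) if v is not None)

# Excised: handle the symbol once, at the root cause; callers stay simple.
def parse_amount(text):
    """Return a float; a leading currency symbol is handled, not dodged."""
    return float(text.lstrip("$"))

def total(rows):
    return sum(parse_amount(r) for r in rows)

print(total_buggy(["$3.50", "1.25"]))  # the workaround silently loses $3.50
print(total(["$3.50", "1.25"]))        # the excised version counts everything
```

The buggy pipeline does not crash; it quietly computes the wrong total, which is exactly the kind of obscure, easy-to-fix defect that sinks an otherwise well-built system.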

Tell me about the walls that you've seen hit, and about the systems that you've devised to focus your efforts on what most needs to be fixed, at peter_coffee@ziffdavis.com.

Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.
