It's very much in vogue to tout active systems as inherently better than passive systems. BMW's new TV ads, building on a campaign that actually began last May, tell you that it's better to have the agility to avoid an accident than merely the sturdiness to survive it. Personal audio systems use active noise cancellation to isolate you from your environment. Lenovo laptop ads dramatize the ability to push a button and regenerate a corrupted PC's previous configuration.
There are many good reasons to pursue an active systems approach. In auto design, the pursuit of passive safety -- crumple zones in the bodywork, and suchlike -- has arguably reached a point of diminishing or even negative returns. Making a vehicle more robust, for example, means making it heavier, which means installing a larger gas tank to maintain useful range ... and I think we can see where that takes us.
In the IT realm, we likewise see diminishing returns in safety measures such as password policy enforcement. Sufficiently random passwords, changed sufficiently often, are perversely more likely to be written down and thereby defeat the policy's purpose. An active-safety approach, such as a challenge-response algorithm -- perhaps one that interacts with an active user token like a Java ring or Java card -- follows the path of ever-falling processor costs toward solutions that offer ever-improving capability per unit cost. Java card technology has had some noteworthy design wins of late, but has also been vulnerable to errors in fabrication and deployment.
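The appeal of challenge-response over a static password is that the secret itself never crosses the wire; the token proves it holds the secret by answering a fresh challenge. A minimal sketch, assuming an HMAC-based scheme (the function names and token behavior here are illustrative, not any particular card's API):

```python
import hashlib
import hmac
import os

def token_response(shared_secret: bytes, challenge: bytes) -> bytes:
    """What an active token (a smart card or Java ring, say) might
    compute: an HMAC over the server's fresh challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def server_verify(shared_secret: bytes, challenge: bytes,
                  response: bytes) -> bool:
    """The server recomputes the expected answer and compares in
    constant time; the secret itself is never transmitted."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = os.urandom(32)      # provisioned onto the token once
challenge = os.urandom(16)   # generated fresh for each login attempt
response = token_response(secret, challenge)
ok = server_verify(secret, challenge, response)
```

Because each challenge is fresh, a captured response is useless for replay -- the property a written-down password can never offer.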
I have no problem with an active-systems approach -- unless fascination with the potential flexibility, capability and cost-effectiveness of active systems distracts from, or actually interferes with, sound principles of "fail safe" design and graceful degradation in the presence of partial failure. I'm reminded, for example, of a glitch in the active antilock braking systems of the Corvette -- by now long since fixed, I hope -- that failed to allow for the possibility that all four wheels would lock up within a single time slice of the system that was allocating braking effort among the wheels of the car. As it turned out, this triggered a processor reset and entailed a time delay during which there was no brake action at all. The time involved was short -- unless you were the driver of a vehicle that was cornering at its limit on a non-ideal road surface, in which case the time delay seemed very long indeed.
That's why, when someone wants to get me interested in a lean and mean new IT application or machine, I'm likely to ask some fairly fuddy-duddy questions about fallback options. Is the data representation highly dependent on undisclosed algorithms, or will I be able to access data with any text editor or text-mashing tool like Perl? If data compression is involved, can I choose my balance between effectiveness and lossiness in a granular way -- or even choose, in a seamless manner, "negative compression" in the form of error detection and error correction for the most sensitive data elements and processes?
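"Negative compression" simply means spending extra bytes to buy safety rather than saving them. A minimal sketch of the error-detection end of that trade-off, using a CRC-32 appended to a sensitive field (the function names are illustrative; a production system would likely reach for stronger codes):

```python
import zlib

def protect(data: bytes) -> bytes:
    """Spend four extra bytes per field: append a CRC-32 so that
    later corruption becomes detectable instead of silent."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def check(blob: bytes) -> bytes:
    """Verify the stored CRC before trusting the field; fail loudly
    rather than returning silently corrupted data."""
    data, stored = blob[:-4], int.from_bytes(blob[-4:], "big")
    if zlib.crc32(data) != stored:
        raise ValueError("corruption detected")
    return data

record = protect(b"payroll-2004-Q3")
original = check(record)   # round-trips cleanly when intact
```

The point is that the trade-off is tunable per field: boilerplate data can stay lean while the payroll column pays the redundancy tax.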
If someone tells me, "active is better than passive," or "active goes beyond passive," my own active system of hype radar kicks into high gear: I don't want either/or, and I don't want one approach to leave the other one eating its dust. Insignificant advantages under ideal conditions make for great marketing claims, but I'll take acceptable performance under worst-case conditions every time.
Tell me what you're actively interested in improving at email@example.com.
Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.