But even without that personal grudge against byte bloat, I'd have been struck by the irony in a recent tutorial article on the IBM developerWorks Web site that praised, as if it were a brand-new productivity tool, the ancient uniq (as in "unique lines") filter utility for processing simple text files. What really caught my eye was the tease for that article that appeared on the developerWorks Linux home page: "Got logging nightmares? Save time and headaches by removing duplicate lines with uniq." There are layers upon layers of lessons to be taken from that perversely cheerful question.

First: How many systems are busily generating log files, even as we speak, that no one will touch except to delete them when they take up too much disk space--so that they can start accumulating data in the dark, all over again?

Second: Why are logging functions in so many platform software products so crudely implemented, merely spewing out mostly uninteresting data instead of being written with a decent set of options for what gets recorded--and for how long it's to be kept? Imagine the irony of deleting a year's worth of logging data because it's taking too much space, only to have a problem the next week--and having only a week's worth of logs to review. If logging is worth doing, isn't it worth doing well?

There's been a lot of discussion lately about whether IT is crossing a threshold to become a more mature infrastructure of commodity technologies, rather than a white-hot crucible of disruptive innovation. There's something to be said for the proposition that people are becoming more inclined to feel that what they have is good enough. But if we're going to live with a product for a longer time before we consider it obsolete, shouldn't we look for products that wear well over that longer-lived relationship? Designing software not merely to look good over a 90-day trial, but also to age gracefully, needs to become a design priority. Let's make every byte count. Tell me if you think bytes are too cheap to meter.
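For the record, the trick that tutorial praised takes one line at a shell prompt. A minimal sketch (the log file name and its contents are made up for illustration; note that uniq only collapses *adjacent* duplicates, so a scattered log must be sorted first):

```shell
# Hypothetical sample log with repeated messages:
printf 'disk full\ndisk full\nlink up\ndisk full\n' > /tmp/sample.log

# Remove duplicate lines (sort first, since uniq only drops adjacent repeats):
sort /tmp/sample.log | uniq

# Or count how often each message occurs, most frequent first:
sort /tmp/sample.log | uniq -c | sort -rn
```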