By looking at how early implementers are going about it, we can see some of the challenges in implementing what some feel is the future of PC security.
I've been bombarded with pitches and inquiries about whitelisting ever since I discussed the issue with Microsoft's Mark Russinovich.
Russinovich, you will remember, thinks that current approaches to
security are unsustainable and that the way out, the paradigm shift
that takes the advantage back to IT from malicious actors, is
whitelisting. I was sympathetic, but saw too many impediments to
adoption and noted that the path to adoption was far more visible for
enterprises, or for managed networks in general, than for consumers.
My first instinct, when I think about how to implement a whitelist
system, is to take a known-clean system that IT has just built from an
image and scan it. Whitelist everything on that system. That is your
baseline. Image it and build new systems from that image.
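That baseline idea can be sketched in a few lines of code: walk a known-clean system, hash every executable file, and record the digests as the whitelist. This is a hypothetical illustration of the concept, not how any vendor's product actually works; the file extensions and hash choice (SHA-256) are my own assumptions.

```python
import hashlib
from pathlib import Path

# Assumed set of file types worth whitelisting; a real product
# would inspect file headers rather than trust extensions.
EXECUTABLE_SUFFIXES = {".exe", ".dll", ".sys", ".ocx"}

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root: Path) -> dict[str, str]:
    """Map each executable's path to its hash on a known-clean system."""
    baseline = {}
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in EXECUTABLE_SUFFIXES:
            baseline[str(path)] = hash_file(path)
    return baseline
```

In practice you would run something like `build_baseline(Path("C:/"))` on the freshly imaged machine and store the result centrally, so every system cloned from that image can be checked against the same list.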
I immediately see the problems in my notion, just as the vendors
have. A large organization will have many such baselines in the form of
different PC models. Even where the systems appear to be identical, two
PCs from the same vendor may have small differences in chips and other
devices, causing differences in the drivers used on the system,
necessitating the creation of yet another baseline. It appears that
vendors have chosen to take the alternative approach.
The alternative is to scan each and every system and identify all
the programs on them. This could be done to existing in-the-field
systems, but that's a bad idea for reasons I'll get to. More likely, IT
will install the whitelisting agent and scan the system after all the
other officially cool software has been installed.
According to our review, the Bit9 scan lets you go through
everything it finds on the system. Bit9 maintains a huge database of
file checksums, so it will identify almost everything automatically
and let you approve the rest manually. CoreTrace takes a different
approach: it whitelists everything on the new PC. In both cases, what
happens to new software on the system depends on policy, although the
general idea is that new software will be blocked.
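The enforcement side of either approach amounts to the same check: hash the program, look it up in the whitelist, and apply policy to anything unknown. Here is a minimal sketch of that decision, assuming a hypothetical two-mode policy (block unknown software outright, or allow it but log it for audit); neither vendor's actual policy engine is this simple.

```python
import hashlib
from enum import Enum
from pathlib import Path

class Policy(Enum):
    BLOCK = "block"  # deny anything not on the whitelist
    AUDIT = "audit"  # allow unknown software, but log it

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path, whitelist: set[str], policy: Policy) -> bool:
    """Decide whether a program may run under the given policy."""
    if sha256_of(path) in whitelist:
        return True  # known-good software always runs
    if policy is Policy.AUDIT:
        print(f"audit: unknown program allowed: {path}")
        return True
    print(f"blocked: {path} is not on the whitelist")
    return False
```

The interesting policy questions live in that unknown-software branch: a real agent has to decide whether patches, updaters, and newly approved applications get added to the list automatically or only after an administrator signs off.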
Larry Seltzer has been writing software for and English about computers ever since, much to his own amazement, he graduated from the University of Pennsylvania in 1983.
He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at Desktop Software included programming the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.
For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs) developing product tests and managing contract testing for the computer industry, governments and publication.
In 1991 Larry moved to Massachusetts to become Technical Director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became Technical Director for Internet product testing at PC Magazine and stayed there till 1998.
Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.