Toward a More Idiot-Proof Internet

Recently, Cameron Sturdevant and I waded into the world of application whitelisting--a set of products and technologies aimed at ensuring the integrity of Windows clients by enforcing control over which applications are allowed to run.

I think that whitelisting, when combined with diligent paring of user and application privileges, can go a long way toward letting workers worry less about whether they are "security idiots" (to borrow a bit of Jim Rapoza's phraseology) and focus more on getting their jobs done.

However, where Web-based applications are concerned, the client security road map is much less clear, and, as Jim points out in his column this week on clickjacking, there's no shortage of new Web-based routes through which code-wielding ne'er-do-wells can exploit our machines.

As I've written recently, today's Web browsers lack the plumbing to support the same sort of interapplication isolation that full-blown operating systems provide, but projects such as Google's Chrome indicate that we're at least moving in the right direction.

Less promising is the current state of affairs around whitelisting on the Web. Application whitelisting relies on knowing where the code you run on your clients comes from, and opting to trust or not trust these code sources.
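At bottom, the client-side version of this idea is simple: fingerprint each executable and run only the ones on a known-good list. Here's a minimal sketch in Python, assuming a hypothetical organization-maintained set of SHA-256 digests (the digest shown is just an example value):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of binaries the organization trusts.
TRUSTED_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_run(path: Path) -> bool:
    """Allow execution only if the binary's hash is on the allowlist."""
    return sha256_of(path) in TRUSTED_HASHES
```

The hard part, as real whitelisting products show, isn't the check itself but building and maintaining that trusted set, which is exactly where the Web model breaks down.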

On a client PC, even one with a large number of installed applications, it's not too tough to go through and make reasonably informed decisions about which code to trust. On a Web page, this sort of trust audit is immensely more challenging, as snippets of script come from all over the place.

Load up the NoScript extension for Firefox (which implements script whitelisting) and take a browse through your typical array of sites; you'll find scripts and objects from Web analytics firms, advertising companies, providers of social networking widgets and numerous other partner firms.
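The decision NoScript makes for each page boils down to sorting script sources by origin against a user-maintained domain allowlist. A rough sketch of that logic, with a hypothetical trusted-domain set:

```python
from urllib.parse import urlparse

# Hypothetical per-user allowlist in the spirit of NoScript:
# scripts run only if their host matches a trusted domain or a subdomain of one.
TRUSTED_DOMAINS = {"example.com", "cdn.example.net"}

def host_allowed(host: str) -> bool:
    """True if the host is a trusted domain or one of its subdomains."""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def filter_scripts(script_urls):
    """Split a page's script URLs into those allowed to run and those blocked."""
    allowed, blocked = [], []
    for url in script_urls:
        host = urlparse(url).hostname or ""
        (allowed if host_allowed(host) else blocked).append(url)
    return allowed, blocked
```

Run that against a typical commercial page and the blocked list fills up fast with analytics beacons, ad servers and widget hosts--which is precisely the trust-audit problem described above.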

It would be nice to assume that the Web locations you've chosen to visit--and, therefore, to trust--monitor the assemblage of content, counters and ads as carefully as your software vendor does, but I can't believe this is the case.

We may need to move away from the Frankensteinian nature of today's Web and introduce more control, more coherence and more specialization into the distribution end of the Web apps model--sort of a UPS or FedEx for Web apps.

These distributors could gather together all the elements that constitute a Web application, apply sound vetting practices and serve them up under common domains, preferably along with an SSL certificate.

I'm not calling for an end to the open Internet, but I admit that the rise of a trusted tier of sites could have a chilling effect on those outside of the system.

However, unless we get a handle on the sources of our Web applications, the promising cross-platform application model that the Web can enable will have a tough time thawing the OS monoculture that defines today's client computing landscape.