By: Larry Seltzer
As far as security goes, the operating system of the future is, in many ways, here today. Led, somewhat ironically, by Microsoft Windows, operating system vendors and some other software vendors have been making their products more secure by default. They also have been providing tools and best-practice guidelines for application developers to improve security.
If everyone adopted the most current versions of software and followed state-of-the-art practices in software development, the future would be here today. Alas, things are never that easy.
The Internet caused the escalating software security problem, and the protection of Web browsers and other Internet-facing software has been the greatest imperative of security developers. The techniques designed to protect these programs will find their way into other applications and the core of the operating system itself.
Recent security research has found limited cracks in the walls put up by DEP (data execution prevention), ASLR (address space layout randomization) and other systemic protection technologies. But the developers of these protections understand that they’re not impenetrable barriers; they are obstacles placed in the path of exploits, making them progressively harder to pull off. The more such obstacles are in place, the harder it is to carry out a real-world exploit (as opposed to a laboratory one) and the less serious the implications of the exploit will be. This is called defense in depth.
The good news about these techniques is that they should not change the way applications operate (except in certain egregious cases) and you get the security for free. They do make some programming techniques, self-modifying code in particular, as problematic as they inherently should be. The real problem, which we have seen for the many years that DEP and ASLR have been implemented in Windows, is that many of the applications we use don’t opt in to them.
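What DEP actually enforces is easy to see at the system level. The sketch below is a Linux illustration of the same no-execute protection (not Windows DEP itself, but the identical hardware NX mechanism): a child process maps a page without execute permission and then tries to jump into it, and the kernel kills it the instant the instruction fetch touches the non-executable page.

```python
import subprocess
import sys

# Child script: map a writable but NON-executable page, then transfer
# control into it. Under DEP/NX the instruction fetch itself faults,
# so the kernel terminates the process with SIGSEGV.
CHILD = r"""
import ctypes, mmap
buf = mmap.mmap(-1, mmap.PAGESIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
buf.write(b"\xc3")  # x86-64 'ret' -- never reached; the fetch faults first
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
ctypes.CFUNCTYPE(None)(addr)()   # call into the non-executable page
print("executed a data page")    # never printed when NX is enforced
"""

proc = subprocess.run([sys.executable, "-c", CHILD])
print("child exit status:", proc.returncode)  # negative = killed by a signal
```

Run on a modern Linux system, the child dies with SIGSEGV (exit status -11) before its final print can execute; an attacker who injects shellcode into a data buffer faces exactly the same wall.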
A History of Improvements
There are other systemic improvements that OS developers can and will implement. One of them, sandboxing, has a long history in managed environments such as Java. In fact, not too long ago, many felt that Java and such managed environments were the future of operating systems. There’s still something to that, but the security records of Java and .NET haven’t been especially impressive, even though they were supposedly designed with that objective.
Managed virtual environments improve security by managing memory for applications, protecting against memory corruption errors, for example. The price of this is mostly system performance. The problem is that the environments themselves can have vulnerabilities, and quite a few of these have surfaced over the years. And there are many other classes of errors besides memory errors, so applications aren’t secure purely by virtue of being written in a managed environment.
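The memory-management point can be seen in miniature in any managed language: an out-of-bounds access is caught at the moment it happens and raised as a clean error, instead of silently reading or corrupting whatever memory sits next door, as unchecked native code might. A toy illustration in Python:

```python
# In a managed runtime, every index is bounds-checked on access, so an
# out-of-range read becomes a catchable exception rather than a silent
# read of adjacent memory (the classic raw material of an exploit).
buffer = [10, 20, 30]

try:
    value = buffer[7]          # out of bounds
except IndexError as exc:
    value = None
    print("caught:", exc)      # the runtime stopped the bad access cold

print("value:", value)
```

The same mistake in C would compile and run, quietly returning garbage, which is exactly the class of bug that managed code eliminates for free.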
Still, memory corruption errors are important, and the trend toward managed code is a net plus for security. This is one reason a lot of corporate development has moved to such environments, from Java to ASP.NET. Writing conventional code that is carefully scrutinized for security vulnerabilities is hard and requires expertise you may not have. Writing managed code takes care of at least the straightforward errors. And, once again, it shouldn’t make anything harder unless you are relying on techniques you shouldn’t be.
With its Chromium environment forming the basis for the Chrome browser and operating system, Google has taken the sandbox to the next level by protecting native code running in the browser. It hasn’t prevented vulnerabilities and exploits in the Chrome browser, but it has limited the impact of those exploits by preventing them from reaching beyond the limited capabilities of the browser environment. In fact, the entire Chromium sandbox runs in user mode, so nothing an attacker does will exceed the capabilities of the user running the program.
Something similar can be said for Protected Mode in Microsoft’s Internet Explorer 7 and 8 under Vista and Windows 7. Protected Mode runs the browser in a specially crippled user context that has no write access anywhere outside its temporary folders.
Look for all these techniques to be more widely available as generalized facilities for applications. However, both Chromium under Windows and Protected Mode rely on Windows-specific features, such as integrity levels, job objects and restricted tokens, which are not necessarily available on other platforms.
Thus, the development of sandboxes could be the latest chapter in an old story: the trade-off between maximum functionality and platform portability. But it all depends on how you write your programs. If you write programs to run in the Chromium sandbox and follow its rules, you should get some portability along with whatever sandbox features Chromium provides on Windows, as well as on Mac and Linux.
Reviewing Other Platforms
What is available on those other platforms? Linux has a sandboxing feature called SECCOMP, which was originally designed for compute-bound utility computing environments. SECCOMP is really (really, really) restrictive: A thread running in it has access only to a very small number of system calls: read(), write(), exit() and sigreturn(). Any other call terminates the thread. This makes it really safe, but impractical for real-world programs.
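Strict-mode seccomp is simple enough to demonstrate directly. In the sketch below (Linux only; the raw prctl() constants PR_SET_SECCOMP=22 and SECCOMP_MODE_STRICT=1 come from the kernel headers), a child process locks itself down, successfully performs a permitted write(), and is then killed by the kernel the moment it makes any other system call:

```python
import subprocess
import sys

# Child: enter SECCOMP_MODE_STRICT, show that write() still works, then
# make a forbidden syscall (getpid) and get SIGKILLed by the kernel.
CHILD = r"""
import ctypes, os
msg = b"write() still allowed\n"          # allocate BEFORE locking down
libc = ctypes.CDLL(None, use_errno=True)
PR_SET_SECCOMP, SECCOMP_MODE_STRICT = 22, 1
if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
    os._exit(1)                           # kernel built without seccomp
os.write(1, msg)                          # read/write/exit are permitted
os.getpid()                               # any other syscall: SIGKILL
os.write(1, b"never reached\n")
"""

proc = subprocess.run([sys.executable, "-c", CHILD], capture_output=True)
print(proc.stdout.decode(), end="")
print("child status:", proc.returncode)   # -9 means killed by SIGKILL
```

There is no error return and no signal the process can catch; the kernel simply ends it. That is what makes strict mode so safe and, for ordinary programs that need to open files or allocate memory, so impractical.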
Google is attempting to implement its Chromium sandbox architecture in Linux, but it’s not as straightforward to implement as it is in Windows. And the company will have the same problems on a Mac. The implementation requires a lot more convoluted hacking and meticulous programming, but the result is an environment in which applications can run safely without the ability to harm other elements of the system.
It’s the most general secure architecture out there and raises the possibility that the Chrome OS could be more than just a Web browser. Google hasn’t given us enough guidance to know for sure, but it’s possible that any program that runs in Chromium on a PC or Mac will run in Chrome OS. Or maybe not, since the browser is the only user interface for Chrome OS.
IE Protected Mode and Protected View in Microsoft Office 2010 are examples of a philosophy that will imbue the operating system of the future: least privilege, the idea that no user or process should run with any more privileges than it absolutely needs. It’s not a new idea. It’s been implemented for ages in Unix and derivatives, but never all that accessibly.
In Windows, there have been two major problems impeding the widespread use of least privilege computing: poorly designed applications that needlessly require administrator privileges and poor support for standard users in Windows XP. Windows Vista and Windows 7 provide much better support for standard users, but legacy apps continue to present a challenge in many enterprises. If you’re still compromising your security by granting users elevated permissions to allow such apps to run, you really need to find an exit strategy.
It’s not a feature you can use yourself, but the operating system of the future will also be better-tested. Recently, researcher Charlie Miller was able to find 20 critical vulnerabilities in Mac OS X by running a fuzzer for three weeks. Why wasn’t Apple running those fuzzers? In fact, Apple is moving in the right direction in this regard, as are most OS vendors, but it’s never fast enough.
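Fuzzing of the kind Miller used is conceptually simple: feed a program a stream of randomly mutated inputs and record which ones crash it. Here is a minimal sketch of that loop against a deliberately buggy parser; the parser and all names are invented for illustration, and a real fuzzer would watch for process crashes rather than Python exceptions.

```python
import random

def toy_parser(data: bytes) -> int:
    """A deliberately buggy parser: it trusts a length byte in its input."""
    if len(data) < 2:
        return 0
    declared_len = data[0]
    return data[1 + declared_len]  # out-of-bounds when the length byte lies

def fuzz(target, seed_input: bytes, iterations: int = 10_000):
    """Mutate a seed input at random; record every input that crashes the target."""
    rng = random.Random(1234)      # fixed seed: a reproducible campaign
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed_input)
        for _ in range(rng.randint(1, 4)):          # flip a few random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:   # stand-in for detecting a real crash
            crashes.append((bytes(data), repr(exc)))
    return crashes

crashes = fuzz(toy_parser, b"\x02abcd")
print(f"found {len(crashes)} crashing inputs")
```

A few seconds of this dumb, cheap technique reliably shakes loose the parser's bug, which is exactly why the question of whether the vendor ran the fuzzer first matters so much.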
As least privilege, sandboxes and other techniques harden applications, attackers will move toward attacking the operating system code itself, much of which will, of necessity, be privileged. Protecting this code will be much harder, but some groups are working on the problem, including the grsecurity project, whose hardened Linux kernel patches attempt to reduce and manage privilege throughout the kernel.
Getting Rid of the Past
Finally, and perhaps most importantly, the OS of the future will disallow the applications and system software (such as device drivers) of the past. It has to. Those apps, especially ones that require high privilege, won’t take advantage of the newer facilities to improve overall security in the system. It’s well-understood now that key applications such as Acrobat are the main gateway into the system for malicious code. By forcing the Acrobat of the future to be more secure, the OS of the future will protect the entire system.
A related change might, or at least should, be made with respect to updating applications. It’s generally understood that outdated, vulnerable applications are the major avenue of attack against systems. If applications could plug their updates into a centralized service for updates, like Windows Update, it would be easier for users to keep their applications updated, and easier for the OS and applications to keep users informed.
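Such a central updater needs very little from each application: a name, the installed version, and a vendor feed reporting the latest version. The comparison logic at its core might look like the hypothetical sketch below (every application name and version here is invented):

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version like '9.3.1' into (9, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def outdated(installed: dict, feed: dict) -> list:
    """Return the applications whose installed version trails the vendor feed."""
    return [
        app
        for app, version in installed.items()
        if app in feed and parse_version(version) < parse_version(feed[app])
    ]

# Hypothetical registry: what's on the machine vs. the vendors' update feed.
installed = {"AcmeReader": "9.3.1", "FooBrowser": "4.0.2", "BarPlayer": "12.0.0"}
feed      = {"AcmeReader": "9.4.0", "FooBrowser": "4.0.2", "BarPlayer": "12.0.1"}

print("needs update:", outdated(installed, feed))
```

The comparison itself is trivial; the hard parts, as the liability question suggests, are trust, distribution and who answers the phone when an update breaks something.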
I suggested this a while back, and got the impression that Microsoft didn’t want the liability and support burden from updating other companies’ software. But there’s surely a way to make this work because the advantages to everyone are too big to ignore.
For years, enterprises have had the option of implementing a full-scale patch management system to do the same thing. The unified update system I proposed is mainly to the benefit of consumers and small businesses.
There is no doubt that the major operating system vendors have learned the lessons of the recent past. Everything about an operating system needs to be viewed from a security standpoint, and this is the direction in which products are headed-if they aren’t there already. We may be at a point at which, if you have the money and the will to do it, you can protect yourself against all but the most determined and resourceful attackers. Some day, we may even get to the point where typical users can protect themselves.