The Death Of Native Code

 
 
By Larry Seltzer  |  Posted 2003-10-31
Back when disco was king, computer scientists tried to move away from native code for the sake of portability. Nowadays, security could be a better reason for virtual machines, offers Security Center Editor Larry Seltzer, and we're just about at the point where the hardware can handle them.

The recent coverage of Longhorn Windows got me thinking about virtual machines and their potential for security. Of course, that wasn't always the angle. I remember writing a 4GL database management system. Now, this wasn't a conventional program; it ran in a virtual machine. The instructions in the compiled software weren't executed by the CPU directly, but by another program running on the CPU: a virtual machine. This made the program portable. You could take the same binary program to computers with different CPUs and architectures, as long as there was a virtual machine for them. Must be Java, right? No way. This was back in 1983, and I was writing Pascal for the UCSD p-System and the granddaddy of virtual machines, the P-machine.
You think Java is slow today? You should have seen my P-code database management system. (P-code is code written for the P-machine, just as today's bytecode runs on the Java virtual machine.) In an era when 8-bit processors were still mainstream on the desktop, the p-System made native programs look Carl Lewis-fast.
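To make the idea concrete, here is a minimal sketch of what "instructions executed by another program running on the CPU" means. The opcodes below are invented for illustration; they are not real P-code or JVM bytecode, but the interpreter loop is the same basic shape: the same integer array of "bytecode" runs unchanged anywhere the interpreter itself runs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyVM {
    // Hypothetical opcodes, for illustration only.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    // The interpreter loop: a program running on the host CPU that
    // fetches, decodes and executes the virtual machine's instructions.
    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // virtual program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH: stack.push(code[pc++]); break;
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
                case HALT: return stack.pop();
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4, expressed as portable "bytecode"
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

The portability follows directly: the `program` array never changes; only the interpreter needs to be ported to each CPU.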

Nowadays, the real angle on virtual machines is security. The portability angle just isn't as compelling, especially since the number of platforms with market significance is small. When Sun came out with Java about 10 years ago (yeah, it's actually been about that long), it mostly pushed the portability angle, presumably for market leverage against Windows. At the same time, Sun also put a lot of thought and attention into the security aspects of Java.

As Sun proved with Java, virtual machines are an excellent mechanism for preventing many of the most common avenues of security attack. Vulnerabilities like buffer overflows, for example, sometimes happen because it's easy for programmers to overlook opportunities for hackers when writing code. One of the very things a virtual machine virtualizes is access to buffers, so the VM itself can do the bounds checking.
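A small illustration of that point (my example, not the article's): in Java, every array access goes through the VM's bounds check, so the classic off-by-one bug below raises a catchable exception instead of silently scribbling over adjacent memory the way an unchecked C buffer write can.

```java
public class BoundsCheck {
    // Deliberate off-by-one bug: tries to write one byte past the end.
    // Returns the index at which the VM's bounds check stopped the write.
    static int overflowAttempt(int size) {
        byte[] buffer = new byte[size];
        int i = 0;
        try {
            for (; i <= size; i++) {   // <= instead of < : one write too many
                buffer[i] = 0x41;
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The VM turns would-be memory corruption into an exception.
            return i;
        }
        return -1; // unreachable with the buggy loop above
    }

    public static void main(String[] args) {
        System.out.println("overflow stopped at index " + overflowAttempt(8)); // prints 8
    }
}
```

The application programmer never wrote a bounds check; the runtime supplied it, which is exactly the decision the VM takes out of the programmer's hands.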

Still, virtual machines aren't immune to this sort of attack. Consider a recently reported bug in Java: a flaw in the VM design that could allow the execution of arbitrary code. But such problems can usually be fixed centrally by updating the VM, leaving the applications alone. In addition, VMs take many security-related coding decisions out of the hands of application programmers, and that is a good thing.

Much as Sun and others denied it for many years, Java's virtualization of everything imposes a massive performance penalty. This gap is more of a problem for some applications than others, but for interactive applications, such as desktop applications aimed at end users, it's a big problem. There's no question that the initial attempts to build Java desktop applications were terrible and failed miserably. Java groupies such as IBM and Oracle even tried to build Java-specific computers. Even Sun doesn't try to do it today: the Sun Java Desktop System and StarOffice suite are native-code solutions.

Still, wouldn't it be nice if you could build a whole desktop solution around a virtual machine environment? Through the miracle of brute computing power, we may be approaching that day. Take, for example, an entry-level machine with a 2.4GHz Celeron processor, a 400MHz bus and 128MB of RAM. This system provides many times the performance of the ones that failed in the past, and we've seen advances in software as well. Reviewers often ask themselves whether computers are "fast enough." Some of us have been saying for many years that they are; others say that computers will never be fast enough. I'm more in the former camp now than I was in the past. Certainly, an excellent way to make productive use of all that computing power would be to create a secure software environment.

With all the coverage of Longhorn, few observers have remarked on Microsoft's plan to move large amounts of OS code to its own virtual machine environment, the Common Language Infrastructure (CLI), one of the core components of .Net. (Check out more about Longhorn and other future technologies in this special report on the recent Microsoft Professional Developers Conference.) So will the Longhorn-era PC be up to the task of running a substantially non-native operating system? Let's ask Bill Gates, who loves to predict what PCs of the future will be like. At the recent Microsoft Professional Developers Conference, Gates described his expectations for the average PC of 2006: a dual-core processor running between 4GHz and 6GHz, two or more gigabytes of RAM, a terabyte hard disk, graphics processors with three times the performance of today's, and network speeds of 1Gbps broadband and/or 54Mbps wireless. This all seems like a reasonable prediction to me.

It wouldn't surprise me in the least if one of the reasons Microsoft is taking its sweet time with Longhorn is to wait for the hardware to get even faster. In a sense, every version of Windows at its release has been written more for tomorrow's PCs than today's, and performs on yesterday's machines adequately at best. This may be even more the case than usual with Longhorn if Microsoft succeeds in moving enough of its code into what it calls "managed code."

In addition, I hope Sun will give virtualized applications another shot, because if local and network performance were snappy enough, the market would be receptive to non-Microsoft desktops. Web services can expand this opportunity by making it easier to build applications that run on multiple systems without necessarily running the same software.

When mulling over the future of computer security, it's easy to become discouraged. It's nice every now and then to be able to look forward to a time when everyday systems will have effective defenses against common attacks.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.

Larry Seltzer has been writing software for, and English about, computers ever since (much to his own amazement) he graduated from the University of Pennsylvania in 1983.

He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at DeskTop Software included programming for the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.

For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs), developing product tests and managing contract testing for the computer industry, governments and publications.

In 1991, Larry moved to Massachusetts to become technical director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became technical director for Internet product testing at PC Magazine and stayed there until 1998.

Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.