Is Simulation as Good as the Real Thing?

 
 
By Timothy Dyck | Posted 2002-04-22
 
 
 
 
 
 
 

eLABorations: Simulation can't predict actual performance, but can track down a range of problems

Simulation is making a resurgence in the IT industry. I'm amazed at the technical artistry involved in writing software so that one CPU can translate a foreign instruction set, on the fly, into its native tongue fast enough to do something useful with that capability. In fact, current CPUs are fast enough to emulate a good fraction of all the designs that came before them.

Computers gain their power from this very mutability. They are tools designed without a predetermined function. A CPU knows so very little: a bit of math, how to set and read memory addresses and I/O ports, and how to work with a few registers. The only thing that's really remarkable about one is how fast it can do these things. The more complex a product becomes, the less expressive power it provides. (The Internet's IP, which is flexible enough to carry new kinds of traffic never anticipated by Vint Cerf and Bob Kahn, is another example of the power of simplicity.)
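To make that translation concrete, here is a minimal sketch, entirely my own and not tied to any of the products mentioned in this column, of the inner loop every interpreter-style emulator shares: fetch a guest instruction, decode it and carry it out with host instructions. The toy opcodes, register file and memory size are invented for illustration.

    /* A toy "guest" machine interpreted one instruction at a time by the
     * host CPU.  The opcodes, registers and memory size are invented for
     * illustration only; real emulators model far richer architectures. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD_IMM = 1, OP_ADD = 2, OP_PRINT = 3 };

    int main(void) {
        uint8_t mem[256] = {            /* guest program: compute 2 + 3 */
            OP_LOAD_IMM, 0, 2,          /* r0 = 2       */
            OP_LOAD_IMM, 1, 3,          /* r1 = 3       */
            OP_ADD,      0, 1,          /* r0 = r0 + r1 */
            OP_PRINT,    0,             /* print r0     */
            OP_HALT
        };
        uint8_t reg[4] = {0};           /* tiny register file */
        uint8_t pc = 0;                 /* guest program counter */

        for (;;) {                      /* fetch-decode-execute loop */
            uint8_t op = mem[pc++];     /* fetch */
            switch (op) {               /* decode, then execute */
            case OP_LOAD_IMM: {
                uint8_t r = mem[pc++];
                reg[r] = mem[pc++];
                break;
            }
            case OP_ADD: {
                uint8_t a = mem[pc++];
                uint8_t b = mem[pc++];
                reg[a] += reg[b];
                break;
            }
            case OP_PRINT: {
                uint8_t r = mem[pc++];
                printf("r%d = %d\n", r, reg[r]);
                break;
            }
            case OP_HALT:
                return 0;
            default:
                return 1;               /* illegal opcode */
            }
        }
    }

The same skeleton scales up: production emulators add a real instruction set, device models and, when pure interpretation is too slow, on-the-fly translation of hot guest code into native host code.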
It seems reasonable that computers should be good at behaving like other computers, because none of our basic CPU designs are that different from each other—although that might change if clockless chips, analog multivariate logic or quantum circuits become mainstream.
For example, I've delighted in revisiting old Apple II and Atari 2600 games I played growing up (www.classicgaming.com/vault/emulate.shtml) by running PC emulators of these systems and, on the other end of the spectrum, am experimenting with Hercules, a PC-based emulator of IBM's mainframe S/370, ESA/390 and z/Architecture CPUs (www.conmicro.cx/hercules/). Transmeta's whole CPU design philosophy is based on the feasibility of real-time instruction set translation.

A few weeks ago, I interviewed Peter Magnusson, CEO of Virtutech AB, a company that makes CPU and system emulation its core mission. The company's Simics software emulates SPARC, Alpha, Intel x86, AMD x86 and PowerPC CPUs, along with the associated logic and hardware peripherals these CPUs need to form a working computer. One customer, Advanced Micro Devices Inc., distributed Simics to its partners to let them test AMD's Hammer CPUs before the CPU hardware itself was ready to go. Simics provides features that are very difficult to get with real hardware: Multiple virtual systems (each with multiple CPUs) can be run on a single real machine and can even be networked together. The software tracks all system activity, and particular events can be traced or replayed with complete control.

"Simulation is a poor technology to predict performance," said Magnusson. "However, it's an excellent tool to help you figure out where your performance problems are. An IT shop can simulate extreme load on [a] server easier than they can with large hardware. Since it's functionally correct, if it does run into a buffer overflow, or timeout or race conditions, you can make those events more frequent in a simulator environment to make them easier to track down and fix." (A rough sketch of that last idea appears at the end of this column.)

VMware, of course, has done much to popularize the idea of software as hardware (I use VMware myself and find its server products intriguing). Hardware-based partitioning is common in high-end Unix servers. IBM's Linux-on-the-mainframe strategy also uses this idea; IBM's mainframes were, after all, the source of the whole virtual machine idea. (We're not talking CPU simulation here anymore: VMware and virtual machine partitions on zSeries systems both virtualize hardware on a host machine but don't translate machine instructions from one CPU family to another the way Simics or Hercules do.)

Given the cost of mass system maintenance, I think simulation has a larger role to play in the enterprise. The idea of running Linux on a zSeries system (especially on the mainframe hardware eWeek readers already have) is a compelling one, though Part 1 of Paul Murphy's well-researched examination of this strategy has dampened my enthusiasm.

Do CPU emulation or other box-in-a-box approaches make sense to you? Or is real hardware the only production-ready way to deploy systems? 1U, blade and brick-type systems are certainly making individual servers cheaper than ever before. Is simulation as good as the real thing? Let me know what you think.

West Coast Technical Director Timothy Dyck can be reached at timothy_dyck@ziffdavis.com.
 
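To put Magnusson's point about amplifying rare failures in code form, here is a rough sketch of my own (not Virtutech code and not from the interview): a classic check-then-act race between two threads almost never fires when both run at full speed, but injecting an artificial delay into one path, the kind of timing perturbation a simulator can apply deliberately and repeatably, widens the window so the bug shows up on nearly every run. The shared balance, the delay flag and the timings are all invented for illustration.

    /* My own illustration of amplifying a race: at full speed the two
     * threads rarely collide on the shared balance; with the artificial
     * delay turned on, the race window widens and the bug reproduces on
     * almost every run.  Build with: cc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int balance = 100;
    static int inject_delay = 0;        /* 1 = widen the race window */

    static void *withdraw(void *arg) {
        (void)arg;
        if (balance >= 100) {           /* check ...                   */
            if (inject_delay)
                usleep(1000);           /* ... simulated slow path ... */
            balance -= 100;             /* ... then act                */
        }
        return NULL;
    }

    int main(int argc, char **argv) {
        (void)argv;
        inject_delay = (argc > 1);      /* any argument enables the delay */

        pthread_t a, b;
        pthread_create(&a, NULL, withdraw, NULL);
        pthread_create(&b, NULL, withdraw, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* 0 is correct; -100 means both threads passed the check. */
        printf("final balance: %d\n", balance);
        return 0;
    }

Run it with no arguments and the final balance is almost always 0; run it with any argument and it is almost always -100. A full-system simulator can apply that kind of perturbation to timing, scheduling and I/O without touching the code at all.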
 
 
 
Timothy Dyck is a Senior Analyst with eWEEK Labs. He has been testing and reviewing application server, database and middleware products and technologies for eWEEK since 1996. Prior to joining eWEEK, he worked at the LAN and WAN network operations center for a large telecommunications firm, in operating systems and development tools technical marketing for a large software company, and in the IT department at a government agency. He has an honors bachelor of mathematics degree in computer science from the University of Waterloo in Waterloo, Ontario, Canada, and a master of arts degree in journalism from the University of Western Ontario in London, Ontario, Canada.
 
 
 
 
 
 
 
