Is Simulation as Good as The Real Thing?
Computers gain their power from this very mutability. They are tools designed without a predetermined function. A CPU knows very little: just a bit of math, how to set and read memory addresses and I/O ports, and how to work with a few registers. The only thing that's really remarkable about one is how fast it can do these things.
The more complex a product becomes, the less expressive power it provides. (The Internet's IP, which is flexible enough to carry new kinds of traffic never anticipated by Vint Cerf and Bob Kahn, is another example of the power of simplicity.)
It seems reasonable that computers should be good at behaving like other computers, because none of our basic CPU designs are that different from each other, although that might change if clockless chips, analog multivariate logic or quantum circuits become mainstream.
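At the heart of any such emulator is a plain fetch-decode-execute loop: read an instruction, figure out what it means, carry it out, repeat. The sketch below uses an invented four-instruction machine purely for illustration; real emulators do the same thing, just with hundreds of opcodes and cycle-accurate timing.

```python
# A toy fetch-decode-execute loop, the skeleton every CPU emulator is
# built around. The instruction set here is invented for illustration.
MEMORY = [0] * 256

def run(program):
    regs = [0] * 4          # four general-purpose registers
    pc = 0                  # program counter
    while True:
        op, *args = program[pc]   # fetch and decode
        pc += 1
        if op == "LOAD":          # LOAD reg, addr
            regs[args[0]] = MEMORY[args[1]]
        elif op == "ADD":         # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "STORE":       # STORE reg, addr
            MEMORY[args[1]] = regs[args[0]]
        elif op == "HALT":
            return regs

# Add the values at addresses 0 and 1, store the result at address 2.
MEMORY[0], MEMORY[1] = 2, 3
run([("LOAD", 0, 0), ("LOAD", 1, 1), ("ADD", 0, 1),
     ("STORE", 0, 2), ("HALT",)])
print(MEMORY[2])  # 5
```

Everything else an emulator does, from memory-mapped I/O to instruction-set translation, is elaboration on this loop, which is why the technique scales from a 6502 game console up to a mainframe.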
For example, I've delighted in revisiting old Apple II and Atari 2600 games I played growing up (www.classicgaming.com/vault/emulate.shtml) by running PC emulators of these systems and, on the other end of the spectrum, am experimenting with Hercules, a PC-based emulator of IBM's mainframe S/370, ESA/390, and z/Architecture CPUs (www.conmicro.cx/hercules/). Transmeta's whole CPU design philosophy is based on the feasibility of real-time instruction set translation.
A few weeks ago, I interviewed Peter Magnusson, CEO of Virtutech AB, a company that makes CPU and system emulation its core mission. The company's Simics software emulates SPARC, Alpha, Intel X86, AMD X86 and PowerPC CPUs, along with the associated logic and hardware peripherals these CPUs need to form a working computer. One customer, Advanced Micro Devices Inc., distributed Simics to its partners to let them test AMD's Hammer CPUs before the CPU hardware itself was ready to go.
Simics provides features that are very difficult to get with real hardware: Multiple virtual systems (each with multiple CPUs) can be run on a single real machine and can even be networked together. The software tracks all system activity, and particular events can be traced or replayed with complete control.
"Simulation is a poor technology to predict performance," said Magnusson. "However, its an excellent tool to help you figure out where your performance problems are. An IT shop can simulate extreme load on server easier than they can with large hardware. Since its functionally correct, if it does run into a buffer overflow, or timeout or race conditions, you can make those events more frequent in a simulator environment to make them easier to track down and fix."
VMware, of course, has done much to popularize the idea of software as hardware (I use VMware myself and find its server products intriguing).
Hardware-based partitioning is common in high-end Unix servers. IBM's Linux-on-the-mainframe strategy also uses this idea; IBM's mainframes were, after all, the source of the whole virtual machine idea. (We're not talking CPU simulation here anymore: VMware and virtual machine partitions on zSeries systems both virtualize hardware on a host machine but don't translate machine instructions from one CPU family to another the way Simics or Hercules do.)
Given the cost of mass system maintenance, I think simulation has a larger role to play in the enterprise. The idea of running Linux on a zSeries system (especially on the mainframe hardware eWeek readers already have) is a compelling one, though part 1 of Paul Murphy's well-researched examination of this strategy has dampened my enthusiasm.
Do CPU emulation or other box-in-a-box approaches make sense to you? Or is real hardware the only production-ready way to deploy systems? 1U, blade and brick-type systems are certainly making individual servers cheaper than ever before. Is simulation as good as the real thing?
Let me know what you think. West Coast Technical Director Timothy Dyck can be reached at firstname.lastname@example.org.