In the cloud era, it seems almost quaint to talk about physical servers. And to a certain extent, the commoditization (or at least the common sourcing) of the processor, memory, hard drive, network, and power-supply components diminishes the technical distinctions between server systems. However, everything cloud starts life in an ocean of very real hardware.
The distinguishing features that matter most are related to system management. Can I reach the server if the network or the server itself is down? How much does it cost to reach the server in this state? What can I do to the server without an installed operating system?
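To make "reaching the server when it's down" concrete: most out-of-band management controllers speak IPMI over the network, so you can check and control a box whose operating system is dead. Here's a minimal sketch using the standard ipmitool utility; the BMC address (192.0.2.10) and the admin/secret credentials are placeholders, not real defaults.

```shell
# Query power state over the LAN interface of the BMC -- this works
# even when the host OS is hung or absent (placeholder address/creds).
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power status

# Power-cycle the machine remotely, no hands in the data center:
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power cycle

# Open a serial-over-LAN console to watch BIOS/POST and boot output:
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate
```

Whether features like the serial console or remote media come free or behind a license key is exactly the kind of vendor difference discussed below.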
There are also purely physical considerations when making server choices: Can I get servers as towers, blades, or pizza boxes depending on my need? If I don't need all the fancy extras, can I get a stripped-down version? Are the right number of PCI slots and other configuration options available inside the system? How much memory, compute, and graphics can I stuff into a single system? What is the efficiency rating on the power supply? But again, most of these capabilities are equally met across server brands.
You might think physical servers could be distinguished based on the needs of virtual infrastructure. Virtualization, as expressed in both server workload consolidation and (to a much lesser degree) hosted desktops, is radically changing server configuration parameters.
However, in my experience, server vendors are responding almost simultaneously to these requirements. Servers with 18 DIMM slots and upwards of 100 GB of RAM are available across the board. Need a large number of CPU cores? The vendors are using Intel and, to a lesser extent, AMD processors to meet the need. Want high-performance graphics for end-user virtual desktops? NVIDIA is supplying monstrous graphics cards that can serve up an impressive end-user experience from servers running in the data center.
Thus, even with impressive "more of everything" configurations, the servers I'm seeing at the labs are easiest to tell apart based on their management capabilities. So that's what I'm spending most of my time looking at. For instance, my latest server project, the Acer AR380 F1 (yeah, I didn't know Acer made servers either), provides out-of-band management capabilities at no cost, whereas Dell and HP both license their advanced management features.
Check back in a couple of days to read more about my experience with the Acer AR380. In the meantime, you can read my review of the Dell PowerEdge R415. And you can check out my other, purely practical evaluation of the Dell system's racking gear too.
Feel free to let me know what evaluation criteria matter to you when picking servers for your organization.