Physical Server Evaluation

Cameron Sturdevant is the executive editor of Enterprise Networking Planet. Prior to ENP, Cameron was a technical analyst at PCWeek Labs, starting in 1997. Cameron finished up as the eWEEK Labs Technical Director in 2012. Before his extensive labs tenure, Cameron paid his IT dues working in technical support and sales engineering at a software publishing firm. Cameron also spent two years with a database development firm, integrating applications with mainframe legacy programs. Cameron's areas of expertise include virtual and physical IT infrastructure, cloud computing, enterprise networking and mobility. In addition to reviews, Cameron has covered monolithic enterprise management systems throughout their lifecycles, providing the eWEEK reader with all-important history and context. Cameron takes special care in cultivating his IT manager contacts, to ensure that his analysis is grounded in real-world concerns. Follow Cameron on Twitter at csturdevant, or reach him by email.
By Cameron Sturdevant  |  Posted 2011-05-05

Acer AR380 F1 at the start of eWEEK Labs testing.

In the cloud era, it seems almost quaint to talk about physical servers. And to a certain extent, the commoditization (or at least the common sourcing) of the processor, memory, hard drive, network and power supply components diminishes the technical distinctions between server systems. However, everything cloud starts life in an ocean of very real hardware.

The distinguishing features that matter most are related to system management. Can I reach the server if the network or the server itself is down? How much does it cost to reach the server in this state? What can I do to the server without an installed operating system?
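As an illustration of what "reaching the server without an operating system" looks like in practice, most out-of-band controllers of this era speak IPMI over the LAN, so a tool like `ipmitool` can query chassis state through the management NIC even when the host OS is down. This is only a hedged sketch (the BMC address and credentials below are placeholders, not values from any reviewed system), written as a dry run that prints the command rather than contacting real hardware:

```shell
#!/bin/sh
# Hypothetical BMC address and credentials -- substitute your own.
BMC_HOST=10.0.0.42
BMC_USER=admin
BMC_PASS=secret

# An out-of-band query goes to the baseboard management controller
# over its own network interface, so it works even when the host OS
# has crashed or was never installed.
CMD="ipmitool -I lanplus -H $BMC_HOST -U $BMC_USER -P $BMC_PASS chassis power status"

# Dry run: print the command instead of running it. Remove the echo
# (and run the command itself) to actually query the BMC.
echo "$CMD"
```

The same interface typically supports power cycling (`chassis power cycle`) and serial-over-LAN console access, which is what makes bare-metal remote management possible at all.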

There are also purely physical considerations when making server choices: Can I get servers as towers, blades or pizza boxes, depending on my need? If I don't need all the fancy extras, can I get a stripped-down version? Are the right number of PCI slots and other configuration options available inside the system? How much memory, compute and graphics can I stuff into a single system? What is the efficiency rating on the power supply? But again, most of these capabilities are met equally well across server brands.

One might think that physical servers could be distinguished based on the needs of virtual infrastructure. Virtualization, as expressed in both server workload consolidation and (to a much lesser degree) hosted desktops, is radically changing server configuration parameters.

However, it is my experience that server vendors are responding almost simultaneously to these requirements. Servers that offer 18 DIMM slots and upwards of 100GB of RAM are available across the board. Need a large number of CPU cores? The vendors are using Intel, and to a certain extent AMD, to meet the need. Want high-performance graphics for end-user virtual desktops? NVIDIA is supplying monstrous graphics cards that can serve up an impressive end-user experience from servers running in the data center.

Thus, even with impressive "more of everything" configurations, the servers I'm seeing at the labs are easiest to tell apart based on their management capabilities. So, that's what I'm spending most of my time looking at. For instance, my latest server project, the Acer AR380 F1 (yeah, I didn't know Acer made servers either) provides out-of-band management capabilities at no cost, where Dell and HP both license advanced management features.

Check back in a couple days to read more about my experience with the Acer AR380. In the meantime, you can read my review of the Dell PowerEdge R415. And you can check out my other, purely practical evaluation of the Dell system's racking gear too.

Feel free to let me know what evaluation criteria matter to you when picking servers for your organization.
