REVIEW: Cisco Unified Computing System Is a Robust Platform for Physical, Virtual Data Center Ops

Hardware Components

There is plenty of secret sauce throughout much of the hardware that makes up the Cisco UCS. During my tests, I found the biggest dose in the mezzanine card that makes the connection between the UCS B200-M1 blade server and the UCS 5108 server chassis. Based on technology Cisco gained when it acquired Nuova, the card is able to multiplex LAN, SAN and management traffic, thereby reducing cabling and management complexity.

Check out eWEEK Labs' gallery of the physical components used during its tests.

When I inserted the physical UCS B200-M1 blade server into the chassis, the connection triggered a discovery process that automatically notified the management system of the presence of the newly added hardware.

I'll come back to the importance of discovery in the software section of this review. For now, it's enough to say that the physical configuration of the server blade, chassis and Cisco UCS 6120XP Fabric Interconnect was greatly simplified over the separate cabling and management systems in most standard configurations used today.
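UCS Manager surfaces discovery events like the one described above so that tools can react to newly inserted hardware without manual inventory steps. As a purely illustrative sketch, here is how an automated inventory script might parse such a notification. The XML element and attribute names below are hypothetical stand-ins, not Cisco's actual UCS Manager schema:

```python
# Illustrative only: the event format below is a hypothetical stand-in,
# not the actual Cisco UCS Manager XML schema.
import xml.etree.ElementTree as ET

SAMPLE_EVENT = """\
<event type="hardware-discovered">
  <blade chassis="1" slot="3" model="UCS-B200-M1"
         serial="ABC1234" state="discovered"/>
</event>
"""

def parse_discovery_event(xml_text):
    """Return a dict describing a newly discovered blade, or None
    if the event is not a hardware-discovery notification."""
    root = ET.fromstring(xml_text)
    if root.get("type") != "hardware-discovered":
        return None
    blade = root.find("blade")
    return {
        "chassis": int(blade.get("chassis")),
        "slot": int(blade.get("slot")),
        "model": blade.get("model"),
        "serial": blade.get("serial"),
    }

# An inventory tool could log or act on the new blade's location:
info = parse_discovery_event(SAMPLE_EVENT)
print(info)
```

The point of the sketch is the workflow, not the wire format: insertion triggers a notification, and downstream tooling consumes it instead of an administrator manually registering the blade.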

The mezzanine card is available in three flavors: a Cisco UCS VIC (Virtual Interface Card); a converged network adapter that is compatible with either Emulex or QLogic high-performance storage networking; or a single 10 Gigabit Ethernet adapter. My tests were all conducted using mezzanine cards equipped with the converged network adapter. I used both Emulex and QLogic networking systems at various points in my tests.

The UCS B200-M1 blade server is a half-width, two-socket system that uses Intel Xeon 5500 series processors and can support up to 96GB of DDR3 (double data rate 3) RAM. Some of my test systems also had local storage. The blade server can accommodate two small-form-factor SAS hard drives, either 73GB 15K-rpm or 146GB 10K-rpm models.

The blade servers were unremarkable in performance compared with other Intel Xeon 5500-based systems, which is to say that they were power-efficient, speedy and easily monitored with a number of on-board thermal and power consumption measuring tools.


I used a Cisco UCS 5108 blade server chassis during my tests. The chassis is a 6U enclosure that can hold as many as eight half-width servers. (A variant of the UCS 5108 can hold four full-width servers.)

Depending on the type of storage and LAN connection speeds used by the blade servers, the UCS 5108 chassis can be equipped with up to two UCS 2104XP Fabric Extenders. Each Fabric Extender is a four-port module that can be equipped with either fiber or copper ports.

The chassis provides power, cooling and connectivity between the blade servers and the UCS 6120XP Fabric Interconnect. As will become apparent in the management section of this review, the chassis contributes to the "unified" part of Cisco UCS by slipping into a management domain without adding another discrete management point. As subsequent chassis are added to a Cisco UCS domain, they are managed through the UCS manager without requiring an additional IP address and separate console.

Fabric Interconnects

The Cisco UCS 6120XP Fabric Interconnect is a 1U top-of-rack device that provides Ethernet and Fibre Channel connectivity between the blade servers in the chassis and the LAN or SAN resources.

The Fabric Interconnect can be configured with a variety of expansion modules, depending on the LAN and storage resources that are available to the servers. Of importance for IT managers is that one or, at most, two UCS 6120XP Fabric Interconnects can provide consolidated 10 Gigabit Ethernet along with 1G-, 2G- and 4G-bps Fibre Channel connectivity for high-performance, low-latency applications.

The Fabric Interconnect is the basis of a UCS management domain, with a theoretical limit of 40 blade chassis and up to 320 servers (40 chassis of eight half-width blades each). My test environment at Cisco's demonstration lab was much more modest, composed of a single chassis with five blade servers.

The Fabric Interconnect plays a central role in Cisco UCS. Aside from providing the physical interconnect between the blade servers and the LAN and SAN resources, the Fabric Interconnect is also the management "brain" of the Cisco UCS.

My test environment used two UCS 6120XP Fabric Interconnects for redundancy and increased throughput. The Fabric Interconnects were equipped with N10-E0080 expansion modules that provided eight 4G-bps Fibre Channel uplinks. Other expansion modules are available that can provide either four or six 10 Gigabit uplinks.

In my test environment, the UCS 6120XP Fabric Interconnects were connected to a Cisco Catalyst 3750 switch for LAN resources and an MDS 9506 Multilayer Director SAN switch for access to a variety of storage resources.