Hardware Components

 
 
By Cameron Sturdevant  |  Posted 2009-10-09

There is plenty of secret sauce throughout much of the hardware that makes up the Cisco UCS. During my tests, I found the biggest dose in the mezzanine card that makes the connection between the UCS B200-M1 blade server and the UCS 5108 server chassis. Based on technology Cisco gained when it acquired Nuova, the card is able to multiplex LAN, SAN and management traffic, thereby reducing cabling and management complexity.
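To make the convergence idea concrete, here is a minimal Python sketch of what carrying three traffic classes over a single 10Gb link implies. The class names and the bandwidth split are illustrative assumptions, not Cisco's actual QoS policy; the point is that one cable replaces three separate cable plants.

```python
# Illustration only -- not Cisco UCS code. Models the idea of LAN, SAN and
# management traffic sharing one converged 10Gb mezzanine link.
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str          # traffic type carried on the converged link
    share_gbps: float  # assumed bandwidth reservation (illustrative)

LINK_CAPACITY_GBPS = 10.0  # one converged link instead of three networks

# Hypothetical split; in UCS the allocation is set by policy, not hard-coded.
classes = [
    TrafficClass("LAN", 5.0),
    TrafficClass("SAN", 4.0),
    TrafficClass("management", 1.0),
]

assert sum(c.share_gbps for c in classes) <= LINK_CAPACITY_GBPS
for c in classes:
    print(f"{c.name}: {c.share_gbps} Gbps over the shared link")
```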

Check out eWEEK Labs' gallery of the physical components used during its tests here.

When I inserted the physical UCS B200-M1 blade server into the chassis, the connection triggered a discovery process that automatically notified the management system of the presence of the newly added hardware.

I'll come back to the importance of discovery in the software section of this review. For now, it's enough to say that the physical configuration of the server blade, chassis and Cisco UCS 6120XP Fabric Interconnect was greatly simplified over the separate cabling and management systems in most standard configurations used today.
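As a rough sketch of that pattern, the hypothetical Python fragment below shows hardware insertion driving inventory registration with no per-device setup. The function and field names are invented for illustration; this is not the UCS Manager API.

```python
# Hypothetical sketch of the discovery pattern described above -- not the
# actual UCS Manager API. Inserting a blade raises an event; the manager
# adds it to inventory automatically, with no per-device configuration.
inventory: dict[str, dict] = {}

def on_blade_inserted(chassis_id: int, slot: int, model: str) -> None:
    """Callback fired when a blade seats in a chassis slot."""
    key = f"chassis{chassis_id}/slot{slot}"
    inventory[key] = {"model": model, "state": "discovered"}
    print(f"Discovered {model} at {key}; awaiting service profile")

on_blade_inserted(chassis_id=1, slot=3, model="UCS B200-M1")
```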

The mezzanine card is available in three flavors: a Cisco UCS VIC (Virtual Interface Card); a converged network adapter compatible with either Emulex or QLogic high-performance storage networking; or a single 10 Gigabit Ethernet adapter. My tests were all conducted using mezzanine cards equipped with the converged network adapter. I used both Emulex and QLogic networking systems at various points in my tests.

The UCS B200-M1 blade server is a half-width, two-socket system that uses Intel Xeon 5500 series processors and can support up to 96GB of DDR3 (double data rate 3) RAM. Some of my test systems also had local storage: the blade server can accommodate two small-form-factor SAS hard drives, either 73GB 15K-rpm or 146GB 10K-rpm units.
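The density math, using only figures cited in this review, is worth spelling out; a quick Python calculation:

```python
# Back-of-the-envelope density math for a fully loaded UCS 5108 chassis,
# using the figures cited in this review.
RAM_PER_BLADE_GB = 96     # B200-M1 maximum DDR3
BLADES_PER_CHASSIS = 8    # half-width blades in the 6U enclosure
SOCKETS_PER_BLADE = 2     # Intel Xeon 5500 series

print(f"RAM per chassis:     {RAM_PER_BLADE_GB * BLADES_PER_CHASSIS} GB")  # 768 GB
print(f"Sockets per chassis: {SOCKETS_PER_BLADE * BLADES_PER_CHASSIS}")    # 16
print(f"Sockets per rack U:  {SOCKETS_PER_BLADE * BLADES_PER_CHASSIS / 6:.1f}")
```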

The blade servers were unremarkable in performance compared with other Intel Xeon 5500-based systems, which is to say that they were power-efficient, speedy and easily monitored with a number of on-board thermal and power consumption measuring tools.

Chassis

I used a Cisco UCS 5108 blade server chassis during my tests. The chassis is a 6U enclosure that can hold as many as eight half-width servers. (A variant of the UCS 5108 can hold four full-width servers.)

Depending on the type of storage and LAN connection speeds used by the blade servers, the UCS 5108 chassis can be equipped with up to two UCS 2104XP Fabric Extenders. The Fabric Extenders are four-port modules and are available with either fiber or copper ports.
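Those choices determine uplink oversubscription. The sketch below works the numbers for one extender under two stated assumptions: that each of its four ports is a 10Gb uplink, and that each of the eight blades drives one 10Gb converged link toward that extender.

```python
# Rough uplink oversubscription math for one UCS 2104XP Fabric Extender.
# Assumptions (not from the review): four 10Gb uplink ports per extender,
# and one 10Gb converged link per blade toward that extender.
BLADES = 8
BLADE_LINK_GBPS = 10
UPLINKS = 4
UPLINK_GBPS = 10

demand = BLADES * BLADE_LINK_GBPS   # 80 Gbps of potential blade traffic
capacity = UPLINKS * UPLINK_GBPS    # 40 Gbps of uplink capacity
print(f"Oversubscription: {demand / capacity:.0f}:1")  # 2:1 under these assumptions
```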

The chassis provides power, cooling and connectivity between the blade servers and the UCS 6120XP Fabric Interconnect. As will become apparent in the management section of this review, the chassis contributes to the "unified" part of Cisco UCS by slipping into a management domain without adding another discrete management point. As subsequent chassis are added to a Cisco UCS domain, they are managed through the UCS manager without requiring an additional IP address and separate console.

Fabric Interconnects

The Cisco UCS 6120XP Fabric Interconnect is a 1U top-of-rack device that provides Ethernet and Fibre Channel connectivity between the blade servers in the chassis and the LAN or SAN resources.

The Fabric Interconnect can be configured with a variety of expansion modules, depending on the LAN and storage resources available to the servers. Of importance to IT managers: one or two UCS 6120XP Fabric Interconnects can provide consolidated 10 Gigabit Ethernet as well as 1G-, 2G- and 4G-bps Fibre Channel connectivity for high-performance, low-latency applications.

The Fabric Interconnect is the basis of a UCS management domain with a theoretical limit of 40 blade chassis and up to 320 servers (40 chassis times eight half-width blades each). My test environment at Cisco's demonstration lab was much more modest, composed of a single chassis with five blade servers.

The Fabric Interconnect plays a central role in Cisco UCS. Aside from providing the physical interconnect between the blade servers and the LAN and SAN resources, the Fabric Interconnect is also the management "brain" of the Cisco UCS.

My test environment used two UCS 6120XP Fabric Interconnects for redundancy and increased throughput. The Fabric Interconnects were equipped with N10-E0080 expansion modules that provided eight 4G Fibre Channel uplinks. Other expansion modules are available and can provide either four or six 10G uplinks.
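For scale, the nominal uplink arithmetic for those modules, using the port counts and speeds cited in this review (line-rate figures only, ignoring Fibre Channel encoding overhead):

```python
# Nominal aggregate uplink bandwidth per expansion module, from the
# port counts and speeds cited in this review.
FC_UPLINKS = 8
FC_GBPS = 4
print(f"N10-E0080: {FC_UPLINKS * FC_GBPS} Gbps of Fibre Channel uplink")  # 32 Gbps

# The alternative modules mentioned: four or six 10Gb uplinks.
for ports in (4, 6):
    print(f"{ports}-port 10Gb module: {ports * 10} Gbps nominal")
```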

In my test environment, the UCS 6120XP Fabric Interconnects were connected to a Cisco Catalyst 3750 switch for LAN resources and an MDS 9506 Multilayer Director SAN switch for access to a variety of storage resources.



 
 
 
 
Cameron Sturdevant Cameron Sturdevant has been with the Labs since 1997, and before that paid his IT management dues at a software publishing firm working with several Fortune 100 companies. Cameron also spent two years with a database development firm, integrating applications with mainframe legacy programs. Cameron's areas of expertise include virtual and physical IT infrastructure, cloud computing, enterprise networking and mobility, with a focus on Android in the enterprise. In addition to reviews, Cameron has covered monolithic enterprise management systems throughout their lifecycles, providing the eWEEK reader with all-important history and context. Cameron takes special care in cultivating his IT manager contacts, to ensure that his reviews and analysis are grounded in real-world concern. Cameron is a regular speaker at Ziff-Davis Enterprise online and face-to-face events. Follow Cameron on Twitter at csturdevant, or reach him by email at csturdevant@eweek.com.
 
 
 
 
 
 
 
