The Cisco Unified Computing System combines high-end hardware with integrated management software to create a data center computing platform capable of hosting high-value applications.
Cisco UCS was announced in March and started shipping at the end of July. The system I tested has a manufacturer’s suggested retail price of $81,000.
In my exclusive hands-on review of the equipment and software that together compose the Cisco UCS offering, conducted on-site at Cisco, I determined that the platform has all the basic ingredients necessary to handle physical and virtual data center operations. The product platform showed some Version 1.0 flaws, but overall demonstrated that a UCS installation can grow in size without a corresponding increase in management staff or policy complexity.
The trade-off for this simplification is a buy-in to a Cisco-only platform on the hardware side. In addition to the server blades and blade chassis, there is a layer of fabric connections that require Cisco gear to complete.
At this time, Cisco uses only Intel Xeon 5500 (“Nehalem”) processors, which means no AMD option. UCS management policy is also heavily tilted toward VMware. That said, I’ve tested several data center servers based on the Intel Xeon 5500 microarchitecture that have provided outstanding performance, and VMware, especially with the introduction of vSphere 4, continues to set a high bar for virtual machine performance and management.
While my work with the Cisco UCS showed that the management tools provide the building blocks for a tightly controlled data center environment, implementing Cisco UCS doesn’t require a rip-and-replace decision regarding system management tools that are likely already well-established in your data center. The UCS Manager software is a device manager with an XML-based API that provides ample integration with system management tools from BMC, CA, HP, IBM Tivoli and Symantec. I used the UCS Manager’s GUI during my tests, but everything that can be done through the GUI can also be accomplished using the CLI (command line interface).
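To give a sense of what that integration point looks like, here is a minimal Python sketch of a client talking to the XML API. It assumes the /nuova HTTP endpoint and the aaaLogin and configResolveClass methods; the host name and credentials are placeholders, and Cisco’s XML API reference should be consulted for the exact method names and attributes in a given UCS Manager release.

# Minimal sketch of calling the UCS Manager XML API from Python.
# The endpoint, user name and password below are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

UCS = "http://ucs-manager.example.com/nuova"  # hypothetical address

def call(body: str) -> ET.Element:
    """POST an XML request to UCS Manager and parse the response."""
    req = urllib.request.Request(UCS, data=body.encode(),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

# Authenticate; the cookie in the response authorizes later calls.
login = call('<aaaLogin inName="admin" inPassword="secret" />')
cookie = login.get("outCookie")

# Enumerate the blade servers discovered in the management domain.
blades = call(f'<configResolveClass cookie="{cookie}" '
              f'classId="computeBlade" inHierarchical="false" />')
for blade in blades.iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("operState"))

The same request/response pattern is what third-party system management tools would use to pull inventory and health data out of a UCS domain.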
For IT managers who must pay attention to both virtual and physical resources, the Cisco UCS system is well worth considering. The top-notch hardware components are well-integrated with each other and effectively managed with the Cisco UCS Manager software.
Hardware Components
There is plenty of secret sauce throughout much of the hardware that makes up the Cisco UCS. During my tests, I found the biggest dose in the mezzanine card that makes the connection between the UCS B200-M1 blade server and the UCS 5108 server chassis. Based on technology Cisco gained when it acquired Nuova, the card is able to multiplex LAN, SAN and management traffic, thereby reducing cabling and management complexity.
When I inserted the physical UCS B200-M1 blade server into the chassis, the connection triggered a discovery process that automatically notified the management system of the presence of the newly added hardware.
I’ll come back to the importance of discovery in the software section of this review. For now, it’s enough to say that the physical configuration of the server blade, chassis and Cisco UCS 6120XP Fabric Interconnect was greatly simplified over the separate cabling and management systems in most standard configurations used today.
The mezzanine card is available in three flavors: a Cisco UCS VIC (Virtual Interface Card); a converged network adapter that is compatible with either Emulex or QLogic high-performance storage networking; or a single 10 Gigabit Ethernet adapter. My tests were all conducted using mezzanine cards equipped with the converged network adapter. I used both Emulex and QLogic networking systems at various points in my tests.
The UCS B200-M1 blade server is a half-width, two-socket system that uses Intel Xeon 5500 series processors and can support up to 96GB of DDR3 (double data rate 3) RAM. Some of my test systems also had local storage. The blade server can accommodate two small-form-factor SAS hard drives, either 73GB 15K rpm or 146GB 10K rpm.
The blade servers were unremarkable in performance compared with other Intel Xeon 5500-based systems, which is to say that they were power-efficient, speedy and easily monitored with a number of on-board thermal and power consumption measuring tools.
Chassis
I used a Cisco UCS 5108 blade server chassis during my tests. The chassis is a 6U enclosure that can hold as many as eight half-width servers. (The same enclosure can instead hold four full-width servers.)
Depending on the type of storage and LAN connection speeds used by the blade servers, the UCS 5108 chassis can be equipped with up to two UCS 2104XP Fabric Extenders. The Fabric Extenders are four-port modules whose ports can be populated with either fiber or copper connections.
The chassis provides power, cooling and connectivity between the blade servers and the UCS 6120XP Fabric Interconnect. As will become apparent in the management section of this review, the chassis contributes to the “unified” part of Cisco UCS by slipping into a management domain without adding another discrete management point. As subsequent chassis are added to a Cisco UCS domain, they are managed through UCS Manager without requiring an additional IP address and separate console.
Fabric Interconnects
The Cisco UCS 6120XP Fabric Interconnect is a 1U top-of-rack device that provides Ethernet and Fibre Channel connectivity between the blade servers in the chassis and the LAN or SAN resources.
The Fabric Interconnect can be configured with a variety of expansion modules, depending on the LAN and storage resources that are available to the servers. Of importance for IT managers is that one or two UCS 6120XP Fabric Interconnects can provide consolidated 10 Gigabit Ethernet and 1-, 2- and 4-Gbps Fibre Channel connectivity for high-performance, low-latency applications.
The Fabric Interconnect is the basis of a UCS management domain with a theoretical limit of 40 blade chassis and up to 320 servers (eight half-width blades per chassis). My test environment at Cisco’s demonstration lab was much more modest, composed of a single chassis with five blade servers.
The Fabric Interconnect plays a central role in Cisco UCS. Aside from providing the physical interconnect between the blade servers and the LAN and SAN resources, the Fabric Interconnect is also the management “brain” of the Cisco UCS.
My test environment used two UCS 6120XP Fabric Interconnects for redundancy and increased throughput. The Fabric Interconnects were equipped with N10-E0080 expansion modules that provided eight 4-Gbps Fibre Channel uplinks. Other expansion modules are available that can provide either four or six 10-Gbps uplinks.
In my test environment, the UCS 6120XP Fabric Interconnects were connected to a Cisco Catalyst 3750 switch for LAN resources and an MDS 9506 Multilayer Director SAN switch for access to a variety of storage resources.
Software Components
The integrated hardware would be elegant but incomplete without the Cisco UCS Manager software application. The UCS Manager lives in the UCS 6120XP Fabric Interconnect. As was mentioned previously, the UCS Manager is a device manager that provides discovery and monitoring features, as well as low-level configuration support for the hardware and logical components that make up the Cisco UCS.
During my tests, this meant that I was able to use the UCS Manager to provide low-level identifiers that are normally burned into hardware devices, such as the MAC address on a network interface card, to mask physical changes in the network from operating systems and hypervisors running on that hardware.
For example, I was able to use VMotion to move a virtual machine from a VMware ESX Server running on one physical blade server to another, change out the physical blade for one with more RAM and a faster processor, and then migrate the virtual machine back without the VMware ESX Server knowing that it was moving onto a physically different server. In other words, when physical actions are required, either to recover from hardware failure or to facilitate an upgrade, they can be performed in a manner that is nondisruptive to the operating system or hypervisor.
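To make the abstraction concrete, consider this conceptual Python sketch (my illustration, not Cisco’s implementation): the logical server owns its identifiers, so reassociating it with a different physical blade changes nothing that the operating system or hypervisor can see.

# Conceptual sketch (not Cisco's code): a logical server identity
# carries its "burned-in" identifiers, so moving it between physical
# blades is invisible to the OS or hypervisor booted from it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalBlade:
    slot: int
    ram_gb: int

@dataclass
class LogicalServer:
    name: str
    mac: str             # the identifiers the OS sees live with the
    wwpn: str            # logical server, not the blade hardware
    blade: Optional[PhysicalBlade] = None

    def associate(self, blade: PhysicalBlade) -> None:
        self.blade = blade   # MAC and WWPN are untouched by the move

host = LogicalServer("esx-host-1", "00:25:b5:00:00:01",
                     "20:00:00:25:b5:00:00:01")
host.associate(PhysicalBlade(slot=1, ram_gb=48))
# Swap in a blade with more RAM; the hypervisor still sees the same
# MAC and WWPN, so the change is invisible to it.
host.associate(PhysicalBlade(slot=2, ram_gb=96))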
The UCS Manager interface is divided into five areas: administration, equipment, servers, LAN and SAN. Administration covers the operation of UCS Manager; equipment generally refers to the actual, physical UCS equipment; servers generally covers the logical entities that are created and used in UCS Manager; LAN refers to networking; and SAN refers to storage resources.
Administration of UCS Manager
The UCS Manager is built around a DME (Data Management Engine). Secure access to the UCS Manager is configured from the DME, and all changes to the UCS configuration are conducted as transactions to the DME.
I was able to set up roles that limited UCS Manager users to specific areas of the product interface (for example, access only to LAN functions) and/or to a specific organizational area (in my test case, eWEEK West). This compartmentalization based on function or organization makes the Cisco UCS suitable for use by organizations that provide multitenant services where operational separation is necessary.
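The model is easiest to see as two independent checks, one per axis. Here is a conceptual Python sketch of the idea (my illustration, not Cisco’s data model), using “locale” loosely for the organizational restriction:

# Conceptual sketch of the two-axis access control I tested: a role
# scopes *functions* (LAN, SAN, server ops), while a locale scopes
# *organizations* (e.g., the eWEEK West subtree).
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    roles: frozenset      # functional areas this user may touch
    locales: frozenset    # organizational subtrees this user may touch

def authorized(user: User, function: str, org: str) -> bool:
    """Both the function and the organization must be permitted."""
    return function in user.roles and org in user.locales

lan_admin = User("pat", frozenset({"LAN"}), frozenset({"eWEEK West"}))
print(authorized(lan_admin, "LAN", "eWEEK West"))  # True
print(authorized(lan_admin, "SAN", "eWEEK West"))  # False: wrong function
print(authorized(lan_admin, "LAN", "eWEEK East"))  # False: wrong org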
While it was easy enough to back up the UCS Manager configuration, one of the Version 1.0 weaknesses I found is that the process must be kicked off manually. I would like to see an internal mechanism for scheduling configuration backups, and for these backups to be handled more gracefully. Currently, the backup overwrites the previous configuration.
Fortunately, everything that can be done in the UCS Manager GUI can also be done at the command line. I anticipate that most data center managers will use the CLI to automate tasks such as configuration backup and other routine maintenance tasks.
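As an example of the kind of automation I have in mind, the following Python sketch drives a backup over SSH and stamps the output file with the date so that each run stops overwriting the last. The backup command string and host names are hypothetical, so substitute the actual UCS Manager CLI syntax; scheduling would come from cron or a similar facility.

# Sketch of scripting the UCS Manager CLI over SSH to schedule the
# configuration backup that the GUI only runs on demand. The backup
# command below is hypothetical; use the real UCS Manager CLI syntax.
# Run from cron (e.g., "0 2 * * *") to get the scheduling UCS lacks.
import datetime
import paramiko  # third-party SSH library: pip install paramiko

def backup_ucs(host: str, user: str, password: str) -> str:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    # Hypothetical command; the timestamped filename avoids
    # overwriting the previous backup.
    cmd = f"backup config ftp://backups.example.com/ucs-{stamp}.xml"
    _, stdout, _ = client.exec_command(cmd)
    output = stdout.read().decode()
    client.close()
    return output

print(backup_ucs("ucs-fabric-a.example.com", "admin", "secret"))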
Resource Pools-The Building Blocks
Cisco uses resource pools to great advantage in UCS Manager.
The basic idea is to pre-position the network and storage identifiers that otherwise hamper speedy hardware changes in the data center. In practice, this meant that I created pools of MAC addresses and WWPNs (World Wide Port Names) from which policy-driven tools called Service Profiles automated the creation of logical server entities in the UCS management domain.
In an actual organization, these resource pools would be created in cooperation with network and storage engineers, so that when a Service Profile created a system that used one of these unique identifiers, it would be correctly configured to join the network or SAN resource.
During my tests, machines that I created using this method worked correctly. While it was just a matter of walking through a wizard to configure these tools, the knowledge required to configure the pools correctly means that only the most experienced IT staff should be involved in this part of UCS Manager setup.
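To illustrate the pool concept itself, here is a short Python sketch of my own (not Cisco’s internals): identifiers come from pre-approved ranges, so every logical server is created with addresses the network and storage teams have already signed off on.

# Conceptual sketch of identifier pools: network and storage teams
# pre-approve ranges of MACs and WWPNs, and each new service profile
# draws unique values from them. Not Cisco's implementation.
class IdentifierPool:
    def __init__(self, prefix: str, start: int, count: int, width: int):
        self.free = [f"{prefix}{i:0{width}x}"
                     for i in range(start, start + count)]

    def allocate(self) -> str:
        if not self.free:
            raise RuntimeError("pool exhausted; grow it with the network team")
        return self.free.pop(0)

mac_pool = IdentifierPool("00:25:b5:00:00:", 1, 64, 2)
wwpn_pool = IdentifierPool("20:00:00:25:b5:00:00:", 1, 64, 2)

def create_logical_server(name: str) -> dict:
    """Mimics a Service Profile pulling pre-approved identifiers."""
    return {"name": name,
            "mac": mac_pool.allocate(),
            "wwpn": wwpn_pool.allocate()}

print(create_logical_server("web-01"))  # e.g., mac 00:25:b5:00:00:01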
Equipment-Real Physical Devices
The physical Cisco UCS devices are managed in the equipment tab, one of the chief features that enable a Cisco UCS installation to grow in size without adding much in the way of management overhead.
When I installed and removed physical server blades from the chassis, I could see the change reflected on the management screen. I was able to monitor power supplies, fans, and temperature and power consumption from the UCS Manager interface. If a subsequent chassis was added, it would be managed in the same screen without adding another management IP address.
IT managers should pay special attention to the firmware management capabilities of UCS Manager. In contrast with the underdeveloped configuration backup system, considerable effort was spent on ensuring that firmware rollouts are handled in a graceful and efficient manner. The firmware deployment system allowed me to see the running, startup and backup versions of the firmware for all the components in the Cisco UCS.
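The running/startup/backup split lends itself to simple drift checks. Here is an illustrative Python sketch (the component names and version strings are made up) of how a monitoring script might flag components that will change firmware on their next reboot:

# Illustration of the running/startup/backup firmware model that
# UCS Manager exposes per component; a monitoring script could flag
# components that will change version on their next reboot.
from dataclasses import dataclass

@dataclass
class FirmwareState:
    component: str
    running: str   # version active now
    startup: str   # version that loads on next boot
    backup: str    # previous version kept for rollback

inventory = [
    FirmwareState("blade-1 BMC", "1.0(1e)", "1.0(2d)", "1.0(1e)"),
    FirmwareState("fabric-A",    "4.0(1a)", "4.0(1a)", "4.0(0)"),
]

for fw in inventory:
    if fw.running != fw.startup:
        print(f"{fw.component}: will move {fw.running} -> {fw.startup} on reboot")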
UCS Manager also provides extensive information about the hardware components installed in the rack, and all of this data is discovered automatically. Because Cisco makes all of the physical components and has welded the whole thing together with a fairly elegant management system, hardware enumeration and monitoring of power and temperature are easy to access and well-presented so that problems can be immediately diagnosed.
Servers-Logical Devices
Service Profiles define and provision UCS resources. The MAC address, WWPN, firmware version, BIOS boot order and network attributes are all programmable.
I used Service Profiles to pre-position configuration settings for rapid deployment within my UCS Manager domain, based on the application I wanted to use. I could force servers to use an older version of firmware and pull “burned in” identification information from resource pools that were ready for use without the need to further consult with network or storage staff.
Similarly, I was able to configure LAN and SAN resources so that applications running on my UCS management domain were correctly provisioned without manual intervention on my part to configure the server.
Taken all together, Cisco UCS tied the compute, network and storage management components into a neat bundle of productivity, delivering ready-to-run servers through the management tool.
Technical Director Cameron Sturdevant can be reached at csturdevant@eweek.com.