RLX Technologies Inc.'s RLX HPC Cluster Manager offers a simple, flexible way for enterprise IT managers to build a high-performance computing platform with a robust management framework—all at an affordable price.
RLX HPC Cluster Manager is a software add-on packaged with RLX's Control Tower XT blade server management and provisioning tool. In eWEEK Labs' tests using HPC Cluster Manager, Control Tower XT and RLX ServerBlades, RLX's Linux cluster manager was easy to use and offered an impressive array of management capabilities. HPC Cluster Manager also provided a comprehensive view of cluster performance, giving us an accurate assessment of overall cluster health.
Blade servers are ideal building blocks for large HPC clusters, and newer Xeon processor-based blade servers offer better computational density than rack-optimized servers do. RLX offers the right set of management tools for blades and, when these are harnessed with HPC Cluster Manager, a cost-effective package for deploying Linux-based clusters in HPC environments.
We tested HPC Cluster Manager using an RLX System 600ex chassis with 10 ServerBlade 2800i blade servers. Our testbed used high-bandwidth InfiniBand interconnects, which are well-suited to HPC applications. The RLX ServerBlades have embedded InfiniBand HCAs (Host Channel Adapters), and the RLX System 600ex chassis was outfitted with a 10-port InfiniBand pass-through card that connected the HCA on each blade to an external 12-port InfiniBand switch from Topspin Communications Inc. This setup allowed I/O transfers among the server blades at 10Gb-per-second speeds.
Parallel cluster computing requires fast, scalable, high-bandwidth interconnects for interprocessor communications. In a switched fabric such as InfiniBand, each node added to the cluster brings its own link, so aggregate bandwidth goes up proportionately while latency remains constant. InfiniBand also addresses the reliability and scalability of interprocessor communications by offering multipath connectivity between nodes and a modular architecture, and its I/O fabric is designed to scale without affecting latency.
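The scaling argument is simple arithmetic, sketched below in Python. The per-link speed matches our testbed; the per-hop latency figure is an assumption for illustration only:

# Back-of-envelope sketch: in a switched fabric, each added node brings
# its own link, so aggregate bandwidth grows with node count while
# per-hop latency stays flat. Figures are illustrative assumptions.
LINK_GBPS = 10    # per-blade 4X InfiniBand link speed
LATENCY_US = 6    # assumed per-hop switch latency (illustrative)

for nodes in (2, 10, 70):
    print(f"{nodes:3d} nodes: {nodes * LINK_GBPS:4d}Gb/s aggregate, "
          f"~{LATENCY_US}us per hop (unchanged)")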
In HPC cluster environments, an individual server represents a single node in the cluster. The 6U (10.5-inch) RLX System 600ex supports 10 dual-Xeon server blades, so seven chassis, or a 70-node cluster, fit in an industry-standard 42U rack.
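The density math, for those keeping score (a quick sketch using the figures above):

# A standard 42U rack holds seven 6U System 600ex chassis,
# each carrying 10 dual-Xeon blades.
RACK_U, CHASSIS_U, BLADES_PER_CHASSIS = 42, 6, 10
chassis_per_rack = RACK_U // CHASSIS_U          # 7
print(chassis_per_rack * BLADES_PER_CHASSIS)    # 70 nodes per rack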
HPC Cluster Manager can be used to build and manage Linux clusters running Red Hat Inc.'s Red Hat Linux Version 2.1 or later. Clustering software support is currently limited to Platform Computing Inc.'s Platform LSF 5.0 and the open-source OpenPBS 2.3, and we hope RLX will step up its software support down the road.
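For readers unfamiliar with OpenPBS, the sketch below shows the kind of job a cluster built this way would run. The #PBS directives and the qsub command are standard OpenPBS; the node count, application name and file handling are hypothetical examples, not RLX's own tooling:

# Minimal Python sketch: write an OpenPBS job script and submit it
# with qsub. The resource request and application name ("my_mpi_app")
# are hypothetical.
import os
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/sh
#PBS -N hpc-test
#PBS -l nodes=4:ppn=2
cd $PBS_O_WORKDIR
./my_mpi_app
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(JOB_SCRIPT)
    path = f.name

# On success, qsub prints the new job's ID to stdout.
result = subprocess.run(["qsub", path], capture_output=True, text=True)
print(result.stdout.strip())
os.unlink(path)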
We used a single Web browser interface to create, manage and monitor our test clusters. Because HPC Cluster Manager is integrated with the Control Tower interface, IT managers can perform all system management tasks from one browser.
Using the Cluster Manager tab in the Control Tower Web interface, we easily built custom LSF and OpenPBS clusters with Red Hat Linux images. HPC Cluster Manager also includes a useful monitoring page that displays graphical performance data, such as CPU usage, interconnect I/O and system utilization, at a glance.
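As an illustration of the kind of per-node figure such a dashboard aggregates, the sketch below reads the standard Linux /proc/stat counters. It is our own example, not RLX's implementation:

# Illustrative sketch: compute CPU utilization on one Linux node by
# sampling /proc/stat twice. A cluster monitor aggregates figures like
# this across every blade.
import time

def cpu_utilization(interval: float = 1.0) -> float:
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return fields[3], sum(fields)   # idle time, total time

    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

print(f"CPU utilization: {cpu_utilization():.1f}%")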
Technical Analyst Francis Chu can be reached at francis_chu@ziffdavis.com.