The HP LeftHand P4300 4.8TB SAS Starter SAN Solution is an affordable, scalable and manageable iSCSI SAN for midsize and larger organizations.
The centerpiece of the offering is the SAN/iQ storage software and its excellent management capabilities. The combination of LeftHand’s software and Hewlett-Packard’s hardware (the units I tested were 2U DL185 servers) creates a true n-way clustered architecture, allowing for a combination of load balancing and failover.
This architecture allows multiple storage controllers (servers with drive arrays) to appear to administrators, users and applications as a single logical system. This simplifies maintenance and expansion because a single node can be taken offline to upgrade components (such as CPU, RAM or NICs) without downing the whole cluster.
New storage controllers can be added very easily. During eWEEK Labs’ tests, I added one to my cluster in less than 15 minutes simply by assigning IP addresses and an admin user name and password. This sort of flexibility is worth its weight in gold in the always-on world of enterprise storage. In contrast, the Cybernetics mi-SAN-D I recently reviewed offers only active/passive failover, which means no load balancing and offline upgrades only.
After racking and connecting the dual power supplies, dual 1G-bps NICs for data and a third NIC to my management network, I powered up the two units that are bundled together in the HP LeftHand P4300 4.8TB SAS Starter SAN Solution, which starts at $35,000. (Alternatively, a 12TB SATA Starter SAN starts at $30,000.)
The first system booted smoothly and launched an ugly but useful installation tool with which I configured the server name, IP networking, and an admin user name and password. The second server started, but failed to recognize the array controller and therefore didn’t boot completely.
Of course, the first thing I did then was take the server apart, at which point I noticed that the board containing the SAN/iQ software had fallen off the P400 RAID controller, obviously a casualty of rough handling during shipping. Popping it back on did not solve the problem, but after an e-mail and a quick call, a new controller and a third server were on their way. After I installed the new controller, the server booted up smoothly.
The next step was to install the Windows Solution Pack and the SAN/iQ management console on my Microsoft Windows 2003 EE server. The Windows Solution Pack includes the LeftHand DSM (Device Specific Module) for MPIO, which greatly improves performance between Windows Server and the cluster, and the VDS and VSS providers necessary for virtual machine storage management.
Core of the Offering
The SAN/iQ centralized management console is the core of the offering. All configuration and management takes place here in a streamlined and intuitive GUI. Everything was right at my fingertips, including context-sensitive help.
The Find Nodes Wizard automatically launched, and I chose to scan the entire subnet. The wizard found my two storage nodes and provided a link to launch the Management Groups, Clusters and Volumes Wizard. I created a management group and a new administrator, and noted that it’s possible to have multiple admins with different privileges for different storage entities. This is an essential feature for enterprise or data center implementations.
I then created and named a standard cluster (the other option is a multisite cluster). I created a VIP (virtual IP) for the cluster to enable load balancing. I also created a new volume using the Basic tab, where I had to enter only a volume name and size.
The Advanced tab is where things got interesting. I configured the volume for replication between storage nodes in the same cluster, which extends the performance, availability and reliability aspects of RAID to the network. I also configured snapshots to occur at regularly scheduled intervals. I chose full provisioning for this volume, but later chose thin provisioning for others with the same degree of ease.
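The difference between the two provisioning choices is easy to see in miniature. The sketch below is purely an illustration of the concept, not how SAN/iQ actually allocates storage: a fully provisioned volume reserves its entire logical size up front, while a thin volume consumes physical space only as blocks are written.

```python
class Volume:
    """Minimal model of full vs. thin provisioning (illustration only,
    not SAN/iQ's actual allocation scheme)."""

    BLOCK = 1024 * 1024  # 1MB allocation unit, assumed for the example

    def __init__(self, logical_size, thin=False):
        self.logical_size = logical_size
        self.thin = thin
        self.written = set()  # indices of blocks that have been written

    @property
    def physical_size(self):
        # Full provisioning reserves all space immediately;
        # thin provisioning consumes space only as data lands.
        if not self.thin:
            return self.logical_size
        return len(self.written) * self.BLOCK

    def write(self, offset, length):
        first = offset // self.BLOCK
        last = (offset + length - 1) // self.BLOCK
        self.written.update(range(first, last + 1))

full = Volume(100 * 1024**3)             # 100GB, fully provisioned
thin = Volume(100 * 1024**3, thin=True)  # 100GB, thin-provisioned
thin.write(0, 10 * 1024**2)              # write 10MB at the start
print(full.physical_size)  # all 100GB reserved immediately
print(thin.physical_size)  # only the 10MB actually written
```

The trade-off is the usual one: thin provisioning saves disk until it's needed, at the cost of having to watch that actual consumption never outruns physical capacity.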
The centralized management console announced that it was executing my tasks, the access lights started flashing on the drives, and then the console (and Java) crashed. After I rebooted the server, I was pleasantly surprised that there was no lasting damage from the crash. I launched the centralized management console and created a new iSCSI server. I was immediately able to access it from the Microsoft iSCSI Initiator on my Windows server.
Yet, it’s not all about management: Performance in my tests was first-rate. To assess it, I ran Iometer 2006.07.27 from two Windows Server 2003 servers and generated a number of different workloads to represent a database/e-commerce environment, a mail server environment, a streaming media environment and combinations of these environments. Throughput was consistently in the 115MB- to 130MB-per-second range, with average latency in the 8ms to 15ms range. Performance peaked at just over 160MB per second during the streaming media workload, when cache hits were around 50 percent.
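Iometer drives tests of this kind with access specifications: a block size plus read and randomness percentages per workload. The parameters below are illustrative assumptions approximating the environments named above, not the exact specifications used in my tests.

```python
# Hypothetical Iometer-style access specifications. The block sizes and
# read/random ratios are assumed values chosen to resemble the workload
# types described in the review, not the measured test parameters.

workloads = {
    # name: (block_size_kb, read_pct, random_pct)
    "database/e-commerce": (8, 67, 100),   # small, mostly-read, random I/O
    "mail server":         (4, 50, 100),   # small, mixed, random I/O
    "streaming media":     (64, 98, 0),    # large, read-heavy, sequential
}

def describe(name):
    size, read, rand = workloads[name]
    return f"{name}: {size}KB transfers, {read}% read, {rand}% random"

for name in workloads:
    print(describe(name))
```

The streaming media profile's large sequential reads explain why that workload produced the throughput peak: sequential 64KB transfers are the friendliest possible case for drives and cache alike.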
I discovered that by bonding NICs in the TCP/IP Network menu under each storage node, I could dramatically increase throughput.
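A quick back-of-envelope calculation shows why bonding helps. The protocol overhead figure below (roughly 7 percent for Ethernet, IP, TCP and iSCSI headers) is an assumption for illustration, not a measured number.

```python
# Rough ceiling for iSCSI throughput over 1G-bps links.
# The ~7% protocol overhead is an assumed, illustrative figure.

LINK_GBPS = 1.0   # one 1G-bps data NIC
OVERHEAD = 0.07   # Ethernet/IP/TCP/iSCSI header overhead (assumption)

def usable_mb_per_sec(gbps, overhead=OVERHEAD):
    raw = gbps * 1000 / 8          # Gbps -> MB/s (decimal megabytes)
    return raw * (1 - overhead)

single = usable_mb_per_sec(LINK_GBPS)      # one NIC
bonded = usable_mb_per_sec(2 * LINK_GBPS)  # bonded pair

print(f"single link: {single:.0f}MB/s, bonded pair: {bonded:.0f}MB/s")
```

By this estimate a single link tops out near 116MB per second, which is exactly where my sustained 115MB to 130MB numbers sat; the 160MB-per-second peak is only reachable once a second link is carrying traffic.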
Icing on the Cake
The icing on the cake is the set of top-notch alerting and monitoring capabilities. I easily set monitored variables ranging from CPU Utilization to Storage Server Latency for each storage node. Then, clicking from Alert Setup Tasks to Set Threshold Actions, I chose how each alert should be issued: displayed in the console, sent via SNMP or sent via e-mail. When I checked e-mail, I was prompted for an address and then clicked OK.
I navigated to the Email Server Setup tab to enter SMTP settings for use in sending alerts, but there was no way to authenticate (required on my e-mail server) before sending, so I was unable to fully test this feature.
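For comparison, the missing step is a small one. Here's a minimal sketch of what an authenticated alert send looks like, using Python's smtplib; the host names, credentials and addresses are hypothetical, and this is not SAN/iQ code.

```python
import smtplib
from email.message import EmailMessage

def build_alert(node, variable, value, to_addr):
    """Format a monitoring alert as an e-mail message."""
    msg = EmailMessage()
    msg["Subject"] = f"SAN alert: {variable} on {node}"
    msg["From"] = "san-alerts@example.com"  # hypothetical sender address
    msg["To"] = to_addr
    msg.set_content(f"{variable} on node {node} crossed threshold: {value}")
    return msg

def send_alert(msg, host="smtp.example.com", user="alerts", password="secret"):
    # The SMTP AUTH step the console lacked: log in before sending.
    # Host and credentials here are placeholders.
    with smtplib.SMTP(host, 587) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

alert = build_alert("storage-node-1", "Storage Server Latency",
                    "22ms", "admin@example.com")
print(alert["Subject"])
```

The point is simply that authenticated submission is a login call before the send; a console that already collects SMTP settings is one form field away from supporting it.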
Each storage node has a Hardware tab, from which I could see the status of various hardware components, such as fans, power and temperature. Complete hardware information is provided on a tab of just that name (Complete Hardware Information).
The only unpleasantness that I have to report is that it took tech support more than 48 hours to respond to an e-mail requesting access to the support Web site. I think it is fair to say that any company that shells out $30K for storage will want access to the documentation within 48 minutes rather than 48 hours.
Although I only tested the HP LeftHand P4300 4.8TB SAS Starter SAN Solution plus one additional storage node, the solution is designed to accommodate more than 100TB across dozens of nodes in multiple locations. It was clear to me that the centralized management console contains more than enough functionality and ease of use to satisfy storage admins responsible for 4.8TB or 48TB. Equally important is the ability to upgrade or repair units within the n-way cluster without sacrificing availability or having to rip, replace and learn a new interface.