Core of the Offering
By Matthew Sarrel  |  Posted 2009-05-26
The SAN/iQ centralized management console is the core of the offering. All configuration and management takes place here in a streamlined and intuitive GUI. Everything was right at my fingertips, including context-sensitive help.

The Find Nodes Wizard launched automatically, and I chose to scan the entire subnet. The wizard found my two storage nodes and provided a link to launch the Management Groups, Clusters and Volumes Wizard. I created a management group and a new administrator, and noted that it's possible to have multiple admins with different privileges for different storage entities, an essential feature for enterprise and data center deployments.

I then created a standard cluster (the other option is a multisite cluster) and named it. I created a VIP (virtual IP) for the cluster to enable load balancing. I also created a new volume using the Basic tab, where I had to enter only a volume name and size.

The Advanced tab is where things got interesting. I configured the volume for replication between storage nodes in the same cluster, which extends the performance, availability and reliability aspects of RAID to the network. I also configured snapshots to occur at regularly scheduled intervals. I chose full provisioning for this volume, but later chose thin provisioning for others with the same degree of ease.
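
To make the provisioning distinction concrete: full provisioning reserves all of a volume's physical capacity at creation time, while thin provisioning allocates physical blocks only as data is written. Below is a minimal Python sketch of that difference, using a toy block-allocation model rather than SAN/iQ's actual implementation (the 4KB block size is purely illustrative):

```python
# Toy model contrasting full vs. thin provisioning. This is an
# illustration of the general technique, not SAN/iQ's implementation.

class Volume:
    """A simplified volume: 'full' reserves every block up front,
    'thin' allocates blocks only when they are first written."""

    BLOCK_SIZE = 4096  # bytes; an illustrative value

    def __init__(self, size_blocks, provisioning="full"):
        self.size_blocks = size_blocks
        # Full provisioning claims all physical blocks immediately.
        self.allocated = set(range(size_blocks)) if provisioning == "full" else set()

    def write(self, block_index):
        if block_index >= self.size_blocks:
            raise IndexError("write past end of volume")
        # Thin provisioning allocates lazily, on first write.
        self.allocated.add(block_index)

    def physical_usage(self):
        """Physical bytes actually consumed on the back end."""
        return len(self.allocated) * self.BLOCK_SIZE

full = Volume(1024, "full")
thin = Volume(1024, "thin")
thin.write(0)
thin.write(1)
print(full.physical_usage())  # 4194304 bytes reserved up front
print(thin.physical_usage())  # 8192 bytes: only what was written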

The centralized management console announced that it was executing my tasks, the access lights started flashing on the drives, and then the console (and Java) crashed. After I rebooted the server, I was pleasantly surprised that there was no lasting damage from the crash. I launched the centralized management console and created a new iSCSI server. I was immediately able to access it from the Microsoft iSCSI Initiator on my Windows server.  
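
For anyone scripting a similar setup, a quick sanity check before pointing the Microsoft iSCSI Initiator at a target is to confirm the portal is listening on TCP port 3260, the standard iSCSI port. A minimal Python probe follows; the address is a hypothetical cluster VIP, not one from my test bed:

```python
# Probe the iSCSI portal on the cluster VIP. This only confirms the
# portal accepts TCP connections on port 3260; it does not log in.
import socket

VIP = "192.168.1.50"   # hypothetical cluster virtual IP
ISCSI_PORT = 3260      # standard iSCSI target port

with socket.create_connection((VIP, ISCSI_PORT), timeout=5) as sock:
    print(f"iSCSI portal {VIP}:{ISCSI_PORT} is accepting connections")
```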

Yet it's not all about management; performance in my tests was first-rate. To assess performance, I ran Iometer 2006.07.27 from two Windows Server 2003 servers and generated a number of different workloads to represent a database/e-commerce environment, a mail server environment, a streaming media environment and combinations of these environments. Throughput was consistently in the 115MB-to-130MB-per-second range, with average latency in the 8ms-to-15ms range. Performance peaked at just over 160MB per second during the streaming media workload, when cache hits were around 50 percent.
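
The link between Iometer's access specs and these throughput figures is simple arithmetic: throughput equals IOPS times block size. The block sizes and mixes below are representative assumptions for these workload types, not the exact specs I used, and the IOPS rate in the example is hypothetical:

```python
# Back-of-the-envelope conversion between IOPS and throughput for the
# kinds of Iometer access specs described above. Parameters are
# representative assumptions, not the review's actual test specs.

workloads = {
    # name: (block size in bytes, % read, % random) -- assumed values
    "database":        (8 * 1024, 67, 100),
    "mail server":     (4 * 1024, 50, 100),
    "streaming media": (64 * 1024, 98, 0),
}

def throughput_mb_s(iops, block_size):
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size / 1_000_000

# Sequential 64KB streaming reads at a hypothetical 2,500 IOPS work out
# to ~164MB/s, consistent with a just-over-160MB-per-second peak.
print(f"{throughput_mb_s(2500, 64 * 1024):.0f} MB/s")
```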

I discovered that by bonding NICs in the TCP/IP Network menu under each storage node, I could dramatically increase throughput.
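
That result makes sense once you run the numbers. Assuming Gigabit Ethernet links on the storage nodes (an assumption; link speed isn't stated above), a single NIC caps out at 1Gbps, or 125MB per second before protocol overhead, so throughput in the 115MB-to-130MB-per-second range is pressing against a one-link ceiling, and bonding is what raises the aggregate limit:

```python
# Rough ceiling math for bonded NICs, assuming Gigabit Ethernet links.

GBE_BITS_PER_SEC = 1_000_000_000  # raw capacity of one GbE link

def bonded_ceiling_mb_s(num_links, efficiency=0.95):
    """Aggregate throughput ceiling in MB/s for bonded GbE links.
    `efficiency` is an assumed allowance for TCP/IP and iSCSI overhead."""
    return num_links * GBE_BITS_PER_SEC / 8 / 1_000_000 * efficiency

print(f"1 NIC:  {bonded_ceiling_mb_s(1):.0f} MB/s")  # ~119 MB/s ceiling
print(f"2 NICs: {bonded_ceiling_mb_s(2):.0f} MB/s")  # ~238 MB/s ceiling
```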

Matthew D. Sarrel, CISSP, is a network security, product development, and technical marketing consultant based in New York City. He is also a game reviewer and technical writer. To read his opinions on games, browse http://games.mattsarrel.com; for more general information on Matt, see http://www.mattsarrel.com.