If your data center is like most data centers, then you’re probably constantly shopping for storage systems and storage upgrades. Although the majority of businesses are slicing their IT budgets this year, recent surveys by Forrester have shown that storage (and security, to a lesser extent) spending is still growing. How can you balance growing storage needs with decreasing budgets, especially when you require a high-performance and fault-tolerant multiterabyte SAN solution?
The first way to narrow your storage area network search if you are price-conscious is to focus on iSCSI products. Sure, you’ll have to give up performance, but the overall savings (on drive arrays, network switches and personnel) provided by IP-based iSCSI look pretty attractive this year. In addition, using SATA (Serial ATA) rather than SAS (serial-attached SCSI) or SCSI drives also keeps the price down. Depending on your usage characteristics, you might not benefit from using the more scalable drive technologies, anyway.
Cybernetics goes to great lengths to establish the right balance of price and features for the price-conscious iSCSI SAN shopper. We’ve all heard about the 80/20 rule: 80 percent of users only need 20 percent of a typical product’s features. Cybernetics takes this to heart with the miSAN D. Where other SAN manufacturers, such as Xiotech, pack innumerable features into their products, Cybernetics focuses on providing only those features you’re likely to use: volume snapshots, internal RAID, full device redundancy and device-to-device replication. Then they add a few valuable features to the mix, such as integrated agentless backup and complimentary tech support. The price for the units as tested is about $16,000.
The test units arrived at the lab in excellent condition, packed securely in fully recyclable packaging.
Installation could not have been easier. There is a helpful sticker with a map of available ports on the top of each unit. From each unit, I connected two 1G-bps Ethernet ports to a switch for data transmission and one 1G-bps Ethernet port to a separate switch for management. I then connected the two devices directly to each other for failover heartbeat using two 1G-bps Ethernet links. I fired up a Web browser from my management workstation, pointed it at the default management IP address, logged in and began configuring.
The streamlined browser-based management GUI is very easy to navigate and use. However, two things disappointed me. First, I was not forced to change the default login credentials. This is not the end of the world, because this is not an externally facing system and is therefore unlikely to be attacked. However, as a security guy, this is something I notice. Second, the management GUI has no built-in help at all. The units did arrive with CDs containing complete documentation in PDF format, and Cybernetics has done everything possible to keep complexity down, so this is forgivable. But it's still worth mentioning.
There is only one management account for the entire unit. This is fine when there is only one storage administrator, but organizations with multiple storage admins will be disappointed by the lack of separate admin accounts and the accompanying lack of a per-admin audit trail. Likewise, reporting is very basic: pretty much limited to whether the unit is up and how much data has been written and read during the last one, three or seven days. All other usage statistics must be obtained through the connected servers' operating systems.
The first thing I did during tests was configure the two units for failover. I designated one system the master and one the slave, then indicated that failover should happen instantly upon fault detection. (Other choices include after 5 or 30 seconds.) I subsequently verified that failover worked properly: When I pulled the plug on the master, the slave became the new master in milliseconds.
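The master/slave takeover behavior described above can be sketched generically. This is purely an illustration of heartbeat-based failover with a configurable detection delay (instant, 5, or 30 seconds, matching the miSAN D's options); it is not Cybernetics' actual implementation, and all names here are invented for the example:

```python
import time

class FailoverNode:
    """Illustrative heartbeat-based failover sketch (not the miSAN D's
    real firmware). The slave promotes itself to master when the
    master's heartbeat has been silent longer than the configured delay."""

    def __init__(self, delay_seconds=0):
        # The miSAN D offers instant (0), 5-second, or 30-second failover.
        assert delay_seconds in (0, 5, 30)
        self.delay = delay_seconds
        self.role = "slave"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Record a heartbeat received from the master over the
        dedicated heartbeat links."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Promote to master if the heartbeat has gone silent too long."""
        now = time.monotonic() if now is None else now
        if self.role == "slave" and now - self.last_heartbeat > self.delay:
            self.role = "master"  # take over serving the volumes
        return self.role
```

With the delay set to zero, the promotion happens on the very next check after the heartbeat stops, which is consistent with the millisecond-scale takeover I observed when I pulled the plug on the master.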
I easily created virtual disks and exposed them to my test server. Each virtual disk can have its own snapshot policy or follow the global snapshot policy. I scheduled snapshots to occur at regular intervals. (Options include every X minutes in 15-minute increments, or at a specific day and time.)
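The per-disk policy logic works the way you would expect: a virtual disk with no policy of its own inherits the global one. A minimal sketch of that fallback, with the 15-minute-increment scheduling constraint from above (the function and field names are my own, not the unit's API):

```python
# Illustrative sketch of per-volume snapshot policies falling back to a
# global default, as on the miSAN D. Names are hypothetical.

GLOBAL_POLICY = {"every_minutes": 60}  # global schedule; multiple of 15

def effective_policy(volume_policy=None):
    """Return the volume's own snapshot policy if one is set,
    otherwise the global policy."""
    policy = volume_policy if volume_policy is not None else GLOBAL_POLICY
    interval = policy.get("every_minutes")
    # Interval-based schedules are offered in 15-minute increments.
    if interval is not None and interval % 15 != 0:
        raise ValueError("snapshot intervals come in 15-minute increments")
    return policy
```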
The miSAN D excels at built-in archive and backup functionality. Volumes can be configured to replicate snapshots on a regular schedule to other units across WAN links for business continuity and disaster recovery purposes. Individual snapshots can be copied onto media connected directly to the unit via USB. A distinctive feature is that snapshots can be migrated to tape almost transparently. I connected a Cybernetics CY-L881 tape drive to an external SCSI port on the master unit and configured regular backup jobs using the management GUI in minutes.
I measured performance using Iometer 2006.07.07 on a Lenovo RD120 running Windows Server 2003 EE with the Microsoft iSCSI initiator. The server had two 1G-bps NICs, so I was able to use the iSCSI initiator’s MPIO feature to round-robin load balance traffic to increase throughput.
I saw a huge variation in results depending on the block size I used during performance testing. When the miSAN D was able to handle all the Iometer traffic in cache (it has a 4GB read/write cache, unusually large for this class of device), performance reached 250 to 300MB per second.
With the miSAN D configured for RAID 0, I launched three Iometer threads, ramping up three at a time to a maximum of 24 threads, and ran a 50/50 percent sequential/random, 33/67 percent write/read mix with a 64KB workload size. Performance peaked at 1,072 IOps and 67MB per second, at which point average response time was 42 ms. When I reconfigured the unit for RAID 5 (plus a spare drive) and repeated the test, performance peaked at 1,092 IOps and 68MBps, with an average response time of just over 18 ms. That is adequate performance when requests hit the drive array and excellent performance when data can be served from cache.
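As a sanity check, the throughput figures follow directly from the IOps numbers and the 64KB workload size, since throughput is simply operations per second times bytes per operation:

```python
def mb_per_second(iops, block_kb):
    """Throughput implied by an IOps figure at a given block size,
    using 1MB = 1024KB."""
    return iops * block_kb / 1024  # KB/s -> MB/s

# RAID 0 peak: 1,072 IOps at 64KB
print(round(mb_per_second(1072, 64)))  # -> 67, matching the measured 67MBps
# RAID 5 peak: 1,092 IOps at 64KB
print(round(mb_per_second(1092, 64)))  # -> 68, matching the measured 68MBps
```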