By: Frank Ohlhorst
Thanks to technologies such as Virtual Desktop Infrastructure, Big Data analytics and e-Discovery services, data center storage needs are growing fast, with no end in sight. To date, data center operators have relied on traditional SAN (Storage Area Network) technologies to meet storage growth. However, the cost of provisioning, scaling and managing SANs has grown quickly as well, leaving data center managers struggling to meet those growth needs on constrained budgets.
Coraid is out to address the cost and complexity of competing SAN solutions with its EtherDrive SRX series of AoE-based (ATA over Ethernet) storage gear. AoE presents disk storage to servers across a standard Ethernet network at Layer 2. AoE is a much simpler protocol to process than iSCSI or Fibre Channel, both of which rely on heavier protocol stacks built around complex SCSI command sets.
The EtherDrive SRX’s high performance and low latency lend themselves well to VDI (Virtual Desktop Infrastructure) implementations, where virtual machines need to be created on demand for users. Here, the higher throughput means that VDI users get faster access (and boot times) to their virtual desktops, while the low latency makes those desktops more responsive. After all, VDI is a storage-intensive technology, and those deploying it will need all the high-speed storage they can get their hands on.
SRX pricing, including 10GbE support, starts at under $600 per terabyte, with fully loaded appliances starting at under $15,000.
I was able to test and evaluate Coraid’s EtherDrive SRX series during a test/validation installation at Coraid’s Redwood City, Calif., headquarters. The test environment consisted of a Windows Server network with 10GbE connectivity and a few different SRX storage arrays: a 36-disk SRX4200 (72TB total capacity), a 24-disk SRX3500 (14TB total capacity) and a 16-disk SRX2800 (16TB total capacity). Each is a rack-mounted drive enclosure, which Coraid refers to as a shelf. Each shelf runs Coraid’s CorOS parallel-processing scale-out SAN operating system, features as many as four 10GbE (or six 1GbE) interfaces, and supports RAID 0, 1, 5, 6, 10 or JBOD with hot-spare disks.
The SRX4200 is a 4U device, the SRX3500 is 2U, and the SRX2800 is 3U, meaning that large amounts of storage can be squeezed into a relatively small amount of rack space. All units feature redundant hot-swap power supplies, include hot-swap support for SAS, SSD or SATA drives, and offer claimed access speeds of greater than 1,800MBps.
The performance tests I conducted, using the Iometer Exchange 2007 workload generator, showed that the SRX3500 (with 24 15K SAS drives installed) was able to generate 2,907 IOPS (input/output operations per second), while the Iometer Streaming Media simulation generated throughput of more than 1,200MBps.
One of the first things I noticed about the shelves was the quality of the construction: The drive bays were sturdy, the drive trays moved with ease, heavy-duty plastic was used for the release levers, and the rear of the units featured labeled ports and abundant indicator LEDs. That proves to be important because it gives a visual indication of the status of a given component, making it that much easier to swap out the right drive or plug into the correct port. While that may not be unique on a SAN appliance, the differentiator here is that Coraid builds its shelves with commodity components, which keeps costs down and speeds up the manufacturing process.
I found the units very easy to install: I was able to configure and provision LUNs in a matter of minutes. Once a unit is mounted in a rack, installation is a simple matter of plugging in the appropriate cables and powering up the device. Cabling proves to be especially easy because the Coraid units use AoE technology, which places all SAN traffic on Layer 2 of an Ethernet connection. This means commodity Layer 2 Ethernet switches can be used to connect the EtherDrive SAN to the target servers. Layer 2 Ethernet switches usually prove to be much easier to set up than iSCSI or Fibre Channel switches/backplanes, and they are usually much less expensive as well.
Basic setup is accomplished with a command line client utility called CEC (Coraid Ethernet Console), which can detect and connect to any active Coraid shelf via Ethernet. The Windows version of CEC requires Microsoft .NET 4 and WinPcap (a link-layer access tool for Windows environments) to be installed on the management PC. I found CEC very easy to use. Once launched, it performs a “shelf probe” to locate active EtherDrive shelves, which are listed in the CEC interface. Once I located the shelf I wanted to work with, I just had to press Enter, and I was presented with a CLI (command line interface) to control the unit.
While a CLI may sound like old-fashioned technology, the text-based commands are very simple and easy to execute. In fact, a GUI would be a hindrance to a speedy setup. The command set consists of only a few basic commands, including “list,” which displays the physical hard drives, and “make,” which creates LUNs, defines the RAID level and so on. Simply put, I was able to have a LUN configured and available to the Windows server in under a minute, using just three simple commands.
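To give a sense of the workflow, a session along these lines builds and exposes a RAID 5 LUN. This is a paraphrased sketch rather than verbatim CorOS syntax; the “online” step, the prompt and the exact arguments are assumptions based on the command style described above:

```
SRX shelf 1> list                  # show the physical drives in the shelf
  1.0  2000.398GB  up
  1.1  2000.398GB  up
  ...
SRX shelf 1> make 0 raid5 1.0-1.5  # create LUN 0 as a RAID 5 set from six drives
SRX shelf 1> online 0              # expose LUN 0 to AoE initiators on the network
```

Each LUN is addressed on the network by its shelf and slot numbers, which is what the attached servers’ AoE drivers discover.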
However, there are still more pieces to the puzzle here, and they come in the form of drivers and HBAs (host bus adapters). Coraid has created AoE drivers for Linux, Windows and VMware, and also offers HBAs. Driver installation proves to be very simple on the Windows side of things and does not require a reboot, making it that much easier to quickly add SAN-based storage to an active environment.
I really liked how the EtherDrive LUNs were handled in a Windows environment, where the LUNs take on the persona of local SCSI storage in the Windows Disk Manager, which allowed me to manage the SAN as if it were locally attached storage.
The EtherDrive SRX products also scored well on the resiliency side of things. I was able to hot-swap drives with no problems, and I also liked the ability to mirror LUNs across shelves, creating a fault-tolerant SAN environment that could survive the failure of a complete rack. Also, the metadata required to enable an instant swap is stored on the drives themselves, meaning that if a complete shelf fails, you could just remove all the drives and install them in another shelf. It may even pay to have a spare empty shelf in the rack to facilitate a quick recovery, an option that Coraid offers with its “Zero Hour” support offering.