With storage needs growing exponentially in virtually every industry today, many organizations have jumped on the storage and server consolidation bandwagon, installing NAS (network-attached storage) systems to manage their growing needs. But for some, this method has become increasingly inefficient as more servers are added throughout the organization, making the process more complex and unwieldy.
For those companies, SAN file systems—a relatively new storage paradigm that dynamically allocates storage to each server on the network based on actual need at any given time—might fit the bill. SAN file systems are now available from a host of companies including Apple Computer Inc., IBM, Sun Microsystems Inc., ADIC (Advanced Digital Information Corp.), DataPlow, ClariStor and others.
Apple's 64-bit Xsan cluster file system for Mac OS X allows organizations to consolidate storage resources and provide multiple computers with concurrent file-level read/write access to shared volumes over Fibre Channel, according to Alex Grossman, senior director of server and storage hardware at the Cupertino, Calif.-based company. The result, he says, is centralized storage management.
Other companies offer similar products. IBM's TotalStorage SAN File System provides a network-based, heterogeneous file system for data sharing and a centralized policy-based storage management capability, while ADIC's StorNext Management Suite for SANs combines a file system and storage manager to optimize the use of SAN storage and help ensure the recoverability of data, according to Paul Rutherford, vice president of technology at the Redmond, Wash., company.
While a traditional SAN alone is simply a way to use networking technologies—primarily Fibre Channel protocol over multimode fiber-optic cables—a SAN file system is an overlay on top of the SAN. “It allows servers to communicate and access a variety of storage array types, providing true heterogeneity,” Rutherford said.
In addition, it allows a mix of hosts with heterogeneous operating systems to access the range of storage arrays through one common, logical point of access, said William P. Hurley, senior analyst for applications and software infrastructure at Enterprise Strategy Group in Portland, Ore.
“By providing one logical point of access, SANs perform faster, are easier to manage and can be used to consolidate a broad range of applications and their data sets,” he said.
SAN file systems address a host of problems inherent in the NAS storage paradigm. Chief among them is the ability to reprovision storage often and quickly as storage needs fluctuate.
“If you are overprovisioning storage on a manual basis and spending a lot of people resources to monitor and manage the growth of storage needs across each one of the servers, you might be a good candidate,” said Jeff Barnett, director of storage software at IBM.
“Ask yourself if your overprovisioning tends to result in underutilized capacity because the [database administrator] asked for two terabytes when he really didn't need that much.”
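The overprovisioning math Barnett describes can be sketched with a quick utilization check. This is a minimal illustration, not anything from IBM's tooling; the server names, capacity figures and 30 percent threshold below are all hypothetical:

```python
# Flag servers whose provisioned storage is badly underutilized -- the
# pattern Barnett says makes an organization a consolidation candidate.
# All names and numbers are hypothetical, for illustration only.

servers = {
    "db-01":   {"provisioned_tb": 2.0, "used_tb": 0.4},  # the DBA's 2TB ask
    "mail-01": {"provisioned_tb": 1.0, "used_tb": 0.8},
    "web-01":  {"provisioned_tb": 0.5, "used_tb": 0.1},
}

UNDERUTILIZED = 0.30  # flag anything under 30% utilization (arbitrary cutoff)

def utilization(stats):
    return stats["used_tb"] / stats["provisioned_tb"]

# Capacity stranded on underutilized servers -- what a SAN file system
# could return to a shared pool instead of leaving idle per server.
stranded = {
    name: stats["provisioned_tb"] - stats["used_tb"]
    for name, stats in servers.items()
    if utilization(stats) < UNDERUTILIZED
}

for name, free_tb in sorted(stranded.items()):
    print(f"{name}: {free_tb:.1f}TB stranded")

total = sum(stranded.values())
print(f"total reclaimable in a shared pool: {total:.1f}TB")
```

In this toy scenario the 2TB database volume is only 20 percent used, so most of it sits stranded; pooling it on a SAN is what lets that capacity be reprovisioned to whichever server actually needs it.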
Security is another consideration for moving to a SAN file system.
“NAS is available to anyone who can plug into the network, and Ethernet isn't very secure,” Grossman said. “When you put the storage on a SAN, where it's block-level data and isn't attached to your Ethernet connector, security is much higher.”
The sheer volume of data an organization must manage today, which often includes complex financial data or large imaging files, also can prompt a switch to a SAN file system.
IT managers at the New York State Psychiatric Institute, a research organization at Columbia University, chose ADIC's StorNext SAN file system for that reason and more. The organization, which performs brain-imaging scans using MRI (magnetic resonance imaging), must deal with files as large as 30GB.
“We had to figure out a way to transmit these files throughout the institute without bringing down things like mail, Web services and other daily operations, so we couldn't put it on our regular network,” said Gerald Segal, chief information architect at the institute.
To solve the problem, Segal and his team settled on the SAN file system, which provided an effective way to transmit large amounts of imaging data supporting medical research. In addition, the team chose to use the SAN file system in an unorthodox way—as an image-distribution system.
“Most think of a SAN in terms of storing information and being a central repository, and we use it that way, but for us the added value is also using it to transmit image files,” Segal said. “It's much less costly than if we went to a standardized imaging-distribution system, to the tune of ten- or fifteen-fold in savings.”
Other companies are switching as well. BlueCross BlueShield of Tennessee, for example, has installed a SAN file system from IBM to manage its burgeoning storage needs, which have grown twenty-fold during the past nine years.
But for some companies, the price of switching to a SAN file system may be prohibitive. A fully functioning SAN file system usually costs from $50,000 to $500,000, according to Enterprise Strategy Group. The price differential exists because some SAN file systems require the use of agents, and some vendors include a volume management tool while others do not.
That price may simply be too rich for some organizations' blood. So, how do you decide whether your organization should take the plunge?
“It's a management issue,” said Brian Babineau, an analyst with Enterprise Strategy Group. “Do I have to add another body to manage another device? How much more will that next device cost me? If it costs you another person or much more in software licenses, that's the point at which it becomes worth considering.”