For those companies, SAN file systems—a relatively new storage paradigm that dynamically allocates storage to each server on the network based on actual need at any given time—might fit the bill. SAN file systems are now available from a host of companies including Apple Computer Inc., IBM, Sun Microsystems Inc., ADIC (Advanced Digital Information Corp.), DataPlow, ClariStor and others.
Apple's 64-bit Xsan cluster file system for Mac OS X allows organizations to consolidate storage resources and give multiple computers concurrent file-level read/write access to shared volumes over Fibre Channel, according to Alex Grossman, senior director of server and storage hardware at the Cupertino, Calif.-based company. The result, he says, is centralized storage management.
Other companies offer similar products. IBM's TotalStorage SAN File System provides a network-based, heterogeneous file system for data sharing along with centralized, policy-based storage management, while ADIC's StorNext Management Suite for SANs combines a file system and a storage manager to optimize the use of SAN storage and help ensure the recoverability of data, according to Paul Rutherford, vice president of technology at the Redmond, Wash., company.
While a traditional SAN alone is simply a way to use networking technologies—primarily Fibre Channel protocol over multimode fiber-optic cables—a SAN file system is an overlay on top of the SAN. "It allows servers to communicate and access a variety of storage array types, providing true heterogeneity," Rutherford said.
In addition, it allows a mix of hosts with heterogeneous operating systems to access the range of storage arrays through one common, logical point of access, said William P. Hurley, senior analyst for applications and software infrastructure at Enterprise Strategy Group in Portland, Ore.
"By providing one logical point of access, SANs perform faster, are easier to manage and can be used to consolidate a broad range of applications and their data sets," he said.
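The coordination Hurley describes — many heterogeneous hosts sharing one logical volume, with file-level read/write access arbitrated centrally — can be sketched as a toy model. This is not any vendor's API; the metadata-controller class, host names, and file paths below are all hypothetical, invented only to illustrate how concurrent readers can share a file while writes stay exclusive.

```python
# Toy model of a SAN file system's metadata coordination (hypothetical,
# not Xsan/StorNext/TotalStorage code): hosts ask a metadata controller
# for file-level access to one shared logical volume.

class MetadataController:
    def __init__(self):
        self.writers = {}   # path -> host holding the exclusive write lock
        self.readers = {}   # path -> set of hosts currently reading

    def open_read(self, host, path):
        # Any number of hosts may read a file concurrently,
        # but not while another host is writing it.
        if path in self.writers:
            return False
        self.readers.setdefault(path, set()).add(host)
        return True

    def open_write(self, host, path):
        # Writes are exclusive at file granularity.
        if path in self.writers or self.readers.get(path):
            return False
        self.writers[path] = host
        return True

    def close(self, host, path):
        # Release whichever lock this host held on the file.
        if self.writers.get(path) == host:
            del self.writers[path]
        self.readers.get(path, set()).discard(host)


mdc = MetadataController()
print(mdc.open_read("mac-edit-1", "/vol/project/clip.mov"))    # True
print(mdc.open_read("linux-render", "/vol/project/clip.mov"))  # True: shared read
print(mdc.open_write("win-ingest", "/vol/project/clip.mov"))   # False: readers active
```

The point of the sketch is the single arbitration point: because every host goes through the same metadata service, a mix of operating systems can safely share one volume instead of each owning a private slice of the array.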
SAN file systems address a host of problems inherent in the NAS storage paradigm, chief among them the difficulty of reprovisioning storage quickly and frequently as storage needs fluctuate.
"If you are overprovisioning storage on a manual basis and spending a lot of people resources to monitor and manage the growth of storage needs across each one of the servers, you might be a good candidate," said Jeff Barnett, director of storage software at IBM.
"Ask yourself if your overprovisioning tends to result in underutilized capacity because the [database administrator] asked for two terabytes when he really didn't need that much."
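Barnett's test can be made concrete with a bit of arithmetic. The figures below are invented for illustration: each server has a fixed per-server allocation, and the question is how much capacity a single shared SAN pool (sized to aggregate actual use plus a growth buffer) would reclaim.

```python
# Hypothetical overprovisioning math. All numbers are made up to
# illustrate the utilization question, not drawn from any real site.

allocated_gb = {"db1": 2000, "db2": 2000, "web1": 500, "web2": 500}  # per-server slices
used_gb      = {"db1":  600, "db2":  450, "web1": 120, "web2":  90}  # actual consumption

total_allocated = sum(allocated_gb.values())        # 5000 GB provisioned
total_used = sum(used_gb.values())                  # 1260 GB actually used
utilization = total_used / total_allocated          # roughly 25%

headroom = 1.3                                      # assumed 30% growth buffer
shared_pool_needed = total_used * headroom          # what one shared pool must cover
reclaimable = total_allocated - shared_pool_needed  # capacity freed by pooling

print(f"utilization: {utilization:.0%}, reclaimable: {reclaimable:.0f} GB")
```

If the utilization figure that falls out of a calculation like this is low, that is the profile Barnett describes: capacity trapped in per-server slices that a dynamically allocated shared pool could put back to work.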