Storage and application performance

 
 
By eweek  |  Posted 2007-07-23

Vizard: A lot of people don't talk about this, but it's my personal viewpoint that the performance issues around storage, particularly in a network environment, are what lead to the low utilization rates there. We've seen storage silos arise because people are dedicating arrays to specific applications, since they don't want to be concerned about any performance degradation. So does the caching of a network appliance take that issue off the table, so that we can start getting after these utilization rates in storage?

Orenstein: It does, to a large extent. Historically, people have had to actively manage all kinds of storage devices and even do things such as tiering, where there might be a low-capacity, high-performance storage tier complemented by a higher-capacity, lower-cost tier. All of that, of course, requires work on behalf of the IT administrator, and I don't know any IT administrators who voluntarily choose to slice and dice their storage infrastructure into umpteen different tiers. What we're seeing is a move toward a type of architecture that simplifies all that into what can be loosely termed an accelerated archive: on the one hand, you might keep a large-capacity pool of disk-based storage, maybe something using, for example, SATA drives that provide a very low-cost but high-capacity pool of storage, and then complement that with a caching appliance to provide the performance where it's needed. Since that whole environment is dynamic, and since the caching appliance can react dynamically as data sets or pieces of data become frequently accessed and change back and forth, it doesn't require active management on behalf of the IT administrator. Once the infrastructure is set up, they can sit back and let the caching appliance do what it does best, which is make the most frequently accessed data highly available and served with very low latency from high-speed memory.
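
To make that concrete, here is a minimal sketch, in Python, of the recency-based caching behavior Orenstein describes: hot blocks are served from memory, and cold blocks fall through to the slow, high-capacity pool. The class, the block-ID scheme, and the sizes are illustrative assumptions, not details of how Cache FX actually works.

```python
from collections import OrderedDict

class ReadCache:
    """Illustrative LRU read cache (a hypothetical sketch, not Gear6's design)."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.store = backing_store       # stands in for the slow SATA pool
        self.cache = OrderedDict()       # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]       # hit: served from memory
        data = self.store[block_id]           # miss: fetch from slow pool
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data
```

Because promotion and eviction happen on every read, frequently accessed data gravitates into memory on its own, which is the "no active management" property described above.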

Vizard: How does the appliance, which sits on the network, discover the various servers it's supposed to interact with out there? And is it heterogeneous by nature, or do I have to dedicate it to specific operating systems? How does that all work?

Orenstein: It is heterogeneous by nature and doesn't need to be specifically dedicated, but let's walk through the basic deployment model. Right now, Gear6 is focusing on the network-attached storage, or NAS, market, and specifically the NFS protocol. Those environments typically include multiple clients or application servers and multiple NAS devices, usually all networked together with gigabit Ethernet. The deployment model for the Gear6 caching appliance, which we call Cache FX, is to plug that appliance into the network via gigabit Ethernet and then simply identify the NAS storage devices that you'd like to accelerate, so those appear in the management interface. From that point, customers can direct the application servers that are IO-constrained to view the data through the caching appliance. So it's a relatively simple deployment model. One of the benefits of that model is that there's absolutely no change whatsoever to the existing storage devices, and absolutely no change to the applications either. It's a very low-risk, high-reward opportunity for anybody who's facing an IO constraint, or what people sometimes call an application brownout scenario, where they simply can't get enough performance from their traditional storage devices.
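
The deployment model can be sketched in the same illustrative spirit: the filer and the application stay exactly as they are, and only the read path the IO-constrained clients use is swapped. The Python below is a hypothetical analogy with invented paths and names, not Gear6's actual mechanism.

```python
# Hypothetical analogy for the deployment model: the filer export and the
# application are untouched; only the read path changes.
filer = {"/vol0/data/block-0": b"payload"}   # stands in for a NAS export

def make_cached_reader(backing_read):
    """Wrap any read function with a transparent read-through cache."""
    cache = {}
    def read(path):
        if path not in cache:        # miss: go to the filer as before
            cache[path] = backing_read(path)
        return cache[path]           # hit: served from the cache
    return read

def application(read):
    # The application depends only on the read interface, so it cannot
    # tell whether it talks to the filer directly or through the cache.
    return read("/vol0/data/block-0")

application(filer.__getitem__)                      # before: direct to filer
application(make_cached_reader(filer.__getitem__))  # after: same app, cached
```

The point of the sketch is the low-risk property Orenstein highlights: nothing on either end is modified, so the cache can be interposed or removed without touching storage or applications.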

Vizard: There's a lot of brownout effect out there. What usually happens is a lot of finger-pointing, because the infrastructure people point their finger at the developers for writing, you know, heavy, bloated applications. And the developers point their fingers back at the network infrastructure guys for having, you know, shoddy networks that are full of traffic that isn't germane to the business at hand. How does this give everybody some kind of middle ground they can come to without going to war every day over who's wrong?

Orenstein: We like to think of Gear6 as helping customers end the blame game, because that can be so troubling, especially in multi-vendor environments. One of the things that Gear6 provides to all of our customers and prospective customers, freely available on the Gear6 Web site, is an IO-analysis tool that we call NEMo. It's a simple Perl script that can be run on an application server to essentially take a snapshot of the IO traffic between the application and the storage device. We can take that trace, analyze it against a real, live product in our lab to determine what we'd call an acceleration factor, and present that to customers to say, "This is the range of performance improvement that we see, based on your real, live data." So this is not some guesstimate coming out of a discussion; we're actually looking into the IO stream to see specifically what behavior is taking place and to say how we can help in that particular case. Frequently, that does remove some of the finger-pointing and gives customers a really clear picture by isolating the specific IO bottleneck, which Gear6 can then solve.
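
NEMo itself is a Perl script whose internals aren't described here, but the general kind of estimate an IO trace enables can be sketched: count how much of the read stream consists of re-reads a cache could absorb, then compare latencies. Everything below, from the trace format to the latency figures, is an assumed placeholder for illustration, not NEMo's actual method.

```python
from collections import Counter

def acceleration_estimate(trace, cache_latency_us=100, disk_latency_us=5000):
    """trace: list of (op, offset) tuples captured on the app server."""
    reads = [off for op, off in trace if op == "read"]
    counts = Counter(reads)
    hits = sum(c - 1 for c in counts.values())  # re-reads: cache absorbs these
    misses = len(counts)                        # first touches still hit disk
    before = len(reads) * disk_latency_us       # every read goes to disk
    after = hits * cache_latency_us + misses * disk_latency_us
    return before / after                       # >1 means the cache helps

trace = [("read", 0), ("read", 8192), ("read", 0), ("read", 0)]
print(f"estimated acceleration factor: {acceleration_estimate(trace):.1f}x")
```

A workload dominated by repeat reads scores a high factor, while a pure streaming workload scores close to 1, which matches the idea of grounding the conversation in measured IO behavior rather than guesswork.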

Vizard: Do you perceive that over time, as the storage arrays get more powerful, and a lot of them are carrying their own RISC processors now, more of the application load may move off the server for certain types of applications and over to the storage array, so that it also acts as a co-processor with some basic compute-engine capability in it?

 
