App Performance: Orenstein Puts It in Gear6

By eWEEK  |  Posted 2007-07-23

Gary Orenstein, vice president of marketing for Gear6, joined Ziff Davis Enterprise Editorial Director Mike Vizard for a recent IT Link podcast. A transcript of that interview follows.

Mike Vizard: Today we are going to talk about storage and application performance. Gear6 has a special device that focuses on caching and storage for certain types of applications. And I'm going to let Gary describe how it works.

Gary Orenstein: Basically, Gear6 is focused on what we call storage acceleration, particularly application acceleration. The product Gear6 makes is what we call a scalable caching appliance, which is essentially an appliance that clusters together high-speed, high-capacity memory to serve data 10 to 50 times faster than if that data were being transferred from traditional mechanical disk. The idea is to place this caching appliance in the network to complement all the existing storage that is there, offloading some of the more data-intensive processes from the disk, serving that data from memory and ultimately speeding up applications.
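
As a rough illustration of the read-through pattern Orenstein describes, here is a minimal Python sketch. The names (BackingStore, ReadThroughCache) are hypothetical stand-ins for the disk tier and the in-network caching appliance, not Gear6 APIs:

    class BackingStore:
        """Stands in for the slower, persistent disk tier."""
        def __init__(self, data):
            self._data = data

        def read(self, key):
            # In a real deployment this is a disk or network round trip,
            # typically orders of magnitude slower than a memory hit.
            return self._data[key]

    class ReadThroughCache:
        """Sits in front of the store, the way the appliance sits in the network."""
        def __init__(self, store):
            self._store = store
            self._cache = {}

        def read(self, key):
            if key in self._cache:           # hit: served from memory
                return self._cache[key]
            value = self._store.read(key)    # miss: fall through to disk...
            self._cache[key] = value         # ...and keep a copy for next time
            return value

    store = BackingStore({"row:42": "hello"})
    cache = ReadThroughCache(store)
    cache.read("row:42")   # first read misses and goes to the store
    cache.read("row:42")   # repeat read is served from memory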

Vizard: How do I figure out which parts of the application or the database to put in memory versus leave on disk, and where do I draw those lines?

Orenstein: One of the elegant aspects of caching is that it is, by its very nature, a largely management-less implementation. Once the appliance is placed in the network, it delivers the data that the application requests over time, and the caching appliance simply keeps the most frequently accessed data in cache. From an administrator's perspective, all the administrator needs to do is make sure they have a right-sized caching appliance in their environment, so that the bulk of the active data set remains in cache. If the workload changes over time, the cache will keep whatever data is now most frequently requested. And if a piece of data is no longer actively used, it will fall out of cache, all the while remaining in the persistent storage layer.
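
The fall-out-of-cache behavior Orenstein describes is what a bounded eviction policy does. The interview does not specify Gear6's actual policy, so this sketch assumes a simple least-recently-used (LRU) scheme, with a hypothetical LRUCache class:

    from collections import OrderedDict

    class LRUCache:
        """Bounded cache: when full, the least recently used entry is evicted.
        Eviction only drops the in-memory copy; the authoritative data stays
        in the persistent storage layer, as described above."""
        def __init__(self, capacity):
            self.capacity = capacity
            self._entries = OrderedDict()

        def get(self, key):
            if key not in self._entries:
                return None                        # miss: caller fetches from disk
            self._entries.move_to_end(key)         # mark as most recently used
            return self._entries[key]

        def put(self, key, value):
            if key in self._entries:
                self._entries.move_to_end(key)
            self._entries[key] = value
            if len(self._entries) > self.capacity:
                self._entries.popitem(last=False)  # coldest entry falls out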

Vizard: So I don't have to do any fancy artwork around figuring out what part of the database is the most frequently called? The appliance will kind of naturally come to that?

Orenstein: Exactly, and that has not always been the case because, historically, cache has been a very scarce resource. It's been very limited in terms of the amount of cache that could be kept on individual application or database servers and the amount that could be kept on storage systems. But now, our caching appliance allows customers to build cache pools that can range into the terabytes. That makes it possible for customers not to worry about fine-tuning everything, and simply to have a cache that's large enough to keep the majority of the active data set.
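
To make the sizing point concrete, here is a back-of-the-envelope calculation; all figures are hypothetical, and the rule of thumb is the one from the interview, to size the cache pool to hold the bulk of the active data set:

    total_data_tb   = 50.0   # total capacity on disk (hypothetical)
    active_fraction = 0.04   # share of the data that is hot at any time (hypothetical)

    working_set_tb = total_data_tb * active_fraction
    print(f"Active working set: {working_set_tb:.1f} TB")
    # -> Active working set: 2.0 TB
    # A cache pool of 2 TB or more would hold the bulk of the hot data,
    # with no per-object tuning required of the administrator.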

Vizard: Now, most people would have thought, at least historically, that cache is too expensive to use in this kind of context. So they would sit there and say, you know, we're just going to use storage devices and hope for the best and hope for some good I/O. But now I'm starting to see cache show up on the drives themselves from certain vendors. So what is happening with the cost structure around cache that's making it more affordable or reasonable to try this?

Orenstein: A couple of good points there. One is that cache has been around in the data center for a long time and will remain at all levels, from the server, to the storage device, to the disk drives themselves. In terms of measuring the cost/benefit of this type of solution, you really have to focus on I/O operations per second, as opposed to just the traditional metrics of capacity. And when you look at the equation from an IOPS perspective, caching is actually relatively cheap compared to the cost of having to deploy hundreds or even thousands of disks to get that same level of performance. I think what we're seeing today is that, historically, disks have been cheap, and customers have chosen to deploy disks because they were cheap, often deploying them just to get performance. But now, with the costs of power, cooling and space rising, and people looking to consolidate some of their environments, folks are realizing that deploying disks alone isn't necessarily the most cost-effective solution. So what we're seeing is customers consolidating some of their storage into a higher-utilized pool, getting the utilization rate up, keeping the storage capacity they need, and then complementing that with a caching appliance.

The other point is that since you can put the caching appliance in the network and have it provide a boost to any application server or any storage data set, you're able to amortize, or spread, the cost over a much greater range of the data center than you could by placing a very limited amount of cache in one location only.
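
A worked example of that cost-per-IOPS comparison, with entirely hypothetical prices and performance figures:

    disk_iops_each = 150         # random IOPS per mechanical disk (assumed)
    disk_cost_each = 300.0       # dollars per disk, including enclosure share (assumed)
    cache_iops     = 100_000     # IOPS delivered by one caching appliance (assumed)
    cache_cost     = 80_000.0    # appliance price in dollars (assumed)

    target_iops  = cache_iops
    disks_needed = -(-target_iops // disk_iops_each)   # ceiling division
    disk_total   = disks_needed * disk_cost_each

    print(f"Disks needed: {disks_needed}, costing ${disk_total:,.0f}")
    print(f"Cost per IOPS: disk ${disk_total / target_iops:.2f}, "
          f"cache ${cache_cost / target_iops:.2f}")
    # With these assumptions: 667 disks at $200,100 versus one $80,000
    # appliance, before counting the power, cooling and floor space
    # that Orenstein mentions.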

 
