2. The handling of big data needs a new approach because it breaks the old model.
Existing approaches break down at petabyte scale. Data of all kinds is exploding, but machine-generated data [log files, sensor data, etc.] is already massive and continues to grow. “And it’s more than just being about storing, it’s about analyzing it,” Krishnan said. “Keeping that much of that kind of data in the cloud makes tons of sense, because it’s just too unwieldy to have to handle all of that yourself.”
3. You can eliminate network delays.
If you’ve got a petabyte of data in the cloud, you need to think about putting the application in the cloud as well. Keeping the application close to the data minimizes how much data has to move. Google has already proven this with its home-grown architecture. “How does Google spit back results, just like that?” Krishnan said. “The application is running on the cloud, right next to the data. No shoving big data around.”
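As a back-of-the-envelope illustration of why shoving a petabyte around doesn’t work, consider the raw transfer time. The 10 Gb/s link speed below is an assumption for the sketch, not a figure from the article:

```python
# Back-of-the-envelope: how long does it take to move a petabyte?
# The link speed is an assumed dedicated 10-gigabit connection.

PETABYTE_BYTES = 10**15
LINK_GBPS = 10

def transfer_days(data_bytes: int, gbps: float) -> float:
    """Days needed to push data_bytes over a gbps gigabit-per-second link."""
    bits = data_bytes * 8
    seconds = bits / (gbps * 10**9)
    return seconds / 86_400  # seconds per day

print(f"{transfer_days(PETABYTE_BYTES, LINK_GBPS):.1f} days")  # roughly 9 days
```

Nine-plus days of sustained line-rate transfer, before any protocol overhead, is why the application goes to the data rather than the other way around.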
4. Private or public cloud: run your app on either; it doesn't matter.
Cloud storage can hold unlimited amounts of data. “And it doesn’t matter whether you’re talking about private or public cloud structures,” Krishnan said. “Private and public clouds are just interesting sizzle for journalists and analysts to argue about. There’s no business relevance whatsoever; some data will go into a public cloud [Amazon, Shutterfly, etc.] and some will go into a private cloud. It’s all up to what the customer needs.”
5. You can run the virtual compute and virtual storage layers on the same platform.
6. You can eliminate a whole tier of storage and management.
By moving applications closer to the data in the cloud, you can essentially eliminate a whole tier of storage—thus a whole rack or several racks of arrays that need to be powered and cooled 24/7. “If you collapse your virtual compute with virtual storage, you are using fewer servers and arrays all around and will see great power savings and better manageability,” Krishnan said.
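To put a rough number on the power savings from retiring a dedicated storage tier, here is a sketch using assumed figures (400 W per array, $0.12 per kWh; neither number comes from the article, and cooling overhead would add more on top):

```python
# Rough annual electricity cost of a storage tier that could be eliminated.
# Wattage and utility rate are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(units: int, watts_each: float,
                      dollars_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost of running `units` devices around the clock."""
    kwh = units * watts_each * HOURS_PER_YEAR / 1000
    return kwh * dollars_per_kwh

# Two racks of 20 arrays each, powered 24/7:
print(f"${annual_power_cost(40, 400):,.0f} per year")
```

Even under these modest assumptions the bill lands in the five figures per year for power alone, before cooling, floor space, and management time.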
7. You can run ancillary data services directly on the data in the cloud.
As primary applications move to the cloud, ancillary and support applications should follow. Onboard antivirus, video transcoding and other such services all work much better when close to the data source. If those apps run directly on the cloud storage, they work better still, because then no data needs to be moved at all, even within the cloud itself. For example, you don't want to pull all the data out, even in chunks, send it to an antivirus server, and then send it back to storage again. That's a real waste of time and money.
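A toy model of the round trip described above makes the waste concrete. This is purely illustrative, not any vendor's scanning API:

```python
# Bytes on the wire: scanning data in place vs. shipping it to an
# external antivirus server and writing it back afterwards.

def bytes_moved(data_bytes: int, scan_in_place: bool) -> int:
    """Remote scanning moves every byte out to the scanner and back again."""
    return 0 if scan_in_place else 2 * data_bytes

TB = 10**12
print(bytes_moved(100 * TB, scan_in_place=False))  # 2x the data set crosses the network
print(bytes_moved(100 * TB, scan_in_place=True))   # nothing moves; the scan runs on the storage
```

Every pass of a remote scanner doubles the data set's footprint on the network; running the service next to the data reduces that cost to zero.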
8. Data analysis is more efficient.
9. You get substantial Green IT benefits.
10. Parallel processing beats centralized compute every time.
Distributed computing wins, whether you're talking about supercomputer processing or storage nodes.
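A minimal model shows why the parallel approach wins on a full-data scan. It assumes each node holds an even shard of the data and scans at 1 TB per hour, both figures chosen for illustration:

```python
# Why distributed beats centralized for scanning a large data set:
# N nodes each scan their local shard at the same time.

def scan_hours(data_tb: float, nodes: int,
               tb_per_hour_per_node: float = 1.0) -> float:
    """Wall-clock hours to scan data_tb spread evenly across nodes."""
    return data_tb / (nodes * tb_per_hour_per_node)

data = 1000  # one petabyte, expressed in terabytes
print(scan_hours(data, nodes=1))    # 1000.0 hours on a single machine
print(scan_hours(data, nodes=100))  # 10.0 hours across a hundred nodes
```

The speedup is linear in the node count under this ideal model; real systems lose some of that to coordination and skew, but the shape of the win is the same.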