Pay-Per-Use Computing: Will It Fly?

By David Chernicoff  |  Posted 2004-10-01
Opinion: David Chernicoff sees value in the PPU model Sun is pushing, but wonders whether, among other things, customers will buy into the concept.

Back, somewhere before the dawn of time, when I first got involved with computers, computing resources were incredibly expensive. On the big iron machines I first encountered, the cost of use was so high that users were allotted CPU seconds and billed for them, and the cost per second could be quite significant. Of course, this was also the era of punch cards and batch jobs, when a couple of days spent punching cards ended with dropping the deck off at the computing center so the operators could run the program at some point in the future. The cutting edge of technology available to me was a teletype terminal with a paper tape reader and a 110-baud modem, so I could do my work remotely. But whatever I chose to do was limited by the CPU time allotted to my account.

While not completely obsolete, time-sharing on big iron these days is pretty much limited to academics arguing for cycles on supercomputers to run their pet projects. The business world has moved on to dedicated computing resources that don't require explicit sharing between applications and departments; the cost of hardware and software has dropped so low (compared with the old mainframe days) that this makes business and economic sense.
But now it appears we have come full circle with Sun's announcement on Sept. 21 of N1 Grid Computing Pay-Per-Use Cycles. Under this model, users will be able to buy computing cycles on other people's computers on an as-needed basis. The PPU concept builds on the N1 Grid Container technology that was announced last March.
Grid Containers are a software partitioning technology in Solaris 10 designed to let server resources be used more efficiently by creating as many virtual servers within the physical hardware as it can support, up to a maximum of 4,000 containers. Each container looks like its own Solaris server, with a dedicated IP address, host name, memory space, file area and root password. With server consolidation a big play at the moment, Grid Containers are Sun's answer to the consolidation question in the Solaris world. Given that Sun sells both hardware and software, it makes sense for the company to provide an easy-to-implement partitioning solution for Solaris in order to push sales of the big Sun boxes as consolidation servers, much as many Windows server vendors now offer VMware with their big SMP boxes to consolidate multiple Windows servers onto a single piece of hardware.
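To make the partitioning idea concrete, here is a minimal, purely illustrative Python sketch of the model: one physical host carved into lightweight virtual servers, each with its own host name, IP address and memory allotment, capped at the 4,000-container ceiling Sun cites. The class names, resource figures and methods are invented for illustration only; the real mechanism is Solaris 10's own container tooling, not application code.

```python
from dataclasses import dataclass, field

MAX_CONTAINERS = 4_000  # Solaris 10 ceiling cited above


@dataclass
class Container:
    """One virtual Solaris server carved out of a physical host (illustrative only)."""
    hostname: str
    ip_address: str
    memory_mb: int


@dataclass
class PhysicalHost:
    """A consolidation server partitioned into containers (illustrative only)."""
    name: str
    total_memory_mb: int
    containers: list = field(default_factory=list)

    def add_container(self, c: Container) -> None:
        # Enforce the container ceiling and the physical memory budget.
        if len(self.containers) >= MAX_CONTAINERS:
            raise RuntimeError("container limit reached")
        if sum(x.memory_mb for x in self.containers) + c.memory_mb > self.total_memory_mb:
            raise RuntimeError("not enough physical memory left on this host")
        self.containers.append(c)


# Carve two "virtual servers" out of one big consolidation box (hypothetical numbers).
host = PhysicalHost(name="big-sun-box", total_memory_mb=512_000)
host.add_container(Container("web01", "10.0.0.11", 8_192))
host.add_container(Container("db01", "10.0.0.12", 65_536))
print(f"{host.name} now hosts {len(host.containers)} containers")
```

The point of the sketch is only that many isolated "servers" share one pool of physical resources; in practice the carving is done with Solaris administrative tools rather than code like this.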
While the original N1 Grid vision involved the virtualization of every service within your data center, the new PPU model expands that vision with the promise of making compute power available on an as-needed basis across the Internet. As Jonathan Schwartz pointed out in his blog, this technology is not for latency-sensitive workloads, since there is no way to overcome the inherent latency of any Internet connection. Instead, he expects it to be used for discrete workloads that can be handed off to a computational cluster, with the results delivered back to the customer when the job is done.
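The economics of that hand-off are easiest to see with a back-of-the-envelope calculation. The sketch below assumes a flat per-CPU-hour billing model, and the rate shown is an assumption for illustration, not Sun's published price.

```python
def pay_per_use_cost(cpus: int, hours: float, rate_per_cpu_hour: float) -> float:
    """Charge for a discrete batch job: CPUs reserved x wall-clock hours x rate."""
    return cpus * hours * rate_per_cpu_hour


# A simulation or rendering job farmed out to someone else's cluster (hypothetical figures).
JOB_CPUS = 200
JOB_HOURS = 6.0
ASSUMED_RATE = 1.00  # dollars per CPU-hour -- assumed for illustration, not Sun's pricing

print(f"Estimated charge: ${pay_per_use_cost(JOB_CPUS, JOB_HOURS, ASSUMED_RATE):,.2f}")
```

Whether a number like that beats buying, housing and administering the hardware yourself is precisely the question customers will have to answer.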