MetaOS: The Fabric Control That Will Run Next-Gen Data Centers
A metaOS is a virtualization layer of software, wedged between applications and distributed computing resources, that uses those resources -- and those resources only -- to perform scheduling, loading, initiating and supervising applications, and error handling. Researcher Thomas Bittman explains why these will be vital to the construction of next-generation data centers.

LAS VEGAS -- The concept of a so-called "meta operating system" for data centers and smaller IT systems is getting a lot of attention this week at the Gartner Data Center Conference here at the MGM Grand Hotel.
Fundamentally, a metaOS is a virtualization layer of software, wedged between applications and distributed computing resources, that uses those resources -- and those resources only -- to perform scheduling, loading, initiating and supervising applications, and error handling.
Only a small percentage of data centers employ anything resembling a metaOS today, but it eventually will become an integral part of how enterprise computing works, researcher Thomas Bittman told conference attendees Dec. 2 in the event's kickoff keynote.
Bittman, a data center research vice president for Gartner, said that virtualization hypervisors such as VMware's ESX and Microsoft's Hyper-V are, in fact, serving as de facto "constructs," or blueprints, for the data center operating systems of the future.
And that future may be sooner than many people think -- perhaps only three to five years away. Most of the ingredients for metaOSes are available now.
"Right now, as we have had for many years, we still have islands of IT," Bittman told the audience of about 2,000 attendees. "They're designed for the client-server model we had for more than a generation. But as computing moves more and more toward a service model, delivered through the Internet cloud, this is requiring a new take on the makeup of the data center itself to enable more agility and performance."
The traditional operating system has been the center of gravity for server computing. Real-time infrastructure architecture, which often requires 10 times the performance now provided by standard operating systems, is changing that completely, Bittman said.

'Service Governors' Will Be Included in MetaOSes
A metaOS consists of a hypervisor like the two noted above and a "service governor" that manages the usage of distributed computing resources at a service level, Bittman said.
"Conceptually, a meta operating system provides the same hardware-enabling functionality that an operating system does, but across distributed servers," Bittman said. "While there may not be a single product that performs the role of the metaOS, the concept is very real and will be made more real as RTI [real-time infrastructure] technologies evolve."
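The service-governor idea Bittman describes -- scheduling, supervising and recovering services across distributed hosts rather than on a single machine -- can be sketched in a few lines of code. This is a toy illustration of the concept only; the class names, capacity model and placement policy below are assumptions for the sketch, not part of any actual hypervisor or Gartner reference design.

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A distributed compute resource managed by the metaOS layer."""
    name: str
    capacity: int          # abstract CPU units
    used: int = 0

    def free(self) -> int:
        return self.capacity - self.used

class ServiceGovernor:
    """Toy service governor: places services onto hosts at a service
    level, and reschedules them when a host fails (error handling)."""
    def __init__(self, hosts):
        self.hosts = hosts
        self.placements = {}   # service name -> (host, demand)

    def schedule(self, service: str, demand: int) -> str:
        # Simple policy: place the service on the host with the most
        # free capacity (a real governor would weigh many more factors).
        host = max(self.hosts, key=lambda h: h.free())
        if host.free() < demand:
            raise RuntimeError(f"no capacity for {service}")
        host.used += demand
        self.placements[service] = (host, demand)
        return host.name

    def handle_failure(self, failed_host: str):
        # Error handling: drop the failed host and reschedule its services.
        down = next(h for h in self.hosts if h.name == failed_host)
        self.hosts.remove(down)
        moved = [(s, d) for s, (h, d) in self.placements.items() if h is down]
        for service, demand in moved:
            del self.placements[service]
            self.schedule(service, demand)
        return [s for s, _ in moved]

hosts = [Host("a", 10), Host("b", 10)]
gov = ServiceGovernor(hosts)
gov.schedule("web", 4)            # lands on the emptiest host
gov.schedule("db", 4)
moved = gov.handle_failure("a")   # services on "a" move to surviving hosts
```

The point of the sketch is the shift in granularity: the governor never exposes individual machines to the application, only placements, mirroring Bittman's "across distributed servers" framing.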
Bittman described today's cloud infrastructure as fairly standardized ("one-size-fits-all-type services"), elastic, "but in chunks," and monolithic -- basically closed -- in nature.
"We're seeing Google, Amazon, Microsoft -- all huge players -- providing most of the cloud services now. It's pretty much 'one provider does it all,' at the moment. But this will all become a lot more granular, with a lot more specialty players in the future," Bittman said.
Bittman envisions fewer large cloud providers and many more smaller ones that will "join up together in their own little ecosystems" to deliver software services on demand. "The user, most of the time, won't have any idea exactly where the service is coming from -- he just knows that he wants the result," Bittman said.
Bittman also said that storage will continue to be a major cost center as long as data keeps multiplying across the board at a rate of 60 percent to 70 percent each year.
"This is where thin provisioning, data compression and deduplication all come in as super important. All systems will need to be using all three of these methods of keeping only the most important data available in the system," Bittman said.
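Deduplication, one of the three storage techniques Bittman cites, is easy to illustrate: identical blocks of data are stored only once, addressed by a hash of their content. The toy store below is a sketch of that principle (fixed 4-byte blocks, SHA-256 digests), not any vendor's implementation.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplicating store: each unique block is
    kept once, keyed by its SHA-256 digest."""
    def __init__(self, block_size: int = 4):
        self.block_size = block_size
        self.blocks = {}          # digest -> block bytes
        self.logical_bytes = 0    # bytes written by clients

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks; store only unseen blocks."""
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)   # duplicate blocks cost nothing
            digests.append(d)
        self.logical_bytes += len(data)
        return digests

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
store.write(b"AAAABBBBAAAA")   # the repeated "AAAA" block is stored once
# 12 logical bytes written, but only 8 physical bytes kept
```

The gap between logical and physical bytes is the savings Bittman is pointing at; production systems get far larger ratios by deduplicating across whole volumes and backup sets.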
Finally, enterprises will need to build what Bittman called dynamic sourcing teams -- groups of IT- and business-savvy employees within each enterprise who can run these new data center fabrics while also controlling operations costs for power, cooling and maintenance of the new hardware systems.
"This combination of skills will be very important going forward. Companies will need these people to manage the day-to-day sourcing decisions a company will make to enable its business to work efficiently," Bittman said.
The Gartner Data Center Conference continues through Dec. 5.