When I hear answers to the question, “What is OpenStack?” I hear echoes of Morpheus trying to explain to Neo, “What is the Matrix?” Morpheus can only talk in metaphors. The Matrix is a dream world, a battery, control. All are true, but none quite captures what it really is.
OpenStack likewise is defined in metaphors. In the past year I’ve heard proponents describe OpenStack as “people,” an “interface,” a “set of APIs” and a “platform.” Last week at the OpenStack Summit in Austin, we also heard it described as a “strategy for taking advantage of diversity in IT,” an “integration engine,” and, according to CoreOS CEO Alex Polvi, simply “an application.”
All are true, of course, which is part of the allure of OpenStack and also the source of its ongoing identity crisis. OpenStack is six years and 13 versions into its life as an “open-source software platform for cloud computing,” which is how Wikipedia puts it and actually is a pretty good definition.
I’ll throw another metaphor into the pot: OpenStack is a use case. There are a lot of them, and that’s part of the identity problem: Which use case? According to the annual OpenStack User Survey, the number one use case is test and development (63 percent), which is not surprising because that’s where all cloud efforts begin. Others include infrastructure as a service (49 percent) and Web services and e-commerce (38 percent). The total percentages exceed 100 because survey respondents were allowed multiple choices.
The most intriguing use case, however, is network-functions virtualization (29 percent). Work has been quietly proceeding on NFV for the past few years, and no one really noticed until AT&T and Verizon recently announced large network architectures deployed on OpenStack.
AT&T claims to have the biggest OpenStack implementation, having built 74 AT&T Integrated Cloud (AIC) sites, with another 30 coming this year as the telecom service provider works toward its goal of virtualizing 75 percent of its network operations by 2020.
AT&T didn’t wait for analysts or the media or anyone else to declare that OpenStack was ready for production. They did it, as one official told me earlier this year, simply because “OpenStack is a great platform on which to deploy a network.”
AT&T came to that decision not because it wanted to become a cloud computing leader, but because telcos need to become more open and agile or risk not being able to keep up with the demand for network services. The result for AT&T was ECOMP (Enhanced Control, Orchestration, Management and Policy), which Sorabh Saxena, AT&T’s senior vice president for software development and engineering, described in detail to Summit attendees.
Boris Renski, co-founder and chief marketing officer of Mirantis, whose OpenStack distribution AT&T uses in the AIC, explained how OpenStack also enables telcos to keep their costs under control as they build out enough data centers to meet demand.
“Telcos spend so much on technology because they operate in silos—there’s the Cisco silo, the Juniper silo, the Dell silo—all over the place,” Renski said. “That hinders the pace of innovation. They need to be able to leverage assets and move fast, with a cloud-like infrastructure, a single fabric, a single set of APIs. OpenStack can solve that.”
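To make that “single set of APIs” idea concrete, here is a minimal sketch using the openstacksdk Python library. The cloud name “aic” is a hypothetical entry in a local clouds.yaml, not AT&T’s actual configuration; the point is that the same two calls provision a network and subnet no matter whose hardware sits underneath.

    # A minimal sketch of the "single set of APIs" idea, using openstacksdk.
    # The cloud name "aic" is a hypothetical clouds.yaml entry.
    import openstack

    # Credentials are read from the named clouds.yaml entry.
    conn = openstack.connect(cloud="aic")

    # The same calls work whether the Neutron plugin fronts Cisco,
    # Juniper, Dell or anything else.
    network = conn.network.create_network(name="demo-net")
    subnet = conn.network.create_subnet(
        network_id=network.id,
        name="demo-subnet",
        ip_version=4,
        cidr="10.0.0.0/24",
    )
    print(f"Created {network.name} with subnet {subnet.cidr}")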
One other interesting use case involves the aforementioned “people.” For businesses looking to modernize their IT processes, deploying OpenStack is an excellent proving ground.
OpenStack Foundation Chief Operating Officer Mark Collier talked at the Summit about how adopting OpenStack doesn’t automatically make an organization cloud savvy. One user’s initial go-round didn’t fare well because “they realized the importance of the technology but hadn’t changed the culture and process around it,” he said. With those lessons learned, subsequent efforts were more successful. “If your culture is not ready to take advantage of the technology, you will miss out on the opportunity.”
Getting back to that Wikipedia definition: it doesn’t say “public” or “private” cloud computing. It’s just cloud. The distinctions are not important anymore. Public, private and hybrid refer to where the cloud is, but cloud really is becoming about enabling a new way of computing. The Cloud Native Computing Foundation defines it as container-based, dynamically managed and microservices-oriented.
It’s useful to consider that client/server computing lasted almost 30 years as the predominant computing model, and the cloud isn’t half that age yet. But things are changing quickly. Polvi showed that you don’t even need OpenStack to deploy OpenStack. He demonstrated CoreOS’ new Stackanetes, a collaboration with Intel and Mirantis designed to deploy OpenStack as a set of container-based services managed by Kubernetes.
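For a hedged picture of what “OpenStack as a set of container-based services” can look like, here is a minimal sketch that runs Keystone, OpenStack’s identity service, as a Kubernetes Deployment via the official Python client. The container image and the “openstack” namespace are illustrative placeholders, not Stackanetes’ actual manifests.

    # A minimal sketch: run one OpenStack service (Keystone) as a
    # Kubernetes Deployment. Image name and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # use local kubeconfig credentials
    apps = client.AppsV1Api()

    keystone = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="keystone"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="keystone",
                        image="example/keystone:latest",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=5000)],
                    )
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="openstack", body=keystone)

Once the control-plane services are just pods, Kubernetes handles their scheduling, restarts and upgrades, which is the core of what Polvi demonstrated.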
Polvi’s demonstration was a bit of a magic trick and was more a proof of concept than a realistic use case. But the point is that cloud technology can do pretty much whatever you want it to do, and whatever you want to do with cloud technology, you can do with OpenStack. Now, what was it you wanted to do?
Scot Petersen is a technology analyst at Ziff Brothers Investments, a private investment firm. He has an extensive background in the technology field. Prior to joining Ziff Brothers, Scot was the editorial director, Business Applications & Architecture, at TechTarget. Before that, he was the director, Editorial Operations, at Ziff Davis Enterprise. While at Ziff Davis Media, he was a writer and editor at eWEEK. No investment advice is offered in his blog. All duties are disclaimed. Scot works for a private investment firm, which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.