Software-defined networking became a commonly used IT buzzword about four years ago, following its origins at Stanford University around 2005. The concept of software-defined storage–which at this point is more of a marketing term than anything else–has been picking up momentum for about the last year or so.
We’re told that software-defined data centers are on the drawing boards.
Now IBM is taking the “software-defined N” concept up another level. Big Blue has a group it has labeled Software-Defined Environments, which had a past life as the Application, Integration and Middleware group inside the IBM Software group. We can probably agree that Software-Defined Environments is a much simpler title.
Let’s look a bit deeper into the meaning of all this. IBM, on its Website, defines SDE as the following:
“A Software Defined Environment (SDE) optimizes the entire computing infrastructure—compute, storage and network resources—so that it can adapt to the type of work required. In today’s environment, resources are assigned manually to workloads; that happens automatically in a SDE. In a SDE, workloads are dynamically assigned to IT resources based on application characteristics, best-available resources and service level policies to deliver continuous, dynamic optimization and reconfiguration to address infrastructure issues. Underlying all of this are policy-based compliance checks and updates, in a centrally managed environment.”
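The policy-driven assignment IBM describes can be illustrated with a minimal sketch: each workload declares what it needs, each resource advertises what it offers, and a scheduler picks the best available match against a service-level policy. All of the names and fields here are hypothetical, chosen for illustration; none come from IBM's products.

```python
# Minimal sketch of policy-based workload placement: filter resources
# that satisfy the workload's service-level policy, then pick the
# least-loaded one. Illustrative only -- not an IBM API.
def place(workload, resources):
    """Return the least-loaded resource that meets the workload's
    requirements, or None if nothing qualifies."""
    candidates = [r for r in resources
                  if r["free_vcpus"] >= workload["vcpus"]
                  and r["latency_ms"] <= workload["max_latency_ms"]]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["utilization"])

resources = [
    {"name": "rack-a", "free_vcpus": 8, "latency_ms": 5, "utilization": 0.7},
    {"name": "rack-b", "free_vcpus": 16, "latency_ms": 3, "utilization": 0.4},
]
workload = {"vcpus": 4, "max_latency_ms": 10}
print(place(workload, resources)["name"])  # rack-b: both qualify, it is less loaded
```

In a real SDE the "resources" would be live compute, storage and network inventory and the policies far richer, but the shape of the decision is the same.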
The concept may sound simple, but the actual workings are a bit more complicated.
“I’ve been in this space for about four or five years, but we didn’t call it SDE then,” Matt Hogstrom, CTO of Software-Defined Environments at IBM, told eWEEK. “We started with a product called the WebSphere CloudBurst appliance.”
SDE Started With CloudBurst
eWEEK’s Jeff Burt covered that launch in April 2009. He described CloudBurst as “a new hardware appliance and a new virtualization-focused version of its WebSphere Application Server that, together, give enterprises a smooth path for bringing their service-oriented architecture work into private cloud environments.”
So that may be the actual product that launched this potential new buzzphrase.
“Even though it was branded WebSphere, the underlying engine that drove the orchestration and images into the script execution, et cetera, was generalized and could accommodate a large number of products,” Hogstrom said.
“Through that evolution, we rebranded it (the CloudBurst appliance) as IBM Workload Deployer. It’s a unit that sat outside the virtualized infrastructure … that works with a variety of environments, such as VMware, PowerVM, CVM. We’ve always had this notion of a heterogeneous cloud, if you will, as part of our core technology.”
Product morphing is the nature of IT and a way of life in all development, and IBM is no exception. In 2012, the Workload Deployer evolved into the PureApplication System, a turnkey, integrated hardware-and-software platform for deploying and managing application workloads.
“The primary difference between our view (of software-defined anything), versus, say, VMware’s view–where VMware has been talking about the software-defined data center for six to eight months–is that the answer isn’t just in compute, network and storage,” Hogstrom said. “It’s also the ability to express and capture patterns of deployment, then deploy those with SLAs (service-level agreements) as the target of what the infrastructure is going to provide.”
Hogstrom’s job as CTO of SDE requires him to bring IBM’s compute, storage and networking groups into what he calls a “context,” so the groups can work together in an incubation-type environment rather than in silos. “We call it Project Armonia, the Greek word for harmony,” Hogstrom said.
‘A Clean, Severable Architecture’
“We needed to have an architecture that is clean, severable, that allows different vendors to play within a standardized set of interfaces, in addition to being able to innovate and deliver substantial value,” Hogstrom said.
The place where all these things come together is in OpenStack, the open-source, open-standards enterprise IT system. “This is what we’ve chosen as our operational paradigm for how we’re building out our software-defined environment,” he said.
OpenStack, founded by Rackspace Hosting and NASA, is a global collaboration of developers and cloud computing technologists producing an open-source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable and feature-rich; it consists of a series of interrelated projects delivering the various components of a cloud infrastructure solution. Its mission is to enable any organization to create and offer cloud computing services running on standard hardware.
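The “series of interrelated projects” is concrete: each core OpenStack project delivers one layer of the cloud stack. The mapping below is a simplified summary for orientation (note that the networking project was still named Quantum in the earliest releases, before becoming Neutron):

```python
# Illustrative summary: core OpenStack projects and the infrastructure
# component each one delivers. Simplified for orientation only.
OPENSTACK_PROJECTS = {
    "Nova":     "compute (virtual machine provisioning)",
    "Swift":    "object storage",
    "Cinder":   "block storage",
    "Neutron":  "networking",
    "Glance":   "virtual machine image catalog",
    "Keystone": "identity and authentication",
    "Horizon":  "web dashboard",
}

def component_for(project):
    """Look up which piece of the cloud a given project provides."""
    return OPENSTACK_PROJECTS.get(project, "unknown project")

print(component_for("Nova"))  # compute (virtual machine provisioning)
```

An SDE built on OpenStack orchestrates across these layers rather than treating compute, storage and network as separate silos.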
Key Use Cases IBM Perceives
IBM has identified a series of use cases it expects customers to adopt, and it puts together a framework on which a customer can start–and this can include using components from other vendors or from the customer’s legacy IT, Hogstrom said.
“Hadoop, for example, is an evolving and emerging workload–companies are at various levels of being able to take advantage of that,” Hogstrom said. “Hadoop is a specialized kind of workload; it’s got data nodes that scale in a particular way. So we have patterns that allow you to capture how that workload is represented, how it scales, et cetera, and then you want to be able to tie that to your infrastructure and deploy it.”
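A deployment pattern of the kind Hogstrom describes might capture the shape of a Hadoop cluster and its scaling rule in a declarative form the infrastructure can act on. The sketch below is hypothetical; none of the field names come from IBM's pattern formats. It reflects the standard HDFS split between a single NameNode and many DataNodes:

```python
# Hypothetical, simplified deployment pattern for a Hadoop cluster:
# capture what the workload looks like and how it scales, so the
# infrastructure can expand it into concrete instances.
hadoop_pattern = {
    "name": "hadoop-cluster",
    "nodes": {
        "namenode": {"count": 1, "vcpus": 4, "ram_gb": 16},
        "datanode": {"count": 3, "vcpus": 8, "ram_gb": 32,
                     # scale out when HDFS usage crosses a threshold
                     "scaling": {"metric": "hdfs_used_pct",
                                 "scale_out_above": 70,
                                 "max_count": 20}},
    },
}

def nodes_to_deploy(pattern):
    """Expand a pattern into the concrete node instances to provision."""
    return [f"{role}-{i}"
            for role, spec in pattern["nodes"].items()
            for i in range(spec["count"])]

print(nodes_to_deploy(hadoop_pattern))
# ['namenode-0', 'datanode-0', 'datanode-1', 'datanode-2']
```

Tying such a pattern to the infrastructure then becomes a matter of handing the expanded node list to whatever provisions compute, storage and network, rather than scripting each cluster by hand.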
Other use cases for which IBM’s SDE group has prepared its OpenStack-based architecture are three-tier Web workloads, databases and others.
Heroku: A Very Different Use Case
One of those others is Heroku, a popular cloud-based application delivery platform.
“Heroku is a very different model,” Hogstrom said. “It already has VMs in the infrastructure, and to an extent, that workload is going to try and manage itself, in terms of deployment, service placements, etc. So we’re incubating how we can take from that software expression down to instance creation to meet the needs of those workloads.”
To facilitate all of this, IBM has the turnkey PureApplication System available today for customers who want to put it on the floor, let it do its thing and focus on their businesses, Hogstrom said.
“To a certain extent, the IWD (IBM Workload Deployer) device is also available, and Tivoli SmartCloud Orchestrator has some of that capability today,” Hogstrom said. “Some of what I’ve talked about is aspirational from a build-it-yourself perspective, but it’s also available from a turnkey solution.”
Words of Advice for the Industry
Hogstrom, who was a customer of IBM’s before joining Big Blue five years ago, had some words of advice for the industry about how this should all work, once it gets momentum in the marketplace.
“The only way for this to be successful, in my view, is that the industry needs to collaborate, and in a way that everyone has a new blueprint, if you will, for how these should be rolled together,” Hogstrom said.
“We see OpenStack as that place where the ideas are being shared and the concepts are being incubated. IBM is very strong on participation in the community, because it’s only through community and continuous development that all of this works.”