The media, as well as the market at large, have latched onto the term “cloud computing” with a vengeance. Admittedly, the basic premise of “data center on demand” is pretty sexy. But be warned: all may not be as it seems. The vision and concept of cloud computing and the on-demand data center have been around in one shape or another for decades. The vision has always been sought after but remained just out of reach. Virtualization has made this real, bringing the vision almost into our grasp. The key word here is “almost.”
Those looking to include cloud computing in their architecture need to address the question of how it can most effectively complement their existing architectures. One of the biggest challenges for IT planners and strategists is that the term “cloud” is used today to describe everything from the traditional software-as-a-service (SaaS) delivery model to infrastructure outsourcing and infrastructure renting. It’s the buzzword du jour with which everyone seems to be trying to associate.
For the purposes of this article, I will ignore the renamed traditional service delivery models and narrow the definition of a cloud to its most basic: an amorphous infrastructure owned and operated by someone else that accepts and runs workloads created by its customers.
Thinking about a cloud in this way, the first and most obvious question becomes: “Can all my applications actually run in such an environment?” If the answer to that question is no, then you must ask, “What subset of my data and applications could safely run there?”
Clearly, there are some applications that you would probably never want out of your control, including those you need in order to pass an audit (for example, to comply with the Sarbanes-Oxley Act, the Payment Card Industry Data Security Standard or the Gramm-Leach-Bliley Act). A cloud translates into the physical at some point in space but, today, you cannot audit its security, file systems and access controls with absolute certainty.
Today’s cloud tools barely manage provisioning and some level of mobility management. Security and audit capabilities are still a long way off, as is the ability to move the same virtual machine in and out of cloud infrastructures while tracking and tracing its movement and access. Let’s face it: most auditing groups still haven’t even come to grips with the impact of virtualization on basic enterprise data center auditing, let alone cloud governance.
Virtualization: A new data center architecture
Virtualization is a new data center architecture, and it brings with it a range of challenges for traditional data center management tools, as well as for traditional control and audit practices. Some of the more obvious issues include:
1. Server identity
When you can make 20 exact copies of an existing server and distribute them around the environment with a click of a mouse, server identity becomes critical. The traditional identity based on “physicality” is no longer good enough.
2. Virtual machine mobility
Physical servers do not move much. VMs, on the other hand, are designed to be mobile. Tracking and tracing them throughout their life cycles is critical to maintaining and proving control and compliance.
3. Data separation
Host servers share resources with the virtual servers running on them; portions of the host’s hardware (such as the processor, memory and networking) are allocated to each virtual server. To date, there have been no breaches of isolation between virtual servers, but this record is unlikely to last.
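The identity and mobility issues above can be made concrete: because a VM’s identity can no longer be tied to hardware, one common approach is to embed a persistent identifier in the VM itself and log every lifecycle event against it. The sketch below is illustrative only — the class, event names and fields are assumptions for this article, not any particular product’s API.

```python
import uuid
from datetime import datetime, timezone

class TrackedVM:
    """A VM whose identity is independent of the hardware it runs on."""

    def __init__(self, name):
        self.name = name
        # The identifier travels with the VM, not with a physical host.
        self.vm_id = str(uuid.uuid4())
        self.audit_log = []
        self._record("created")

    def _record(self, event, **details):
        # Every lifecycle event is stamped and tied to the VM's identity.
        self.audit_log.append({
            "vm_id": self.vm_id,
            "event": event,
            "time": datetime.now(timezone.utc).isoformat(),
            **details,
        })

    def clone(self):
        # A clone is a bit-for-bit copy, so it must be issued a new
        # identity or the two machines become indistinguishable.
        copy = TrackedVM(self.name + "-clone")
        copy._record("cloned_from", source=self.vm_id)
        return copy

    def migrate(self, destination):
        # Mobility: record every move so location can be audited later.
        self._record("migrated", destination=destination)

vm = TrackedVM("web01")
vm.migrate("host-a")
twin = vm.clone()
assert vm.vm_id != twin.vm_id  # copies remain distinguishable
```

The point of the design is that the audit log, not the hardware, is the source of truth about where a VM has been and where its copies came from.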
Cloud governance magnifies these challenges. Not only are these three issues now managed and controlled by someone outside the IT department (which doesn’t let an organization off the hook when it comes to its overall governance commitments), but there are now additional challenges specific to the cloud, including:
1. Life cycle management
Once a workload has been transferred to a cloud, how is its life cycle managed? The IT organization gave birth to it, but how can you audit its location throughout its life? Did it stay in the cloud to which it was delivered? Were any copies made? Were all instances returned to the IT organization at its death, and were all backups deleted?
2. Access control
Who had access to the application and its data while it was in the cloud?
3. Data integrity
Was the application or its data altered or tampered with while it was in the cloud?
4. Cloud-created VMs
We think of clouds as an infrastructure in which to temporarily place IT workloads. But they also generate their own workloads and transfer these into the data center. We call these “virtual appliances” and they are being downloaded into data centers on a daily basis. Identity, integrity and configuration all need to be managed and controlled here.
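One practical answer to the integrity questions above — was the workload altered while in the cloud, and is a downloaded appliance what it claims to be — is to fingerprint an image before it leaves your control and verify that fingerprint on anything that comes back. A minimal sketch using a SHA-256 digest (the image bytes here are a stand-in, not a real disk image):

```python
import hashlib

def fingerprint(image_bytes):
    """Return a SHA-256 digest of a VM image's raw contents."""
    return hashlib.sha256(image_bytes).hexdigest()

# Before a workload leaves the data center, record its fingerprint.
original = b"example VM disk image contents"  # stand-in for real image bytes
sent_digest = fingerprint(original)

# When an instance (or a downloaded virtual appliance) arrives,
# recompute the digest and compare before trusting the image.
returned = original  # in practice: bytes read from the returned image
if fingerprint(returned) != sent_digest:
    raise RuntimeError("image was altered while outside our control")
```

A digest proves the bits are unchanged; it says nothing about who accessed them while they were away, which is why the access-control question remains separate.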
Clouds not ideal for critical or sensitive information
Ultimately, the cloud as it exists today is just not ready for any application of real importance, which suggests that it’s a place for applications of little importance. In fact, if you read the Amazon user agreement, it describes just that: a service that should not be used for anything critical or sensitive.
One of the more established cloud models is Amazon’s, which is based on selling unused capacity on Amazon’s own infrastructure. The business may be revenue-generating, but it is not a separate, dedicated operation. To its credit, Amazon recognizes the security, management and compliance issues, as well as the fact that its own resource needs come first. Neither security nor uptime is guaranteed in the service agreement. Further, Amazon can suspend the service whenever it wants, without liability to its customers. For non-critical, low-usage applications, this might be fair, but it is not the right environment in which to run anything more important.
A practical approach to clouds
Cloud computing is a vision that has the potential to increase the overall flexibility and responsiveness of your IT organization. But despite the current hype, the technology is just not where it needs to be yet. Presently, there are three pragmatic things you can do to prepare for clouds on the horizon:
1. Understand what is really needed to play in the cloud
The use of virtualization in the data center has given rise to the term “internal cloud.” With the same basic technology, corporate data centers can keep everything under their own control. Discuss with your auditors how virtualization is affecting their requirements, and from there add new requirements and new policies to your internal audit checklists.
2. Gain experience with “internal clouds”
Make sure you can efficiently implement and enforce those policies (as well as meet the new audit requirements) with the right automation and control systems. Once you know what you need internally, it becomes simpler to demand the same in an external cloud.
3. Test external clouds
Testing with low-priority workloads will give you a better understanding of what is needed in terms of life cycle management, and it will also help establish what role external cloud infrastructures could end up playing in your overall business architecture.
Given these pragmatic considerations, you should begin moving your IT organization from “some virtual server use” toward building out internal clouds, staying aware of, and mitigating, the unknowns within the technology. To conclude, it is clear that if you can’t manage, control and audit your own internal virtual environment, there is no chance you can do the same with an external one.
Jay Litkey is President and CEO of Embotics. A serial entrepreneur with extensive experience launching, financing and growing software companies, Jay has been a pioneer in emerging, high-growth markets that include virtualization, enterprise systems management automation, and Internet video content distribution. He can be reached at [email protected].