How to Implement I/O Management in Your Data Center

By John Waszak  |  Posted 2010-09-21

Managing application performance and availability is difficult. Today's data centers are growing in both size and complexity, particularly because of increased layers of virtualization.

As a result, there is a tremendous amount of infrastructure between the application users and the storage devices that hold the data. Every piece of the infrastructure must operate efficiently to maintain high performance and availability. Conversely, any bottlenecks within the infrastructure can quickly degrade the user experience and business effectiveness.

While management of this infrastructure has typically focused on the physical components of the data center (that is, workstations, servers, networks and storage devices), monitoring and managing I/O as it travels through the system of servers, networks and storage arrays is just as critical to ensuring overall application performance.

Additionally, controlling infrastructure cost is just as challenging as managing performance. In some cases, costs and performance are intertwined: infrastructure spending is increased in an attempt to address performance concerns proactively. As such, it is becoming more important to understand the costs of this infrastructure and effectively manage its utilization.

Utilization, in this context, means the degree to which the business is getting the most out of its infrastructure. With the cost of a single storage area network (SAN) connection running into the tens of thousands of dollars, it becomes imperative to manage the I/O infrastructure for the highest degree of utilization while simultaneously guaranteeing performance and meeting service-level agreements (SLAs).

All of these points taken together bring a tremendous amount of complexity to the data center environment, causing data center administrators to ask, "How am I going to manage all of this?" The answer: through implementing I/O management.

Definition of I/O management

I/O management monitors I/O from the application's perspective, allowing the data center to do three things. First, it helps the data center better utilize existing infrastructure and avoid purchasing new hardware. Second, it enables proactive performance management, with adjustments made before a performance issue surfaces.

Third, it encourages true problem management, in which the root cause of an issue is established and the problem is permanently resolved. Essentially, I/O management creates a system that can handle massive volumes of data: it examines the various I/O layers to monitor performance, troubleshoot issues and analyze I/O data, which both prevents and defuses issues that may arise.

Reasons to implement I/O management

I/O management is required because I/O is complicated and becomes more so with every added layer of infrastructure virtualization. This infrastructure has several tiers, with several layers of virtualization, all of which manage I/O. As a result, there is an increasing need for I/O management within today's complex data center.

For example, the application I/O path includes file systems, SCSI stacks, operating systems, schedulers, hypervisors, volume management, multipathing software, device drivers, PCI subsystems, virtual and real networks and more. At the end of the day, though, the application simply says "read this file," "write this block" or "copy this memory," and expects everything in between to just happen. That is not always the case: without a proper system in place to manage these requests, applications will not perform correctly.
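The point is easy to see from the application side. In the Python sketch below (illustrative only; `os.pread` is POSIX-specific), the application's entire involvement in an I/O is one line, while the file system, block layer, drivers and, on a SAN, the fabric all sit invisibly underneath that call; timing the call is the only visibility the application naturally has:

```python
import os
import time

def sample_read_latency(path, block_size=4096, samples=100):
    """Time a series of small reads. Each os.pread below is the whole
    "read this block" request as the application sees it; every layer
    of the I/O stack underneath it is hidden inside the one call."""
    fd = os.open(path, os.O_RDONLY)
    latencies_ms = []
    try:
        size = os.fstat(fd).st_size
        for i in range(samples):
            offset = (i * block_size) % max(size - block_size, 1)
            start = time.perf_counter()
            os.pread(fd, block_size, offset)  # the application's one-line request
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
    return latencies_ms
```

Anything an I/O management system learns about the layers below has to come from instrumentation outside the application, which is exactly the gap such tools fill.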

This is where I/O management comes in. Imagine I/O as a two-sided process where one side is responsible for the flow of information between a storage device and an application server, and the other side is responsible for the flow of information between the application server and the client trying to access information. On the client side, application performance management (APM) handles transactions between the client and the application server, ensuring that I/O flows without interruption between the client and the server. I/O management does what APM does, except it handles transactions between a storage device and an application server. It works on the I/O infrastructure side to ensure the proper flow of I/O.

I/O management's analytical tools

Within I/O management, there are several analytical tools a data center manager can use. For example, I/O management can quickly identify whether a performance issue originated in the storage network or within the application server, and spot the warning signs before a problem becomes visible to users. There are several other pieces of information that I/O management adds to the toolbox.
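That first capability, attributing a slow I/O to the right part of the path, reduces to comparing per-component latency once the measurements exist. A minimal sketch, assuming a hypothetical per-layer latency breakdown (the layer names and figures here are purely illustrative, not from any particular product):

```python
def locate_bottleneck(layer_latency_ms):
    """Given per-layer latency contributions (in ms) for one I/O,
    return the layer contributing the most time. The breakdown
    itself must come from instrumentation; this only compares it."""
    return max(layer_latency_ms, key=layer_latency_ms.get)
```

With a breakdown such as `{"application server": 0.3, "storage network": 0.1, "storage array": 3.2}`, the comparison immediately points at the storage array rather than leaving the administrator to guess.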

Using historical performance information, I/O performance trending can benefit the IT administrator during both proactive and reactive performance analysis. Unlike tools that cannot retain historical performance data, I/O management can consolidate, trend and track this information to create performance charts the administrator can use to pinpoint a problem immediately. Among the problems I/O performance trending can identify are I/O latency issues and operating system command-queue buildup, both of which can affect the speed and consistency of I/O and, in turn, overall application performance.
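The core of such trending is simple: smooth periodic latency samples so a drift stands out from noise, then compare the smoothed values against a baseline. A minimal sketch (the moving-average window and baseline are illustrative choices, not values from any specific tool):

```python
from statistics import mean

def latency_trend(samples_ms, window=5):
    """Smooth periodic latency samples with a moving average so a slow
    upward drift stands out from per-sample noise."""
    return [mean(samples_ms[i - window + 1:i + 1])
            for i in range(window - 1, len(samples_ms))]

def sla_breaches(trend, baseline_ms):
    """Indices of smoothed samples that exceed a latency baseline,
    e.g. one derived from an SLA commitment."""
    return [i for i, v in enumerate(trend) if v > baseline_ms]
```

Charting the smoothed series over days or weeks is what lets an administrator see a latency problem building before it breaches the SLA.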

I/O management ensures utilization

I/O management can also ensure utilization. Utilization is highly important because it ensures that the entire infrastructure a business possesses is being used efficiently, to its capacity. Essentially, I/O management ensures that the business is getting the most out of its infrastructure. While the various elements in a storage network are all rated for a certain bandwidth (that is, a 4Gb/s Fibre Channel link, a 2Gb/s Fibre Channel array port, a 10Gb/s Ethernet port and so on), many factors come into play when determining how many I/O operations those elements can actually handle.
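The basic arithmetic behind utilization reporting is worth making explicit. A minimal sketch, assuming observed throughput in bytes per second against a rated line rate in gigabits per second; it deliberately ignores protocol framing and encoding overhead, so real usable capacity is somewhat lower than this computes:

```python
def link_utilization(observed_bytes_per_sec, rated_gbps):
    """Fraction of a link's rated line rate in use. Ignores framing
    and encoding overhead, so treat the result as optimistic."""
    capacity_bytes_per_sec = rated_gbps * 1e9 / 8.0  # gigabits -> bytes
    return observed_bytes_per_sec / capacity_bytes_per_sec

def is_saturated(observed_bytes_per_sec, rated_gbps, threshold=0.8):
    """Flag a link running above a chosen utilization threshold
    (the 0.8 default here is an illustrative choice)."""
    return link_utilization(observed_bytes_per_sec, rated_gbps) >= threshold
```

For example, 100 MB/s on a 4Gb/s Fibre Channel link is only 20 percent of the line rate; the gap between that figure and what the link can sustain under a real I/O mix is precisely why raw bandwidth ratings alone do not answer the utilization question.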

Bottlenecks can occur within the various I/O-handling layers when they are overloaded, and these areas of I/O path contention are often invisible to conventional monitoring. By surfacing them, I/O management adds critical insight into performance and availability management, which in turn helps an administrator configure for optimal utilization.

These I/O tools produce data that, when efficiently collected and managed with the right management tool, becomes an invaluable resource for proactively addressing potential performance problems, reactively troubleshooting the root cause of performance issues, and deploying and maintaining an efficient I/O infrastructure.

I/O management integrates information

But I/O management doesn't stop there. Collecting the right information is just the first part of effectively managing availability and performance in today's complex data center environments. I/O management also integrates the information to present useful reports and alerts that provide key benefits such as reduced capital costs and decreased operating expenses. I/O management can also help produce positive revenue by reducing the amount of both scheduled and unscheduled downtime, while simultaneously optimizing performance and utilization of expensive I/O infrastructure.

I/O management implementation offers solutions to several issues that occur within the data center, issues that will only continue to multiply as data centers grow in complexity and size. I/O management tools rectify these problems both by stepping in proactively before trouble arises and by defusing problems that had occurred prior to implementation.

As data centers become increasingly complex and the number of virtualization layers continues to grow, implementing I/O management will become an increasingly critical step in ensuring the highest degree of utilization while simultaneously guaranteeing performance.

John Waszak is Vice President of Software Product Management at Emulex. John joined Emulex in August of 2000. Previously, John was the vice president of engineering at Emulex. Prior to Emulex, John was the vice president and business unit lead for the Factory Systems Division of PRI Automation (now Brooks Automation), overseeing a breadth of R&D, operations and business functions. John also ran his own startup technology company for over six years, selling advanced monitoring systems to the thin film industry, which led to a successful acquisition of the company's technology. John holds a Bachelor's degree in Electrical Engineering from the Worcester Polytechnic Institute. He can be reached at John.Waszak@emulex.com.
