Cisco's Marcoux Charged Up for Designing a Corporate 'Green Road Map'
Data center and power-supply industry veteran Paul Marcoux was hired a year
ago to serve as director of all of Cisco Systems' green IT initiatives, and
that's one tall order. Cisco employs some 68,000 people and owns dozens of
workplaces around the world.
Marcoux (pronounced Mar-KOO), whose official title is vice president of green engineering in Cisco's Development Organization Operations, joined the company from American Power Conversion Corp., where he held an executive position reporting to the CTO and founder.
Marcoux has an extensive background both in technology and environmental issues. He is one of the founders of The Green Grid, a nonprofit consortium dedicated to advancing energy efficiency in data centers and business computing ecosystems.
Marcoux also has held executive positions in the financial, health care and technology industries. He has provided consulting, design, engineering and management services for more than 3 million square feet of domestic and international data centers, ranging from small LAN (local-area network) rooms to state-of-the-art data centers requiring dual redundancy.
Marcoux met recently with eWEEK Senior Writer Chris Preimesberger on the Cisco Systems campus in San Jose, Calif.
How do you approach your mission at Cisco Systems?
We're essentially putting together a "green road map" for the entire corporation. This involves everything from recycling cans in the cafeteria to the kinds of energy-conscious products we make. The CDO [Cisco Development Organization] group comprises all of Cisco's divisions, from manufacturing through research, that are responsible for building the products you see.
Now, within that organization-which is very large and very diversified-the level of interaction between each "siloed" group, or business unit, as some people call it, is mostly good, depending on how a given organization relates to its, say, "sister" organization. Sometimes it's not [good]. So my role here is to bridge that. When you have a siloed organization and you're trying to run a horizontal element through it, what you're really creating is matrix management-and this one is about solving the green issues for CDO.
What qualifications do you bring to Cisco?
A lifetime of experiences in the data center, in power management systems and in power management networking systems.
What's first on your agenda?
We're really using a multipronged approach. There are lower levels where we have set in place teams that make up the Green Engineering Task Force. This group of people within CDO represents some of the most talented people throughout these various silos-we've now corralled them into one group. This group will be able to analyze very strategic elements of the greening issue.
Let me give you some ideas what they'll be tackling: Basically, all IT equipment has power supplies. The efficiency of a power supply is dependent upon its utilization. You can engineer a very high-efficiency power supply, but if it's not utilized with a high degree of throughput, it runs in a very low-efficiency region.
What we have chosen to do is take a look at all our power supplies and redesign them to make them high efficiency over a very broad range of operation.
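The relationship Marcoux describes, efficiency depending on utilization, can be sketched numerically. The curves below are purely illustrative assumptions, not Cisco or vendor data: a supply tuned only for peak load wastes much more input power when it idles at low utilization than one designed for a broad operating range.

```python
# Illustrative sketch only: the efficiency figures are assumptions, not measured data.
# A "peak-tuned" supply is efficient near full load but poor at light load;
# a "broad-range" supply holds efficiency across the operating window.

def wasted_watts(load_w, rated_w, efficiency_curve):
    """Input power lost as heat at a given load, using a {load_fraction: efficiency} table."""
    fraction = load_w / rated_w
    # pick the nearest tabulated load fraction
    nearest = min(efficiency_curve, key=lambda f: abs(f - fraction))
    eff = efficiency_curve[nearest]
    input_w = load_w / eff
    return input_w - load_w

peak_tuned  = {0.2: 0.70, 0.5: 0.85, 1.0: 0.92}   # assumed curve
broad_range = {0.2: 0.88, 0.5: 0.91, 1.0: 0.92}   # assumed curve

# A 500 W supply carrying only 100 W (20 percent load):
for name, curve in [("peak-tuned", peak_tuned), ("broad-range", broad_range)]:
    loss = wasted_watts(load_w=100, rated_w=500, efficiency_curve=curve)
    print(f"{name}: {loss:.1f} W lost at 20% load")
# peak-tuned loses ~42.9 W; broad-range loses ~13.6 W on the same 100 W load
```

The same IT load, drawn through two different supply designs, wastes three times as much power in one as in the other, which is why redesigning for efficiency across a broad range matters more than the peak number on the datasheet.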
Putting a New Level of Intelligence into Power Supplies
Does Cisco buy or make its own power supplies?
We buy them, but we have our own requirements. Emerson is a big [hardware] supplier to us.
We're also looking to put the next level of intelligence in power supplies. Why do we need intelligence? This is extremely important when you want the ability to control and output information on the operational parameters of something. The key point is that this output can now go into energy management systems, building management systems and IT management systems.
They can then make value decisions on what you want to do.
Can you offer an example of intelligence in an energy management system?
Everyone knows about virtualization; we're all pretty astute in that now. Suppose your virtualized process wants to transmit a workload in the data center to another data center on a campus, or to another data center on the other side of the planet. We make the assumption-taking into account any additional resources that are needed-that wherever that workload is transferred, the power and cooling [on the other end] are fine. That is a very dangerous assumption. Right now, it's not a problem because we transmit small requests.
If you convert the data to watts, you're looking at [general workloads] of anywhere from 3,000 to 5,000 watts-about half a rack [of servers]. Those are small loads. But as virtualization becomes more powerful within the data center, you'll be finding larger amounts of data being requested, which will need larger amounts of power [to transport it].
So the ability to send [power control] information ahead to where the data will go-in order to pre-cool it and to validate that the power indeed is available to receive it-becomes extremely important.
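The pre-validation step Marcoux describes, confirming power and cooling at the destination before moving the workload, can be sketched as a simple gate. Everything here is hypothetical: the headroom fields, function name, and margin are invented for illustration and do not represent any Cisco or building-management API.

```python
# Hypothetical sketch: "power_headroom_w" and "cooling_headroom_w" stand in for
# whatever a building/energy management system would actually report.

def can_accept_workload(site, workload_watts, cooling_margin=1.2):
    """Gate a virtualized workload transfer on the destination's power and cooling.

    cooling_margin: assume the site must absorb somewhat more heat than the IT
    load draws (an illustrative 20 percent overhead, not a measured figure).
    """
    power_ok = site["power_headroom_w"] >= workload_watts
    cooling_ok = site["cooling_headroom_w"] >= workload_watts * cooling_margin
    return power_ok and cooling_ok

# A workload of roughly the half-rack size mentioned in the interview (~4,000 W):
remote_site = {"power_headroom_w": 6000, "cooling_headroom_w": 3500}
if can_accept_workload(remote_site, 4000):
    print("transfer workload")
else:
    print("pre-cool or defer: destination cannot absorb the load yet")
```

In this example the destination has enough electrical headroom but not enough cooling, so the transfer would be held until the site is pre-cooled, exactly the "send information ahead" scenario described above.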
Why are the workloads getting so much larger? Because of the sheer depth and amount of data, and newer, heavier files, such as high-definition video?
Yes on all of the above. It's being driven by all sorts of social networking activities. The amount of transference [over the Web] two years ago was a small fraction of what it is today. Two years from now, it will be many, many times larger.
How Virtualization Figures into Power Savings
How does virtualization figure into this power-saving equation? Does it
save-or cost us-energy?
When you transfer a vast amount of data on a virtualized basis, you're going to be activating areas within the data center that have probably cooled down and not processed anything in awhile. So the local building or energy management systems may have throttled those areas down to save energy.
But you need to be able to go there because virtualization, to be successful, has two components to an equation that most people don't realize: You virtualize IT, but you also have to have the equal virtualization of the facility.
So, "VxIT," from a mathematical perspective, is equal to "VxFacility." You have to keep the two in harmony. There is a reason for that. When you virtualize a process on the IT side in a data center that is not a greenfield [site], the problem is that the data center was designed with upper and lower [power] limits.
We always knew what happens when you exceed [a power limit]: The system shuts down. But we did not understand what would happen if you could actually drive a process below its design requirements. Power and cooling are designed for a window of operation. When you go below the lower limits of the window, what happens from a cooling perspective? Systems will shut off. The root cause analysis is done, the data center crashed, yet no one knows why. The system simply shut down.
What happened was, virtualization saw there was a problem [and] it transferred the workload someplace else, so that line went above the design requirement again. Same thing with the power systems. The frequency among multiple UPSes [uninterruptible power supplies] can become unstable. When that instability exceeds the threshold level, they'll take themselves offline.
The safety circuits are operating; they're doing what they were designed to do. So what's the answer? The answer is to understand that when you virtualize the IT, you have to review the facility part.
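The failure mode in the answer above, load dropping out of the facility's designed operating window, can be expressed as a simple invariant check. The limits and loads below are invented for illustration; real design windows vary by facility.

```python
# Illustrative sketch: a facility zone is designed for a window of load.
# Exceeding the upper limit trips safety circuits; dropping below the lower
# limit can destabilize cooling and UPS frequency, also forcing a shutdown.

def zone_status(load_w, design_min_w, design_max_w):
    if load_w > design_max_w:
        return "overload: safety shutdown"
    if load_w < design_min_w:
        return "underload: cooling/UPS instability risk"
    return "within design window"

# Virtualization drains a zone that was sized for a 20-60 kW window:
print(zone_status(load_w=8_000, design_min_w=20_000, design_max_w=60_000))
# -> underload: cooling/UPS instability risk
```

The point of "VxIT = VxFacility" is that a workload scheduler should treat both ends of this window as constraints, not just the overload side that everyone already monitors.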
Is There a Correlation Between I/O and Power Supply?
Is there a direct correlation between I/O and power supply?
If you are talking about servers, there can be. If you are talking about networks, the answer is no. The correlation depends upon where you have your box within the network. You will find that if you have a box at some point in the network and move that same box to another area, it will consume a different amount of energy.
The box also has features and functions, and depending upon where it sits in the network-and how the network is configured-those features and functions can be affected.
What we also have been doing is trying to provide the industry a good benchmark, so that you can begin to analyze networks based on a handicapping system. It will allow you to determine the type of network you have, where within the network specific types of boxes are sitting, and which features and functions are core versus ancillary. That gives you a very strong ability to understand the energy throughput.
What new "green" products will Cisco be producing in the next year or so?
We have projects going on in all kinds of areas. The Nexus 7000 [network switch] is a very interesting machine. That switch allows you essentially to take multiple switches, condense them into one and then be able to use-in one physical box-various areas of the network.
From that aspect, it displaces a lot of other equipment. It uses front-to-back cooling, which is great for hot- and cold-aisle considerations in the data center; it uses variable-speed fans, which again aid in its energy consumption.
Let's jump over to telephones. What's happening here is that our customers are requiring more and more features on their [phone] systems, not less. Features do add to energy costs. So what are we doing here?
Newer phones coming out will be able to run scripts, which can save power. Let's say you leave your office at 6 p.m. Maybe at 8 p.m., you have your phone automatically shut itself down [using the command script]. It doesn't sound like a lot, but it can save 10 to 15 watts. But 10 to 15 watts, multiplied by millions of phones-that's lots and lots of power [that won't have to be drawn from the grid].
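The back-of-the-envelope arithmetic behind the phone example looks like this. The fleet size and shutdown window are illustrative assumptions, not Cisco figures; only the 10-to-15-watt range comes from the interview.

```python
# Illustrative fleet-savings estimate; fleet size and hours are assumptions.
watts_saved_per_phone = 12      # midpoint of the 10-15 W range quoted above
phones = 5_000_000              # assumed installed fleet
idle_hours_per_day = 12         # assumed overnight shutdown window

kwh_per_day = watts_saved_per_phone * phones * idle_hours_per_day / 1000
print(f"{kwh_per_day:,.0f} kWh saved per day")
# -> 720,000 kWh saved per day
```

A per-device saving that "doesn't sound like a lot" scales to hundreds of megawatt-hours a day across a large fleet, which is the multiplication Marcoux is pointing at.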
The power supplies that feed our Linksys [wireless Internet network] boxes were just certified by Energy Star [an arm of the U.S. Environmental Protection Agency]. Each device uses only 25 or 30 watts, and the new power supplies save 2 or 3 watts, but multiply that by the millions of units out there in the field, and you'll find a significant savings.
We're also taking a look at ASIC [application-specific integrated circuit] design. Virtually all IT devices have these in them. When you plug in such a device, it consumes 80 percent of its power, whether it's being used or not. When we're not using something, we should be able to control it from a power perspective, and that's a big project we're working on now.
We're now cooperating with some of the best research institutions in the country. What we will be doing is controlling the ASICs-actually shutting down portions of the board that are absolutely not necessary, conserving huge amounts of power.
Our early estimates are that we'll be able to save anywhere from 30 percent to 35 percent in energy. Imagine if we could do that worldwide.
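The power-gating idea Marcoux closes with can be sketched at the level of arithmetic: if idle draw is dominated by always-on functional blocks, shutting off the blocks a given deployment doesn't use removes their share. The block names and wattages below are invented for illustration, not real Cisco silicon data.

```python
# Illustrative sketch of block-level power gating; all figures are assumptions.
# Each functional block on the chip has an idle draw; gating off blocks that
# are not needed in a given role removes their share of the idle power.

blocks = {                      # assumed per-block idle draw in watts
    "packet_engine": 45,
    "crypto_unit":   25,
    "spare_ports":   10,
    "management":    20,
}

def idle_power(active):
    """Total idle draw with only the named blocks powered on."""
    return sum(w for name, w in blocks.items() if name in active)

full = idle_power(set(blocks))                        # everything on: 100 W
gated = idle_power({"packet_engine", "management"})   # unused blocks gated off
savings_pct = 100 * (full - gated) / full
print(f"gating saves {savings_pct:.0f}% of idle power")
# -> gating saves 35% of idle power
```

With these assumed numbers, gating two unused blocks lands in the 30-to-35-percent range the early estimates describe; the real figure depends entirely on how much of the die a given deployment actually exercises.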