    How to Design Real Cooling Solutions for Virtual Servers: What’s in Your Data Center? (Part 1)

    Written by Julius Neudorfer
    Published March 3, 2008

      (Editor’s Note: This is Part 1 of a three-part series on cooling solutions for virtual servers.)

      It’s 2008 and the virtual environment is the new computing paradigm. The software and hardware even seem to work together as advertised. The virtual machine appears to have many benefits such as providing better resource utilization and management, while presumably saving energy. However, “virtualization” has not repealed the laws of physics. The hardware is very real, and it requires a lot of energy and cooling resources.

      In terms of watts per rack, power consumption in the mid-to-late 1990s ranged from 500-1,000W, and perhaps occasionally reached 1-2KW. Once we all got past the dreaded Y2K frenzy and started concentrating on moving forward instead of on remediation, the servers got smaller and faster. And they started drawing more power. Today, a typical 1U server draws 250-500W and, when 40 of them are stacked in a standard 42U rack, they can draw 10-20KW and produce 35,000-70,000 BTUs of heat per hour. This requires 3-6 tons of cooling per rack. Less than five years ago, that was the amount of cooling typically specified for an entire 200-400 square foot room with 10-15 racks.

      Blade servers provide even greater space savings but, as a result, bring even higher power and cooling requirements. A blade chassis can support dozens of multi-core processors yet stands only 8-10U high. However, each chassis can require 6-8KW, and since a standard rack can hold 4-5 of them, the total can reach 24-32KW per rack.
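      To make the per-rack arithmetic concrete, here is a minimal sketch in Python using the standard rule-of-thumb conversions (roughly 3.412 BTU per hour per watt of load, and 12,000 BTU per hour per ton of cooling); the server wattages and counts are simply the figures cited above.

      # Rough per-rack heat-load arithmetic (rule-of-thumb conversions, not a design tool).
      WATTS_TO_BTU_PER_HR = 3.412     # ~3.412 BTU/hr of heat per watt of IT load
      BTU_PER_HR_PER_TON = 12_000     # 1 ton of cooling = 12,000 BTU/hr

      def rack_heat_load(watts_per_unit: float, units_per_rack: int):
          """Return (kW, BTU/hr, tons of cooling) for one rack."""
          watts = watts_per_unit * units_per_rack
          btu_hr = watts * WATTS_TO_BTU_PER_HR
          return watts / 1000, btu_hr, btu_hr / BTU_PER_HR_PER_TON

      # 40 x 1U servers at 250 W and at 500 W each:
      for w in (250, 500):
          kw, btu, tons = rack_heat_load(w, 40)
          print(f"1U rack at {w} W/server: {kw:.0f} kW, {btu:,.0f} BTU/hr, {tons:.1f} tons")

      # 4 blade chassis at 6 kW and at 8 kW each:
      for kw_each in (6, 8):
          kw, btu, tons = rack_heat_load(kw_each * 1000, 4)
          print(f"Blade rack at {kw_each} kW/chassis: {kw:.0f} kW, {btu:,.0f} BTU/hr, {tons:.1f} tons")
      # -> roughly 10-20 kW and 3-6 tons for the 1U rack, and 24-32 kW and
      #    roughly 7-9 tons for the blade rack, in line with the figures above.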

      Cooling and Virtualization

      Virtualization has taken hold, and it is rapidly becoming the latest de facto computing trend. It has proven to work effectively and does have many benefits, some real, some hypothetical. One of the many claims is that it is more energy efficient, since it can (and usually does) reduce the number of “real” servers. Of course, part and parcel of the “upgrade” to a virtualized environment is usually a set of new high-performance, high-density servers. In and of itself, it is true that the server hardware takes less energy, since there are usually fewer servers. In practice, however, the concentration of high-density servers in a much smaller space, while a benefit, has created many real deployment problems.

      Where Is the Downside?

      OK, so if virtualization uses less space, and the servers use less energy overall, where is the downside?

      Power Requirements: Yes, virtualizing the environment will use less server power overall since, if done properly, fewer servers are used. However, many existing power distribution systems cannot deliver 20-30KW to a single rack.

      Cooling Requirements: If virtualization, properly implemented, uses less space and power by relying on fewer, denser servers, it should follow that those servers need less cooling. Virtualization should therefore be more energy efficient overall and, presumably, you have made your data center greener, so to speak.

      This is where the virtualization efficiency conundrum first manifests itself. As mentioned earlier, data centers that were built just five years ago were not designed for 10, 20 or even 30KW per rack. As such, their cooling systems are not capable of efficiently removing that much heat from such a compact area. If all the racks were configured at 20KW per rack, the average power and cooling load could exceed 500 watts per square foot. Even some recently built Tier IV data centers are still limited to an average of 100-150 watts per square foot. As a result, many high-density projects have had to spread the servers across half-empty racks to keep them from overheating, which lowers the overall average power per square foot.
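      As a rough illustration of where a figure like 500 watts per square foot comes from, divide the rack power by the floor area each rack occupies. The footprint used below (about 35 square feet of raised floor per rack, including its share of aisle and clearance space) is an assumed value for the sketch, not a number from the article.

      # Power-density sketch; the 35 sq ft per rack footprint is an assumed figure.
      RACK_KW = 20            # kW per rack, from the example above
      SQ_FT_PER_RACK = 35     # assumed rack footprint including shared aisle space

      density_w_per_sq_ft = RACK_KW * 1000 / SQ_FT_PER_RACK
      print(f"{density_w_per_sq_ft:.0f} W per square foot")
      # -> roughly 570 W/sq ft, several times the 100-150 W/sq ft average that
      #    even some recently built facilities were designed to support.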

      Traditional Raised-Floor Cooling

      The “classic” data center harkens back to the days of the mainframe. It had a raised floor which served several purposes. It was used to easily distribute cold air from the Computer Room Air Conditioner (CRAC) and it also contained the power and communications cabling. While mainframes were very large, they only averaged 25-50 watts per square foot. Originally, to make it look neat and organized, everything was set up as rows facing the same way. In many cases, the cold air entered the bottom of the equipment cabinets and the hot air exited the top of the cabinets. The floor generally had no perforated tiles.

      This actually was a relatively efficient method of cooling, since all the cold air went directly into the equipment cabinets and did not mix with the warm air. With the introduction of rack-mounted servers, average power levels began to rise to 35-75 watts per square foot. It also became a problem that the cabinets all faced the same way, since hot air now exited the back of one row of racks into the front of the next row. Thus, the “hot aisle/cold aisle” layout came into being in the mid-to-late 1990s.

      CRAC units were still located mainly at the perimeter of the data center, but the floor tiles now had vents (or were perforated) in the cold aisles. This worked better, and the cooling systems were able to keep up with the rising heat load by adding more and larger CRAC units that had higher-power blowers, and by increasing the size of the floor tile vent openings.

      Still the Predominant Method of Cooling

      This is still the predominant method of cooling in most data centers built in the last 10 years, and in many that are still in the design stage. Raised floors became deeper: two-, three- and four-foot depths are now somewhat common, allowing more and more cold air to be distributed using this “time-tested and proven” methodology. It is cost-effective only up to a certain power level, though. Past that point, the method has multiple drawbacks. For one, it takes much more energy for the blower motors in the perimeter CRACs to push air at higher velocities and pressures, so they consume far more energy trying to deliver enough cold air through a single 2′ x 2′ perforated tile to support a 30KW rack.
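      The article notes that the perimeter blowers must push air at higher velocities and pressures; the standard fan affinity laws, which it does not spell out, help quantify why that gets expensive so quickly. In idealized form: airflow scales with fan speed, static pressure with speed squared, and shaft power with speed cubed. A simplified sketch:

      # Idealized fan affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
      def relative_blower_power(airflow_ratio: float) -> float:
          """Relative blower power needed to deliver `airflow_ratio` times the baseline airflow."""
          return airflow_ratio ** 3

      for ratio in (1.0, 1.5, 2.0):
          print(f"{ratio:.1f}x airflow -> {relative_blower_power(ratio):.2f}x blower power")
      # Doubling the air delivered through the plenum costs roughly 8x the blower
      # energy, which is why brute-force raised-floor cooling scales poorly for 20-30KW racks.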

      Floor Grates Replace Floor Tiles

      In many cases, perforated floor tiles have now been replaced by floor “grates” in an effort to supply enough cold air to racks that need “tons” of cold air to cool the heat of high-density servers. As an aside, each 3.5KW of load produces roughly 12,000 BTUs of heat per hour, which requires 1 ton of cooling.

      Unfortunately, 3.5KW-per-rack has been far exceeded many times over with the advent of the “1U” and Blade Server. Now, instead of specifying how many tons of cooling for an entire data center, we may now need 5-10 tons per rack!
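      The “3.5KW per ton” rule of thumb follows directly from standard unit conversions (about 3.412 BTU per hour per watt, and 12,000 BTU per hour per ton of cooling), and the same conversion maps the rack loads discussed above straight into the 5-10 ton range. A quick check:

      # Deriving the "3.5 kW per ton" rule of thumb from standard unit conversions.
      BTU_PER_HR_PER_WATT = 3.412
      BTU_PER_HR_PER_TON = 12_000

      print(BTU_PER_HR_PER_TON / BTU_PER_HR_PER_WATT)   # ~3517 W, i.e. roughly 3.5 kW per ton

      # Applied to the high-density racks discussed above:
      for rack_kw in (20, 32):
          tons = rack_kw * 1000 * BTU_PER_HR_PER_WATT / BTU_PER_HR_PER_TON
          print(f"{rack_kw} kW rack -> {tons:.1f} tons of cooling")
      # -> about 5.7 and 9.1 tons, consistent with needing 5-10 tons per rack.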

      Because of this poor cooling-path efficiency at such high heat loads, the power used to cool high-density server “farms” has actually exceeded the power used by the servers themselves. In some cases, for every dollar spent to power the servers, two or more dollars are spent on cooling. Ideally, cooling should consume less than half the energy of the IT load, not twice as much.
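      To put the dollar ratio in perspective, here is a minimal sketch; the 20KW rack and the $0.10-per-kWh electricity rate are assumed example figures, while the 2:1 and 0.5:1 cooling-to-server power ratios are the two cases described above.

      # Annual cost sketch; the rack load and electricity rate are assumed example values.
      RACK_KW = 20
      HOURS_PER_YEAR = 8760
      DOLLARS_PER_KWH = 0.10

      server_cost = RACK_KW * HOURS_PER_YEAR * DOLLARS_PER_KWH
      for label, cooling_ratio in (("poor cooling path (2:1)", 2.0), ("efficient cooling (0.5:1)", 0.5)):
          cooling_cost = server_cost * cooling_ratio
          print(f"{label}: servers ${server_cost:,.0f}/yr, cooling ${cooling_cost:,.0f}/yr, "
                f"total ${server_cost + cooling_cost:,.0f}/yr")
      # At 2:1, two-thirds of the total power bill goes to cooling; at 0.5:1, only one-third does.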

      In some cases, traditional raised-floor perimeter cooling for high-density applications is causing an overall increase in energy usage rather than a decrease. It is also common for this method to be unable to adequately cool a full rack of high-density servers.

      Non-Raised Floor Cooling

      Once, a raised floor was considered the only way to cool a “real” data center. Some newer cooling systems, however, do not require one: they place the cooling units in close proximity to the racks, which improves both cooling performance and cooling efficiency. These systems can be deployed with or without a raised floor, and they can serve as a complete solution or as an adjunct to an overtaxed cooling system.

      Julius Neudorfer is the Director of Network Services and a founder of North American Access Technologies, Inc. Since 1987, Julius has been involved with designing Data and Voice Networks and Data Center Infrastructure. He personally holds a patent for a network-based facsimile PBX system. Julius is also the primary designer of the NAAT Mobile Emergency Data Center. Over the last 20 years, Julius has designed and overseen the implementation of many advanced Integrated Network Solutions for clients. He can be reached at [email protected].
