Memory: How to Optimize the Power Used
Memory is often overlooked as a factor in a server's overall power usage when specifying its configuration. Memory modules vary widely from vendor to vendor and are usually not well documented when it comes to power consumption. Generally speaking, the more memory per module, the lower the power per GB of memory. Also, the faster the memory, the more power it draws (this is tied to the speed of the memory bus of the server and CPUs).

Hard drives

The capacity, physical density and energy efficiency of hard drives have outpaced the performance increases of many other computing components. However, our appetite for data storage seems insatiable, so in practice the gains are almost a zero-sum game. Still, the power required by the newer, small-form-factor 2.5-inch drives is fairly low compared to the "full-size" 3.5-inch drives of only a generation ago. Also, since the magnetic density of the media continues to increase per platter, larger-capacity hard drives use the same energy as smaller-capacity drives of the same type: within the same class of drive, the 10K-RPM version of either a 146GB or 300GB drive uses about seven watts in use and only 3.5 watts when idle. Spindle speed has a direct effect on power draw, so unless you have a specialized application that requires faster disk response, a 10K-RPM drive offers far more storage per watt than a faster-spinning drive for general-purpose storage. Consider using the lower-power drives wherever possible, as the power savings add up. Recently, solid-state drives (SSDs) for notebooks have increased in capacity to as much as 512 GB and have also begun to come down in price. They will soon be making inroads into the server market and would yield even more energy savings, especially compared to 15K-RPM drives. Of course, check with your server vendor to see what your OEM drive options are.
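The storage-per-watt comparison above can be worked out with a quick calculation. The wattage and capacity figures are the illustrative ones quoted in this section, not vendor specifications:

```python
# Storage-per-watt for the example 10K-RPM drives discussed above.
# Wattage figures are the illustrative ones from the text, not vendor specs.
drives = {
    "146GB": {"capacity_gb": 146, "active_w": 7.0, "idle_w": 3.5},
    "300GB": {"capacity_gb": 300, "active_w": 7.0, "idle_w": 3.5},
}

# GB delivered per active watt, per drive.
gb_per_watt = {name: d["capacity_gb"] / d["active_w"] for name, d in drives.items()}

for name, value in gb_per_watt.items():
    print(f"{name} drive: {value:.1f} GB per active watt")
```

Because both drives draw the same power, the 300GB model delivers roughly twice the storage per watt of the 146GB model, which is why larger-capacity drives of the same type are the more efficient choice.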
I/O cards and ports

While most IT people do not think about how much power the network interface card (NIC) or other I/O cards draw, this is an opportunity to save several watts per server. Some servers come with embedded cards; others use add-on cards, or a combination of both. When selecting a NIC, we usually want the fastest throughput, without any consideration of power usage. For example, Intel makes several NICs ranging in power from the Intel PRO/1000 PT (which draws only 3.3 watts) to a 10-Gigabit dual-fiber XF card (which draws 14 watts). In the case of OEM server NICs, a major manufacturer's power estimator tool indicates 22 watts for its OEM PCI Gigabit Ethernet card. Many servers have embedded NICs, which may or may not draw power even when disabled. If you intend to use multiple NICs for redundancy or throughput, a careful comparison of internal or OEM cards can save several watts per card.

Other motherboard components and supporting chip sets

Each server requires its own supporting chip sets to form a complete system, and it is beyond the scope of this article to compare the wide variety of systems on the market. This is where the different vendors can each tout their claims that their server is the most energy-efficient system available. If the motherboard is already equipped with enough on-board NICs, RAID controllers or other I/O devices to meet your requirements, you may not need to add those cards separately. Each major manufacturer now seems to offer a power-estimating tool for its servers. Such a tool is not meant to be an absolute indicator of the actual power the server will draw, but it provides a good estimate and a way to compare different components and configurations.

The bottom line

All these factors add up in determining the power your data center uses.
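To put those per-component watts in dollar terms, here is a back-of-the-envelope sketch. The electricity rate and facility overhead factor below are assumptions chosen for illustration only; substitute your own utility rate and measured overhead:

```python
# Back-of-the-envelope: annual cost of each watt drawn 24x7.
# Assumptions (adjust to your facility): $0.10/kWh utility rate and an
# overhead factor of 2.0, i.e., each IT watt costs roughly two watts at
# the meter once power distribution and cooling are included.
RATE_PER_KWH = 0.10   # assumed rate, USD per kWh
OVERHEAD = 2.0        # assumed facility overhead (power + cooling)
HOURS_PER_YEAR = 8760

def annual_cost(watts: float) -> float:
    """Yearly electricity cost of a continuous load of `watts`."""
    return watts * OVERHEAD * HOURS_PER_YEAR / 1000 * RATE_PER_KWH

# Example: choosing the 3.3-watt NIC over the 14-watt NIC mentioned above
# saves about 10.7 watts per server, around the clock.
per_server = annual_cost(14.0 - 3.3)
print(f"~${per_server:.2f} per server per year")
print(f"~${per_server * 100:.0f} per year across 100 servers")
```

A single watt is trivial; the same watt repeated across every card in every server in the room is not, which is the point of the sections above.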
By carefully comparing and selecting more efficient components and configuration options, you can save significant power over time. Remember, by specifying and configuring your servers to adequately meet, but not exceed, your computing requirements, each watt you save per server can save a significant amount of money per year. It could also mean the difference between needing to upgrade the power and cooling in your data center or server room, and being able to continue operating within the existing capacity of your infrastructure.

The last recommendation, and perhaps the simplest and most effective way to save energy, is to review the status and purpose of every IT device in the data center. Many studies have shown that a significant number of servers and other IT devices are no longer in production but are still powered up. No one seems to know what application or function they support, but no one wants the responsibility of switching them off. So take a total device inventory regularly. You may find several servers, routers and switches that are unused yet still powered up. Once you find them, just turn them off.

Julius Neudorfer is the director of Network Services and a founder of North American Access Technologies, Inc. Since 1987, Julius has been involved with designing data and voice networks and data center infrastructure. He holds a patent for a network-based facsimile PBX system and is the primary designer of the NAAT Mobile Emergency Data Center. Over the last 20 years, Julius has designed and overseen the implementation of many advanced integrated network solutions for clients. He can be reached at email@example.com.
Ideally, get as much memory as your application will need, but do not max out the memory on every server just because of the old adage that you can never have too much. Over-specified, unused memory increases initial cost and draws unnecessary power over the life of the server. Even though larger memory modules can cost somewhat more per GB, a larger, more power-efficient module can lower the power used over the server's life, and using fewer modules leaves more sockets open if you do need to add memory in the future.
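The trade-off between many small modules and fewer large ones can be sketched numerically. The per-module wattages below are hypothetical round numbers chosen only to illustrate the "lower power per GB" point made above, not vendor data:

```python
# Two hypothetical ways to configure 32 GB of server memory.
# Per-module power draws are illustrative assumptions, not vendor figures.
configs = {
    "8 x 4GB": {"modules": 8, "gb_each": 4, "watts_each": 4.0},
    "4 x 8GB": {"modules": 4, "gb_each": 8, "watts_each": 5.0},
}

w_per_gb = {}
for name, c in configs.items():
    total_gb = c["modules"] * c["gb_each"]
    total_w = c["modules"] * c["watts_each"]
    w_per_gb[name] = total_w / total_gb
    print(f"{name}: {total_gb} GB at {total_w:.0f} W "
          f"({w_per_gb[name]:.2f} W/GB), {c['modules']} sockets used")
```

In this sketch the larger modules each draw slightly more power but far less per GB, and the configuration leaves four sockets open for future upgrades, which is exactly the reasoning above.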