Chapter 11 Tactics

1. Prerequisite: Monitoring and benchmarking

Although monitoring and benchmarking do not directly create energy savings, these low-cost measures inform efficiency programs and track their impact.

• Calculate and monitor power utilization effectiveness (PUE).
The PUE is the ratio of total energy used by the data center to the energy actually consumed by servers over a particular time period. An ideal data center would have a PUE of 1.0: all energy would be used to power servers. In reality, many data centers have a PUE of 2.0 or higher, meaning the servers use just half of the energy. The rest is consumed by infrastructure systems that keep the data center environment cool and manage power quality. The PUE can change over time and throughout the year depending on server loads and outside temperatures, so it should be monitored regularly to track data center performance.
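
The PUE calculation itself is simple division over a metering period, as in this minimal sketch (the function name and the example energy figures are illustrative, not from the text):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power utilization effectiveness over a metering period.

    total_facility_kwh: all energy entering the data center.
    it_equipment_kwh: energy actually consumed by the servers.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,000 MWh while its servers consume 500 MWh:
print(pue(1_000_000, 500_000))  # 2.0 -- servers use just half the energy
```

Computing this monthly from utility and PDU meter readings makes the seasonal variation described above visible.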

• Track server utilization.
The average server operates at less than 10% of its potential capacity due to unpredictable loading patterns. Installing software that monitors server use helps identify efficiency opportunities in underutilized servers, as well as servers that are no longer being used at all.
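
A monitoring pipeline might reduce the collected samples to a simple report like the following sketch; the 10% threshold and the classification labels are assumptions for illustration:

```python
def flag_underutilized(samples: dict[str, list[float]],
                       threshold_pct: float = 10.0) -> dict[str, str]:
    """Classify servers from sampled CPU-utilization percentages.

    samples maps a server name to a series of utilization readings
    (e.g. hourly averages). Servers averaging below threshold_pct are
    consolidation candidates; servers near 0% may be unused entirely.
    """
    report = {}
    for name, series in samples.items():
        avg = sum(series) / len(series)
        if avg < 1.0:
            report[name] = "possibly unused"
        elif avg < threshold_pct:
            report[name] = "underutilized"
        else:
            report[name] = "ok"
    return report

# Hypothetical server names and readings:
print(flag_underutilized({
    "web-1": [42.0, 55.0, 38.0],
    "batch-7": [4.0, 6.0, 5.0],
    "old-app": [0.1, 0.0, 0.2],
}))
```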


• Install sensors to monitor temperature and humidity.
Servers have specific temperature ranges (see Tactic 5). Improved monitoring can identify isolated “hot spots” within the data center where the air is significantly hotter than the average room temperature. This data can be used to focus cooling efficiency programs and allow more servers to be added to the data center without overheating.

• Use kW/ton metric to assess cooling system performance.
The ratio of power consumed by a cooling system (kilowatts) to heat removed (tons, equivalent to 12,000 BTU/hr) is a measure of the cooling efficiency. Optimized cooling systems may operate at 0.9 kW/ton or less. In many data centers, values are above 2.0 kW/ton, indicating a large potential for efficiency improvements.
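
The kW/ton figure follows directly from the two measurements; this sketch (with a hypothetical example plant) shows the unit conversion:

```python
BTU_PER_HR_PER_TON = 12_000  # one ton of cooling, per the definition above

def kw_per_ton(cooling_power_kw: float, heat_removed_btu_per_hr: float) -> float:
    """Cooling efficiency: electrical input per ton of heat removed."""
    tons = heat_removed_btu_per_hr / BTU_PER_HR_PER_TON
    return cooling_power_kw / tons

# A plant drawing 540 kW to remove 7,200,000 BTU/hr (600 tons):
print(round(kw_per_ton(540, 7_200_000), 2))  # 0.9 -- an optimized system
```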

Additional information

More information on data center energy monitoring is available at:
• Green Grid, “Data Center Power Efficiency Metrics: PUE and DCiE.” October 2007. Accessible at
• Steve Greenberg et al., “Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers.” 2006. Accessible at

2. Energy efficient software

The energy savings potential can be quite high for software measures, although the costs and expected savings of these measures will vary widely among companies.

• Design or purchase new software that minimizes energy use.
Energy use is rarely an important constraint for software developers. As a result, software often places high demands on server hardware. More efficient software can accomplish the same task with less energy. Software efficiency is a complex issue, because efficiency measures are specific to individual programs and tasks. Incentivizing software designers to write more energy efficient code is an important first step for software created in-house. For purchased software, industry standards to benchmark software energy performance are still being developed.

• Implement power management software.
Activating energy management programs can significantly reduce energy use. Just as desktop computers enter power save modes, servers can be programmed to drop into an idle mode when they are not being used.
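
As a sketch of the scheduling idea (not any particular product's API), a power management tool might plan idle windows from predicted hourly load; the threshold is an illustrative assumption:

```python
def idle_plan(hourly_load: list[float], idle_threshold: float = 0.05) -> list[bool]:
    """True for each hour in which the server's predicted load (as a
    fraction of capacity) is low enough to drop into a low-power state."""
    assert len(hourly_load) == 24
    return [load < idle_threshold for load in hourly_load]

# A hypothetical batch server busy only from 01:00 to 04:00:
load = [0.0] * 24
for h in (1, 2, 3):
    load[h] = 0.8
print(sum(idle_plan(load)))  # 21 hours available for idle mode
```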

Additional information

More information on software efficiency is available at:
• Intel, “Creating Energy Efficient Software.” October 2008. Accessible at

3. Improved server utilization

“Server utilization” refers to the proportion of a server’s processing capacity that is being used at any time. For most servers, energy use does not vary substantially with the level of utilization. As a result, unused or underutilized servers use nearly as much energy as fully utilized servers. Significant efficiency gains can be achieved by reducing the number of servers running at low or zero utilization, and these steps can be taken at comparatively low cost.

• Unplug and remove servers that aren’t being used at all.
Surprisingly, a significant fraction of servers (in some cases, 10%) in many data centers are no longer being used. If an office employee quits, others would quickly notice if the unused desktop computer kept turning on every day. Servers are less obvious; they can run their operating system and background applications invisibly for months or years before they are removed. To identify unused servers, run programs to monitor network activity over time. This effort will identify potential “zombie servers,” which then must be individually investigated to determine whether they can safely be unplugged and removed.
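
The network-activity screen described above might reduce to something like this sketch, where the 90-day quiet period and the server names are illustrative assumptions:

```python
from datetime import datetime, timedelta

def zombie_candidates(last_activity: dict[str, datetime],
                      now: datetime,
                      quiet_days: int = 90) -> list[str]:
    """Servers with no observed network activity for quiet_days.

    last_activity maps a server name to the most recent timestamp at
    which monitoring saw meaningful (non-background) traffic. Results
    are candidates only -- each must be investigated before unplugging.
    """
    cutoff = now - timedelta(days=quiet_days)
    return sorted(name for name, seen in last_activity.items() if seen < cutoff)

now = datetime(2009, 6, 1)
print(zombie_candidates({
    "db-2": datetime(2009, 5, 30),
    "legacy-app": datetime(2008, 11, 12),
}, now))  # ['legacy-app']
```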

• Virtualize multiple servers onto single machines.
New technologies developed in the last five years allow multiple operating system copies to run simultaneously on a single server. This process is known as virtualization. Virtualization offers large energy savings potential, because it consolidates several servers onto a single, better utilized server. Virtualization presents challenges, because entire operating systems must be transferred from one server to another. However, the potential benefits are so great that many companies are now rushing to implement virtualization initiatives. Virtualization potential is often quantified as 3:1 or 5:1, reflecting the number of servers that can be consolidated onto a single machine. In many cases, however, virtualization levels exceeding 20:1 are possible.
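
The arithmetic behind those consolidation ratios can be sketched as follows; the per-server energy figure is a placeholder, not a benchmark from the text:

```python
import math

def consolidation_savings(n_servers: int, ratio: int,
                          kwh_per_server_year: float) -> tuple[int, float]:
    """Estimate servers removed and annual kWh saved by virtualizing
    n_servers at a given consolidation ratio (e.g. 5 means 5:1).

    Ignores the modest load increase on the surviving hosts, so this
    is an upper-bound sketch rather than a sizing tool.
    """
    hosts_needed = math.ceil(n_servers / ratio)
    removed = n_servers - hosts_needed
    return removed, removed * kwh_per_server_year

# 100 lightly used servers at 5:1, each drawing a hypothetical 4,000 kWh/yr:
print(consolidation_savings(100, 5, 4000))  # (80, 320000.0)
```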

• Consider advanced resource allocation through applications rationalization and cloud computing.
In addition to virtualization, new techniques are available that allow computing demands to be allocated to any server with capacity, without compromising security. Called cloud computing, these programs distribute loads among servers to optimize utilization levels. Unneeded servers may be shut down to conserve power until they are required to handle spikes in load. In addition, applications rationalization measures may be implemented on a single server to allow multiple copies of an application to run simultaneously. In this way, one or more servers may be consolidated onto a single machine.
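
Load distribution of this kind is essentially a bin-packing problem. A toy first-fit-decreasing placement (the capacity headroom and workload numbers are illustrative) shows how utilization is concentrated so surplus hosts can be shut down:

```python
def allocate(loads: list[float], capacity: float = 0.8) -> list[list[float]]:
    """First-fit-decreasing placement of workloads onto the fewest hosts.

    loads are utilization fractions; capacity caps each host's total so
    it retains headroom for spikes. Hosts not in the result can be shut
    down until load rises.
    """
    hosts: list[list[float]] = []
    for load in sorted(loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= capacity:
                host.append(load)
                break
        else:
            hosts.append([load])
    return hosts

# Nine small workloads pack onto two hosts instead of nine:
print(len(allocate([0.2, 0.1, 0.15, 0.05, 0.3, 0.1, 0.2, 0.1, 0.05])))  # 2
```

Real schedulers also weigh memory, I/O and failover constraints; this sketch captures only the utilization-packing idea.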

Additional information

More information on server utilization is available at:
• CiRBA, “How to Choose the Right Virtualization Technology for Your Environment.” November 2007. Accessible at
• Richard Martin and J. Nicholas Hoover, “Guide to Cloud Computing.” InformationWeek, June 21, 2008. Accessible at

4. Efficient server hardware design

Buying efficient hardware is a cost-effective way to capture major energy savings. Although efficient hardware sometimes costs more upfront, when the “lifecycle cost of ownership” is considered, the energy savings over time more than pay back the extra cost. Since most servers are replaced (“refreshed”) every three to four years, frequent opportunities exist to upgrade to more efficient equipment.

• Purchase best-in-efficiency-class (BIEC) servers.
For a given level of performance (processing speed, RAM, etc.), servers on the market exhibit a wide range of energy demand. In other words, performance is only weakly correlated with energy use. Despite this, most companies’ purchasing decisions do not consider energy efficiency. Working with IT and supply chain departments to prioritize energy efficient server models during normal refresh cycles has the potential to save up to 50% of server energy. And since efficient servers are not necessarily costlier, this is a low-cost opportunity.
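
A lifecycle-cost comparison of the kind described above can be sketched as follows; the prices, electricity rate, refresh period and PUE multiplier are all illustrative assumptions:

```python
def lifecycle_cost(purchase_price: float, avg_power_w: float,
                   years: float = 4, usd_per_kwh: float = 0.10,
                   pue: float = 2.0) -> float:
    """Purchase price plus energy cost over the refresh cycle.

    The PUE multiplier counts the cooling and power-infrastructure
    energy that each watt of server load drags along with it.
    """
    kwh = avg_power_w / 1000 * 24 * 365 * years
    return purchase_price + kwh * pue * usd_per_kwh

# A hypothetical $2,000 server at 400 W vs. a $2,300 server at 250 W:
print(round(lifecycle_cost(2000, 400)))  # 4803 -- standard model
print(round(lifecycle_cost(2300, 250)))  # 4052 -- efficient model wins
```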

• When custom-building new servers, eliminate unnecessary components and use efficient power supplies, fans and hardware.
Some companies custom design the servers used in their data centers. This opens the door to a variety of efficiency measures that save capital costs and energy. The first step is to eliminate unneeded components that come standard in many servers. Items such as disk drives and graphics cards may be unnecessary depending on the server’s function. Next, efficiency of specific components should be considered as part of the purchasing decision. Power supplies, fans, chips and storage drives offer potential efficiency gains. To realize these opportunities, analyze how decisions are made for server components and ensure that energy use is a metric.

• Mandate efficient power supplies.
In recent years, efforts to raise power supply efficiencies have gained momentum. Server power supplies transform electricity to the low voltages demanded by electronic components. Historically, many power supplies have operated at as low as 60% efficiency—up to 40% of energy consumed by the server is lost as heat immediately. Many off-the-shelf servers today have power supplies certified by the 80 PLUS program, which demands at least 80% average efficiency. In fact, power supplies with efficiencies of 90% are available (the 80 PLUS program and the Climate Savers Computing Initiative provide lists of manufacturers offering high efficiency power supplies).
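
Those efficiency figures translate into waste heat as in this small sketch (the 300 W load is a made-up example):

```python
def psu_waste_w(dc_load_w: float, efficiency: float) -> float:
    """AC input power lost as heat in the power supply.

    efficiency is the DC-output / AC-input ratio (around 0.60 for an
    old supply, 0.90+ for a high-efficiency unit).
    """
    ac_input = dc_load_w / efficiency
    return ac_input - dc_load_w

# A 300 W DC load behind a 60% vs. a 90% supply:
print(round(psu_waste_w(300, 0.60)))  # 200 W lost as heat
print(round(psu_waste_w(300, 0.90)))  # 33 W
```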

• Use power management equipment to shut down servers.
Many servers sit unused for significant periods of the day. Often, these machines remain on even when their loads are predictable. Power management applications and hardware (smart “power distribution units”) can be programmed to shut servers down and bring them back online when needed. Since most servers draw more than half of their peak power even when idle, power management measures can significantly reduce server energy use.

Additional information

More information on efficient server hardware is available at:
• Matt Stansberry, “The Green Data Center: Energy Efficient Server Technologies.” Accessible at S/1233340803_462.html
• 80 PLUS, “What is 80 PLUS.” Accessible at
• Climate Savers Computing Initiative

5. Cooling system optimization

Cooling systems account for less than half of data center energy use, but there are often large efficiency opportunities that can be implemented with very reasonable payback periods.

• Block holes in the raised floor.
Many data centers use an open plenum beneath a raised floor to distribute air to the server racks. Fans pressurize the air in the plenum, and perforated tiles are positioned where cold air is needed (at the air intake side of server racks), allowing cold air to be pushed up into the room. However, in many data centers, floor tiles are removed to run wires or conduct maintenance and never replaced. This allows cold air to escape and reduces the efficiency of the cooling system. An easy fix is to cut small holes for cables and replace the floor tiles that cover larger openings. Companies including KoldLok also market “brushes” and other floor cover materials as inexpensive options.

• Bundle underfloor cables.
In many data centers, airflow is restricted in the plenum by tangles of wires and cables. Organizing underfloor cables can reduce fan energy use and improve cooling effectiveness, allowing more servers to be added to the data center.

• Relax temperature and humidity constraints.
Allowable temperatures in data centers are typically restricted to narrow ranges in order to reduce the risk of server failure. Many data centers adopt the “recommended range” from ASHRAE (a cooling industry organization) of between 64°F and 80°F. However, server manufacturers guarantee their servers will operate reliably at significantly warmer temperatures. For example, a typical Sun server specifies 95°F as the upper limit temperature.1 Allowing warmer data center temperatures reduces cooling energy use and allows more servers to be added to the data center. Implementation is simple: raising temperature setpoints requires only controls modifications. Getting buy-in from IT systems operators is the primary barrier to adoption.

• Enclose “hot” or “cold” aisles and block holes in racks with blanking panels.
To maximize the efficiency of an air-cooled data center, cold supply air should be physically isolated from hot return air. The simplest way to achieve this is to encapsulate an aisle of server racks by adding end doors, roof panels over the racks and “blanking panels,” which fit into the racks and block air from flowing through empty slots. When implemented, air flows from the cold aisle through the servers to the hot aisle and exhaust air stream without “short-circuiting” (cold air bypassing servers and merging with hot exhaust air) or “recirculation” (hot air flowing back to the server inlets, leading to overheating problems). Implementing aisle containment measures can disrupt data center operations if racks need to be repositioned, but can enable up to 25% cooling energy savings.2

• Commission a facility audit.
Mechanical engineering auditors evaluate HVAC systems and operations. After spending a day on-site, they can estimate energy savings and cost impacts of efficiency opportunities. In addition to the cooling system measures described above, they may recommend retrofits to use outside air for cooling, optimize condenser water and chilled water temperature setpoints, and other retrofit measures.

Additional information

More information on data center cooling systems is available at:
• PG&E, “High Performance Data Centers.” January 2006. Accessible at _CENTER S/06_DataCenters-PGE.pdf
• Matt Stansberry, “The Green Data Center: Data Center Infrastructure Efficiency.” February 2009. Accessible at S/1234285886_930.html
• ASHRAE, “2008 ASHRAE Environmental Guidelines for Datacom Equipment.” August 2008. Accessible at _Extended_Environmental_Envelope_Final_Aug_1_2008.pdf

6. Other loads: Power supply and lighting systems

• Optimize power supply and conversion systems to maximize efficiency.
The uninterruptible power supply (UPS) typically uses a battery bank to ensure that no blips in power input result in server failure. However, the process of converting power between AC and DC in the UPS is inefficient: as the energy used by servers passes through the UPS system, up to 15% of all energy is lost. One way to improve UPS efficiency is to install a “Delta Conversion” system, which diverts most AC power flows around the AC/DC conversion and battery equipment, greatly reducing conversion losses.
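
Assuming an 85%-efficient UPS (the ~15% loss figure above), the overhead for a given IT load works out as in this sketch; the 500 MWh annual load is illustrative:

```python
def ups_overhead_kwh(it_kwh: float, ups_efficiency: float) -> float:
    """Energy lost in the UPS for a given IT load over a period.

    At 85% efficiency, every kWh reaching the servers costs roughly
    1/0.85 kWh at the UPS input; the difference is lost as heat.
    """
    return it_kwh / ups_efficiency - it_kwh

# 500 MWh/yr of IT load behind an 85%-efficient UPS:
print(round(ups_overhead_kwh(500_000, 0.85)))  # 88235 kWh lost per year
```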

• Reduce lighting energy use with automated controls and more efficient fixtures.
Lights are a small piece of data center energy use, but they can easily be improved. In many data centers, lights are glaringly bright so that workers can see into the dark racks to configure servers. Furthermore, lights are often on 24/7, since a worker exiting a large data hall never knows if someone else is still at work. Occupancy sensors allow lights to turn off when the data center is empty, potentially saving 50% or more of the lighting energy. Lights can also be divided into separate banks, so that the entire space does not need to be lit when people are working in one area. Finally, the quality of light may be improved by using light-colored interior surfaces and server racks and by using indirect lighting fixtures.

Additional information

More information on data center power supply systems is available at:
• California Energy Commission PIER, “Uninterruptible Power Supplies: A Data Center Efficiency Opportunity.” September 2008. Accessible at

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License