
Eye on Energy: Tips for Improving Data Center Efficiency




By John Bruschi, P.E.

Data centers — facilities housing electronic equipment used for data processing, data storage, and communications networking — operate 24 hours per day, seven days per week, year-round. They are also energy intensive, using as much as 100-200 times the electricity per square foot of a standard commercial office space.

Such high levels of power consumption create opportunities for energy efficiencies, though maintenance and engineering managers must take care to preserve operational reliability. With rapid growth in information technology (IT) services, managers can avoid costly facility expansions through efficient facilities management that reduces data center space and power capacity demands.

By identifying the leading sources and causes of energy waste in data centers, managers can determine strategies and tactics for system adjustments that will improve data center energy efficiency.

IT equipment
For many data centers, every watt of server power requires a second watt of infrastructure power to support that server. Most of that infrastructure power goes to operating the data center cooling system and to losses in electric power chain components, such as the uninterruptible power supply (UPS). So in many cases, power savings at the server level can lead to nearly double the savings for overall data center operations.
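As a quick illustration of that multiplier effect, the sketch below estimates the facility-level impact of a server-level power reduction. The overhead factor, load reduction, and electricity rate are assumed values for illustration only.

```python
# Rough sketch: facility-level savings from a server-level power reduction.
# All inputs below are assumptions for illustration, not measured values.

overhead_factor = 2.0      # assumed: ~1 W of cooling and power-chain overhead per watt of IT load
server_savings_kw = 10.0   # assumed: IT load removed (for example, decommissioned servers)
electricity_rate = 0.10    # assumed: dollars per kWh
hours_per_year = 8760      # continuous, 24/7 operation

facility_savings_kw = server_savings_kw * overhead_factor
annual_kwh = facility_savings_kw * hours_per_year
annual_dollars = annual_kwh * electricity_rate

print(f"Facility-level savings: {facility_savings_kw:.1f} kW")
print(f"Annual savings: {annual_kwh:,.0f} kWh (about ${annual_dollars:,.0f})")
```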

The first step toward saving energy with IT equipment involves communication with the IT facility staff to determine the cost and feasibility of more efficient computing. Often, strategies for reducing IT equipment energy use save on server power, as well as cooling power, UPS power, space, and infrastructure capacity. These strategies include:

Turn off unused servers. About 30 percent of the nation’s servers are abandoned but still racked and consuming power, according to the Uptime Institute.

Replace older servers with higher-efficiency models. A faster server refresh rate could be justified by increased savings in energy costs. Energy Star provides resources that list high-efficiency servers. Managers should select servers with high-efficiency power supplies.

Consolidate and virtualize applications. Virtualization drastically reduces the number of servers in a data center, reducing required server power. Typical servers run at very low utilization levels — 5-15 percent on average — while drawing 60-90 percent of their peak power. Running multiple, independent virtual operating systems on one physical computer allows the same amount of processing to occur on fewer servers by increasing server utilization. A rough sizing sketch follows this list.

Consolidate hardware. Grouping equipment by the environmental requirements of temperature and humidity allows technicians to control cooling systems to the least energy-intensive setpoints for each location.
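To put rough numbers on the virtualization item above, the sketch below estimates how many physical hosts might remain after consolidation and the resulting power reduction. The server count, utilization levels, and power draws are assumptions for illustration; actual results depend on workload profiles and hardware.

```python
# Rough consolidation estimate (illustrative assumptions only).

physical_servers = 200     # assumed: current physical server count
avg_utilization = 0.10     # article cites 5-15 percent typical utilization
target_utilization = 0.60  # assumed: post-virtualization target per host
idle_power_w = 250         # assumed: lightly loaded draw (60-90 percent of peak, per article)
peak_power_w = 350         # assumed: peak draw per server

# Hosts needed if the same total work runs at the target utilization.
consolidated = max(1, round(physical_servers * avg_utilization / target_utilization))

before_kw = physical_servers * idle_power_w / 1000
after_kw = consolidated * peak_power_w / 1000  # remaining hosts run closer to peak

print(f"Servers after consolidation: {consolidated} (from {physical_servers})")
print(f"Estimated IT load: {before_kw:.1f} kW -> {after_kw:.1f} kW")
```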

Eye on the environment
The ASHRAE publication Thermal Guidelines for Data Processing Environments provides recommended ranges of server-inlet air temperature and humidity to ensure reliable server operation. ASHRAE developed these recommendations in collaboration with IT equipment manufacturers. The fourth edition of this publication, released in 2015, recommends an operating temperature up to 80.6 degrees and a relative humidity as low as 8 percent. Other recommendations include:

Reduce or eliminate humidification. Older computer room air conditioning (CRAC) units are packaged with electric steam humidifiers, which are inefficient and add heat to the cooling airstream. With ASHRAE now allowing very low levels of relative humidity, humidification is rarely needed. If humidification is essential, managers should consider replacing steam humidifiers with ultrasonic humidification.

Raise supply air temperatures toward the upper range of the ASHRAE thermal guidelines. The warmer the supplied cooling air, the more efficiently the cooling system can operate and the lower the likelihood of energy-intensive dehumidification.
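One way to act on these guidelines is to compare logged server-inlet readings against the recommended envelope. The sketch below uses the 80.6-degree upper temperature limit and 8 percent lower humidity limit cited above; the 64.4-degree lower temperature bound and 60 percent upper humidity bound are assumptions that should be verified against the edition and equipment class in use.

```python
# Check server-inlet readings against an assumed ASHRAE recommended envelope.
# The 64.4 F and 60 percent limits are assumptions; confirm against the current edition.

RECOMMENDED = {"temp_f": (64.4, 80.6), "rh_pct": (8.0, 60.0)}

readings = [  # hypothetical sensor data: (rack position, inlet temp F, relative humidity %)
    ("A1-low", 68.2, 35.0),
    ("A1-high", 82.1, 28.0),
    ("B4-mid", 74.5, 6.5),
]

for rack, temp, rh in readings:
    t_lo, t_hi = RECOMMENDED["temp_f"]
    rh_lo, rh_hi = RECOMMENDED["rh_pct"]
    issues = []
    if not (t_lo <= temp <= t_hi):
        issues.append(f"temp {temp} F outside {t_lo}-{t_hi} F")
    if not (rh_lo <= rh <= rh_hi):
        issues.append(f"RH {rh}% outside {rh_lo}-{rh_hi}%")
    status = "; ".join(issues) if issues else "within recommended envelope"
    print(f"{rack}: {status}")
```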

The air in there
Efficient indoor air delivery involves minimizing obstructions in the path of airflow, as well as minimizing the mixing between cooling air supplied to the IT equipment and the hot air rejected by that equipment. Strategies for achieving these goals include:

Implement cable management. Managers can institute a cable mining program to identify and remove abandoned or inoperable cables in the cooling air stream.

Ensure aisle separation. A basic hot aisle/cold aisle configuration is created when data center equipment is arranged in rows of racks with alternating cold aisles — the rack-air-intake side — and hot aisles — the rack-air-heat-exhaust side — between them. This configuration is necessary for preventing the mixing of cold supply air with hot server exhaust air. The greater the separation, the lower the airflow rate needs to be to cool the servers.

Plan for aisle containment. Managers can enhance the effect of a hot aisle/cold aisle configuration by isolating the two sides. They can have technicians install blanking panels and block holes between servers in the racks to prevent cooling air from bypassing the servers or hot exhaust air from short-circuiting back to the server air intakes. In a raised floor environment, technicians should seal off cable cut-outs in the raised floor tiles. More advanced methods involve isolating the cold aisle or hot aisle using plastic strip curtains or even rigid walls.

Use variable-speed-drive cooling-unit fans. Older packaged CRAC units are equipped with constant-speed motor fans and typically deliver far more air than the servers draw. But CRAC units can be retrofitted effectively with variable-speed drives. When implemented with better aisle containment, this measure ensures a better match between the air delivered for cooling and the air needed by the servers. A deeper retrofit involves replacing the CRAC unit’s fan-motor assembly with direct-drive, electronically commutated motor (ECM) plug fans. Besides adding variable-speed capability, ECM plug fans deliver more energy savings due to a more efficient motor, a more efficient fan, and no belt losses.
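Much of the savings from variable-speed fans follows from the fan affinity laws: airflow varies roughly with fan speed, while fan power varies roughly with the cube of speed. The sketch below illustrates the effect with assumed numbers; motor, drive, and belt losses will shift real-world results.

```python
# Illustrative fan affinity law estimate for a VFD retrofit on a CRAC fan.
# Power ~ speed^3 is an idealized relationship; actual savings will vary.

rated_fan_kw = 7.5      # assumed: constant-speed CRAC fan motor power
speed_fraction = 0.70   # assumed: fan slowed to 70 percent speed once airflow matches load

retrofitted_kw = rated_fan_kw * speed_fraction ** 3
savings_pct = (1 - retrofitted_kw / rated_fan_kw) * 100

print(f"Fan power at {speed_fraction:.0%} speed: {retrofitted_kw:.2f} kW "
      f"(about {savings_pct:.0f}% fan energy savings)")
```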

Cooling considerations
A range of cooling system types remove heat from the data center space and reject it outdoors. Managers can apply these operational strategies and retrofits:

Reconsider the house air system. If central building air handling equipment serves the data center, is that equipment forced to operate at night and on weekends when it otherwise could be turned off? If the answer is yes, it is usually more efficient over the course of the year to install a dedicated cooling system to serve only the data center. But there are exceptions for specific periods of the year when using the house system is advantageous.

For example, if the central air handler is equipped with an air-side economizer, then compressor-free cooling is available to the data center for a substantial number of hours each year. On the other hand, a dedicated cooling system for a data center with good air management can operate at supply-air temperatures of 70 degrees or higher, while the house system could not. A back-of-the-envelope comparison follows this list.

Retrofit or replace air-cooled heat rejection with evaporative-based heat rejection. Retrofit kits are available with evaporative media to pre-cool condenser intake air. Managers need to be sure to specify media with a negligible air-side pressure drop to avoid adding back condenser fan power. Dry fluid coolers can be replaced by open- or closed-loop cooling towers.

Install rack- and row-level cooling. Compared to water and refrigerant, air is a poor medium for heat transfer. The sooner heat is transferred from the servers to either water or refrigerant, the more inherently efficient the cooling system is. In data centers troubled by hot spots due to a rack or row of high-density servers, installing in-rack or in-row cooling modules allows for closer coupling of the cooling system to the servers and allows for a higher room-temperature setpoint.

Use water-side economizers for chilled-water systems. Installing cooling towers upstream of and in series with the chillers — whether air- or water-cooled — can reduce or eliminate the cooling load on the chillers for large parts of the year.
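For the house-air-system question earlier in this list, a back-of-the-envelope annual comparison can frame the decision. Every figure in the sketch below is an assumption for illustration; an actual evaluation should use measured loads, schedules, and local utility rates.

```python
# Rough annual comparison (illustrative assumptions only): keeping an oversized
# central air handler running off-hours for the data center vs. a dedicated unit.

HOURS_PER_YEAR = 8760
house_after_hours_kw = 40.0  # assumed: extra power to run the house system off-hours for the data center
after_hours_fraction = 0.65  # assumed: share of the year covering nights, weekends, and holidays
dedicated_unit_kw = 12.0     # assumed: dedicated unit sized to the data center load, running 24/7

house_kwh = house_after_hours_kw * HOURS_PER_YEAR * after_hours_fraction
dedicated_kwh = dedicated_unit_kw * HOURS_PER_YEAR

print(f"House system, off-hours only: {house_kwh:,.0f} kWh/yr")
print(f"Dedicated unit, year-round:   {dedicated_kwh:,.0f} kWh/yr")
print(f"Estimated difference:         {house_kwh - dedicated_kwh:,.0f} kWh/yr")
```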

Uninterruptible power supplies
UPS losses typically amount to 10-20 percent of IT equipment power. One question to ask the IT facility staff is whether all server equipment really needs UPS protection. Additional common approaches to reducing UPS losses include:

Right-size replacement UPS units. In lightly loaded data centers, the UPS might operate below a 30 percent load factor, where its efficiency drops off significantly. Managers can consider replacing the UPS with a smaller unit that operates at a higher load factor and, therefore, higher efficiency. A rough loss comparison follows this list.

Perform an energy-saver system retrofit. Many newer UPS are available with an eco-mode, which allows clean utility power to bypass power conditioning at 99 percent efficiency. In the event of poor power quality or a lack of utility power, the bypass switch moves the UPS back to power-conditioning mode. Some older UPS can be retrofitted with this energy-saver system.
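For the right-sizing and eco-mode items above, UPS losses can be estimated from the load factor and an efficiency curve. The loads and the efficiency figures in the sketch below are assumptions for illustration; actual curves come from manufacturer data.

```python
# Rough UPS loss comparison (illustrative assumptions only).

it_load_kw = 60.0
ups_capacity_kw = 300.0
load_factor = it_load_kw / ups_capacity_kw  # 0.20 -> lightly loaded

def assumed_efficiency(load_factor: float) -> float:
    """Very rough double-conversion efficiency curve (assumption, not manufacturer data)."""
    if load_factor < 0.30:
        return 0.88  # efficiency drops off at light load
    return 0.94

eco_mode_efficiency = 0.99  # article cites about 99 percent when conditioning is bypassed

for label, eff in [
    ("Oversized UPS, double conversion", assumed_efficiency(load_factor)),
    ("Right-sized UPS, double conversion", assumed_efficiency(0.60)),
    ("Eco-mode (bypass)", eco_mode_efficiency),
]:
    losses_kw = it_load_kw / eff - it_load_kw
    print(f"{label}: ~{losses_kw:.1f} kW of losses at {it_load_kw:.0f} kW of IT load")
```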

Monitoring and benchmarking
Submetering and temperature monitoring throughout a data center can provide valuable data, identifying the subsystems that warrant further energy-efficiency efforts. ASHRAE Standard 90.4, Energy Standard for Data Centers, establishes minimum energy-efficiency requirements for data centers using calculation methodologies that track mechanical load and electrical losses. These calculated values for a particular data center can then be compared to benchmark values based on climate zone. Other key metrics to consider are:

Power usage effectiveness (PUE). This metric is defined as the ratio of total data center input power to IT input power. One common estimate of IT input power is the UPS output, which is usually observable at the UPS panel. A PUE greater than 2.0 usually indicates significant room for improvement, while 1.5 is about average and less than 1.2 is excellent. Tracking PUE offers a simple way to follow energy-efficiency improvements in cooling systems, fan systems, and the electric power chain. A calculation sketch covering these metrics follows this list.

Server-intake-air temperature. Managers can deploy wireless temperature sensors at low, medium and high points on a server rack to compare server intake air temperatures to those recommended by ASHRAE.

Return temperature index (RTI). This metric is defined as the ratio of the average air-side temperature differential across the air conditioning units to the average air-side temperature differential across the server equipment. A value of 1.0 indicates that the air conditioning equipment is delivering exactly the amount of air the server equipment requires. An RTI less than 1.0 implies there is more cooling air available than necessary and that cooling air is bypassing the servers. An RTI greater than 1.0 implies a deficit of cooling air and that hot server exhaust is short-circuiting back to the server air intakes.

Cooling-system efficiency. The most common metric used to measure the efficiency of an HVAC system is the ratio of average cooling system power usage (kW) to the average data-center cooling load in tons. A system efficiency of less than 1.0 kW per ton is generally considered good practice.
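The metrics above can be computed directly from submetered power and temperature data. The sketch below walks through the arithmetic using hypothetical readings; none of the values represent a particular facility.

```python
# Hypothetical readings (assumptions for illustration).
total_facility_kw = 180.0  # utility input serving the data center
ups_output_kw = 100.0      # proxy for IT input power, read at the UPS panel
cooling_system_kw = 55.0   # CRAC/chiller compressors, pumps, and heat-rejection fans
cooling_load_tons = 45.0   # measured or estimated data center cooling load

# Air-side temperature differentials, degrees F.
crac_return_f, crac_supply_f = 85.0, 65.0     # across the air conditioning units
server_outlet_f, server_inlet_f = 95.0, 72.0  # across the server equipment

pue = total_facility_kw / ups_output_kw
rti = (crac_return_f - crac_supply_f) / (server_outlet_f - server_inlet_f)
kw_per_ton = cooling_system_kw / cooling_load_tons

print(f"PUE: {pue:.2f}  (>2.0 poor, ~1.5 average, <1.2 excellent)")
print(f"RTI: {rti:.2f}  (<1.0 bypass air, >1.0 recirculation)")
print(f"Cooling efficiency: {kw_per_ton:.2f} kW/ton  (<1.0 generally good practice)")
```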

John Bruschi, P.E., is a senior associate and energy performance engineer with Mazzetti.

Resources for Discussion
The U.S. Department of Energy (DOE) has developed a free, high-level energy profiling tool for data centers called DC Pro. The Data Center Energy Practitioner (DCEP) training program administered by the DOE provides training on using this tool, as well as more detail on carrying out an energy-efficiency assessment. Managers can check with the local electric utility, which might have a program that offers a free energy assessment of a data center along with incentives to partially offset the cost of retrofit projects. Additional resources include:

Uptime Institute Comatose Server Savings Calculator
Energy Star Servers 
ASHRAE Datacom Series, including Thermal Guidelines for Data Processing Environments and Standard 90.4
Center of Expertise for Energy Efficiency in Data Centers
Data Center Energy Practitioner Training
Improving Energy Efficiency for Server Rooms and Closets
Data Center Metering and Resource Guide
Best Practices Guide for Energy-Efficient Data Center Design




posted on 10/18/2016