Ten Tips To Make A Legacy Data Center More Energy Efficient





By Marcus Hassen  


Many companies are aggressively pursuing energy-saving strategies, coalescing around a new marketplace imperative to go green, buy green, and sell green — a message that often comes from the boardroom. Until recently, however, data centers remained a special case. The typical mission critical facility is a large consumer of energy, representing the lion’s share of facility operating costs. But reliability and availability requirements have long hampered efforts to improve energy efficiency.

In reality, these goals are not mutually exclusive. In fact, as power densities push the limits of what cooling technologies can support, energy efficiency can now be a first-step solution.

Many data center energy-efficiency strategies are easiest to implement in new construction, where they have the potential to substantially reduce the total cost of ownership. But facility executives responsible for a 10- to 15-year-old data center facility can also win in the efficiency game by pursuing a variety of energy-saving techniques. Doing so also positions facility executives as valued partners to top management in meeting the company’s wider goal of promoting sustainability.

Because data centers run 24/7/365, even small improvements can achieve significant energy savings. Substantial capital commitments or massive budgets are not required, and modest investments can often be recouped quickly. The following 10 items offer a starting point, concentrating on systems, equipment and controls that are common to legacy data center operations.

1. Maintain Underfloor Pressure. To begin, seal all unnecessary openings in the raised floor. Common uncontrolled openings include structured cabling cutouts underneath cabinets, openings at the building columns and gaps at the perimeter building envelope. Properly sealing these areas will make it easier to maintain underfloor air pressure, and reduce the strain on the mechanical systems while conserving the cold supply air for its intended use — to cool the IT equipment.
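The orifice equation offers a rough way to gauge what each unsealed opening costs in bypass air. The sketch below is a back-of-the-envelope estimate only; the opening size, plenum pressure and discharge coefficient are assumed illustrative values, not measurements from any particular site.

```python
import math

# Rough estimate of cold-air leakage through one unsealed raised-floor
# opening, using the standard orifice relation Q = Cd * A * sqrt(2*dP/rho).
# All default values below are illustrative assumptions.

def leakage_cfm(width_in, height_in, underfloor_pa=12.5, cd=0.61, rho=1.2):
    """Airflow (cfm) through a rectangular opening at a given plenum pressure."""
    area_m2 = (width_in * 0.0254) * (height_in * 0.0254)   # inches -> m^2
    velocity = math.sqrt(2 * underfloor_pa / rho)          # m/s
    return cd * area_m2 * velocity * 2118.88               # m^3/s -> cfm

# A single 6 in. x 8 in. cable cutout at about 0.05 in. w.c. (~12.5 Pa):
print(f"{leakage_cfm(6, 8):.0f} cfm of bypass air")        # roughly 180 cfm
```

At roughly 180 cfm per cable cutout, even a modest number of unsealed openings can divert a large share of a CRAC unit’s supply air away from the racks.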

2. Properly Implement Hot Aisle/Cold Aisle Concepts. In a data center’s hot aisle/cold aisle configuration, cold supply air from the underfloor plenum is brought into the cold aisle at the front of each server rack via perforated floor tiles. These cold aisles should be kept separate from the hot discharge air in the hot aisles at the back of each rack.

To optimize performance, the proper quantity of perforated tiles must be placed only in the cold aisles. It is also important to provide a direct path for return air from above each hot aisle to the computer room air conditioning (CRAC) units. Proper tile placement — coupled with sealing openings in the raised floor — will keep the hot rack discharge air from improperly mixing with cold supply air entering the racks. It is also not advisable to mix perforated tiles with varied percentages of free area in an attempt to generate higher supply airflow rates. In fact, too much flow (typically from grate tiles) may actually result in the supply air bypassing the server inlets. Instead, standard free area perforated tiles with air flow rates governed by underfloor air pressure should be sufficient for most rack densities.
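A quick sensible-heat calculation shows why standard tiles usually suffice. The sketch below assumes a 20 degree F temperature rise across the servers and about 500 cfm delivered per 25 percent free-area tile at typical underfloor pressure; both figures are illustrative assumptions rather than catalog data.

```python
import math

# How many standard perforated tiles does a rack need? Uses the
# sensible-heat relation q(BTU/hr) = 1.08 * cfm * dT(F).

def tiles_per_rack(rack_kw, delta_t_f=20.0, cfm_per_tile=500.0):
    btu_hr = rack_kw * 3412.14                  # kW -> BTU/hr
    cfm_needed = btu_hr / (1.08 * delta_t_f)    # supply air the servers draw
    return cfm_needed, math.ceil(cfm_needed / cfm_per_tile)

cfm, tiles = tiles_per_rack(5.0)
print(f"5 kW rack: ~{cfm:.0f} cfm, {tiles} standard tiles")  # ~790 cfm, 2 tiles
```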

Placing return duct extensions on the CRAC units or return grilles over the hot aisles will provide a path for hot air to return to the CRAC units without elevating the cold air supply temperature delivered to the racks. This strategy costs little to implement because return grilles are inexpensive and appropriate perforated tile placement only requires periodically reviewing arrangements and identifying optimum placement for current space loads.

3. Re-evaluate HVAC System Operating Fundamentals. Adjusting existing controls procedures can be among the most economical and easily implemented energy efficiency strategies in a legacy data center. A common computer room mantra of the past had been “the colder, the better.” When originally designed, many data center computer rooms aimed to maintain a uniform temperature of 72 degrees F. Until about 10 years ago, there was no distinction between hot aisles and cold aisles. Data center users operated under the mistaken assumption that a uniform temperature should — and could — be maintained throughout the space.

Thanks to the ongoing efforts of industry organizations to develop standards (for example, ASHRAE Technical Committee TC 9.9 and its “Thermal Guidelines for Data Processing Environments,” or Telcordia GR-3028-CORE), facility executives have gradually recognized that the only place in the computer room where temperature matters is at the supply air inlet to the computer equipment itself. The server inlet temperature range currently recommended by TC 9.9 falls between 64.4 degrees F and 80.6 degrees F.

Working back from this range, it becomes crucial to consider how mechanical systems are configured to deliver cool air to the computer load. With the new emphasis on maintaining the cold aisle temperature within the wider thermal range, supply air temperatures can be raised, and chilled water supply temperature set points can be increased as well. Raising the chilled water loop’s supply temperature, and thereby reducing the difference between the chilled water and condenser water loop temperatures, decreases the chiller’s workload while still cooling the same amount of fluid. For example, raising the chilled water supply temperature on a generic chiller from 44 degrees F to 54 degrees can reduce the chiller’s power requirement from 0.62 kW/ton to 0.48 kW/ton.
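Using the kW/ton figures above, the annual impact is easy to estimate. In the sketch below, the 400-ton load and $0.10/kWh utility rate are assumptions chosen for illustration.

```python
# Annual savings from the kW/ton improvement cited above (0.62 -> 0.48
# kW/ton when chilled water supply rises from 44 F to 54 F). Load and
# utility rate are illustrative assumptions.

HOURS_PER_YEAR = 8760

def chiller_savings(load_tons, kw_per_ton_before, kw_per_ton_after,
                    dollars_per_kwh=0.10):
    kwh_saved = (kw_per_ton_before - kw_per_ton_after) * load_tons * HOURS_PER_YEAR
    return kwh_saved, kwh_saved * dollars_per_kwh

kwh, dollars = chiller_savings(400, 0.62, 0.48)
print(f"{kwh:,.0f} kWh/yr, about ${dollars:,.0f}/yr")  # ~490,560 kWh, ~$49,056
```

Because the change is largely a set point adjustment, nearly all of this saving falls straight to the bottom line.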

4. Optimize Mechanical Systems. Old mechanical equipment can be a significant energy hog. But purchasing more efficient equipment may not fit into the coming year’s capital plan. An alternative option: Optimize the performance of existing systems.

Many data centers employ equipment that operates at constant volume, such as CRAC units, pumps, fans and chillers that run continuously 24/7/365, consuming enormous amounts of energy. If motors on such equipment can be retrofitted with variable frequency drives (VFDs), and control sequences can be adjusted so that equipment groups operate at part load while maintaining set points, it is possible to realize significant energy savings. A decrease of just 10 percent in fan speed may result in as much as a 27 percent reduction in energy use, while also prolonging the life of the equipment.
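That figure comes from the fan affinity laws, under which fan power varies with the cube of speed. A minimal demonstration:

```python
# The fan affinity laws say power scales with the cube of speed, which is
# where the "10 percent slower, ~27 percent less energy" figure comes from.

def fan_power_fraction(speed_fraction):
    """Fraction of full-speed fan power at a given speed (affinity laws)."""
    return speed_fraction ** 3

savings = 1 - fan_power_fraction(0.90)
print(f"Running at 90% speed cuts fan power by {savings:.0%}")  # about 27%
```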

A common first step is to retrofit the CRAC unit supply fan motors with VFDs and control all the computer room CRAC units in unison so that only the airflow needed to meet underfloor pressure set point is supplied, conserving energy while still meeting cooling requirements. The savings can provide a payback in less than a year in many parts of the country.
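Whether a given retrofit clears a one-year payback is a simple check once the installed cost and annual savings are known. The figures below are placeholders; actual numbers depend on motor sizes, utility rates and local labor costs.

```python
# Simple payback for a retrofit such as CRAC fan VFDs. Both inputs are
# placeholder assumptions for illustration only.

def simple_payback_years(install_cost, annual_savings):
    return install_cost / annual_savings

print(f"{simple_payback_years(20_000, 25_000):.1f} years")  # 0.8 years
```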

5. Identify Strategic Locations for Thermostats. In most legacy data centers, thermostats are installed in the CRAC return air stream, where air flow may be unpredictable. This can result in uneven CRAC unit loading, which in turn leads to variations in server inlet temperatures. Relocating the thermostats to the supply air stream, where discharge air can be controlled, will provide more uniform underfloor and server inlet temperatures. Relocating the thermostat also enables discharge air temperatures to be increased with more control and accuracy. Pairing this strategy with the airside and waterside shifts in Tip No. 3 will further conserve chiller energy while still providing acceptable supply air temperature to servers.
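Conceptually, the relocated sensor lets the CRAC control loop act directly on the temperature the servers receive. The sketch below shows one incremental (floating) control step of the kind a CRAC controller might execute; the set point, gain and interfaces are all hypothetical.

```python
# One incremental (floating) control step on a CRAC chilled water valve,
# driven by the supply air sensor rather than the return. Set point and
# gain are assumed values for illustration.

SUPPLY_SETPOINT_F = 68.0
GAIN = 5.0   # percent valve travel per degree F of error (assumed)

def valve_step(measured_supply_f, current_valve_pct):
    error = measured_supply_f - SUPPLY_SETPOINT_F   # positive means too warm
    command = current_valve_pct + GAIN * error      # open the valve when warm
    return max(0.0, min(100.0, command))            # clamp to 0-100 percent

print(valve_step(measured_supply_f=70.0, current_valve_pct=40.0))  # 50.0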

6. Convert to Two-Way Valve CRAC Units (Variable Volume Pumping System). Converting an existing three-way control valve to a two-way valve on a CRAC unit will reduce the overall energy consumption of the chilled water pumps. Traditionally, three-way valves have been used in constant volume pumping applications to route the chilled water required by the set point through the CRAC’s cooling coil while bypassing the rest to the chilled water return piping, wasting considerable pumping energy. The conversion can be made by replacing the three-way valve or by plugging its bypass piping.

In a variable volume pumping configuration, the CRAC unit’s two-way valve modulates as necessary to maintain the discharge supply air temperature set point, so the chilled water pump’s VFD only needs to ramp up or down based on deviation from the differential pressure set point between the chilled water supply and return. When put into place on a system with VFDs, this strategy will further reduce pumping flow rates, runtime and electricity usage.
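The pump affinity laws suggest the scale of the opportunity: pump power varies roughly with the cube of flow. In the sketch below, the 15 kW pump and 60 percent average flow are assumed values, and real savings will be somewhat smaller because the differential pressure set point keeps some head on the system.

```python
# Rough comparison of constant-volume (three-way valve) pumping against
# variable-volume (two-way valve + VFD) pumping, using power ~ flow^3.
# Pump size and average flow fraction are assumed values.

HOURS_PER_YEAR = 8760

def annual_pump_kwh(rated_kw, avg_flow_fraction=1.0):
    return rated_kw * (avg_flow_fraction ** 3) * HOURS_PER_YEAR

constant = annual_pump_kwh(15.0)          # three-way valves, full flow always
variable = annual_pump_kwh(15.0, 0.60)    # two-way valves, VFD trims flow
print(f"{constant:,.0f} vs {variable:,.0f} kWh/yr")  # ~131,400 vs ~28,382
```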

7. Harness the BAS. No longer are building automation systems only for monitoring purposes. Standard today are graphical user interfaces at the front end that display detailed systems flow and equipment control diagrams. A newer BAS can communicate the important parameters of a data center in real time and at a high resolution, enabling the operator to fully visualize systems performance over any operating period.

The programming capabilities, processing speeds and response times of today’s BAS make implementing some of the control strategies presented here possible in the legacy data center. These features allow a multitude of central plant and electrical distribution system parameters to be gathered, from raised floor temperatures and pressures at precise points to the computer floor’s total energy use on a constant basis. From the operator’s workstation, these readings can be used to calculate power usage effectiveness (PUE) and perform iterative adjustments over time to tune the mechanical systems to operate most efficiently. Using the BAS to log and trend power consumption, equipment runtimes and current cooling capacities will help facility executives understand the present state of operations, discover where energy is being wasted and determine optimal systems settings to operate the plant as efficiently as possible.
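PUE itself is a simple ratio that BAS trend data makes straightforward to compute: total facility energy divided by IT equipment energy over the same interval. The meter totals below are hypothetical examples.

```python
# Power usage effectiveness from two BAS-trended totals over the same
# period: the facility's utility input and the IT load (e.g., UPS output).
# The example readings are hypothetical.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# Example: one day of trend data from the two meters.
print(f"PUE = {pue(total_facility_kwh=48_000, it_equipment_kwh=24_000):.2f}")  # 2.00
```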

8. Replace the Make-Up Air Handler. Replacing an older make-up air handler is a relatively low-cost equipment upgrade that can improve energy efficiency and more. Significant advances in direct expansion (DX) equipment over the past 10 to 15 years, including digitally controlled scroll compressors and improved part-load performance, make both first cost and operating expenses reasonable. Selecting a new high-efficiency, 100 percent outside air DX unit yields immediate energy savings and heat transfer efficiency gains. A new unit will also meet the latest building code ventilation requirements that older air handlers may not, and may be able to provide full airside economizer capability (depending on the local climate, proper damper control and the sequence of operations).

More importantly, investing in a dedicated outside air system will ensure that only a small, measured quantity of outside air enters the building. This air can be heated or cooled, humidified or dehumidified. The requirements for space pressurization, minimum ventilation and humidity control are therefore all addressed by the make-up air handling unit, allowing the large CRAC units serving the data space to operate at higher, more efficient temperatures.
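An airside economizer sequence in such a unit reduces, at its core, to a comparison of outdoor and return conditions. The sketch below is a simplified version of that decision; the deadband and dew point limit are assumptions, and a real sequence would also handle enthalpy, filtration and minimum damper positions.

```python
# Simplified airside economizer enable check of the kind a new unit's
# sequence of operations might implement. Thresholds are assumptions.

def economizer_enabled(outdoor_f, return_f, outdoor_dewpoint_f,
                       max_dewpoint_f=59.0):
    cooler_outside = outdoor_f < return_f - 2.0   # 2 F deadband (assumed)
    dry_enough = outdoor_dewpoint_f <= max_dewpoint_f
    return cooler_outside and dry_enough

print(economizer_enabled(outdoor_f=55.0, return_f=85.0,
                         outdoor_dewpoint_f=45.0))  # True: free cooling
```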

9. Extend Energy Efficiency to IT Buying Decisions. IT equipment is typically refreshed every two to three years, often multiple times before major improvements are even planned for data center infrastructure systems. Given that the technology equipment is by far the largest consumer of power in the data center, energy efficiency has quickly become a factor in IT procurement decisions.

On the software side, server virtualization can increase the utilization rates of a typical server threefold, making the data center more energy efficient by requiring fewer servers overall to manage the same computing load. A 60 percent reduction in the number of servers represents about 40 percent savings in energy usage.
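The consolidation arithmetic depends on the server power curve, which is often approximated as linear between idle and peak draw. All parameters in the sketch below are illustrative assumptions; actual savings vary with how much power the hardware draws at idle.

```python
# Consolidation savings under a common linear server power model
# P(u) = P_idle + (P_max - P_idle) * u. Every number is an assumed
# illustration, not vendor data.

def server_watts(util, p_idle=150.0, p_max=300.0):
    return p_idle + (p_max - p_idle) * util

before = 100 * server_watts(0.10)      # 100 servers at 10% utilization
after = 40 * server_watts(0.25)        # same work on 40 servers at 25%
print(f"{before/1000:.1f} kW -> {after/1000:.1f} kW "
      f"({1 - after/before:.0%} reduction)")   # 16.5 kW -> 7.5 kW (55%)
```

Because servers draw a large share of their peak power even at idle, retiring machines saves more energy than merely raising utilization on the survivors.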

10. Replace or Retrofit Inefficient Computer Room Lighting Systems. In some legacy data centers, replacing or retrofitting present lighting fixtures may yield energy savings with attractive paybacks. Lighting controls should also be examined (or installed if not present) so that computer room lighting cycles can be set to match occupancy schedules.

Another strategy is to control all lighting fixtures with occupancy sensors, except those required for emergency and egress lighting. Some computer rooms with a relatively high slab-to-structure height may have been designed with metal-halide, high-bay fixtures. Replacing these with fluorescent fixtures fitted with T5HO lamps will not only save energy by allowing automatic controls to be used, but also improve lighting levels in the computer room.
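A rough estimate shows the scale of the opportunity in a lightly staffed computer room. The fixture count, wattage and occupancy hours below are assumptions for illustration.

```python
# Savings from occupancy-based control versus lights running 24/7/365.
# Fixture count, wattage, and occupancy hours are assumed values.

HOURS_PER_YEAR = 8760

def lighting_kwh_saved(fixtures, watts_each, occupied_hours_per_year):
    always_on = fixtures * watts_each * HOURS_PER_YEAR / 1000
    controlled = fixtures * watts_each * occupied_hours_per_year / 1000
    return always_on - controlled

# 60 two-lamp T5HO fixtures (~120 W each), occupied ~20 hours a week:
saved = lighting_kwh_saved(60, 120, occupied_hours_per_year=20 * 52)
print(f"{saved:,.0f} kWh/yr saved")  # ~55,584 kWh
```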

Data center owners are currently directing their technology and facility management teams to reduce energy consumption while maintaining the necessary reliability. Existing data center operators are major players in this movement. By using these 10 approaches, the legacy data center can be a leader in the effort to apply energy efficient practices throughout a company’s mission critical portfolio.
