fnPrime



Tactics to Boost Efficiency, Not Data Center Downtime



A range of tactics can increase the energy efficiency of a data center without reducing availability


By Christopher M. Johnston  


Managers of data centers are finding themselves increasingly under pressure to improve the energy efficiency of their facilities. Existing data centers present the greatest challenge because of their older infrastructure and restrictions on downtime to make changes. However, there are proven strategies that can enable a data center to increase energy efficiency without jeopardizing availability.

Before considering those options, it’s important to define the kind of data center that might benefit from those measures. These are data centers in which infrastructure availability meets the owner’s needs and reducing availability is not a viable option. What’s more, in these data centers the existing electrical and mechanical infrastructure will remain in service and will not be replaced. These data centers distribute cool air through a raised access floor. Finally, the discussion also assumes that the data center uses large double-conversion UPS systems.

If all of the above apply, facility executives have two options for saving electricity by turning things off: shutting down excess component redundancy or switching off lights.

Shutting down redundancies – This tactic applies if the facility has a single UPS system with more than one redundant module on line. In that case, all but one redundant module can be shut down. When using this practice, offline modules should be rotated to maintain storage battery charge.

Consider a facility that has a UPS output load of 500 kW with three 675 kW modules online. Each module operates at about 25 percent of rated capacity (500 kW/3 modules/675 kW per module); at this percent of rated capacity, each module operates at 88 percent efficiency. If one module is shut down, the other two modules will operate at 37 percent of capacity and 91 percent efficiency. At $0.10/kWh for purchased electricity and 8,760 hours per year, this generates annual savings of $17,520. If the facility has redundant UPS systems, and all computer equipment can be supplied from either system — 2N or System plus System arrangements — all redundant modules can be shut down on each system.
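The arithmetic behind this example can be checked with a short script. This is a sketch using the rounded 88 and 91 percent efficiencies quoted above, so the computed figure lands near, though not exactly on, the savings cited in the article:

```python
def ups_input_kw(load_kw, efficiency):
    """Utility power drawn by a double-conversion UPS serving a given IT load."""
    return load_kw / efficiency

LOAD_KW = 500.0        # UPS output load from the example
HOURS_PER_YEAR = 8760
RATE = 0.10            # $/kWh for purchased electricity

before = ups_input_kw(LOAD_KW, 0.88)  # three modules online, ~25% loading each
after = ups_input_kw(LOAD_KW, 0.91)   # two modules online, ~37% loading each

annual_savings = (before - after) * HOURS_PER_YEAR * RATE
print(f"Annual savings: ${annual_savings:,.0f}")  # roughly $16,000-$17,000
```

With these rounded efficiencies the input power drops by about 19 kW, which is the entire source of the savings: the IT load itself never changes.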

Another example: A facility has a 1,000 kW UPS output load that is shared between two UPS systems with three 675 kW modules online in each system. Each system operates at 88 percent efficiency. If one module is shut down in each system, each system's operating efficiency increases to 91 percent. At $0.10/kWh for purchased electricity, this amounts to $33,200 in savings per year.

Reducing lighting use — If data center and support spaces are unoccupied for extended periods of time, energy costs can be reduced by turning off all lighting except for life safety and egress illumination. Using occupancy sensors or time switches will ensure operating and maintenance staff are not inconvenienced when accessing the facility. For example, in a 10,000-square-foot area with a lighting load density of 1.5 watts per square foot, if on average two thirds of that load were avoided, savings of $8,760 per year could be realized. This is based on 8,760 hours per year and $0.10/kWh for purchased electricity.
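The lighting calculation works out as follows, using only the numbers given above:

```python
AREA_SQFT = 10_000
WATTS_PER_SQFT = 1.5
FRACTION_AVOIDED = 2 / 3   # two thirds of the lighting load switched off on average
HOURS_PER_YEAR = 8760
RATE = 0.10                # $/kWh for purchased electricity

avoided_kw = AREA_SQFT * WATTS_PER_SQFT * FRACTION_AVOIDED / 1000  # ~10 kW avoided
annual_savings = avoided_kw * HOURS_PER_YEAR * RATE
print(f"Annual savings: ${annual_savings:,.0f}")  # $8,760
```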

Increasing Cooling Efficiency

When it comes to increasing cooling efficiency, facility executives have several more options.

Using dedicated outside air units — Providing a dedicated outside air unit for each computer room will satisfy ventilation and pressurization requirements and allow operators to control the relative humidity of the computer room. Maintaining humidity levels between 40 and 55 percent not only allows for very efficient humidification strategies, it also ensures conformance with the current best practices detailed in “Thermal Guidelines for Data Processing Environments,” published by ASHRAE Technical Committee 9.9 (ASHRAE TC 9.9).

Whether a facility uses computer-room air conditioning (CRAC) units, computer room air handling (CRAH) units or air handling units, the mere fact that each unit has its own humidification and reheat capability means the units will inevitably “fight” due to the variations inherent in their controls and in the relative humidity readings across the technology space. “Fighting” means that one unit may be dehumidifying while the adjacent unit is humidifying, thus wasting energy. Relative humidity varies with temperature, and temperature varies with the location where it is sensed or measured — below a raised access floor, in a cold aisle, in a hot aisle or at a unit's return air intake — and with the temperature stratification that exists within each area.

Using humidity sensors that measure dew point and controlling to an absolute humidity (dew point) with the dedicated outside air unit eliminates this “fighting” and provides consistent humidity values that can be measured independently of local conditions. It also ensures conformance with current best practices in ASHRAE TC 9.9. Once the humidity control is transferred to the dedicated outside air unit, reheat and humidification in existing CRAC or CRAH units can be disabled by turning off the humidifier water supply and pulling humidifier and reheat fuses within the unit.
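A quick way to see why relative humidity sensors disagree while a dew point reading does not: the sketch below uses the Magnus approximation (an assumption for illustration, not something specified in the article) to show that a single fixed dew point, i.e., one moisture content for the whole room, corresponds to very different RH readings at cold-aisle and hot-aisle temperatures.

```python
import math

# Magnus approximation coefficients (a common form, valid roughly 0-60 degrees C)
A, B = 17.62, 243.12

def rh_at(dry_bulb_c, dew_point_c):
    """Relative humidity (%) implied by a fixed dew point at a given dry-bulb temperature."""
    return 100.0 * math.exp(A * dew_point_c / (B + dew_point_c)
                            - A * dry_bulb_c / (B + dry_bulb_c))

DEW_POINT_C = 10.0  # one absolute moisture level throughout the space

cold_aisle_rh = rh_at(20.0, DEW_POINT_C)  # 68 F dry bulb: roughly 53% RH
hot_aisle_rh = rh_at(35.0, DEW_POINT_C)   # 95 F dry bulb: roughly 22% RH
print(f"Cold aisle {cold_aisle_rh:.0f}% RH, hot aisle {hot_aisle_rh:.0f}% RH")
```

Two units whose RH sensors sit in these two locations would read 53 percent and 22 percent even though the air contains exactly the same amount of moisture, which is why one may humidify while its neighbor dehumidifies. A single dew point setpoint removes that ambiguity.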

Moving CRAC unit, CRAH unit or air handling unit temperature sensors away from the return air location to the supply airstream is a strategy for maintaining a constant leaving air temperature. Manufacturers will typically position control sensors in the return airstream. This often leads to uneven unit loading, disparate underfloor supply air temperatures and variation in server inlet temperatures because return air conditions tend to differ for each unit. Controlling to a preset supply air temperature provides thermal uniformity below the raised floor, creating a more stable temperature at server inlets and increasing discharge air temperatures. Combined, all of the above translate into energy savings.

If the facility uses chilled water — If chilled water is used for cooling, raise the chilled water supply temperature as high as practical while ensuring that air entering the servers in the cold aisle is between 68 F and 77 F. This will reduce the chilled water production energy. However, this may not be a practical solution in facilities where the chilled water supply is shared with office space since the office spaces require lower chilled water temperatures to dehumidify.

A word of caution: Discharge air temperature should be increased in small, incremental steps while carefully monitoring server inlet air conditions. Most legacy cooling systems deliver less air to the cold aisle than the combined server inlets demand. The resulting hot air recirculation (from the hot aisle back into the cold aisle) mixes with the cold underfloor supply air, and it is this mixed air that keeps server inlet temperatures within the 68 to 77 F range recommended by ASHRAE TC 9.9.

Increasing the temperature of the underfloor supply air may require increasing supply air quantity to ensure that server inlet air conditions do not exceed the recommended 68 to 77 F range, especially nearer the top of IT racks and cabinets. The additional fan energy used is more than offset by the energy saved when the chillers and compressors are operated at higher leaving water temperatures. On the other hand, if the supply air temperature can be raised without forcing any server inlet temperature higher than 77 F, the supply air quantity should not be increased. As a result, the facility operator should be prepared to witness higher hot aisle and return air temperatures. This is normal and does not constitute a “hot spot.”

Also, when utilizing chilled water for cooling in units with 3-way chilled water control valves, it helps to convert them to 2-way by inserting a normally closed isolation valve in the bypass line. This can reduce the chilled water flow rate as well as pumping energy, especially when the chilled water pumps are controlled by variable frequency drives.

It makes good sense to ensure that cold supply air is used only to cool the computer equipment. This can be achieved by placing supply air tiles in the raised floor only in cold aisles. Openings in the raised floor around walls, ramps and columns as well as cable and wiring penetrations of the raised access floor should be sealed to conserve the cold air supply for the cold aisles. For the same reason, penetrations of the raised access floor beneath computer equipment cabinets and electrical equipment should be sealed.

Energy Savings From Fans

It is a common misconception that reducing the number of CRAC units operating (in a variable fan speed arrangement) will save energy compared with operating all CRACs at a reduced fan speed. Fan power is proportional to the cube of fan speed, so even slight drops in fan speed can result in significant energy savings. In other words, 8 CRACs operating at 100 percent speed will consume more energy than 10 CRACs operating at 80 percent speed. Retrofitting VFDs in all CRACs, CRAHs and air handling units can help control supply air flow rate and maintain a constant static pressure beneath raised access floors.
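The affinity-law arithmetic behind the 8-versus-10-unit comparison can be sketched as follows. Both arrangements move the same total airflow (8 x 100 percent equals 10 x 80 percent), so only the cube law changes the power:

```python
def relative_fan_power(n_units, speed_fraction):
    """Total fan power in 'one unit at full speed' terms, using the affinity law P ~ N^3."""
    return n_units * speed_fraction ** 3

fewer_units = relative_fan_power(8, 1.0)   # 8 units at full speed: 8.00
more_units = relative_fan_power(10, 0.8)   # 10 units at 80% speed: 5.12
savings_pct = 100 * (1 - more_units / fewer_units)
print(f"Running 10 units at 80% speed uses {savings_pct:.0f}% less fan energy")
```

Each unit at 80 percent speed draws only 0.8 cubed, or about half, of its full-speed power, so ten slowed units beat eight full-speed units by roughly a third.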

If the supply air flow rate exceeds the required amount — a small amount of bypass air is preferable to recirculation, but excessive bypass air is wasteful — it may be necessary to reduce the supply air flow rate. Before going ahead with that, make sure it is safe to turn off one or more cooling units while retaining the necessary unit redundancy (assuming that the unit fans are constant speed). If fans are variable speed, monitor the highest server inlet conditions while reducing the air flow. Do not reduce the air flow beyond the point where the highest server inlet temperature exceeds 77 F as recommended by ASHRAE TC 9.9.

 

Savings From On High

Facility executives can trim cooling costs by taking advantage of the space above the racks.

If there’s a suspended ceiling, consider installing a return extension on each cooling unit to above the suspended ceiling and providing return grilles with a large free area above the entire hot aisle. This increases the overall effective height of the data room, separating the warm air from the remainder of the room via use of the ceiling plenum and minimizing stratification below the suspended ceiling. However, complete removal of the ceiling plenum will provide even better performance because of the natural buoyancy of the warm air.

And if there isn’t a suspended ceiling, or if there is at least 10 feet of clear height between the raised access floor and the structure above, installing return duct extensions may make sense. These extensions will permit stratification of hot return air and should open about two feet below the underside of the structure; a good computational fluid dynamics analysis of the room should indicate the stratification depth and enable a skilled designer to determine optimum opening height.

— Christopher M. Johnston

Christopher M. Johnston, PE, is senior vice president and chief engineer, national critical facilities team, for Syska Hennessy Group, a national consulting, engineering, technology and construction firm. His 37 years of experience includes critical facility projects for corporate and institutional clients.





posted on 8/1/2008



