Small Steps Lead to Big Data Center Efficiency Gains
When master improvement programs are developed for data centers, the recommendations often target ways to improve reliability and detail a plan to implement current industry best practices. Today, master improvement plans are being broadened to address energy efficiency as well. That approach makes sense. Steps to improve energy efficiency can also bolster reliability — for example, when evaluating and optimizing air distribution.
A case in point is a 30,000-square-foot data center for a Fortune 500 retailer. The data center, located in the Southeast, does not have a Tier certification, but it would fall between a Tier II and Tier III facility, approaching the latter. Facility executives and their engineering team looked for energy efficiency opportunities among systems, equipment, controls and IT equipment most common to many data center operations. The goal was to find improvements with a payback of three years or less.
The starting point was a top-to-bottom review of central plant control strategies. That analysis produced a range of strategies, including:
- Identifying optimum locations for thermostats serving existing computer room air conditioning (CRAC) units.
- Converting a constant chilled water pumping scheme to a variable flow scheme (the energy case is illustrated in the sketch following this list).
- Replacing existing CRAC units with more efficient equipment, including deployment of dual-fed units and variable frequency drive fans.
- Developing airside and waterside control strategies to improve the ability to deliver cool air to critical loads or to extend existing capacities to allow greater load densities.
- Replacing the legacy building management system with a newer, more robust system that would facilitate better integration of trended data.
- Replacing existing rooftop units with newer, more efficient dedicated outside air units selected to meet the most stringent of the requirements for minimum ventilation air, pressurization and humidity control.
- Investigating maintenance of underfloor pressure and how it could be improved by sealing openings.
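The pumping-energy case for the variable flow conversion rests on the pump affinity laws: pump power falls roughly with the cube of flow, so matching flow to load at part-load conditions can yield outsized savings. The sketch below is illustrative only; the pump size, load profile and ideal cube-law behavior are assumptions, and real savings depend on pump curves, static head and drive losses.

```python
# Illustrative estimate of pumping-energy savings from variable chilled water flow.
# Assumes ideal affinity-law behavior (power ~ flow^3) and a hypothetical 50 kW pump;
# real savings depend on pump curves, static head, and drive losses.

RATED_PUMP_KW = 50.0          # hypothetical constant-speed pump power
HOURS_PER_YEAR = 8760

# Hypothetical load profile: (fraction of design flow, fraction of annual hours)
load_profile = [(1.00, 0.10), (0.80, 0.30), (0.60, 0.40), (0.40, 0.20)]

constant_kwh = RATED_PUMP_KW * HOURS_PER_YEAR
variable_kwh = sum(
    RATED_PUMP_KW * (flow ** 3) * frac * HOURS_PER_YEAR
    for flow, frac in load_profile
)

savings = constant_kwh - variable_kwh
print(f"Constant flow: {constant_kwh:,.0f} kWh/yr")
print(f"Variable flow: {variable_kwh:,.0f} kWh/yr")
print(f"Estimated savings: {savings:,.0f} kWh/yr ({savings / constant_kwh:.0%})")
```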
In addition, the raised floor perforated tile arrangement and quantities were reviewed and modeling was performed to identify optimal placement to match present space loads. This model would later be used to develop a roadmap to handle future load increases and to address projected higher-density areas.
Facility infrastructure wasn’t the only place the data center looked for energy efficiency opportunities. Working with IT, the operations team developed a new prototype equipment row intended to maximize cabinet use while reducing electrical and cooling requirements of future equipment deployments.
Taking Action
Facility executives had followed smart practices since the facility opened, including hot/cold aisle design and layout, as well as data monitoring and trending software to capture operating trends. The ability to capture data was crucial because it enabled the operations team to present a stronger case for upgrades.
Upgrades had to pass stringent tests: Measures couldn’t require substantial capital investment; in-house operations staff, vendors or engineering consultants had to be able to implement them relatively quickly; and the actions had to deliver long-term, measurable results with a relatively short payback period. A range of upgrades passed those tests.
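As a rough illustration of that screening math, a simple payback test divides implementation cost by expected annual savings and keeps only measures that pay back within three years. The measures named below come from this article, but the dollar figures are hypothetical placeholders, not project costs.

```python
# Simple payback screen for candidate measures (hypothetical costs and savings).
PAYBACK_LIMIT_YEARS = 3.0

candidates = {
    "Seal raised-floor cable cutouts":   (4_000, 9_000),   # (cost $, annual savings $)
    "Blanking plates with magnets":      (6_000, 5_000),
    "CRAC supply-air sensor relocation": (8_000, 12_000),
}

for measure, (cost, annual_savings) in candidates.items():
    payback = cost / annual_savings
    verdict = "pursue" if payback <= PAYBACK_LIMIT_YEARS else "defer"
    print(f"{measure}: {payback:.1f}-year payback -> {verdict}")
```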
Air Distribution — Under-cabinet bypass airflow was reduced by sealing cable cutouts under legacy cabinets. These had been wide open, allowing cold air to blow directly up the backs of the cabinets and mix with exhaust air from servers. Fire-resistant foam planks, once tested and properly fitted, proved an effective and low-cost way to seal the openings. For new cabinet floor penetrations, nylon brush floor grommets became standard. Other openings were also sealed, including ones in the raised floor and the air gaps in the building columns that allowed bypass air from the raised floor to pass directly into the ceiling space.
Through-cabinet bypass airflow was also eliminated. Openings between equipment in a cabinet, which allowed cold air to migrate from the cold aisle to the hot aisle and mix with exhaust air from servers, were sealed using clear Plexiglass blanking plates. Magnets were used to adhere the blanking plates to the cabinet rails. This standardized practice made deploying and reconfiguring blanking plates quick and easy, increasing utilization and therefore overall effectiveness.
New return air “chimneys,” or sheet metal ducts, were installed to direct return air from above the ceiling directly into each CRAC unit, a measure that enabled the supply and return air temperature differential to be widened.
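The value of widening that differential follows from the sensible heat relationship for air, roughly Q (Btu/h) ≈ 1.08 × cfm × ΔT (°F) at standard conditions: for the same heat load, a larger supply/return differential means each CRAC unit needs to move less air, or can carry more load at the same airflow. The load and differential values in the sketch below are assumptions for illustration, not measurements from this facility.

```python
# Illustrative effect of widening the CRAC supply/return temperature differential.
# Sensible heat of air at standard conditions: Q (Btu/h) ~= 1.08 * cfm * delta_T (F).
# The heat load and delta-T values below are assumptions for illustration only.

HEAT_LOAD_KW = 100.0                      # hypothetical IT heat load served
heat_load_btuh = HEAT_LOAD_KW * 3412      # convert kW to Btu/h

for delta_t in (10, 14, 18):              # widening return-supply differential, deg F
    required_cfm = heat_load_btuh / (1.08 * delta_t)
    print(f"Delta-T {delta_t:2d} F -> about {required_cfm:,.0f} cfm required")
```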
Central Plant — Major equipment operating setpoints were re-evaluated. Established more than eight years earlier, the original setpoints for the chiller plant were based on a baseline UPS load that has since more than doubled and is expected to increase another 20 percent in the next 18 months. Adjusting setpoints to align better with the present load produced a substantial portion of the energy savings.
Other gains came from upgrading and adjusting CRAC units. Temperature sensors were relocated from the return air inlet to the supply air outlet to better regulate temperature of air delivered to cold aisles. Once the distribution and setpoint issues had been successfully addressed, facility operators were able to raise supply air temperature by 4 degrees F (from 60 to 64 degrees F) and turn off 11 of the 30 CRAC units that had historically been operating.
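The effect of shutting down surplus units is straightforward to approximate: with constant-speed fans, turning off 11 of 30 CRAC units removes roughly a third of the fleet’s fan energy. The per-unit fan power in the sketch below is an assumed figure, not a measured one.

```python
# Rough estimate of fan energy saved by turning off surplus CRAC units.
# Assumes constant-speed fans and a hypothetical 5.5 kW fan motor per unit.

UNITS_TOTAL = 30
UNITS_OFF = 11
FAN_KW_PER_UNIT = 5.5        # assumed, not a measured value
HOURS_PER_YEAR = 8760

annual_savings_kwh = UNITS_OFF * FAN_KW_PER_UNIT * HOURS_PER_YEAR
fleet_fraction = UNITS_OFF / UNITS_TOTAL
print(f"About {fleet_fraction:.0%} of CRAC fan energy, "
      f"or roughly {annual_savings_kwh:,.0f} kWh/yr, eliminated")
```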
IT Equipment — IT evaluated several manufacturers of commonly deployed blade servers and calculated an effective total cost of ownership (TCO) based on energy efficiency and other factors. A “model server” was then selected. Based on this architecture, operations staff devised and tested a mockup of a new high-density cabinet for deploying the selected server. Testing included installing rack-mounted load banks and metering equipment to simulate server loads and record temperature excursions. From this evaluation, a new prototype equipment row was built. Today, new data center cabinets match this standard.
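A simplified version of that TCO comparison weighs purchase price against lifetime energy cost, with facility overhead captured by PUE. The vendor names, prices, power draws, electric rate and PUE in the sketch below are hypothetical placeholders rather than the retailer’s actual figures.

```python
# Simplified server TCO comparison: purchase price plus lifetime energy cost.
# All prices, power draws, rates, and the PUE below are hypothetical placeholders.

ELECTRIC_RATE = 0.09      # $/kWh, assumed
PUE = 1.8                 # facility overhead multiplier, assumed
YEARS = 4
HOURS_PER_YEAR = 8760

servers = {
    "Vendor A blade": {"price": 6_500, "avg_watts": 350},
    "Vendor B blade": {"price": 7_200, "avg_watts": 290},
}

for name, s in servers.items():
    energy_kwh = s["avg_watts"] / 1000 * HOURS_PER_YEAR * YEARS * PUE
    tco = s["price"] + energy_kwh * ELECTRIC_RATE
    print(f"{name}: ${tco:,.0f} over {YEARS} years "
          f"(${energy_kwh * ELECTRIC_RATE:,.0f} of that is energy)")
```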
Measurable Results
Before and after performance measurements were taken, supplemented by readings at various stages of program implementation. Progress ranged from improvements in air distribution to more efficient use of electricity.
Balometer readings were taken at the same locations in the computer room at the beginning and during each implementation phase. These readings increased an average of 67 percent from the onset (521 average cubic feet per minute, or cfm) to the culmination (874 average cfm) of the project.
Collectively, these refinements and upgrades significantly increased efficiency. The clearest demonstration: while the critical IT load increased 50 percent from 2004 to 2008, the total building load increased only 25 percent over the same period. Power Usage Effectiveness (PUE), calculated by dividing the total power entering the data center by the power used to run the IT infrastructure within it, improved by 15 percent over this period. (See “PUE Reduction Snapshot.”)
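That PUE improvement follows directly from the two growth rates: if IT load grows 50 percent while total building load grows only 25 percent, the ratio of total power to IT power falls by roughly one-sixth. A quick check, using an assumed starting PUE:

```python
# Quick check of the reported PUE trend from the stated load growth.
# The starting PUE of 2.0 is an assumed illustration; the percentage change
# depends only on the relative growth of the two loads, not the baseline.

pue_2004 = 2.0                       # assumed baseline
it_growth = 1.50                     # critical IT load up 50% (2004-2008)
total_growth = 1.25                  # total building load up 25%

pue_2008 = pue_2004 * total_growth / it_growth
improvement = 1 - pue_2008 / pue_2004
print(f"PUE: {pue_2004:.2f} -> {pue_2008:.2f} "
      f"({improvement:.0%} improvement, close to the reported 15 percent; "
      f"the exact figure depends on the measured loads)")
```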
These efficiency efforts dovetailed with the company’s environmental commitment and demonstrated that the commitment can be extended successfully to what had been viewed as the organization’s most energy-intensive operation.
Many opportunities exist for improving the energy performance of legacy data centers, and even seemingly small improvements can achieve significant energy savings and operational efficiencies. Substantial capital commitments are not a prerequisite to success; a relatively modest investment in time and money can be recouped in short order. By carefully developing a green initiative strategic plan and assembling a program team of operations, IT and its design partner to execute it, this data center’s facility management team was able to increase the reliability of its flagship data center while extending its capacity and reducing its energy consumption.