Serving Up Efficiency in Data Centers



Trends in IT operations and advanced facility design strategies offer opportunities to control data center energy costs


By Bill Kosik  


At 100 watts per square foot, a typical 20,000-square-foot enterprise data center will have a peak cooling demand comparable to that of a 200,000-square-foot commercial office building, and total annual energy consumption roughly equal to that of an office building twice that size. And every new generation of IT hardware seems to bring another jump in data center energy consumption. It’s no wonder there has been so much discussion about the need to improve the energy efficiency of data centers.

But energy use in technology facilities is a complex issue. For one thing, there are multiple reasons to reduce energy use. One is to cut utility costs. Another is to reduce the environmental impact of data centers. A third is to trim the amount of power needed by mechanical systems in order to free up more power for servers and other IT hardware. Sometimes these are complementary approaches; sometimes they are not. Utility costs can be reduced by negotiating new rates without cutting back on energy use. Or environmental impact can be mitigated without lowering costs or freeing up additional power for servers.

Because motivations for efficiency may differ, optimizing energy use is not a one-size-fits-all proposition — it must be tailored to the specific goals and metrics used in the planning process within an organization. Success requires a thorough understanding of all the elements that affect energy use.

It also calls for a multifaceted approach. Reducing energy consumption alone is not enough. Instead, facility executives should address multiple objectives — service enhancement, reliability and reduction of operational costs. These goals were once thought to be mutually exclusive; today, they can be accomplished simultaneously.

Time of Opportunity

Facility executives will find it difficult to optimize energy use if they don’t understand the IT world. Several trends in IT operations can be leveraged to reduce or optimize energy spending.

  • Technology upgrades. Business seems to be caught in an unending cycle: Faster servers enable faster, more memory-intensive software applications, which then require faster servers. The result is often an increase in energy use. But if an IT hardware upgrade is being planned, an opportunity exists to address energy use. Instead of simply buying new hardware, the organization has a chance to seek ways to optimize energy consumption.
  • Reducing IT and operational costs. Organizations are constantly looking for ways to cut operating costs. In the past, organizations often accepted that data centers would consume a lot of energy. That assumption is increasingly being questioned. In addition to the first cost of a server, the total life cycle cost including maintenance and electricity costs should be considered. The best way to control those costs is to take a multifaceted approach, starting at the overall IT strategy and ending at the data center facility. The best time to think about optimizing energy use is at the outset of a new IT planning effort.
  • Potential federal regulations. Last year, Congress ordered the U.S. Environmental Protection Agency (EPA) to investigate server and data center energy efficiency. While it is too early to say if this study will suggest mandatory reductions in energy use, it is clear that the federal government will continue to focus on this subject. That attention will likely result in some type of policy related to the energy use of technology.
  • Technology industry response. The Green Grid is a global consortium formed to “improve energy efficiency in data centers around the globe.” Manufacturers, consultants and service providers — even some that are fierce competitors — are banding together with the ultimate goal of developing “standards, measurement methods, processes and new technologies to improve performance against the defined metrics.”
  • Ongoing research. Recent studies have begun identifying ways to reduce the power data centers consume. One project carried out by Lawrence Berkeley National Laboratory and Pacific Gas and Electric Company outlined 67 energy-efficiency best practices.
  • Corporate social responsibility. The average American has a carbon footprint of 42,000 pounds per year. A large corporate data center has a carbon footprint equal to that of 3,400 average Americans, the equivalent of driving 12,000 cars for a year. Many technology companies have stepped forward with green policies that not only reduce the overall environmental impact of their operations but can also increase shareholder value.

Taken together, these trends can provide strong support for data-center energy-efficiency measures.

Server Efficiencies

Looked at one way, the energy efficiency of servers has been increasing. The traditional energy metric for enterprise servers has been power per instruction, and by this measure servers have shown tremendous improvements in efficiency over the past several years. However, the metric is misleading: the absolute power consumption of the servers themselves has been steadily increasing. Nevertheless, there are opportunities for a true reduction in server power use.

  • Multicore processors. For certain applications, advances in multithreading and multiprocessing using multicore processors make it possible to trim power consumption while increasing performance.
  • Virtualization. The practice of running multiple workloads on one server, known as virtualization, reduces the power consumption of servers by more fully utilizing their capability. This approach improves both IT and energy efficiency. According to estimates, virtualization can cut energy use by 19 to 24 percent (a rough consolidation sketch follows this list).
  • Blade PCs. A pool of blade servers can replace individual desktop machines. The approach improves efficiency because the cabinet-mounted servers operate in a controlled environment with more efficient power supplies, fans, etc.
  • Consolidation. Some organizations mirror, back up and replicate data to ensure business continuity. Nevertheless, running identical applications in multiple locations to achieve the same end is inefficient in terms of IT operations, infrastructure and architecture, and it substantially increases overall power consumption. This area must be carefully coordinated and integrated to ensure compliance with the overall business, technology and facility reliability objectives. Robust IT planning, which may include consolidation strategies, can achieve operational efficiencies while also reducing overall power consumption.
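As a rough illustration of the consolidation math, the sketch below estimates the savings from virtualizing part of a server fleet. Every figure in it (fleet size, per-server draw, candidate count, consolidation ratio) is an assumption chosen for illustration, not data from any study.

    # Rough sketch of the power savings available from server consolidation.
    # All input figures are illustrative assumptions.

    fleet = 100                 # physical servers before consolidation
    avg_server_w = 300          # assumed average draw per server (W)

    candidates = 40             # servers assumed suitable for virtualization
    consolidation_ratio = 5     # assumed: 5 virtual machines per host
    host_w = 450                # hosts draw more at higher utilization (assumed)

    hosts = candidates // consolidation_ratio
    before_kw = fleet * avg_server_w / 1000
    after_kw = ((fleet - candidates) * avg_server_w + hosts * host_w) / 1000
    savings_pct = 100 * (before_kw - after_kw) / before_kw

    print(f"Before: {before_kw:.1f} kW  After: {after_kw:.1f} kW  "
          f"Savings: {savings_pct:.0f}%")

With these made-up numbers the IT load falls from 30.0 kW to 21.6 kW, about 28 percent, in the same neighborhood as the savings cited above; actual results depend heavily on server utilization.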

Dramatic changes have occurred in IT over the last five years, in areas like power/cooling density and reliability, and facility executives should understand these developments. All too often the IT and facilities teams don’t start communicating until the IT master plan is nearly complete. When this happens, the organization loses opportunities to investigate how the facility and the power and cooling systems will affect the servers and other IT equipment from a reliability and energy-use standpoint. Energy efficiency demands a holistic approach. Including energy use as one of the metrics when developing the overall IT strategy will have a significant impact on the subsequent planning phases of an IT enterprise project.

Improving Facility Efficiency

The analysis and design of the data center facility presents another opportunity to influence the energy use of IT operations. Aside from the computer equipment itself, the primary energy consumers are the cooling and power distribution systems. One metric used to analyze options for these systems is the power usage effectiveness (PUE). While not yet commonly used, it promises to become an important benchmark in the data center industry. A PUE can be developed for the power and cooling systems individually but, more importantly, also as a total PUE for the entire facility. The PUE is a measure of power efficiency, represented by the following equations (a worked sketch follows the list):

  • PUEtotal = total power delivered to the facility divided by power delivered to IT equipment. Typical range: 1.5 to 3.0.
  • PUEcooling = power required by the cooling system divided by power delivered to IT equipment. Typical range: 0.42 to 1.2.
  • PUEelectrical = incoming power required by the electrical system divided by power delivered to IT equipment. Typical range: 1.1 to 1.5.
  • PUEtotal = PUEcooling + PUEelectrical
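To make the arithmetic concrete, here is a minimal sketch of the PUE calculations in Python. The three input figures are hypothetical meter readings, not values from any particular facility.

    # Minimal sketch of the PUE arithmetic defined above.
    # The three input figures are hypothetical meter readings.

    it_power_kw = 1000.0        # power delivered to the IT equipment
    cooling_power_kw = 600.0    # power drawn by the cooling systems
    electrical_in_kw = 1150.0   # incoming power to the electrical system
                                # (IT load plus UPS and distribution losses)

    pue_cooling = cooling_power_kw / it_power_kw      # 0.60
    pue_electrical = electrical_in_kw / it_power_kw   # 1.15
    pue_total = pue_cooling + pue_electrical          # 1.75

    print(f"PUE_cooling    = {pue_cooling:.2f}")
    print(f"PUE_electrical = {pue_electrical:.2f}")
    print(f"PUE_total      = {pue_total:.2f}  (cited typical range: 1.5 to 3.0)")

A facility with efficient systems in a favorable climate would land near the bottom of each range; the same IT load served by less efficient systems in a hot, humid climate would push every one of these numbers up.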

How can the PUE be used to analyze alternative system types? This is another area where it is vital to identify interdependencies. Climate, cooling system type, power distribution topology and redundancy level (reliability, availability) all drive the power efficiency of these systems. When an analysis is performed to determine peak and annual energy use, these interactions become obvious. Mechanical systems can consume approximately 25 to 50 percent of the total energy used in the facility; of that, the power required for cooling will typically be close to 50 percent.

Because the chiller is the second-highest energy consumer in a data center (next to the servers), it is valuable to take a basic look at the refrigeration cycle. The primary goal of the refrigeration cycle is to vary the pressure of the refrigerant to control the evaporation and condensing temperatures. In this cycle, lowering the total entropy decreases the total work, or energy, that is required. As the air temperature moving across the condenser coil increases, the compressor must create higher pressures, thereby using more energy. When water is sprayed over the coil (as in an evaporative cooler), the air temperature is decreased and less pressure is required to condense the refrigerant. Similarly, the higher the air temperature moving across the evaporator coil (the chilled water coil in this example), the higher the pressure at which the refrigerant can “boil,” or evaporate, which reduces the lift the compressor must provide. It is important to note that for every 1 degree F increase in chilled water temperature there will be an increase in chiller energy efficiency of 1 to 4 percent, depending on the type of chiller.

One strategy to decrease compressor energy consumption is therefore to raise the chilled water supply temperature (which elevates the supply air temperature), to reduce the temperature of the air moving across the condenser coil, or both. However, the type of mechanical system will determine whether this strategy can be used, and the power consumed by the various components of the mechanical system will directly affect the PUE.
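The rule of thumb above lends itself to a quick estimate. The sketch below applies the 1 to 4 percent-per-degree figure to an assumed 500-kilowatt chiller plant; both the plant size and the 4-degree setpoint change are illustrative assumptions.

    # Back-of-the-envelope chiller savings from raising the chilled water
    # supply temperature, using the 1 to 4 percent per degree F rule of
    # thumb cited in the text. Plant size and setpoint change are assumed.

    chiller_kw = 500.0   # assumed baseline chiller demand
    delta_t_f = 4.0      # raise the chilled water supply 4 degrees F

    for gain_per_deg in (0.01, 0.04):   # 1 to 4 percent per degree F
        savings_kw = chiller_kw * gain_per_deg * delta_t_f
        print(f"{gain_per_deg:.0%}/deg F -> roughly {savings_kw:.0f} kW saved")

    # Prints roughly 20 kW to 80 kW, i.e., a 4 to 16 percent reduction in
    # chiller demand for this assumed plant.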

Another major driver of system efficiency is climate. Depending on the mechanical system type and the climate, the cooling system PUE will range from approximately 0.42 for a chilled water system with cooling towers located in a temperate climate to 1.2 for water-cooled, direct-expansion air-conditioning units in a hot, humid climate. This represents more than a 185 percent increase in power for cooling systems, based solely on climate and mechanical system type.

The electrical system is also an important energy consumer. Depending upon the topology of the electrical distribution, there can be significant losses within this system. The major contributing factors are:

  • The loading on the UPS system (if one is present).
  • The type of power conversion equipment.
  • The type of power supply.
  • The type and voltage of electrical power distributed to the computer equipment.

There is a great deal of ongoing research in this area, but it is understood that efficiencies range from approximately 50 percent (traditional static UPS and power conversion) to upwards of 75 percent (high-voltage DC power distribution). The result: for every kilowatt sent through the electrical distribution system on behalf of the computer equipment, the usable power that actually reaches the equipment will range from 500 to 750 watts.
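The arithmetic behind those numbers is simple, as this short sketch shows for the two end-to-end efficiencies cited:

    # For each kilowatt fed into the electrical distribution chain, how much
    # reaches the computer equipment at the two efficiencies cited above.

    input_kw = 1.0
    for label, efficiency in [("traditional static UPS and conversion", 0.50),
                              ("high-voltage DC distribution", 0.75)]:
        delivered_w = input_kw * efficiency * 1000
        print(f"{label}: {delivered_w:.0f} W delivered per kW in")

    # 500 W and 750 W respectively, matching the range in the text.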

The use of multiple, concurrently energized power distribution paths, if they are designed correctly, can increase the availability (reliability) of the IT operations. However, running multiple systems at partial loads also decreases the efficiency of the overall system. Electrical power distribution systems have calculated PUEelectrical ranging from 1.1 for a very efficient system to close to 1.5 for a minimally energy code-compliant system with redundant components.
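The partial-load penalty can be illustrated with a simple model. The efficiency curve below is an assumed, simplified shape chosen only to show the effect; real numbers come from manufacturer part-load data.

    # Illustration of the partial-load penalty of redundant (2N) power paths.
    # The efficiency curve is an assumed shape, not manufacturer data.

    def ups_efficiency(load_fraction: float) -> float:
        """Assumed curve: poor at low load, flattening near full load."""
        return 0.94 * load_fraction / (load_fraction + 0.08)

    it_load_kw = 800.0
    module_capacity_kw = 1000.0

    # Single path: one UPS module at 80 percent load.
    single = ups_efficiency(it_load_kw / module_capacity_kw)

    # 2N: two modules sharing the load, each at 40 percent.
    dual = ups_efficiency(it_load_kw / 2 / module_capacity_kw)

    print(f"Single-path efficiency: {single:.1%}")  # about 85.5%
    print(f"2N-path efficiency:     {dual:.1%}")    # about 78.3%

Under this assumed curve, splitting the same load across two concurrently energized paths costs several points of efficiency, which is exactly the trade-off between availability and energy use described above.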

The overall facility PUE takes into account all of the power consumers in the facility and compares them to the IT power. All else being equal, the calculated PUEtotal will range from approximately 1.5 for a data center with very efficient cooling and electrical systems located in a dry, temperate climate to more than 3.0 for a facility with minimally energy code-compliant cooling and electrical systems in a hot, humid climate. That is a doubling of total facility power, driven entirely by the non-IT systems, based on climate and the mechanical and electrical system types. It must be noted that there are currently no standards against which to judge a calculated PUE, but the ongoing EPA study will likely suggest what these might be.
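A quick worked comparison shows what that range means in absolute terms, for an assumed 1-megawatt IT load:

    # Total facility power at the two ends of the cited PUE_total range,
    # for an assumed 1 MW IT load.

    it_load_kw = 1000.0
    for pue_total in (1.5, 3.0):
        facility_kw = it_load_kw * pue_total
        overhead_kw = facility_kw - it_load_kw
        print(f"PUE {pue_total}: facility draws {facility_kw:.0f} kW "
              f"({overhead_kw:.0f} kW for non-IT systems)")

    # Non-IT overhead grows from 500 kW to 2,000 kW, and total facility
    # power doubles, for the same 1 MW of computing.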

The Big Picture

To reduce the overall power consumption of a data center (or to maximize the power available to the computers), one must consider the interdependencies between the various power consumers within a data center. Understanding the power delivery chain will make it possible for key stakeholders to have a dialog. That’s the first step toward an integrated approach where interdependencies can be analyzed, discussed and vetted.

By the end of 2007 or early 2008, there should be a much clearer picture of how the new industry organizations will bring about change, and how future federal and state government policies will help or hinder these organizations’ efforts. Even so, it is clear right now that there is an interdependency among IT infrastructure/architecture, business results, and overall environmental impact, and that the strategic planning and tactical implementation of technology and data centers has moved front-and-center in enterprise IT planning discussions.

Bill Kosik is a managing principal at EYP Mission Critical Facilities. He has developed design strategies for cooling high-density environments and for creating scalable cooling and power models for a variety of scenarios.



