


Too Hot To Handle?



Data center infrastructures are struggling to keep up with the cooling and power demands of new IT hardware


By Bill Angle  


Over the past couple of years, organizations have been increasingly turning to more compact storage and server technologies to save physical space in data centers, improve data center performance and decrease costs. However, these rapidly changing technologies are making it more difficult for facility executives to design data centers that meet the needs of today and tomorrow.

The biggest infrastructure challenges these technologies present are power and cooling. Before the high-power-density technologies are deployed, facility executives should analyze current and planned usage of the new technologies, as well as the data center’s current and future requirements. Depending on the results of this analysis, a facility executive will either have to retrofit an existing data center or design a new one to ensure the infrastructure has the capacity to handle the new technologies.

According to Moore’s law, computing power doubles every 18 months, making technologies faster, cheaper and hotter — literally. Compact technologies are increasing the capacity, speed and energy density of the data center. New storage technologies, such as virtual tape systems, storage arrays and storage area networks, are raising power density. This has been exacerbated by organizations consolidating storage systems and by manufacturers reducing the enclosure dimensions (known as the form factor) to gain greater space efficiency from data center devices.

Impact of Blade Servers

A significant form factor change being deployed is blade server technology. Blade servers are compact, thin servers that take up far less space than traditional servers, allowing more servers to occupy a single cabinet. A standard server rack has traditionally held around 40 servers; the same space can hold hundreds of blade servers. In addition to saving space, blade servers can help reduce costs related to cabling and power supply connections, according to vendors.

Despite these advantages, the challenges that blade servers create make their overall benefits questionable. High-power-density blade servers packed into one concentrated box generate a tremendous amount of computing power and heat — heat that reaches levels close to or in excess of the capacity of most data centers to cool.

For instance, most data centers were traditionally designed to handle an average of 30 to 75 watts per square foot across the total raised-floor area. High-power-density blade servers and new storage technologies, however, are requiring data centers to handle 300 watts or more per square foot in the local areas where they are deployed, and this trend is expected to continue. Research firm Meta Group predicts that local power densities will double over the next few years and reach 1,000 watts per square foot between 2008 and 2010.
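
To put these numbers in perspective, consider a rough back-of-the-envelope estimate of the local power density created by a single blade cabinet. The short Python sketch below is illustrative only; the cabinet load, footprint and clearance figures are assumptions, not measurements from any particular product or site.

    # Rough estimate of local power density for one fully loaded blade cabinet.
    # All figures are illustrative assumptions.

    cabinet_load_kw = 12.0         # assumed draw of a fully populated blade cabinet
    cabinet_footprint_sqft = 7.0   # assumed cabinet footprint (about 2 ft x 3.5 ft)
    clearance_sqft = 18.0          # assumed share of aisle and service clearance

    local_area_sqft = cabinet_footprint_sqft + clearance_sqft
    local_density_w_per_sqft = cabinet_load_kw * 1000 / local_area_sqft

    print(f"Local power density: {local_density_w_per_sqft:.0f} W/sq ft")
    # With these assumptions the result is roughly 480 W/sq ft, well above the
    # 30 to 75 W/sq ft average that many older raised floors were designed for.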

This increase in power density means that blade servers have the potential to create hot spots, which can cause overheating and often lead to system failure: hot air leaves one rack and enters another, overheating the operating environment of the second rack's electronics. In some devices, the processors will then default to a lower processing speed to reduce heat output, reducing data center performance.

Regardless of their challenges, these new storage technologies and blade servers are increasingly being deployed. According to IDC, a research firm, blade servers accounted for two percent of the overall server market last year; however, the firm predicts that blade server sales will more than double in 2004 and that this market will reach nearly $4 billion in revenue by 2006.

Addressing Cooling Challenges

High-density technologies and the heat problems they create led the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) to issue new environmental guidelines for data centers. Released in January 2004, “Thermal Guidelines for Data Processing Environments” provides a power roadmap, recommended environmental conditions and protocols intended to allow more seamless integration of IT equipment into data centers.

However, the largest issue for facility executives is finding a way to get cooling where it belongs. There are a variety of ways to help alleviate cooling challenges in data centers.

  • One strategy is to install high-power-density technologies in the areas of the raised floor where the computer room air conditioning (CRAC) units develop the highest static pressure. Facility executives should make sure that the discharge air ducts from the CRAC units are aligned correctly to optimize the cool air path to the floor grilles.

A thermal modeling program that predicts where hot spots may exist can help the facility executive tune raised-floor static pressure. In this regard, it is important to understand the Venturi effect of rapid air movement under the floor: the faster the air moves past a raised-floor opening, the less static pressure is developed there. In some shallow floors (less than 12 inches), the speed of the air under the floor will cause a negative pressure at the first few tiles near the CRAC unit. The counterintuitive result is that most of the airflow and cooling capacity in a standard raised-floor environment is actually delivered farther away, near the end of the airflow path from the CRAC unit (see the static-pressure estimate after this list).

  • If high-density server racks are clustered together, they are difficult to cool. Facility executives can spread the racks across the floor so that the heat concentration in any one area remains low and more cooling capacity is available to each rack; a dispersed blade server rack can effectively borrow underutilized cooling from the racks located next to it. Unfortunately, this solution may limit flexibility in placing electronic devices, and in older data centers without auxiliary cooling devices, layouts will be constrained by available air conditioning capacity.

  • In a hot aisle/cold aisle arrangement, server racks are arranged front-to-front (cold aisle) and back-to-back (hot aisle). A cold aisle has perforated floor tiles that enable cool air to come up from the raised floor; a hot aisle does not have perforated tiles. Because the racks face each other, this arrangement allows cool air from the cold aisle to circulate through each rack and go out the back into the hot aisle.

  • Facility executives can use auxiliary blowers to locally increase airflow to blade servers. One auxiliary blower placed above the cabinet removes heat, while another underneath circulates underfloor cooling air toward the servers. However, this approach can exhaust the cooling capacity of the local CRAC unit in a small area, starving other nearby devices of cool air. This spot cooling uses air movement devices designed to cool hot spots by increasing local airflow; in a standard raised-floor environment, it will work on individual cabinets up to 8 kW of rack load (see the heat-removal comparison after this list).

  • Finally, enclosed liquid cooling solutions can also help address hot spots. Fluids such as water or refrigerant transfer heat better than air. Heat densities above 8 kW, up to 20 kW, can be accommodated by bypassing the CRAC cooling air and transferring heat directly to the main heat removal system using these devices.
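
The underfloor pressure behavior described in the first strategy can be estimated with basic fluid dynamics. The Python sketch below is a simplified illustration: it treats the dynamic pressure of the underfloor airstream (one-half times air density times velocity squared) as the static pressure lost to air movement, and the velocities and the typical static-pressure figure in the comments are assumptions chosen for illustration.

    # Simplified illustration of why tiles near a CRAC unit can see little or
    # even negative static pressure: fast-moving underfloor air carries its
    # energy as velocity (dynamic) pressure rather than static pressure.
    # The velocity figures below are illustrative assumptions.

    AIR_DENSITY_KG_M3 = 1.2      # air at roughly room temperature
    PA_PER_INCH_WC = 249.0       # pascals per inch of water column

    def dynamic_pressure_in_wc(velocity_fpm: float) -> float:
        """Dynamic pressure, in inches of water column, of air moving at the
        given velocity in feet per minute."""
        velocity_ms = velocity_fpm * 0.00508              # ft/min -> m/s
        dynamic_pa = 0.5 * AIR_DENSITY_KG_M3 * velocity_ms ** 2
        return dynamic_pa / PA_PER_INCH_WC

    near_crac = dynamic_pressure_in_wc(1500)      # shallow floor near the discharge
    far_from_crac = dynamic_pressure_in_wc(400)   # near the end of the airflow path

    print(f"Dynamic pressure near CRAC: {near_crac:.2f} in. w.c.")
    print(f"Dynamic pressure far away:  {far_from_crac:.2f} in. w.c.")
    # A raised floor is often tuned to only a few hundredths of an inch of
    # water column of static pressure, so the roughly 0.14 in. w.c. tied up in
    # air movement near the CRAC can leave the first few tiles starved or even
    # at negative pressure, while tiles farther away deliver most of the air.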
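
The last two strategies hinge on how much heat a given flow of air or water can carry. The sketch below compares the two using standard sensible-heat rules of thumb for air and water; the rack loads and temperature rises are assumptions for illustration.

    # Compare the airflow and water flow needed to remove a given rack load,
    # using standard sensible-heat rules of thumb:
    #   air:   Q (BTU/hr) ~= 1.08 * CFM * delta_T (deg F)
    #   water: Q (BTU/hr) ~= 500  * GPM * delta_T (deg F)
    # The rack loads and temperature rises below are illustrative assumptions.

    BTU_PER_HR_PER_KW = 3412.0

    def air_cfm(load_kw: float, delta_t_f: float) -> float:
        """Cubic feet per minute of air needed to carry load_kw."""
        return load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

    def water_gpm(load_kw: float, delta_t_f: float) -> float:
        """Gallons per minute of water needed to carry load_kw."""
        return load_kw * BTU_PER_HR_PER_KW / (500.0 * delta_t_f)

    for load_kw in (8, 20):
        print(f"{load_kw} kW rack: {air_cfm(load_kw, 20):,.0f} CFM of air "
              f"or {water_gpm(load_kw, 12):,.1f} GPM of water")
    # With a 20 deg F air-side rise, an 8 kW rack already needs about 1,260 CFM
    # and a 20 kW rack more than 3,100 CFM, which is more than perforated tiles
    # and local blowers can reliably deliver. The same 20 kW needs only about
    # 11 GPM of water, which is why enclosed liquid cooling can handle loads
    # that air alone cannot.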

Depending on the bulk load in the data center, an existing design will use one or several of these methods to remove heat. The main heat rejection system is the most important factor in determining the remaining life and capacity of existing data centers.

Upgrade Or Build New?

When deploying new storage technologies and blade servers throughout data centers, organizations need to take into account the corresponding costs, including those related to operational requirements. As more heat is removed from cabinets and transferred into the main HVAC system, that system’s capacity can be exhausted. If a new system must be installed, or an upgrade must occur during active computer room production, the work can be both disruptive and costly. At some sites, the risk of a data center shutdown far outweighs the cost of new facility construction. In those cases, retrofitting the current data center may not be as good an option as designing and building a new one.

Industry analysts are reporting that many companies are coming to this conclusion as well. Meta Group wrote in a recent report that 35 percent of Global 2000 companies are evaluating their current data center facilities and are starting to design their future facilities. Additionally, research firm Gartner predicts that 20 percent of enterprises will have to upgrade their data centers by 2005 to address the power and heating demands of high-power-density technologies.

However, the question of whether to build a new data center or retrofit an existing one cannot be answered easily, because the decision depends on a variety of factors: the cooling capacity the data center room can currently handle, the organization’s adoption rate of high-power-density technologies, and the number of blade servers or storage systems the organization wants to deploy. Factors that lie entirely outside the data center’s cooling requirements must also be considered, such as disaster recovery, physical security and business growth. The costs of migrating computer equipment to a new site, along with network costs, may also argue against building a new data center. Without specific site and business variables, it is difficult to know whether to retrofit or to design a new data center.

Can the Existing Data Center Meet Cooling Requirements?

There is an indicator facility executives can use to determine if their existing data centers will meet current and future power and cooling requirements.

The critical step is analyzing the current operation versus the design capacity of the data center. Care should be taken in determining the operational temperature spread between the supply and return temperatures for the heat transfer fluids. These temperatures should be compared to the design temperature spread. By conducting an analysis, facility executives can determine how many watts per square foot their data center can handle compared to how many watts per square foot it will have to handle once these high-power-density technologies are added to the facility. In many older data centers, with redundant CRAC units for fault tolerance and maintenance activity, the desired design spread in heat removal fluids cannot be easily attained because of the sharing of load with the redundant units. However, by using auxiliary devices and heat separation methods mentioned above, the temperature reserve provided by CRAC redundancy can be utilized. This will effectively increase the total capacity of heat removal and, therefore, extend the use of the existing facility.
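
As a rough illustration of that analysis, the sketch below compares a room’s installed cooling capacity per square foot with its projected load and uses the measured-versus-design temperature spread as a utilization check. Every capacity, area and temperature figure in it is an assumed example; real nameplate and measured values would be substituted in practice.

    # Rough sketch of the capacity analysis described above.
    # Every figure here is an assumed example, not data from a real site.

    installed_cooling_kw = 600.0   # total CRAC capacity, including redundant units
    redundant_cooling_kw = 150.0   # capacity normally held in reserve
    raised_floor_sqft = 5000.0

    current_it_load_kw = 300.0     # measured load today
    planned_new_load_kw = 180.0    # projected additional blade/storage load

    design_delta_t_f = 18.0        # design spread between supply and return
    measured_delta_t_f = 11.0      # measured spread in operation

    usable_w_sqft = (installed_cooling_kw - redundant_cooling_kw) * 1000 / raised_floor_sqft
    total_w_sqft = installed_cooling_kw * 1000 / raised_floor_sqft
    future_w_sqft = (current_it_load_kw + planned_new_load_kw) * 1000 / raised_floor_sqft
    spread_used = measured_delta_t_f / design_delta_t_f

    print(f"Capacity without redundant units: {usable_w_sqft:.0f} W/sq ft")
    print(f"Total installed capacity:         {total_w_sqft:.0f} W/sq ft")
    print(f"Projected load:                   {future_w_sqft:.0f} W/sq ft")
    print(f"Share of design temperature spread in use: {spread_used:.0%}")
    # With these figures, the projected 96 W/sq ft exceeds the 90 W/sq ft
    # available without touching the redundant units but stays under the
    # 120 W/sq ft of total installed capacity, the kind of gap that the
    # auxiliary devices and heat separation methods described above can bridge.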

After analyzing current and future requirements, facility executives should use that information as a baseline of their facility’s existing infrastructure, then develop and implement a growth plan for heat removal from the data center. Successful implementation of these technologies requires careful, coordinated planning between information technology providers and facility providers and can extend existing capacities.

Bill Angle is principal consultant at CS Technology, an international firm that offers technology strategy, architecture, engineering and implementation services.





posted on 7/1/2004



