


Tackling HPC's Massive Heat Generation





By Kevin J. McCarthy Sr.  
Other parts of this article:
Pt. 1: High-Performance Computing (HPC) Is Likely to Become Mainstream
Pt. 2: Power and Communication Needs of Supercomputers
Pt. 3: This page


In the example above, the entry-level HPC rejected 512 tons of heat to water. The massive heat these systems generate in a small area will require chilled-water cooling.

Here is an example of how Cray systems are cooled: The Cray XT6 computer has a 16-inch fan in the bottom of each cabinet, which sends 3,000 CFM of air across a vertical heat sink in the cabinet. The air is cooled by a pumped refrigerant system mounted on top of each cabinet. For a 40-cabinet system, the room will need to support 120,000 CFM of airflow and will require a chilled-water supply to the heat exchanger at the end of each row, which sends the 512-ton heat load to the chiller plant.
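To put those numbers in context, here is a minimal back-of-the-envelope sketch in Python. Only the cabinet count, the per-cabinet airflow, and the 512-ton load come from the example above; the kilowatts-per-ton figure is the standard conversion, and the variable names are illustrative.

```python
# Back-of-the-envelope check of the Cray XT6 numbers quoted above.
# Only the cabinet count, per-cabinet airflow, and 512-ton load come from
# the text; the kW-per-ton figure is the standard conversion factor.

CABINETS = 40
CFM_PER_CABINET = 3_000      # airflow from the 16-inch fan in each cabinet
HEAT_LOAD_TONS = 512         # heat rejected to the chilled-water loop
KW_PER_TON = 3.517           # 1 ton of refrigeration = 12,000 BTU/h = 3.517 kW

total_cfm = CABINETS * CFM_PER_CABINET
heat_load_kw = HEAT_LOAD_TONS * KW_PER_TON

print(f"Total room airflow: {total_cfm:,} CFM")          # 120,000 CFM
print(f"Heat to chiller plant: {heat_load_kw:,.0f} kW")   # about 1,800 kW
```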

The other major player in the HPC market is IBM. IBM has a long history of direct water cooling of computers, and it has brought the idea back in its P6 HPC product. In an IBM installation, each cabinet will require a direct water connection, and the system does not have the massive air-movement requirement.

An HPC will require an upgrade of the conventional HVAC system in the form of packaged chillers and chilled-water piping. Although chillers are an expensive first-cost element, they are less expensive to operate over the life cycle of the facility than competing cooling technologies, as illustrated below. Converting an entire facility to chilled-water cooling will save money over the long term.
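That life-cycle claim is simple arithmetic: a higher first cost is recovered through lower annual operating cost. The sketch below shows the comparison with purely hypothetical placeholder numbers; none of the dollar figures or the assumed facility life are quotes for any actual system.

```python
# Illustrative life-cycle cost comparison. Every dollar figure below is a
# placeholder assumption chosen only to show the arithmetic, not a quote
# for any real chiller plant or competing system.

def life_cycle_cost(first_cost, annual_operating_cost, years):
    """Simple, undiscounted life-cycle cost over the facility's life."""
    return first_cost + annual_operating_cost * years

YEARS = 15  # assumed facility life

chilled_water = life_cycle_cost(2_000_000, 300_000, YEARS)
dx_alternative = life_cycle_cost(1_200_000, 450_000, YEARS)

print(f"Chilled water, {YEARS}-year cost: ${chilled_water:,}")
print(f"DX alternative, {YEARS}-year cost: ${dx_alternative:,}")
```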

Comparing the two cooling systems, the pumped refrigerant approach would require less chilled-water piping, as each row of HPC equipment needs only a heat exchanger at each end; the refrigerant piping is part of the Cray installation. This does create a single point of failure at the heat exchanger, but so would a chilled-water header that feeds 10 IBM HPC cabinets. At the end of the day, any facility cooling system needs to be flexible, because the IT manager will select the HPC vendor.

With a packaged chilled-water system removing the heat of the HPC, the net result will be a very small heat load on the air-side system. This may allow a data center to turn off some air-conditioning equipment while keeping enough units running to manage humidity. Using chilled water also opens up the option of installing a fluid cooler for free cooling in the cooler seasons, depending on location. The Jaguar HPC, mentioned earlier, is the fastest computer in the world and, according to Cray, the room housing it required 100 fewer computer room air conditioners than before because of the pumped refrigerant system.
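Whether free cooling pays off in a given location comes down to how many hours the outdoor temperature sits below the chilled-water supply setpoint minus the fluid cooler's approach. The sketch below illustrates that hour-counting logic; the setpoint, approach, and sample temperatures are assumptions, and a real study would use local bin or TMY weather data.

```python
# Counts the hours in which a fluid cooler could carry the load without
# running chillers. The setpoint, approach, and sample temperatures are
# assumptions; real studies would use local bin or TMY weather data.

CHW_SUPPLY_SETPOINT_F = 55.0    # assumed chilled-water supply temperature
FLUID_COOLER_APPROACH_F = 10.0  # assumed fluid-cooler approach

def free_cooling_hours(hourly_outdoor_temps_f):
    threshold = CHW_SUPPLY_SETPOINT_F - FLUID_COOLER_APPROACH_F
    return sum(1 for t in hourly_outdoor_temps_f if t <= threshold)

# Hypothetical sample: one winter week of hourly readings near 40 F
sample_week = [40.0] * 168
print(free_cooling_hours(sample_week), "of", len(sample_week), "hours of free cooling")
```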

Manufacturers are experimenting with higher-temperature HPC products, which will reduce cooling requirements. For example, the Cray XT6 has a maximum inlet temperature of 32 C (89.6 F), and Cray is attempting to raise this to 50 C (122 F); at that point, the system may require little or no mechanical cooling.

Ensuring Reliability

Currently, HPCs are not dual-corded devices; however, as they emerge from the institutional research environment and enter the colocation data center market, manufacturers can be expected to embrace dual power.

An HPC is unlikely to require qualitative changes to the data center's existing uninterruptible power supply (UPS) topology or emergency power generation model; rather, what will be required is a relative increase in UPS power that is brought to a small area of the data center. The availability of chilled water, UPS power, and generator power will vary by facility, and all of these facets will need to be researched before installing an HPC.
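As a rough sizing illustration, the added UPS capacity can be back-calculated from the HPC's heat rejection, since essentially all of the machine's electrical input ends up as heat. In the sketch below, the loading margin and power factor are assumptions, not vendor requirements.

```python
# Rough UPS capacity estimate for the entry-level HPC example (512 tons).
# The 80 percent loading margin and 0.9 power factor are assumptions;
# nearly all electrical input to the HPC is rejected as heat, so the IT
# load can be back-calculated from the cooling load.

KW_PER_TON = 3.517

def ups_kva_required(heat_load_tons, max_loading=0.80, power_factor=0.9):
    it_load_kw = heat_load_tons * KW_PER_TON
    return it_load_kw / (max_loading * power_factor)

print(f"UPS capacity needed: {ups_kva_required(512):,.0f} kVA")  # roughly 2,500 kVA
```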

Given the multiple forces that could drive adoption of HPC in commercial and institutional data centers, now is the time to start researching and planning.

Kevin J. McCarthy Sr. is vice president of EDG2 Inc., a global engineering and program-management firm specializing in the design of mission-critical facilities such as data centers, high performance computing, digital hospitals, network operation centers, trading floors, call centers, telecommunication switch sites (central office facilities), broadcast facilities and complex structures. The firm is headquartered in McLean, Va.



