



High-performance Computing (HPC) Is Likely to Become Mainstream





By Kevin J. McCarthy Sr.  


Until recently, high-performance computing (HPC) was the exclusive domain of academic and government research centers because of supercomputers' ability to perform sophisticated mathematical modeling. In 1976, the first Cray-1 supercomputer was installed at Los Alamos National Laboratory. Designed by Seymour Cray, whom many regard as the "father of supercomputing," the Cray-1 ran at 160 megaFLOPS, or 160 million floating-point operations per second. Last year, Cray Inc. installed what was at the time the world's fastest supercomputer at Oak Ridge National Laboratory; named "Jaguar," the XT5 system delivers 1.8 petaFLOPS, or 1,800,000,000,000,000 floating-point operations per second. This surpassed the IBM "Roadrunner" system, installed in 2008, which was the first computer to break the petaFLOPS barrier.

Fast-forward to March 2010, when Cray introduced the CX1000, which HPCwire.com described as an "entry-level" or "mid-level" machine designed for "the typical data center environment."

Though not exactly "typical" by today's standards, cluster-based supercomputers from Cray and other manufacturers are likely to become a mainstream solution for data centers that require high-availability clusters for 24x7 transaction processing. HPC will also become the required computing platform for Internet content providers if the forecast of a 3D Internet within five to 10 years proves accurate. In addition, the advent of "P4" medicine (predictive, preventive, personalized, participatory) will require HPC analysis of each individual's genome, opening the way for a sea change in medicine, in how doctors treat patients, and in how medical technologies are applied.

Dramatic Changes

Compared with today's typical data center, an HPC facility will require dramatic increases in power and cooling capacity. These massively parallel clusters of specialized servers have a load density of 700 to 1,650 watts per square foot, while most current data centers operate at 100 to 225 watts per square foot, roughly a seven- to sixteen-fold increase.

For a company that has decided to enter the 3D Internet world, let's consider a hypothetical entry-level HPC: a 700-teraFLOPS (700 trillion floating-point operations per second) computer cluster using the Cray XT6 platform.

This HPC will fill two rows of 20 cabinets at 45 kilowatts per cabinet. These cabinets are not typical racks; they are custom enclosures measuring 22.5 inches wide by 56.75 inches deep. At 35 square feet per cabinet (including cold and hot aisles, service space, air-conditioner space, and PDUs), the 40 cabinets total only 1,400 square feet of floor space. Yet if the data center has a capacity of 100 watts per square foot, the HPC would consume the power budgeted for 18,000 square feet. There are two potential solutions: install additional capacity for this system, or set aside a large area of data center space with nothing in it and divert that area's power to the HPC. The system will draw 1,800 kilowatts and reject 6.14 million BTUs per hour, or 512 tons of heat, to water. The system is also far heavier than today's typical, partially loaded rack: each fully configured HPC cabinet weighs 2,000 pounds.
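
The arithmetic behind those figures is straightforward. The sketch below is a quick Python back-of-the-envelope check, using the example's numbers and the standard conversion factors of 3,412 BTU/hr per kilowatt and 12,000 BTU/hr per ton of cooling; it is an illustration of the sizing math, not a vendor specification.

    # Back-of-the-envelope sizing for the hypothetical 700-teraFLOPS cluster.
    # All inputs come from the example above; the conversion factors
    # (3,412 BTU/hr per kW, 12,000 BTU/hr per ton) are industry standards.

    cabinets = 2 * 20                    # two rows of 20 cabinets
    kw_per_cabinet = 45.0                # draw of a fully loaded cabinet
    sqft_per_cabinet = 35.0              # includes aisles, service space, AC, PDUs

    total_kw = cabinets * kw_per_cabinet            # 1,800 kW
    footprint_sqft = cabinets * sqft_per_cabinet    # 1,400 sq. ft.

    # Floor area whose power budget this load consumes in a
    # conventional 100 W/sq. ft. data center
    equivalent_sqft = total_kw * 1000 / 100         # 18,000 sq. ft.

    # Heat rejected to water
    btu_per_hr = total_kw * 3412                    # ~6.14 million BTU/hr
    cooling_tons = btu_per_hr / 12000               # ~512 tons

    print(f"Power: {total_kw:,.0f} kW in {footprint_sqft:,.0f} sq. ft.")
    print(f"Equivalent conventional floor area: {equivalent_sqft:,.0f} sq. ft.")
    print(f"Heat rejection: {btu_per_hr:,.0f} BTU/hr = {cooling_tons:,.0f} tons")

Running it reproduces the totals above: 1,800 kW in 1,400 square feet, an 18,000-square-foot equivalent power footprint, and 6,141,600 BTU/hr (about 512 tons) rejected to water.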


Continue Reading: A Supercomputer in Your Future?


Power and Communication Needs of Supercomputers

Tackling HPC's Massive Heat Generation




Posted on 2/2/2011



