Corporate Needs Shape Data Centers
A critical challenge is keeping up with demands for cooling and power
A decade or so ago, mission-critical data centers were largely the province of financial giants and large medical centers. The average company could rely on simpler, less expensive options for protecting critical operations. Today, with business being conducted at the speed of the Internet, customers are demanding — and receiving — 7x24 services, even from small e-commerce specialty shops.
“More businesses are on a 7x24 schedule,” says Robert Cassiliano, chairman of the 7x24 Exchange and president and CEO of Business Information Services. “For them, information has to be available in real time all the time.”
The data center has moved from being a “necessary evil that was not a part of the core business to something companies cannot live without,” says Cyrus Izzo, senior vice president and national critical facilities director for Syska Hennessy Group. “If a local ATM is out of order or a customer cannot get pay-per-view, the company is not meeting customer needs.”
Evolution of needs
As corporate needs evolve, the demand on data centers increases, says R. Stephen Spinazzola, vice president and director of engineering for RTKL. In the information industry, this phenomenon is called “mission creep”: the data center suddenly finds itself responsible for additional functions and a level of reliability well beyond the initial design intent.
“For example, data centers in hospitals and medical centers began by supporting payroll, insurance paperwork and money issues,” says Spinazzola. “Then they started using digital communications for their pharmacies. Now the data center is a clinical service in many hospitals.”
In conventional business operations, demands on data centers also have been affected by the requirements of Internet commerce. As a result, the data center is no longer a cost center relegated to a “we have to have it” mentality. “The new data center is often the core, the heart and soul behind the business,” says Izzo.
Often, today’s data centers are very complex with high requirements for reliability, says John Pappas of Mazzetti & Associates. “This is driving some owners to consolidate multiple sites into one larger, more robust facility.” The consolidation frequently results in lower operational costs, according to Pappas.
On the other hand, Leo Soucy Jr. of Facilities Engineering Associates sees “much smaller data centers — often 1,500 to 5,000 square feet — that are housing much more critical data processing operations.”
New applications
Soucy has also noticed more interest in mission-critical design for locations that are not strictly data center environments, such as broadcast facilities, hospital centers and certain manufacturing operations.
Another driving factor, says Pappas, is “the development of new applications that continue to increase the processing requirements of facilities, from new medical applications for patient information to number crunching and projection modeling for financial institutions to more complex finite element modeling in engineering research.”
Sept. 11 changed the way businesses look at disaster recovery requirements, says Cassiliano. “Today, companies are building facilities with some distance between them so that critical facilities cannot be hit by the same disaster.”
New legislation is also making businesses more aware of and accountable for their assets, says Izzo. In the wake of colossal mismanagement at such large publicly traded companies as Enron and WorldCom, the boardroom can no longer hide behind bad business judgment. Instead, more businesses must account for critical assets and what they are doing with those assets.
Spinazzola identifies another force driving the new focus on data centers: a combination of pent-up demand and the aging of the computer infrastructure. “Equipment, servers, hubs, et cetera, have a four-year life expectancy, while the facility’s mechanical and electrical infrastructure has a 10- to 15-year life expectancy,” he says.
With lightning-fast blade servers becoming more common in today’s data center environment, two issues become paramount. Today’s information technology demands power — lots of power — and produces significantly more heat than its predecessors. As the technology keeps getting smaller and smaller, these power needs and byproduct heat dissipation requirements become more complex challenges.
Can today’s existing data centers handle the new demands? Cassiliano believes that in many cases the answer is yes. “We can make upgrades to HVAC and power systems to accommodate the newer equipment in most instances. But air flow needs to be analyzed and the room redesigned for the proper airflow dynamics to cool today’s servers.”
Izzo agrees that the first hurdle is the increased cooling demands of smaller, faster computer processing. “Facilities designed with a hot aisle/cold aisle design have a bit of a leg up when it comes to renovations,” says Izzo.
But there are times when it isn’t practical to bring the current data center up to date. “The need for reliability sometimes makes upgrading existing facilities too costly or too much trouble, given the potential disruptions that may occur,” says Pappas.
“Many existing data centers cannot be modified to meet the much higher watts per square foot requirements,” says Soucy. “You may be able to accommodate 75 watts per square foot, but not the 100 and 150 watts per square foot loads.”
Designing data centers
Because some older data centers cannot be retrofitted, today’s IT designers are trying to build as much flexibility and scalability as possible into new designs. That means data center designers need to know where the corporation sees its data needs growing, both short term and long term.
As Izzo notes, how much flexibility can be built in depends on the corporate checking account, the ability to predict how operations may change in five or 10 years, and what will happen with the next generations of technology. On the latter point, Izzo sees one constant: “Servers will continue to get faster, smaller and hotter. And chief information officers are going to expect the data center to absorb the new models and to be able to put more of them in the same footprint. That’s going to drive up power requirements and heat output.”
Probably the biggest challenge is knowing how the data center will be configured, says Mark Evanko, principal engineer for BRUNS-PAK. “Even if you have two data centers that each require 100 watts per square foot, how they are configured makes each different.”
Evanko compares the configuration to a land development. Not that many years ago, all computer data center landscapes featured the equivalent of ranch houses, bi-levels and colonial homes, Evanko says. “But in the new landscapes, we’re seeing a ranch house, then a colonial, then a 10-story building. Plans for the data center’s future may call for replacing three two-story houses with three 35-story office buildings.”
So, even if a data center is designed for 100 watts per square foot, it still could fail to meet current or future data center demands for those 10-story buildings where 700 to 800 watts per square foot may be needed.
New IT challenges
The No. 1 concern with new data center technology is high-density cooling, says Cassiliano. “Blade servers, for example, generate a significant amount of heat, so they require additional cooling.”
New data centers are being built to meet much higher watt-per-square-foot load densities, says Soucy. “The emphasis is on air conditioning requirements. We now see watt densities of 75 watts per square foot as normal.” In new design parameters, Soucy sees even higher densities, sometimes 100 to 150 watts per square foot.
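To make those densities concrete, here is a rough back-of-envelope sketch of how individual rack loads translate into watts per square foot. The rack power and footprint figures are illustrative assumptions, not numbers supplied by the designers quoted here.

```python
# Back-of-envelope sketch: converting per-rack loads to floor density.
# The rack power and footprint values are illustrative assumptions,
# not figures cited in the article.

rack_power_watts = 2_000      # assumed power draw of one equipment rack
rack_footprint_sqft = 25      # assumed floor area per rack, including its share of aisle space

density = rack_power_watts / rack_footprint_sqft
print(f"{density:.0f} watts per square foot")       # 80 W/sq ft, near the 75 W/sq ft "normal"

# A denser rack in the same footprint pushes toward the higher design targets.
denser_rack_watts = 3_500
print(f"{denser_rack_watts / rack_footprint_sqft:.0f} watts per square foot")  # 140 W/sq ft
```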
There are many ways to address the issue of new cooling requirements, Pappas says. Strategies include spreading the equipment out, building isolated high-density areas or using one of the new spot cooling products on the market. “A clear understanding of air flow and heat extraction can also solve these problems in existing data centers.”
“Data centers are being designed with taller equipment racks, which generate additional heat,” says Cassiliano. “So we must use the software tools available to analyze the cooling and air flow requirements of each configuration carefully.”
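Those analysis tools ultimately rest on a simple heat balance: the air passing through a cabinet must carry away the heat the equipment rejects. The sketch below applies that balance to an assumed cabinet load and temperature rise; both values are illustrative assumptions rather than figures from the article.

```python
# Minimal heat-balance sketch behind airflow analysis:
#   heat removed = air mass flow * specific heat * temperature rise
# so the required airflow is Q / (rho * cp * dT).
# The cabinet load and allowable temperature rise are illustrative assumptions.

heat_load_w = 8_000        # assumed cabinet heat load, watts
delta_t_k = 11.0           # assumed air temperature rise across the cabinet (about 20 F)
cp_air = 1_005.0           # specific heat of air, J/(kg*K)
rho_air = 1.2              # density of air, kg/m^3

volume_flow_m3s = heat_load_w / (rho_air * cp_air * delta_t_k)
cfm = volume_flow_m3s * 2_118.88     # cubic meters per second to cubic feet per minute

print(f"{volume_flow_m3s:.2f} m^3/s (about {cfm:.0f} CFM)")
# Roughly 0.6 m^3/s, or about 1,300 CFM, for one cabinet -- one reason clearances
# and air distribution paths matter so much in dense rooms.
```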
Cooling approaches
High-density cooling requires careful evaluation of the thermodynamics of heat dissipation, says Evanko. “High-density cooling requires huge caverns of space.”
Existing data centers may simply not have the clearances needed to move air effectively for cooling. “Today, we are looking at warehouse-type facilities with 20-foot clearances,” says Evanko. “It’s an engineering challenge to dissipate the heat.”
Addressing that challenge increasingly requires sophisticated engineering software that models air flow with computational fluid dynamics, Izzo says. “We use a computational model of the space and help the owner locate very hot equipment to preclude developing hot spots in the data center. Before the design is complete we also try to crystal ball where additional equipment will be deployed so that it does not create future problems.”
Major manufacturers also are working on the problem. Spinazzola knows of at least three manufacturers that offer air-cooled cabinets able to handle up to 8 kilowatts per cabinet. Evanko says that IBM, Sun Microsystems and HP are investigating liquid cooling, basically water, as the cooling medium. The reasons for using water were outlined by Donald L. Beaty, president of DLB Associates, in the Fall 2004 edition of 7x24 Exchange NewsLink. Beaty points out that the specific heat capacity of water is more than four times that of air, and that the density of water is about 1,000 times the density of air. In addition, the thermal conductivity of water is about 30 times greater than that of air, and water has a much lower heat transfer resistance.
Those numbers show that water is an excellent heat transfer agent. But facilities executives and some designers are concerned about combining liquid cooling with so much electrical power and humidity-sensitive computing equipment.
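A quick calculation makes the point. Using standard textbook property values (assumed here; they are consistent with the ratios Beaty cites), water carries on the order of a few thousand times as much heat as air per unit volume for the same temperature rise.

```python
# Rough comparison of water and air as heat-transport media, using standard
# textbook property values (assumed here, consistent with the ratios Beaty cites).

cp_water, rho_water = 4_186.0, 1_000.0   # specific heat J/(kg*K), density kg/m^3
cp_air,   rho_air   = 1_005.0, 1.2

# Heat carried per cubic meter of coolant for each degree of temperature rise.
vol_heat_water = cp_water * rho_water    # about 4.2 MJ per m^3 per K
vol_heat_air   = cp_air * rho_air        # about 1.2 kJ per m^3 per K

ratio = vol_heat_water / vol_heat_air
print(f"Water carries roughly {ratio:,.0f} times the heat of air per unit volume.")
# Roughly 3,500x: removing 8 kW takes well over a thousand CFM of air,
# but only a fraction of a liter of water per second at the same temperature rise.
```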
“The movement to at least using liquid in some capacity is already gaining momentum through hybrid solutions utilizing liquids within the equipment packaging and then air-to-liquid heat exchangers as the interface with the surrounding environment,” writes Beaty.
The combination is viable, according to Spinazzola, “if you can live with water at the rack level as part of your data center’s deployment.”
Because major computer manufacturers are involved, Evanko expects water-cooled equipment will be available shortly and that designers and facilities executives will need to accommodate it.
7x24 Exchange Focuses on End-to-End Reliability
7x24 Exchange is the leading knowledge exchange for facilities executives, users, designers and builders of mission-critical enterprise information infrastructures. It was founded in 1989 by a group of technology and facility professionals.
“At that time, information technology and facilities executives worked in separate silos, not as a team,” says Robert Cassiliano, chairman of 7x24 Exchange and president and CEO of Business Information Services, a tech services company. “We decided we needed a forum where people could talk together about the issues and understand the problems that facilities people face and technology people face. We held our first meeting in a brokerage house in New York City with 16 people.”
Last fall’s conference drew more than 375 people. Today, the group that started with a goal “to improve end-to-end reliability by promoting dialogue” has 278 member companies and 14 chapters across the country.
Making sure information continues to flow among professionals, 7x24 Exchange holds two major conferences on current topics each year. “At our spring and fall conferences, we cover important issues related to uptime,” explains Cassiliano. “For instance, high-density cooling is a hot topic now, so there are a number of presentations on that subject. Our conference following Sept. 11 had major Wall Street companies speaking about how the disaster impacted their operations and the lessons they learned. When the Northeast power outage occurred, we covered its causes, predictions about further occurrences and actions to protect mission-critical facilities.”
Vendors also participate and respond to issues raised through 7x24 Exchange. For example, power supply redundancy concerns that members brought to vendors’ attention led to the development and implementation of dual power cord technology. Members also brought breaker and harmonic problems to vendors. “Vendors react because their client base is a large part of our membership,” says Cassiliano.
Currently, the Exchange is creating an advisory board of high-profile technology executives. These executives are expected to provide 7x24 Exchange with a better understanding of data center operations from the perspective of the chief information officer and chief technology officer. “We need the input from the senior level on what they see as important to 7x24 operations,” says Cassiliano.