A design that can meet today’s needs and tomorrow’s challenges starts with a close look at an organization’s requirements
In case anyone hasn’t noticed, data centers have arrived. It’s hardly been a decade since they first emerged from the dark corners to which data management functions were once relegated. Today, organizations of all kinds are facing up to the mission-critical nature of the information they use and the need for dedicated infrastructure to support it.
It’s an evolution that’s happening across the board, spreading far beyond the major financial institutions and telecom companies that have long blazed the trail on data-center design and operation.
State-of-the-art data centers are coming to health care, where HIPAA requirements and a competitive labor market are upping the ante on maintaining the latest in digital technology. They’re coming to academia, where old, department-by-department data-management practices have become unreliable, unwieldy and expensive. And they’re even coming to the low-tech segments of corporate America, where Sarbanes-Oxley, Securities and Exchange Commission requirements, and the plain old free market have made speed and reliability essential in everything from e-mail to file storage.
“A lot of companies are biting the bullet on the need to build dedicated data centers for the first time,” says Brion Sargent, vice president of operations in HOK’s Dallas office. “Others are recognizing that the time has come to admit that their data center looks like the Mir space station and do something about it.”
For facility executives charged with balancing the two biggest concerns in data center design and management today — reliability and cost — data centers’ emergence brings a complicated set of issues.
All shapes and sizes
Today’s data centers come in all shapes and sizes. Some are dedicated spaces located within facilities that are primarily used for other functions — corporate headquarters buildings, for instance. Growing numbers are located on dedicated sites across campus, across town or across the country from the operations they support. In keeping with a trend that first emerged with the Internet boom of the late 1990s, some organizations opt to outsource their data management to specialized contractors that maintain their own data center facilities. With the construction cost for a high-end data center rocketing from some $20 million five years ago to nearly $100 million now, it’s a decision organizations are weighing very carefully.
“The industry is more mature now than it was in the late ’90s, and companies are approaching these decisions from a more thoughtful, business-focused perspective than they were a few years ago,” says Bill Kosik, Chicago managing principal, EYP Mission Critical Facilities.
The sobering costs associated with building and later maintaining a data center have helped make for a pragmatic approach to their design. Today’s ideal data center is efficient and adaptable, but devoid of bells and whistles that are not essential to supporting IT applications.
“We used to get people asking for a lot of ‘RBLs’ — that’s red blinking lights — but now it’s kind of passé to show off your technology,” says HOK’s Sargent. “Today, data centers are more lab-like with lots of easily interchangeable technology that helps you get the job done in the best way possible.”
That said, as organizations develop new respect for the importance of their data, so too are they beginning to regard their data center as an extension of their mission. That means that, while form follows function within the four walls of the data center, today’s designers pay attention to exterior aesthetics.
This is particularly true of the growing numbers of data centers that are being developed in suburban office parks or residential areas as organizations search for sites on which to build and maintain them affordably.
“We still do lots of anonymous concrete box data centers, but the most satisfying are those where the corporation opts for a building that speaks to the public about what the corporation is about,” says Doug McCoach, vice president, RTKL. “We’re starting to see that more and more.”
Sustainable design is also garnering increased attention where data centers are concerned.
“Data centers inherently do not lend themselves to energy conservation, so you have to look for every opportunity,” says Joseph Lauro, senior project architect with Gensler, which recently collaborated on the first LEED Platinum data center, in Urbana, Md. Because it was so difficult to reduce energy consumption on the project, Lauro reports, the design team focused on using the site to its advantage: providing alternative transportation options, using alternative fuel, maximizing water efficiency and specifying low-VOC materials.
In addition, many organizations — particularly those concerned about attracting and retaining sought-after IT workers — want to soften the edges of their data centers by surrounding the “white space” that actually houses the IT equipment with daylit work areas offering the amenities of Class A office space.
“A good data center is a lousy place to have a desk and a good office is a bad place for computer equipment,” says McCoach. “So we’re seeing design with an eye toward accommodating the full spectrum of uses — people as well as equipment.”
The desire to make data centers more accommodating to occupants is balanced by the desire of others to make them fortresses — an attitude that has led to increased use of state-of-the-art security measures like fingerprint and retinal-scan access control.
“We’re getting lots of calls these days from folks who want new data centers that are essentially concrete bunkers,” adds Sargent. “It’s a direct response to Hurricane Katrina, which made people all around the country — but especially in areas that could be hit by a natural disaster — think long and hard about what something like that could do to their businesses.”
Smaller, faster, hotter
If there is one issue common to the design process for every data center, it is the need to plan creatively for cooling. Today’s servers are orders of magnitude smaller and faster than even recent generations. They also generate more heat.
Because new servers are so much smaller, data centers are expected to accommodate more of them. Where four to six servers per rack, drawing a total of 1 to 2 kilowatts, was once the standard, 10 to 15 servers per rack is normal today. With “blade” servers, it is possible to fit up to 64 servers in a single rack, which can consume about 10 kW.
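The arithmetic behind that jump is sobering. Here is a back-of-the-envelope sketch using the standard conversion of 3.412 BTU per hour per watt; the rack counts and wattages are illustrative, not figures from any particular facility:

```python
# Back-of-the-envelope rack heat load. Nearly every watt a server
# draws is rejected into the room as heat.

BTU_PER_WATT = 3.412    # 1 W of IT load ~= 3.412 BTU/h of heat
BTU_PER_TON = 12_000    # one "ton" of cooling handles 12,000 BTU/h

def cooling_tons(racks: int, kw_per_rack: float) -> float:
    """Cooling load, in tons, implied by a row of racks."""
    watts = racks * kw_per_rack * 1_000
    return watts * BTU_PER_WATT / BTU_PER_TON

# The same 10-rack row, yesterday and today:
print(f"10 racks at 1.5 kW: {cooling_tons(10, 1.5):.1f} tons")   # ~4.3
print(f"10 racks at 10 kW:  {cooling_tons(10, 10.0):.1f} tons")  # ~28.4
```

A row that once needed about four tons of cooling can now demand nearly thirty.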
“These chips and processors are becoming power-hungry little monsters,” says Cyrus Izzo, senior vice president and national critical facilities director, Syska Hennessy Group.
Rising temperatures in data centers are a threat to reliability, making cooling a major priority. But the standard cooling approach for data centers — cool air forced upwards through a raised floor — is not only expensive and inefficient, but also simply doesn’t work anymore in some data centers. Facility executives, data center designers, and IT manufacturers are all scrambling to find ways around simply filling their data centers with more, bigger air-conditioning units to balance out every increase in server capacity.
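To see why the raised floor runs out of headroom, consider the airflow a high-density rack demands. The sketch below uses the standard sensible-heat relation (BTU/h = 1.08 × CFM × temperature rise in °F); the per-tile delivery figure is a commonly cited rule of thumb, not a measurement:

```python
# Supply air a rack needs vs. what one perforated tile delivers.
# Sensible-heat relation: BTU/h = 1.08 * CFM * delta-T (deg F).

BTU_PER_WATT = 3.412

def cfm_required(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to absorb a rack's heat."""
    btu_per_hour = rack_kw * 1_000 * BTU_PER_WATT
    return btu_per_hour / (1.08 * delta_t_f)

TILE_CFM = 500  # rough rule-of-thumb delivery from one perforated tile

for kw in (2, 10):
    cfm = cfm_required(kw)
    print(f"{kw:>2} kW rack needs ~{cfm:,.0f} CFM "
          f"(~{cfm / TILE_CFM:.1f} tiles of supply air)")
# 2 kW  -> ~316 CFM: comfortably one tile
# 10 kW -> ~1,580 CFM: several tiles for a single rack
```

When one rack needs several tiles’ worth of supply air, simply adding air-conditioning capacity stops being an answer.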
“The biggest issue of the day by far is high-density cooling, which presents a major challenge to facilities managers on how to retrofit existing sites to accommodate high density technology such as blade servers,” says Robert J. Cassiliano, president and CEO of Business Information Services, a technology services and consulting firm, and chairman of 7x24 Exchange.
Many of today’s data centers attempt to address the cooling issue through load neutralization strategies, which reduce the demand on ambient cooling systems by concentrating cold air on the IT equipment itself. Some such systems attach to the back of individual racks or create a “cold aisle” from which facing rows of racks draw their supply air. Newer solutions include enclosed racks with built-in cooling, as well as water-based systems that, despite their cooling potential, give some experts the jitters.
“Everyone is sort of waiting for someone else to implement a water-based system on a large scale first,” says Izzo.
Despite widespread hesitation, many in the industry think that the use of water-based systems is only a matter of time. “From everything I am hearing from the experts, get ready for the comeback of water cooling,” says Cassiliano. “Despite great resistance, it may be the only alternative to cool high density environments properly.”
Ultimately, servers and chips may be manufactured with internal cooling media that will enable the systems to regulate themselves. And developments on the power consumption side promise to reduce the amount of energy consumed by IT equipment, which will reduce both energy bills and cooling demands. Experts anticipate that more efficient servers and power supplies, along with systems that can operate on DC power instead of AC, will help curb power consumption in years to come.
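The savings at stake are easy to rough out. The sketch below compares annual electricity costs for the same IT load at two power-supply efficiencies; the load, utility rate and cooling overhead are illustrative assumptions, not benchmarks:

```python
# Hypothetical annual electricity cost for a fixed IT load at two
# power-supply efficiencies. All inputs are illustrative assumptions.

HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.10       # assumed utility rate, $/kWh
COOLING_OVERHEAD = 0.5    # assume ~0.5 W of cooling per W drawn

def annual_cost(it_load_kw: float, supply_efficiency: float) -> float:
    """Yearly cost of powering and cooling a given useful IT load."""
    wall_kw = it_load_kw / supply_efficiency      # power drawn at the wall
    total_kw = wall_kw * (1 + COOLING_OVERHEAD)   # plus the cooling burden
    return total_kw * HOURS_PER_YEAR * RATE_PER_KWH

load_kw = 500
old = annual_cost(load_kw, 0.70)
new = annual_cost(load_kw, 0.90)
print(f"70%-efficient supplies: ${old:,.0f}/yr")
print(f"90%-efficient supplies: ${new:,.0f}/yr (saves ${old - new:,.0f})")
```

Every watt saved at the power supply is saved again at the cooling plant, which is why efficiency gains compound.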
Larger problem
According to Kenneth Brill, executive director of the Uptime Institute, however, all of these energy-saving measures amount to little more than rearranging deck chairs on the Titanic.
“Power consumption per $1,000 spent on IT has been increasing by three times every year, and that trend is going to continue for the most part,” says Brill. “The various measures you can take will make some difference, but the fact is that we’ve arrived at the point where the cost of the infrastructure for a data center exceeds the cost of the hardware. That’s a radical change.”
It’s a change, say Brill and others, that truly necessitates a new outlook on the relationship between IT and facilities as data demands drive facility programming as never before.
“Real estate and facilities costs are going to be fully 10 percent of the IT budget,” says Brill. “We need changes in the applications that are driving these increases, changes in the way facility costs are charged back, and changes in the way facility and IT people work together. All of these things need to start happening.”
What color is your data center?
The process of designing the ideal data center for an organization begins with a series of questions that can help a facility executive zero in on the organization’s needs. What kinds of applications does the data center need to support? How much reliability does the organization really require? What are the resources available for maintenance and upkeep? How often will systems be upgraded? The answers to questions like these should guide every issue in the design process.
While data centers vary widely to reflect organizations’ different needs, good designs generally have at least a few elements in common. One key word, say experts, is integration — each component of the design should complement the other components, right down to details like cable management.
According to Robert McFarlane, partner, Shen Milsom & Wilke, “the cable itself is smaller and more uniform, but there’s more of it. As a result, cable management — under-floor, overhead, and within the equipment cabinets — must be part of an integrated solution.”
In addition, a good design allows a piece of electrical equipment to be removed while the data center keeps running, says EYP’s Kosik.
“The term is ‘concurrent maintainability’ and it means you can take out any one piece without affecting the critical load,” he says.
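The test is simple to state, as the minimal sketch below illustrates: remove any single unit and confirm that what remains can still carry the critical load. The module sizes and loads are hypothetical:

```python
# A minimal "concurrent maintainability" check: with any one piece of
# plant removed, remaining capacity must still cover the critical load.

def concurrently_maintainable(unit_kw: list[float],
                              critical_load_kw: float) -> bool:
    """True if the system survives the removal of any single unit."""
    total = sum(unit_kw)
    return all(total - unit >= critical_load_kw for unit in unit_kw)

ups_modules = [500.0, 500.0, 500.0]  # three 500 kW UPS modules ("N+1")
print(concurrently_maintainable(ups_modules, 1_000))  # True: any one can go
print(concurrently_maintainable(ups_modules, 1_200))  # False: no margin left
```

The same check applies to chillers, switchgear and distribution paths, not just UPS modules.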
Another must is flexibility. No one has a crystal ball, and experts report seeing brand-new data centers effectively become obsolete within a year or two because of the incredibly rapid pace of change in IT equipment. The best thing a facility executive can do, say experts, is to make educated guesses about the direction the organization is headed and build in as much flexibility as possible.
“You should plan to have the infrastructure in place that you are likely to need in the future, even if you don’t need it now,” says Kosik. “An example would be putting in pipes with the ability to accommodate future water cooling needs.”
“The last thing you want is a facility that is out of date and rigid 36 months from now,” says Syska Hennessy’s Izzo. “That’s a very hard discussion to have with your CEO.”
About the 7x24 Exchange
7x24 Exchange is a nonprofit organization that works to facilitate knowledge exchange among those who design, build, use and maintain mission-critical enterprise information infrastructures. The organization’s goal is to improve end-to-end reliability by promoting dialogue among these groups.
“The organization has 360 member companies, hosts two conferences annually, and sends out a newsletter,” says Robert J. Cassiliano, chairman of 7x24 Exchange and president and CEO of Business Information Services, a technology services and consulting firm.
7x24 Exchange was founded on the assumption that professionals involved with data center uptime issues often work in isolation when dealing with technical, budget, political, and career issues, and that lessons learned through individual trial and error can benefit the group as a whole. To encourage dialogue and facilitate the sharing of experiences and solutions, 7x24 Exchange hosts regular conferences for facilities and IT professionals working on uptime issues.
“The 2005 fall conference, held in San Diego, had 452 attendees, a record attendance,” says Cassiliano. “Conferences highlight technology and facility executives, tutorials, case studies, and lessons learned (for example, after 9/11 and after the Northeast power outage).”
The organization’s next conference will be held in Orlando, Fla., June 4-7.