Data Centers: Myths of Energy Efficiency
Too many facility executives think improving energy efficiency means putting reliability at risk. Energy Star, Lawrence Berkeley Lab and others are working to dispel that notion
As complex as they are critical, data centers and telecommunications switching rooms present a range of challenges for facility executives. Not least among these is the delicate balancing act of ensuring reliability without letting energy costs balloon.
Data centers and telecommunications switching rooms, known in the industry as central offices, provide mission-critical support for the daily operations of all kinds of organizations. Central offices house the switching and IT infrastructure that telephone companies, Internet service providers and other organizations in the communications industry depend on. Data centers process and store electronic information, images, files and correspondence, a function that has become increasingly critical with the passage of laws like the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act, which require companies across industry sectors to store and manage more data. Those laws are prompting organizations to build or expand data centers.
Because even a few minutes of downtime can jeopardize an organization’s operations significantly, data centers and central offices must be highly reliable. But these spaces also use enormous amounts of energy. Depending on the size of the space and the age and type of equipment it contains, energy consumption typically ranges from 20 to 100 watts per square foot. Newer servers, though smaller, are more powerful and consume more energy, so some high-end facilities with state-of-the-art technology guzzle up to 400 watts per square foot. In short, these spaces consume many times more energy than office facilities of equivalent size.
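To put those figures in rough perspective, the back-of-the-envelope sketch below compares annual electricity use for a data center and an office of the same size. The floor area, office load density and electricity rate are illustrative assumptions, not figures from the article.

```python
# Rough, illustrative comparison of annual electricity use: a data center
# versus an office of the same size. All inputs are assumptions for this
# example only; both loads are treated as continuous for simplicity, which
# understates the gap because offices run far fewer hours.

AREA_SQFT = 10_000            # assumed floor area for both spaces
DATA_CENTER_W_PER_SQFT = 100  # high end of the 20-100 W/sq ft range cited above
OFFICE_W_PER_SQFT = 5         # assumed typical office lighting-plus-plug load
HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.10           # assumed electricity price, $/kWh

def annual_kwh(watts_per_sqft: float) -> float:
    """Annual kilowatt-hours for a continuously operating load."""
    return watts_per_sqft * AREA_SQFT * HOURS_PER_YEAR / 1_000

dc_kwh = annual_kwh(DATA_CENTER_W_PER_SQFT)
office_kwh = annual_kwh(OFFICE_W_PER_SQFT)

print(f"Data center: {dc_kwh:,.0f} kWh/yr (about ${dc_kwh * RATE_PER_KWH:,.0f})")
print(f"Office:      {office_kwh:,.0f} kWh/yr (about ${office_kwh * RATE_PER_KWH:,.0f})")
print(f"Ratio: {dc_kwh / office_kwh:.0f}x")
```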
Up to 75 percent of this energy feeds power-hungry servers and other IT equipment. The remainder — roughly 25 percent — results primarily from the operation of mechanical and electrical systems that keep the lights on and, above all, keep the IT equipment cool. Smaller, more powerful IT equipment is considerably hotter than older systems, making heat management a major challenge.
“Thermal management is really the biggest issue at the moment,” says Magnus Herrlin, principal, ANCIS Inc. “There’s too much hot equipment in these spaces. It’s leading to failures.”
For a facility executive, the risk of a shutdown is nothing short of terrifying. As a result, says Stuart Brodsky, national program manager for commercial property markets with the Environmental Protection Agency and Department of Energy’s ENERGY STAR program, facility executives tend to shelve most concerns about energy efficiency for fear of compromising reliability.
“It is very clear that the mandate to reduce risk of equipment failure results in high energy costs for cooling equipment,” he says.
According to Brodsky and other experts, however, the reality is far more nuanced than a black-and-white choice between efficiency and reliability. And several widespread myths and misconceptions contribute to the sense that facility executives must give up all thought of efficiency to ensure reliability.
Myth No. 1:
Nothing can be done about high energy consumption in data centers and telecom central offices.
It’s true that facility executives have a limited amount of control over much of the energy consumed in these critical spaces. After all, only about 25 percent of the total consumption will be affected by most energy-conservation measures a facility executive can implement. And a good portion of that 25 percent is unavoidable.
“Achieving savings of 5 or 10 percent on the total energy consumption of the space is an optimistic goal,” says Herrlin.
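The arithmetic behind that ceiling is simple: if roughly a quarter of total consumption goes to the supporting infrastructure, even aggressive cuts to that share show up as single-digit savings overall. A minimal sketch, with the achievable infrastructure reductions assumed purely for illustration:

```python
# Why single-digit savings on total consumption is an "optimistic goal":
# only the infrastructure share is under the facility executive's control.

INFRASTRUCTURE_SHARE = 0.25  # cooling, power and lighting share cited in the article

# Fractions of the infrastructure load that conservation measures might
# realistically eliminate -- assumed values for illustration.
for infrastructure_cut in (0.20, 0.40):
    total_cut = INFRASTRUCTURE_SHARE * infrastructure_cut
    print(f"Cut infrastructure energy by {infrastructure_cut:.0%} "
          f"-> total consumption falls by {total_cut:.0%}")
```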
Still, given the amount of energy devoured by data centers and telecom central offices, even single-digit savings are significant. To work toward this goal, experts recommend de-emphasizing the role of the chiller and focusing instead on optimizing air management. Among other strategies, this means preventing air that has been heated by the IT equipment from recirculating into the front of the servers.
“You want to capture cool air and return it as directly as possible to the cooling unit,” says Steve Spinazzola, vice president in charge of RTKL’s Applied Technology Group.
“The more you mix the air inside the data center, the more you are destroying the energy,” Herrlin says.
In addition, experts recommend making sure that the HVAC system is optimized by fine-tuning dampers, belts, fans, pumps and drives; ensuring that air filters are cleaned regularly; and checking thermostats and occupancy sensors to make sure they are doing their jobs.
Similarly, says William Kosik, Chicago managing principal, EYP Mission Critical Facilities, “very rigorous maintenance procedures benefit both reliability and efficiency.”
A building automation system can help increase control and monitoring capabilities, while allowing for the tracking and analysis of operational data. So, too, can technologies that enable facility executives to save on non-critical functions.
“We use motion detectors to control the lights in the central office,” says Kathy Brill, manager, energy, BellSouth. “There is less need for people to be in there than there was in the past, so there is no reason to pay for lights when the area has no one in it.”
The potential savings from measures like these may seem small in the face of energy consumption that pushes 100 watts per square foot. And, relatively speaking, they are.
“There are just not that many opportunities for efficiency,” says Kosik. “So you have to take advantage of all the opportunities that exist.”
Myth No. 2:
Energy-efficiency measures compromise reliability.
Perhaps the most common misconception related to energy in critical spaces is the notion that efficiency has to come at the expense of reliability. That myth has many facility executives running away from measures that might decrease their costs.
“People are really scared that they are going to implement an efficiency measure that saves $10,000 and cause a failure that costs $2 million,” says Herrlin. “They are right in thinking that there can be a contradiction between energy efficiency and reliability, if you don’t do it correctly. But that doesn’t have to be the case. In fact, the two can go hand in hand.”
Sound air management practices, for example, can aid reliability and trim energy consumption by reducing heat and eliminating the need for excessive cooling. And prudent HVAC commissioning strategies can reduce energy use, extend the life of HVAC equipment and even increase reliability. Kosik points to the example of a data center that has been designed to accommodate an organization’s future growth, as is often the case.
“It may be quite a while before that space is full to capacity with the amount of equipment it was designed to ultimately accommodate,” he says. “You don’t need to cool as though it were full of equipment that’s generating heat. As long as it’s done right, right-sizing the AC with multiple smaller units will actually improve reliability by creating redundancy, as well as producing energy benefits.”
Myth No. 3:
There is no such thing as “cool enough.”
For years, facility executives overseeing data centers and telecom central offices have taken a hit-it-with-a-sledgehammer approach to controlling the spaces’ temperature.
“The fear of system failure leads people to overestimate what they need,” says Brodsky. “It might lead someone to cool to 55 degrees to avoid an overheating problem.”
Indeed, say experts, old thinking on data center and central office management held that more air conditioning was almost always better. Now, however, studies are showing that the IT equipment housed in these spaces can tolerate a fairly wide range of environmental conditions.
“You can relax the temperature and humidity range for these spaces more than people have traditionally thought,” says Herrlin, adding that new standards put the recommended temperature range at 68 to 77 degrees for data centers and 65 to 80 degrees for central offices.
“It turns out that the issue is not the temperature so much as it is the constancy of the environment,” says Rod Sluyter, RS Consulting. “What you really want to avoid is a lot of sudden changes in conditions. If you can keep the temperature constant, it’s OK to bring it up a bit. That’s one way you can save a little on the energy bills.”
Myth No. 4:
A data center or telecom central office is no place for non-traditional approaches to cooling.
Savvy facility executives know better than to experiment with unproven technologies or strategies where reliability matters most. Few would argue that a data center or central office is an ideal place to test a brand-new approach to cooling. However, experts say proven alternatives to traditional chiller plants exist that are effective and reliable while also offering better energy efficiency.
Thermal storage — maintaining a tank of ice or chilled water that provides cooling during the day and is replenished at night when the energy rates are lower — is a common practice in both types of spaces, and may reduce energy costs, though not necessarily overall consumption.
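A rough sketch of why thermal storage tends to lower costs rather than consumption follows; the cooling load, utility rates and storage losses are assumed values for illustration only.

```python
# Thermal storage shifts when cooling energy is purchased, not how much is used.
# All loads and rates below are illustrative assumptions.

DAILY_COOLING_KWH = 2_000  # assumed electricity needed to meet the daytime cooling load
ON_PEAK_RATE = 0.15        # assumed daytime rate, $/kWh
OFF_PEAK_RATE = 0.06       # assumed nighttime rate, $/kWh
STORAGE_PENALTY = 1.10     # assume ~10% extra energy to charge and hold the tank

cost_without_storage = DAILY_COOLING_KWH * ON_PEAK_RATE
cost_with_storage = DAILY_COOLING_KWH * STORAGE_PENALTY * OFF_PEAK_RATE

print(f"Chiller runs on peak:  ${cost_without_storage:,.2f}/day")
print(f"Tank charged off peak: ${cost_with_storage:,.2f}/day")
print(f"Cost drops {1 - cost_with_storage / cost_without_storage:.0%} "
      f"even though energy use rises about {STORAGE_PENALTY - 1:.0%}")
```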
Facilities in cooler climates have the option of using airside economizers that draw in cool outside air as a way of supplementing or even replacing traditional chillers.
“Many central offices operate with airside economizers, but data centers generally don’t,” says Herrlin. “So that is an opportunity for a data center if the conditions are right and there is proper air filtration.”
Waterside economizers that use heat exchangers can be similarly effective while reducing the risk of airborne pollutants, and water-cooled cabinets can be used to create a cooled mini-environment for IT equipment at less expense than cooling the entire space.
Myth No. 5:
It’s impossible to trim the energy consumption of IT equipment.
While moderate improvements in energy consumption can be achieved through measures like those described above, facility executives are often frustrated to discover just how much of the energy consumed in a data center or central office is used by the IT equipment itself. The idea that a portion of the energy use is untouchable can make other efforts to shave consumption feel futile.
“You realize quite fast that, if you are going to make a significant dent, you need to cut down on the 75 percent that goes to the IT equipment,” says Herrlin. “That’s the only way to really hit the jackpot on efficiency improvements.”
Fortunately, experts say, there are ways to reduce the energy consumption of servers and other IT equipment; they just demand a level of inter-departmental communication and coordination that, too often, does not exist.
“There is a huge amount of variation in the energy efficiency of both power supplies and UPS [uninterruptible power supply] systems, and a big range between the best and the worst,” says Bill Tschudi, principal investigator with
Lawrence Berkeley National Laboratory. Tschudi is studying energy consumption in data centers. “You can see a 20 percent improvement as a result of using a more efficient UPS, and that may be more expensive upfront, but we’re talking about a payback of three to six months.”
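The payback logic works roughly as sketched below. The load, efficiencies, electricity rate and price premium are assumed for illustration; the article itself cites only the approximate 20 percent improvement and the three-to-six-month payback.

```python
# Simple-payback sketch for a more efficient UPS. All inputs are assumptions
# chosen to illustrate the math; cooling savings from the reduced waste heat
# are ignored, so the real payback would be somewhat faster.

IT_LOAD_KW = 500           # assumed critical load carried by the UPS
BASELINE_EFFICIENCY = 0.80
EFFICIENT_EFFICIENCY = 0.92
RATE_PER_KWH = 0.10        # assumed electricity price
PRICE_PREMIUM = 30_000     # assumed extra upfront cost of the efficient unit
HOURS_PER_YEAR = 8_760

def annual_loss_kwh(efficiency: float) -> float:
    """Energy dissipated in the UPS itself over a year of continuous operation."""
    input_kw = IT_LOAD_KW / efficiency
    return (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR

annual_savings = (annual_loss_kwh(BASELINE_EFFICIENCY)
                  - annual_loss_kwh(EFFICIENT_EFFICIENCY)) * RATE_PER_KWH
payback_months = PRICE_PREMIUM / annual_savings * 12

print(f"Annual savings:  ${annual_savings:,.0f}")
print(f"Simple payback:  {payback_months:.1f} months")
```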
However, Tschudi says, “The IT department may not care if they buy an efficient power supply as long as it’s cheap, and facilities departments often do not have a say in those decisions.”
To truly maximize the opportunities to conserve energy in a data center or central office, then, top management and decision-makers in both IT and facilities need to understand the importance of working together.
In addition, says Kosik, “There is a lot that is out of the facility owner or manager’s control right now that may come under focus over the next 5 to 10 years.”
For example, says Energy Star’s Brodsky, “There are efforts under way to make servers that provide the same function at less plug load and that produce less heat.”
Experts on data centers and central offices note that the complexity of these spaces and the fact that no two are exactly alike make it difficult to develop concrete benchmarks or guidelines on energy consumption. However, efforts are under way to better understand the way these spaces use energy and issue recommendations for operating them as efficiently as possible. For facility executives who feel that they are working in isolation to solve the efficiency vs. reliability issue, this is good news.
Lawrence Berkeley National Lab says that best practices for data centers should be available on the organization’s Web site soon. In addition, Tschudi’s team is developing a self-benchmarking protocol to help facility executives understand where their facilities stand relative to realistic energy efficiency goals.
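As one illustration of what a self-benchmarking metric can look like (a generic example, not the specific protocol Tschudi’s team is developing), a facility might compare the total power entering the space with the portion that actually reaches the IT equipment:

```python
# Generic benchmarking metric: the fraction of facility power that reaches the
# IT equipment rather than feeding cooling, power conversion and lighting.
# The metered readings below are hypothetical.

def it_power_fraction(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Share of incoming power delivered to IT loads (higher is better)."""
    return it_equipment_kw / total_facility_kw

readings = {"Facility A": (1_000, 600), "Facility B": (1_000, 750)}
for name, (total_kw, it_kw) in readings.items():
    print(f"{name}: {it_power_fraction(total_kw, it_kw):.0%} of power reaches the IT equipment")
```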
“Right now people don’t know how they are operating,” says Tschudi. “That’s a real problem when it comes to attempting to make improvements.”
Similarly, the Energy Star program is examining the operations and energy performance of telecommunications switching rooms around the country.
“We may not end up with Energy Star benchmarks for these places because of their complexity,” says Brodsky. “But we can still help people to identify many opportunities that result from good practices and efficient operations. It can mean a very big difference in the bottom line.”
Abigail Gray, a contributing editor to Building Operating Management, is a writer who specializes in facility issues. She is the former editor of EducationFM magazine.