


What’s New in Data Center Design?



New thinking ranges from temperature targets to edge computing to liquid cooling.


By Maryellen Lo Bosco  


While new data center construction keeps going like gangbusters, innovation in building design continues at a remarkable pace.

As data centers have grown in size and complexity, advances in technology have given building owners more options, says Alan Lurie, managing director, data center project management, CBRE. The first step is a business conversation, says Lurie. The idea is to discover what the business does, what data processing is being accomplished, and what the impact of an outage would be. Understanding those factors is crucial to identifying the right level of cost and creating a data center that is optimized, he says. In the past, IT asked for too much wattage, wasting “tons of electricity” on cooling, Lurie says. “Now data centers are scalable, so that capacity can be added as needed, using a modular approach.” Right-sizing means walking the line between building “too big a palace” and ensuring there is room to expand without having to interrupt operations, Lurie explains.

Some innovations track with sustainability. Traditionally, for example, data centers relied on diesel generators for backup power in the event the facility lost utility power, but some companies are now using natural gas for their backup systems. Natural gas doesn’t require big fuel storage tanks on site and produces lower emissions, says Gary Cudmore, global director, data centers, Black & Veatch.

There is also growing interest in renewable energy. Some data centers do have onsite renewables, but wind and solar power cannot provide a constant supply of electricity. As a result, data center owners typically buy renewable energy credits to offset the amount of grid power their facilities consume.

Grid-scale battery storage is also coming online. “You can store enough power in batteries on site to run a data center if you lose utility power; the only thing holding it back is cost,” says Cudmore. This technology is 18 to 24 months away from becoming commercially viable, he says. In the future, “you will see a big shift from onsite generation to grid-scale battery storage,” Cudmore says. Some utility companies are already deploying the technology where governments mandate it, as in California.
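
To give a rough sense of the scale involved, the sketch below estimates how much battery capacity a facility would need to ride through a utility outage. The 10 MW load, four-hour runtime, and depth-of-discharge figure are illustrative assumptions, not numbers cited by Cudmore.

```python
# Rough sizing sketch for on-site battery storage (illustrative assumptions only).

def battery_capacity_mwh(it_load_mw: float, runtime_hours: float,
                         depth_of_discharge: float = 0.8) -> float:
    """Estimate the nameplate battery capacity needed to carry a load for a given runtime.

    Usable energy = nameplate capacity * depth of discharge, so the nameplate
    requirement is the energy demand divided by the usable fraction.
    """
    energy_demand_mwh = it_load_mw * runtime_hours
    return energy_demand_mwh / depth_of_discharge


if __name__ == "__main__":
    # Hypothetical 10 MW facility that must ride through a 4-hour outage.
    print(f"{battery_capacity_mwh(10, 4):.1f} MWh of nameplate battery capacity")
```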

High-density computing, which will continue to grow with driverless cars and the Internet of Things, is creating the need for “edge computing,” in which data is processed as close as possible to where it is used in order to keep latency low. Latency is the time it takes one device to communicate with another, or the time data transmission takes. Micro data centers, as small as 20 feet by 8 feet, will be plug-in units with generators attached. “Almost like booster towers and cell towers, they will keep signals to cars very close,” says Lurie.
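
To put latency in concrete terms, the short sketch below estimates best-case round-trip propagation delay over fiber at several distances. The distances and the assumption that signals travel at roughly two-thirds the speed of light in fiber are illustrative figures, not numbers from the article.

```python
# Round-trip propagation delay over fiber vs. distance (illustrative sketch).

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly two-thirds the speed of light in vacuum


def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay in milliseconds, ignoring
    switching, queuing, and processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000


for km in (5, 100, 1000, 4000):  # edge site vs. regional vs. distant data center
    print(f"{km:>5} km -> {round_trip_ms(km):6.2f} ms round trip")
```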

Edge data centers will also be convenient for streaming providers, since streaming over long distances costs money and sometimes doesn’t work very well. It is more cost effective and efficient to put “smaller, self-contained data centers with convergence infrastructure closer to where many end users are,” according to Terence Deneny, vice president, Structure Tone Mission Critical.

Over the past several years, many innovations have taken place in data center cooling. First, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) widened its acceptable temperature and humidity ranges for data centers, allowing for lower humidity and higher temperatures. According to Cudmore, this change allowed facilities to use modern versions of venerable technologies such as evaporative and adiabatic cooling. “Getting away from compressorized cooling can save 20 to 30 percent in energy consumption in the data center,” Cudmore says. It also uses less water, which confers a double benefit.
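
As a back-of-the-envelope illustration of what savings in that range can mean, the sketch below applies a 25 percent reduction to the cooling portion of a hypothetical facility’s energy use, one reading of the savings Cudmore describes. The 5 MW IT load and the cooling-overhead fraction are assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope savings from moving off compressorized cooling (illustrative).

HOURS_PER_YEAR = 8760


def annual_cooling_savings_mwh(it_load_mw: float,
                               cooling_overhead: float = 0.4,
                               savings_fraction: float = 0.25) -> float:
    """Estimate annual cooling-energy savings.

    cooling_overhead: assumed cooling energy as a fraction of IT energy
                      (0.4 means 0.4 MW of cooling per 1 MW of IT load).
    savings_fraction: assumed reduction from evaporative/adiabatic cooling,
                      within the 20-30 percent range cited in the article.
    """
    cooling_energy_mwh = it_load_mw * cooling_overhead * HOURS_PER_YEAR
    return cooling_energy_mwh * savings_fraction


# Hypothetical 5 MW IT load: roughly how many MWh per year does 25 percent save?
print(f"{annual_cooling_savings_mwh(5):,.0f} MWh per year")
```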

According to Matt Danowski, associate vice president, CallisonRTKL, a newer technology in the data center is immersion cooling using dielectric fluid, an insulating oil that does not conduct electricity. Because the fluid is non-conductive, it can come into direct contact with electronic components, rather than relying on cold air to remove heat from servers and other devices. Immersion cooling is primarily used for high-density servers drawing more than 15 kilowatts per rack.
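
To give a feel for what carrying 15 kilowatts per rack in liquid involves, the sketch below works out the required dielectric-fluid flow from a simple energy balance. The fluid properties and the 10-degree temperature rise are rough assumptions for a mineral-oil-type coolant, not specifications from Danowski.

```python
# How much dielectric fluid flow does a 15 kW rack need? Illustrative sketch;
# the fluid properties below are rough assumptions for a mineral-oil-type coolant.

SPECIFIC_HEAT_J_PER_KG_K = 2000.0   # assumed specific heat of the dielectric fluid
DENSITY_KG_PER_M3 = 850.0           # assumed fluid density


def required_flow_l_per_min(heat_load_w: float, delta_t_k: float = 10.0) -> float:
    """Fluid flow needed to carry away a heat load with a given temperature rise,
    from the energy balance Q = m_dot * cp * dT."""
    mass_flow_kg_s = heat_load_w / (SPECIFIC_HEAT_J_PER_KG_K * delta_t_k)
    volume_flow_m3_s = mass_flow_kg_s / DENSITY_KG_PER_M3
    return volume_flow_m3_s * 1000 * 60  # m^3/s -> litres per minute


# A 15 kW rack, the density the article cites as the threshold for immersion cooling:
print(f"{required_flow_l_per_min(15_000):.0f} L/min at a 10 K temperature rise")
```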

Because immersion cooling eliminates traditional underfloor air distribution, raised flooring is no longer necessary. As a result, there is no longer a need for an emergency power-off system, says Danowski. The power-off system was primarily required by electrical codes because wiring was routed below the raised floor in an air plenum.

Another trend is running applications across more than one data center. Rather than build a single expensive, highly redundant facility, some companies are building multiple less expensive facilities with lower redundancy. If one data center goes down, another can pick up the application loads. “Rather than build a Tier Four center, hyperscale data centers are building more Tier One and Tier Two centers so that if one goes down the others can handle the data,” says Danowski.
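
The arithmetic behind that trade-off can be sketched using the commonly cited Uptime Institute availability targets for each tier, treated here as assumptions; the calculation also assumes the sites fail independently, which real deployments only approximate.

```python
# Availability of several lower-tier sites vs. one high-tier site (illustrative sketch).
# Tier figures below are commonly cited Uptime Institute design targets, used here as
# assumptions; the math assumes independent failures across sites.

TIER_AVAILABILITY = {
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}


def combined_availability(per_site: float, sites: int) -> float:
    """Probability that at least one of `sites` independent sites is up:
    1 minus the chance that every site is down at the same time."""
    return 1 - (1 - per_site) ** sites


print(f"One Tier IV site:  {TIER_AVAILABILITY['Tier IV']:.5%}")
print(f"Two Tier II sites: {combined_availability(TIER_AVAILABILITY['Tier II'], 2):.5%}")
```

Under those assumptions, two modest sites backing each other up reach a higher combined availability than a single top-tier facility, which is the logic behind the approach Danowski describes.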

Also new is a move away from traditional single-story data centers to multistory facilities, says Deneny. “People are looking to fit more and more data center space on existing campuses. We used to look at watts per square foot; now we are looking at megawatts per acre and how much power can fit on those campuses.”
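
For readers used to the older metric, the two are related by simple arithmetic: an acre is 43,560 square feet, so the sketch below converts a hypothetical campus density of 3 megawatts per acre into watts per square foot of land area. The 3 MW-per-acre figure is an assumption for illustration, not one from the article.

```python
# Converting between the two density metrics mentioned above (simple arithmetic sketch).

SQ_FT_PER_ACRE = 43_560


def mw_per_acre_to_w_per_sqft(mw_per_acre: float) -> float:
    """Express a campus-level density (MW per acre) as watts per square foot of land."""
    return mw_per_acre * 1_000_000 / SQ_FT_PER_ACRE


# A hypothetical campus planned at 3 MW per acre:
print(f"{mw_per_acre_to_w_per_sqft(3):.0f} W per square foot of land area")
```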

Maryellen Lo Bosco is a freelance writer who covers facility management and technology. She is a contributing editor for Building Operating Management. 





Posted on 6/4/2019



