Virtualization and Other Tips to Slash Data Center Energy Use





By Kenneth Brill  


It’s crucial that IT departments understand the facility costs related to servers because they’re buying servers at an amazing pace. In the United States, the number of servers doubled — from 5 million to 10 million — between 2000 and 2005. By 2010, another 5 million servers are expected to be in use, bringing the total to 15 million.

There are several ways to address server energy use. One option is server virtualization. (The server growth figures above already assume some virtualization; without it, the total number of installed servers would be significantly higher.) As things stand now, most servers run only one application, and utilization rates for servers doing transaction processing are often in the low single digits. With virtualization, a single server runs multiple applications, which can reduce the number of servers and the facility costs associated with them.
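
To see the arithmetic behind consolidation, consider the rough sketch below, written in Python. All of the figures in it (the server count, the utilization rates, the per-server wattage and the 50 percent target load for a virtualized host) are illustrative assumptions rather than benchmarks, and it ignores the memory, storage and redundancy headroom that limit consolidation in practice.

    import math

    def consolidation_estimate(physical_servers, avg_utilization,
                               target_host_utilization, watts_per_server):
        """Estimate how many virtualized hosts could absorb the existing
        workload and roughly how much power retiring the rest would free."""
        # Total workload expressed as fractional server-loads.
        total_load = physical_servers * avg_utilization
        # Hosts needed if each may run up to the target utilization.
        hosts_needed = math.ceil(total_load / target_host_utilization)
        servers_retired = physical_servers - hosts_needed
        watts_saved = servers_retired * watts_per_server
        return hosts_needed, servers_retired, watts_saved

    # Assumed figures: 200 servers averaging 5% utilization, hosts loaded
    # to 50%, and 400 W drawn per server.
    hosts, retired, watts = consolidation_estimate(200, 0.05, 0.50, 400)
    print(f"{hosts} hosts could carry the load, retiring {retired} servers "
          f"and freeing roughly {watts / 1000:.0f} kW of IT load.")

With those assumed numbers, 20 hosts could carry the work of 200 lightly loaded servers, freeing on the order of 72 kW of IT load before cooling savings are even counted.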

But there’s an easier way to get started: Turn off servers that are no longer doing anything. Up to 30 percent of all servers are technologically comatose, but they’re still out in the data center, gobbling up power as if they were doing useful work. It simply hasn’t been a priority for IT to identify and remove these servers. With all the pressure on head count, IT departments may have thought that they couldn’t afford the staff time for such a seemingly insignificant task.
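
Finding those candidates rarely requires more than the data a monitoring tool can already export. The sketch below, in Python, is a minimal illustration assuming a hypothetical 30-day CSV export with hostname, average CPU and average network columns; the file name, column names and thresholds are invented, and no server should be powered off without first confirming application ownership and checking for periodic jobs.

    import csv

    CPU_THRESHOLD = 2.0      # average CPU percent over the observation window
    NETWORK_THRESHOLD = 1.0  # average kB/s of non-management traffic

    def find_idle_candidates(report_path):
        """Flag servers whose measured activity is near zero for review."""
        candidates = []
        with open(report_path, newline="") as f:
            for row in csv.DictReader(f):
                cpu = float(row["avg_cpu_pct"])
                net = float(row["avg_net_kbps"])
                if cpu < CPU_THRESHOLD and net < NETWORK_THRESHOLD:
                    candidates.append(row["hostname"])
        return candidates

    if __name__ == "__main__":
        for host in find_idle_candidates("server_utilization_30d.csv"):
            print(f"Review {host}: near-zero activity over the reporting period")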

To address the combined impact of server energy use, McKinsey & Co., in collaboration with the Uptime Institute, recommends that corporations adopt a new metric for data center productivity and energy efficiency: Corporate Average Datacenter Efficiency, or CADE. (See part 3.) CADE will help corporations understand how low their current efficiency levels are, which should motivate them to substantially improve performance. The process amounts to being willing to harvest the “gold” that is lying all over the floor.
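
To preview why those efficiency levels come out so low, it helps to see how a chain of individually plausible factors multiplies together. The figures below are illustrative assumptions only; the formal CADE definition and its components are covered in part 3.

    # Illustrative assumptions only; not the formal CADE definition (see part 3).
    facility_energy_efficiency = 0.50  # share of incoming power that reaches IT gear
    facility_utilization = 0.80        # share of built-out facility capacity in use
    server_utilization = 0.10          # share of server capacity doing useful work

    overall = facility_energy_efficiency * facility_utilization * server_utilization
    print(f"Overall efficiency: {overall:.0%}")  # roughly 4 percent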

Another recommendation is that IT departments be given responsibility for data center facility capital and operating expenses. What’s more, IT should appoint an “energy czar” to spearhead efforts to reduce energy use, starting with simple steps like turning off unused servers.

For some facility executives, the thought of IT having more responsibility for data center facilities is hard to take. But in organizations where IT and facilities have a collaborative relationship, facility executives will say that it doesn’t matter which group actually has responsibility for data center facility costs.

Regardless of whether IT or facilities is responsible for those costs, it’s in the facility executive’s interest to develop a collaborative relationship with IT. Doing so will bring a host of other benefits to facility executives. One of the biggest is the elimination of surprises.

Facility executives often complain that IT never consults with them before buying new hardware. Instead, new servers or storage devices simply show up on the loading dock with the assumption that they can just be plugged in. No thought is given to the power and cooling demands of the new equipment — or the additional structural floor load it represents.

But it doesn’t have to be like that. IT and facility executives come at the data center from very different perspectives, but with a little effort each side can understand the other’s point of view. Even today, IT and facilities have a close working relationship in some corporations. And in organizations where facility and IT executives butt heads, rising pressure to control energy costs is a strong incentive to start listening to each other.


Continue Reading: Critical Facilities

Understanding the True Cost of Operating a Server

Virtualization and Other Tips to Slash Data Center Energy Use

New Metric for Data Center Efficiency Proposed




Posted on 11/1/2008



