


Digital Realty Trust has a 775,000-square-foot campus in Franklin Park, Ill., designed to support hyperscale tenants; it has 80 MW of power on site. (Photo: Environmental Systems Design)

Data Centers For The Hyperscale Cloud



For huge data centers, energy cost and availability, as well as connectivity, are priorities.


By Paul Schlattman  
OTHER PARTS OF THIS ARTICLE — Pt. 1: This Page. Pt. 2: Colocation Providers Announce Hyperscale Offerings.


Third-generation data centers are a far cry from the mission critical closets of yesteryear. Today's hyperscale data center is designed to support massively scalable infrastructure, with as much as 240 MW to 320 MW in 1 million square feet of space. Hyperscale data centers now number roughly 300 worldwide, with more than 45 percent of them located in the United States.

A look at the public announcements and design strategies of cloud providers reveals some common elements in a range of areas, including utilities, scalability, renewable energy, PUE targets, and techniques critical to speed to market.

Site selection

When it comes to electrical utility capacity and cost, the larger cloud providers tend to locate where electrical utility rates are lower than the national average and where there is enough capacity to build their own substation. The minimum capacity appears to be 32 MW, with growth in increments of 8 MW.

Ever since Greenpeace publicized the carbon footprint of the major cloud providers, energy credits have become more important. Google, leading the charge, has been purchasing large amounts of wind-farm energy to support its operations. Others, such as Iron Mountain, have made renewable energy commitments to support their tenants.

While data centers historically do not employ large numbers of people, local governments love to announce the popular names that are coming to their towns. Local incentives can be broad in range and may include personal property tax reductions on servers and equipment, free land, and water rights.

While a fiber network from multiple sources has always been important, fiber capacity now plays an important role for cloud providers. Most networks have capacity limitations at roughly 32 MW of server utilization, and bandwidth may limit the size of the installation. In many cases, however, fiber providers will build to the site if the business case justifies it.

Hyperscale cloud providers require scalability in both internal infrastructure and building area. Some providers purchase large tracts of land, land being the lowest-cost item in their overall pro forma. A 36 MW hyperscale data center will house approximately 240,000 square feet of raised floor area; total space will reach approximately 500,000 square feet with the MEP support area included.
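The sizing figures above imply a rough power density. A minimal sketch of that arithmetic, using only the 36 MW load and the square-footage values quoted in the article (the derived densities are simple division, not vendor data):

```python
# Rough density math for a 36 MW hyperscale build, using the
# figures quoted above. Values are from the article; the derived
# W/sq ft numbers are plain arithmetic, not vendor specifications.

it_load_mw = 36.0
raised_floor_sqft = 240_000
total_sqft = 500_000

# Convert MW to W, then divide by area.
watts_per_sqft_raised = it_load_mw * 1_000_000 / raised_floor_sqft
watts_per_sqft_total = it_load_mw * 1_000_000 / total_sqft

print(f"Raised-floor density: {watts_per_sqft_raised:.0f} W/sq ft")   # 150 W/sq ft
print(f"Whole-building density: {watts_per_sqft_total:.0f} W/sq ft")  # 72 W/sq ft
```

At roughly 150 W per square foot of raised floor, these sites sit well above the densities of older enterprise rooms, which is part of why utility capacity dominates site selection.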

Design scalability

While each cloud provider has its own unique approach to growth, the common factor among providers is to ensure reliability and scalability. A 2N design approach (system + system) was developed in the late 1990s, but today more cloud providers have migrated to a concurrently maintainable strategy that integrates reliability and maintenance. This can be done in three different electrical design configurations.

• Distributed redundant. Each of four modules is 1.5 MW with a total capacity of 6 MW per data center area, expandable to 36 MW. This design allows for load transfer at the server level downstream. With this configuration, it is difficult to manage loads, and operations become cumbersome. Each module is backed up by a generator, and the total load on the generator also includes HVAC.

• Distributed redundant with static switch. This design is the same as the distributed redundant above, but the load transfer happens upstream and is more reliable. Loads are easily managed in this design, and 99.999 percent uptime can be achieved with a single utility feed.

• Block redundant design. This configuration allows for a catcher block design where a reserve bus/generator will catch six different 1.5MW modules. The second utility feed raises the overall cost per megawatt to the point that it often negates the cost benefits. 
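The figures in the three configurations above can be checked with quick arithmetic: the module build-out for the distributed redundant design, and the downtime implied by the "five nines" (99.999 percent) uptime claim. A small sketch, using only numbers stated in the text:

```python
# Arithmetic behind the configuration figures quoted above.

# Distributed redundant: four 1.5 MW modules per data center area,
# expandable to 36 MW total.
module_mw = 1.5
modules_per_area = 4
full_buildout_mw = 36.0

per_area_mw = module_mw * modules_per_area           # 6.0 MW per area
areas_at_buildout = full_buildout_mw / per_area_mw   # 6 areas at full build-out

# "Five nines" uptime expressed as allowable downtime per year.
uptime = 0.99999
minutes_per_year = 365 * 24 * 60
downtime_min = (1 - uptime) * minutes_per_year       # about 5.26 minutes/year

print(per_area_mw, areas_at_buildout, round(downtime_min, 2))
```

Five nines allows only about five and a quarter minutes of downtime per year, which is why the upstream static-switch transfer is valued: it removes the server-level transfer coordination that makes the plain distributed redundant design cumbersome to operate.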

Mechanical designs also aim for reliability and scalability. Additionally, cloud providers drive power usage effectiveness (PUE) to the lowest possible levels: 1.2 or better. While the colocation industry has targeted a PUE of 1.4 or better, cloud providers aim even lower through creative designs. Mechanical designs can include CRAC units with free-cooling fan coils, and direct evaporative systems have proven popular among cloud providers. One of the latest trends is a focus on water-use reduction, which is particularly valuable when direct evaporative systems are used.
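PUE is defined as total facility power divided by IT equipment power, so the gap between the colocation target (1.4) and the cloud-provider target (1.2) can be expressed as megawatts of non-IT overhead. A minimal sketch, reusing the 36 MW IT load from the sizing discussion above:

```python
# PUE = total facility power / IT equipment power.
# Comparing the non-IT overhead (cooling, distribution losses)
# at the two PUE targets named in the article, for a 36 MW IT load.

def pue(total_facility_mw: float, it_load_mw: float) -> float:
    """Power usage effectiveness: facility power divided by IT power."""
    return total_facility_mw / it_load_mw

it_mw = 36.0
for target in (1.4, 1.2):
    # Overhead is everything the facility draws beyond the IT load.
    overhead_mw = it_mw * target - it_mw
    print(f"PUE {target}: {overhead_mw:.1f} MW of non-IT overhead")
```

For a 36 MW IT load, moving from a PUE of 1.4 to 1.2 cuts non-IT overhead roughly in half, from 14.4 MW to 7.2 MW, which is the economic driver behind free cooling and direct evaporative designs.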


 

SIDEBAR: Three Kinds of Cloud

Supporting the explosive growth in cloud computing are three types of cloud.

1. Public cloud — A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the public over the Internet.

2. Private cloud — Private cloud computing is a single-tenant environment where the hardware, storage, and network are bought by and dedicated to a single client or company.

3. Hybrid cloud — The hybrid cloud offers a mix of public and private cloud computing, where public cloud resources are integrated with private or virtual private cloud services to create a unique hybrid environment.



Posted on 8/21/2017



