Identifying Energy Waste in Data Centers

Retrocommissioning, load monitoring and effective maintenance can help managers ensure energy-efficient operations

When data centers are running at peak efficiency, maintenance costs also typically are reduced to the lowest possible level.
Identifying energy-wasting systems and components is another major step toward improving data center energy efficiency. The key to success is a data center infrastructure management (DCIM) system, or some means of monitoring facility equipment and the power usage effectiveness (PUE).
Most data centers have some type of DCIM system, but if a facility does not have one, managers are missing out on an opportunity to improve energy efficiency, as well as to support efforts to maintain efficiency and improve uptime. Managers must have a means to measure energy efficiency in data centers in order to know where to start looking for improvements.
Among the benefits of a DCIM system are the ability to: track real-time PUE; drive initiatives for improvement; monitor infrastructure health via thresholds and alerts to prevent down time; control rack power with graceful shutdown of servers; and efficiently trend active power and environmental conditions.
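PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment, so a value of 1.0 would mean every watt goes to the servers. A minimal sketch of the calculation (function name and sample readings are illustrative, not from any particular DCIM product):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 is the theoretical ideal; typical facilities fall
    somewhere above it, with cooling and power distribution losses
    making up the difference.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical readings: 1,500 kW at the utility meter, 1,000 kW at the racks.
print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

A DCIM system computes this continuously from metered readings, which is what makes trending and improvement initiatives possible.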
Monitoring the equipment and receiving trend data and specific thresholds and alerts allows operators to recognize equipment that needs maintenance, whether it is a bearing going out in a fan or a compressor nearing the end of its life. A DCIM also can let operators know if sensors are out of calibration or if equipment is not operating efficiently. Both of these situations can cause significant amounts of energy waste.
Using outdated practices and failing to keep up with industry changes can also waste energy. One prime example is keeping the data floor at refrigerator temperatures. Small changes to operating temperatures and sequences of operations can yield tremendous savings for a small investment. Keeping the data floor at 65 degrees when the equipment and system can easily support cold aisle and supply temperatures of 70 degrees or more can mean months of free cooling instead of running the chillers, pumps and all the associated equipment the mechanical system requires. One small change, such as raising the cold aisle temperature, not only decreases energy consumption but also curtails maintenance costs and lowers run time on mechanical equipment.
Look at loads
An IT load that is lower than the design intent also can waste energy in a data center. Equipment might be oversized for a variety of reasons, and the facility often runs more equipment than necessary for redundancy and uptime purposes.
Managers might find themselves in this situation because the data center is new and the day-one loads have not yet materialized, or because the facility has been in operation for a while and the load has decreased or was never fully realized. Whatever the reason, managers must tailor the solution to the facility's setup and needs.
If servers are spread out on the data floor and all areas need air distribution and cooling, one solution might be to relocate the servers to a central location and limit the cooling to that area, providing the ability to shut down some equipment and rotate the operation. This tactic saves energy, maintenance and equipment run time.
If technicians do not perform timely maintenance throughout a facility, the energy loss can be significant. But in a data center, the prospect of equipment going down due to a lack of maintenance is an even greater concern than the energy loss. Fortunately, data centers usually receive excellent maintenance and are kept up to date because of the nature of the facility and the requirement to maintain the uptime of the IT equipment.
For example, suppose an air handling unit goes down because a bearing went bad. The unit is redundant, so it is not a big deal to wait for the motor repair or the replacement part, right? The problem is that while the redundant unit is down, what happens when the next one also fails? A lack of maintenance can cause significant energy loss, whether the issue is dirty filters or improperly greased bearings, but the real issue is system dependability and compromised uptime.