Common Problems With Data Center Commissioning
Commissioning efforts and results for data centers and critical facilities vary widely, from glorified vendor startups billed as complete commissioning to a small army of technicians checking off boilerplate, one-size-fits-all forms with little understanding of how the equipment they are inspecting and commissioning actually operates, let alone how it is most likely to fail. In the real world of data center commissioning, these approaches result in some all-too-common problems.
Commissioning technicians tend to focus heavily on familiar equipment they understand while glossing over, or even ignoring, unfamiliar equipment they don’t. This lack of understanding often extends to original equipment manufacturer (OEM) vendor service technicians, and even more so to third-party service technicians. It is not uncommon for OEM vendor service technicians to be unfamiliar with standard features built into their own equipment. Furthermore, commissioning reports often tell an incomplete story, failing to articulate, in summary and in organized detail, what exactly was tested, how it was tested, when it was tested, and how it performed.
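One practical remedy is to capture each test as a structured record rather than free-form notes, so the final report can answer all four of those questions directly. The following Python sketch illustrates the idea; the field names and the sample entry are hypothetical, not an industry-standard schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestRecord:
    """One commissioning test, captured so the report can answer:
    what was tested, how, when, and how it performed."""
    equipment: str          # what was tested (tag or asset ID)
    procedure: str          # how it was tested
    performed_at: datetime  # when it was tested
    expected: str           # acceptance criterion
    observed: str           # how it performed
    passed: bool
    notes: str = ""

# Hypothetical example entry; all values are illustrative only.
record = TestRecord(
    equipment="UPS-2A",
    procedure="Transfer to battery on simulated utility loss",
    performed_at=datetime(2024, 5, 14, 9, 30),
    expected="Uninterrupted output, transfer in under 4 ms",
    observed="Output held, transfer measured at 2.1 ms",
    passed=True,
)
print(f"{record.equipment}: {'PASS' if record.passed else 'FAIL'}")
```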
It’s far more effective to take a site-specific, customized approach to commissioning. This entails taking the time to thoroughly understand the design intent (owner requirements); the actual design; the actual installed equipment, options, and systems; and any customization or other peculiarities of a specific jobsite.
Interfaces between different equipment sets and vendor subsystems are where many problems occur. Vendors generally do a satisfactory job of getting their own equipment working and of providing operator training for their own equipment and subsystems; after all, they have a warranty and brand name to protect. However, many field technicians are too focused on “inside the box” individual component performance, to the point of attempting to prove specifications that are not relevant to the project (e.g., extreme overload conditions).
Many hours and precious resources can be wasted on minutiae, such as simulating every minor alarm point. Too often, however, no one is looking at the big picture: how all the subsystems need to work together and where the realistic failure points exist. Then, when the available commissioning time or budget is nearly exhausted, important overall performance tests are rushed, cut short, or otherwise compromised.
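One way to keep the big picture in view when time is short is to rank candidate tests by realistic risk exposure per hour before building the schedule. The sketch below illustrates the arithmetic with hypothetical tests and placeholder 1–5 scores; a real ranking would come from the site’s own risk assessment.

```python
# Rank commissioning tests by risk so integration tests are not
# crowded out by low-value alarm-point checks. Scores (1-5) are
# hypothetical placeholders for a site-specific risk assessment.
tests = [
    # (name, failure likelihood, impact if missed, hours to run)
    ("Simulate every minor alarm point",  2, 1, 16),
    ("UPS-to-generator transfer",         3, 5, 4),
    ("Chiller failover / bypass loop",    3, 5, 6),
    ("Integrated pull-the-plug test",     2, 5, 8),
]

def priority(test):
    """Risk exposure bought per hour of commissioning time spent."""
    _, likelihood, impact, hours = test
    return (likelihood * impact) / hours

for name, *_ in sorted(tests, key=priority, reverse=True):
    print(name)
```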
Sometimes the exact opposite mistake is made, with a commissioning process that knowingly skips steps needed to methodically validate key component performance points in an attempt to go straight to the “finish line.” When an integrated system test is attempted prematurely, so many issues may crop up at the same time that they cannot be dealt with effectively. Or important issues may go completely unnoticed in the mayhem, such as a bypass loop that was never operated during commissioning; years later, when it finally needs to be deployed in an actual emergency, it malfunctions because it never worked correctly from day one.
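A complementary safeguard is a hard gate: the integrated system test does not proceed until every prerequisite component test, including easily overlooked items such as a bypass loop, has actually been run and passed. A minimal sketch of such a gate, with hypothetical test names, might look like this:

```python
# Gate the integrated system test (IST) on completed component tests.
# Test names are hypothetical examples, not a standard checklist.
prerequisites = {
    "UPS battery transfer": True,
    "Generator load bank": True,
    "Chilled water bypass loop": False,  # never operated -- blocks IST
    "EPO circuit verification": True,
}

outstanding = [name for name, done in prerequisites.items() if not done]
if outstanding:
    print("IST blocked; outstanding component tests:")
    for name in outstanding:
        print(f"  - {name}")
else:
    print("All prerequisites passed; IST may proceed.")
```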