ASHRAE Datacom Series, Book 4
1791 Tullie Circle, NE, Atlanta, GA 30329-2305
www.ashrae.org/bookstore

The Guide Every Datacom Professional Needs

Data center rack heat loads are steadily climbing, and the ability of many data centers to deliver either adequate airflow rates or chilled air is now being stretched to the limit to avoid decreased equipment availability, wasted floor space, and inefficient cooling system operation. This situation is creating a need for liquid cooling solutions to reduce the volume of airflow needed by the racks and to provide lower processor temperatures for better computer performance.

This book provides cooling system designers, data center operators, equipment manufacturers, chief information officers (CIOs), and IT specialists with best-practice guidance for implementing liquid cooling systems in data centers. This second edition includes updated references and further information on approach temperatures and liquid immersion cooling, plus guidance on water quality problems and wetted material requirements. It also includes definitions for liquid and air cooling as they apply to the IT equipment, along with an overview of chilled-water and condenser water systems and other datacom equipment cooling options. The book also bridges the liquid cooling systems by providing guidelines on interface requirements between the chilled-water system and the technology cooling system and on the requirements of liquid-cooled systems that attach to a datacom electronics rack to aid in data center thermal management.

This book is the fourth in the ASHRAE Datacom Series, authored by ASHRAE Technical Committee 9.9, Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment. This series provides comprehensive treatment of datacom cooling and related subjects.
ISBN 978-1-936504-67-1
Product code: 90564 3/14

Liquid Cooling Guidelines for Datacom Equipment Centers
Second Edition

© 2014 ASHRAE (www.ashrae.org). For personal use only. Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE's prior written permission.

Liquid Cooling Guidelines for Datacom Equipment Centers is authored by ASHRAE Technical Committee (TC) 9.9, Mission Critical Facilities, Data Centers, Technology Spaces, and Electronic Equipment. ASHRAE TC 9.9 is composed of a wide range of industry representatives, including but not limited to equipment manufacturers, consulting engineers, data center operators, academia, testing laboratories, and government officials who are all committed to increasing and sharing the body of knowledge related to data centers.

Liquid Cooling Guidelines for Datacom Equipment Centers is not an ASHRAE Guideline and has not been developed in accordance with ASHRAE's consensus process. Any updates/errata to this publication will be posted on the ASHRAE Web site at www.ashrae.org/publicationupdates. For more information on the ASHRAE Datacom Series, visit www.ashrae.org/datacenterefficiency. For more information on ASHRAE TC 9.9, visit http://tc99.ashraetcs.org/

ASHRAE Datacom Series, Book 4
Atlanta
ISBN 978-1-936504-67-1

© 2006 and 2014 ASHRAE
1791 Tullie Circle, NE, Atlanta, GA 30329
www.ashrae.org

All rights reserved. Printed in the United States of America.

Cover image by Joe Lombardo of DLB Associates.

ASHRAE is a registered trademark in the U.S. Patent and Trademark Office, owned by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc.

ASHRAE has compiled this publication with care, but ASHRAE has not investigated, and ASHRAE expressly disclaims any duty to investigate, any product, service, process, procedure, design, or the like that may be described herein. The appearance of any technical data or editorial material in this publication does not constitute endorsement, warranty, or guaranty by ASHRAE of any product, service, process, procedure, design, or the like. ASHRAE does not warrant that the information in the publication is free of errors, and ASHRAE does not necessarily agree with any statement or opinion in this publication. The entire risk of the use of any information in this publication is assumed by the user.

No part of this publication may be reproduced without permission in writing from ASHRAE, except by a reviewer who may quote brief passages or reproduce illustrations in a review with appropriate credit, nor may any part of this publication be reproduced, stored in a retrieval system, or transmitted in any way or by any means—electronic, photocopying, recording, or other—without permission in writing from ASHRAE. Requests for permission should be submitted at www.ashrae.org/permissions.

Library of Congress Cataloging-in-Publication Data

Liquid cooling guidelines for datacom equipment centers. -- Second edition.
pages cm -- (Datacom series ; 4)
Includes bibliographical references and index.
Summary: "Provides information on liquid cooling for datacom equipment centers. Concerned with energy efficiency" -- Provided by publisher.
ISBN 978-1-936504-67-1 (softcover)
1. Data processing service centers--Cooling. 2.
Electronic data processing departments--Equipment and supplies--Protection. 3. Hydronics. 4. Electronic apparatus and appliances--Cooling. 5. Electronic digital computers--Cooling. 6. Buildings--Environmental engineering. I. American Society of Heating, Refrigerating and Air-Conditioning Engineers.

TH7688.C64L57 2014
697.9'316--dc23
2014000819

ASHRAE STAFF

SPECIAL PUBLICATIONS
Mark S. Owen, Editor/Group Manager of Handbook and Special Publications
Cindy Sheffield Michaels, Managing Editor
James Madison Walker, Associate Editor
Roberta Hirschbuehler, Assistant Editor
Sarah Boyle, Assistant Editor
Michshell Phillips, Editorial Coordinator

PUBLISHING SERVICES
David Soltis, Group Manager of Publishing Services and Electronic Communications
Jayne Jackson, Publication Traffic Administrator
Tracy Becker, Graphics Specialist

PUBLISHER
W. Stephen Comstock

Contents

Acknowledgments
Chapter 1 Introduction
1.1 Definitions
1.2 Liquid Cooling Systems
Chapter 2 Facility Cooling Systems
2.1 Introduction
2.2 Equipment
Chapter 3 Facility Piping Design
3.1 General
3.2 Spatial Considerations
3.3 Basic Piping Architecture
3.4 Piping Arrangements for the Cooling Plant
3.5 Water Treatment Issues
3.6 Seismic Protection
Chapter 4 Liquid Cooling Implementation for Datacom Equipment
4.1 Overview of Liquid-Cooled Racks and Cabinets
4.2 Overview of Air- and Liquid-Cooled Datacom Equipment
4.3 Overview of Immersion Cooling of Datacom Equipment
4.4 Overview of Coolant Distribution Unit (CDU)
Chapter 5 Liquid Cooling Infrastructure Requirements for Facility Water Systems
5.1 Facility Water Systems (FWS)
5.2 Non-Water Facility Systems
5.3 Liquid Cooling Deployments in NEBS Compliant Space
Chapter 6 Liquid Cooling Infrastructure Requirements for Technology Cooling Systems
6.1 Water-Based Technology Cooling System
6.2 Non-Water-Based Technology Cooling System
References and Bibliography
Glossary of Terms
Appendix
Index

Acknowledgments

Representatives from the following companies participated in producing this publication:

APC
Aavid
Cray Inc.
Dell Computers
DLB Associates Consulting Engineers
EYP MCF
Hewlett Packard
IBM
Intel Corporation
Lawrence Berkeley National Labs
Liebert Corporation
Lytron
Mallory & Evans, Inc.
NCR
Department of Defense
Panduit
Rittal
Sanmina
SGI
Spraycool
Syska & Hennessy Group, Inc.
Sun Microsystems
Trane

ASHRAE TC 9.9 wishes to particularly thank the following people:

Don Beaty, John Bean, Christian Belady, Tahir Cader, David Copeland, Rhonda Johnson, Tim McCann, David Moss, Shlomo Novotny, Greg Pautsch, Terry Rodgers, Jeff Rutt, Tony Sharp, David Wang, and Kathryn Whitenack for their participation and continual improvements in the final document.

Don Beaty of DLB Associates Consulting Engineers, Jeff Rutt of Department of Defense, and Kathryn Whitenack of Lytron for providing the coordination and leadership on developing the individual chapters.

Dr. Roger Schmidt for his invaluable participation in leading the overall development of the book.

In addition, ASHRAE TC 9.9 wishes to thank the following people for their comments and feedback: William Angle, Cullen Bash, Neil Chauhan, Brian Donabedian, Jack Glass, Chris Malone, Vance Murakami, Larry Rushing, Prabjit Singh, William Tschudi, and Randy Zoodsma.

Second Edition

ASHRAE TC 9.9 would like to thank the following people for their work on the 2011 "Thermal Guidelines for Liquid Cooled Data Processing Environments" whitepaper that was incorporated into the second edition of Liquid Cooling Guidelines for Datacom Equipment Centers: Dan Dyer, DLB Associates; John Grisham, HP; Madhu Iyengar, IBM; Tim McCann, SGI; Michael Patterson, Intel; Greg Pautsch, Cray; Prabjit Singh, IBM; Robin Steinbrecher, Intel; and Jie Wei, Fujitsu.

Additionally, ASHRAE TC 9.9 would like to thank the following people for their contributions to the section on liquid cooling for NEBS compliant spaces: Marlin Vogel, Juniper Networks, and David Redford, AT&T.
Furthermore, ASHRAE TC 9.9 wishes to thank the following people for their review comments and feedback: John Bean, APC-Schneider Electric; Rhonda Johnson, Panduit Corporation; Roger Schmidt, IBM; and Bob Wasilewski, Dennis Hellmer, and John Lanni, DLB Associates.

Finally, ASHRAE TC 9.9 wishes to thank Michael J. Ellsworth, Jr., IBM, for his participation in leading the overall book revision.

Chapter 1 Introduction

From a holistic point of view, the data center, with its installed information technology equipment (ITE), introduces the potential for several levels of heat rejection. The primary source to consider is the utility plant that provides the overall cooling to the data center. This utility plant may be a stand-alone facility dedicated solely to the data center or, as is more typically the case, part of a larger installation that provides cooling to other locations within the building where the data center is housed. The utility plant that provides the cooling to the building/data center generally uses chillers to cool the building's hydronic distribution network. The chilled water can be used to provide cooling to packaged computer room air-conditioning (CRAC) units situated inside a data center or to large air-handling (CRAH) units installed externally to the data center (even to the building). Air conditioning for the data center can also be provided by mechanical refrigeration units (e.g., direct expansion [DX] devices), whereby the condenser unit is typically placed outside the building envelope. In this case, no water is delivered to the data center floor.

Nearly all current and legacy data center IT equipment is air-cooled by the means described in the preceding paragraph.
With rack-level heat loads steadily climbing, the ability of many data centers to deliver either adequate airflow rates or sufficient chilled air is now being stretched to the limit. This situation has created a need for alternative cooling strategies, with liquid cooling being the most prominent solution. The overall goals of a liquid-cooled implementation are to transfer as much waste heat to the facility water as possible and, in some of the implementations, to reduce the overall volume of airflow needed by the ITE within the racks. In addition, liquid cooling may be required to achieve higher datacom equipment performance by cooling electronic components (e.g., microprocessors) to lower temperatures.

This book, which was created by ASHRAE Technical Committee 9.9, provides equipment manufacturers and facility operations personnel with a common set of guidelines for various liquid cooling strategies. This publication is not inclusive of all types of liquid cooling sources (e.g., absorption chillers) but is representative of generally accepted liquid cooling systems. It covers an overview of liquid cooling, various liquid cooling configurations, and guidelines for liquid cooling infrastructure requirements.
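The trade-off between airflow and water flow follows from the sensible-heat relation Q = ṁ·cp·ΔT. A minimal sketch of the facility-water flow needed to carry a rack's waste heat; the 30 kW load and 10 K water temperature rise are illustrative assumptions, not values from this book:

```python
# Illustrative sketch (not ASHRAE guidance): water flow required to absorb a
# given heat load at a given temperature rise, from Q = m_dot * c_p * dT.

WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
WATER_RHO = 1000.0  # density of water, kg/m^3

def required_flow_lpm(heat_load_w, delta_t_k):
    """Volumetric water flow (L/min) that absorbs heat_load_w at a delta_t_k rise."""
    mass_flow_kgs = heat_load_w / (WATER_CP * delta_t_k)
    return mass_flow_kgs / WATER_RHO * 1000.0 * 60.0

if __name__ == "__main__":
    flow = required_flow_lpm(30_000, 10.0)  # assumed 30 kW rack, 10 K water rise
    print(f"{flow:.1f} L/min ({flow / 3.785:.1f} US gpm)")
```

Because water's volumetric heat capacity is roughly 3500 times that of air, the same 30 kW load would need on the order of several thousand L/s of air at a comparable temperature rise, which is the motivation stated above.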
Specifically, this book provides guidelines for the following:

- Facility supply water temperature classification (Section 5.1.1)
- Facility water operational requirements (Section 5.1.2.1)
- Facility water flow-rate recommendations and fouling factors (Section 5.1.2.2)
- Facility water velocity considerations (Section 5.1.2.3)
- Facility water liquid quality/composition (Section 5.1.2.4)
- Facility water-wetted material requirements (Section 5.1.2.5)
- Refrigerant piping (Section 5.2.2)
- Rack-water operational requirements (Section 6.1.1)
- Rack-water flow rates (Section 6.1.2)
- Rack-water velocity considerations (Section 6.1.3)
- Rack-water quality/composition (Section 6.1.4)
- Rack-water wetted materials requirements (Section 6.1.5)
- Rack-non-water operational requirements (Section 6.2.1)

It also reviews considerations for the following:

- Chilled-water piping
- Electrical power sources and connections
- Monitoring
- Reliability and availability
- Commissioning

The following chapters are arranged to describe the various liquid cooling systems that can exist within a data center and, more importantly, how they can be connected to transfer heat from one liquid system to another. Chapters 2 and 3 provide the reader with an overview of the chilled-water and condenser water systems (see Figure 1.1), followed by an overview of datacom equipment cooling options (technology cooling system and datacom equipment cooling system shown in Figure 1.1) in Chapter 4. Chapter 5 characterizes the liquid cooling systems by providing guidelines on the interface requirements between the chilled-water system and the technology cooling system. Chapter 6 outlines the requirements of those liquid-cooled systems that attach to a datacom electronics rack and are implemented to aid in data center thermal management by focusing on the technology cooling system. Chapter 7 provides an expanded reference and bibliography list to enable the reader to find additional related materials.
Chapter 8 provides a useful glossary of common terms used throughout this book. An appendix is also included that provides survey results of customer water quality.

The second edition includes the following changes:

- Chilled-water system (CHWS) has been replaced with facility water system (FWS) in recognition that a chiller is not always a requirement for delivering cooling water to a datacom facility.
- Discussions on approach temperatures have been added to Chapter 2 (Section 2.2.2.3).
- Discussion on liquid immersion cooling has been added to Chapter 4 (Section 4.3).
- References to the 2012 Thermal Guidelines for Liquid Cooled Data Processing Environments have been added where appropriate. In Chapter 5, the facility water system requirements now refer to classes W1 through W5 (Section 5.1.1).
- The guidance on water quality problems and wetted-material requirements in Chapter 5 has been updated (Sections 5.1.2.4 and 5.1.2.5).
- A discussion on liquid cooling for network equipment-building system (NEBS) compliant spaces has been added to Chapter 5 (Section 5.3).

1.1 DEFINITIONS

For the purposes of this book, the following definitions will be used:

Liquid cooling is defined as the process where a liquid (rather than air) will be used to provide a heat removal (i.e., cooling) function.

Liquid-cooled rack defines the case where a circulated liquid provides heat removal (cooling) at a rack or cabinet level for operation.

Liquid-cooled datacom equipment defines the case where liquid is circulated within the datacom equipment for heat removal (cooling) operation.
Liquid-cooled electronics defines the cases where liquid is circulated directly to the electronics for cooling with no other heat transfer mechanisms (e.g., air cooling).

In each of the definitions above, when two-phase liquid cooling is used, liquid is circulated to the rack, equipment, or electronics, and gas and/or a mixture of gas and liquid circulates from the rack, equipment, or electronics.

These liquid cooling definitions are slightly broader than the one used in ASHRAE's Datacom Equipment Power Trends and Cooling Applications but are more relevant to liquid cooling in the datacom industry today. It is important to keep in mind that the definitions above do not limit the cooling fluid to water. A variety of liquids could be considered for application, including liquids that could be in a vapor phase in part of the cooling loop.

Air cooling defines the case where only air must be supplied to an entity for operation.

Air-cooled rack defines the case where only air must be provided to the rack or cabinet for operation.

Air-cooled datacom equipment defines the case where only air is provided to the datacom equipment for operation.

Air-cooled electronics defines the cases where air is provided directly to the electronics for cooling with no other form of heat transfer.

When liquids are used within separate cooling loops that do not communicate thermally, the system is considered to be air cooling. The most obvious illustration covers the chilled-water CRACs that are usually deployed at the periphery of many of today's data centers.
At the other end of the scale, the use of heat pipes or pumped loops inside a computer, wherein the liquid remains inside a closed loop within the server, also qualifies as air-cooled electronics, provided the heat is removed from the internal closed loop via airflow through the electronic equipment chassis.

There are many different implementations of liquid cooling to choose from. Below are several scenarios (see Chapter 4 for a more complete overview of liquid cooling implementations).

- One option uses an air-cooled refrigeration system mounted within the datacom equipment to deliver chilled refrigerant to liquid-cooled cold plates mounted to the processors. For this implementation, the heated air from the liquid-to-air heat exchanger (i.e., condenser) is exhausted directly to the data center environment. From a data center perspective, the rack and electronics are considered to be air-cooled since no liquid lines cross the rack envelope.
- A different implementation may use a liquid-to-air heat exchanger mounted above, below, or on the side or rear of the rack. In this case, the heat exchanger removes a substantial portion of the rack's waste heat from the air that is eventually exhausted to the data center. This implementation does not reduce the volumetric airflow rate needed by the electronics, but it does reduce the temperature of the air that is exhausted back into the data center. This example describes a liquid-cooled rack since liquid lines cross the rack envelope. This system is shown in Figure 4.8.
- Yet another implementation uses liquid-cooled cold plates that use water, dielectrics, or other types of coolants that are chilled by a liquid-to-liquid heat exchanger that rejects the waste heat to the facility water. The waste heat rejection to the facility water can occur via one or more additional liquid loops that eventually terminate at an external cooling tower or chiller plant.
This implementation of liquid cooling reduces the amount of waste heat rejected to the facility ambient and also reduces the volumetric airflow rate required by the rack's electronics. From the data center perspective, this implementation describes liquid-cooled racks and electronics since liquid lines cross the rack envelope and also cross over into the servers themselves. This system is shown in Figure 4.10.

1.2 LIQUID COOLING SYSTEMS

The definitions above apply to air and liquid cooling in a data center. They do not define the various liquid cooling loops that may exist within a data center. Figure 1.1 shows a typical liquid-cooled facility with multiple liquid cooling loops. Each loop is described below with some of the variations considered. The terminology used below will be applied throughout the rest of the book.

datacom equipment cooling system (DECS): This system does not extend beyond the IT rack. It is a loop within the rack that is intended to perform heat transfer from the heat-producing components (CPU, memory, power supplies, etc.) to a fluid-cooled heat exchanger also contained within the IT rack. Some configurations may eliminate this loop and have the fluid from the coolant distribution unit (CDU) flow directly to the load. This loop may function in single-phase or two-phase heat transfer modes facilitated by heat pipes, thermosiphon, pumped fluids, and/or vapor compression cycles. Fluids typically used in the datacom equipment include water, ethylene glycol or propylene glycol/water mixtures, refrigerants, or dielectrics.
At a minimum the datacom equipment cooling system would include heat collection heat exchangers, commonly referred to as cold plates (single phase) or evaporators (two phase), as well as a heat-of-rejection heat exchanger, and may be further enhanced with active components such as compressor/pump, control valves, electronic controls, etc.

Figure 1.1 Liquid cooling systems/loops within a data center.

technology cooling system (TCS): This system would not typically extend beyond the boundaries of the IT space. The exception is a configuration in which the CDU is located outside the data center. It serves as a dedicated fluid loop intended to perform heat transfer from the datacom equipment cooling system into the chilled-water system. This loop configuration is highly recommended, as it is needed to address specific fluid quality issues regarding temperature, purity, and pressure as required by the heat exchangers within the datacom equipment cooling systems. Fluids typically used in the technology cooling loop include water, ethylene glycol or propylene glycol and water mixtures, refrigerants, or dielectrics. This loop may also function by single-phase or two-phase heat transfer modes and may facilitate transfer by heat pipes, thermosiphon, pumped fluids, and/or vapor compression cycles. At a minimum the technology cooling system would include a heat collection heat exchanger (likely an integral component of the datacom equipment cooling system), a heat rejection heat exchanger, as well as interconnecting piping. This system may be further enhanced with such active components as compressors/pumps, control valves, electronic controls, filters, hydronic accessories, etc.
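The heat exchanger that couples these loops imposes an approach temperature: the TCS coolant leaving the CDU can be no colder than the facility-water supply plus the exchanger's approach (approach temperatures are discussed in Section 2.2.2.3). A minimal sketch, where the 2 K approach and 7°C facility-water supply are illustrative assumptions, not figures from this book:

```python
# Illustrative sketch: lower bound on the TCS supply temperature out of a CDU,
# given the facility-water supply and an assumed heat exchanger approach.

def tcs_supply_limit_c(fws_supply_c, cdu_approach_k=2.0):
    """Coolest TCS supply (deg C) the CDU can deliver: FWS supply + approach."""
    return fws_supply_c + cdu_approach_k

if __name__ == "__main__":
    # An assumed 7 deg C facility-water supply with a 2 K approach yields,
    # at best, 9 deg C coolant to the technology cooling loop.
    print(tcs_supply_limit_c(7.0))
```

The same bound cascades loop by loop, which is why each additional heat exchanger between the electronics and the outdoor heat rejection raises the coolant temperature the ITE ultimately sees.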
facility water system (FWS): This system is typically at the facility level and may include a dedicated system for the IT space(s). It primarily consists of the system between the data center chiller(s) and the CDU. The facility water system typically includes the chiller plant, pumps, hydronic accessories, and necessary distribution piping at the facility level. The chiller plant would typically use a vapor compression cycle to cool the chilled-water supply temperature (43°F–48°F [6°C–9°C]) substantially below indoor ambient temperature (typically 75°F [24°C] and up to and beyond 95°F [35°C]). The chiller system may offer some level of redundancy for critical components such as chillers, cooling towers, and pumps. In an economizer mode of operation (Section 2.2.4), a plate-and-frame heat exchanger may be used in place of the chiller.

Direct expansion (DX) equipment can also be used in the chilled-water system. DX equipment provides direct heat dissipation to the atmosphere and is therefore the last loop for that design method. Limitations include distance for the split systems and cost of operation. Generally, in most areas systems become economically break-even at 400 tons of refrigeration. Larger systems favor non-DX designs unless other circumstances warrant more extensive DX deployment. Smaller thermal ride-through devices can be introduced for individual or special cases within this loop design.

condenser water system (CWS): This system consists of the liquid loop between the cooling towers and the data center chiller(s). It is also typically at the facility level and may or may not include a dedicated system for the IT space(s). Condenser water loops typically fall into one of two fundamental categories: wet-bulb-based or dry-bulb-based systems. The wet-bulb-based loops function on an evaporative process, taking advantage of lower wet-bulb temperatures, thereby providing cooler condenser water temperatures.
The dry-bulb-based loops function based on the difference of the condenser water loop temperature versus the ambient dry-bulb temperature. To allow heat transfer with the dry-bulb-based system, the condenser water loop must be at some temperature substantially above the ambient dry-bulb temperature to allow adequate heat transfer from the condenser water into the outdoor ambient air. These loops would typically include an outdoor heat rejection device (cooling tower or dry fluid cooler), pumps, expansion tanks, hydronic accessories, and distribution piping.

Any of these systems can be eliminated. For instance, in an installation where the CDU fluid flows directly to the load, the datacom equipment cooling system is eliminated. Alternatively, where fluid from the chilled-water system flows to a heat exchanger in the rack (this is strongly discouraged), the technology cooling system is eliminated, or the technology cooling system and datacom equipment cooling system are both eliminated (again, this is strongly discouraged).

General ranges for air-cooled equipment and evaporative water cooling, as well as the refrigerant types and compressor efficiencies, result in a unique engineering design for each data center. For cooling component selection, the design engineer must consider economical operation, first cost, expandability, reliability, redundancy, and fault tolerance. To complicate the design further, all of the systems are interrelated. For example, undersizing one loop will limit the successive removal of heat to the next level. When designing a data center, this interdependence must be evaluated.
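The loop descriptions above mix IP and SI figures, such as the 43°F–48°F chilled-water supply band and the 400-ton DX break-even point. A small conversion sketch using standard factors (the quoted values come from the text; the helpers themselves are generic):

```python
# Conversion helpers for the figures quoted in this section.

def f_to_c(temp_f):
    """Fahrenheit to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def tons_to_kw(tons):
    """Tons of refrigeration to kilowatts (1 ton = 3.517 kW)."""
    return tons * 3.517

if __name__ == "__main__":
    print(f"Chilled-water supply: {f_to_c(43):.1f} to {f_to_c(48):.1f} deg C")
    print(f"400-ton DX break-even: {tons_to_kw(400):.0f} kW")
```

The converted band (about 6°C to 9°C) matches the SI values given in the facility water system description above.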
ASHRAE's Datacom Book Series (specifically, Design Considerations for Datacom Equipment Centers, Datacom Equipment Power Trends and Cooling Applications, and Thermal Guidelines for Data Processing Environments) provides more details about the various systems and their design requirements.

Chapter 2 Facility Cooling Systems

2.1 INTRODUCTION

Facility cooling systems can vary in configuration and equipment. Time and cost pressures can cause a disproportionate focus on simply designing cooling systems that deliver the capacity needed while ignoring other critical considerations. However, the focus should be much broader and balance the various considerations and trade-offs, such as flexibility; scalability; energy efficiency; ease of installation, commissioning, and operation; ease of maintenance and troubleshooting; and availability and reliability. Any one or combination of the six considerations just mentioned can significantly change how the facility cooling systems are designed, what equipment is used, and the overall system architecture.

2.1.1 Flexibility

Data center cooling systems should be designed with features that will minimize or eliminate system outages associated with new equipment installation. These features should be added to both the central plant cooling systems and building chilled-water piping architecture. Some of these features include valved and capped piping connections for future equipment, such as water-cooled racks, central station air handlers, CRACs, and central plant equipment.
The central plant should be configured to add additional chillers, pumps, and cooling towers as the load increases. A properly managed load and growth plan or strategy should be developed and used to incorporate future computer and cooling systems.

Overall flexibility is often limited by the pipe sizes used in the central plant and distribution system. After a data center is online, changing pipe size to increase capacity is typically prohibitive from both outage risk and implementation cost perspectives.

2.1.2 Scalability

The building cooling systems should be designed to accommodate future load growth of the computer equipment. Unless adequately planned growth and expansion is included in the data center, it will be obsolete in a very short time. Computer technology changes every two to five years, and the cooling system will need to be expanded to accommodate this load growth. Therefore, building piping architecture (CWS and FWS) should be designed to support a future building cooling load density. Although first cost often dictates pump selection and pipe sizing, pumping energy, flexibility, and chilled-water storage will need to be considered to determine a total cost of ownership.

The central plant should have enough space for future chillers, pumps, and cooling towers. The central plant chilled- and condenser water system pipe headers should be sized to operate efficiently from day one through the ramp-up and for the future projected load. Sizing piping for the full build-out or future growth will save energy and allow smaller active components during the early life of the building.
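Header sizing for the full build-out lends itself to a quick first-pass check against a velocity limit. A minimal sketch, where the 2.5 m/s design velocity and 100 m³/h build-out flow are illustrative assumptions (water velocity considerations are covered in Section 5.1.2.3), not design guidance:

```python
# Illustrative sketch: smallest inside pipe diameter that keeps water velocity
# at or under an assumed design limit, from area = flow / velocity.
import math

def min_inside_diameter_mm(flow_m3h, max_velocity_ms=2.5):
    """Minimum inside diameter (mm) for flow_m3h at max_velocity_ms or less."""
    area_m2 = (flow_m3h / 3600.0) / max_velocity_ms  # required cross-section
    return 2.0 * math.sqrt(area_m2 / math.pi) * 1000.0

if __name__ == "__main__":
    # An assumed 100 m^3/h build-out flow needs roughly a 120 mm inside diameter.
    print(f"{min_inside_diameter_mm(100.0):.0f} mm")
```

Sizing the header for the build-out flow and selecting pumps for the day-one flow is one way to reconcile the first-cost and total-cost-of-ownership trade-off described above.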
If the budget does not allow for any additional building costs for future growth, the owner needs to make sure that real estate is available next to the existing central plant.

2.1.3 Ease of Installation, Commissioning, and Operation

Cooling service equipment should be designed such that it can be installed in easy, visible, and readily accessible locations.

Commissioning is an effective strategy to verify that the cooling systems are operating as intended in the original building design and should be considered for every project. In Design Considerations for Datacom Equipment Centers (ASHRAE 2009b), Chapter 12 provides helpful information regarding commissioning. This chapter details the five steps of formal commissioning activities, starting with the facility’s intent and performance requirements (as determined by the project team), and following with the Owner’s Program document, the Basis-of-Design document, and the project Commissioning Plan. These activities include factory acceptance tests, field component verification, system construction verification, site acceptance testing, and integrated systems testing. Commissioning at full load (full flow) to prove hydraulic capacity should be a requirement. Loop isolation segments should be commissioned to prove circulation capacity around each segment with looped supply and return to provide concurrent maintainability without loss of cooling service.

Data centers should be designed with a central control or command center to oversee building operations. The control center should house all of the building operations systems, such as security monitoring, energy management and control systems (EMCS), supervisory control and data acquisition (SCADA) systems, building automation systems (BAS), and fire alarms. This control center can be co-located with the computer system control room. It should be staffed for 24-hour operation. All emergency procedures, protocols, and a personnel list should be included in the control center and updated as changes occur. Power and cooling loads on facility equipment, such as UPS, chillers, and electrical feeders, should be monitored to determine load growth and available capacity. Configuration management groups or boards, consisting of IT and facilities departments, should be established to control and manage data center infrastructure.

2.1.4 Ease of Maintenance and Troubleshooting

The ease of maintenance and the ability to troubleshoot problems quickly and accurately are essential elements of a high-availability datacom facility. The first element of this planning should be to maintain adequate working clearance around cooling equipment. Manufacturers’ recommendations for working clearances should be used as a minimum for serviceability areas. Designers need to provide access to allow maintenance and operation of valves, control devices and sensors, and large equipment. Lifts, hoists, and cranes may be mounted within the central plant to help facilitate removal of heavy equipment and components. These devices should be incorporated into the existing structural members of the plant room. For example, a hoist and rail system could be placed above the chillers in the plant room to facilitate rapid removal of end plates and/or compressors. Also, space should be provided to facilitate demolition and removal of entire cooling system components, such as chillers. If space is at a premium, tube pull areas for each chiller can be shared, since usually one chiller is being worked on at a time.
Chilled- and condenser-water pipes should be routed to avoid conflict with removal of cooling system equipment. Mechanical equipment, such as pumps and chillers, should be arranged in such a manner as to facilitate complete replacement. Isolation valves must also be located to allow for replacement without interrupting service, which makes layout and assembly of the entire piping system important.

Building owners should consider a computerized maintenance management system (CMMS) to help manage equipment maintenance. These systems can record maintenance history and automatically dispatch work orders for future maintenance. Manufacturers’ specific maintenance requirements and frequencies can be input or downloaded into the CMMS. It is much easier and more desirable to coordinate equipment outages and maintenance than to deal with an unscheduled equipment failure due to lack of adequate maintenance.

Energy management and control system (EMCS) or building automation system (BAS) sensor and device outputs can be trended over time and used for system diagnostics and troubleshooting. EMCS data can also be used to monitor and characterize system performance over time. For example, chiller amp readings and/or chilled-water temperature differentials and flows can be used to calculate and monitor chiller efficiency and load growth. Control systems should have a fail-safe condition that allows mechanical flow and ON operation. Typical general building management systems shut down on the loss of control. Data center systems should turn on to keep systems online.
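The efficiency calculation mentioned above, from chilled-water flow and temperature differential, can be sketched as follows. This is a simplified illustration using the common water-side rule of thumb of 500 Btu/(h·gpm·°F), so tons = gpm × ΔT / 24; the function names and example numbers are ours, not from the text.

```python
# Sketch of trending chiller efficiency from EMCS data, using the common
# water-side rule of thumb of 500 Btu/(h*gpm*degF) (so tons = gpm * dT / 24).
# Function names and example numbers are illustrative, not from the text.

def chiller_load_tons(flow_gpm: float, delta_t_f: float) -> float:
    """Cooling load (refrigeration tons) from chilled-water flow and delta-T."""
    return flow_gpm * delta_t_f / 24.0

def kw_per_ton(input_kw: float, flow_gpm: float, delta_t_f: float) -> float:
    """Chiller efficiency metric: electrical input per ton of cooling."""
    return input_kw / chiller_load_tons(flow_gpm, delta_t_f)

# 1500 gpm at an 8 degF differential while the chiller draws 300 kW:
print(chiller_load_tons(1500, 8))  # 500.0 tons
print(kw_per_ton(300, 1500, 8))    # 0.6 kW/ton
```

Trending kW/ton alongside measured load over months is what reveals both efficiency degradation and load growth.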
Load growth over time can be compared to chilled-water capacity, and this information can be used to project time frames for plant expansion or increases in individual component capacity (e.g., replace an 800 ton chiller with a 1200 ton chiller). In addition, flowmeters and pressure sensors can be installed in the building cooling distribution or secondary loop and used to monitor chilled-water flows and capacities in the piping architecture. This information can be used to determine the best location in which to install the newest water-cooled computer equipment or used to calibrate a network model of the chilled-water systems. Finally, analog thermometers, pressure gauges, and flow-measuring instrumentation (orifice plates, balancing valves, etc.) should be installed in chilled-water piping and used to gain additional information on system performance. Sensors need placement in both the primary loop and the auxiliary loop to allow control if either part of the loop is being serviced. Manual operation may also be used if the alternate loop is temporary.

2.1.5 Availability and Reliability

A key to having a reliable system and maximizing availability is an adequate amount of redundant equipment to perform routine maintenance. If N represents the number of pieces needed to satisfy the normal cooling capacity, then reliability standards are often considered in terms of redundant pieces compared to the baseline of N. Some examples would be:

N + 1—full capacity plus one additional piece
N + 2—full capacity plus two additional pieces
2N—twice the quantity of pieces required for full capacity
2(N + 1)—full capacity plus one additional piece and the entire assembly repeated again (backup site)

A critical decision is whether N should represent just normal conditions or whether N includes full capacity during offline routine maintenance.
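These counting rules can be sketched programmatically. The following is an illustrative aid, not from the text; the function names are ours.

```python
# Illustrative counting of the redundancy schemes above (not from the text).
# n is the number of units required to meet the full design cooling load.

def installed_units(n: int, scheme: str) -> int:
    """Total units installed under a given redundancy scheme."""
    return {
        "N": n,
        "N+1": n + 1,
        "N+2": n + 2,
        "2N": 2 * n,
        "2(N+1)": 2 * (n + 1),
    }[scheme]

def load_is_met(n: int, scheme: str, in_maintenance: int, failed: int) -> bool:
    """True if the units still online can carry the full load (>= n units)."""
    return installed_units(n, scheme) - in_maintenance - failed >= n

# With n = 4: N+1 rides through a failure OR maintenance, but not both.
print(load_is_met(4, "N+1", in_maintenance=0, failed=1))  # True
print(load_is_met(4, "N+1", in_maintenance=1, failed=1))  # False
print(load_is_met(4, "N+2", in_maintenance=1, failed=1))  # True
```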
For example, in an N + 1 redundancy scenario during routine maintenance of a single unit, the remaining capacity from N units is exactly what is required to meet the cooling load. If one of the N online units fails while another is under maintenance, the cooling capacity of the online units is insufficient.

The determination of the facility load represented by N should also be made based on local design conditions. ASHRAE has statistical design data based on 0.4%, 1%, and 2% excursions on an annual basis for both dry-bulb and wet-bulb temperatures. The 0.4%, 1%, and 2% design conditions are exceeded 35, 87, and 175 hours, respectively, in a typical year. As an example, in Atlanta, the dry-bulb temperatures corresponding to the 0.4%, 1%, and 2% conditions are 93.8°F, 91.5°F, and 89.3°F (34.3°C, 33.1°C, and 31.9°C), respectively, and the wet-bulb temperatures corresponding to the same percentage excursions are 77.2°F, 76.2°F, and 75.3°F (25.1°C, 24.6°C, and 24.1°C) (ASHRAE 2009b). In addition, facilities with a high tier level should use even more stringent design conditions to comply with design standards. For instance, the Uptime Institute’s site infrastructure standard recommends designing for the N = 20 years value for dry bulb and the extreme maximum wet-bulb temperature for wet bulb (Uptime 2009-2012). For Atlanta, the N = 20 dry-bulb temperature is 102.6°F (39.2°C) (8.8°F [4.9°C] above the 0.4% value), and the extreme maximum wet-bulb temperature is 82.4°F (28°C) (5.2°F [2.9°C] above the 0.4% wet-bulb value).

In an N + 2 redundancy scenario, even if one unit is down for maintenance, there is still an additional unit online above the required capacity that can compensate if one of the online units fails.
This scenario means that there is always a redundant unit at all times, whereas N + 1 would have a redundant unit for the majority of the time but not during routine maintenance.

The 2N and 2(N + 1) systems can apply to pipework as well as components. A 2N design will not need loop isolation valves, since there is a complete backup system. However, due to the thermal inertia of the fluid in the systems, both systems must be operational for seamless transfer without disruption if one system detects a fault.

These configurations of N + 1 through 2(N + 1) exemplify the significant variation in the design of redundant systems to achieve equipment availability at least equal to N. Overlaid on top of these configurations is the influence of human error. Many failures are caused by human error; some configurations improve reliability from a mechanical standpoint but may increase the potential for human error.

Chilled-water or thermal storage may be applied to the data center central cooling plant to minimize or eliminate computer equipment shutdown in the event of a power failure to the datacom facility. Chilled-water storage tanks are placed in the building distribution chilled-water piping. In the event of a power failure, the chilled-water pumps use the UPS to provide a constant flow of chilled water to the equipment. The chilled-water tanks are used in conjunction with pumps to provide cooling water. The tanks should be sized to have enough storage capacity to match the battery run time of the UPS systems. Once the generators are operational, the chillers and pumps will resume normal operation.

For more information refer to Design Considerations for Datacom Equipment Centers (ASHRAE 2009b).
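The tank sizing described above can be sketched as a back-of-the-envelope calculation. It assumes plain water (about 8.34 Btu per gallon per °F of usable temperature rise); the function and example values are illustrative, not from the text.

```python
# Back-of-the-envelope chilled-water storage sizing for UPS ride-through,
# assuming plain water (about 8.34 Btu per gallon per degF of usable delta-T).
# The function and example values are illustrative, not from the text.

def ridethrough_storage_gal(load_tons: float, minutes: float, delta_t_f: float) -> float:
    """Chilled-water storage volume needed to carry a load for a given time."""
    btu_needed = load_tons * 12_000 * (minutes / 60.0)  # 12,000 Btu/h per ton
    btu_per_gallon = 8.34 * delta_t_f
    return btu_needed / btu_per_gallon

# 1000 tons of load, 15 min of UPS battery runtime, 12 degF usable delta-T:
print(round(ridethrough_storage_gal(1000, 15, 12)))  # 29976 (~30,000 gallons)
```

Glycol mixtures carry less heat per gallon, so a real tank would be somewhat larger; diversity, mixing losses, and pump behavior during the transfer also matter in practice.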
The following chapters in that document provide helpful information regarding cooling services for equipment cooling systems:

Chapter 4, “Computer Room Cooling Overview”—The relationship between methods of heat rejection as accomplished by direct expansion versus chilled-water systems is described, along with the functional characteristics and interrelationships of the refrigeration cycle, condensers, chillers, pumps, piping, and humidifiers. The chapter concludes with a description of control parameters and monitoring methodologies.

Chapter 13, “Availability and Redundancy”—This chapter details aspects of availability, such as the concept of “five nines,” failure prediction, mean time between failures (MTBF), and mean time to repair (MTTR). Concepts of redundancy, such as N + 1, N + 2, and 2N, are introduced, defined, and discussed. Diversity and human error, as well as some practical examples of methods to increase availability and redundancy, are presented.

Chapter 14, “Energy Efficiency”—Specific topics include chilled-water plants, CRAC units, fans, pumps, variable-frequency drives, humidity control, air- and water-side economizers, part-load operation, in-room airflow distribution, and datacom equipment energy efficiency.

2.1.6 Energy Efficiency

Energy efficiency has become an increasingly important attribute of data center central plant design. The reasons include the desire on the part of the industry to limit the increasing energy consumption of data centers nationally, to reduce the carbon footprint of the industry, and to take competitive advantage of the lower costs associated with reduced energy consumption.
The increased use of liquids in cooling data centers has been instrumental in reducing energy consumption; the lowest power usage effectiveness (PUE) values in the industry are typically associated with liquid cooling due to a combination of improved approach temperatures and more efficient pumping. Two ASHRAE publications, Best Practices for Datacom Facility Energy Efficiency, Second Edition (ASHRAE 2009a), and Green Tips for Data Centers (ASHRAE 2012b), provide excellent guidance with regard to increasing the energy efficiency of data centers.

2.2 EQUIPMENT

Facility cooling equipment is a broad topic, and more detail can be obtained in the ASHRAE Handbooks. For the scope of this book, only the chillers, heat rejection equipment, heat exchangers, and pumps are covered.

2.2.1 Chillers

Chiller is the term used to describe mechanical refrigeration equipment that produces chilled water as a cooling output or end product (Figure 2.1). Basic information on chillers can be obtained from Chapter 43 of the 2012 ASHRAE Handbook—HVAC Systems and Equipment (ASHRAE 2012a).

Figure 2.1 Generic chiller diagram.

The chilled water produced by a chiller for building cooling is commonly 42°F to 45°F (5.5°C to 7°C) but can be selected and designed to produce temperatures that are higher or lower than this range. Often this temperature is too low for data center cooling. A warmer temperature will provide enough cooling without dehumidifying further and has the potential for significant energy savings. The downside to the higher temperature is that it can require more flow if the air-side temperatures are not concurrently raised and, hence, larger pipes and/or pumps.
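The flow penalty of a smaller loop temperature differential can be sketched with the 500 Btu/(h·gpm·°F) rule of thumb for water, i.e., gpm = tons × 24 / ΔT. This is illustrative only; the names and numbers are not from the text.

```python
# Sketch of the flow/delta-T trade-off noted above, using the common
# 500 Btu/(h*gpm*degF) rule of thumb for water (gpm = tons * 24 / delta-T).
# Illustrative only; names and numbers are not from the text.

def required_flow_gpm(load_tons: float, delta_t_f: float) -> float:
    """Chilled-water flow needed to carry a load at a given loop delta-T."""
    return load_tons * 24.0 / delta_t_f

# The same 500 ton load: halving the loop delta-T doubles the required flow,
# which is what drives the larger pipes and/or pumps.
print(required_flow_gpm(500, 12))  # 1000.0 gpm
print(required_flow_gpm(500, 6))   # 2000.0 gpm
```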
The chilled water can be 100% water or a mixture of water and glycol (to prevent the water from freezing if piping is run in an unconditioned or exterior area) plus other additives such as corrosion inhibitors. The capacity of chillers (and all other heat transfer coils, such as heat exchangers and cooling coils) is influenced or derated when glycol or additives are included.

In a typical chilled-water configuration, the chilled water is piped in a loop between the chiller and the equipment cooling system. It is important to consider the chiller’s part-load efficiency, since data centers often operate at less than peak capacity and redundant equipment is often left in operating mode, even further reducing the operating load of the chillers.

One way to classify chillers is by the basic method of chiller heat rejection (air-cooled or water-cooled). For air-cooled chillers, the compressors reject the heat they gain to the atmosphere using air-cooled condenser sections (Figures 2.2 and 2.3).

Figure 2.2 Schematic overview of a generic air-cooled chiller flow.

The majority of air-cooled chillers are located outside to facilitate heat rejection to the atmosphere. However, due to spatial or other constraints, the air-cooled chiller components can be split from the heat rejection component (typically an air-cooled condenser) located remotely from the chiller.

Water-cooled chillers use a second liquid loop called the condenser water loop. The condenser water loop is typically connected to an open or closed cooling tower (evaporative heat rejection unit) as shown in Figure 2.4. A water-cooled chiller is shown in Figure 2.5.
Figure 2.3 Typical packaged air-cooled chiller.

Based on electrical consumption per quantity of cooling produced and depending on the type of configuration used, water-cooled chillers are typically more energy efficient than the air-cooled equivalent, primarily due to lower condensing temperatures. More information on chillers can be found in Chapter 43 of the 2012 ASHRAE Handbook—HVAC Systems and Equipment (ASHRAE 2012a).

Figure 2.4 Schematic overview of a generic water-cooled chiller flow.
Figure 2.5 Water-cooled chiller.

2.2.2 Heat Rejection Equipment

2.2.2.1 Cooling Tower

A cooling tower is a heat rejection device that uses evaporative cooling (Figure 2.6). Cooling towers come in a variety of shapes, sizes, configurations, and cooling capacities. Since cooling towers require an ambient airflow path in and out, they are located outside, typically on a roof or elevated platform (Figure 2.7). The elevation of the cooling tower relative to the remainder of the cooling system needs to be considered when designing the plant because the cooling tower operation and connectivity rely on the flow of fluid by gravity. Basic information on condenser water systems can be obtained from Chapter 14 of the 2012 ASHRAE Handbook—HVAC Systems and Equipment (ASHRAE 2012a).

Evaporative cooling needs a reliable water source. Depending on the quality of water and the extent of suspended solids that will stay in the remaining water after
evaporation, a bleed rate to remove residue must be added to the evaporation rate. This is generally between 3.5 and 4 gallons (13.2 and 15.1 liters) per ton of refrigeration per hour but varies depending on location, elevation, and water chemistry. This will contribute to the total water storage or well source requirements.

Figure 2.6 Schematic overview of a generic cooling tower flow.
Figure 2.7 Direct cooling towers on an elevated platform.

The generic term cooling tower is used to describe both open-circuit (direct-contact) and closed-circuit (indirect-contact) heat rejection equipment (see Figures 2.8 and 2.9). The indirect cooling tower is often referred to as a closed-circuit fluid cooler or fluid cooler. Where an open cooling tower is used, a heat exchanger should be considered to isolate the open water loop from the chilled water or other closed loop supplying the equipment cooling system to limit the possibility of fouling.

Figure 2.8 Direct (open-circuit) cooling tower schematic flow diagram.
Figure 2.9 Indirect cooling tower schematic flow diagram.

For data centers or other mission critical facilities, onsite water storage is a consideration. First, for water-cooled plants with evaporative cooling towers, makeup water storage could be provided to avoid the loss of cooling tower water following a disruption in water service. For large data centers, the tank size corresponding to
24 to 72 hours of onsite makeup water storage could be in the range of 100,000 to well over 1,000,000 gallons. Second, chilled-water storage could provide an emergency source of cooling. Placing the chiller plant on a UPS can be very costly; chilled-water storage may offer a short period of cooling to keep chillers off a UPS. Prolonged power outages, however, will still require that chillers be fed from emergency generators to maintain operations.

Typically the selection of cooling towers is an iterative process, since there are a number of variables resulting in the opportunity for trade-offs and optimization. Some of those variables include size and clearance constraints, climate, required operating conditions, acoustics, drift, water usage, energy usage, part-load performance, etc.

Designers should give special attention to freeze protection when designing water-cooled chiller plants with cooling towers. Freeze protection for towers, when overlooked or not properly maintained, can result in component damage and complete loss of chiller operation during a freeze event. A common approach is to use towers with basin heaters or remote tower basins located below the towers and within a heated space. It is also necessary to give careful consideration to all external piping that could experience freeze damage. Trace heating and insulation are important mitigation considerations. Best practices include current transformers on all freeze protection heaters and monitoring during low-ambient conditions (which allows for human intervention to avoid damage and/or loss of plant operation).

2.2.2.2 Dry Cooler

Figure 2.10 Dry cooler.

Dry coolers typically consist of a finned tubular liquid-to-air heat exchanger with fans to move the air across the heat transfer surface (Figure 2.10). The process water flows in a closed circuit, requiring no makeup water as is traditionally required
with open-circuit cooling towers. Heat is transferred sensibly rather than latently (i.e., evaporatively), making the dry cooler a less efficient heat rejection device than a cooling tower.

Dry coolers can be used alone or in conjunction with evaporative heat rejection equipment on closed-loop condenser systems in order to reduce water consumption and provide redundancy. Hybrid closed-circuit cooling towers that combine evaporative and sensible heat rejection surfaces in one unit are also an option.

2.2.2.3 Approach Temperature

Heat exchange processes such as cooling towers, plate-and-frame heat exchangers, and dry coolers introduce some inefficiency due to required temperature differences. These temperature differences are referred to as approach temperatures. More specifically, the approach temperature is the difference between the outlet temperature of the fluid being cooled and the relevant (wet-bulb or dry-bulb) inlet temperature of the cooling fluid. For example, an open-circuit cooling tower having a condenser water outlet temperature of 85°F (29.4°C) and an entering ambient air wet-bulb temperature of 78°F (25.5°C) has an approach of 7°F (3.9°C). Figure 2.11 illustrates the example.

Figure 2.11 Approach temperature for an open-circuit cooling tower.

In general, the closer the specified approach is, the more effective (e.g., larger) the heat exchanger. Typical design approach temperatures for various heat exchangers can be found in Table 2.1.

The approach temperature that is available from a particular piece of heat rejection equipment is largely determined by the heat exchange surface area and the mass of air that is moved across this surface area. Ultimately, the actual approach temperature used in the design will be based on computer simulations to balance capital expenditure (heat exchange surface area) and fan energy (mass of air used for cooling) while still realizing an attractive return on investment (ROI).

Even though it has less of an impact on the ROI, the fluid transport systems (piping and pumping) should also be included in this simulation because overall system flow is related to the cooling ΔT, and the other component of the pump selection (pressure) is determined by the required differential pressure. The pressure drop through piping, heat exchangers, and the heat rejection equipment (along with the lift required for open systems) all have an effect on the energy used by this system. Pumping efficiency versus flow/pressure stability should also be carefully evaluated at this phase of the project.

The approach temperatures of other heat exchangers in the system should also be considered when determining the approach temperature required for the heat rejection equipment. A more effective selection for the heat rejection equipment may be obtained if the approach temperatures in the remainder of the system are carefully optimized.

Even if effective selections are made for the heat rejection and exchange equipment during the initial phases of the project, degradation of the design approach temperatures (i.e., larger approach temperatures) due to fouling or the use of glycols must be considered. This is especially important for sensible heat rejection equipment and heat exchangers that use water from an open system for cooling.
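The approach-temperature definition can be stated directly as code. The worked 85°F/78°F cooling tower example is from the text; the function name is ours.

```python
# The approach-temperature definition from the text, as code.

def approach_temperature(leaving_fluid: float, entering_cooling_fluid: float) -> float:
    """Approach = outlet temperature of the fluid being cooled minus the
    relevant (wet-bulb or dry-bulb) inlet temperature of the cooling fluid."""
    return leaving_fluid - entering_cooling_fluid

# Open-circuit cooling tower: 85 degF condenser water out, 78 degF wet bulb in.
print(approach_temperature(85.0, 78.0))  # 7.0 degF
```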
Table 2.1 Approach Temperatures for Various Heat Exchangers

Type of Heat Exchange            Leaving Fluid Being Cooled   Relevant Entering Cooling Fluid    Typical Design Approach Temperature
Open-circuit cooling tower       Condenser water              Air wet bulb                       5°F to 7°F (2.8°C to 3.9°C)
Closed-circuit cooling tower     Condenser water              Air dry bulb                       7°F to 12°F (3.9°C to 6.7°C)
Finned dry cooler                Condenser water              Air dry bulb                       15°F to 20°F (8.3°C to 11.1°C)
Adiabatic finned dry cooler      Condenser water              Air wet bulb                       10°F to 15°F (5.6°C to 8.3°C)
Plate-and-frame heat exchanger   Condenser water              Facilities water                   3°F to 5°F (1.7°C to 2.8°C)
Direct evaporative cooler        Condenser water              Air wet bulb                       10°F to 15°F (5.6°C to 8.3°C)
Cooling distribution unit        Facilities water             Technology cooling system water    8°F to 15°F (4.4°C to 8.3°C)

Finned heat exchange surfaces are prone to atmospheric fouling, especially in urban and industrial areas. Finned surfaces are commonly found in closed-circuit cooling towers, finned dry coolers, adiabatic finned dry coolers, and direct evaporative coolers. This fouling may also be the result of seasonal conditions (e.g., cottonwood seeds or pollen clogging the finned surfaces of sensible heat rejection equipment) or long-term corrosion on surfaces in contact with water or other fluids. In addition to fouling, the use of glycols for freeze protection can also have a negative effect on the ability of the fluids to exchange energy, resulting in higher approach temperatures.

2.2.3 Pumps

Pumps and pumping system design should take into account energy efficiency, reliability, and redundancy. It may be possible to design pumps with variable-speed drives so that the redundant pump is always operational.
Ramp-up to full speed occurs on the loss of a given pump. Use of multiple pumps running at slower speeds may also save energy, depending on the part-load efficiency of the pumps at both conditions. Premium efficiency motors should always be specified, since the payback period will be short with 24/7 operation.

Basic information on pumps can be obtained from Chapter 39 of the 2012 ASHRAE Handbook—HVAC Systems and Equipment (ASHRAE 2012a). Piping systems are covered in Chapter 13 of the same volume of the Handbook.

2.2.4 Economizer Mode of Operation

Fundamentally, the economizer process involves using favorable ambient weather conditions to reduce the energy consumed by the facility cooling systems. (The IT industry still has some concerns regarding air-side economizers, but these are beyond the scope of this liquid cooling document.) Most often this is accomplished by limiting the amount of energy used by the mechanical cooling (refrigeration) equipment. Since the use of an economizer mode (or sequence) reduces energy consumption while maintaining the design conditions inside the space, another term used in the building cooling industry for economizer mode operation is free cooling.

ANSI/ASHRAE Standard 90.1-2010, Energy Standard for Buildings Except Low-Rise Residential Buildings, is a “standard” or “model code” that describes the minimum energy efficiency standards to be used for all new commercial buildings, including aspects such as the building envelope and cooling equipment. This standard is often adopted by an authority having jurisdiction (AHJ) as the code for a particular locale.

Water-side economizer cycles often use heat exchangers to transfer heat from the condenser water system to the chilled-water system as conditions permit. If a water-side economizer can precool return water to the chillers, thus reducing the load on the chillers under favorable environmental conditions, it is called an integrated economizer.
If, on the other hand, it only operates when it can meet 100% of the cooling load, it is typically called a parallel economizer. Integrated economizers can save more energy than parallel economizers due to increased hours of operation. However, system pressure drops can increase with integrated economizers, and bypasses are often added so that the increased system pressure drop is only incurred during economizer operation.

As its name suggests, a heat exchanger is a device that relies on thermal transfer from one input fluid to another. One of the fluids that enters the heat exchanger is cooler than the other entering fluid. The cooler input fluid leaves the heat exchanger warmer than it entered, and the warmer input fluid leaves cooler than it entered. Figure 2.12 is a simplistic representation of the process. Figure 2.13 shows a typical installation of plate-and-frame heat exchangers.

Figure 2.12 Simple overview of the heat exchanger process.
Figure 2.13 Insulated plate-and-frame heat exchangers.

3 Facility Piping Design

3.1 GENERAL

The piping architecture defines the relationship between the cooling source (plant) and the load (electronic equipment). The architecture should consider simplicity, cost, ease of maintenance, ease of upgrade/change, ease of operation, controls, reliability, energy usage, etc. Typically the basic options for piping architecture are established and then reviewed for their effectiveness within the spatial constraints of the facility.
For example, a loop may look like a good option from a piping architecture perspective, but the routing paths available may not provide the space and locations needed to create effective loops.

This chapter is organized into:
• Spatial considerations, including routing
• Basic piping architecture
• Piping arrangements for the central plant
• Water treatment issues
• Earthquake protection

One other important consideration is pipe sizing criteria. Analysis of plant, distribution, and terminal pipe sizes results in trade-offs between capital and operational costs. Larger pipe sizes yield lower water velocities, which, in turn, lower pumping power (smaller pumps) and operational costs. Generally, velocities should be as high as practical without sacrificing system integrity (see the discussion of velocity limits in Section 5.1.1.3). Typically, the increased pumping energy does not outweigh the lower capital cost or the space savings associated with the smaller pipe sizes.

Pipe sizing also affects how much additional cooling load can be added to the data center at a future date. As cooling and power densities continue to grow, data centers must be scalable to house this future growth. One strategy is to oversize the chilled-water piping plant mains and distribution headers to accommodate future load increases. Oversizing these will save energy and allow smaller pumps for much of the data center's life. The chilled-water piping must be planned and designed so the data center can remain operational while adding more computer equipment.

3.2 SPATIAL CONSIDERATIONS

Usually the spatial considerations start by determining what piping can be located in the data center and where.
Some examples of stakeholder concerns/preferences are:
• No overhead piping, minimal overhead piping, or no constraint on overhead piping other than to avoid routing directly above electronic equipment.
• All piping mains and pipes above a certain size can be run in the data center or are confined to pipe galleries, chases, troughs, utility pits, etc.

The spatial constraints do not just apply to routing but also to the accessibility of valves and terminations, as well as the impact of piping penetrations through firewalls. Stakeholder piping location preferences combined with the physical constraints of the facility often have as much or more influence on the piping architecture as any other influence, such as operations, energy, or redundancy. Consideration should be given to laying out large piping first so as to minimize pressure drop (e.g., large-radius bends, few changes of direction). Then arrange pumps, chillers, etc., to tie into the piping with minimal changes of direction. This will likely result in chillers and pumps arranged in less than a linear fashion, perhaps at 45° instead of 90°.

Although each layout and design is different, a good starting point is to allocate cross-sectional dimensions that are twice the pipe diameter. Some examples: If the pipe is 6 in. (15.2 cm) in diameter, then allow 12 × 12 in. (30.5 × 30.5 cm) in cross section. If the pipe is 18 in. (45.7 cm) in diameter, then allow 36 × 36 in. (91.4 × 91.4 cm) in cross section. These allocations are to allow for the pipe flanges, pipe guides/supports, valve handles, etc.

Also, the piping design must consider expansion and contraction as well as seismic considerations. For example, the piping may be stored and installed at an ambient temperature of 90°F (32°C) and operate at 45°F (7°C). The differential temperature (90°F – 45°F = 45°F [32°C – 7°C = 25°C]) can cause significant movement during system startup and cooldown.
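Both rules of thumb above come down to quick arithmetic. A minimal sketch (Python); the carbon-steel expansion coefficient and the 100 ft run length are illustrative assumptions, not values from this guide:

```python
# Rules of thumb from Section 3.2, as quick arithmetic:
# (1) allocate a cross section of twice the pipe diameter each way;
# (2) estimate pipe movement as delta_L = alpha * L * delta_T.
# alpha below is a typical value for carbon steel (an assumption).

ALPHA_STEEL = 6.5e-6  # in./(in.·°F), approximate for carbon steel

def clearance_in(pipe_dia_in):
    """Cross-sectional allocation: twice the pipe diameter on each side."""
    return (2 * pipe_dia_in, 2 * pipe_dia_in)

def thermal_movement_in(run_length_ft, t_install_f, t_operate_f, alpha=ALPHA_STEEL):
    """Length change of a straight run between install and operating temperatures."""
    return alpha * (run_length_ft * 12) * abs(t_install_f - t_operate_f)

print(clearance_in(6))     # 6 in. pipe -> 12 x 12 in. allocation
print(clearance_in(18))    # 18 in. pipe -> 36 x 36 in. allocation
# 100 ft run installed at 90°F, operating at 45°F:
print(round(thermal_movement_in(100, 90, 45), 2))  # contraction, in inches
```

For the assumed coefficient, a 100 ft run contracts roughly a third of an inch over the 45°F swing, which is why anchors, guides, and expansion provisions belong in the layout from the start.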
3.3 BASIC PIPING ARCHITECTURE

Inherent to the concept of liquid cooling is extending the infrastructure that contains the cooling media to an area local to the datacom equipment. In addition, the infrastructure delivery method is typically at a much finer granularity (i.e., many more points of connection) based on the terminal use of the liquid. Cooling equipment may be serving a few racks, a single rack, or perhaps even multiple components within a single rack.

The infrastructure itself is typically a copper or steel piping network. Many different piping architectures and flow principles can be used to extend the piping network to the rack and the increased points of connection while at the same time providing an overall system that is consistent with the necessary reliability and flexibility of a datacom facility. The following group of diagrams within this section represents some of the different piping architectures and flow principles that can be used.

3.3.1 Direct Return (Figure 3.1)

A direct return system is the most basic type of piping system and is used in traditional HVAC design where there are a reduced number of connection points. In this system, the supply and return piping is fed in a radial manner, and the loads that are closest to the cooling plant have the shortest supply piping lengths and the shortest return piping lengths.

Figure 3.1 Example of direct return flow principle.
However, the direct return method, when used in an application that has many close connection points, may require an excessive number of balancing valves to ensure proper system operation. This is due to the variation in supply and return piping lengths to a given load.

Advantages
1. Least expensive to construct; uses a minimal amount of pipe, valves, and fittings.
2. Simplest to operate and understand.

Disadvantages
1. Least reliable, since only one source of cooling exists.
2. No redundancy in piping to the load. Any pipe failure or leak or future addition could jeopardize system availability.
3. May require additional balancing valves.

3.3.2 Reverse Return (Figure 3.2)

The objective of the reverse return flow principle is to inherently create a piping network with an element of self-balancing. This is achieved by having the loads supplied by piping closest to the cooling plant also be the loads that are at the most remote end of the return piping, and vice versa. This is accomplished by essentially having the flow in the return piping parallel the flow in the supply piping as it feeds the various loads around the building. This results in the combined length of supply and return piping for any given load being approximately equal, which creates a system that can be considered self-balancing.

Advantages
1. Simple to operate and understand.
2. Self-balancing.

Disadvantages
1. Less reliable; again, only one source of cooling.
2. No redundancy in pipe or chilled-water routes. Routine maintenance or system expansion could require complete system shutdown.
3. A little more expensive to install than direct return (i.e., more piping required).
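The contrast between the two flow principles can be made concrete by tallying combined supply-plus-return path lengths. A minimal sketch (Python) with illustrative tap distances; the loop model is a simplification, not a hydraulic calculation:

```python
# Tally combined supply + return path length for loads tapped along a main.
# Direct return: the return retraces the supply route, so totals differ by
# load position and balancing valves are needed. Reverse return: the return
# main parallels the supply, so every load sees roughly the same total.

loads_m = [10, 20, 30, 40]   # illustrative tap distances from the plant, m
main_end = max(loads_m)      # far end of the run

direct = [2 * d for d in loads_m]                            # out and back
reverse = [d + (main_end - d) + main_end for d in loads_m]   # continue to far end, then back

print("direct return :", direct)     # totals grow with distance from the plant
print("reverse return:", reverse)    # identical totals -> self-balancing
```

Equal path lengths mean roughly equal friction losses, which is the sense in which a reverse return system balances itself.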
3.3.3 Looped Mains Piping Schemes

The remaining piping architecture examples illustrated in this section involve the use of looped piping mains. Looped piping mains involve a closed loop that is tapped at various points to feed loads. The flow of liquid within the loop can occur in two directions from the source and, in theory, there is a "no-flow zone" near the midpoint of the loop. More importantly, the architecture of looped piping mains also allows a section of main piping to be isolated for maintenance or repair. Loads that were downstream of the isolated section can then be backfed from the other side of the looped mains to allow for greater online availability of the cooling system.

Various piping architectures can be used to create a loop design. These variations allow a system to attain different levels of isolation, different hydraulic characteristics, and different levels of modularity. The following diagrams illustrate some examples of looped piping architectures.

Figure 3.2 Example of reverse return flow principle.

3.3.4 Single-Ended Loop with Direct Feed (Figure 3.3)

A single-ended loop has a single point of connection (supply and return piping) to the plant. The piping is typically looped within the datacom area and, in this particular configuration, the loads are directly fed from the mains' loop piping. A popular application of this piping architecture is for an air-cooled CRAC-unit-based cooling system, where CRAC units are located around the perimeter of the datacom area.

Figure 3.3 Single-ended loop with direct feed.
Advantages
1. Self-balancing.
2. Increased reliability over direct and reverse return systems, with two piping routes to the load.
3. Individual pipe sections and future equipment installations are serviceable without system shutdown.

Disadvantages
1. More complex to operate and understand.
2. Increased installation costs.

3.3.5 Single-Ended Loop with Common Cross Branches (Figure 3.4)

Similar to the previous example (Figure 3.3), the same single-ended loop architecture is used. The difference is in the connection of the loads, which are now indirectly fed from cross-branch piping connected at two locations to the mains' loop. The cross-branch piping is said to be common since it is used by multiple loads on both sides of its route as a supply and return flow path.

This method not only allows for a bidirectional flow of liquid in the mains but also within each cross branch. As such, it provides multiple paths for flow to reach the majority of the loads should a section of the mains' loop or the cross branch need to be isolated for reasons of maintenance or repair.

Advantages
1. Increased reliability with multiple piping routes to load.
2. Self-balancing.
3. Used primarily for water-cooled rack units.
4. Individual pipe sections and future equipment installations are serviceable without system shutdown.

Disadvantages
1. Increased installation costs.
2. Increased operational complexity.

3.3.6 Single-Ended Loop with Dedicated Cross Branches (Figure 3.5)

The same single-ended loop is used here as with the previous two examples (Figures 3.3 and 3.4), and the same cross-branch piping as in the previous example (Figure 3.4). Now, however, the indirect connections of the loads are supplied from an increased number of cross-branch pipes.
This allows for an increase in granularity of the loads and, therefore, an increased level of reliability (i.e., the isolation of a section of cross-branch piping will not impact as many loads, since there are fewer connections per cross branch). As is apparent from the diagram, this architecture involves many more cross branches, so the increased granularity needs to be evaluated against increased cost and hydraulic/control complexity.

Advantages
1. Increased reliability with multiple piping routes to load.
2. Self-balancing.
3. Individual pipe sections and future equipment installations are serviceable without system shutdown.

Figure 3.4 Single-ended loop with common cross branches.

Disadvantages
1. Increased installation costs.
2. Increased operational complexity.

3.3.7 Double-Ended Loop with Direct Feed (Figure 3.6)

The only difference between the single-ended loop (shown in Figure 3.3) and the double-ended loop (shown in Figure 3.6) is that in this piping architecture there are two connections to the plant, which eliminates the single point of failure that exists for all single-ended loop piping configurations (e.g., if a need exists to isolate the piping between the connection to the plant and upstream toward the plant itself, this method will still allow cooling to all loads via the second connection). To illustrate an even greater level of reliability, consider the second connection to the loop to be from an operationally independent plant. These independent plants could even be in geographically different locations within the same facility.

Figure 3.5 Single-ended loop with dedicated cross branches.

Advantages
1. High reliability.
2. Redundant piping routes to load and a second cooling supply and return mains from the plant.
3. Redundant cooling supply and return piping from a second central plant.
4. Individual pipe sections and future equipment installations are serviceable without system shutdown.
5. Self-balancing.

Disadvantages
1. Increased installation costs.
2. Increased operational complexity.

Figure 3.6 Double-ended loop with direct feed.

3.3.8 Double-Ended Loop with Common Cross Branches (Figure 3.7)

As stated previously, the difference between this piping architecture and the single-ended loop with common cross branches (shown in Figure 3.4) is that two connections to the plant are made to eliminate the single point of failure.

Figure 3.7 Double-ended loop with common cross branches.

Advantages
1. High reliability.
2. Redundant piping routes to load and a second cooling supply and return mains from the plant.
3. Redundant cooling supply and return piping from a second central plant.
4.
Individual pipe sections and future equipment installations are serviceable without system shutdown.
5. Self-balancing.

Disadvantages
1. Increased installation costs.
2. Increased operational complexity.

3.3.9 Double-Ended Loop with Dedicated Cross Branches (Figure 3.8)

Similarly, the principal difference between this piping architecture and the single-ended loop with dedicated cross branches (shown in Figure 3.5) is that two connections to the plant are made to eliminate the single point of failure.

Advantages
1. High reliability.
2. Redundant piping routes to load and a second cooling supply and return mains from the plant.
3. Redundant cooling supply and return piping from a second central plant.
4. Individual pipe sections and future equipment installations are serviceable without system shutdown.
5. Self-balancing.

Disadvantages
1. Increased installation costs.
2. Increased operational complexity.

3.4 PIPING ARRANGEMENTS FOR THE COOLING PLANT

The cooling plant equipment can be configured in various ways. For example, the chillers can be configured in series or parallel and have different preferential loading schemes. The pumping and flow can be configured as constant flow, stepped variable flow, or variable flow. The building owner or occupant will have to perform an engineering analysis to determine which configuration is best for their data center. Figure 3.9 shows a typical decoupled condenser water system/chilled-water system pumping configuration.

3.4.1 FWS Pipe Sizing

The pipe conveyance has to support the real load rather than an average bulk load on the raised floor. Today's data center has an eclectic grouping of various loads, ranging from cross-connect racks (no load) to blade servers.
The FWS pipe work must provide cooling at hot-spot areas during the life of the data center. Since these areas might change with new equipment over time, the FWS distribution must be oversized to provide the flexibility to serve hotter areas with cooling. Example: if the average load is at a design density of 100 W/ft² (1076 W/m²), the FWS distribution should be able to supply any local area of the raised floor with 175–200 W/ft² locally. In today's environment of changing technology, all FWS piping should plan for a series of additional water taps off the distribution to serve the future requirements for auxiliary cooling equipment.

Figure 3.8 Double-ended loop with dedicated cross branches.

3.4.2 Loop Isolation Valve Failures

The basis for reliable service in the mechanical plant and pipe work uses loop isolation valves to isolate the operating system from the repair or expansion. If the isolation valve itself fails, both sides of the loop are then exposed to the next isolation valves. Depending on the design configuration, this might include the redundant pump or chiller and render the capacity of the system below critical load. Therefore, the entire loop strategy relies on the loop isolation valve reliability and service life. To remove this single point of failure from the system, either a double-pipe system (independent of the first system) or a double set of isolation valves is used in locations where a single valve can affect critical load.

Figure 3.9 Condenser water system/chilled-water system distribution piping.
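The FWS sizing guidance in Section 3.4.1 can be turned into rough branch-flow numbers. A minimal sketch (Python); the zone area, the 12°F chilled-water temperature rise, and the resulting 2 gpm/ton figure are illustrative assumptions, not values from this guide:

```python
# Rough branch flows behind the Section 3.4.1 guidance: size the FWS
# distribution so any local zone can be fed at roughly twice the average
# design density. The 1000 ft² zone and the 12°F water delta-T (which
# works out to 2 gpm per ton) are assumptions for illustration.

W_PER_TON = 3517.0     # watts per ton of refrigeration
GPM_PER_TON = 24 / 12  # gpm per ton at a 12°F water temperature rise

def zone_flow_gpm(area_ft2, density_w_ft2):
    """Chilled-water flow needed to cool one raised-floor zone."""
    tons = area_ft2 * density_w_ft2 / W_PER_TON
    return tons * GPM_PER_TON

avg = zone_flow_gpm(1000, 100)   # zone loaded at the 100 W/ft² average
hot = zone_flow_gpm(1000, 200)   # same zone filled with dense racks
print(round(avg, 1), round(hot, 1))   # local taps must carry ~2x average flow
```

The point of the exercise: branch taps and headers sized only for the average density cannot serve a future hot zone, so the local-area capability has to be designed in up front.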
3.5 WATER TREATMENT ISSUES

Whenever an oversized pipe system with redundant pathways and expansion sections is prebuilt for future service and then installed into a data center, the operation of the center needs water treatment to protect the system. Pipe walls interact with the water they circulate. Water treatment includes testing and injecting chemicals to prevent corrosion of the pipe walls. Pipe loops may not have any flow at all for extended periods, and expansion sections may never have flow. The life of the treatment chemicals is approximately 30 days (depending on strength and water quality). Where pipe flow is low or undisturbed, fouling and corrosion may occur. Periodically the flow in these pipes must be accelerated to full flow for a short time each month to re-protect the pipe walls. In stagnant segments, the water flow must be directed to those areas to re-coat the surfaces. Allowing the operations staff to gain experience shifting cooling water conveyance is also important for operational readiness in the event of a leak. The expansion legs should have a circulation jumper (1/4 in. [0.64 cm] bypass) with valves to allow flushing of these pipes monthly to keep protection and fouling under control. The entire process can be automated if the loop isolation valves and VFDs can be controlled by the automation system.

3.6 SEISMIC PROTECTION

Depending on the design area and the risk aversion of the data center insurance underwriters, the mechanical and fire protection pipes over 4 in. (10.2 cm) in diameter may come under bracing requirements for seismic protection. A qualified structural engineer will be required to design for these requirements.
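The monthly re-protection flushing described in Section 3.5 lends itself to automation once the loop isolation valves and VFDs are on the automation system. A minimal scheduling sketch (Python); the Segment class, segment names, and control callback are hypothetical, and only the approximate 30-day chemical life comes from the text:

```python
# Sketch of automating the Section 3.5 monthly flush: track when each
# low-flow or stagnant segment was last flushed, and command full flow
# through any segment past the treatment-chemical life. The control
# interface (command_full_flow) is a hypothetical stand-in for the BAS.

from datetime import date, timedelta

TREATMENT_LIFE_DAYS = 30   # approximate life of treatment chemicals

class Segment:
    def __init__(self, name):
        self.name = name
        self.last_flushed = date.min   # never flushed

    def needs_flush(self, today):
        return today - self.last_flushed >= timedelta(days=TREATMENT_LIFE_DAYS)

def run_flush_cycle(segments, today, command_full_flow):
    """Drive full flow through any segment past its flush interval."""
    for seg in segments:
        if seg.needs_flush(today):
            command_full_flow(seg.name)   # e.g., open isolation valves, ramp VFD
            seg.last_flushed = today

segments = [Segment("expansion leg A"), Segment("east loop bypass")]
run_flush_cycle(segments, date(2014, 3, 1), lambda name: print("flushing", name))
```

Even where the valves and drives are not automated, the same record-keeping supports a manual monthly rotation by the operations staff.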
4 Liquid Cooling Implementation for Datacom Equipment

Chapter 1 briefly describes several examples of different implementations of liquid cooling technologies. More importantly, Chapter 1 clearly defines air-cooled and liquid-cooled datacom equipment, as well as air-cooled and liquid-cooled racks or cabinets. The following sections provide detailed descriptions of the various liquid cooling implementations. Section 4.1 illustrates cooling at the rack/cabinet level, showing several common configurations for air, liquid, and combination (air + liquid) cooling. A more detailed look inside the rack/cabinet is provided in Section 4.2, which discusses air- and liquid-cooled datacom equipment. Because liquid cooling often uses a coolant distribution unit (CDU) to condition and circulate the coolant, Section 4.3 provides a detailed overview of this system. The words fluid and coolant are used interchangeably throughout. Throughout this chapter, the liquid lines are shown in the figures as solid or dashed lines, and airflow is represented with arrows that contain dots.

4.1 OVERVIEW OF LIQUID-COOLED RACKS AND CABINETS

A rack or cabinet is considered to be liquid-cooled if liquid must be circulated to and from the rack or cabinet for operation. The following figures illustrate cooling at the rack/cabinet level. The first is a basic air-cooled rack. The remaining figures show other options that use liquid cooling or a combination of air cooling and liquid cooling. The figures in this section all show the coolant supply and return lines under the raised floor. Other facility implementations may allow such lines to be routed above the floor or from the ceiling. Coolant supply and return connections for the rack/cabinet can be from the base, side, or top.
Figure 4.1 shows a purely air-cooled rack or cabinet implementation. While this book's focus is on liquid cooling, this figure provides a baseline with which the vast majority of datacom center operators are familiar. It should be noted that while the figures all show a front-to-back configuration for the airflow and the rack, it can also be configured as front-to-top or front-to-back-and-top (ASHRAE 2012c).

Figure 4.1 Air-cooled rack or cabinet.

Figure 4.2 shows a combination air-cooled and liquid-cooled rack or cabinet that could receive the chilled working fluid directly from some point within the FWS or CWS loop. By the definitions in Chapter 1, the rack or cabinet is liquid-cooled since coolant crosses the interface between the facility and the rack or cabinet. One implementation could have the electronics air-cooled, with the coolant removing a large percentage of the waste heat via a rear-door heat exchanger. Another implementation could have the coolant delivered to processor spot coolers (some form of cold plate), with the balance of the electronics being air-cooled. The descriptions provided are two of many different implementations and should not be understood as the only possible implementations. Further details of the implementation within the rack, cabinet, or datacom equipment are provided in Section 4.2.

Figure 4.2 Combination air- and liquid-cooled rack or cabinet.
It is important to note that this configuration is susceptible to condensation, because there is no CDU in place to raise the temperature of the chilled fluid above the dew point, if necessary.

Figure 4.3 shows a purely liquid-cooled rack or cabinet. One example of such an implementation may have all the electronics in the rack or cabinet conduction-cooled via cold plates. This cooling method could deploy water, refrigerant, or a dielectric coolant as the working fluid. Another implementation may have all the electronics cooled via liquid flow-through (e.g., forced-flow boiling), jet impingement, spray cooling, or another method that deploys a dielectric coolant to directly cool the electronics. Yet another implementation would include a totally enclosed rack that uses air as the working fluid and an air-to-liquid heat exchanger. Further details are provided in Section 4.2. Similar to Figure 4.2, the configuration in Figure 4.3 is also susceptible to condensation because there is no CDU in place to raise the temperature of the chilled fluid above the dew point, if necessary.

Figure 4.3 Liquid-cooled rack or cabinet (side view).

Figure 4.4 shows a combination air-cooled and liquid-cooled rack or cabinet with an external CDU. The CDU, as the name implies, conditions the technology cooling system (TCS) or datacom equipment cooling system (DECS) coolant in a variety of manners and circulates it through the TCS or DECS loop to the rack, cabinet, or datacom equipment. This implementation is similar to that of Figure 4.2, with the exception that there is now a CDU between the facility (FWS or CWS) level supply of chilled fluid and the rack or cabinet.
This implementation allows the CDU to condition the coolant delivered to the rack or cabinet to a temperature above the facility's dew point.

Figure 4.5 shows a purely liquid-cooled rack or cabinet implementation. This implementation is similar to that of Figure 4.3, as well as Figure 4.4, where an external CDU is included.

Figure 4.4 Combination air- and liquid-cooled rack or cabinet with external CDU.
Figure 4.5 Liquid-cooled rack or cabinet with external CDU.

Figures 4.6 and 4.7 are the final implementations to be discussed in this section. These implementations have much in common with the implementations of Figures 4.4 and 4.5, respectively. One obvious difference is the fact that the racks or cabinets shown in Figures 4.6 and 4.7 now possess dedicated CDUs, i.e., internal CDUs. The CDUs are shown at the bottom of the rack, but other configurations could include them on the side or top of the rack.

Figure 4.6 Combination air- and liquid-cooled rack or cabinet with internal CDU.
Figure 4.7 Liquid-cooled rack or cabinet with internal CDU.

This implementation provides more flexibility to the datacom center operator in that the racks or cabinets can now condition their coolants to vastly different conditions as a function of the workload or the electronics within. Another benefit is that different coolants (e.g., water, refrigerant, dielectric) can now be deployed in the different racks as a function of workload or electronics type.
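The condensation caveats that recur through this section reduce to one comparison: the coolant supplied to the rack must stay above the computer room dew point, which is what a CDU arranges. A minimal check (Python) using the Magnus approximation; the coefficients, the 2°C margin, and the example room condition are common textbook values and illustrative assumptions, not figures from this guide:

```python
# Condensation check behind Figures 4.2-4.7: compare the coolant supply
# temperature against the room dew point. Magnus-approximation
# coefficients and the safety margin below are assumptions.

import math

def dew_point_c(dry_bulb_c, rh_percent):
    """Magnus approximation for dew-point temperature, in °C."""
    a, b = 17.27, 237.7
    gamma = a * dry_bulb_c / (b + dry_bulb_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

def condensation_risk(coolant_supply_c, room_c, rh_percent, margin_c=2.0):
    """True if the coolant runs within margin_c of the room dew point."""
    return coolant_supply_c < dew_point_c(room_c, rh_percent) + margin_c

# 7°C facility water in a 24°C / 50% RH room sits well below the dew
# point (about 13°C), so a CDU should raise the loop temperature first.
print(condensation_risk(7.0, 24.0, 50.0))
```

This is the calculation an internal or external CDU effectively performs when it tempers the TCS/DECS loop above the facility dew point.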
Additional detail is provided in Section 4.2.

4.2 OVERVIEW OF AIR- AND LIQUID-COOLED DATACOM EQUIPMENT

While Section 4.1 dealt with the external interaction between the building infrastructure and the rack or cabinet, this section examines the possible liquid cooling systems internal to the rack or cabinet. Described within this section are seven datacom cooling configurations, which represent the most commonly used configurations. Other configurations are possible but are not examined here for brevity's sake. The systems remove heat from the rack or cabinet by means of a fluid whose chemical, physical, and thermal properties are conditioned by a CDU for that purpose. The conditioned fluid may be water, an antifreeze mixture, dielectric fluid, or refrigerant.

Figure 4.8 shows a rack or cabinet with combined air and liquid cooling. The computer room air and the conditioned liquid each remove a part of the heat load. In this configuration, air is the only coolant entering the datacom equipment. The air-to-liquid heat exchanger extracts heat from the air that is either entering or leaving the datacom equipment and, as a result, reduces the heat load on the computer room air conditioning and reduces hot-air recirculation. The heat exchanger may be mounted on the rack or cabinet doors or in any other location along the airstream.

Figure 4.8 Open air-cooled datacom equipment in an air/liquid-cooled rack.

Whichever the case, care should be taken to consider heat exchanger effectiveness and condensation, especially with configurations where the heat exchanger is upstream of the datacom equipment.
Finally, if the air pressure drop across the heat exchanger is sufficiently low, no additional fans will be required to achieve the necessary airflow rate.

Figure 4.9 shows an enclosed cabinet where air is again the only coolant entering the datacom equipment, but here computer room air is excluded, and the entire heat load (apart from heat loss through panels) is removed by the conditioned fluid. Additional fans will probably be required to drive sufficient airflow through the system. To cool the datacom equipment, this configuration requires the same amount of cooling air as the configuration shown in Figure 4.1. The fans and the heat exchanger may be located in various positions. For example, the fans may be in a duct on the rear door, and the heat exchanger may be at the bottom of the cabinet. The objectives are to achieve sufficient airflow to extract the total heat load and to achieve an airflow distribution that avoids uneven cooling and "hot spots" within the cabinet.

Figure 4.9 Closed air-cooled datacom equipment in a liquid-cooled cabinet.

Figure 4.10 illustrates a rack or cabinet where the internal heat transfer is by liquid only. The heat exchanger and pump are shown as "optional" to recognize the case where the facility working fluid from the FWS or CWS loop is sufficiently conditioned to allow direct entry to the datacom units. Unless there is a local CDU associated with the cabinets, one important benefit of the CDU, namely the limiting of fluid leakage volume, is lost.

Figure 4.11 shows the addition of computer room air cooling to a Figure 4.10 system. In view of the variety of heat sources and their different forms, we could expect this to be a more frequent configuration, since cooling complex or heterogeneous geometries is easier to implement with air than with liquids.
The arrangement in Figure 4.12 goes one step further. Like Figure 4.11, it shows a combination of air and liquid cooling, but now the air is in a closed loop that stays within the cabinet. The air is cooled by a separate heat exchanger. This configuration may represent a cabinet housing multiple pieces of datacom equipment, some liquid-cooled and others air-cooled. It may also represent datacom equipment with some components that are liquid-cooled (e.g., the CPU) and other components that are air-cooled (e.g., the power supplies).

Figure 4.10 Closed air- and liquid-cooled datacom equipment in a liquid-cooled rack.

Figure 4.11 Liquid-cooled datacom equipment in a liquid-cooled rack.

Figure 4.12 Open air- and liquid-cooled datacom equipment in an air/liquid-cooled rack.

Figure 4.13 Liquid-cooled datacom equipment in a liquid-cooled rack using a vapor compression system.

Figures 4.13 and 4.14 illustrate the application of vapor compression cycle heat transfer within the rack or cabinet. Figure 4.13 uses an internal DECS loop, and in Figure 4.14 air is the working fluid between the datacom equipment and the vapor compression system.

In all of the systems listed above, the principal heat "sink" is the facility conditioned fluid in the FWS (chiller) or CWS loop, in some cases assisted by computer room conditioned air. When using a DX system, the facility conditioned fluid would
be liquid refrigerant. When significant heat loads are removed by conditioned fluids, the computer room air temperature may be allowed to rise, presenting a more comfortable operating environment. However, the current focus on fluids is driven primarily by escalating heat dissipation and the attendant risk of thermal failure and shortened life span associated with high chip operating temperatures, as well as by lower energy costs, because liquid solutions are more efficient than air.

4.3 OVERVIEW OF IMMERSION COOLING OF DATACOM EQUIPMENT

Figure 4.15 illustrates an example implementation of liquid-immersion cooling of datacom equipment in a rack or cabinet. Immersion cooling has been explored and proposed as an alternative liquid-cooling solution for datacom equipment. In this methodology the server electronics are in direct contact with (i.e., immersed in) a suitable electrically, optically, and materially compatible heat transfer fluid, thus providing direct-contact cooling for most of the server components. Immersion cooling has been investigated in several unique implementations, including the following:

1. Open/semi-open bath immersion cooling of an array of servers in a tank of dielectric fluid that cools the board components via vaporization. Dielectric vapor is condensed using a bath-level water-cooled condenser.
2. Open/semi-open bath immersion cooling of an array of servers in a tank of mineral oil that cools the board components through natural and forced convection. The pumped oil is cooled using an oil-to-water heat exchanger (Figure 4.15).
3. Sealed immersion cooling of individual servers or components with refrigerants or other suitable dielectric fluids that cool components via vaporization. Vapor generated is condensed using a server- or rack-level water-cooled condenser.
4. Sealed immersion cooling of individual servers or components with rack-level dielectric fluids or mineral oil via sensible (i.e., single-phase) heat transfer. The pumped dielectric fluid is cooled using a dielectric fluid-to-water heat exchanger.

Figure 4.14 Air-cooled datacom equipment in a liquid-cooled cabinet using a vapor compression cycle.

Figure 4.15 Immersion-cooled datacom equipment in a liquid-cooled rack using a vapor compression system.

Due to the better thermal properties of the immersion liquid (compared to air) and the almost complete transfer of the heat generated by the IT components directly to the liquid coolant, immersion cooling enables the use of warmer facility water to cool the equipment (as compared to a traditional air-cooled data center with chilled-water-cooled room air conditioners). This results in both energy and capital cost savings, as chilled-water plants and room air conditioners need not be run for as many hours during the course of the year. In some geographies and data center configurations they could arguably be eliminated. Further, in the case where all the server components are completely immersed in the liquid coolant, the server-level air-moving devices can be eliminated to reduce server electricity consumption, noise, and associated service actions.
Despite the potential advantages of this cooling technology, immersion cooling of servers in dielectric or mineral oil fluids faces several challenges, primarily due to the novelty of the solution and the lack of long-term, statistically significant field data as compared to traditional water- and air-cooling solutions. Studies to better understand the impact on long-term server reliability, coolant-server chemical interactions, impact on optics, safety and toxicity of the coolant, end-of-life recycling of the servers and coolant, and field serviceability are ongoing.

Additionally, conventional spinning disk drives cannot be directly immersed in the coolant. They either must be hermetically sealed or be placed outside the immersion-cooled enclosure to be air or conduction cooled. Legacy storage-rich or archival data centers will therefore require some air cooling and the associated facility-level infrastructure such as room air conditioners and chilled-water plants. High-heat-flux components cannot be directly immersion cooled without modification to the lid or the addition of heat sinks optimized for the liquid coolant. Vaporization of dielectric coolant must be carefully managed in this situation to avoid reaching the coolant's critical heat flux limit. The corresponding formation of a vapor film on the hot component can dramatically reduce the surface heat transfer, resulting in a sharp increase in the component's temperature.

Immersion cooling can potentially be a very energy-efficient and cost-effective cooling solution and is thus an area of increasing research and development in the IT industry. However, for high-performance computing systems that use high-power CPUs and GPUs, for storage-rich data centers using spinning disk drives and tape, and for data centers located where energy costs are low and/or the climate is cool, air and water cooling continue to be competitive and attractive solutions.
4.4 OVERVIEW OF COOLANT DISTRIBUTION UNIT (CDU)

The coolant distribution unit (CDU), as the name implies, conditions the TCS or DECS coolant in a variety of manners and circulates it through the TCS or DECS loop to the rack, cabinet, or datacom equipment (see Figure 1.1 for cooling loop descriptions). For example, the CDU can condition the coolant for temperature and cleanliness (particulate and chemical). The CDU implementations shown in Figures 4.16 through 4.21 all demonstrate coolant temperature control via the heat exchanger. It should be noted that coolant temperature control can also be achieved via heat exchanger bypass. The CDU can also be designed to condition the coolant if the TCS or DECS coolant is a refrigerant or dielectric. The rack or facility coolant supplied to the CDU can be refrigerant supplied by a DX system. The functionality of the CDU will depend, in large part, on the coolant that is being conditioned and the level at which the CDU is implemented (i.e., internal to the datacom equipment, internal to the rack/cabinet, or at the facility level and external to the rack). In all cases, the CDU includes a pump or compressor to circulate the fluid through the coolant loop. While all of the figures show counterflow heat exchangers, other types are possible.

The CDU provides a number of important benefits, as follows:

Prevention of condensation: It provides an opportunity for temperature conditioning, which could allow the coolant to be delivered to the electronics above the dew point.

Isolation: It may allow the electronics to be isolated from the harsher facility water in the FWS or CWS loop. The loop supplied by the CDU also uses a lower volume of coolant, so a coolant leak is less catastrophic.
Coolant flexibility: The separate loop associated with the CDU allows users the flexibility to use any number of coolants.

Temperature control: The separate loop associated with the CDU allows users the flexibility of running the electronics at a desired temperature. This temperature can be above, at, or below the dew point.

Figures 4.16 through 4.21 illustrate the breadth of current implementations of CDUs. While not shown, it is implied that some form of quick disconnect (or quick coupling and a ball valve) is implemented at all points where coolant crosses the "CDU boundary" line, which is indicated by a dashed-dotted line around the CDU. The use of quick disconnects allows for low loss of coolant and facilitates replacement of a CDU in the event of a catastrophic failure. It is important to keep in mind that there are many possible implementations of CDUs beyond those shown in Figures 4.16 through 4.21. Figures 4.16 and 4.17 illustrate implementations internal to the datacom equipment. Figures 4.18 and 4.19 illustrate implementations internal to the rack/cabinet. Figures 4.20 and 4.21 illustrate implementations external to the rack/cabinet and at the facility level. Further descriptions follow.

Figure 4.16 shows an implementation of a CDU internal to the datacom equipment. The datacom equipment rejects the heat from its electronics via a cold plate that docks with another rack/cabinet-based cold plate. At a very basic level, the CDU consists of a cold plate, an accumulator (or a reservoir), and a pump (or multiple pumps for redundancy). The operation of the pump is regulated by some form of controller, which is in communication with one or more sensors within the CDU. The data from the sensors are used to control the level of operation of the CDU.

Similarly to Figure 4.16, Figure 4.17 illustrates an internal datacom equipment implementation of a CDU.
The basic functionality for this implementation is similar to that for Figure 4.16, with the exception of the use of a liquid-to-liquid heat exchanger to reject the datacom equipment heat. In other implementations, the CDU could deploy a liquid-to-air heat exchanger.

Figure 4.16 Internal datacom equipment-based CDU that uses a docking station and cold plates.

Figure 4.18 shows a CDU implementation internal to the rack or cabinet, as shown in Figures 4.6 and 4.7. The CDU shown consists of a liquid-to-liquid heat exchanger, an accumulator (or a reservoir), a pump (or multiple pumps for redundancy), a chemical bypass filter for solvent and water removal, and a particulate filter. Applications of an internal CDU will often use refrigerant or dielectric as the cooling fluid. These fluids necessitate the use of additional filters. This implementation also interfaces with one or more controllers that control the operation of the CDU. The controllers can use sensor inputs at the datacom equipment, rack, or facilities level. While a liquid-to-liquid heat exchanger is shown, this implementation could also deploy a liquid-to-air heat exchanger.

Similarly to Figure 4.18, Figure 4.19 illustrates the internal implementation of a CDU at the rack/cabinet level, as shown in Figures 4.6 and 4.7. The basic functionality of this CDU is similar to that shown in Figure 4.18, with the main difference being the use of a vapor compression system. A key benefit of a vapor compression
system is that it allows the user to drop the temperature of the working coolant below that of the coolant to which it is rejecting the heat from the electronics. Unique to the vapor compression system, a compressor (or multiple ones for redundancy) and an expansion valve are used. While a liquid-cooled condenser is shown, an air-cooled condenser could also be deployed.

Figure 4.17 Internal datacom equipment-based CDU using a liquid-to-liquid heat exchanger.

Figure 4.18 Internal rack or cabinet-based CDU using a liquid-to-liquid heat exchanger.

Figure 4.20 shows a CDU implementation external to the rack and at the facility level, as shown in Figures 4.4 and 4.5. This form of CDU can supply cooling fluid to a single rack or a row of racks. In its most basic form, this CDU consists of a liquid-cooled condenser, a pump (the additional space typically allows for redundant pumps), and an expansion tank. The CDU communicates with one or more controllers at the datacom equipment, rack/cabinet, and facility levels, which can control the operation of the CDU. It is possible, though unlikely given the amount of heat that must be rejected, that an air-cooled condenser could be deployed. One of the key benefits of this type of implementation is that it isolates the racks/cabinets and datacom equipment from the facility water in the FWS or CWS loop, which tends to be much harsher on the internal materials.

Figure 4.21 shows the implementation of a CDU external to the rack or at the facility level, as shown in Figures 4.4 and 4.5.
The basic functionality is similar to that of Figure 4.20, with the key difference being the use of a vapor compression system. Similar to Figure 4.19, the basic vapor compression system uses a compressor (or multiple ones for redundancy), a liquid-cooled condenser, and an expansion valve. This implementation of the CDU can also communicate with one or more controllers at the datacom equipment, rack/cabinet, and facility levels. As with the implementation of Figure 4.20, it is possible, but probably not likely, that an air-cooled condenser could be deployed. This implementation also isolates the racks/cabinets and datacom equipment from the facility water in the FWS or CWS loop.

Figure 4.19 Internal rack or cabinet-based CDU using a liquid-cooled condenser and a vapor compression system.

Figure 4.20 Facility-based CDU using a liquid-cooled condenser.

Figure 4.21 Facility-based CDU using a liquid-cooled condenser and a vapor compression system.

The temperature in the datacom equipment can rise very quickly if the CDU fails. Therefore, CDU implementations need to be redundant and fault-tolerant. All of the strategies discussed in Section 2.1.5 regarding CRAC availability and redundancy apply to the CDU. Multiple CDUs should be implemented to eliminate downtime in mission-critical installations. The chiller supplying the cooling fluid to the CDU also needs to be redundant and fault-tolerant to ensure it is always supplying cooling water to the CDU.
The CDU vendor also must design the CDU to be fault-tolerant. For instance, the coolant pump in the CDU is the most likely point of failure. Most systems will have redundant pumps. The pumps will either alternate on a relatively short-term basis (weekly or monthly), or one pump will be the primary pump and a secondary pump will be tested periodically to make sure it is operational. The pumps will likely have isolation valves so that a nonfunctional pump can be isolated for replacement without bringing the entire CDU (and associated datacom equipment) down for repair. On smaller systems that may not accommodate dual pumps, the vendor will select a pump with a long life. In this situation, it is expected that the CDU will be upgraded before the pump wears out. Power to the CDU also needs to be backed up by a UPS. This will allow the datacom equipment to continue running or to shut down gracefully in the event of a power outage.

5 Liquid Cooling Infrastructure Requirements for Facility Water Systems

This section defines interface requirements between the facilities controlled by the building facility operators and the datacom equipment controlled by the datacom manufacturers. Demarcation lines for the facility water system (FWS) are provided to describe where these interfaces occur in relationship to the datacom equipment and its location within the data center or telecom room.

5.1 FACILITY WATER SYSTEMS (FWS)

Figures 5.1 and 5.2 show the interfaces for an external liquid-cooled rack with remote heat rejection. The interface is located at the boundary of the facility FWS loop and does not impact the technology cooling system (TCS) or datacom equipment cooling system (DECS) loops, which will be controlled and managed by the cooling equipment and datacom manufacturers. However, the definition of the interface at the FWS loop affects both the datacom equipment manufacturers and the facility operator where the datacom equipment is housed. For that reason, all of the parameters that are key to this interface will be described in detail in this section. Also, note that the coolant loops in the cooling distribution unit (CDU) are typically separate.

Figure 5.1 Combination air- and liquid-cooled rack or cabinet with external CDU (same as Figure 4.5).

Figure 5.2 Combination air- and liquid-cooled rack or cabinet with internal CDU (same as Figure 4.6).

5.1.1 Facility Supply Water Temperature Classification

The classification of facility water supply temperature for various facility cooling systems or infrastructures is summarized in Table 5.1. The facility water classes W1 through W5 are defined in the sections below. Schematic representations of the facility cooling infrastructure corresponding to the water classes can be found in Figure 5.3. Compliance with a particular water temperature class requires full operation of the equipment under normal (nonfailure) conditions. The IT equipment specific to each class requires different design points for the cooling components (cold plates, thermal interface materials, liquid flow rates, piping sizes, etc.).

The facility supply water temperatures specified in the table are requirements to be met by the IT equipment for the specific class of hardware manufactured. For the data center operator, the use of the full range of temperatures within the class may not be required or even desirable given the specific data center infrastructure design.
Table 5.1 ASHRAE Liquid Cooling Guidelines

Liquid Cooling Class | Primary Facilities Cooling Equipment | Secondary/Supplemental Cooling Equipment | Facility Water Supply Temperature
W1 | Chiller/Cooling Tower | Water-Side Economizer (with Dry Cooler or Cooling Tower) | 2°C to 17°C
W2 | Chiller/Cooling Tower | Water-Side Economizer (with Dry Cooler or Cooling Tower) | 2°C to 27°C
W3 | Cooling Tower | Chiller | 2°C to 32°C
W4 | Water-Side Economizer (with Dry Cooler or Cooling Tower) | N/A | 2°C to 45°C
W5 | Building Heating System | Cooling Tower or Dry Cooler | >45°C

Figure 5.3 ASHRAE liquid cooling classification, typical infrastructure design schematics.

There is currently no widespread availability of IT equipment in ranges W3 through W5. Product availability in these ranges in the future will be based on market demand. It is anticipated that future designs in these classes may involve trade-offs between IT cost and performance. At the same time, these classes would allow lower-cost data center infrastructure in some locations. The choice of IT liquid-cooling class should involve a total cost of ownership (TCO) evaluation of the combined infrastructure and IT capital and operational costs.

5.1.1.1 Class W1/W2: Typically a data center that is traditionally cooled using chillers and a cooling tower but with an optional water-side economizer to improve energy efficiency, depending on the location of the data center.

5.1.1.2 Class W3: For most locations, these data centers may be operated without chillers in a water-side economizer mode.
Some locations may still require chillers to meet facility water supply temperature guidelines during peak (i.e., design) ambient conditions for a relatively short period of time.

5.1.1.3 Class W4: To take advantage of energy efficiency and reduce capital expense, these data centers are operated in a water-side economizer mode without chillers. Heat rejection to the atmosphere can be accomplished by either a cooling tower or a dry (closed-loop liquid-to-air) cooler.

5.1.1.4 Class W5: To take advantage of energy efficiency, reduce capital and operational expense with chillerless operation, and make use of the waste energy. The facility water temperature is high enough to make use of the water exiting the IT equipment for heating local buildings.

Note: Additional descriptive information on the infrastructure heat-rejection devices mentioned above is given in Chapter 2.

5.1.2 Additional Building Facility Water Considerations

The majority of liquid cooling applications are anticipated to use water or water plus additives (e.g., propylene glycol, ethylene glycol, etc.). The following sections focus on these applications. Different design and operational requirements will certainly apply to alternative liquids, such as refrigerants or dielectrics (see Section 5.2 for systems using coolants other than an aqueous solution). Datacom equipment that requires a supply of water from the data center facilities will typically adhere to the operational requirements contained within this section.

5.1.2.1 Operational Requirements. For classes W1 and W2, the datacom equipment may generally accommodate facility water supply temperatures set by a campus-wide operational requirement. The supply temperature also may be the optimum of a balance between lower operational cost using higher-temperature facility water systems versus lower capital cost with low-temperature facility water systems. Consideration of condensation prevention is a must (see Section 5.1.2.2).
In the FWS loop, insulation will typically be required. In the TCS and DECS loops, condensation control is typically provided by an operational temperature above the dew point.

The maximum allowable water pressure supplied to the TCS and DECS loops should be 125 psig (860 kPa) or less. The facility water flow-rate and pressure-differential (i.e., the pressure difference between supply header and return header) requirements for the CDU are a function of the FWS supply temperature and the chemical makeup of the water (e.g., antifreeze, corrosion inhibitors). Manufacturers of the IT equipment or the CDU will typically provide configuration-specific flow-rate and pressure-differential requirements that are based on rack heat dissipation as well as a given FWS supply temperature.

The FWS should incorporate a water flow control valve, typically a two-way modulating valve. Generally, these valves modulate when the datacom equipment is operating and are fully closed when the datacom equipment is powered off. Depending on the design of the facility water system, a method of bypassing chilled water or de-energizing facility water pumps when the datacom equipment is not operating may be needed. Alternatively, for systems that must maintain a constant FWS flow rate, three-way valves should be considered.

For classes W3, W4, and W5, the infrastructure will probably be specific to the data center, and therefore the water temperature supplied to the water-cooled IT equipment will depend on the climate zone and may vary throughout the year.
In these classes, the system may be required to run without a chiller installed, so it is critical to understand the limits of the water-cooled IT equipment and its integration with the infrastructure designed to support it. This is important so that extremes in temperature and humidity still allow for uninterrupted operation of the data center and the liquid-cooled IT equipment.

The temperature of the water for classes W3 and W4 will depend on the design of the heat-rejection equipment, the heat exchanger between the cooling tower and the secondary water loop, the design of the secondary water loop to the IT equipment, and the local climate. To accommodate a large geographic region, the range of water temperatures was chosen as 35°F to 113°F (2°C to 45°C).

For class W5, the infrastructure will be such that the waste heat collected from the datacom equipment warm water can be redirected to nearby buildings. Accommodating water temperatures nearer the upper end of the temperature range will be more critical to those applications where retrieving a large amount of waste energy is essential. The water supply temperatures for this class are specified as greater than 45°C (113°F) because the water temperature may depend on many parameters such as the climate zone, building heating requirements, and the distance between the data center and adjacent buildings. Of course, component temperatures within the IT equipment need to be maintained within their limits while using the hotter water as the heat sink. In many cases, the hotter heat-sink water will be a challenge for the IT equipment thermal designer, although with much lower temperatures there may also be opportunities for heat recovery for building use, depending on the configuration and design specifications of the systems to which the waste heat would be supplied.
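The class ranges in Table 5.1 lend themselves to a simple lookup when screening a facility design. A sketch with illustrative names; the open-ended W5 range (">45°C" in the table) is modeled here as 45°C and above for simplicity:

```python
# Facility water supply temperature ranges from Table 5.1, in degC.
# W5 is specified as ">45 degC"; it is approximated here as 45 degC and up.
WATER_CLASSES = {
    "W1": (2.0, 17.0),
    "W2": (2.0, 27.0),
    "W3": (2.0, 32.0),
    "W4": (2.0, 45.0),
    "W5": (45.0, float("inf")),
}

def classes_covering(supply_temp_c):
    """Liquid cooling classes whose facility water supply range includes
    the given temperature."""
    return [name for name, (lo, hi) in WATER_CLASSES.items()
            if lo <= supply_temp_c <= hi]

print(classes_covering(30.0))  # 30 degC falls within the W3 and W4 ranges
```

A 30°C supply, for example, is inside the W3 and W4 envelopes but beyond W1 and W2, which is the kind of screening a TCO comparison between infrastructure options would start from.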
5.1.2.2 Condensation Considerations. Liquid cooling classes W1, W2, and W3 allow the water supplied to the IT equipment to be as low as 2°C (35°F), which is below the ASHRAE allowable room dew-point guideline of 17°C (63°F) for Class A1 equipment environments (Thermal Guidelines for Data Processing Environments, 3rd Edition, ASHRAE 2012c). Electronics equipment manufacturers are aware of this and are taking it into account in their designs. Commensurately, data center relative humidity and dew point should be managed according to the ASHRAE 2012 Thermal Guidelines for Data Processing Environments. If fluid operating temperatures below the room dew point are expected, careful consideration of condensation should be exercised. It is suggested that a CDU (as shown in Figures 5.1 and 5.2) with a heat exchanger be used to raise the coolant temperature to at least 18°C (64.4°F) to eliminate condensation issues, or that the facility water system be designed with an adjustable water supply temperature that is set 2°C (3.6°F) or more above the dew point of the data center space.

5.1.2.3 Facility Water Flow Rates. Possible FWS flow rates are shown in Figure 5.4 for given heat loads and given temperature differences. Temperature differences typically fall between 9°F and 18°F (5°C and 10°C).

Figure 5.4 Typical chiller water flow rates for constant heat load.

Appropriate fouling factors should be included in the selection of all heat-transfer or heat-exchange devices.
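The flow rates plotted in Figure 5.4 follow from the water-side heat balance Q = m·cp·ΔT. A sketch under assumed nominal water properties; names and example values are illustrative:

```python
CP_WATER = 4186.0   # J/(kg*K), specific heat of water
RHO_WATER = 998.0   # kg/m^3, density of water near room temperature

def fws_flow_lps(heat_load_kw, delta_t_k):
    """Facility water flow (L/s) needed to remove a heat load at a given
    supply-to-return temperature difference: m_dot = Q / (cp * dT)."""
    m_dot_kg_s = heat_load_kw * 1000.0 / (CP_WATER * delta_t_k)
    return m_dot_kg_s / RHO_WATER * 1000.0

# Example: 100 kW of rack heat with a 6 K water temperature rise
print(f"{fws_flow_lps(100.0, 6.0):.2f} L/s")  # about 4 L/s
```

The trade-off visible in Figure 5.4 falls out of the formula directly: choosing the 10°C end of the typical ΔT range instead of 5°C halves the required flow, at the cost of a warmer return temperature.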
Fouling factors are an additional thermal resistance that occurs when deposits build up on heat-transfer surfaces over a period of time. These deposits are carried by the coolant water. In order to maintain the required heat transfer across a device in spite of the fouling of one or both surfaces of the exchanger, the flow can be increased, the temperature of the supply water can be decreased, or a combination of these approaches can be used. Typical fouling factors, in ft²·°F·h/Btu (m²·°C/W), are:

Muddy or silty: 0.0006 ft²·°F·h/Btu (0.00003 m²·°C/W)
Hard (15 grains/gal): 0.0006 ft²·°F·h/Btu (0.00003 m²·°C/W)
City water: 0.0002 ft²·°F·h/Btu (0.00001 m²·°C/W)
Treated cooling tower: 0.0002 ft²·°F·h/Btu (0.00001 m²·°C/W)

Refer to the list of references on how to correctly apply these factors when sizing heat exchangers.

5.1.2.4 Velocity Considerations. The velocity of the water in the FWS loop piping must be controlled to ensure mechanical integrity is maintained over the life of the system. Excessive water velocity can lead to erosion, sound/vibration, water hammer, and air entrainment. Particulate-free water will impart less damage to the tubes and associated hardware. Table 5.2 provides guidance on maximum velocities in piping systems that operate over 8000 hours per year. Flexible tubing velocities should be maintained below 1.5 m/s (5 ft/s). Excessive water velocity in piping systems also increases the pressure drop and energy usage of the system.

Table 5.2 Maximum Velocity Requirements

Pipe Size | Maximum Velocity (fps) | Maximum Velocity (m/s)
>3 in. (>7.6 cm) | 7 | 2.1
1.5 to 3 in. (3.8 to 7.6 cm) | 6 | 1.8
<1 in. (<2.5 cm) | 5 | 1.5
All flexible tubing | 5 | 1.5

5.1.2.5 Liquid Quality/Composition. Table 5.3 identifies the water quality requirements that are necessary to operate the liquid-cooled system. The reader is encouraged to refer to Chapter 49 of the 2011 ASHRAE Handbook—HVAC Applications. This chapter, titled "Water Treatment," provides a more in-depth discussion of the mechanisms and chemistries involved.

5.1.2.5.1 Water Quality Problems. The most common problems in cooling systems are the result of one or more of the following causes:

Corrosion: There are various forms of corrosion: uniform corrosion, galvanic corrosion, crevice corrosion, pitting corrosion, environmentally induced cracking, hydrogen damage, intergranular corrosion, dealloying, and erosion corrosion. Uniform corrosion removes more metal than other forms of corrosion, but pitting corrosion is more insidious and difficult to predict and control (Jones 1996). In typical cooling systems with wetted materials such as copper and aluminum alloys, steels, and stainless steels, aluminum is clearly the most prone to pitting corrosion and steel is the most prone to uniform corrosion. In cooling systems without adequate water chemistry control, steel will corrode uniformly and copper and aluminum will pit. Steel requires treated water to prevent corrosion. A small fraction of copper water-carrying tubing will fail in untreated water due to pitting, with a mean time to failure of about two years (Singh 1985). Aluminum is not recommended in cooling systems unless the water chemistry, including aluminum-specific corrosion inhibitors, is under very stringent control. Stainless steels will generally not pit or uniformly corrode in reasonably controlled waters that are free of sulfur-reducing bacteria. Stainless steels do require some dissolved oxygen in the water for their surface passivation.

The selection of cooling loop materials and manufacturing processes is critical to a loop's reliability.
Tube and pipe surfaces, especially copper tube surfaces, should be free of contamination such as that from carbon films resulting from residues associated with tube drawing operations, reducing the incidence of pitting corrosion. Stainless steel hardware must not be sensitized and must be properly passivated. Sensitized stainless steel hardware may suffer intergranular corrosion. Unpassivated stainless steel suffers superficial corrosion that may contaminate the water. Aluminum is not recommended as a wetted material in the cooling loop. If aluminum must be used, then corrosion-resistant alloys should be selected (including Al-clad alloys) and an aluminum-specific corrosion inhibitor must be added to the water.

Table 5.3 Water Quality Specifications for Facility Water System (FWS) Loop

Parameter                       Recommended Limits
pH                              7 to 9
Corrosion inhibitor             Required
Sulfides                        <10 ppm
Sulfate                         <100 ppm
Chloride                        <50 ppm
Bacteria                        <1000 CFUs/mL
Total hardness (as CaCO3)       <200 ppm
Residue after evaporation       <500 ppm
Turbidity                       <20 NTU (nephelometric)

It is recommended that the corrosivity of the cooling water toward the alloys in the system be checked periodically. While uniform corrosion can be readily measured, pitting corrosion testing requires a more sophisticated electrochemical approach that few laboratories are equipped to conduct (Singh et al. 1992).

pH is an important water chemistry variable. Pourbaix diagrams for metals indicate that most metals corrode the least around the neutral pH range, some corrode least at a little higher than pH 7, and some at a little lower pH.
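Water samples from an operating loop can be screened mechanically against the Table 5.3 limits. The sketch below is illustrative only: the parameter keys and the sample values are hypothetical, and the limits are transcribed from the table above.

```python
# Hypothetical sketch: screen a water sample against the Table 5.3
# FWS loop limits. Keys and the sample dict are illustrative names.

FWS_LIMITS = {
    "ph": (7.0, 9.0),        # allowed range
    "sulfides_ppm": 10,      # the rest are upper limits
    "sulfate_ppm": 100,
    "chloride_ppm": 50,
    "bacteria_cfu_ml": 1000,
    "hardness_ppm": 200,
    "residue_ppm": 500,
    "turbidity_ntu": 20,
}

def out_of_spec(sample: dict) -> list:
    """Return the parameters that violate the Table 5.3 limits."""
    bad = []
    lo, hi = FWS_LIMITS["ph"]
    if not lo <= sample["ph"] <= hi:
        bad.append("ph")
    for key, limit in FWS_LIMITS.items():
        if key != "ph" and sample[key] >= limit:
            bad.append(key)
    return bad

sample = {"ph": 8.1, "sulfides_ppm": 2, "sulfate_ppm": 130,
          "chloride_ppm": 12, "bacteria_cfu_ml": 400,
          "hardness_ppm": 150, "residue_ppm": 220, "turbidity_ntu": 5}
print(out_of_spec(sample))   # flags the sulfate excursion
```

A check like this only flags excursions; diagnosing the cause (corrosion, treatment release, or contamination from another water source) still requires the chemistry reasoning described in this section.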
Corrosion is also driven by high levels of chlorides, sulfides, and sulfates in the water, but one cannot make reliable predictions of corrosion rates from the water chemistry, except under very extreme water-chemistry conditions.

Fouling (insoluble particulate matter in water): Insoluble particulate matter settles as a result of low flow velocity or adheres to hot or slime-covered surfaces and results in heat-insulating deposits and higher pressure drops in the loop. A deposit is generally iron with small amounts of copper and mineral scales such as calcium carbonate and silt. Fouling is related to the amount of particulate matter or total suspended solids in the fluid. A full-flow loop filtration system is not typically needed if the makeup water is of good quality. A side stream filtration system may provide adequate solids removal at a smaller capital cost. The operational aspects of filter monitoring and change-out frequency must be considered and a specific maintenance program established.

Scale (precipitation of salts directly on metal surfaces): Scale is a dense layer of adherent salt that precipitates on surfaces as a result of the concentrations of the salts exceeding their solubility limits. Higher temperatures promote scale formation by lowering the salts' solubility limits. Scale typically consists of calcium carbonate and magnesium carbonate. Hard waters (water high in dissolved calcium and magnesium cations) are prone to scale formation on hotter surfaces when the water pH is high. Soft waters low in these dissolved ions are less prone to scale formation. Hard waters are generally less corrosive because the scale formed on metal surfaces retards the diffusion of oxygen to the cathodic areas.
In closed-circuit cooling systems that do not allow air to ingress or water to escape via evaporation, scale formation is generally not an issue. Evaporation and subsequent concentration of the chemistry can occur in vented expansion tanks as well as through fittings and elastomers (gaskets, etc.) in the system. If carbon dioxide from the air is allowed to dissolve in the water, the reduced propensity to scale formation will leave the metal surfaces less protected from the cathodic half-cell reaction, thus increasing the metal corrosion rate. In cooling loops closed to air, corrosion inhibitors must be added and their concentration routinely maintained over the life of the system.

Microbiologically induced corrosion (MIC) (corrosion due to bacteria, fungi, and algae): Carbon steels, stainless steels, and alloys of copper and aluminum may suffer microbiologically induced corrosion. This is especially true if the water in a piping system is stagnant and has a pH from 4 to 9 in the temperature range 50°F to 122°F (10°C to 50°C). Even if there is no recorded incident of MIC in computer closed-loop cooling systems, precautions must be taken to avoid bacteria in the water.

Slime and deposit formations are a characteristic of MIC. Slime consists of accumulated microorganisms and their secretions. Once MIC has begun, biocide treatment may not be effective because organisms sheltered beneath the deposits may be out of reach of the injected biocide.
It is best to assemble the cooling loop hardware with minimal bacteria contamination and to treat the water with a suitable biocide the first time the system is filled with water, followed by biocide injection well before the bacteria content reaches 1000 colony-forming units per milliliter (CFU/mL).

Bacteria can greatly increase the risk of pitting. Pitting can occur at weld joints and high-stress locations. Aluminum corrosion can be accelerated by microorganisms in neutral-pH water. Copper, a known toxin to bacteria, can be attacked by some types of bacteria with a high tolerance for cupric ions. Aerobic-bacteria-induced slime formations on stainless steels can be initiation sites for pitting corrosion. MIC on stainless steels often occurs at weldments, directly on the weld metal, or in the heat-affected zones on either side of the weld.

Note: Suspended solids and turbidity can be an indication that corrosion products and other contaminants are collecting in the system. Excessive turbidity may indicate corrosion, removal of old corrosion products by a chemical treatment program, or contamination of the loop by another water source. Suspended solids at high velocity can abrade equipment. Settled suspended matter of all types can contribute to pitting corrosion (deposit attack). Similarly, ions in the water may also cause these same issues. Some examples are:

The presence of copper may be an indication of increased copper corrosion and the need for a higher level of copper corrosion inhibitor.

Excessive iron is an indication that corrosion has increased, existing corrosion products have been released by chemical treatment, piping has been added to the secondary loop, or the iron content has increased in the makeup water.

The presence of manganese is also a good indicator of corrosion if its concentration is greater than 0.1 ppm.
Where water-softening equipment is deployed, a total hardness of 10 ppm or greater indicates that hardness is bypassing the softener, that the softener regeneration is improper, or that some contamination from another system is present, such as from a cooling tower or city water.

The presence of sulfates is often an indication of a process or water tower leak into the TCS loop. High sulfates contribute to increased corrosion because of their high conductivity.

5.1.2.5 Wetted Material Requirements. The FWS loop of the CDU permits the following material set. The chemicals that are added to the system must be compatible with all of the loop materials:

Copper Alloys: 122, 220, 230, 314, 360, 377, 521, 706, 836, 952

Polymer/Elastomer:
Acrylonitrile butadiene rubber (NBR)
Ethylene propylene diene monomer (EPDM)
Polyester sealant (anaerobic)
Polytetrafluoroethylene (PTFE)
Polypropylene
Polyethylene

Solder/Braze: Soldering is not recommended because solder joint reliability is poor due to the relatively high porosity in solder joints. Brazing is the recommended method for joining water-carrying copper hardware. Neither brazing nor soldering should be used for joining steels or stainless steels. Acceptable solder/braze alloys include the following:

Solder alloy: Lead-free alloys containing copper, silver, and tin.
Solder flux: A flux suitable for lead-free soldering should be used. A post-soldering cleaning/removal of all flux residue must be performed prior to filling the system for operation.
Braze filler metal: BCuP or BAg alloys.
Braze flux: AWS Type 3A. A post-braze cleaning/removal of all flux residue must be done.

Stainless Steels: Low-carbon 300 series stainless steels are preferred.
Regular 300 series stainless steels may be used in the absence of high-temperature treatment (i.e., welding). Heat-treated 400 series stainless steels may be used for high mechanical stress applications.

Carbon Steels: Carbon steels may be used, provided that steel-specific corrosion inhibitor(s) are added to the system and proper inhibitor concentration is maintained.

5.1.3 Piping Considerations

The placement of the piping for connecting the facility liquid cooling system to the liquid cooling system of the rack is an important consideration in the total design of the data center. This should be considered early in the design phase of a new data center but can be retrofitted into existing data centers. Both are taken into consideration in the layouts proposed in order to minimize the cost of initial construction or of upgrades to existing data centers. Care must be taken to avoid placement of any cooling piping that may interfere with or reduce airflow to the datacom equipment.

Figure 5.5 depicts one option for installing water cooling into a data center and distributing it to liquid-cooled electronic racks. In this case the CDUs are being fed by facility water piping that resides on the perimeter of the data center. These CDUs are also located near the perimeter of the data center. Location of the CDUs near the perimeter would concentrate any leaks from the FWS loop in this area and permit leak detection and containment efforts to focus on this area. By locating the CDUs in this area, the impact on the air distribution that may be required for the racks is minimized.
Hose and/or piping distribution to the racks from the CDUs can be laid out below the raised floor and below the racks parallel to the rows (of racks). Also, any valves, strainers, or instrumentation in the piping may be easily accessed for operation and maintenance purposes, thereby eliminating the risk of accidentally unplugging or harming any communication cables near the racks of equipment.

Figure 5.5 Location of CDU units in data center—Option 1.

As an alternative to CDU units that provide conditioned water at the proper temperature and quality to the electronic racks, these units could be just distribution units that distribute water from the chilled- or process-water lines located at the perimeter of the data center as shown in Figure 5.5. In this case, no heat exchange occurs in the distribution unit.

For installations with multiple systems on a closed loop, the installation should include an air-removal device (e.g., expansion tank) to facilitate the removal of air from the facility water lines. Regardless of whether an FWS loop has been provided with an air-removal device or not, the facility water lines should be vented when bringing a new system online. Pressure on the facility water lines must not exceed 125 psi (860 kPa). Facility water connections must be accessible.

The volume of a water-based coolant in piping increases as the temperature of the coolant increases. If nothing is done to compensate for this change in volume, the pressure in a closed system will increase. This change in pressure may be more pronounced if the temperature of the coolant in the cooling system experiences a large change throughout the year.
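The magnitude of this volume change can be estimated from the coolant's volumetric expansion coefficient, ΔV = V·β·ΔT. A minimal sketch follows; the coefficient used is an approximate value for water near typical loop temperatures, and the loop volume and temperature swing are illustrative.

```python
# Hypothetical sketch: estimate the coolant volume change in a closed
# loop as it warms, per dV = V * beta * dT. BETA_WATER is an approximate
# volumetric expansion coefficient for water near 30°C.

BETA_WATER = 0.0003   # volumetric expansion coefficient, 1/K (approx.)

def expansion_litres(loop_volume_l: float, delta_t_c: float) -> float:
    """Extra volume, in litres, that the expansion device must absorb."""
    return loop_volume_l * BETA_WATER * delta_t_c

# e.g., a 2000 L loop warming by 15°C over the year:
extra = expansion_litres(2000.0, 15.0)   # 9.0 L to be absorbed
```

Even a modest seasonal swing therefore produces litres of displaced coolant, which is why an expansion compensation device is needed to keep a closed loop within its pressure limits.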
An expansion compensation system may be included in the design of the piping system to help stabilize the pressure in the system.

A slight modification of the configuration shown in Figure 5.5 is shown in Figure 5.6. In this case, the CDU units are located against the outer wall of the data center to provide increased control of leak monitoring and detection. In addition, this option may provide improved piping connections between the facility water system and the CDUs. As stated for the previous option, the CDUs may be considered to be just liquid distribution units where no heat exchange occurs.

Figure 5.6 Location of CDU units in data center—Option 2.

5.1.3.1 Facility Water Connections to Datacom Equipment. Datacom equipment racks can be connected to the facility chilled-water systems by either a hard pipe fitting or a quick disconnect attached to OEM flexible hoses. The quick disconnect method is very popular among datacom equipment manufacturers. Each method has its own advantages and disadvantages, which are discussed further below.

Using the hard pipe method, connections between the facility water system and the datacom equipment rack can be flanged, threaded (screwed), or soldered, depending on the pipe materials used on both sides of the interface. However, fitting and pipe material must be compatible. The type of pipe material and hard pipe connections will vary by the end user or customer. The end users will have to clarify their requirements or standards to the OEM and design engineer for a successful project.

Most datacom equipment manufacturers requiring connection to the FWS loops or a CDU unit generally provide flexible hoses containing a poppet-style fluid coupling for the facilities connection.
A poppet-style quick disconnect is an "industrial interchange" fluid coupler conforming to ISO 7241-1 Series B standards. Brass or stainless steel couplings are most commonly used today and must be compatible with the connecting pipe material. If rack loads and flows are excessive, it is recommended that a duplicate set of supply and return lines or hoses be deployed to enhance the fluid delivery capacity of the rack. The design engineer will have to determine whether this is necessary during the design. One of the main disadvantages of the quick disconnect is that it has a very large pressure drop or loss associated with it. This pressure loss must be accounted for in all pipe sizing and pump selection procedures. The design engineer must consult with the coupling OEM for exact pressure losses for the specific project or system.

Finally, the interface must be properly insulated to prevent any condensation formation. Insulation material should be the same before and after the interface. The supply and return piping before and after the interface should be properly labeled. This is critical to prevent human error by accidentally switching the two lines. When using quick disconnects, it is suggested to mix the sockets and plugs between supply and return lines to key them against cross-connection at rack installation.

Other mechanical specialties should be included in the facility water piping prior to or upstream of the interface. Isolation valves such as ball, gate, or butterfly valves should be installed upstream of the interface to isolate the connection from the building facility water system to perform maintenance on the disconnect or to replace it with one of a similar size or larger should the rack load or functionality change. In addition, a strainer should be installed after the isolation valve to catch any particles in the piping system that could damage any of the rack equipment or clog the heat transfer devices.
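Accounting for the quick-disconnect loss in pipe sizing and pump selection amounts to adding the coupling losses to the branch total. The sketch below is illustrative only: the numbers are placeholders, and real coupling losses at the design flow must come from the coupling OEM's data.

```python
# Hypothetical sketch: total pressure drop for one rack branch, summing
# hard-piping loss with the quick-disconnect losses the text warns must
# be included. All numeric values are illustrative placeholders.

def branch_pressure_drop_kpa(pipe_loss_kpa: float,
                             qd_loss_kpa: float,
                             n_disconnects: int) -> float:
    """Total pressure drop the pump must overcome for one rack branch."""
    return pipe_loss_kpa + n_disconnects * qd_loss_kpa

# e.g., 40 kPa of piping loss plus two couplings (supply and return)
# at 15 kPa each:
total = branch_pressure_drop_kpa(40.0, 15.0, 2)   # 70.0 kPa
```

In this illustrative case the couplings nearly double the branch loss, which shows why omitting them from the pump selection can leave a system short of flow.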
A balancing valve should be provided after the isolation valve to adjust the flow in the building facility water system and help push water to other branches within the piping circuit. Finally, any monitoring or controlling instrumentation, such as facility water pressure, temperature, and flow sensors, should be installed at the interface to ensure adequate water conditions exist for proper rack heat exchanger operation.

5.1.4 Electrical Considerations

The electrical power sources and connections serving the CDU units (Figures 5.1 and 5.2) generally must be highly reliable in nature. Liquid-cooled computing applications generally require continuous cooling to ensure continuous operation of the computers. Liquid-cooled cabinets (with built-in heat exchangers) typically require continuous operation of the fans and their control system within the cabinet and the water-circulating pumps that provide the cooling water to the cabinet heat exchanger. The fans, pumps, and their control system are typically powered through a built-in power supply with provision for input from two sources. From this power supply, internal transformers and wiring split the circuits as necessary to support the multiple fans and the control system provided with the unit. In the event that one of the power supplies becomes unavailable, internal switching allows all internal electrical components and systems to be fed from the remaining active power supply.

Typically, at least one of the feeds to the cabinet will be backed up by an uninterruptible power supply (UPS). The UPS serves to provide continuous power to the cabinet when utility power is unavailable.
Liquid-cooled cabinets are likely to be powered from the same UPS system that serves the computers themselves. It is expected that in facilities requiring the highest level of availability, the cabinet power supplies will be supported by UPS and that maintenance bypasses will be provided to ensure that both cabinet supplies are available during maintenance of one of the power paths. The maintenance bypass helps ensure that the internal transfer switch is not relied on for continued operation during maintenance events. Of course, in all cases, the manufacturer's guidelines for the connections should be followed.

Facility-water-cooled cabinets generally rely on chilled water provided by the building infrastructure. Regardless of whether the chilling source is a central building chiller or a dedicated chiller, continuous facility water supply will likely be necessary to ensure continued cooling. Electrically, to ensure water supply to the cabinet heat exchanger, the circulating pumps and their control systems should also be powered by a UPS. This could be either a separate UPS dedicated to the mechanical plant or the same UPS that powers the computers, depending on the electrical systems to be provided.

Modular cooling CDU units (Figure 5.2) are usually provided by the computer manufacturers and matched to the computers that they are expected to cool. As with the liquid-cooled cabinets, these units must be installed and wired in accordance with the manufacturers' instructions. These units typically contain the heat exchanger, pumps, and controls necessary to support the associated computer. Without continuous operation of this equipment, the associated computer cannot be expected to operate. Similar to the computers themselves, the modular cooling units are also fitted with built-in power supplies with provision to be powered from two separate sources.
From an electrical standpoint, the power feeds to the modular cooling units should have the same reliability as the power feeds to the computer. It is expected that both the power supplies to the modular cooling unit and those to the computer will be backed up by a UPS.

5.1.5 Monitoring

Depending on the nature and configuration of the concept and design of each application, there could be numerous variations of monitoring schemes and parameters to be monitored in a liquid-cooled system. The primary goal is to measure the critical parameters of the liquid systems supporting cooling of the computers (flow, pressure, temperature, etc.) and communicate those values to those responsible for the operation of the equipment. Before considering the specific parameters and associated nuances, let us first consider some high-level communication standards.

Critical facilities typically use sophisticated building automation systems (BAS), which are sometimes also referred to as energy management systems or facility monitoring systems. These systems are undergoing an industry-wide transformation as they migrate away from a strictly proprietary control language to a web-based communication strategy. Likewise, the initial attempts to develop some open protocol standards by way of BACnet, LonWorks, and manufacturer-specific integrators (gateways) with vendor lookup tables (VLTs) are now being challenged to go a step further and be capable of communicating with standard protocols, such as XML, SOAP, etc.
This trend and market-driven effort, coupled with the increasing realization that critical infrastructure and IT operations have become "close-coupled" in "real time," has resulted in a collaborative push to create a mutual standard for communicating data between the traditional silos of IT and facilities services.

A leading organization in this effort is called Open Building Information Xchange, or oBIX (previously called the CABA XML/Web Services Guideline Committee), which is a technical committee at the Organization for the Advancement of Structured Information Standards (OASIS). Manufacturers of liquid-cooled hardware, racks, and infrastructure equipment are encouraged to keep abreast of these standards and open protocols and ensure their internal and integral monitoring components and/or systems can communicate easily with both the Enterprise Network (IT) and the critical infrastructure monitoring systems (facilities services), which will both need access to this critical information.

Other recommendations to be considered are that the manufacturers must provide clearly documented specifications that define acceptable ranges, alert values, alarm values, and (if present) any automatic shutdown values and show where, if these values are exceeded, actions are required to protect the hardware and associated applications. Consideration must also be given to the accuracy and maintainability of these monitoring systems and components. For example, thermistors may be more reliable and hold calibration better than thermocouples for the measurement of critical temperatures. Sensors should be housed in drywells if possible to allow removal and replacement without a loss of liquid inventory or an outage.

Where possible, "plug and play" designs should be used that allow a vendor's product to be easily set up and installed into a pre-existing rack or monitoring system backbone. Computer and server manufacturers should take advantage of
the existing network and communications in place for their hardware to communicate with the Enterprise as a means to convey the critical liquid cooling and other environmentally related parameters to the IT departments, consistent with other application-specific data. They should also include easy means for infrastructure-specific systems (BAS) to connect directly to their hardware to retrieve the same environmental data directly.

The incorporation of monitoring standards is being blended, with a good example being the introduction of electrical "smart" power strips. The need to monitor and balance electrical load, especially between redundant power paths, prompted new power strip products capable of not only monitoring the amount of power supplied through each outlet but also controlling each outlet's ON/OFF condition. These intelligent devices can communicate through the existing enterprise LAN/network, thereby eliminating the need to install a separate BAS LAN to each associated cabinet. Many "smart" power strips also offer the ability to add environmental sensors to communicate internal cabinet data along with breaker data. By providing a single communication interface between the Enterprise LAN and the BAS LAN, these data can also be linked to the building management system for annunciation of out-of-normal thresholds prior to thermal shutdown conditions. A 24-hour staff is usually necessary to allow adequate response time in these circumstances. If a full-time staff is not present, an added feature could be to allow a remote operator to shut down affected equipment in an orderly process.
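The layered thresholds this section asks manufacturers to document (acceptable range, alert, alarm, and automatic shutdown) can be sketched as a simple classifier. The numeric limits below are illustrative placeholders, not published values; actual limits must come from the equipment manufacturer's documentation.

```python
# Hypothetical sketch of layered monitoring thresholds for a coolant
# supply temperature. The limit values are illustrative placeholders.

def classify(reading_c: float) -> str:
    """Map a coolant supply temperature to a monitoring state."""
    if reading_c >= 45.0:    # automatic shutdown value
        return "shutdown"
    if reading_c >= 40.0:    # alarm value
        return "alarm"
    if reading_c >= 35.0:    # alert value
        return "alert"
    return "normal"          # within the acceptable range

states = [classify(t) for t in (28.0, 36.5, 41.0, 47.0)]
# → ["normal", "alert", "alarm", "shutdown"]
```

In practice the alert and alarm states would be annunciated through the BAS or enterprise network described above, with the shutdown value reserved for automatic protection of the hardware.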
5.1.6 Reliability and Availability

Reliability and availability for liquid-cooled racks and electronics are two distinctly different items. By one definition, the reliability of a system or subsystem refers to the probability that the system or subsystem will remain operational for a given period of time. Typical metrics for reliability include mean time between failures (MTBF) and L10 life (given as the length of time within which 10% of the devices will fail). Availability generally refers to the length of time during which equipment is available for work. One measure of availability is uptime.

This section provides information on the key subsystems that are to be considered when designing for high reliability and availability. Information is also provided regarding a number of events that can impact reliability and availability.

5.1.6.1 Subsystems Affecting Reliability and Availability. Chapter 4 provides a detailed description of the various implementations of liquid-cooled racks and electronics. The reliability and availability of these are highly dependent on key subsystems and sensors. Following is a list of the key subsystems and sensors to be considered, along with a brief discussion.

Pumps—These include water, refrigerant, and dielectric pumps for the various liquid cooling implementations. Depending on the configuration, pumps can be present in all of the coolant loops (CWS, FWS, TCS, and DECS). Redundancy can be provided in each loop, but this is more difficult and less cost-effective at the individual server level (particularly 1U servers and single blades). Multiple pumps running in parallel are typically more efficient than a single pump and can provide improved reliability.
Fans and blowers—These are used in many of the implementations of liquid cooling. Examples include (1) servers where only the processors are liquid-cooled and the balance of the server is air-cooled or (2) enclosed racks where the electronics are air-cooled and the heat is rejected to an air-to-liquid heat exchanger. Fans and blowers are especially critical for enclosed racks, although such systems typically offer N+1 or better redundancy.

Controllers—These can be server-, rack-, or facility-based. The controllers control or interface with rack-based pumps, fans, and power distribution units, as well as the same devices at the facility level. Both hardware and firmware are of concern with regard to the controllers.

Leak detection systems—These may use rope or other types of sensors to detect water (most critical), refrigerant, or dielectric leaks at the server, rack, and facility levels. Such systems can interface with rack- or facility-based controllers and can be configured to deactivate various levels of the datacom facility in the event of a leak. They can also send out various types of alarms in the event of a leak. Such systems may also be implemented with water shutoff valves that provide isolation of the racks.

Heat exchangers—These can be found at the server, rack, and facility levels. Fouling is of particular concern, and the type and quality of fouling varies depending on the type of liquid that is passing through the heat exchanger. Pressure drop through these devices should be minimized to reduce the pumping power required to pump the liquid through them. The heat exchanger should be optimized for minimum approach temperature. Capital and operating costs should also be considered when making a heat exchanger selection.

Connectors and fittings—These can be found at the server, rack, and facility levels. High-quality fittings are necessary to ensure leak-free joints. Further details are provided in Section 5.1.2.1.
Tubing—Tubing for liquid-cooled systems is found at the server, rack/cabinet, and facility levels. High-quality tubing is necessary to ensure leak-free operation.

Sensors—Temperature, pressure, flow, and liquid level sensors are among the key sensors in liquid-cooled systems.

5.1.6.2 Issues Impacting Reliability and Availability. A number of events can lead to reduced system reliability or reduced availability. While there are distinct differences between reliability and availability, the two are closely related. For example, frequent power outages that are accompanied by voltage spikes result in a lack of availability during the outage, as well as equipment (electrical) stress that may eventually lead to equipment failure (reduced reliability). Following is a listing and description of some key items that affect reliability and availability.

This section describes the impact of several water-related events on system reliability and availability (e.g., the disruption of facility water to a rack). For the sake of brevity, only facility water is discussed. The reader should be aware that similar events involving other fluids, such as refrigerants and dielectrics, will similarly impact the reliability and availability of liquid-cooled systems. For example, the disruption of a refrigerant to a rack will have an impact similar to the disruption of facility water to a rack. The reference to facility water only is not intended to suggest that this is the only or the recommended method of implementation for liquid-cooled systems.

Power outages—For facilities with neither UPS nor backup power generation capability, a power outage will result in the unavailability of the datacom equipment.
This lack of power for the electronics will also result in a lack of power for pumps and blowers at the facility, rack, and server levels. Where UPS or backup power generation is available, temporary (UPS) or reduced power will be available to allow for graceful equipment shutdown or reduced operation.

Water flow—This can encompass partial or complete disruption of water flow or variations in water temperature. These changes will impact the capacity to remove heat from the electronics and, in turn, will impact the availability of the rack.

Water hammer—As liquid-cooled solutions are propagated throughout datacom centers, increasing numbers of rack-dedicated water shutoff valves (see "Leak detection systems" in Section 5.1.6.1) will be deployed. Improper operation of any of these can lead to water hammer throughout the facility, which, in turn, can lead to damage at the server, rack, and facility levels. Other sources of water hammer (e.g., pump cycling) should also be considered.

Water flow balancing—Large systems consisting of hundreds of liquid-cooled racks will have a more complex flow network in the datacom facility. Improperly designed rack-level cooling loops can lead to improper water flow distribution to the racks.

Water quality—Water quality is impacted by a number of items, including the presence of suspended solids, bacterial contamination, and incorrect water pH level. The primary subsystems impacted are:

Heat exchangers—Improperly maintained water quality will lead to heat exchanger fouling (see also Section 5.1.1.4 for additional discussion of water quality issues), which, in turn, will lead to reduced cooling capacity and reduced availability. In many cases, heat exchangers can provide a single point of failure, and designs that minimize the impact of these failures should be considered.
Pipes—Improperly maintained water quality will lead to fouling of the supply and return piping. Pipe fouling could manifest itself in reduced flow cross sections and an undesirable increase in pressure drops. In many cases, piping or hose can provide a single point of failure, and designs that minimize the impact of these failures should be considered.

Connectors and fittings—Improperly maintained water quality will lead to fouling of connectors and fittings. This may lead to challenges during repair and maintenance or leaks between servicings.

Seismic events—Seismic events will lead to compromised joints if there is inadequate stress relief at such joints. As an example, the joints between a facility water loop and a rack could be damaged during a seismic event if flexible piping is not used. Facilities using seismic isolators with movable rack foundations should ensure a sufficient service loop of flexible tubing to avoid stressing the cooling connections during seismic events. The facility cooling system's piping distribution system should also be designed and installed to accommodate seismic activity.

Operator error—As with any type of cooling system, operator error should always be considered. With a liquid-cooled system, inadvertent disruption of the water supply to a rack will lead to a shutdown (and possible damage) of the rack. Facility design should accommodate such a scenario. Interlocks or automated controls should be considered, along with checklists and procedures.

5.1.6.3 Recommendations. Liquid cooling represents a new paradigm for datacom center operators. As with any new approach, education will play a large role in the successful implementation of liquid cooling.
In general, most of the rules for current air-cooled implementations apply. Datacom center operators should have cooling contingency plans; implement cooling system redundancy; deploy critical subsystems, such as pumps, that have high reliability and availability; and place subsystems such as pumps (water, refrigerant, dielectric) and rack-based cooling systems on UPS.

5.1.7 Commissioning

Commissioning at the interface is an essential part of the project. Since the interface is the boundary between the facility's cooling systems and the computer cooling systems (FWS shown in Figure 1.1), it is imperative that all devices function properly and that controls work adequately, predictably, and consistently under both normal and fault conditions. In addition, it is vitally important that the facility's cooling system provide the proper flows, pressures, and temperatures required by the computer/rack manufacturer. Commissioning starts very early in the design phase of the project. All requirements for the datacom equipment should be provided to the design engineer, who then incorporates these requirements into the design. During commissioning, tests and measurements should be performed on the various components, as well as the integrated system, to ensure they function properly and to ensure the datacom equipment, environment, and supporting utilities meet specifications and respond to anomalies as expected. Depending on the products and configurations, some typical devices or components found at the interface may include valves, sensors, strainers, and quick disconnect couplings.
The following list provides examples of tests and/or procedures that could be included in commissioning liquid-cooled rack deployments:

Valves provide proper shutoff and closure.

Actuators fully open and close the valve and are accurately monitored by the BAS (including that the automated valve is mapped to the proper location on any BAS graphic).

Strainers can be properly drained and screens removed for cleaning.

BAS sensors provide accurate readings.

Flowmeters are installed according to manufacturers' recommendations (e.g., in a straight length of pipe with no other fittings or devices around it).

"Dripless" quick disconnects facilitate installation and removal of new cooling devices.

System parameters are within the expected range as predicted by the engineered design (e.g., pressure drops, flow rates, water quality, inlet and return temperatures).

Prefunctional and functional testing is carried out in accordance with ASHRAE Guideline 0-2013, The Commissioning Process.

A written baseline depicting the original conditions and values at the time of startup is established as historical documentation.

In summary, commissioning is important to ensure that all aspects of the deployment are compliant with the original design intent and specifications and that the systems meet all aspects of the Owner's Program Requirements. Commissioning should follow a formal written plan and should start during the project's design phase. The building owner should consider hiring a commissioning agent to prepare a detailed list of tests and services to be included in the commissioning effort. For more on the commissioning process, consult ASHRAE Guideline 0-2013, The Commissioning Process (2013b).
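The comparison of measured system parameters against the engineered design ranges lends itself to a simple automated check during functional testing. The following sketch is illustrative only; the parameter names and limits are hypothetical stand-ins for values that would come from the project's design documents, not from this guide.

```python
# Hypothetical design ranges at the facility/rack cooling interface.
# Real values come from the engineered design, not from this guide.
DESIGN_RANGES = {
    "supply_temp_c":     (10.0, 18.0),
    "return_temp_c":     (16.0, 28.0),
    "flow_rate_lpm":     (40.0, 60.0),
    "pressure_drop_kpa": (0.0, 150.0),
}

def check_commissioning(measured):
    """Return (name, value, low, high) tuples for missing or out-of-range readings."""
    failures = []
    for name, (low, high) in DESIGN_RANGES.items():
        value = measured.get(name)
        if value is None or not (low <= value <= high):
            failures.append((name, value, low, high))
    return failures

readings = {"supply_temp_c": 14.2, "return_temp_c": 21.5,
            "flow_rate_lpm": 65.0, "pressure_drop_kpa": 95.0}
# In this example only the flow rate reading falls outside its design range.
for name, value, low, high in check_commissioning(readings):
    print(f"{name}: {value} outside design range [{low}, {high}]")
```

A report generated this way can also serve as the written baseline of conditions at startup described above.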
5.2 NON-WATER FACILITY SYSTEMS

Facility water is not the only choice to provide cooling to a data center. Figures 5.7 and 5.8 show alternative solutions involving direct expansion (DX) equipment with a remote air-cooled condenser. The benefits and design suggestions are discussed in this section. Further technical information can be found in Chapter 38 of the 2012 ASHRAE Handbook—HVAC Systems and Equipment.

Figure 5.7 CDU (DX unit) supplying coolant to rack or cabinet.

Figure 5.8 Modular CDU (DX unit) within rack or cabinet.

5.2.1 Air-Cooled Condensers

Air-cooled condensers should be placed in an area that will ensure an adequate air supply with little to no recirculation of hot discharge air. In addition, they should be located in a clean-air area that is free of dirt, hot air, steam, and fume exhausts. Restricted airflow through the condenser will reduce the operating efficiency of the unit and can result in high head pressure and loss of cooling. As such, units should be located no closer than 5 ft (1.52 m) from a wall, obstruction, or adjacent unit. Always refer to the manufacturers' recommendations for maintenance accessibility and location. A direct expansion system may be less expensive to install than a facility water system, depending on the system size, capacity, and growth potential. In addition, a DX system does not require domestic water storage for reliable operation, since cooling towers are not used to condense the working fluid. DX systems can be used to cool an entire data center or provide localized cooling to "hot spots" within the data center.
5.2.2 Refrigerant Piping

The efficiency and reliability of a direct expansion system can hinge on the piping that interconnects the refrigerant-condensing and air-handling sides of the system. Operational difficulties will be readily apparent if the interconnecting piping is not designed and installed properly. The following guidelines will help eliminate or minimize operational difficulties with interconnecting piping:

Design a simple and direct layout that reduces the amount of system refrigerant. Route the suction line from the evaporator to the compressor by the shortest path.

Limit the overall line length, including the vertical suction and liquid risers. Enough subcooling may be lost as refrigerant travels up the liquid riser to cause flashing.

Use different pipe sizes for horizontal and vertical lines to make it easier to match line pressure drop and refrigerant velocity to suction-line requirements.

Properly size the suction line to ensure proper oil entrainment and avoid sound transmission.

Riser traps are unnecessary. If the riser is properly sized to maintain velocity, adding a trap only increases the suction-line pressure drop.

Double suction risers may be unnecessary, depending on the type of compressor selected.

Eliminate any foreign matter in the system and install suction filters.

Provide a 1 in. (2.5 cm) pitch toward the evaporator for every 10 ft (3.05 m) of run to prevent any refrigerant that condenses in the suction line from flowing to the compressor when the unit is offline.

Use insulation on the suction lines if moisture condensation or dripping causes a problem.

Select the smallest practical liquid line size for the application. Limiting the refrigerant charge improves compressor reliability.
The liquid line should include a replaceable filter drier to permit proper system cleanup and removal. The unit should be inspected or changed whenever it is serviced or opened up.

A moisture-indicating sight glass should be added to permit a visual inspection of the liquid column for bubbles.

Solenoid valves are required to prevent the liquid refrigerant from filling the evaporator when the compressor stops and will prevent slugging when the compressor restarts. They also prevent siphoning, which could allow an elevated column of liquid to overcome the gas trap and flow back into the compressor.

Hot gas bypass valves should be considered for scalability. The use of hot gas bypass valves will allow minimum run times and prevent short-cycling when the initial equipment does not develop sufficient load to allow the compressors to run.

5.3 LIQUID COOLING DEPLOYMENTS IN NEBS COMPLIANT SPACE

Figure 5.9 Liquid-cooling systems/loops within a NEBS compliant data center.

The use of liquid close-coupled cooling systems, as illustrated in Figure 5.9, has extended from the IT environment to include active deployment in network equipment-building system (NEBS) spaces. The Alliance for Telecommunications Industry Solutions (ATIS) Sustainability in Telecom: Energy and Protection Committee (STEP) Network Physical Protection (NPP) subcommittee is working on a standards project (Issue 113: Distributed Refrigerant Cooling Infrastructure) to standardize the components used in pumped liquid refrigerant close-coupled systems. As communications and data transport systems evolve, heat rejection requirements have risen considerably.
Common heat signatures of 400 to 800 W per frame are now being replaced with cabinets supporting 20 kW and more. While this range is relatively low compared to the common IT equipment spaces described in Datacom Equipment Power Trends and Cooling Applications, 2nd Edition (ASHRAE 2012c), existing office-cooling infrastructures are often unable to support the increased demand. Liquid cooling systems provide a bridge system supplementing existing cooling infrastructure, as well as providing primary cooling in both greenfield and brownfield applications.

Temperature and humidity ranges for NEBS spaces may differ from those outlined in Thermal Guidelines for Data Processing Environments, 3rd Edition (ASHRAE 2012c). The specific NEBS ranges are specified in the Telcordia standard GR-63-CORE, NEBS™ Requirements: Physical Protection (Telcordia 2012). The facility supply water temperature identified in Table 5.1 should be reviewed and adjusted as needed to assure compliance with the GR-63 equipment inlet temperature parameters.

While the use of distributed R-134a refrigerant systems is supportive of NEBS environments, deployment of these systems is not limited specifically to these environments. The use of distributed R-134a refrigerant or other dielectrics is suitable for deployment in IT and other equipment spaces.

5.3.1 NEBS Space Similarities and Differences

The facility water system or infrastructure supporting the CDU (Figure 5.9) is essentially the same for NEBS space deployments as it is for datacom equipment. Systems often use economizers to assist in mitigating energy use.

NEBS space deployments do have very strict restrictions on the distribution of water within the active equipment space. While water-side connections are permitted, they are typically connected outside of the space to cooling units (e.g., computer room air handlers) or are restricted to the perimeter of the supported space.
This limitation eliminates the risk of equipment contamination or failure due to water solutions leaking from distribution piping, connectors, or hoses.

The typical NEBS space differs in basic construction in that it uses almost exclusively slab floors with overhead cable racking and piping. This overhead piping arrangement adds to the risk of equipment damage due to water solution leakage. Liquids other than water provide an amenable solution that supports enhanced close-coupled cooling while meeting water distribution restrictions. The typical use of R-134a refrigerant liquid distribution systems provides an effective heat rejection transport medium while ensuring that, if there is a leak, equipment will not be damaged by the escape of the inert gas. Other refrigerants or dielectrics may also be effective in supporting the unique nature of NEBS spaces.

The separation of the water-based cooling fed by the CDU from the refrigerant-based distribution to the close-coupled cooling units also affords the option of using a DX compressor-based cooling feed in place of the water-based cooling feed.

5.3.2 CDU Use in NEBS Space

The use of refrigerant-based liquid cooling systems in NEBS spaces requires the deployment of a CDU. The CDU provides a cost-effective thermal transfer point between the facility water supply systems and the refrigerant distribution infrastructure, including close-coupled cooling units.

The CDU facility water connections are subject to the same limitations applied to a typical computer room air handler. The CDU unit is typically placed outside of the equipment area or along the perimeter of the supported space.
Full leak containment is typically deployed as part of the common infrastructure.

Multiple CDUs may be placed to provide adequate capacity and to comply with refrigerant distribution effective line length requirements. These deployments are often arranged in groups of cabinets with common power, space, and cooling capacities.

5.3.3 Refrigerant Distribution Infrastructure

Refrigerant distribution infrastructure is used to supply cooling units over, near, and within equipment racks. The infrastructure includes supply and return piping, port interfaces, and hard copper line interconnects or flexible hoses for system connectivity. Supply and return lines may be placed under a raised floor or, more typically, above equipment racks.

5.3.4 Connections

Refrigerant distribution systems should use rigid copper piping compliant with ASTM (American Society for Testing and Materials) Type L/EN0157, Type Y, or Type ACR. Connections are required to be brazed (not soldered) to assure reliability. Type ACR is recommended for full interoperability between manufacturers of distributed refrigerant systems. Connections to the facility water system follow the same guidelines as outlined in Section 5.1.3.1, with the option to connect via a direct drop of rigid copper piping or via a flexible hose and quick disconnect arrangement.

5.3.5 Condensation Consideration

The CDU supporting the refrigerant distribution system actively maintains a defined temperature separation between the equipment area dew point and the distributed refrigerant temperature to ensure that condensation does not form on the distribution infrastructure. While this design should be sufficient, it is recommended that the distribution infrastructure be covered with a closed-cell elastomeric covering (i.e., insulation) to prevent condensation from dripping onto equipment below the piping. The covering will also provide the piping with a measure of physical protection from impact or abrasion.

5.3.6 Close-Coupled Cooling Units

Close-coupled cooling units are cooling units that are located very close to the datacom equipment, much closer than traditional units such as CRACs or CRAHs. A variety of close-coupled units are available in the market to meet specific cooling considerations. They may be deployed above equipment hot aisles, adjacent to equipment (in-row), directly attached (spot cooling), behind the equipment (rear door heat exchangers), or integrated directly with equipment. Different units may be integrated into a single system.

6 Liquid Cooling Infrastructure Requirements for Technology Cooling Systems

The interfaces that occur between the rack-level cooling hardware deployed to enhance rack-level thermal management or extend data center thermal management capacity are described here. The position of this interface is shown in Figure 6.1.

6.1 WATER-BASED TECHNOLOGY COOLING SYSTEM

One example of this loop is displayed in Figure 6.1 (same as Figure 4.4); other examples are shown in Figures 4.5–4.14. Since these solutions that remove the heat load near or at the rack are designed by companies that control the details of the liquid and materials within this loop, only some broad guidelines can be given in this section.
Figure 6.1 Combination air- and liquid-cooled rack or cabinet with external CDU (same as Figure 4.5).

One implementation could have the electronics air-cooled, with the coolant removing a large percentage of the waste heat via a rear door heat exchanger or a heat exchanger located above the rack. Another implementation would include a totally enclosed rack that uses air as the working fluid and an air-to-liquid heat exchanger. Another would have the coolant passing through cold plates attached to processor modules within the rack. The CDU can be external to the datacom rack, as shown in Figure 6.1, or within the datacom rack, as shown in Figures 4.6 and 4.7.

6.1.1 Operational Requirements

In most cases the liquid that is supplied to the rack or cabinet will be above the room dew point. The ASHRAE book Thermal Guidelines for Data Processing Environments, 3rd Edition (ASHRAE 2012c) specifies a maximum room dew point for a Class A1 environment of 63°F (17°C). Two techniques can be used to maintain liquid supply temperatures above the dew point: either set the control point of the liquid above the Class A1 environmental specification of 63°F (17°C), or control the supply temperature such that it is adjusted a set amount above the measured room dew point.

The maximum allowable water pressure for the TCS loop should be 100 psig (690 kPa) or less.

6.1.2 Water Flow Rates

Water flow rates for the TCS loop are set by the manufacturer of the cooling equipment. Figure 6.2 shows the flow rates for given heat loads and given temperature differences. Temperature differences typically fall between 9°F and 18°F (5°C and 10°C).
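The curves in Figure 6.2 follow from the sensible heat equation q = ṁ·cp·ΔT. The sketch below, which assumes typical water properties near room temperature (the 20 kW load and 1.5 in. line size are illustrative values, not requirements of this guide), shows how a flow rate and the resulting pipe velocity can be estimated:

```python
import math

# Illustrative water properties near 20°C (assumed, not from this guide)
RHO_WATER = 998.0   # density, kg/m^3
CP_WATER = 4186.0   # specific heat, J/(kg*K)

def tcs_flow_rate_lpm(heat_load_w, delta_t_c):
    """Volumetric water flow (L/min) needed to carry heat_load_w at a delta_t_c rise."""
    mass_flow_kg_s = heat_load_w / (CP_WATER * delta_t_c)   # q = m_dot * cp * dT
    return mass_flow_kg_s / RHO_WATER * 1000.0 * 60.0

def pipe_velocity_ms(flow_lpm, inner_diameter_m):
    """Mean velocity (m/s) of that flow in a pipe of the given inner diameter."""
    area_m2 = math.pi * (inner_diameter_m / 2.0) ** 2
    return (flow_lpm / 1000.0 / 60.0) / area_m2

# Example: a 20 kW rack with a 9°F (5°C) water temperature rise
flow = tcs_flow_rate_lpm(20000.0, 5.0)     # about 57 L/min (roughly 15 gpm)
velocity = pipe_velocity_ms(flow, 0.038)   # in a 1.5 in. (38 mm) line
print(round(flow, 1), "L/min;", round(velocity, 2), "m/s")
```

For this example the computed velocity is well below the 1.8 m/s limit that Table 6.1 gives for 1.5 to 3 in. pipe.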
6.1.3 Velocity Considerations

The velocity of the water in the rack's liquid cooling loop piping must be controlled to ensure that mechanical integrity is maintained over the life of the system. Velocities that are too high can lead to erosion, vibration, and water hammer. Lower velocities lead to lower pressure drop and lower pumping power required to transport the liquid. Table 6.1 provides guidance on maximum chilled-water piping velocities. Velocity in any flexible tubing should be maintained below 5 ft/s (1.5 m/s).

6.1.4 Water Quality/Composition

The quality of cooling water in a technology cooling system (TCS) loop is critical for the performance and the longevity of the datacom equipment. Cooling water of poor quality can cause adverse effects in a water system, such as reduced cooling capacity, increased energy consumption, and premature equipment failure.

Table 6.2 gives the recommended specification for the TCS liquid coolant. If water is outside these ranges, it does not mean the system will have water-quality-driven issues. In fact, water well outside these ranges has been used successfully in cooling loops. The main issue in those loops is a much greater requirement for, and emphasis on, the water chemistry treatment program. Meeting the proposed water quality specifications will make the required water chemistry program much simpler.

6.1.4.1 Replacement Water Guidelines. Water in the TCS loop should be changed as needed. If the water exceeds the limits of the values in Table 6.2 or exceeds the limits of the established water chemical treatment plan, it should be replaced.
Generally a partial replacement will be sufficient to regain proper cleanliness levels.

Figure 6.2 Water flow rates for TCS loop and for constant heat load.

Table 6.1 Maximum Velocity Requirements

Pipe Size | Maximum Velocity (fps) | Maximum Velocity (m/s)
>3 in. (7.6 cm) | 7 | 2.1
1.5 to 3 in. (3.8 to 7.6 cm) | 6 | 1.8
<1 in. (<2.5 cm) | 5 | 1.5
All flexible tubing | 5 | 1.5

The TCS loop makeup water source is important in starting with a clean system and maintaining it. A system using the CWS loop as the feed source can be successful; however, as mentioned earlier, the water treatment program's efficacy is critical to the success of that loop. Typically, the TCS loop should be filled with soft, deionized, distilled, or reverse-osmosis product water.

6.1.4.2 Special Problems in a TCS Loop. Because there is no regular blowdown (sediment purge) from a TCS loop, strainers or side-stream filters may be needed to remove debris that exists in the system.

In any water loop, bacteria will develop in "dead legs" (unused runs of piping that have been isolated with valves from the rest of the system) where there is no water flow. Contamination may result when "dead legs" are reconnected to the system; biocides can help reduce the impact but cannot prevent this. Consider the operational use of the liquid system during design. If there are long runs of piping to electronics that may possibly be isolated for long periods of time (a week or more), it may be worth establishing a bypass or some type of minimum flow rate to avoid this potential problem.
Regardless of the nature of the water, any biocide or chemical treatment will break down over time in a dead leg, allowing a proliferation of bacteria in the loop. This problem can be exacerbated by underfloor piping systems, where the possibility of "out of sight, out of mind" operations can leave unused piping isolated for long periods of time. If a section of pipe is no longer needed, it may be better to remove it, or to isolate it and drain it, rather than leave it charged with water.

Table 6.2 Water Quality Specifications—TCS Cooling Loop

Parameter | Recommended Limits
pH | 7 to 9
Corrosion inhibitor | Required
Biocide | Required
Sulfides | <1 ppm
Sulfate | <10 ppm
Chloride | <5 ppm
Bacteria | <100 CFU/mL
Total hardness (as CaCO3) | <20 ppm
Conductivity | 0.20 to 20 micromho/cm
Total suspended solids | <3 ppm
Residue after evaporation | <50 ppm
Turbidity | <20 NTU (nephelometric)

The level of bacteria that may indicate system contamination is lower in the TCS loop than in the CWS loop. Although a cooling tower can operate effectively with 100,000 or more organisms per mL, you should take corrective action immediately when bacteria counts exceed 1,000 CFU/mL in the TCS loop.

6.1.4.3 Water Treatment. Before any new computer system is placed into operation, you should flush the loops thoroughly to remove as much suspended material and debris as possible. A chemical detergent cleaning is also desirable. It is important to ensure the detergent residue is rinsed away prior to filling the loop for operation.

To avoid loop problems later, you should seek the advice of a water treatment specialist early in the design stage of your system and diligently follow the program that the specialist recommends.
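The numeric limits in Table 6.2 lend themselves to a simple automated screen of water sample results. The sketch below encodes those limits; the field names are illustrative, and for simplicity the table's one-sided "<" limits are treated as inclusive upper bounds.

```python
# Numeric limits transcribed from Table 6.2 (TCS cooling loop). Each entry is
# (low, high); None means no bound on that side.
TCS_LIMITS = {
    "ph":                   (7.0, 9.0),
    "sulfides_ppm":         (None, 1.0),
    "sulfate_ppm":          (None, 10.0),
    "chloride_ppm":         (None, 5.0),
    "bacteria_cfu_per_ml":  (None, 100.0),
    "hardness_ppm_caco3":   (None, 20.0),
    "conductivity_umho_cm": (0.20, 20.0),
    "suspended_solids_ppm": (None, 3.0),
    "residue_ppm":          (None, 50.0),
    "turbidity_ntu":        (None, 20.0),
}

def out_of_spec(sample):
    """Return the names of measured parameters that violate the Table 6.2 limits."""
    bad = []
    for name, value in sample.items():
        low, high = TCS_LIMITS[name]
        if (low is not None and value < low) or (high is not None and value > high):
            bad.append(name)
    return bad

sample = {"ph": 8.2, "chloride_ppm": 6.5, "bacteria_cfu_per_ml": 40.0}
# In this sample only the chloride reading exceeds its limit.
print(out_of_spec(sample))
```

A screen like this flags candidates for partial water replacement or for review with the water treatment specialist; it does not replace a proper water chemistry program.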
6.1.5 Wetted Material Requirements

This section describes the recommended materials for use in supply lines, connectors, manifolds, pumps, and any other hardware that makes up the TCS loop at your location:

Copper
Brass with less than 15% zinc content and without lead content
Stainless steel—304L or 316L
Ethylene propylene diene monomer (EPDM) rubber—peroxide cured

Materials to Avoid in the TCS Loop

The following materials must never be used in any part of your water supply system:

Oxidizing biocides, such as chlorine, bromine, and chlorine dioxide
Aluminum
Brass with greater than 15% zinc (unless a corrosion inhibitor is added to protect high-zinc brass) or brass containing lead
Irons (non-stainless steel)

While aluminum is an excellent heat transfer material, is lightweight, and is low-cost, it can be problematic in a closed-loop system containing copper. While it is possible to inhibit corrosion of both metals in the same system, it is very difficult to do and requires more expensive treatments and a higher level of care and monitoring. It may be simpler, given the pervasiveness of copper in these systems, to avoid the problems by excluding aluminum from the design.

6.1.6 Monitoring

Consideration should be given to providing appropriate monitoring of critical parameters consistent with typical best practices for any mission critical system. For additional information and guidance, refer to Section 5.1.4, "Monitoring."

6.2 NON-WATER-BASED TECHNOLOGY COOLING SYSTEM

The most prevalent liquids used in the TCS loop other than water are refrigerants and dielectrics. The refrigerants are generally pumped as single-phase liquid to the datacom racks.
Once the liquid reaches the rack, a phase change may occur to achieve high heat transfer. With these systems the CDU is a specially designed unit whose heat exchange and pumping components are matched to the particular refrigerant or dielectric. In addition, the piping and/or hose components used to transport the fluid to the rack are of a special design to eliminate leak potential and prevent failures.

6.2.1 Operational Requirements

In most cases the liquid supplied to the rack or cabinet will be above the room dew point. The ASHRAE book Thermal Guidelines for Data Processing Environments, 3rd Edition (ASHRAE 2012c) specifies a maximum room dew point for a Class 1 environment of 63°F (17°C). Two techniques can be used to maintain liquid supply temperatures above the dew point: set the control point of the liquid above the Class 1 environmental specification of 63°F (17°C), or control the supply temperature so that it is adjusted a set amount above the measured room dew point.

6.2.2 Liquid Requirements

The flow rates, velocities, and quality levels of the refrigerant or dielectric will be specific to the design provided by the manufacturer of the cooling system.

6.2.3 Wetted Material Requirements

The materials required for the liquid loop will be specified by the manufacturer of the cooling system.

References and Bibliography

ASHRAE. 2009a. Best Practices for Datacom Facility Energy Efficiency, Second Edition. Atlanta: ASHRAE.
ASHRAE. 2009b. Design Considerations for Datacom Equipment Centers, Second Edition. Atlanta: ASHRAE.
ASHRAE. 2011. ASHRAE Handbook—HVAC Applications. Atlanta: ASHRAE.
ASHRAE. 2012a. ASHRAE Handbook—HVAC Systems and Equipment. Atlanta: ASHRAE.
ASHRAE. 2012b.
Green Tips for Data Centers. Atlanta: ASHRAE.
ASHRAE. 2012c. Datacom Equipment Power Trends and Cooling Applications. Atlanta: ASHRAE.
ASHRAE. 2012c. Thermal Guidelines for Data Processing Environments, 3rd Edition. Atlanta: ASHRAE.
ASHRAE. 2013a. ANSI/ASHRAE/IESNA Standard 90.1-2013, Energy Standard for Buildings Except Low-Rise Residential Buildings. Atlanta: ASHRAE.
ASHRAE. 2013b. ASHRAE Guideline 0-2013, The Commissioning Process. Atlanta: ASHRAE.
Baer, D. 2004. Managing data center heat density. HPAC Engineering 76(2):44–47.
Beaty, D. 2004. Liquid cooling: Friend or foe. ASHRAE Transactions 110(2):643–52.
Beaty, D., and R. Schmidt. 2004. Back to the future: Liquid cooling data center considerations. ASHRAE Journal 46(12):42–46.
Belady, C., and D. Beaty. 2005. Data centers: Roadmap for datacom cooling. ASHRAE Journal 47(12):52–55.
Belady, C. 2001. Cooling and power considerations for semiconductors into the next century. Proceedings of the International Symposium on Low Power Electronics and Design, IEEE, Huntington Beach, CA, pp. 100–105.
Chu, R.C. 2004. The challenges of electronic cooling: Past, current and future. Journal of Electronic Packaging, Transactions of the ASME 126(4):491–500.
Delia, D.J., T.C. Gilgert, N.H. Graham, U. Hwang, P.W. Ing, J.C. Kan, R.G. Kemink, G.C. Maling, R.F. Martin, K.P. Moran, J.R. Reyes, R.R. Schmidt, and R.A. Steinbrecher. 1992. System cooling design for the water-cooled IBM Enterprise System/9000 processors. IBM Journal of Research and Development 36(4):791–803.
Jones, Denny A. 1996. Principles and Prevention of Corrosion, 2nd Edition. New York: Prentice Hall.
Kakac, S., and H. Liu. 2002.
Heat Exchangers: Selection, Rating and Thermal Design. CRC Press, Inc.
Kurkjian, C., and J. Glass. 2005. Air-conditioning design for data centers—Accommodating current loads and planning for the future. ASHRAE Transactions 111(2):715–24.
Pautsch, G. 2001. An overview on the system packaging of the CRAY SV2 supercomputer. Proceedings of IPACK'01, pp. 617–24.
Schmidt, R. 2005. Liquid cooling is back. ElectronicsCooling 11(3):34–38.
Schmidt, R., and B. Notohardjono. High-end server low-temperature cooling. IBM Journal of Research and Development 46(6):739–51.
Schmidt, R., R. Chu, M. Ellsworth, M. Iyengar, D. Porter, V. Kamath, and B. Lehman. 2005. Maintaining datacom rack inlet air temperatures with water-cooled heat exchangers. Proceedings of the Pacific Rim/ASME International Electronic Packaging Technical Conference (INTERpack), San Francisco, CA, July 17–22, Paper IPACK2005-73468.
Singh, P. 1985. Poughkeepsie, NY. Private communication with the author.
Singh, P., G.T. Galyon, J.H. Dorler, J. Zahavi, and R. Ronkese. 1992. Potentiodynamic polarization measurements for predicting pitting of copper in cooling waters. Paper 212, Corrosion 92, The NACE Annual Conference and Corrosion Show, Nashville, TN.
Shah, R.K., and D.P. Sekulic. 2003. Fundamentals of Heat Exchanger Design. New Jersey: John Wiley & Sons, Inc.
Stahl, L., and C. Belady. 2001. Designing an alternative to conventional room cooling. Proceedings of the International Telecommunications and Energy Conference (INTELEC), Edinburgh, Scotland, October.
Telcordia. 2012. GR-63-CORE, Network Equipment-Building System (NEBS) Requirements: Physical Protection. Telcordia Technologies Generic Requirements (4). Piscataway, NJ: Telcordia Technologies, Inc.
Trane. 2001. As equipment evolves so must piping practices... Split systems and interconnecting refrigerant lines. Engineers Newsletter 27(4).
Uptime Institute. 2009–2012. Data Center Site Infrastructure Tier Standard: Topology.
New York: Uptime Institute.

Glossary of Terms

aerobic bacteria: bacteria that live and grow in the presence of oxygen, often an indication of slime that can foul equipment; take remedial action when you detect 1000 organisms/mL or greater.
air cooling: the case where only air must be supplied to an entity for operation.
air-cooled rack: the case where only air must be provided to the rack or cabinet for operation.
air-cooled datacom equipment: the case where only air is provided to the datacom equipment for operation.
air-cooled electronics: the cases where air is provided directly to the electronics within the datacom equipment for cooling with no other form of heat transfer; if the datacom equipment contains both liquid-cooled and air-cooled electronics, the equipment itself is considered liquid-cooled.
aluminum: a lightweight, silver-white, metallic element.
anaerobic bacteria: bacteria that can live without the presence of oxygen; generally absent in water with a high pH; take remedial action when you detect 10 organisms/mL or greater.
availability: a percentage value representing the degree to which a system or component is operational and accessible when required for use.
BAS: building automation system.
BMS: building management system.
brownfield: abandoned or underused industrial or commercial facilities available for reuse, expansion, or redevelopment.
CDU: coolant distribution unit.
CFD: computational fluid dynamics.
CFU: colony-forming unit.
chloride: an indication of water softener regeneration problems if the system chloride level is much higher than the chloride level of the replacement water; increased levels of chloride can increase corrosion and indicate the need for the addition of higher levels of corrosion inhibitors.
CMMS: computerized maintenance management system.
condenser water system (CWS): consists of the liquid loop between the cooling tower and the data center chiller(s); it also is typically at the facility level and may or may not include a dedicated system for the information technology space(s) (see Figure 1.1).
conductivity: a measure of the mineral content in the water; in a nitrite program, high conductivity is generally an indicator of bacterial degradation of the nitrite.
copper: a ductile, malleable, reddish-brown, corrosion-resistant diamagnetic metallic element.
coolant distribution unit (CDU): conditions the technology cooling system (TCS) or datacom equipment cooling system (DECS) coolant in a variety of manners and circulates it through the TCS or DECS loop to the rack, cabinet, or datacom equipment.
corrosion: deterioration of intrinsic properties in a material due to reactions with its environment.
CRAC: computer room (refrigeration based) air conditioner.
CRAH: computer room (water cooled) air handler.
data center: a building or portion of a building whose primary function is to house a computer room and its support areas; data centers typically contain high-end servers and storage products with mission critical functions.
datacom: a term that is used as an abbreviation for the data and communications industry.
datacom equipment cooling system (DECS): an isolated loop within the rack that
is intended to perform heat transfer from the heat-producing components (CPU, memory, power supplies, etc.) to a fluid-cooled heat exchanger also contained within the IT rack; this system is limited to the information technology (IT) rack (see Figure 1.1).
dead legs: unused runs of piping that have been isolated with valves from the rest of the system.
dielectric fluid: a fluid that is a poor conductor of electricity.
DX: direct expansion (i.e., refrigeration).
economizer, air: a ducting arrangement and automatic control system that allows the cooling supply fan system to supply outdoor (outside) air to reduce or eliminate the need for mechanical refrigeration during mild or cold weather.
economizer, water: a system by which the supply air of a cooling system is cooled directly or indirectly or both by evaporation of water or by other appropriate fluid (in order to reduce or eliminate the need for mechanical refrigeration).
EMCS: energy management and control system.
facility water system (FWS): primarily consists of a system between the data center chiller(s) and the CDU; the chilled-water system includes the chiller plant, pumps, hydronic accessories, and necessary distribution piping at the facility level; this system typically is at the facility level and may include a dedicated system for the information technology space(s) (see Figure 1.1).
firmware: data stored in a computer's read-only memory (ROM) or elsewhere in a computer's circuitry that provides instruction for the computer or hardware devices; unlike normal software, firmware cannot be changed or deleted by an end user and remains on the computer whether it is on or off.
fouling factors: added resistance to the transfer of heat from the liquid caused by deposits fouling the heat transfer surface.
greenfield: a project that lacks any constraints imposed by prior network equipment spaces (i.e., new or fully refurbished).
heat pipe: also defined as a type of heat exchanger; a tubular closed chamber containing a fluid in which heating one end of the pipe causes the liquid to vaporize and transfer to the other end, where it condenses and dissipates its heat; the liquid that forms flows back toward the hot end by gravity or by means of a capillary wick.
hydronic: a term pertaining to water used for heating or cooling systems.
IT: information technology.
iron: a heavy, ductile, magnetic metallic element; silver-white in pure form but readily rusts.
liquid cooling: the case where liquid must be circulated to and from the entity for operation.
liquid-cooled rack: the case where liquid must be circulated to and from the rack or cabinet for operation.
liquid-cooled datacom equipment: the case where liquid must be circulated to and from the datacom equipment for operation.
liquid-cooled electronics: defines the cases where liquid must be circulated directly to and from the electronics within the datacom equipment for cooling with no other form of heat transfer.
manganese: a gray-white or silvery, brittle metallic element that resembles iron but is not magnetic; important only if manganese is present in concentrations greater than 0.1 ppm in the replacement water.
milligram per liter: equivalent to ppm.
molybdate: a commonly used corrosion inhibitor.
MTBF: mean time between failure.
NEBS: network equipment-building system—a set of safety, spatial, and environmental design guidelines applied to telecommunications equipment.
nitrite: a commonly used corrosion inhibitor.
NTU: nephelometric turbidity unit.
OASIS: Organization for the Advancement of Structured Information Standards.
oBIX: Open Building Information Exchange.
pH: a measure of hydrogen concentration, used to determine whether the water has either corrosive or scaling tendencies; pH is a logarithmic scale of the concentration of hydrogen ions (H+) as compared to that of pure distilled water, which has an equivalent pH of 7.
Pourbaix diagram: a plot of potential versus pH used to predict the thermodynamic tendency of a metal to corrode.
ppm: parts per million.
rack: frame for housing electronic equipment.
redundancy: often expressed compared to the baseline of N, where N represents the number of pieces to satisfy the normal conditions; some examples are "N + 1," "N + 2," "2N," and "2(N + 1)"; a critical decision is whether N should represent normal conditions or whether N includes full capacity during offline routine maintenance. Facility redundancy can apply to an entire site (backup site), systems, or components; IT redundancy can apply to hardware and software.
refrigerants: in a refrigerating system, the medium of heat transfer that picks up heat by evaporating at a low temperature and pressure and gives up heat on condensing at a higher temperature and pressure.
reliability: a percentage value representing the probability that a piece of equipment or system will be operable throughout its mission duration; values of 99.9% ("three nines") and higher are common in data and communications equipment areas; for individual components, the reliability is often determined through testing; for assemblies and systems, reliability is often the result of a mathematical evaluation based on the reliability of individual components and any redundancy or diversity that may be used.
SCADA: system control and data acquisition.
scale: a deposition of water-insoluble constituents, formed directly on the metal surface.
sulfate: an inorganic ion that is widely distributed in nature.
suspended solids and turbidity: a cloudy condition in water due to suspended silt or organic matter.
technology cooling system (TCS): serves as a dedicated loop intended to perform heat transfer from a datacom equipment cooling system into the chilled-water loop; this system would not typically extend beyond the boundaries of the information technology space—the exception is a configuration in which the facility conditioning and circulating unit is located outside the data center (see Figure 1.1).
temperature, dew point: the temperature at which water vapor has reached the saturation point (100% relative humidity).
total hardness: the sum of the calcium and magnesium ions in water.
VLT: vendor lookup table.
Appendix

Survey of Customer Water Quality of Chilled-Water System Loop

Parameter                    Recommended Limits         Typical Values Prior to Treatment
pH                           7 to 9                     5.2 to 9.9
Corrosion inhibitor          Required                   –
Sulfides                     <10 ppm                    7 to 290
Sulfate                      <100 ppm                   62 to 2780
Chloride                     <50 ppm                    29 to 3070
Bacteria                     <1000 CFU/mL               450 to 7700
Total hardness (as CaCO3)    <200 ppm                   80 to 2750
Residue after evaporation    <500 ppm                   140 to 13986
Turbidity                    <20 NTU (nephelometric)    3.3 to 640

The table above depicts the water quality of a number of computer facilities that were surveyed prior to the installation of systems that required liquid cooling. The facilities shown in this table were all able to meet the recommended limits of this document after they implemented water treatment plans.

Index

A
absorption chillers 1
accumulator 53–54
acrylonitrile butadiene rubber 69
air- and water-side economizer 14, 23, 61
air entrainment 65
air-cooled datacom equipment 4, 46–47, 50, 95
air-cooled electronics 4, 95
air-cooled rack 4, 41–42, 95
algae 68
availability 2, 9, 11–14, 28–29, 57, 62, 73, 75–78, 95

B
BACnet 74
bacteria 66, 68, 77, 90–91, 95–96, 101
bacterial contamination 77
BAS (building automation systems) 10–11, 74–75, 79, 95

C
CABA XML 74
calcium carbonate 67
CDU 5–7, 41, 43–47, 52–60, 63–64, 69–73, 80, 84, 87–88, 92, 96–97
central plant equipment 9
central station air handlers 9
chemical bypass filter 54
chemical detergent 91
chilled-water storage 10, 13, 20
chillers 1, 3–4, 6, 9–17, 20, 23, 26, 36, 38, 50, 58, 62–64, 73, 96–97
chloride 66–67, 90, 96, 101
closed circuit 19, 21, 23, 67
cold plates 4–5, 43, 53–54, 60, 88
commissioning 2, 9–10, 78–79
computer room air-conditioning (CRAC) 1, 4, 9, 14, 30, 57, 96
computerized maintenance management system (CMMS) 11, 96
condensation 43, 47, 53, 62, 64, 72, 81, 84
condenser 1–2, 4, 6–7, 10–11, 13, 15–17, 21, 23, 36, 38, 50–51, 56–57, 80–81, 96
condenser water loop 6–7, 16
condenser water system (CWS) 2, 6, 10, 23, 36
controllers 53–54, 56, 76
coolant distribution unit (CDU) 5–7, 41, 43–47, 52–60, 63–64, 69–73, 80, 84, 87–88, 92, 96–97
cooling loops 3–6, 52, 68, 77, 88–90
cooling service equipment 10
cooling tower 4, 6–7, 9–10, 16–23, 62–63, 65, 69, 81, 91, 96
corrosion inhibitor 15, 63, 66, 68, 70, 90–91, 96, 98, 101
counterflow heat exchangers 53
CPU 5, 48, 52, 97

D
data center 1–2, 4–7, 9–15, 19, 25–26, 36–37, 39, 51–52, 59–60, 62–64, 70–71, 80–82, 87, 96–97, 99, 101
data center chiller 6, 96–97
datacom equipment cooling system (DECS) 2, 5–7, 43, 49, 52, 59, 62–63, 75, 96, 99
deionize 90
dielectrics 4–6, 43, 46, 50–54, 62, 75–78, 83, 92, 97
direct expansion (DX) 1, 6, 13, 49, 52, 80–81, 84, 97
direct return 27–28
distill 90, 98
double-ended loop 33–37

E
earthquake protection 25, 39
economizer 6, 14, 23–24, 62, 83, 97
energy management and control systems (EMCS) 10–11, 97
ethylene glycol 5–6, 62
ethylene propylene diene monomer (EPDM) 69, 91
evaporative cooling 16–17, 19

F
facility cooling systems 9, 23, 78
facility water system (FWS) 3, 6, 10, 37, 42–43, 47, 49, 53, 56–57, 59–60, 62–66, 69–72, 75, 78, 81, 97
fluid cooler 7, 19
fouling 2, 19, 23, 39, 65, 67, 76–78, 97
fouling factors 2, 65, 97
free cooling 23
fungi 68

G
glycol 5–6, 15, 23, 62

H
heat exchanger 4–7, 14–15, 19–24, 42–43, 46–48, 51–55, 63–65, 72–73, 76–77, 85, 88, 97
heat pipes 4–6, 97
humidifiers 13

L
LAN 75
leak detection systems 76–77
liquid-cooled cold plates 4
liquid-cooled condenser 56–57
liquid-cooled datacom equipment 3, 41, 46, 48–49, 98
liquid-cooled electronics 3
liquid-cooled rack 3–4, 41–46, 48–49, 51, 59–60, 75, 77, 79, 87, 98
loop isolation segments 10
loop isolation valve 13, 38
looped piping main 29

M
mean time between failure (MTBF) 14, 75, 98
mean time to repair (MTTR) 14
microbiological activity 68
O
Open Building Information Exchange (oBIX) 74
Organization for the Advancement of Structured Information Standards (OASIS) 74, 100

P
pH 66–68, 77, 90, 95, 98–101
polyester sealant (anaerobic) 69
polyethylene 69
polypropylene 69
polytetrafluoroethylene (PTFE) 69
poppet-style quick disconnect 72
power outages 20, 76
propylene glycol 5–6, 62
pump 5–7, 9–15, 22–23, 25–26, 36, 38, 47, 51, 53–54, 56, 58, 63, 72–73, 75–78, 91–92, 97

R
rack envelope 4
redundancy 6–7, 12–14, 21, 23, 26, 28, 53–54, 56–57, 76, 78, 99
refrigerant 2, 4–7, 43, 46, 50–54, 62, 75–78, 81–84, 92, 99
refrigerant piping 2, 81
refrigeration 1, 4, 6, 13–14, 18, 23, 97
reliability 2, 7, 9, 12–13, 23, 25, 27, 31–32, 34, 36, 38, 52, 69, 73, 75–78, 81, 84, 99
reverse osmosis 90
reverse return 28–29, 31

S
scalability 9–10, 82
scale 4, 67, 99
seismic events 78
sensor 11–12, 53–54, 74–76, 79
single-ended loop 30–33, 35–36
SOAP 74
sulfate 66, 69, 90, 99, 101
sulfides 66–67, 90, 101
suspended solids 17, 67–68, 77, 90, 99
system control and data acquisition (SCADA) 10, 99

T
technology cooling system (TCS) 2, 5–7, 43, 59, 87–88, 96, 99
thermal storage 13
thermosiphon 5–6
three-way valves 63
total hardness 66, 69, 90, 100–101
tubing 65–66, 76, 78, 88–89
turbidity 66, 68, 90, 98–99, 101
two-way modulating water flow 63

U
UPS 11, 13, 20, 58, 73, 77–78

V
vapor compression cycle 5–6, 49–50
variable-frequency drive 14
velocity 2, 25, 65, 67–68, 81, 88–89
volumetric airflow 4

W
water hammer 65, 77, 88
water quality 2–3, 39, 65–66, 77–79, 88–90, 101
water-cooled chiller 16–17, 20
water-cooled rack 9, 31
water-side economizer 14, 23, 62
wetted materials 2–3, 66, 69, 91–92
The ASHRAE Bookstore is a one-stop shop for all ASHRAE publications, including books, standards, guidelines, Handbook volumes, CDs, DVDs, and additional electronic resources. These publications provide important information for engineers, architects, and others covering HVAC&R design and application as well as cutting-edge topics such as sustainability, building performance and commissioning, energy efficiency, and indoor air quality. ASHRAE members are entitled to discounts on many ASHRAE publications. Visit www.ashrae.org for more information on the many benefits ASHRAE membership has to offer. Visit the ASHRAE Bookstore to discover additional titles that complement your professional and educational needs. If you found Liquid Cooling Guidelines for Datacom Equipment Centers, Second Edition, a valuable resource, consider ordering other books in the ASHRAE Datacom Series. Other titles include:

- PUE™: A Comprehensive Examination of the Metric
- Green Tips for Data Centers
- Real-Time Energy Consumption Measurements in Data Centers
- Particulate and Gaseous Contamination in Datacom Environments
- High Density Data Centers—Case Studies and Best Practices
- Best Practices for Datacom Facility Energy Efficiency
- Structural and Vibration Guidelines for Datacom Equipment Centers
- Design Considerations for Datacom Equipment Centers
- Datacom Equipment Power Trends and Cooling Applications
- Thermal Guidelines for Data Processing Environments

Bookstore: www.ashrae.org/bookstore