| hash | doc_id | section | content |
|---|---|---|---|
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.2.1.3 Cold aisle covers | An improvement of the "hot aisle, cold aisle" concept is provided by placing covers over the cold aisles as shown in figure 15. This approach reduces cold air losses and is able to demonstrate a rapid reduction in energy usage. Figure 14: Segregation of hot aisles and cold aisles Figure 15: Cold aisle covers |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.2.1.4 Segregation using curtains | A low cost, simple and rapidly deployable solution to problems of air-flow employs plastic curtains hanging from the ceiling of the computer room to isolate hot areas from cold areas as shown in figure 16. This approach has the advantage that there is no impact on the installed infrastructure associated with fire detection and suppression (sprinklers). Figure 16: Segregation of hot/cold aisles using curtains |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.2.1.5 High density areas | Vendors are predicting trends for equipment with significantly increased energy consumption density (kW/m2). This is an issue for many legacy data centres that are not designed to provide such high levels of cooling. As an example, how could a computer room built with an average ratio of 0,7 kW/m2 be adapted to new generation racks of blade servers, for which it is necessary to cool 20 kW over a 1 m2 area? If there are no restrictions on floor space, such as in a data centre following consolidation initiatives, this problem can be solved by not filling the cabinets fully, and locating the cabinets such that maximum efficiency of cooling is maintained without having to change existing air flows. However, where the use of floor space has to be optimized there will be commercial pressure to fully load cabinets. For this case, very high density areas are the appropriate answer, but these areas have to be considered separately from other "traditional" IT equipment areas and provided with their own energy and cooling needs. The "pod" concept is supported by a number of vendors and extends the "hot aisle, cold aisle" concept by reducing the volume within the data centre that is subject to environmental control. Pods are enclosed and secured areas containing a specific set of cabinets/racks which are provided with the required level of environmental control (as shown in figure 17 and figure 18). One advantage of such a solution is that these areas can be installed outside computer rooms. Figure 17: The "pod" concept Figure 18: Pods within a computer room |
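The arithmetic behind the 0,7 kW/m2 versus 20 kW example can be made concrete. A minimal sketch: the two densities are the clause's figures, while the 1 000 m2 room size is a hypothetical value added for illustration.

```python
# Rough arithmetic behind the high-density example above (a sketch, not
# part of the standard): a legacy room cooled at 0.7 kW/m2 must spread
# each 20 kW blade rack's heat over many square metres of cooling budget.

ROOM_DENSITY_KW_M2 = 0.7   # legacy cooling capacity per square metre
RACK_LOAD_KW = 20.0        # new-generation blade rack (1 m2 footprint)

# Floor area whose cooling budget a single rack consumes:
area_per_rack = RACK_LOAD_KW / ROOM_DENSITY_KW_M2
print(f"each 20 kW rack consumes the cooling budget of {area_per_rack:.0f} m2")

# Racks supportable in a hypothetical 1 000 m2 room without new cooling:
room_m2 = 1000.0
racks = room_m2 * ROOM_DENSITY_KW_M2 / RACK_LOAD_KW
print(f"a {room_m2:.0f} m2 room can cool only {racks:.0f} such racks")
```

This is why the clause distinguishes the spread-out option (partially filled cabinets) from dedicated high-density pods with their own cooling.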
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.2.2 Reduction of thermal waste in cabinets/racks | Any potential improvements in energy usage offered by the approaches detailed in clause 7.2.2.1 may be impacted by the failure of installers and maintainers to fit/re-fit "blanking panels" in the front and rear of the racks within cabinets when equipment is not installed or has been removed - which creates significant losses of cold air coming into the rack and contributes to an inefficient management of cooling as indicated in figure 19. This approach involves very little cost and is able to demonstrate a rapid reduction in energy usage. Figure 19: Installation of blanking plates |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.3 Modification of temperature and humidity | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.3.1 General | Increasing the temperature and adjusting humidity levels in computer rooms without violating vendors' specifications enables substantial reductions in energy usage associated with environmental control without significant Capex. The level of reduction depends upon some basic factors such as the size of the room and the occupancy ratio. The EN 300 019 series of standards defines the environmental classification for network telecommunications equipment. The European Code of Conduct [4] requests vendors to consider changing the current temperature ranges applicable to information technology equipment to approach or adopt those defined for network telecommunications equipment in EN 300 019-1-3 [11]. Experiments have been undertaken by some major telecommunications operators to determine the impact of increasing the average temperature in computer rooms without violating vendors' specifications. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.3.2 Results of experimentation | The experiment described below was undertaken in a telecommunications operator's data centre in Paris. Details of the experiment: • The underground data centre comprised a 1 000 m2 computer room operating non-critical systems (development, backup, etc.) within which the heat dissipation was in the range 300 W/m2 to 1 500 W/m2. The information technology equipment comprised servers, disk arrays, robotics and networking equipment and exhibited operational temperature ranges from 18 °C to 28 °C - 30 °C (dependent on vendor specifications), which is more restrictive than EN 300 019-1-3 [11] as applied to network telecommunications equipment. • The computer room cooling was provided by air cooling (recycling mode) from seven dry cooler units (80 kW per unit) and the energy consumption for air cooling was between 40 % and 60 % of the total energy consumption. • Measurement instrumentation comprised 40 temperature and hygrometry sensors (measurement every 5 min) together with measurement of air cooling energy consumption. • The first change in environmental conditions is defined in table 13. This reduced the energy consumption by 12 %, with no operational failure since 2007. This is shown in figure 20. Table 13: Step 1 changes in environmental conditions Figure 20: Computer room temperature and subsequent savings in energy usage • The second change in environmental conditions is defined in table 14. This reduced the energy consumption by 20 %, with no operational failure since 2007. This is also shown in figure 20. Table 14: Step 2 changes in environmental conditions |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.3.3 Time before "system-shutdown" | Many types of information technology equipment undergo automatic shutdown when temperatures exceed the vendors' maximum operating temperature specification (typically 30 °C or 32 °C). The available time to repair and/or restart cooling before automatic shut-down occurs is a major operational concern. The time to repair is defined as the interval between a total failure of the air conditioning system (with no redundancy) and the time at which the temperature in the room reaches a maximal functional limit. It depends upon a number of variables, the most important of which are: • the area and volume of the space/room being cooled; • the contents of the space/room; • the quantity and type of information technology equipment (racks, servers, etc.) and the global electrical consumption; • the type, number and capacity of the CRAC units; • air conditioning system redundancy; • the operating temperature of the room. In circumstances where the total cooling demand of the equipment is very high, the time to repair can be short and is further reduced if higher operating temperatures are applied. Figure 21 shows the results of experimental work (based on computation and observation of real events) in a computer room with the following characteristics: • thermal load: 362 kW; • computer room area: 1 080 m2 (thermal load per unit area: 335 W/m2); • computer room volume: 3 240 m3 (thermal load per unit volume: 112 W/m3); • cooling system: an air-conditioning system working in pure recycling mode (no free cooling). Seven dry cooler units (each with a cooling power of 80 kW) are used, associated with seven air treatment units. Cold air is blown through perforated tiles in the floor and the hot air is extracted through the ceiling. Figure 21: Example showing effect of operating temperature on "time to repair" Figure 21 clearly shows that increasing the operating temperature significantly reduces the time before system-shutdown. This is an obvious concern for IT Managers and Operations Managers and may lead to this approach to energy usage reduction being ignored. However, a variety of approaches can be applied to provide lower, but still significant, savings including: • segregation of strategic "mission critical" business from other less critical activities, applying the increased operating temperature and humidity only to the less critical areas; • segregation of network telecommunications equipment from the information technology equipment, applying the increased temperature and humidity only to the network telecommunications equipment. NOTE: Such segregation would not be required and greater savings would be possible if information technology equipment vendors changed their operating specifications to those of network telecommunications equipment as requested by the European Code of Conduct [4]. |
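The scale of the "time to repair" problem can be sketched with a lumped-air estimate. The sketch below is not from the present document: it assumes dry-air properties (density about 1,2 kg/m3, specific heat about 1005 J/(kg.K)) and ignores the thermal mass of equipment and building fabric, so it is a pessimistic lower bound rather than a reproduction of the figure 21 curve.

```python
# Lower-bound estimate of the time before "system-shutdown" after a total
# cooling failure. Sketch only: the room air is treated as a single lumped
# mass of dry air, and the thermal mass of equipment, floor and walls is
# ignored, so real rooms warm more slowly than this suggests.

AIR_DENSITY_KG_M3 = 1.2
AIR_CP_J_PER_KG_K = 1005.0

def minutes_to_limit(load_kw: float, volume_m3: float,
                     t_operating_c: float, t_limit_c: float = 30.0) -> float:
    """Minutes for room air to rise from operating temperature to the limit."""
    heat_capacity = AIR_DENSITY_KG_M3 * volume_m3 * AIR_CP_J_PER_KG_K  # J/K
    rise_rate = (load_kw * 1000.0) / heat_capacity                     # K/s
    return (t_limit_c - t_operating_c) / rise_rate / 60.0

# Room from the study above: 362 kW thermal load, 3 240 m3 volume.
for t_op in (21, 23, 25, 27):
    print(f"{t_op} degC operating -> {minutes_to_limit(362, 3240, t_op):.1f} min")
```

Even this crude model reproduces the trend of figure 21: each degree of extra operating temperature removes a fixed slice of the available reaction time.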
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.3.4 Restrictions on implementation | In an Uptime Institute Tier 4 data centre, the risk posed by a cooling system failure is reduced by the presence of redundant environmental control systems and/or power distribution equipment. Any associated risks of operating the information technology and network telecommunications equipment are also minimized. However, in data centres meeting the requirements of lower Uptime Institute Tiers, those risks and their resultant impact on the business conducted from and by the data centre need to be analysed. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4 Alternative cooling mechanisms | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4.1 Free cooling | If the opportunity exists and outdoor conditions permit, the re-introduction of air-side economizers (free cooling) or water-side economizers (cooling tower) approaches may be considered to reduce energy usage without heavy Capex. Free cooling uses external air or water temperature conditions for cooling rooms by introducing fresh air to the computer room equipment using traditional methods. This can produce significant reductions in energy usage and has a direct effect on the improvement of PUE (since cooling may represent 35 % to 45 % of total energy consumption). There are two principal approaches to "free cooling": • "free air-cooling": based on the air temperatures outside the building containing the data centre; • the use of existing cold water sources, such as the sea, a lake, a river or others. The ideal climatic combination at the intake and exhaust of the equipment lies in the range 22,5 °C to 22,7 °C and 45 % to 50 % relative humidity without irregular deviations in either parameter. If the suppliers of information technology equipment were to adopt the operational environmental specification applied to network telecommunications equipment (see EN 300 019-1-3 [11]), the period of time during which free cooling could be applied and the resulting saving (see table 16) would generally increase. NOTE: Optimum environmental control will be achieved when monitoring of the environmental conditions takes place as close as possible to the intake and exhaust at the rack level. The effectiveness, and therefore the use, of free cooling are determined by two principal climatic factors external to the data centre premises: • mean air temperature throughout the year (figure 22 shows the mean temperatures across the world); • mean relative humidity throughout the year. Figure 22: Annual mean temperature across the world Total reduction in energy usage is directly linked to the external climatic conditions. The potential reduction is proportional to the total period during which the external environmental conditions match the ideal combination. Table 15 provides a methodology to evaluate energy savings for the use of free cooling. NOTE: 100 % of matching climatic conditions does not correspond to a 100 % reduction of energy consumption since the remainder of the cooling infrastructure such as chillers, CRACs, pumps and fans is still required and consumes energy (only the refrigerant batteries are inactive). "F" in table 15 is typically 0,6 (as used in table 16 and figure 23). Table 16 and figure 23 show the result for a series of examples using this methodology.
Table 15: Free cooling savings profile

| Quantity | Symbol/Value |
|---|---|
| Total energy consumption of environmental control system | W |
| Total number of hours per year | 8 760 |
| Number of hours of matching conditions during year | H |
| Energy consumption fraction of environmental control system made inactive during periods of matching conditions | F |
| % of potential free cooling per year | P = H/8760 |
| Savings on cooling | R = P x W x F |

Table 16: Example of free cooling savings

| No of hours of potential free cooling (hours [days]/year) | % of free cooling per year (H/8760) | Cooling load savings (P x W) | Total energy savings (R) |
|---|---|---|---|
| 1 000 [42] | 11,42 % | 4,22 % | 2,53 % |
| 2 000 [83] | 22,83 % | 8,45 % | 5,07 % |
| 3 000 [125] | 34,25 % | 12,67 % | 7,60 % |
| 4 000 [167] | 45,66 % | 16,89 % | 10,13 % |
| 5 000 [208] | 57,08 % | 21,12 % | 12,67 % |
| 6 000 [250] | 68,49 % | 25,34 % | 15,20 % |
| 7 000 [292] | 79,91 % | 29,57 % | 17,74 % |
| 8 000 [333] | 91,32 % | 33,79 % | 20,27 % |
| 8 760 [365] | 100,00 % | 37,00 % | 22,20 % |

Figure 23: Free cooling savings |
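The table 15 methodology is simple enough to express directly. The sketch below assumes the same figures as table 16: environmental control is W = 37 % of site energy and F = 0,6 of it is made inactive while external conditions match.

```python
# Free-cooling savings methodology of table 15, reproduced as a sketch
# with the table 16 assumptions: cooling is 37 % of site energy (W) and
# 60 % of the cooling load is made inactive while conditions match (F).

HOURS_PER_YEAR = 8760

def free_cooling_savings(h_matching: int, w: float = 0.37, f: float = 0.6):
    """Return (P, cooling load saving P*W, total saving R = P*W*F)."""
    p = h_matching / HOURS_PER_YEAR
    return p, p * w, p * w * f

for hours in (1000, 4000, 8760):
    p, load, r = free_cooling_savings(hours)
    print(f"{hours:>5} h matching: P={p:.2%}, cooling load={load:.2%}, R={r:.2%}")
# 1000 h -> R = 2.53 %, 4000 h -> 10.13 %, 8760 h -> 22.20 % (as in table 16)
```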
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4.2 Direct liquid cooling | This topic is included here for completeness but it is for future study. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4.3 Emerging technology (auto cooled chassis or chip-level cooling) | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4.3.1 Cooling-on-the-chip | This next generation technology aims to directly apply the cooling to the semiconductor packages within equipment such as servers. Hardware manufacturers are working on these technologies which are predicted to provide significant improvements in energy efficiency. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.4.3.2 Auto-cooled chassis | Chassis or racks already exist which contain integrated cooling systems (air or liquid). |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 7.2.5 Enhancements of cooling systems | The following enhancements of cooling systems involve high Capex and Opex and represent a significant risk in terms of business continuity and quality of service during the implementation phase: a) installation of high-efficiency variable-speed air-handler fans and chilled water pumps; b) optimization of data centre airflow configuration; c) installation of high-efficiency CRAC units; d) sizing/re-sizing of cooling systems and the configuration of redundancy to maximize efficiency; e) increasing the temperature difference between chilled water supply and return in order to allow a reduction in chilled water flow rates. 8 Infrastructure requirements to optimize energy efficiency The implementation of effective planning of pathways and spaces in accordance with EN 50174-1 [8] and the data centre specific aspects of EN 50174-2 [9] is intended to maximize the energy efficiency of environmental control systems. The key aspects are: • the use of generic cabling leading to structured growth and changes via distributors as opposed to ad-hoc direct point-to-point cabling within the computer room (except in very specific localized areas); • the use of hot aisle and cold aisle or POD approaches together with detailed requirements and recommendations to ensure that cooling air reaches the intended location and exhausts are not obstructed. 9 Improvement of energy efficiency of power distribution systems |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.1 General | This clause describes the approaches that may be employed to increase the energy efficiency of the power distribution systems within the data centre. Some of the approaches are applicable to legacy data centres while others are more likely to be applied in new data centres. Further details are provided in clauses 10 and 11. Figure 24 shows the power distribution systems within the data centre from the main electrical arrival to, and within, the network telecommunications and information technology equipment. Each component in the power distribution system has a specified and measurable energy efficiency. Figure 24: Power distribution system Actions to increase the efficiency of the power distribution system in an existing data centre will, except for actions such as the deployment of specific software solutions for energy management, require significant Capex investment and are not without risk for business continuity. Indications of the impact of some of the actions in this domain of energy efficiency are shown in annex A. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.2 Uninterruptible Power Supplies (UPS) | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.2.1 Efficiency | The most popular UPS technologies include "line interactive" and "double conversion" (see figure 25). UPS technologies such as "delta conversion" are recognized but are proprietary technology from one specific vendor. Figure 25: Multi-vendor UPS technologies The IEC defines the efficiency of a UPS as "the ratio of output power to input power under defined operating condition". As shown in figure 26: • a UPS is generally more efficient when used at full load; • line-interactive technology is more efficient than double conversion for reasons linked to input load. Figure 26: UPS efficiency (line interactive: approximately 97 % to 98 %; double conversion: 87 % at 25 % load rising to 95 % at full load) However, the data in figure 26 is based on an average of specified performance values taken from the main UPS vendors and was reported by Ecos Consulting-Epri Solutions [10]. The efficiency of the different technologies varies significantly. For example: • at 50 % load, the most efficient technology is "line-interactive" (with a 98 % efficiency); • "double-conversion" technology is the least efficient; • in general, UPS running in excess of 50 % load have good efficiency, but there is a dramatic decrease below 40 % load (see figure 26). In many legacy data centres, UPS systems are installed with the maximum capacity anticipated for future needs. It is not uncommon for this capacity never to be fully used. In addition, redundancy requirements also promote the operation of UPS systems below their full capacity. In the case of a 2N redundant configuration (independent of the UPS technology), it is always necessary to manage power so that systems are loaded at no more than 50 % of capacity. Consequently every "2N" mode operates at less than maximum efficiency. Modular UPS systems (see clause 9.2.2) allow the capacity to be mapped to the demand, thereby improving energy efficiency. Both UPS types can be mixed, in either the same or in different zones of the data centre. Some use a traditional UPS as their main source, but use smaller, modular systems as the second source for their most critical hardware to give "2N" redundancy without incurring that cost for the entire data centre. |
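The "2N" penalty described above follows directly from the load-efficiency curve. Below is a minimal sketch using the double-conversion points recoverable from figure 26 (87 %, 90 %, 94 % and 95 % at 25 %, 50 %, 75 % and 100 % load); the linear interpolation between those points is an added assumption, not vendor data.

```python
# Sketch: why 2N redundancy hurts UPS efficiency. Efficiency points are
# read from figure 26 (double conversion); interpolating linearly between
# them is an assumption made here for illustration.
import bisect

LOADS = [0.25, 0.50, 0.75, 1.00]
EFF_DOUBLE_CONV = [0.87, 0.90, 0.94, 0.95]

def efficiency(load_fraction: float) -> float:
    """Interpolated double-conversion efficiency at a given load fraction."""
    i = bisect.bisect_left(LOADS, load_fraction)
    if i == 0:
        return EFF_DOUBLE_CONV[0]
    if i == len(LOADS):
        return EFF_DOUBLE_CONV[-1]
    x0, x1 = LOADS[i - 1], LOADS[i]
    y0, y1 = EFF_DOUBLE_CONV[i - 1], EFF_DOUBLE_CONV[i]
    return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)

demand = 0.8  # IT demand as a fraction of one UPS unit's capacity
print(f"single unit at {demand:.0%} load: {efficiency(demand):.1%}")
print(f"2N pair, each at {demand/2:.0%} load: {efficiency(demand/2):.1%}")
```

With an 80 % demand, splitting the load across a 2N pair pushes each unit down to 40 % load, exactly the region where figure 26 shows efficiency falling away.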
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.2.2 Modular UPS | In legacy data centres, UPS commonly use double conversion technology, as shown in figure 25. This means two conversions (AC-DC and DC-AC), generating consequent losses of energy. Recently, vendors have begun to offer smaller modules (from 10 kVA to 50 kVA) with which to build "modular" UPS systems. The main advantage of the modular UPS approach is the ability to grow capacity in line with real needs (with an initial sizing). These modules are "hot pluggable" and can be removed for exchange or maintenance. Modular systems are also generally designed to accept one more module than required for their rated capacity, making them "N+1 ready" at lower cost than a large traditional UPS system. Used correctly, modular UPS systems run at higher efficiency since they can be used close to maximum rated capacity (see figure 26). The main concern regarding modular UPS is reliability, since the probability of failure increases with the increased number of components employed. Figure 27: Example of modular line interactive UPS technology |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3 Energy efficiency improvement solutions | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.1 Measurement of energy efficiency of existing equipment | This involves the review of the existing power distribution equipment within the chain in terms of its energy efficiency. This may be undertaken without significant Capex. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.2 Energy capacity management | This involves the use of electrical capacity management tools in order to deliver more efficient use of energy. This requires significant Capex (for the monitoring equipment) and Opex (installation and software). The implementation of electrical capacity management can generate some immediate reduction in consumption, without any work on the power distribution components and without any risks for business continuity. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.3 Review of policy | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.3.1 General | The implementation of improvements in energy efficiency by changing the distribution policy (such as AC or HVDC) has an impact on the equipment used and can represent significant Capex and a risk to business continuity. Such changes cannot be recommended in existing data centres. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.3.2 HVDC versus AC | ETSI has published EN 300 132-3 [12], which discusses 400 V AC and DC power distribution. Some trials of HVDC have been undertaken at an international level by a number of organizations such as power distribution manufacturers, hardware vendors, laboratories and universities. These trials clearly demonstrate the interest in investigating more deeply the provision of 400 V DC directly to the IT equipment. Many documents, studies and standards suggest that direct DC can generate savings of 5 % to 15 % depending on several conditions. Technically, servers from the main vendors could accept direct 400 V DC and work on this continues. One of the main problems concerns safety rules when working in a HVDC environment (which is quite unusual in the IT world), making maintenance and technical interventions on a DC supply chain much more costly than on AC due to the lack of competencies on the market. The present document judges that it is too early to take a definitive position on the usage of HVDC and that the work undertaken, the results from trials and the announcements of the main actors (vendors, manufacturers) will be reviewed before further recommendations are made. NOTE: Several standardization institutes are working on relevant areas such as HVDC 400 V plugs, connectors and safety breakers. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.4 High efficiency distribution equipment | This involves the installation of individual power distribution equipment with improved energy efficiency specifications. Typical examples include: • high-efficiency power distribution units; • high-efficiency motors in fans and pumps; • UPS units that exhibit improved efficiency over the full range of load; • rotary-based UPS units; • correctly sized power distribution and conversion to optimize efficiency. These solutions represent significant Capex and a risk to business continuity. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 9.3.5 Backup power | The use of on-site generated energy (most often from fuel generator sets) as back-up to the main electrical power source does not improve energy efficiency. It is intended to be deployed in crisis situations where the aim is to get back to normal operation as quickly as possible. Note that backup power infrastructures shall be designed to provide 100 % of the energy needs for a specified and not insignificant period (e.g. 72 hours minimum). |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 10 Energy efficiency within existing data centres | Table 17 shows the short and medium term actions that may be employed to improve energy efficiency within existing data centres by reference to the clause numbers of the present document.

Table 17: Short and medium term actions within existing data centres

| Technology area | Short and medium term actions (clause references) |
|---|---|
| IT infrastructure | 6.3.1, 6.3.2, 6.3.3.2, 6.3.3.3, 6.3.4 |
| Environmental control systems | 7.2.1, 7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4.1, 7.2.5 |
| Physical infrastructure | 8 |
| Power distribution systems | 9.3.2, 9.3.3, 9.3.4 |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11 Energy efficiency within new data centres | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.1 General | This clause details the disciplines that require consideration during the design, installation and operation of a data centre in order to maximize savings on energy usage and reduce its "Carbon Footprint". Many of the solutions detailed in clauses 6, 7, 8 and 9 are applicable during the design of a new data centre. However, it is clear that the design and operation of the most energy efficient data centres requires a multi-disciplinary approach, with each discipline contributing in its own domain to make energy usage more efficient - rather than one specific initiative concerning a single domain. Every aspect of the data centre requires detailed consideration, ranging from its location, via the consolidation actions employed, to the tools and processes adopted to monitor the operational aspects of the data centre. The process is described schematically in figure 28. Figure 28: The path to energy efficiency within new data centres |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2 Multi-disciplinary approach | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.1 Location | The selection of a location is primarily affected by the availability, and quality, of power supplies. National or local legislation, regulation and constraints concerning industrial buildings influence the selection of the location of new data centres. However, facilities offered by local authorities, such as tax reductions, subsidies or green bonuses, also have an effect. The selection of location is fundamental in relation to the "green" credentials based upon the following criteria and impacts: • the availability of "green" energy sources such as solar, wind or hydraulic; NOTE 1: Solar or wind energy sources do not guarantee the continuity required for a data centre, but can represent a non-negligible part of the energy when appropriate conditions are respected. • the nature of the energy present (renewable or not), its costs, its quality and the availability of backup supplies; • the type of materials to be used for the building (walls, roof, windows, etc.); • proximity of airports, ports, railway stations and highways - which has consequences on shipping costs and on CO2 emissions associated with transport. The selection of a location is also important for energy efficiency, since climatic conditions will influence the available "cooling" approaches and the extent of the need for cooling. NOTE 2: In the case of free water cooling, proximity to a river, lake, sea, or other source of cold water will be a main factor. In such a case, flood or tsunami risks have to be taken into account. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.2 Energy sources | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.2.1 Main energy | In the case of an Uptime Institute Tier 4 data centre, two separate sources of energy, coming from two remote plants or two different sources of energy, are required; this dramatically limits the range of potential locations around the world that can support such data centres. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.2.2 Backup energy | Backup energy has to be dimensioned to cover 100 % of the data centre's needs for a determined period, and provision has to be made using "on site" sources with generators or other renewable energy solutions. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.3 Building conception | The specification of a building housing a data centre is directly impacted by its location. The orientation of the building, together with the thermal insulation properties of the building's walls, roof and windows, has a significant effect on the power demands associated with environmental control of the building interior. European standards for energy efficient buildings are in preparation. National approaches to energy efficient buildings exist, such as HQE (Haute Qualité Environnementale) in France and LEED (Leadership in Energy and Environmental Design) developed in the USA. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.4 Internal design | In order to maximize the overall energy efficiency, the internal structure and configuration of the data centre should be analyzed to ensure that the appropriate power and environmental control resources are applied in different areas. A minimum set of identifiable spaces within data centres are: • Computer rooms (principally accommodating information technology equipment). • Network telecommunications rooms (accommodating network telecommunications equipment). • Technical rooms (accommodating UPS, batteries, etc.). Each of these, and any other, spaces shall be assessed in terms of their demands for electrical power and cooling based upon their size and operating environmental conditions. Computer rooms should also be assessed in terms of the need for, and structural implications of, the construction of raised floors. The internal design process should also cover the impact on the cooling system introduced by the necessary utilities and services in the spaces. To support this activity, the design of data centre cabling is specified in EN 50173-5 [7] and EN 50173-2 [6] and the planning and installation of the cabling is specified in EN 50174-1 [8] and EN 50174-2 [9]. Figure 29: Example schematic of an Uptime Institute Tier 3 data centre design |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.5 Energy and cooling | The introduction of high efficiency power distribution systems and components, as described in clauses 9.3.3 and 9.3.4, has a direct effect on the external power demand in support of the information technology and network telecommunications equipment but does not have an automatic effect on PUE unless accompanied by improvements in the power demand of the environmental control systems. The use of free cooling as described in clause 7.2.4.1 represents the primary method of improving PUE in new data centres. The reduction of the energy involved in environmental control by means of increased operating temperature and humidity, as described in clause 7.2.3, also has a direct effect on PUE. |
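The interplay between distribution losses, cooling energy and PUE can be illustrated with a toy model. The loads below are illustrative values invented for this sketch; only the definition of PUE (total facility energy divided by IT equipment energy, per clause 5.3.1) comes from the present document.

```python
# Sketch of how PUE responds to cooling and distribution changes, using
# a three-component model with invented example loads.

def pue(it_kw: float, cooling_kw: float, distribution_loss_kw: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return (it_kw + cooling_kw + distribution_loss_kw) / it_kw

base = pue(it_kw=1000, cooling_kw=450, distribution_loss_kw=150)
free_cooling = pue(1000, 450 * 0.5, 150)   # halve cooling energy
better_ups = pue(1000, 450, 150 * 0.7)     # cut distribution losses by 30 %
print(f"baseline: {base:.2f}, free cooling: {free_cooling:.2f}, "
      f"efficient distribution: {better_ups:.2f}")
```

Cutting distribution losses moves PUE only slightly, while halving cooling energy (the free-cooling lever) dominates, which is the point made above.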
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.6 IT infrastructure | The IT infrastructure of new data centres has to: • be designed for maximum agility, scalability, and a common shared architecture; • adopt the most efficient hardware technologies proposed by the vendors (requiring vendors to prove the real efficiency of their technologies, and their impact on TCO). Figure 30: IT infrastructure layers Independent of the Uptime Institute Tier level, a new data centre will comprise a combination of: • a rack-based architecture using very powerful servers, which will need high-density areas using, as a minimum, covered cold aisles with raised floors (as described in clause 7.2.2.1.3) or pods (as described in clause 7.2.2.1.5), perhaps in otherwise uncooled rooms with exhaust air vented outside the building. This means the size of the computer room is not a main criterion; the greatest concern is to carry the right resources to the right place. For these areas, the most appropriate indicator is the ratio of total power provided to the rack to total computational power (that is, X kW per Y TPM-C, as described in clause 5.3.2.1); • legacy servers or Unix mainframes, which have the same problems as racks, consuming a lot of energy in a small area and with significant heat dissipation; • other kinds of equipment such as robotics, storage, network telecommunications equipment, or non-racked servers that may be accommodated in traditional computer rooms, for which the right ratio is the kW/m2 (see clause 5.3.2.1). It should be noted that storage arrays, due to increased density, have energy needs approaching those of servers; an average need of 300 W per terabyte can be calculated (for example, the latest generation of disk arrays can provide more than 100 TB of capacity within a few square metres). A future trend for next generation storage is "diskless storage", but the current costs of these solutions compared with disk technology make them unattractive for companies. |
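As a quick check of the storage figure quoted above, using only the clause's own numbers (an average of 300 W per terabyte and a 100 TB array):

```python
# Quick check of the storage power figure quoted in the clause above:
# at an average of 300 W per terabyte, a late-generation 100 TB array
# draws a server-class load, which is the clause's point.
W_PER_TB = 300.0
ARRAY_TB = 100.0
print(f"{ARRAY_TB:.0f} TB x {W_PER_TB:.0f} W/TB = {ARRAY_TB * W_PER_TB / 1000:.0f} kW")
```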
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.7 Software | Infrastructure management software brings benefits by enabling energy needs to be managed more efficiently and by giving a clear view of the power available in a computer room, in a rack or in a server. The same applies if thermal measurement tools are implemented in the computer room: these can visualize the hot zones, in the room or in a rack, and support modelling for a better design of the computer room. This contributes to the improvement of the efficiency of energy usage for cooling. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 11.2.8 Processes | Two major types of actions are to be launched: first, a global consolidation programme to reduce the number of technical and logical components, which are the main cause of energy expenses (see the topic on consolidation methods); second, a set of automation tools for operations in the data centre. The NGDC has to be seen as a "Service Factory", providing the right service, with the right resources, at the right time. That means all operational steps have to be automated - from the provisioning of resources to the way applications are managed. The ITIL (Information Technology Infrastructure Library) best practices are key to good management of the IT. ITIL processes involved in energy efficiency: • Asset Management, to know what exists in the IT; • Capacity Management, to measure how what exists is used and to size application needs more precisely; • Service Level Management, to provide the right level of service for an application. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 12 Conformance | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 12.1 Existing data centres | To conform to the present document the following assessment needs to be undertaken: • A: the total energy consumption of the designated data centre. • B: the total computational load. • KPI = B/A. To achieve minimal conformance over a period of time the KPI has to show an improvement of 15 % compared to the original KPI. To achieve basic conformance over a period of time the KPI has to show an improvement of 25 % compared to the original KPI. To achieve enhanced conformance over a period of time the KPI has to show an improvement of 50 % compared to the original KPI. |
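The conformance levels above reduce to a threshold test on the KPI improvement. A minimal sketch; the example KPI values are hypothetical, and the units of A and B are whatever the operator has chosen, since only the ratio's relative change matters.

```python
# Sketch of the clause 12.1 conformance check: KPI = B/A, where A is the
# total energy consumption and B the total computational load; thresholds
# are the 15/25/50 % improvements stated in the clause.

def conformance_level(kpi_original: float, kpi_now: float) -> str:
    """Classify KPI improvement against the clause 12.1 thresholds."""
    improvement = (kpi_now - kpi_original) / kpi_original
    if improvement >= 0.50:
        return "enhanced conformance"
    if improvement >= 0.25:
        return "basic conformance"
    if improvement >= 0.15:
        return "minimal conformance"
    return "not conformant"

# Hypothetical figures: computational load per unit energy rose 2.0 -> 2.6.
print(conformance_level(2.0, 2.6))   # 30 % improvement -> basic conformance
```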
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 12.2 New data centres | To conform to the present document: • the target PUE shall be calculated in accordance with clause 5.3.1; • equipment and processes shall be established to enable measurement of the electrical consumption at different points to determine PUE (i.e. to determine the comparative values of clause 5.3.1). The initial PUE is unlikely to meet the target PUE but may be expected to reduce and, based on the assumptions of the plan, shall meet the target PUE when the total computational load matches that of the target. Unless there are changes in application or performance of the data centre that justify a re-calculated target value, the actual PUE of the data centre based on total computational load shall continue to meet or exceed the target value. New data centres should be designed with a low PUE target, which means every aspect of the design has to be studied, from the location where the data centre is to be built to the way the applications will be operated in the data centre. This also covers the future plans of the vendors to improve the energy efficiency of the equipment, such as direct cooling of the heat sources at chip level. Assessment of energy needs in terms of energy per square metre will no longer be appropriate, due to the localized needs of some individual racks (perhaps 40 kW to 100 kW in 2010). Some specific closed high power-density areas (pods) should be built inside the computer rooms. In parallel, a programme of information system (IS) rationalization should be launched to minimize the number of applications, operating systems and IS complexity, since these are frequently the major causes of excessive energy consumption. Such work on functional complexity is the most beneficial approach to maximizing energy efficiency. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13 Recommendations | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1 Existing data centres | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1.1 General | There are three main approaches concerning energy management in an existing data centre: • reduction of PUE (increased efficiency); • reduction of energy consumption (cost effective); • optimum usage of existing resources (environmental and/or technical). For each of these approaches some specific actions should be initiated. It should be noted that some actions will have positive effects via multiple approaches. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1.2 Reduction of PUE | In addition to conforming to clause 12.1, it is recommended that equipment and processes be established to enable measurement of the electrical consumption at different points to determine PUE (i.e. to determine the comparative values of clause 5.3.1). All actions related to reducing the power consumption of cooling system infrastructures (chillers, fans, engines, pumps) and increasing the energy efficiency of the power distribution system (UPS, PDU, transformers) will have a positive effect on the PUE. It should be noted that most of these actions are costly and represent a risk for business continuity. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1.3 Reduction of energy consumption | In addition to conforming to clause 12.1, it should be noted that actions to reduce the energy consumption of the information technology equipment (such as power management, capacity management and consolidation initiatives) will have a positive effect on energy consumption and energy costs, but could have a negative effect on PUE. These actions can be balanced by actions in computer rooms such as aisle segregation, higher operating temperatures and free cooling. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1.4 Optimum usage of existing resources | In addition to conforming to clause 12.1, actions should be undertaken to optimize the usage of existing resources, including capacity management, virtualization and logical consolidation, in order to increase the computational load of servers. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.1.5 High density areas | High-density areas should not be created where they would impact air-flow and the effective cooling of existing areas. In enterprise data centres, it is recommended not to load racks fully and to balance the use of high- and low-density equipment in rows. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2 New data centres | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.1 General | For a Tier 1 or 2 data centre, the target PUE should be in the range 1,3 to 1,5 (DCIE 66,7 % to 76,9 %). For a Tier 3 or 4 data centre, the target PUE should be in the range 1,6 to 2,0 (DCIE 50 % to 62,5 %). NOTE: The target ranges reflect the opportunity for free cooling as shown in table 16, and the higher values of PUE for higher Tier data centres reflect the need for redundancy of cooling and power distribution systems. |
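The DCIE figures in brackets are simply the reciprocal of PUE expressed as a percentage; the one-line check below reproduces the clause's ranges.

```python
# DCIE is the reciprocal of PUE expressed as a percentage, which is how
# the clause 13.2.1 ranges were derived; a quick check of those figures.

def dcie(pue: float) -> float:
    """Data Centre Infrastructure Efficiency (%) from PUE."""
    return 100.0 / pue

for p in (1.3, 1.5, 1.6, 2.0):
    print(f"PUE {p:.1f} -> DCIE {dcie(p):.1f} %")
# PUE 1.3 -> 76.9 %, 1.5 -> 66.7 %, 1.6 -> 62.5 %, 2.0 -> 50.0 %
```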
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.2 Location study | This should be the main factor in the planning of future data centres. Following a comprehensive study covering risks (including natural, political, economic, etc.), the choice of the location of the data centre will have a determining effect on all costs (Capex and Opex) as follows: • locations with mean annual temperatures able to provide the maximum free cooling potential can generate up to 60 % savings on cooling energy; • the requirements for redundant infrastructures (separate power supplies using separate paths) in higher Tier level data centres place constraints on the possible location. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.3 Data centre construction | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.3.1 External | The nature of the building components has an important impact on energy consumption. The building should behave neither as a "pressure cooker" nor as a "Thermos flask". The building should be constructed according to "green" standards such as HQE and LEED. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.3.2 Internal | Data centres should be designed with different areas, which will not only provide improved energy efficiency but also minimize Capex and Opex, by considering: • high density areas, treated separately from other areas, in small specific rooms; • traditional computer rooms, dimensioned using a kW/m2 ratio; • segregation of mission-critical from non-strategic business equipment; • several environmental conditions reflecting the criticality of the equipment and/or the free cooling capabilities during the year. This is shown schematically in figure 31. Figure 31: Environmental conditions |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.4 Cooling | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.4.1 Systems | There should be a mix of traditional and free cooling systems. Traditional cooling should be applied: • in areas needing heavy cooling, such as high density areas and rooms with strategic mission-critical IT equipment; • when temperatures are not compatible with free cooling. In all cases, the cooling equipment should be "energy efficient", using the latest generation of fans, pumps and motors. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.4.2 Temperature control | Mean temperature in computer rooms should reflect the criticality of the equipment or the application. It is not recommended to maintain low temperatures (18 °C to 20 °C) in rooms containing non-critical equipment or equipment that is specified to operate in accordance with EN 300 019-1-3 [11]. The introduction of an ETSI requirement for information technology equipment within network data centres to conform to EN 300 019-1-3 [11] is critical to allowing substantial reductions in energy consumption of cooling systems. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.5 IT Infrastructure | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.5.1 Architecture and policy | Policy should be focused on shared components of the IT infrastructure. As described in figure 32, the architecture should be based on blade farms for x86 servers, and partitionable high-end Unix servers or traditional mainframes for large database processing. For storage, a common shared infrastructure should be built on a SAN architecture. A data-tiering policy should be adopted with appropriate physical support (low-cost storage for non-critical data, high-end disk arrays for critical data). |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.5.2 Automation and capacity management | As described in figure 33 all operation processes should be automated using specific tools including: • supervision; • backup; • high-availability - disaster recovery; • inventory - asset management; • scheduling; • monitoring (measurement); • capacity management; • utility computing. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 13.2.6 Organization - processes | It is recommended that the data centre is viewed as a component of a "Service", for which it provides the technical environment and a set of functionalities specific to operating and monitoring those Services, as shown in figure 32. Note that the technical infrastructure components and the logical layers exist only because Service Delivery rests on them. It is recommended that the data centre is considered as a factory, producing Services with a service level commitment, as shown in figure 33. Figure 32: The future data centre internal operation architecture Figure 33: The data centre as a service provider (layered view: technical environment; network environment; IT infrastructure; consolidation/virtualization layer; automation tools; process automation and capacity management; service level agreement layer with Gold/Silver/Bronze services) |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14 Future opportunities | |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.1 General | Vendors in all areas are considering new solutions to meet changing needs for energy consumption, cooling and computing power and are proposing future equipment which integrates these demands. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.2 Energy | • Produce your own renewable on-site energy, as much as possible. • Re-use wasted heat. • HVDC if significant energy consumption decrease is proved and tested. • New generation, higher efficiency, UPS. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.3 Cooling | Free air or water cooling opportunities are of major interest for new data centres and will have a great impact on the PUE. Liquid or gas cooling directly in the servers, or auto-cooled chassis or frames (previously used for mainframe computing components), are to be studied. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.4 Energy efficient IT hardware | Follow the technical road-map of the vendors and verify, in all sourcing or purchasing processes, the real efficiency of equipment: • servers: new standards concerning temperature range and component efficiency (transformers, direct DC in the server, cooling on the chip, efficient fans, dynamic power management, etc.); • storage arrays, libraries, etc.; • switches, routers, directors, ports, etc. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.5 Software, tools and re-engineering | Data centres have to be equipped with a set of tools and gauges in order to be able to easily measure efficiency, and launch a set of initiatives concerning existing infrastructures including: • measurement tools for monitoring IT activity (CPU, Memory, I/O, etc.); • capacity management for IT infrastructure (the right resource at the right moment); • utility computing (automation for processes) - inventory, provisioning; • computing workload management; • virtualization - grid; • consolidation initiatives. |
a525c3531c3b8f5d14495cfdffcb2180 | 105 174-2-2 | 14.6 Consolidation initiatives | Think consolidation for servers and storage, and prepare a common shared infrastructure to host applications as shown in figure 32. Annex A (informative): Indications of the effect of energy efficiency actions. Table A.1 provides indications of the reduction of energy consumption achievable using some of the actions detailed in clauses 6, 7 and 9. The reductions shown are first order effects only, that is they do not show associated changes in one domain resulting from an action in another domain.

Table A.1: Indicative reductions in energy consumption

| Domain | Action (description) | Clause references | Typical saving within the domain (%) | Typical overall saving (%) | Nature of action |
|---|---|---|---|---|---|
| IT infrastructures | Removal of obsolete equipment | 6.2.1, 6.2.2 | 8 | 3,6 (see note 1) | Short term |
| IT infrastructures | Enablement of power management | 6.2.3.2.1 | 10 | 4,5 (see note 1) | Short term |
| IT infrastructures | Activation of sleep mode | 6.2.3.2.2 | 30 | 13,5 (see note 1) | Short term |
| IT infrastructures | Capacity management | 6.2.3.3 | 8 | 3,6 (see note 1) | Short term |
| IT infrastructures | Virtualization | 6.2.4.3 | 40 | 18,0 (see note 1) | Short term |
| IT infrastructures | Logical consolidation | 6.2.4.4 | 45 | 20,3 (see note 1) | Short term |
| IT infrastructures | Functional consolidation | 6.2.4.3, 6.2.4.4 and 6.2.4.5 | 60 | 27 | Long term |
| Environmental control systems | Improvement of cooling efficiency | 7.2.2.1.2, 7.2.2.1.3, 7.2.2.1.4 | 30 | 11,1 (see note 2) | Medium term |
| Environmental control systems | High density areas | 7.2.2.1.5 | 25 | 9,3 (see note 2) | Medium term |
| Environmental control systems | Modification of temperature and humidity | 7.2.3 | 40 | 14,8 (see note 2) | Short term |
| Environmental control systems | Free cooling - air-side | 7.2.4.1 | 50 | 18,5 (see note 2) | Medium term |
| Environmental control systems | Free cooling - water-side | 7.2.4.1 | 30 | 11,1 (see note 2) | Medium term |
| Environmental control systems | Cooling-on-the-chip and auto-cooled chassis | 7.2.4.3 | 80 | 29,6 (see note 2) | Long term |
| Environmental control systems | Increasing the temperature difference between chilled water supply and return | 7.2.5 | 3 | 1,1 (see note 2) | Short term |
| Environmental control systems | High-efficiency variable-speed air-handler fans and chilled water pumps | 7.2.5 | 3 | 1,1 (see note 2) | Medium term |
| Environmental control systems | Sizing/re-sizing of cooling systems and the configuration of redundancy | 7.2.5 | 10 | 3,8 (see note 2) | Short term |
| Power distribution systems | HVDC | 9.3.3 | 7 | 1,1 (see note 3) | Long term |
| Power distribution systems | High-efficiency power distribution units | 9.3.4 | 5 | 0,8 (see note 3) | Long term |
| Power distribution systems | High-efficiency motors in fans and pumps | 9.3.4 | 2 | 0,3 (see note 3) | Medium term |
| Power distribution systems | Improved efficiency UPS units | 9.3.4 | 10 | 1,5 (see note 3) | Medium term |
| Power distribution systems | Rotary-based UPS units | 9.3.3 | 8 | 1,2 (see note 3) | Medium term |
| Power distribution systems | Capacity management | 9.3.4 | 3 | 0,5 (see note 3) | Short term |

NOTE 1: The overall saving shown assumes that IT infrastructure initially constitutes 45 % of overall energy consumption. NOTE 2: The overall saving shown assumes that environmental control systems initially constitute 37 % of overall energy consumption. NOTE 3: The overall saving shown assumes that power distribution systems initially constitute 15 % of overall energy consumption. Document history: V1.1.1, October 2009, Publication. |
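The "overall" column of table A.1 is the within-domain saving scaled by the domain's assumed share of total site energy (IT 45 %, environmental control 37 %, power distribution 15 %, the shares implied by the table's arithmetic). A sketch reproducing a few rows:

```python
# First-order arithmetic behind table A.1: the overall saving is the
# within-domain saving scaled by the domain's assumed share of total
# site energy consumption.

DOMAIN_SHARE = {"it": 0.45, "environmental": 0.37, "power": 0.15}

def overall_saving(domain: str, within_domain_pct: float) -> float:
    """First-order overall saving (%) from a within-domain saving (%)."""
    return within_domain_pct * DOMAIN_SHARE[domain]

print(overall_saving("it", 30))             # sleep mode row: 13.5
print(overall_saving("environmental", 40))  # temperature/humidity row: 14.8
print(overall_saving("power", 10))          # improved UPS row: 1.5
```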
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 1 Scope | ETSI EN 305 174-2 [3] specifies a minimum set of required practices for energy management which are applicable to ICT sites of all sizes and business models. These are taken from a sub-set of those practices recommended by CLC/TR 50600-99-1 [1]. CLC/TR 50600-99-1 [1] also contains a much wider range of recommended practices which are applicable to specific designs of ICT site and may be applied to improve the energy management beyond the minimum requirements of ETSI EN 305 174-2 [3]. The present document: • maps the practices of CLC/TR 50600-99-1 [1] to general application of ETSI EN 305 174-2 [3] and also to the specific design options which may apply in a given ICT site; • details examples of the impact of such practices in relation to reductions in energy consumption or improvements in energy efficiency or management. In addition, the present document addresses the end-of-life and maintenance aspects of WEEE (as in ETSI EN 305 174-8 [4] and ETSI TS 105 174-8 [5]). |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 2 References | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 2.1 Normative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found at https://docbox.etsi.org/Reference. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity. The following referenced documents are necessary for the application of the present document. [1] CLC/TR 50600-99-1:2019: "Information technology - Data centre facilities and infrastructures - Part 99-1: Recommended practices for energy management". [2] ETSI EN 305 174-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment and Lifecycle Resource Management; Part 1: Overview, common and generic aspects". [3] ETSI EN 305 174-2: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment and Lifecycle Resource Management; Part 2: ICT sites". [4] ETSI EN 305 174-8: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment and Lifecycle Resource Management; Part 8: Management of end of life of ICT equipment (ICT waste/end of life)". [5] ETSI TS 105 174-8: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment and Lifecycle Resource Management; Part 8: Implementation of WEEE practices for ICT equipment during maintenance and at end-of-life". |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 2.2 Informative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity. The following referenced documents are not necessary for the application of the present document but they assist the user with regard to a particular subject area. [i.1] CLC/TR 50600-99-2:2019: "Information technology - Data centre facilities and infrastructures - Part 99-2: Recommended practices for environmental sustainability". [i.2] CENELEC EN 50173 series: "Information technology - Generic cabling systems". [i.3] CENELEC EN 50174-2: "Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings". [i.4] CENELEC EN 50600-1: "Information technology - Data centre facilities and infrastructures - Part 1: General concepts". [i.5] CENELEC EN 50600-2-1: "Information technology - Data centre facilities and infrastructures - Part 2-1: Building construction". [i.6] CENELEC EN 50600-2-2:2019: "Information technology - Data centre facilities and infrastructures - Part 2-2: Power supply and distribution". [i.7] CENELEC EN 50600-2-3:2019: "Information technology - Data centre facilities and infrastructures - Part 2-3: Environmental control". [i.8] CENELEC EN 50600-2-4: "Information technology - Data centre facilities and infrastructures - Part 2-4: Telecommunications cabling infrastructure". [i.9] CENELEC EN 50600-4-2: "Information technology - Data centre facilities and infrastructures - Part 4-2: Power Usage Effectiveness". [i.10] CENELEC EN 50600-4-6: "Information technology - Data centre facilities and infrastructures - Part 4-6: Energy Reuse Factor". [i.11] CENELEC EN 62040 series: "Uninterruptible power systems (UPS)". [i.12] ETSI EN 300 019-1-3: "Environmental Engineering (EE); Environmental conditions and environmental tests for telecommunications equipment; Part 1-3: Classification of environmental conditions; Stationary use at weatherprotected locations". [i.13] ETSI EN 300 132 series: "Environmental Engineering (EE); Power supply interface at the input of Information and Communication Technology (ICT) equipment". [i.14] ETSI EN 300 132-3 series: "Environmental Engineering (EE); Power supply interface at the input of Information and Communication Technology (ICT) equipment; Part 3: Up to 400 V Direct Current (DC)". [i.15] ETSI EN 300 132-3-1: "Environmental Engineering (EE); Power supply interface at the input to telecommunications and datacom (ICT) equipment; Part 3: Operated by rectified current source, alternating current source or direct current source up to 400 V; Sub-part 1: Direct current source up to 400 V". [i.16] ETSI EN 301 605: "Environmental Engineering (EE); Earthing and bonding of 400 VDC data and telecom (ICT) equipment". [i.17] ETSI EN 303 470: "Environmental Engineering (EE); Energy Efficiency measurement methodology and metrics for servers". [i.18] ETSI EN 305 200-2-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Energy management; Operational infrastructures; Global KPIs; Part 2: Specific requirements; Sub-part 1: ICT Sites". 
[i.19] ETSI EN 305 200-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Energy management; Operational infrastructures; Global KPIs; Part 1: General requirements". [i.20] ETSI EN 305 200-3-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Energy management; Operational infrastructures; Global KPIs; Part 3: ICT Sites; Sub-part 1: DCEM". [i.21] ETSI ES 202 336-9: "Environmental Engineering (EE); Monitoring and Control Interface for Infrastructure Equipment (Power, Cooling and Building Environment Systems used in Telecommunication Networks); Part 9: Alternative Power Systems". [i.22] ETSI ES 202 336-12: "Environmental Engineering (EE); Monitoring and control interface for infrastructure equipment (power, cooling and building environment systems used in telecommunication networks); Part 12: ICT equipment power, energy and environmental parameters monitoring information model". [i.23] ETSI ES 203 199: "Environmental Engineering (EE); Methodology for environmental Life Cycle Assessment (LCA) of Information and Communication Technology (ICT) goods, networks and services". [i.24] ETSI TR 102 489: "Environmental Engineering (EE); European telecommunications standard for equipment practice; Thermal management guidance for equipment and its deployment". [i.25] ETSI TS 103 199: "Environmental Engineering (EE); Life Cycle Assessment (LCA) of ICT equipment, networks and services; General methodology and common requirements". [i.26] ETSI TS 105 200-3-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Energy management; Operational infrastructures; Implementation of Global KPIs; Part 3: ICT sites; Sub-part 1: DCEM". [i.27] ISO 14040: "Environmental management. Life cycle assessment. Principles and framework". [i.28] ISO 14044: "Environmental management. Life cycle assessment. Requirements and guidelines". [i.29] ISO 14045: "Environmental management. Eco-efficiency assessment of product systems. Principles, requirements and guidelines". [i.30] EN 14511 series: "Air conditioners, liquid chilling packages and heat pumps for space heating and cooling and process chillers, with electrically driven compressors". [i.31] ISO 14644-1:2015: "Cleanrooms and associated controlled environments. Classification of air cleanliness by particle concentration". [i.32] ISO 16890-1: "Air filters for general ventilation. Technical specifications, requirements and classification system based upon particulate matter efficiency (ePM)". [i.33] ISO 50001: "Energy management systems. Requirements with guidance for use". [i.34] ISO/IEC 20000 series: "Information technology - Service management". [i.35] ISO/IEC 21836: "Information technology - Data centres - Server Energy Effectiveness Metric". [i.36] Void. [i.37] ISO/IEC 30134-6: "Information technology - Data centres - Key performance indicators: Part 6: Energy Reuse Factor (ERF)". [i.38] ISO/IEC TR 22237-50: "Information technology - Data centre facilities and infrastructures - Part 50: Earthquake risk and impact analysis". [i.39] ISO/IEC TS 22237-2: "Information technology - Data centre facilities and infrastructures - Part 2: Building construction". [i.40] ISO/IEC TS 22237-3: "Information technology - Data centre facilities and infrastructures - Part 3: Power distribution". [i.41] ISO/IEC TS 22237-4: "Information technology - Data centre facilities and infrastructures - Part 4: Environmental control". 
[i.42] CLC/TR 50600-99-1:2018: "Information technology - Data centre facilities and infrastructures - Part 99-1: Recommended practices for energy management". [i.43] ASHRAE white paper 2011: "Gaseous and Particulate Contamination Guidelines for Data Centres". |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 3 Definition of terms, symbols and abbreviations | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 3.1 Terms | For the purposes of the present document, the terms given in ETSI EN 305 174-2 [3], CLC/TR 50600-99-1 [1], ETSI EN 305 174-8 [4], ETSI TS 105 174-8 [5] and the following apply: absorption chiller: refrigeration unit that uses a heat source to provide the energy needed to drive a cooling process microgrid: group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid primary (power) supply: principal power supply that provides power to the ICT site under normal operating conditions secondary (power) supply: power supply that is independent of the primary power supply and is continuously available to provide power to the ICT site following disruption of the primary supply NOTE: A second feed to a separate transformer from the same grid is not a secondary supply. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 3.2 Symbols | For the purposes of the present document, the symbols given in ETSI EN 305 174-2 [3], CLC/TR 50600-99-1 [1], ETSI EN 305 174-8 [4] and ETSI TS 105 174-8 [5] apply. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 3.3 Abbreviations | For the purposes of the present document, the abbreviations given in ETSI EN 305 174-1 [2], ETSI EN 305 174-2 [3], CLC/TR 50600-99-1 [1], ETSI EN 305 174-8 [4], ETSI TS 105 174-8 [5] and the following apply: AC Alternating Current BREEAM Building Research Establishment Environmental Assessment Method DC Direct Current EMAS Eco-Management and Audit Scheme HQE Haute Qualité Environnementale LCA Life Cycle Assessment MMR Measurement, Monitoring and Reporting UPS Uninterruptible Power System |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 4 Applicability of the present document | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 4.1 Introduction to CLC/TR 50600-99-1 | CLC/TR 50600-99-1 [1] was first published in 2016 as a "standards-based" equivalent to the Best Practices Guidelines for the EU Code of Conduct on Data Centre Energy Efficiency V.7. At the time of initial publication of CLC/TR 50600-99-1 it was hoped that the European Commission Directorate General Joint Research Centre would abandon their Best Practices Guidelines in favour of CLC/TR 50600-99-1, which would have been jointly badged as a CLC and DG JRC document. This did not occur and, as a result, CLC/TR 50600-99-1 was updated each year until 2019 in line with the changes to the Best Practices Guidelines. However, in 2019 it was decided to discontinue the direct reference to the Best Practices Guidelines and to update CLC/TR 50600-99-1 only when required. This separation from the EU Code of Conduct on Data Centre Energy Efficiency, and the associated stability of CLC/TR 50600-99-1, means that direct reference to it from the present document is viable. In addition, "environmental sustainability" practices contained within CLC/TR 50600-99-1:2018 [i.42] were transferred into CLC/TR 50600-99-2 [i.1], which clearly differentiates the practices as follows: • CLC/TR 50600-99-1 dealing with energy management; • CLC/TR 50600-99-2 dealing with environmental sustainability. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 4.2 ICT sites | ETSI EN 305 174-2 [3] specifies requirements for resource management of ICT sites based on the practices of CLC/TR 50600-99-1 [1] which are applicable to ICT sites of all sizes and business models. ETSI EN 305 174-1 [2] highlights that the concept of ICT sites was focussed on operator site (OS) and network data centre (NDC) facilities of broadband deployment. The definition of "ICT site" within ETSI EN 305 174-1 [2] and ETSI EN 305 174-2 [3] is a site containing a structure or group of structures dedicated to the accommodation, interconnection and operation of ITE and NTE together with all the facilities and infrastructures for power distribution and environmental control together with the necessary levels of resilience and security required to provide the desired service availability. However, it has to be highlighted that the wide range of buildings and other structures housing OS and NDC facilities makes it impossible to consider them all to be ICT sites subject to all of the requirements and recommendations of the present document. There are many reasons for this, including the fact that many were designed to, and only do, accommodate NTE (although they may be evolving to incorporate ITE). That legacy NTE may restrict the application of some of the practices of both ETSI EN 305 174-2 [3] and the present document. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 4.3 Other Network Distribution Nodes (NDNs) | ETSI EN 305 174-1 [2] defines an NDN as a grouping of NTE within the boundaries of an access network providing distribution of service from an operator site (OS). Structures historically identified as being NDNs and only housing NTE are now evolving to also contain ITE and are therefore technically ICT sites according to the definition of ETSI EN 305 174-1 [2] and ETSI EN 305 174-2 [3]. The practices of both ETSI EN 305 174-2 [3] and the present document may be applicable in specific cases but cannot be considered generally or universally applicable. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5 Mapping ETSI EN 305 174-2 to CLC/TR 50600-99-1 | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.1 General | ETSI EN 305 174-2 [3] specifies the requirements, applicable to all ICT sites, addressing the general engineering for energy management and management of end-of-life procedures. The energy management requirements are taken from specific recommendations of CLC/TR 50600-99-1 [1] and were selected because they are applicable to all sizes and types of ICT site. The following clauses describe these as: • Clause 5.2 Power supply and distribution. • Clause 5.3 Environmental control. • Clause 5.4 ICT equipment and software. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.2 Power supply and distribution | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.2.1 Design aspects | Table 1 shows the required energy management practices of ETSI EN 305 174-2 [3]. Further information can be obtained by reading the relevant reference in CLC/TR 50600-99-1 [1]. NOTE: Both CLC/TR 50600-99-1 [1] and ETSI EN 305 174-2 [3] state UPS incorrectly to be "uninterruptible power supply"; the correct term is "uninterruptible power system". "UPS systems" is therefore an incorrect phrase but is used here because of the source material. Table 1: ETSI EN 305 174-2 [3] requirements for power supply and distribution design Practice Reference within CLC/TR 50600-99-1 [1] Electrical equipment, other than Uninterruptible Power Supply (UPS) batteries, shall be selected which does not require cooling in normal operation. 5.4.1 Power supply and distribution capacity in excess of that anticipated in the short term shall not be provisioned. 5.4.4 5.2.8 The above practice has to take into account that OSs are designed to have a longer life than NDC facilities and as a result this practice is modified as follows: "Taking into consideration works required for future expansion, the power supply and distribution capacity should not exceed the expected load anticipated in the short term". Where UPS systems are required, they shall be modular (scalable). 5.4.5 5.4.27 Static UPS systems shall be compliant with the EU Code of Conduct for AC Uninterruptible Power Systems. 5.4.30 The above practice has to take into account that OSs do not always contain a UPS, but a rectifier and a battery directly in parallel to the 48 VDC bars: "Where UPS systems are required, they shall be compliant with the EU Code of Conduct for AC Uninterruptible Power Systems". Mechanical and electrical equipment shall be selected to enable local metering/monitoring of temperature and incorporated within a system that allows for reporting of temperature trends over a period of time as well as instantaneous temperature readings. 5.4.35 |
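The final practice in Table 1 (reference 5.4.35 of CLC/TR 50600-99-1 [1]) requires metering/monitoring that can report both instantaneous temperatures and temperature trends over a period of time. As a minimal sketch only - the class name, sampling window and trend heuristic below are assumptions of this example, not part of either standard - such a reporting layer might look like:

```python
from collections import deque
from datetime import datetime, timezone
from statistics import mean

class TemperatureMonitor:
    """Hypothetical per-device monitor: instantaneous readings plus a trend."""

    def __init__(self, equipment_id: str, window: int = 288):
        self.equipment_id = equipment_id
        # Rolling window of (timestamp, degrees C); 288 samples = 24 h at 5 min.
        self.readings: deque = deque(maxlen=window)

    def record(self, celsius: float) -> None:
        self.readings.append((datetime.now(timezone.utc), celsius))

    def instantaneous(self) -> float:
        """Most recent reading, as required for spot checks."""
        return self.readings[-1][1]

    def trend(self) -> float:
        """Mean of the newer half minus the mean of the older half of the
        window; a positive value indicates warming over the period."""
        if len(self.readings) < 2:
            return 0.0
        values = [c for _, c in self.readings]
        half = len(values) // 2
        return mean(values[half:]) - mean(values[:half])
```

Feeding such per-device records into a central reporting system would then provide both the spot readings and the trend reports the practice calls for.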
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.2.2 Operational aspects | Table 2 shows the required energy management practices of ETSI EN 305 174-2 [3]. Further information can be obtained by reading the relevant reference in CLC/TR 50600-99-1 [1]. Table 2: ETSI EN 305 174-2 [3] requirements for power supply and distribution operation Practice Reference within CLC/TR 50600-99-1 [1] Electrical equipment shall be subjected to regular maintenance to preserve or achieve a "like-new condition". Extension of 5.1.16 |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.3 Environmental control | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.3.1 Design aspects | Table 3 shows the required energy management practices of ETSI EN 305 174-2 [3]. Further information can be obtained by reading the relevant reference in CLC/TR 50600-99-1 [1]. Table 3: ETSI EN 305 174-2 [3] requirements for environmental control design Practice Reference within CLC/TR 50600-99-1 [1] Mechanical equipment shall be selected which does not require cooling in normal operation. 5.4.1 5.4.18 Cooling capacity in excess of that anticipated in the short term shall not be provisioned. 5.4.26 6.1.15 The above practice has to take into account that OSs are designed to have a longer life than NDC facilities and as a result this practice is modified as follows: "Taking into consideration works required for future expansion, cooling capacity should not exceed the expected load anticipated in the short term". Cooling system infrastructures shall be designed to maximize their efficiency under partial load conditions (e.g. variable speed (or frequency) controls shall be used to optimize energy consumption during changing load conditions). 5.4.5 5.4.16 5.4.17 5.4.23 6.1.15 6.4.12 Cooling designs and solutions shall maximize the use of free cooling taking into consideration site constraints, local climatic conditions or applicable regulations. 5.4.18 5.4.22 6.4.2 6.4.3 6.4.4 6.4.5 6.4.6 6.4.7 6.4.8 Cooling units shall be sized such that they are capable of providing the maximum amount of free cooling to the ICT equipment at the temperature and humidity recommended by the ICT equipment manufacturer(s). 5.4.26 Designs shall incorporate appropriately controlled variable speed fans. 5.4.23 6.4.12 Electrically commutated motors shall be used (and retro-fitted where possible) which are significantly more energy efficient than traditional AC motors across a wide range of speeds. 5.4.17 Where required, humidity control shall be centralized at the ICT site supply computer room air handling (CRAH) unit. Computer Room Air Conditioner (CRAC) and CRAH units shall not be equipped with humidity control capability, or reheat capability, to reduce both capital and on-going maintenance costs. 5.4.25 Mechanical and electrical equipment shall be selected to enable local metering/monitoring of temperature and incorporated within a system that allows for reporting of temperature trends over a period of time as well as instantaneous temperature readings. 5.4.35 Where air cooling is used for ICT equipment: • ICT equipment shall be aligned in the computer room space(s) in a hot/cold aisle configuration. 5.2.16 The above practice has to take into account that some NTE in OS facilities is not designed to benefit from hot aisle/cold aisle or raised (access) floor designs as the equipment takes in cooling air and exhausts it from the same face (generally the front). As a result this practice is modified as follows: "Where air cooling is used, ICT equipment shall be arranged in cold/hot aisle configuration. Equipment which takes in cooling air and exhausts it from the same face shall instead use air deflectors to prevent mixing of cooling and exhaust air". |
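Two of the design practices in Table 3 - variable-speed control under partial load and maximizing free cooling - are essentially control decisions. The sketch below illustrates the underlying logic under stated assumptions: the 3 °C heat-exchanger approach temperature, the minimum fan speed and the cube-law power estimate are illustrative values of this example, not requirements of either standard.

```python
def cooling_mode(outdoor_c: float, supply_setpoint_c: float,
                 approach_c: float = 3.0) -> str:
    """Prefer free cooling whenever outdoor air can meet the supply
    setpoint, allowing for the heat exchanger's approach temperature."""
    if outdoor_c <= supply_setpoint_c - approach_c:
        return "free"
    return "mechanical"

def fan_speed_fraction(load_fraction: float, min_speed: float = 0.3) -> float:
    """Variable-speed fan tracks the heat load instead of running flat out."""
    return max(min_speed, min(1.0, load_fraction))

# Fan power scales roughly with the cube of speed, so at 60 % load a fan
# slowed to 0.6 of full speed draws only about 0.6 ** 3 = 22 % of full power.
print(cooling_mode(outdoor_c=14.0, supply_setpoint_c=24.0))  # -> "free"
print(fan_speed_fraction(0.6) ** 3)                          # -> 0.216
```

The cube-law relationship is why the practices insist on variable-speed (or frequency) control: even modest speed reductions at partial load yield disproportionate energy savings.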
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.3.2 Operational aspects | Table 4 shows the required energy management practices of ETSI EN 305 174-2 [3]. Further information can be obtained by reading the relevant reference in CLC/TR 50600-99-1 [1]. Table 4: ETSI EN 305 174-2 [3] requirements for environmental control operation Practice Reference within CLC/TR 50600-99-1 [1] Mechanical equipment shall be subjected to regular maintenance to preserve or achieve a "like-new condition". 5.1.16 Allowable temperature and humidity ranges for existing installed ICT equipment shall be identified and: 1) the energy consumed by cooling systems shall be the minimum appropriate for these requirements (and not over-supplied); 2) ICT equipment with restrictive intake temperature ranges shall either: i) be marked for replacement as soon as is practicable with equipment capable of a wider intake range; or ii) be installed within groups of ICT, mechanical and electrical equipment with common environmental requirements. 5.4.26 5.1.8 5.4.10 ICT equipment with different airflow directions shall be installed in separate areas which have independent environmental controls. 5.1.13 5.4.6 5.2.18 The opportunity to optimize the refrigeration cycle set-points of mechanical refrigeration systems to minimize compressor energy consumption shall be regularly evaluated. 5.1.20 Where air cooling is used for ICT equipment: • blanking plates shall be installed in locations within cabinets/racks where there is no equipment. 5.1.10 |
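The second practice in Table 4 hinges on identifying equipment whose allowable intake range is narrower than the target operating envelope, since a single restrictive device can force over-cooling of the whole space. A minimal sketch of that screening step follows, assuming a target envelope of 18 °C to 27 °C (roughly the commonly cited ASHRAE recommended range); the names and thresholds are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class IntakeRange:
    name: str
    min_c: float
    max_c: float

def flag_restrictive(fleet: list[IntakeRange],
                     target_min_c: float = 18.0,
                     target_max_c: float = 27.0) -> list[IntakeRange]:
    """Return equipment that cannot tolerate the full target envelope and so
    must either be replaced or grouped in a separately controlled area."""
    return [e for e in fleet
            if e.min_c > target_min_c or e.max_c < target_max_c]

fleet = [IntakeRange("router-01", 15.0, 35.0),
         IntakeRange("legacy-nte-07", 20.0, 25.0)]
print([e.name for e in flag_restrictive(fleet)])  # -> ['legacy-nte-07']
```

Flagged units map directly onto options i) and ii) of the practice: replace them, or segregate them with equipment sharing the same environmental requirements.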
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.4 ICT equipment and software | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.4.1 Design aspects | None |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.4.2 Operational aspects | Table 5 shows the required energy management practices of ETSI EN 305 174-2 [3]. Further information can be obtained by reading the relevant reference in CLC/TR 50600-99-1 [1]. Table 5: ETSI EN 305 174-2 [3] requirements for ICT equipment operation Practice Reference within CLC/TR 50600-99-1 [1] An ITIL type Configuration Management Database and Service Catalogue shall be implemented in accordance with ISO/IEC 20000 series [i.34]. 5.1.6 The above practice has to take into account that the range (number and size) of ICT sites may limit the practicality of this requirement. As a result this practice is modified as follows: "The network operator shall establish a framework defining the ICT sites for which an ITIL type Configuration Management Database and Service Catalogue shall be implemented in accordance with ISO/IEC 20000 series [i.34]". Energy efficiency performance shall be a high priority criterion when choosing new ICT equipment. 5.2.1 ICT equipment shall perform the required task with the lowest power consumption in the expected environmental conditions (see notes 1 and 2). 5.2.5 The above practice has to take into account that: • the energy consumption of NTE is addressed by the EU Code of Conduct for the energy consumption of broadband equipment; • the energy consumption of servers is addressed by key performance indicators specified in ETSI EN 303 470 [i.17] and ISO/IEC 21836 [i.35]. Periodic reviews shall be undertaken to validate the consistency of deployment of ICT equipment with respect to the cooling design and identify and implement appropriate changes. 5.1.15 Software shall use the least energy to perform the required task whilst ensuring that the application fully meets the defined operational needs. 5.3.1 NOTE 1: The power consumption of the device in normal operating circumstances should be considered in addition to peak performance per Watt. NOTE 2: ENERGY STAR®, SERT™, SPECpower® or other appropriate performance metrics should be closely aligned to the target environment. |
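Table 5 makes energy efficiency a high-priority procurement criterion and points (notes 1 and 2) to performance-per-watt measured under conditions close to the target environment. The fragment below sketches how candidate equipment might be ranked on such a metric; the scores and power figures are invented for illustration, and the simple ratio stands in for formal metrics such as those of ETSI EN 303 470 [i.17] or ISO/IEC 21836 [i.35].

```python
def performance_per_watt(workload_score: float, avg_power_w: float) -> float:
    """Higher is better: useful work delivered per watt consumed, with power
    measured under realistic (not just peak) operating conditions."""
    return workload_score / avg_power_w

# Hypothetical candidates with benchmark scores and average measured power.
candidates = {
    "server_a": performance_per_watt(workload_score=4200.0, avg_power_w=350.0),
    "server_b": performance_per_watt(workload_score=3900.0, avg_power_w=280.0),
}
best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 2))  # -> server_b 13.93
```

Note that the lower-powered server_b wins despite a lower absolute score, which is exactly the trade-off the practice asks purchasers to weigh.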
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 5.5 Other practices of CLC/TR 50600-99-1 | There are many other energy management practices contained within CLC/TR 50600-99-1 [1] which cannot be converted into requirements because they are not applicable to all sizes or types of ICT sites or are dependent upon the design and/or operation of the ICT site. These are described in: • Clause 6 Construction. • Clause 7 Power supply and distribution. • Clause 8 Environmental control. • Clause 9 ICT equipment and software. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 6 Construction recommendations | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 6.1 General | ICT sites can be housed in buildings or other structures; the term "structures" is used hereafter as a generic term. The specification of a structure housing an ICT site is directly impacted by its location (see clause 6.2). The orientation of the structure, together with the design and thermal insulation properties of walls, roofs and windows, has a significant effect on the energy consumption associated with environmental control of the interior. The European standard for the construction of structures housing data centres (which can be applied to some extent to other ICT sites) is CENELEC EN 50600-2-1 [i.5] which specifies requirements and recommendations for: • site location and selection; • layout (configuration); • construction including relevant aspects of physical security and fire protection. NOTE: CENELEC EN 50600-2-1 [i.5] is the basis for ISO/IEC TS 22237-2 [i.39]. This clause contains practices that are applicable to the design of new or refurbished ICT sites (clause 6.2) and the operation of existing ICT sites (clause 6.3). National approaches exist for energy efficient buildings, such as HQE (Haute Qualité Environnementale) in France and BREEAM (Building Research Establishment Environmental Assessment Method) in the United Kingdom. |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 6.2 Design practices | |
d5dcdc7007958ac950012314e8b6e4ab | 105 174-2 | 6.2.1 Location of ICT sites |