6.1 General Principles
This clause contains the general O-RAN slicing architecture principles as described below:
• O-RAN slicing architecture and interface specifications shall be consistent with 3GPP architecture and interface specifications to the extent possible.
• O-RAN slicing architecture shall provide standardized management service interfaces for RAN slicing management services.
• O-RAN slicing architecture shall enable multi-vendor interoperability.
• O-RAN slicing architecture shall support various Network Operator deployment options.
• O-RAN slicing architecture shall support management of RAN slice subnets in multi-operator scenarios.
6.2 Slicing Requirements
6.2.1 Functional Requirements
The initial set of O-RAN slicing functional requirements, based on the use cases defined in the present document, is captured in table 6.2.1-1.

Table 6.2.1-1: O-RAN Slicing Functional Requirements

[REQ-SL-FUN1] O-RAN slicing architecture and interfaces shall support network slicing, where an instance of O-RAN network function can be associated with one or more slices. (Note: O-RAN.WG1.OAM-Architecture-v04.00 [i.12])
[REQ-SL-FUN2] O-RAN slicing architecture shall support differentiated handling of traffic for different RAN slice subnets. (Note: 3GPP TS 38.300 [10])
[REQ-SL-FUN3] O-RAN slicing architecture shall support resource isolation between slices. (Note: 3GPP TS 38.300 [10])
[REQ-SL-FUN4] O-RAN slicing architecture shall enable traffic and services in one RAN slice subnet having no impact on traffic and services in other RAN slice subnets in the same network. (Note: 3GPP TS 22.261 [1])
[REQ-SL-FUN5] O-RAN slicing architecture shall enable mechanisms to avoid shortage of shared resources in one slice breaking the service level agreement for another slice. (Note: 3GPP TS 38.300 [10])
[REQ-SL-FUN6] O-RAN slicing architecture shall enable defining a priority order between different RAN slice subnets in case multiple slices compete for resources on the same RAN. (Note: 3GPP TS 22.261 [1])
[REQ-SL-FUN7] O-RAN slicing architecture shall apply policies at S-NSSAI level according to the SLA required by the network slice. (Note: -)
[REQ-SL-FUN8] O-RAN slicing architecture shall support means by which the operator can differentiate policy control, functionality and performance provided in different RAN slice subnets. (Note: 3GPP TS 22.261 [1])
[REQ-SL-FUN9] O-RAN slicing architecture shall support QoS differentiation within a slice. (Note: 3GPP TS 38.300 [10])
[REQ-SL-FUN10] O-RAN slicing architecture shall enable slice aware radio resource management strategies (such as admission control, congestion control, handover preparation). (Note: 3GPP TS 38.300 [10])
[REQ-SL-FUN11] O-RAN slicing architecture shall allow creation, modification, and deletion of a RAN slice subnet. (Note: 3GPP TS 28.531 [4])
[REQ-SL-FUN12] O-RAN slicing architecture shall support interaction between the SMO Framework and Non-RT RIC to consume provisioning management services exposed by each O-RAN managed element to configure RAN slice subnets through the O1 interface. (Note: RAN Slice SLA Assurance use case, NSSI Resource Allocation Optimization use case)
[REQ-SL-FUN13] O-RAN slicing architecture shall support the interaction between the SMO Framework and Non-RT RIC to consume management of slice specific PM jobs, PM data collection/storage/query/statistical reports from O-RAN network functions through the O1 interface. (Note: RAN Slice SLA Assurance use case, NSSI Resource Allocation Optimization use case)
[REQ-SL-FUN14] O-RAN slicing architecture shall support interaction between the SMO Framework and Non-RT RIC to retrieve/notify RAN slice subnet requirements (SLA) along with O-NSSI information. (Note: RAN Slice SLA Assurance use case, O-RAN Slice Subnet Instance Creation use case)
[REQ-SL-FUN15] O-RAN slicing architecture shall support provisioning, generation and monitoring of slice specific RAN performance metrics through the O1 interface. (Note: RAN Slice SLA Assurance use case, NSSI Resource Allocation Optimization use case)
[REQ-SL-FUN16] O-RAN slicing architecture shall support training, deployment and execution of AI/ML models for slice SLA assurance and NSSI resource allocation optimization. (Note: RAN Slice SLA Assurance use case, NSSI Resource Allocation Optimization use case)
[REQ-SL-FUN17] O-RAN slicing architecture shall support slice specific policy guidance, enrichment information and policy feedback. (Note: RAN Slice SLA Assurance use case)
[REQ-SL-FUN18] O-RAN slicing architecture shall support provisioning, generation and monitoring of slice specific RAN performance data through the E2 interface. (Note: RAN Slice SLA Assurance use case)
[REQ-SL-FUN19] O-RAN slicing architecture shall support reconfiguration of slice specific RAN parameters and resources for slice SLA assurance. (Note: RAN Slice SLA Assurance use case)
[REQ-SL-FUN20] O-RAN slicing architecture shall enable creation of O-RAN network slice subnet instances as O-RAN Network Service (NS) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Instance Creation use case)
[REQ-SL-FUN21] O-RAN slicing architecture shall enable association and disassociation of O-Cloud NS instances with corresponding O-NSSIs. (Note: O-RAN Slice Subnet Instance Creation use case, O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN22] O-RAN slicing architecture shall enable creation of O-RAN network slice subnet instances as O-RAN Network Function (NF) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Instance Creation use case)
[REQ-SL-FUN23] O-RAN slicing architecture shall enable association and disassociation of O-Cloud NF instances with corresponding O-NSSIs. (Note: O-RAN Slice Subnet Instance Creation use case, O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN24] O-RAN slicing architecture shall support (re-)configuration of an O-NSSI's constituent network functions through the O1 interface. (Note: O-RAN Slice Subnet Instance Creation use case, O-RAN Slice Subnet Modification use case, O-RAN Slice Subnet Configuration use case)
[REQ-SL-FUN25] O-RAN slicing architecture shall support (re-)configuration of O-RAN Network Slice Subnet Instance (O-NSSI) attributes. (Note: O-RAN Slice Subnet Instance Creation / Activation / Modification / Deactivation / Configuration use case)
[REQ-SL-FUN26] O-RAN slicing architecture shall have the capability for the establishment of required transport network connectivity between O-RAN NFs during provisioning of O-RAN network slice subnet instances. (Note: O-RAN Slice Subnet Instance Creation use case, O-RAN Slice Subnet Instance Modification use case)
[REQ-SL-FUN27] O-RAN slicing architecture shall enable Non-RT RIC to be notified when an O-NSSI has been created, activated, modified, deactivated and terminated. (Note: O-RAN Slice Subnet Instance Creation use case, O-RAN Slice Subnet Instance Modification use case, O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN28] O-RAN slicing architecture shall support the capability to activate the constituent physical network functions such as O-DU and O-RU within an O-NSSI. (Note: O-RAN Slice Subnet Instance Activation use case)
[REQ-SL-FUN29] O-RAN slicing architecture shall enable modification of O-RAN network slice subnet instances through modification (such as scaling, updating, instantiation, etc.) of O-RAN Network Service (NS) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Instance Modification use case)
[REQ-SL-FUN30] O-RAN slicing architecture shall enable modification of O-RAN network slice subnet instances through modification (such as scaling, updating, instantiation, etc.) of O-RAN Network Function (NF) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Instance Modification use case)
[REQ-SL-FUN31] O-RAN slicing architecture shall support the capability to deactivate the constituent physical network functions such as O-DU and O-RU within an O-NSSI. (Note: O-RAN Slice Subnet Instance Deactivation use case)
[REQ-SL-FUN32] O-RAN slicing architecture shall enable removal of constituent O-RAN Network Service (NS) instance(s) that were functioning within O-Cloud(s) and were associated to O-RAN network slice subnet instance(s). (Note: O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN33] O-RAN slicing architecture shall enable removal of constituent O-RAN Network Function (NF) instance(s) that were functioning within O-Cloud(s) and were associated to O-RAN network slice subnet instance(s). (Note: O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN34] O-RAN slicing architecture shall have the capability for the removal of non-shared transport network connectivity between O-RAN NFs during termination of O-RAN network slice subnet instances. (Note: O-RAN Slice Subnet Instance Termination use case)
[REQ-SL-FUN35] O-RAN slicing architecture shall enable reservation of O-RAN Network Service (NS) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Feasibility Check)
[REQ-SL-FUN36] O-RAN slicing architecture shall enable reservation of O-RAN Network Function (NF) instance(s) within O-Cloud(s). (Note: O-RAN Slice Subnet Feasibility Check)
[REQ-SL-FUN37] O-RAN slicing architecture shall enable retrieval of network utilization information from the SMO and Non-RT RIC (e.g. load level information, resource usage information from management data analytics services). (Note: O-RAN Slice Subnet Feasibility Check)
[REQ-SL-FUN38] O-RAN slicing architecture shall support interaction between the SMO Framework and Non-RT RIC to consume O-Cloud management and orchestration services through the O2 interface. (Note: RAN Slice SLA Assurance use case, NSSI Resource Allocation Optimization use case)
[REQ-SL-FUN39] O-RAN slicing architecture shall enable provisioning and management of multiple slices on O-RU. (Note: Multi-vendor Slices use case)
[REQ-SL-FUN40] O-RAN slicing architecture shall enable O-RU to route per slice user plane traffic to one or more O-DUs. (Note: Multi-vendor Slices use case)
6.2.2 Non-Functional Requirements
The initial set of O-RAN slicing non-functional requirements, based on the use cases defined in the present document, is captured in table 6.2.2-1.

Table 6.2.2-1: O-RAN Slicing Non-Functional Requirements

[REQ-SL-NFUN1] O-RAN slicing architecture shall support use of AI/ML to support RAN slicing use cases.
7 O-RAN Reference Slicing Architecture
7.1 O-RAN Reference Slicing Architecture
This clause provides the O-RAN reference slicing architecture (see figure 7.1-1) along with the high level roles and responsibilities of the O-RAN network functions.

Figure 7.1-1: O-RAN Reference Slicing Architecture

The O-RAN reference slicing architecture includes slice management functions along with O-RAN architectural components. As O-RAN's general principle is to be as compliant as possible with the 3GPP architecture, these slice management functions are the 3GPP defined NSMF and NSSMF with extensions for O-RAN network functions. Various deployment options for the location of the NSMF and NSSMF have been presented in [i.4], and more detailed architectural implementation options for SMOs, including NFV-MANO and ONAP, are presented in annex A.
7.2 Non-RT RIC
The fundamental role of the Non-RT RIC in the O-RAN slicing architecture is to gather long term slice related data through interaction with the SMO framework and to apply AI/ML based approaches, interworking with the Near-RT RIC, to enable innovative RAN slicing use cases. For this purpose, the Non-RT RIC shall be aware of RAN slice subnets and their respective SLAs through the SMO. In addition, the Non-RT RIC can retrieve enrichment information from 3rd party applications, enabling advanced RAN slicing technology to be applied in the O-RAN framework. In order to construct AI/ML models to be deployed in the Near-RT RIC, the Non-RT RIC retrieves slice specific performance metrics, configuration parameters and required attributes of the RAN slice subnets from the SMO framework. Complex problems for the Near-RT RIC, e.g. applying RRM policies, can be tackled by the learning capabilities of AI/ML. The output of these algorithms can lead to non-real-time optimization of the slice specific parameters of the Near-RT RIC, O-CU and O-DU over the O1 interface through SMO interaction. Moreover, these performance, configuration and other slice related data are used to generate policy guidance and assist the Near-RT RIC over A1 to provide closed loop slice optimization. Applying such slice optimizations in the Near-RT RIC can be used for SLA assurance and can prevent SLA violations between the slices as well.
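As an informative illustration of this closed loop, the following Python sketch shows how a Non-RT RIC application could turn long term slice performance data into A1 policy guidance. All function and attribute names (get_slice_pm, put_policy, the policy body fields) are illustrative assumptions and do not correspond to any normative O-RAN or 3GPP API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SliceSla:
    s_nssai: dict                 # e.g. {"sst": 1, "sd": "000001"}
    target_dl_thpt_mbps: float
    target_latency_ms: float

def evaluate_slice(pm: dict, sla: SliceSla) -> Optional[dict]:
    """Return an A1 policy body if the observed KPIs drift from the SLA targets, else None."""
    if pm["dl_thpt_mbps"] < 0.9 * sla.target_dl_thpt_mbps:
        # Guide the Near-RT RIC to favour this slice's scheduling (policy content is illustrative).
        return {
            "scope": {"sliceId": sla.s_nssai},
            "statement": {"priorityLevel": 1, "minPrbRatio": 30},
        }
    return None

def control_loop(smo, a1_client, slas: list) -> None:
    """Non-real-time loop: collect slice PM from the SMO, derive A1 guidance where needed."""
    for sla in slas:
        pm = smo.get_slice_pm(sla.s_nssai)        # long term PM data collected over O1
        policy = evaluate_slice(pm, sla)
        if policy is not None:
            a1_client.put_policy(policy)          # slice specific guidance towards the Near-RT RIC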
7.3 Near-RT RIC
The Near-RT RIC is the component which enables near-real-time RAN slice subnet optimization through the execution of slicing related xApps and the communication of the necessary parameters to the O-CU and O-DU through the E2 interface. Deployed xApps can utilize either AI/ML based models or other control schemes, which can further be guided by A1 policies generated by the Non-RT RIC. In order to drive sliced RAN resources properly, the Near-RT RIC shall have knowledge of the existing RAN slice subnets as well as their requirements. This information will be received through the O1 interface during provisioning of RAN slice subnets. Therefore, similar to the Non-RT RIC, the Near-RT RIC will be aware of RAN slice subnets through O-RAN specific information models and provisioning procedures. In the O-RAN slicing architecture, configuration of slice resources on E2 nodes can be achieved through a slow loop with O1 configuration and through a fast loop with E2 configuration. This architecture enables advanced slicing use cases such as RAN slice SLA assurance and further enhances 3GPP slicing capability without misalignment. In the context of the RAN, the SLA assurance parameters sent over the A1 interface help the Near-RT RIC guide the behavior of the E2 Nodes (O-CU-UP, O-CU-CP, O-DU). While the Near-RT RIC is capable of fast-loop configuration, slicing related O1 configuration, such as RRM policy information sent to the O-CU, configured by the SMO framework will be taken into account. Moreover, slice specific near-RT performance data will be monitored through the E2 interface, which requires proper PM mechanisms between the E2 nodes and the Near-RT RIC as well.
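The following informative sketch illustrates how the slow loop and fast loop could be combined inside an xApp: the O1 baseline provides per-slice PRB ratios, the E2 fast loop reacts to near-real-time reports, and A1 guidance bounds the adjustment. The data structures and field names are hypothetical, not a normative xApp API.

def adjust_slice_quota(o1_baseline: dict, e2_report: dict, a1_policy: dict) -> dict:
    """Compute an updated dedicated PRB ratio for one slice, bounded by O1/A1 settings."""
    slice_id = e2_report["s_nssai"]
    baseline = o1_baseline[slice_id]["dedicated_prb_ratio"]
    floor = a1_policy.get("minPrbRatio", baseline)

    # Fast loop: if near-RT throughput drops below target, nudge the quota upwards,
    # but never below the O1/A1 derived floor and never above the configured maximum.
    if e2_report["dl_thpt_mbps"] < e2_report["target_dl_thpt_mbps"]:
        proposed = baseline + 5
    else:
        proposed = baseline
    return {
        "s_nssai": slice_id,
        "dedicated_prb_ratio": max(floor, min(proposed, o1_baseline[slice_id]["max_prb_ratio"])),
    }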
7.4 O-RAN Central Unit (O-CU)
The O-CU, which includes a single O-CU-CP and possibly multiple O-CU-UP(s) communicating through the E1 interface, needs to support the slicing features defined by 3GPP. Depending on slice requirements, an O-CU-UP can be shared across slices, or a specific instance of O-CU-UP can be instantiated per slice. On top of the 3GPP slicing features, O-RAN further enhances slicing through the utilization of the E2 interface and the dynamic slice optimizations assisted by the Near-RT RIC, along with the enhanced O1 interface supporting additional slice configuration parameters. The O-CU stacks, which are the upper layer protocols of the RAN stack, shall be slice aware and execute slice specific resource allocation and isolation strategies. These stacks are initially configured through the O1 interface based on the slice specific requirements and then dynamically updated through the E2 interface via the Near-RT RIC for various slicing use cases. Based on the PM requests from the SMO and the Near-RT RIC, the O-CU shall generate and send specific PMs through the O1 and E2 interfaces respectively, where the PMs can be used for slice performance monitoring and slice SLA assurance purposes.
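A minimal informative sketch of one slice aware strategy mentioned above, admission control at the O-CU-CP, is given below. It assumes per-slice session quotas configured over O1 and possibly updated over E2; the data model and names are illustrative only.

def admit_session(active_sessions: dict, quotas: dict, s_nssai: str) -> bool:
    """Admit a new PDU session only if the slice has not exhausted its configured quota."""
    used = active_sessions.get(s_nssai, 0)
    limit = quotas.get(s_nssai, 0)
    if used < limit:
        active_sessions[s_nssai] = used + 1
        return True
    return False   # reject, or trigger a pre-emption policy for lower priority slices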
7.5 O-RAN Distributed Unit (O-DU)
The O-DU, which runs the lower layer protocols of the RAN stack, shall support slice specific resource allocation strategies as well. Based on the initial O1 configuration of PRB allocation levels, the O-CU directives over the F1 interface and the dynamic guidance received from the Near-RT RIC over the E2 interface, the MAC layer needs to allocate and isolate the relevant PRBs to specific slices. Based on the PM requests from the SMO and the Near-RT RIC, the O-DU shall generate and send specific PMs through the O1 and E2 interfaces respectively, where the PMs can be used for slice performance monitoring and slice SLA assurance purposes.
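The following informative sketch illustrates how configured per-slice ratios could be translated into PRB budgets for one scheduling interval at the O-DU. The policy attribute names and the split between dedicated and maximum ratios follow the spirit of the RRM policy described above but are illustrative assumptions, not a scheduler specification.

def partition_prbs(total_prbs: int, policies: dict) -> dict:
    """Translate per-slice RRM policy ratios into PRB budgets for one scheduling interval."""
    budgets = {}
    for s_nssai, p in policies.items():
        dedicated = total_prbs * p["dedicated_ratio"] // 100   # isolated PRBs, never shared
        ceiling = total_prbs * p["max_ratio"] // 100           # upper bound including shared PRBs
        budgets[s_nssai] = {"dedicated": dedicated, "max": ceiling}
    return budgets

# Example: 100 PRBs shared between an eMBB and a URLLC slice.
print(partition_prbs(100, {
    "eMBB":  {"dedicated_ratio": 40, "max_ratio": 80},
    "URLLC": {"dedicated_ratio": 20, "max_ratio": 30},
}))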
7.6 A1 Interface
A1, which is the interface between the Non-RT RIC and the Near-RT RIC, supports policy management, ML model management and enrichment information services [i.13]. These three services will be utilized for various slicing use cases, such as slice SLA assurance. Policy management will be used by the Non-RT RIC to send slice specific (e.g. S-NSSAI based) policies to guide the Near-RT RIC with slice resource allocations and slice specific control activities, as well as to receive slice specific policy feedback for the policies deployed on the Near-RT RIC. For the use cases that make use of external enrichment data, or where the Non-RT RIC produces enrichment information, the A1 enrichment interface will be used to send slice specific enrichment data to the Near-RT RIC.

It should be noted that slice specific A1 policies are not persistent (they do not survive a restart of the Near-RT RIC) and, while they can take precedence over O1 slice specific configurations, they should be aligned with and not deviate significantly from the O1 configurations.

NOTE: It is intended to add examples for the usage of A1 services for slicing use cases in the next version of the present document.
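As an informative orientation only, the shape of a slice specific A1 policy instance scoped by an S-NSSAI could look like the structure below. The exact policy types and attribute names are defined in the A1 specifications; every field shown here is a hypothetical example, not a normative payload.

import json

slice_sla_policy = {
    "policyId": "slice-sla-embb-001",
    "scope": {
        "sliceId": {"sst": 1, "sd": "00007B", "plmnId": {"mcc": "001", "mnc": "01"}}
    },
    "statement": {
        "gfbr_dl_mbps": 200,      # guaranteed DL throughput target for the slice
        "max_latency_ms": 20,     # latency objective used for SLA assurance
        "priorityLevel": 2
    }
}

print(json.dumps(slice_sla_policy, indent=2))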
7.7 E2 Interface
E2, which is the interface between the Near-RT RIC and the E2 nodes, supports the E2 primitives (Report, Insert, Control and Policy) to control the services exposed by the E2 nodes [i.14]. These primitives will be used by slice specific applications (xApps) to drive the E2 nodes' slice configurations and slice specific behaviour, such as slice based radio resource management, radio resource allocations, MAC scheduling policies and other configuration parameters used by various RAN protocol stacks. E2 will be used to configure and receive slice specific reports and performance data from the E2 nodes. These reports can include 3GPP defined slice specific PMs (such as PRB utilization, average delay, etc., as specified in 3GPP TS 28.552 [8]) and new PMs that can be defined by O-RAN to support various slicing use cases.

NOTE: It is intended to add examples for the usage of E2 primitives for slicing use cases in the next version of the present document.
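The informative sketch below illustrates the consumer side of the Report primitive: an xApp callback that aggregates per-slice PRB utilization from incoming indications. The subscription and indication formats are defined by the relevant E2 service models; the callback and field names here are hypothetical.

from collections import defaultdict

prb_usage = defaultdict(list)   # s_nssai -> list of reported PRB utilization samples

def on_e2_report(indication: dict) -> None:
    """Callback invoked for each E2 Report indication carrying slice PM measurements."""
    for meas in indication["slice_measurements"]:
        prb_usage[meas["s_nssai"]].append(meas["prb_utilization_pct"])

def average_utilization(s_nssai: str) -> float:
    """Return the mean PRB utilization observed so far for one slice."""
    samples = prb_usage[s_nssai]
    return sum(samples) / len(samples) if samples else 0.0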
7.8 O1 Interface
O1, which is the interface between the O-RAN managed elements and the management entity, shall be used as specified in O-RAN.WG1.O1-Interface.0-v04.00 [13] to configure slice specific parameters of O-RAN nodes based on the service requirements of the slice. Some of the slice specific information models have been specified in 3GPP TS 28.541 [7], including the RRM policy attributes that provide the ratio of PRBs and the split of these PRBs among slices. To support O-RAN slicing use cases and their requirements, the 3GPP information models can be extended and additional information models can be defined to capture slice profiles and slice specific configuration parameters, which will be carried over the O1 interface as well. O1 will also be used to configure and gather slice specific performance metrics and slice specific faults from O-RAN nodes.

NOTE: It is intended to add examples for the usage of O1 for the configuration, performance and fault management of slicing use cases in the next version of the present document.
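For orientation, the informative snippet below represents slice specific RRM policy data of the kind carried over O1, loosely following the RRMPolicyRatio attributes defined in 3GPP TS 28.541 [7]. The Python structure is only a sketch; the normative data model is the 3GPP/O-RAN NRM and YANG definition.

rrm_policy_ratio = {
    "resourceType": "PRB",
    "rRMPolicyMemberList": [
        {"pLMNId": {"mcc": "001", "mnc": "01"}, "sNSSAI": {"sst": 1, "sd": "000001"}}
    ],
    "rRMPolicyMaxRatio": 80,        # upper bound of PRBs the slice may use (including shared PRBs)
    "rRMPolicyMinRatio": 40,        # PRBs guaranteed to the slice (may be shared when unused)
    "rRMPolicyDedicatedRatio": 20,  # PRBs dedicated to the slice and never shared
}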
7.9 O2 Interface
O2, which is the interface between the SMO and O-Cloud as introduced in [i.15], will be used for life cycle management of virtual O-RAN network functions. As part of RAN NSSI creation and provisioning, RAN NSSMF, in interaction with SMO, triggers the instantiation of necessary O-RAN functions (such as Near-RT RIC, O-CU-CP, O-CU-UP and O-DU) based on slice requirements. After the creation of RAN NSSI, NSSMF in interaction with SMO can execute NSSI modification and NSSI deletion procedures. Since Non-RT RIC is part of SMO and would be instantiated along with other SMO functions, O2 is not expected to be used for lifecycle management of Non-RT RIC.

NOTE: It is intended to add examples for the usage of O2 for slice lifecycle management of O-RAN network functions in the next version of the present document.
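As a simplified informative sketch, the function below shows the order of operations when a RAN NSSI is created and the required virtualized O-RAN NFs are instantiated towards the O-Cloud. The o2_client object and its methods are hypothetical wrappers, not the O2 services specified by O-RAN WG6.

def create_ran_nssi(o2_client, slice_profile: dict) -> dict:
    """Instantiate the virtualized O-RAN NFs required by a new RAN NSSI (illustrative only)."""
    deployed = {}
    for nf_type in ("near-rt-ric", "o-cu-cp", "o-cu-up", "o-du"):
        descriptor = slice_profile["nf_descriptors"][nf_type]      # sizing derived from slice requirements
        deployed[nf_type] = o2_client.instantiate_nf(descriptor)   # lifecycle request towards the O-Cloud
    return deployed   # instance identifiers kept for later modification / termination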
7.10 Transport Network Slicing
As a RAN Slice Subnet is composed not only of the O-RAN NFs but also of the transport network components, namely the Fronthaul interface (FH) between the O-RU and O-DU and the Midhaul interface (MH) between the O-DU and O-CU, transport slicing aspects need to be considered and incorporated into the overall O-RAN Slicing Architecture.

There are different emerging approaches for defining transport network slicing that can meet 5G requirements, addressing which mobile interfaces (Fronthaul and Midhaul in the RAN slice subnet, Backhaul between the RAN slice subnet and the Core slice subnet, and between the Core slice subnet and the PDN) can and need to be sliced, what form the slices will take and the number of slices required at the transport layer [i.16]. This clause aims to capture the transport network slicing aspects for Fronthaul and Midhaul links and to provide references to the relevant WG4 (Fronthaul), WG6 (intra-DC virtual links) and WG9 (inter-DC links) specifications as these specifications become available. Given the current state and progress in these WGs, this version of the O-RAN Slicing Architecture Specification focuses only on WG9 aspects, with the scope of the network segments covered by WG9 given in figure 7.10-1. The area inside the dotted green line characterizes the transport networks composed of a number of Transport Network Elements (TNE) deployed among different components defined in other O-RAN WGs. WG9 does not define the interfaces along the dotted green line. As an example, the Fronthaul interfaces of an O-RU or O-DU are defined by WG4.

Figure 7.10-1: Xhaul Transport Network Overview (Source: Figure 3-1 [i.16])

Packet Switched Transport Network Slicing:

O-RAN WG9 has defined the architecture and the best practices for an Open Xhaul transport network based on an end-to-end packet switching architecture in the "Xhaul Packet Switched Architectures and Solutions specification" [i.16], which can support the requirements outlined in the O-RAN WG9 Transport Requirements document [i.17]. While the "Xhaul Packet Switched Architectures and Solutions specification" [i.16] describes the best practices for O-RAN transport based on end-to-end packet switching technology, it is recognized that other solutions, not based on packet switching, could be utilized or mixed with a packet switching solution as well. As indicated in section 17 of [i.16], the terms hard and soft slicing have emerged for transport networks, referring to the level of isolation between different slices:
• Hard slicing: Transport resources are dedicated to a specific "Network Slice Instance" (NSI) and cannot be shared with other slices.
• Soft slicing: Transport resources are shared and can be re-used by other slices.
A packet switched infrastructure, as described in [i.16], has an extensive toolset, consisting of underlay forwarding solutions, Quality of Service (QoS) and VPNs, that allows an operator to partition the transport network in a scalable way to cater for both hard and soft slices. Transport slice requirements and the associated toolset are shown in figure 7.10-2.

Figure 7.10-2: Packet switched toolset for transport level slicing (Source: Figure 17-2 [i.16])

Further details of transport network slicing based on packet switching technology are captured in section 17 and section 18.10 of [i.16], including packet-switched underlay networks, transport network Quality of Service, 5G services and slices, and a transport slicing scenario on a packet switched Xhaul network.
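The informative sketch below contrasts the two isolation levels described above: a hard slice is mapped to dedicated transport resources, while a soft slice shares the underlay and relies on per-slice QoS (e.g. DSCP based queuing). The attributes and values are assumptions for illustration, not WG9 requirements.

def plan_transport_slice(slice_req: dict) -> dict:
    """Map a RAN slice subnet's transport requirement to a hard or soft transport slice."""
    if slice_req.get("isolation") == "strict":
        # Hard slice: dedicate transport resources (e.g. a dedicated VLAN/VPN with reserved bandwidth).
        return {"mode": "hard", "vlan": slice_req["vlan_id"], "reserved_mbps": slice_req["peak_mbps"]}
    # Soft slice: share the underlay and differentiate per slice with QoS marking.
    return {"mode": "soft", "vlan": slice_req["vlan_id"], "dscp": slice_req.get("dscp", 0)}

print(plan_transport_slice({"isolation": "strict", "vlan_id": 100, "peak_mbps": 500}))
print(plan_transport_slice({"isolation": "shared", "vlan_id": 200, "dscp": 46}))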
8 O-RAN Slice Subnet Provisioning Procedures
8.1 O-RAN Slice Subnet Instance (O-NSSI) Allocation Procedure

The procedure for allocation of an O-RAN slice subnet instance to satisfy the O-NSSI requirements is given in figure 8.1-1.

Figure 8.1-1: O-NSSI Allocation Procedure

1) Slice subnet management function receives an allocate O-NSSI request (the AllocateNssi operation specified in 3GPP TS 28.531 [4], clause 6.5.2 shall apply) with network slice subnet related requirements (the network slice subnet related requirements specified in SliceProfile in 3GPP TS 28.541 [7], clause 6.3.4 shall be used).
2) Slice subnet management function checks the feasibility of the O-RAN slice subnet related requirements utilizing the O-NSSI Feasibility Check Procedure. If the network slice subnet related requirements can be satisfied, the following steps are needed, else go to step 14.
3) Based on the O-RAN slice subnet related requirements, slice subnet management function decides whether to use an existing O-NSSI or create a new O-NSSI.
4) If using an existing O-NSSI and the existing O-NSSI needs to be modified, slice subnet management function invokes the O-NSSI Modification Procedure.
5) If creating a new O-NSSI, slice subnet management function creates the MOI for the O-NSSI to be created and then derives the corresponding O-RAN slice subnet constituent (i.e. O-RAN NF, constituent O-NSSI) related requirements and transport network related requirements from the received network slice subnet related requirements.
For each required O-NSSI constituent, steps 6-11 are needed:
6) If the required O-NSSI constituent is a constituent O-NSSI, slice subnet management function invokes the O-NSSI Allocation Procedure (clause 8.1). If the required O-NSSI constituent is a virtual O-RAN NF instance, steps 7-8 are needed:
7) Slice subnet management function derives O-Cloud requirements for the O-RAN NF.
8) If the O-RAN NF is a virtual NF, slice subnet management function establishes virtual intra-Cloud links, allocates slice tags (i.e. VLAN ID) and optionally instantiates the NF on O-Cloud by executing Network Slice Creation as specified in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.10.1. If an existing O-RAN virtual NF instance needs to be modified, slice subnet management function can execute Scale Out of NF as specified in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.2, or Scale In of NF as specified in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.3.
9) Slice subnet management function derives CM requirements for the O-RAN NF.
10) Slice subnet management function executes Provisioning Management Services as specified in O-RAN.WG1.O1-Interface.0-v4.00 [13], clause 2.1.
11) Slice subnet management function configures the O-NSSI MOI.
For each transport network requirement, step 12 is needed:
12) If the transport link is a physical link that needs to be established across clouds, slice subnet management function requests transport link establishment from TN management functions.
13) If Non-RT RIC has subscribed for O-NSSI allocation event notifications, slice subnet management function notifies Non-RT RIC with the O-NSSI information.
14) Slice subnet management function returns the appropriate O-NSSI allocation result (the AllocateNssi operation specified in 3GPP TS 28.531 [4], clause 6.5.2 shall apply). If the O-NSSI is created successfully, the result includes the relevant constituent network slice subnet instance information (the NetworkSliceSubnet IOC specified in 3GPP TS 28.541 [7], clause 6.3.2 shall apply).
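As an informative summary of the decision flow in steps 1 to 14, a pseudocode-style sketch is given below. The mns object stands for a hypothetical facade over the management services involved (feasibility check, O-Cloud orchestration, O1 provisioning, TN management); none of its methods are normative operations, and the sketch is not an implementation of the 3GPP AllocateNssi operation.

def allocate_nssi(mns, request: dict) -> dict:
    """Illustrative control flow for the O-NSSI Allocation Procedure."""
    if not mns.feasibility_check(request):                  # step 2: O-NSSI Feasibility Check Procedure
        return {"status": "rejected"}                       # step 14: requirements cannot be satisfied

    if mns.can_reuse_existing_nssi(request):                # step 3
        nssi = mns.modify_nssi(request)                     # step 4: O-NSSI Modification Procedure
    else:
        nssi = mns.create_nssi_moi(request)                 # step 5: create MOI, derive constituents
        for c in mns.derive_constituents(request):          # steps 6-11 per constituent
            if c["kind"] == "nssi":
                allocate_nssi(mns, c["requirements"])       # step 6: recursive allocation
            else:
                if c.get("virtualized"):
                    mns.instantiate_on_ocloud(c)            # steps 7-8: intra-Cloud links, slice tags, NF
                mns.configure_nf_over_o1(c)                 # steps 9-10: provisioning management services
        mns.configure_nssi_moi(nssi)                        # step 11
        for tn in mns.derive_transport_requirements(request):
            mns.request_transport_link(tn)                  # step 12: TN management functions

    mns.notify_non_rt_ric(nssi)                             # step 13: notification if subscribed
    return {"status": "allocated", "nssi": nssi}            # step 14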
8.2 O-RAN Slice Subnet Instance (O-NSSI) Modification Procedure

The procedure for modification of an O-RAN slice subnet instance to satisfy the O-NSSI requirements is given in figure 8.2-1.

Figure 8.2-1: O-NSSI Modification Procedure

1) Slice subnet management function receives a modify O-NSSI request (the modifyMOIAttributes operation specified in 3GPP TS 28.532 [5], clause 11.1.1.3 shall apply).
2) Based on the modification request, slice subnet management function derives modification requirements for the O-NSSI that can involve its constituents and transport network.
3) If required, slice subnet management function checks the feasibility of the modification requirements utilizing the O-NSSI Feasibility Check Procedure. If the network slice subnet modification requirements can be satisfied, the following steps are needed, else go to step 11.
For each O-NSSI constituent that needs modification, steps 4-9 are needed:
4) If the constituent is an O-NSSI constituent that needs to be modified, slice subnet management function invokes the O-NSSI Modification Procedure (clause 8.2). If the constituent is an O-RAN network function, steps 5 and 6 are needed:
5) If the constituent network function is an O-RAN virtual network function that needs to be instantiated / modified / deleted, slice subnet management function realizes the following use cases as specified in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], depending on the modification requirements:
a) O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.1 Instantiate NF On O-Cloud.
b) O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.2 Scale Out of NF.
c) O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.3 Scale In of NF.
d) O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.5 Terminate NF on O-Cloud.
6) If the constituent network function needs configuration changes, slice subnet management function executes Provisioning Management Services as specified in O-RAN.WG1.O1-Interface.0-v4.00 [13], clause 2.1.
If the O-NSSI constituent has a TN part that needs to be modified, for each transport network requirement, steps 7-8 are needed:
7) If the transport link is within a cloud, slice subnet management function requests virtual link modification within the respective O-Cloud from the NFO. Details of this request are not supported in the present document.
8) If the transport link is a physical link that is established across clouds, slice subnet management function requests transport link modification from TN management functions. Details of this request are not supported in the present document.
9) If the O-NSSI MOI needs to be modified, slice subnet management function reconfigures the O-NSSI MOI.
10) If Non-RT RIC has subscribed for O-NSSI modification event notifications, slice subnet management function notifies Non-RT RIC with the modified O-NSSI information. Details of event notification between slice subnet management function and Non-RT RIC are not supported in the present document.
11) Slice subnet management function returns the O-NSSI modification result (the modifyMOIAttributes operation specified in 3GPP TS 28.532 [5], clause 11.1.1.3 shall apply).

8.3 O-RAN Slice Subnet Instance (O-NSSI) Deallocation Procedure

The procedure for deallocation of an O-RAN slice subnet instance to satisfy the O-NSSI requirements is given in figure 8.3-1.
Figure 8.3-1: O-NSSI Deallocation Procedure

1) Slice subnet management function receives a deallocate O-NSSI request (the DeallocateNssi operation specified in 3GPP TS 28.531 [4], clause 6.5.4 shall apply). Slice subnet management function decides whether the O-RAN slice subnet instance needs to be modified or terminated. If the O-NSSI needs to be modified, go to step 2 and then step 12, else steps 3-12 are needed.
2) Slice subnet management function invokes the O-NSSI Modification Procedure.
For each O-NSSI constituent, steps 3-9 are needed:
3) If the constituent is an O-NSSI constituent, slice subnet management function invokes the O-NSSI Deallocation Procedure (clause 8.3). Then go to step 12. If the constituent is an O-RAN NF, steps 4-9 are needed. If the O-RAN NF needs to be terminated and is a virtual NF, steps 4-5 are needed, else if the O-RAN NF needs to be modified go to step 6.
4) Slice subnet management function derives the O-Cloud requirements for O-RAN NF termination to terminate the NF within the respective O-Cloud.
5) Slice subnet management function removes virtual intra-Cloud links, optionally deallocates slice tags (i.e. VLAN ID) and terminates the NF on O-Cloud utilizing the steps specified in the Network Slice Deletion use case in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.10.2. If the O-RAN NF is a virtual NF that needs to be modified, steps 6-7 are needed, else go to step 8.
6) Slice subnet management function derives the requirements for the O-RAN NF modification request to modify the NF within the respective O-Cloud.
7) Slice subnet management function invokes the Scale In of NF use case as specified in O-RAN.WG6.ORCH-USE-CASES-v4.00 [18], clause 3.2.3. If the O-RAN NF needs to be reconfigured, steps 8-9 are needed, else go to step 10.
8) Slice subnet management function derives the CM requirements for the O-RAN NF.
9) Slice subnet management function invokes Provisioning Management Services as specified in O-RAN.WG1.O1-Interface.0-v4.00 [13], clause 2.1.
For each transport network requirement, step 10 is needed:
10) If the transport link is a physical link that needs to be modified/terminated across clouds, slice subnet management function requests transport link modification/termination from TN management functions.
11) If Non-RT RIC has subscribed for O-NSSI deallocation event notifications, slice subnet management function notifies Non-RT RIC of the O-NSSI deallocation.
12) Slice subnet management function returns the appropriate O-NSSI deallocation result (the DeallocateNssi operation specified in 3GPP TS 28.531 [4], clause 6.5.4 shall apply).

Annex A (informative): Implementation Options

A.1 Implementation Options

A.1.0 Example Implementation Options

This annex presents example deployment options for various SMO implementations.

A.1.1 3GPP and ETSI NFV-MANO based O-RAN Slicing Architecture Implementation Option

Figure A.1.1-1: O-RAN Slicing Reference Architecture (ETSI NFV-MANO based example)

A 3GPP and ETSI NFV-MANO based example of the O-RAN slicing reference architecture and interfaces is shown in figure A.1.1-1 to describe the relationship between the 3GPP defined slice management functions (NSMF, NSSMF), the 3GPP defined management functions (3GPP TS 28.531 [4], Network Function Management Function, NFMF) and the O-RAN network functions in terms of slice lifecycle management and slice configuration procedures.
Life Cycle Management (LCM) procedures for mobile networks that include virtualized network functions (VNFs), as well as the addition of physical network functions (PNFs) to network service (NS) instances, are specified in 3GPP TS 28.526 [3].

A.1.2 ONAP based O-RAN Slicing Architecture Implementation Option

Figure A.1.2-1: Example architecture depiction for ONAP based O-RAN Slicing support

The current version of the ONAP based O-RAN slicing reference architecture is shown in figure A.1.2-1 (based on the ONAP G-release). In this architecture, one option is for the RAN NSSMF to be located within the SMO and to be responsible for the entire RAN subnet, including the O-RAN NFs and the related O-RAN transport network components: the Fronthaul interface (FH) between the O-RU and O-DU and the Midhaul interface (MH) between the O-DU and O-CU. The RAN NSSMF determines the slice specific configuration of the O-RAN NFs based on the SliceProfile received from the NSMF, and determines the necessary slice specific requirements for the FH and MH interfaces, triggering the Transport Network Management Domain (TN MD) to execute the actual configuration of the FH and MH interfaces. The ETSI ZSM based Management Domain approach is adopted for TN management.

Annex B (informative): Transport Network Slicing

B.1 Transport Network Slicing Use Cases

This annex describes the O-RAN Transport Network slicing related use cases and the per-slice configuration of O-RAN transport network instances, the phases of the transport network slicing definition process and the corresponding scope of work for the relevant O-RAN WGs, considering the projected progress. Transport Network slicing use cases are built based on the definitions of slicing in the O-RAN WG1 Slicing Architecture Specification, WG6.CAD Scenario C.1 and C.2, and WG4 O-RU Slicing with support of RAN-Sharing (WG4 Use Case 7 and Use Case 10). Transport network slicing use cases are grouped into phases to incrementally extend the scope of work in the relevant O-RAN WGs. Brief definition of the phases:
• Phase 1: Slicing as specified in 3GPP TS 22.261 [1], [i.2] and 3GPP TS 23.501 [2]. Slicing appears on the N3 (NG) interface. The number of slices is limited. The UPF is centrally located.
• Phase 2: Phase 1 is augmented with per-slice QoS characteristics. Slices are multiple, with multiple exit-points and local break-outs.
• Phase 3 and beyond are not supported in this version of the present document.
For the current version of the Transport Network slicing use cases, the following scope and constraints are assumed, based on mutual discussions and agreements between WG1, WG6 and WG9:
• Delineation of slices in O-RAN CUPS User plane interfaces for the current phases 1-5 is based on physical or VLAN+IP separation. For this delineation, procedures for coordination between TN<>O-RAN and TN<>5GC provisioning VLAN, IP and the corresponding TN service instances are expected.
• Fronthaul: a connectivity driven domain, providing traffic differentiation and prioritization according to the Open Fronthaul interface definition given in O-RAN.WG4.CUS.0-v06.00. There is no need to support slicing in phases 1 to 4. The support for multi-vendor slicing in the Fronthaul is expected to be introduced in phase 5.
• Midhaul: a connectivity driven domain, providing traffic differentiation and prioritization according to the 5QI model of the F1 interface, defined in 3GPP TS 38.474. There is no need to support slicing in phases 1 to 3. DDoS prevention and traffic control are expected to protect O-DUs from O-CU-UPs, which in future phases can belong to and be controlled by 3rd parties.
From phase 3 and beyond, the Midhaul is a slicing aware domain, serving the communication of O-CU-UP<>O-DU with slicing. For that, O-CU-UP and O-DU are expected to perform marking of F1AP with DSCP, attach a VLAN .1q tag and assign an IP for slice delineation.
• Backhaul: a slicing aware domain, serving the communication of UPF<>O-CU-UP with slicing. O-CU-UP and UPF are expected to perform marking of the N3 interface according to the 5QI<>DSCP marking, push a .1q tag and assign an IP for slice delineation.
• Based on the shared O-RU feature progress (WG4 Use Case 7 and Use Case 10), a single O-RU is expected to serve multiple slices and multiple O-DUs, so the expectation is to have the capability to maintain a mapping of PLMN ID information to the corresponding VLAN + optional IP pair on the C/U Plane of the Open Fronthaul interface.
• The O-DU is expected to support multiple slices. A single O-DU serving multiple slices is supposed to have the capability to maintain a mapping of O-NSSI information to the corresponding VLAN+IP pair on the F1 interface and a 5QI to DSCP mapping. This assumes progress of the WG5 effort on the definition of upstream DU > CU_UP F1_U 5QI<>DSCP mapping capabilities.
• The O-CU-UP is expected to support multiple slices. A single O-CU-UP serving multiple slices is supposed to have the capability to maintain a mapping of O-NSSI information to the corresponding VLAN+IP pair on the F1 interface; however, in the current phases 1-5 it is assumed that a cluster of O-CU-UPs will serve a single O-NSSI, as shown in figure B.1-1.

Figure B.1-1: Assumption of mapping functions of O-DU and O-CU-UP to slices

From the O-RAN perspective, a single network slice can start from the O-DU and can have multiple distributed O-CU-UPs and UPFs, connecting a certain Slice in the Data Network with multiple N6 interfaces and multiple local breakouts. In the TN domain, TNEs with attachment circuits to the O-RU, O-DU and O-CU maintain the corresponding QoS schemes and Transport Network Slice profiles, as shown in figure B.1-2 and figure B.1-3.

Figure B.1-2: Types of slicing in various O-RAN scenarios

Figure B.1-3: Types of attachment circuits in TNE

The assumptions of slicing capabilities on the Core and O-RAN for Option 1, which is mapped to phase 1, are given in figure B.1-4. Option 2 depicts the expected target capabilities of the systems, including the capabilities of Option 1.

Figure B.1-4: Options for slicing, demapping orthogonal plane of 5QI per slice in Option 1 and multiple planes of 5QI as attribute per planes of slices

For Phase 1 the following constraints are assumed (see figure B.1-5):
1) Single operator with one O-NSSI MBB, one O-NSSI mMTC, one O-NSSI NB-IoT slice.
2) Fixed mapping of slices inter- to intra-DC.
3) Flat mapping of standards based 5QI (3GPP TS 23.501 [2], table 5.7.4-1) <> DSCP <> QoS in the TN domain.

Figure B.1-5: Diagram of network location of O-RAN instances with relevant domains for use cases

As an outcome of this mapping, the TN domain is expected to have the capability to accommodate all domains with the relevant QoS profiles and slices in each TNE, as shown in figure B.1-6 below.

Figure B.1-6: Diagram of O-RAN flows to be accommodated in each TNE

Table B.1-1 captures the assumptions of TN slicing capabilities in each interface for phase 1.
Table B.1-1: Scope of the capabilities of O-RAN elements in Phase 1

ORAN interface | Stream direction | Queue | Slicing VLAN | Differentiated QoS behavior per slice
Fronthaul C/U | Any | EF (HP) | No | No
Fronthaul M-plane, E1, E2, F1_c, O2, A1, Xn_c, N2, N4 | Any | AF | No | No
F1_U | DU > CU-UP | AF* or 5QI<>DSCP mapped** | No | No
F1_U | CU-UP > DU | 5QI<>DSCP mapped** | No | No
N3 (NG) | CU-UP > UPF | 5QI<>DSCP mapped** | Yes | No
N3 | UPF > CU-UP | 5QI<>DSCP mapped** | Yes | No
Xn_U | Any | 5QI<>DSCP mapped (as per TS 38.424) | No | No

* Based on the current 3GPP TS 38.474, Section 5.4 definition, the 5QI<>DSCP capability of F1_U for the link DU > CU-UP can be limited. If this is the case, for phase 1 the mapping of F1_U to a bandwidth queue is recommended.
** 5QI QoS Identifiers, the Priority Level (if explicitly signaled), and other NG-RAN traffic parameters (e.g. ARP) in the O-RAN and Core domains are mapped to DSCP and ToS or CoS parameters, aligned with the TN domain in accordance with the NRM as specified in 3GPP TS 28.541 [7], with the flow shown in figure B.1-7 below.

Figure B.1-7: Diagram of profiles information model parameters mapped to the domains to form a slice

According to these parameters, the relation of RANSliceSubnetProfile and RANSliceSubnetProfile with VLAN and IP mapping could be established with the corresponding EP_Transport VLAN and IP mapping, allowing the TN domain to perform separate allocation of resources per slice. In the definition of the 3GPP IM/DM in 3GPP TS 28.541 [7], the TN domain is out of scope, and the network slice parameters of the RAN and CN exist in the corresponding fields of the IM/DM. More details on the 3GPP IM/DM in regard to network slicing can be found in O-RAN.WG9.XTRP-MGT.0-v04.00 [i.19], clause 9.1.

Phase 2 assumes the following constraints (see figure B.1-8):
1) Single operator with enterprise slices use case.
2) Number of slices: many.
3) Multiple exit points and multiple UPFs.
4) Per-slice (tenant) 5QI<>DSCP<>QoS model in the TN domain.

Figure B.1-8: Scope of the capabilities of O-RAN elements in Phase 2

* 5QI QoS Identifiers, the Priority Level (if explicitly signaled), and other NG-RAN traffic parameters (e.g. ARP) in the O-RAN and Core domains are mapped to DSCP and ToS or CoS parameters, aligned with the TN domain in accordance with the NRM as specified in 3GPP TS 28.541 [7], with the flow shown in figure B.1-9 below.

Phase 2 and 3:

Figure B.1-9: Diagram of use cases for phase 2 and 3

Table B.1-2 captures the assumptions of TN slicing capabilities in each interface for phase 2 and later.

Table B.1-2: Scope of the capabilities of O-RAN elements in Phase 2 and later

ORAN interface | Stream direction | Queue in 4Queue model | Slicing VLAN | Differentiated QoS behavior per slice
Fronthaul C/U | any | EF (HP) | no | no
Fronthaul M-plane, E1, E2, F1_C, O2, A1, Xn_c, N2, N4 | any | AF | no | no
F1_U | DU > CU-UP | 5QI<>DSCP mapped* | yes** | no
F1_U | CU-UP > DU | 5QI<>DSCP mapped | yes** | no
N3 (NG) | CU-UP > UPF | 5QI<>DSCP mapped*** | yes | no
N3 | UPF > CU-UP | 5QI<>DSCP mapped*** | yes | no
Xn_U | any | AF | no | no

* Based on the assumption of the WG5 and WG6 effort on the definition of upstream DU > CU_UP F1_U 5QI<>DSCP mapping capabilities.
** Based on the assumption of the WG5 and WG6 effort on the definition of F1_U slicing capabilities.
*** More on the mapping recommendation in the WG9 "RBBN-TFCA-QoS mapping" contribution.

After this analysis and discussions within O-RAN WG9, the conclusion is that the NRM as specified in 3GPP TS 28.541 [7] does not provide enough data to create a valid Transport Network Slice.
However, since some of the parameters required to create a transport network slice can be extracted from the NRM as specified in 3GPP TS 28.541 [7] and some can be translated from SMO expectations, the following scope of work for WG9 and WG10 is proposed:
• Collect the parameters missing in 3GPP TS 28.541 [7] for TN Slice creation.
• Propose enhancements to 3GPP TS 28.541 [7] to include an OpenModelClass TNSliceSubnet to link EP_Transport to the TN and to add options for linking 3GPP subsystems to TN subsystems.
• File an O-RAN liaison to SA5 in order to augment 3GPP TS 28.541 [7] with the missing information and proposed items.
• Align with the IETF TN Network Slice abstraction.
• Define information flows.
• Align O-RAN WG1, WG9 and WG10.

Annex C (informative): Slicing Terminology

C.1 Slicing and 3GPP Slice Modeling Terminology Awareness

C.1.0 Slicing and 3GPP Slice Modeling Terminology

This annex intends to provide further information and clarified awareness for some of the fundamental network slicing concepts and 3GPP modeling aspects.

C.1.1 Conceptual Differences Between 3GPP SA2 and SA5

(System Architecture) Network Slice (3GPP SA2): Even though this is a system architecture group, the RAN architecture is out of the scope of SA2. RAN3 owns the RAN architecture. The SA2 service-based architecture (SBA) is a functional architecture with mostly service-based interfaces.

(Management Aspect) Network Slice (3GPP SA5): SA5 describes all the management aspects of a network, which is modular and model driven. It is a service-based management architecture (SBMA), using service-based APIs allowing implementations to pick and choose the appropriate API and compliance per API. See 3GPP TS 28.533 [i.18].

A Network Slice Instance (NSI), an SA2 term, is a set of functions. SA2 also introduced the concept of an instance because there would be more than one slice of the same type for the purpose of scaling. What is managed by an operator is a group of functions (an NSI). A NetworkSlice Managed Object Instance (MOI), an SA5 usage, is an object exposed to an external consumer. Once an operator builds something to satisfy expectations, it needs to build something that is exposed to an external consumer, which is a NetworkSlice MOI. Scale in/Scale out is an operation within Life Cycle Management, which is under the purview of SA5. In the context of SA5, an NSI is just an attribute carrying the signaling "NSI" value semantic. From a management perspective, the abbreviation NSI is sometimes used as a replacement for the proper term NetworkSlice MOI. However, this is just an improper use of terminology. A NetworkSliceSubnet MOI is sometimes called an NSSI. However, in SA5, any occurrence of the term NSSI should be perceived as a NetworkSliceSubnet MOI.

C.1.2 Name Containment Limitation

NSSI stands for NetworkSliceSubnet Instance, a.k.a. a Managed Object Instance (MOI) of the NetworkSliceSubnet Information Object Class (IOC). The NetworkSliceSubnet IOC has been introduced to group instances (of any IOC) in a way that is NOT restricted by the Name Containment (NC) rules. Those name containment rules are specified in 3GPP TS 32.300 [9]. Name containment defines a managing hierarchy. The flexibility of supporting network sharing management in the Network Slice scenario (especially for shared Network Functions) is fulfilled by introducing the NetworkSliceSubnet concept in 3GPP TS 28.541 [7], which allows for multiple views or "overlays" to augment the management hierarchies.
This grouping is independent of how the network is managed and structured via distinguished names. Note that the NetworkSliceSubnet inherits from the Top IOC (3GPP TS 28.541 [7], clause 6.2.2). This is illustrated in the following diagram given in figure C.1.2-1:

Figure C.1.2-1: Multi-Parent inheritance supported from Network Slice Subnet Instance

C.1.3 Network Slice Subnet is a Purposeful Generic Collection

The definitions and diagrams that follow are derived from the 3GPP 5G NRM specifications in 3GPP TS 28.541 [7]. The 3GPP 5G NRM serves as a foundation for O-RAN specification development and modeling work to which O-RAN development aligns. For further details, refer to the O-RAN WG5 and WG10 specifications. The diagrams in the present document utilize elements from the 3GPP 5G NRM but should be fully applicable to the O-RAN IM.

The NetworkSliceSubnet is a purposeful generic collection of objects. The NetworkSliceSubnet can be comprised of a number of managed functions, network services and EP_Transport endpoints, as shown in the diagram below, inspired from 3GPP TS 28.541 [7], clause 6. Three key points that the diagram illustrates are that:
1) Generic Collection: The NetworkSliceSubnet is a purposeful generic collection of objects shown in the diagram.
2) Recursive: The NetworkSliceSubnet is a recursive structure which can self-reference NetworkSliceSubnets.
3) Purposeful: The "purpose" of that Generic Collection of objects is defined in the SliceProfile.

NOTE 1: The cardinality between NetworkSliceSubnet and SliceProfile, which is written as 1…*, could be an empty list: 1…0, which would imply there is no SliceProfile. A set of objects could be grouped without a profile. Profiles are derived from the SLS.

NOTE 2: It is stated that it is under consideration in 3GPP that the SliceProfile can become an IOC.

These points are highlighted in the following diagram given in figure C.1.3-1.

Figure C.1.3-1: NetworkSliceSubnet is a purposeful generic collection

C.1.4 Slicing Instance Example

The following diagram given in figure C.1.4-1 shows an example instance of a NetworkSliceSubnet.

Figure C.1.4-1: Slicing Instance Example

As a concrete example given in figure C.1.4-1, the above diagram shows instances of managed functions and network slice subnets. Depicted are two Network Slices (NS1, NS2), with their associated NetworkSliceSubnets, Core and RAN managed function components. The picture illustrates that NetworkSliceSubnets are generic groupings of objects. These relationships are indicated by the <<groups>> tag. This diagram shows two eMBB Network Slices that are providing two different services with different throughput SLA requirements within the network of the same service provider (PLMN-A). Notice that the NetworkSliceSubnets have SliceProfiles associated with them in this example. The illustration also highlights some name containment relationships (3GPP TS 32.300 [9]). These are indicated by the <<names>> tag. Notice that NSS2 (NetworkSliceSubnet #2) is a collection of 5G Core components (UPF1, AMF1, SMF1), and that NSS1 is a collection of 5G RAN components (DU1, CUCP1, CUUP1, NRCellDU1, NRCellCU1). NSS3 and NSS4 are the NetworkSliceSubnets that expose the "stitched" groupings. NSS3/NSS4 will have a slice profile that reflects the entire SLA/SLS represented by the service profile.
In contrast, NSS1/NSS2 will have portions of the entire SLS pertaining to the corresponding (e.g. Transport, RAN, Core) parts of the slice. This illustrates that the "stitching" occurs at the OSS / network management layer, not at the BSS layer where slice orchestration happens. Slice instances are handled at the BSS, while the subnets are handled at the management layers.

C.1.5 NSSI Slice Profile and PLMNInfo

The "purpose" of the NetworkSliceSubnet MOI is given in the SliceProfile which is associated with it. It can be associated with a particular network slice or a group of network slices. The association is accomplished through the pLMNInfoList attribute of the SliceProfile. The Slice Profile is a data type that represents the requirements or purpose of the generic collection instance that can support one or more NetworkSliceSubnet instances. For an end-to-end Network Slice to exist, it is expected that it would have an associated slice profile containing a pLMNInfoList.

NOTE: A NetworkSliceSubnet can be just a generic collection of objects not associated with an End-to-End Network Slice. In this case, there might not be a slice profile associated with the NetworkSliceSubnet.

Both the 3GPP Release 16 and 3GPP Release 17 5G NRM have the SliceProfile, therefore 3GPP TS 28.541 [7] shall be used for further details. This is illustrated in the following diagram given in figure C.1.5-1.

Figure C.1.5-1: Slice Profile contains a List of PLMNInfo

C.1.6 Managed Functions Have a List of PLMNInfo

The concrete managed functions, notably the gNBCUUPFunction, NRCellCU and NRCellDU, can each have a list of PLMNInfo attributes. Concrete managed functions are configured with the slice information via the pLMNInfoList attribute. It is mainly used for signaling. Additionally, the concrete managed function becomes associated with a Network Slice. There are two possible error scenarios: First, a managed function can be configured with a particular entry in the pLMNInfoList but is not a member of any Network Slice Subnets associated with profiles containing the same entry in their pLMNInfoList (signaling configured, management association missing). Second, a managed function is not configured with a particular entry in the pLMNInfoList but is a member of a Network Slice Subnet associated with profiles containing this particular entry (management association present, signaling configuration missing). This is illustrated in the following diagram given in figure C.1.6-1.

Figure C.1.6-1: Managed Functions have a list of PLMNInfo

C.1.7 Single Network Slice Selection Assistance Information (S-NSSAI)

A Single Network Slice Selection Assistance Information (S-NSSAI) is used to define a slice. It is a 32-bit number defined by 3GPP TS 23.003 [i.2], clause 28.4.1. An S-NSSAI is comprised of a Slice / Service Type (SST) and a Slice Differentiator (SD). SST values 0 to 127 are reserved for standardized SSTs, while the remaining 128 values are operator defined. These are specified in 3GPP TS 23.501 [2]. The standardized SST values are specified in 3GPP TS 23.501 [2], clause 5.15.2.2.

Annex D (informative): Multi-Operator Slice RAN Management Services Exposure

D.1 RAN Management services exposure for Multi-Operator Slice

D.1.0 Management services exposure for Multi-Operator Slice

Sharing of RAN resources is one of the key requirements for Mobile Network Operators (MNO) to reduce the RAN infrastructure cost and increase profitability, while enhancing the coverage.
Currently, O-RAN drives the RAN sharing models and associated considerations through the WG1 Use Cases Detailed Specification Use Case 7: RAN Sharing, which focuses on the MORAN (Multi Operator RAN) scenario and sharing of computing resources (for more details, 3GPP TS 22.261 [1] shall be used). The goal of the existing RAN Sharing use case is to enable multiple operators to share the same O-RAN infrastructure, wherein each operator utilizes a separate carrier, while allowing them to remotely configure and control the shared resources via remote O1, O2 and E2 interfaces. While the RAN Sharing use case sets the premise for sharing the infrastructure and extending the management interfaces, the use case does not clearly depict the network slicing scenarios, especially the offering and usage of RAN objects related to network slicing in the context of a relation between multiple operators. The need for such a slice consumption/offering scenario is depicted in the 3GPP specifications 3GPP TR 28.824 [i.7], 3GPP TR 28.801 [i.3] and 3GPP TS 28.530 [23], and in other industry initiatives like the 5GPPP Slicenet [i.10] project. Specifically, 3GPP TR 28.824 [i.7] depicts the management capability exposure across multiple operators, especially the Network Operator (NOP) role played by multiple operator organizations: one offering RAN objects related to network slicing and the other using the RAN objects related to network slicing to establish an end-to-end slice. A few examples of business use cases are depicted below:
• Cross MNO Model of offering and usage of RAN objects related to network slicing:
- Two MNOs enter an agreement to share RAN resources in a scenario where one of the MNOs does not have spectrum or coverage in a particular region (such as a foreign country, remote areas, out of service areas). The agreed solution to share the RAN is to utilize RAN objects related to network slicing to fix coverage gaps, e.g. a network slice consisting of core objects belonging to MNO1 and RAN objects belonging to MNO2.
- Many MNOs form a consortium and share RAN resources through bulk provisioning of RAN objects related to network slicing that can be offered on demand for subscription to address the needs of the consortium participating MNOs (for example, to address emergency or disaster response).
- An MVNO relying on multiple MNOs. Each MNO offers its RAN resources as RAN objects related to network slicing.
• MNO and Hyperscale Public Cloud Provider Slice Consumption Offering Model:
- There is a market trend of collaboration between Hyperscale Public Cloud Companies and MNOs in which the Hyperscalers count on access network infrastructure owned by the MNO, while utilizing the Edge and Core network services using their own or partner services. In this model the RAN objects related to network slicing are offered to the Hyperscalers and integrated to offer a unified service.
Considering these standards, the related market trends and the described cases with multiple parties involved, there is great value in streamlining the parties' interaction within the respective O-RAN specification activities. From an O-RAN standardization perspective, the following three aspects need to be considered in the context of these use case models:
• Exposure of management services providing lifecycle management, CM, FM, PM, performance analytics and other management capabilities of RAN logical instances (as a collected group of RAN objects related to network slicing) for the usage of RAN objects related to network slicing in the context of a relation between multiple operators.
ETSI ETSI TS 104 041 V11.0.0 (2025-03) 56 • Exchange and mapping of resource identifiers corresponding to a collected group of RAN objects related to network slicing for it to be composed into an end-to-end slice. • Governance of the exposed RAN Slice Subnet management services for them to be made available externally in a secured and manageable manner. NOTE 1: A logical instance as a collected group of RAN objects related to network slicing implies the representation of a group of logical components within each RAN node that collectively provide the required characteristics of the network slice at the RAN part, which is commonly referred to in the O-RAN Slicing Architecture Specification as an O-RAN Slice Subnet. NOTE 2: Direct SMO exposure to external slice management systems across MNOs assumes a prior business agreement (contract) between the parties. NOTE 3: Correct interpretation of FM and PM data can require prior interface agreements with exchange of related data (e.g. specifications of PM, FM problem types, MIB files). NOTE 4: "Multi-Operator slice" is a term used to represent the concept of an end-to-end network slice that uses RAN Slice Subnet instances (a collected group of RAN objects related to network slicing) provided by an MNO in shared RAN resources. As needed, the partner operator would define its PLMN-specific network slice (as per 3GPP TS 23.501 [2], a slice is identified by an S-NSSAI within a specific PLMN ID, not across multiple PLMN IDs) in that end-to-end slice. D.1.1 Management services exposure for Multi-Operator slice - Relevant Standards • 3GPP TS 28.530 [23], clause 4.1.6 Network Slice as a Service (NSaaS): Explains the concept of NSaaS offered by a CSP to a CSC and the ability of the CSC to manage the network slice as a manager via the management interface exposed by the CSP. Further, the CSC is also allowed to offer its own services (e.g. communication services) on top of the network slice obtained from the CSP. • 3GPP TS 28.533 [i.18], clause A.3 Utilization of management services by Exposure Governance Management Function (EGMF): Explains the management capability of EGMF, especially Exposure Governance, and also depicts an example of exposing the management function (MnF) capability through the EGMF to an MnF from another operator or to a 3rd party. The standardization of EGMF in 3GPP is not mature and requires further elaboration. • 3GPP TR 28.824 [i.7]: Focused study on network slice management capability exposure. Highlights the general concept of exposure of management services (e.g. via BSS, or without going through BSS), the roles related to network management capability exposure, and the types of interfaces for exposure of a network slice. Further, the study item also highlights use cases and scenarios of exposure of a network slice as a service, wherein a RAN Slice is offered as a product to a CSP. This informative annex reuses the concepts defined in 3GPP TR 28.824 [i.7] and investigates the O-RAN specific impacts. • 3GPP TR 28.811 [i.8]: Study on network slice management enhancements. This study item covers potential enhancements to slice management such as multi-operator relationships in network slice management, and concepts like roaming, network slice isolation, edge computing, network slice specific authentication, and management data isolation for different Network Slice Consumers (NSCs).
One relevant scenario considered in the study item is network slice using multiple networks scenario which highlights two potential options – a) Solution based on Network slice as a service (NSaaS) – enhancements to NetworkSlice IOC or new ExternalNetworkSlice IOC and b) Solution based on Network slice subnet as a service (NSSaaS) - enhancements to NetworkSliceSubnet IOC or new ExternalNetworkSliceSubnet IOC. • 3GPP TR 28.801 [i.3]: Study on management and orchestration of network slicing for next generation network. Clause 5.1.8.2 describes the scenario of creating an end-to-end Network Slice Instance across multiple operators. Similarly, clause 5.1.9 describes a scenario of limited level of management exposure for multiple Network Slice Instances. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 57 • 3GPP TR 23.700-99 [i.9]: Study on Network Slice Capability Exposure for Application Layer Enablement (NSCALE). This study identifies key issues, solutions, and potential requirements to enable exposure of network slice management capabilities applied to vertical industries (exposure to Application Function). This study item highlights some key requirements that can be relevant for the external management of RAN Slice Subnet - such as the ones described in clause 4.1.1 of 3GPP TR 23.700-99 [i.9]. The application enablement layer should support interaction with 3GPP network management system to consume network slice management service. D.1.2 Concepts of management services exposure for Multi- Operator slice This clause summarizes the key concepts related to management services exposure for multi-operator slice. These concepts build on those already defined in 3GPP TR 28.824 [i.7]: • exposed Management Service (eMnS): The SMO has a certain set of capabilities. Some of the capabilities of the SMO need to be exposed northbound. • eMnS Producer: The logical function of SMO that provides management service that can be consumed by a eMnS Consumer. • eMnS Consumer: The logical entity outside the administrative and trust boundaries of SMO that consumes exposed management service. • MnS Producer: The logical function of SMO that provides management service(s) that can be discovered and consumed within a MNO's administrative and trust boundaries. • eMnS Exposure: Set of procedures that make available management service in SMO for external consumer. This can include registration of service producers in external exposure functions, publishing of a management service for external exposure, authentication, and authorization of eMnS consumer, discovery of required eMnS based on selection criteria defined by eMnS consumer. • eMnS Discovery: Discovery of the eMnS Producer management services by a eMnS Consumer based on its selection criteria. • SMO external exposure functions: Abstract notion to be used in use cases and scenarios description consolidating SMO Services providing capabilities to support eMnS Exposure and potentially other similar services. SMO external exposure function is used as an example to logically represent this set of capabilities. NOTE 1: The discovery service can be realized using either an existing service discovery functions in SMO or to be defined new SMO external exposure functions or a gateway function as defined in 3GPP TS 28.533 [i.18] like EGMF. The exposure of eMnS capabilities can also adopt the approach in 3GPP TR 23.700-99 [i.9] leveraging the Network Slice Capability Enablement Server (NSCE-S) in combination with EGMF. 
NOTE 2: Since SMO external exposure function is not limited to eMnSs related to RAN network slicing, it is assumed to be addressed within the scope of WG1 Architecture work and is not supposed to be formally defined in the present document. An extensive functional description is beyond the scope of the present document. Some potential references for realizing the functionality of SMO external exposure functions are – CAPIF (3GPP TS 23.222 [i.5], 3GPP TS 29.222 [i.6]), EGMF (3GPP TS 28.533 [i.18]), etc. • Operator Business Roles: - Managed Network Operator (MNO): provides or consumes Management Services related to RAN slicing. - E_NSSMS_C (External consumer of MnS related to RAN slicing): Operator who discovers and consumes eMnSs related to RAN slicing. - E_NSSMS_P (Provider of external MnS related to RAN slicing): Operator who is exposes and provides eMnSs related to RAN slicing. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 58 D.1.3 Management aspects of external exposure of MnS related to RAN slicing D.1.3.1 General External exposure of MnS related to RAN slicing can be treated as a particular case of the generic service exposure existing in the industry and covered in standard specifications like 3GPP Common API Framework [i.5], [i.6]. Therefore, many aspects related to generic exposure architecture can be applicable to the external exposure of MnS related to RAN slicing case as well. According to 3GPP CAPIF framework, clear separation of registry specific aspects and service specific aspects is recommended in functional model, architecture, and procedures. In CAPIF architecture the CAPIF core function consolidates all registry specific aspects and enable service APIs from both the MNO trust domain and the 3rd party trust domain having business relationship with MNO, while service specific aspects are covered by API provider functions (API Exposing Function (AEF), API Publishing Function (APF) and API Management Function (AMF)). Similarly, management of external exposure of MnS related to RAN slicing would include two categories of aspects: • related to service specific aspects (i.e. actual processing of service requests to exposed management services (eMnSs) from E_NSSMS_C e.g. service endpoint termination, throttling, translation between external and internal information, re-exposure of service operations and events, topology hiding, routing, etc.); • related to registry and catalog specific aspects (i.e. capabilities to manage various data records and profiles to support registration, discovery, publishing, identity management, authentication, authorization, access policies, data translation rules, topology hiding policies, etc.). While the CAPIF framework helps to reuse registry specific capabilities for various API exposure scenarios, it presumes decomposed architecture with additional interactions between CAPIF Core Function (CCF) and API provider functions (AEF, APF, AMF) which adds more complexity and cannot be always preferred. With regard to possible impact on capabilities of SMO management services, at least two alternative options can be considered for management of external exposure of MnS related to RAN slicing: • Option 1: Consolidation of exposure capabilities (see figure D.1.3.1-1). Internal SMO management services are off-loaded from capabilities of external exposure as much as possible, i.e. service related aspects are as well managed by dedicated category of SMO external exposure services. 
Figure D.1.3.1-1: Option 1: Consolidation of exposure capabilities • Option 2: Decomposition of exposure capabilities (see figure D.1.3.1-2). Internal SMO management services are aware of exposure and are enriched with at least the service specific aspects needed to process requests from an external consumer of MnS related to RAN slicing, following the identity and policy information managed in the SMO external exposure services responsible for registry specific aspects. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 59 Figure D.1.3.1-2: Option 2: Decomposition of exposure capabilities NOTE: CAPIF originated in 3GPP SA6 WG (application enablement and critical communication applications group for vertical markets) and 3GPP CT WG3 (design and specification of the Northbound APIs between the vertical application servers and the Core Network), which are not dedicated to management specifications design. Management specifications design is the responsibility of 3GPP SA5 WG (Management, Orchestration and Charging for 3GPP systems). SA5 WG gives a high level definition [i.18] of the Exposure Governance Management Function (EGMF) as a management function with the role of management service exposure governance (i.e. abstraction, simplification, filtering, etc.). The functional model and requirements are not yet defined for EGMF. 3GPP conducts a study ([i.7], not yet finalized) suggesting possible roles of EGMF and possible solutions for a combined CAPIF/EGMF architecture applied to the management domain. One of the considered architectures is where EGMF consolidates the registry specific functions (CAPIF Core Function) and the API provider domain functions. D.1.4 Multi-Operator slice: high level use cases and potential requirements for exposure of MnS related to RAN slicing D.1.4.0 High level use cases As noted above, the term "Multi-Operator slice" is used here to represent the concept of an end-to-end network slice that uses RAN Slice Subnet instances (a collected group of RAN objects related to network slicing) provided by an MNO in shared RAN resources. As needed, the partner operator would define its PLMN-specific network slice (as per 3GPP TS 23.501 [2], a slice is identified by an S-NSSAI within a specific PLMN ID, not across multiple PLMN IDs) in that end-to-end slice. This clause provides the high-level use cases and potential requirements for controlled external exposure of MnS related to RAN slicing. D.1.4.1 Registration of producers of MnSs related to RAN slicing for external exposure D.1.4.1.1 Background and goal of the use case In order to expose MnS related to RAN slicing externally, SMO Functions responsible for external exposure (SMO external exposure functions) need to be aware of the producers of these services in SMO. This can be achieved by registering producers of the respective MnSs in the SMO external exposure functions; an informative sketch of such a registry record is given below.
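The following Python sketch shows one possible shape of such a registry record and of the registration step in the flow of clause D.1.4.1.2. It is illustrative only: neither the field names nor any registration API are defined by the present document, and every identifier below is an assumption chosen for readability.

# Hypothetical registry record kept by the SMO external exposure functions for a
# producer of MnS related to RAN slicing. The fields mirror the examples listed in
# clause D.1.4.1.2 (id, end-point URIs, supported MnSs, load level, heartbeat);
# the exact data model is not specified by the present document.
mns_producer_record = {
    "producerId": "smof-ran-slicing-001",                    # illustrative identifier
    "endpointUris": ["https://smo.example.com/ProvMnS/v1"],  # illustrative URI
    "supportedMnS": [
        {"name": "ProvMnS", "version": "1.0"},
        {"name": "PerfMnS", "version": "1.0"},
    ],
    "loadLevel": 0.2,       # load level reported by the producer
    "heartbeatTimer": 60,   # seconds between status heartbeats
}

def register_producer(registry: dict, record: dict) -> bool:
    # Step 2 of the flow in clause D.1.4.1.2: create the registry record if it
    # does not already exist; return False otherwise so the caller can report failure.
    if record["producerId"] in registry:
        return False
    registry[record["producerId"]] = record
    return True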
ETSI ETSI TS 104 041 V11.0.0 (2025-03) 60 D.1.4.1.2 Description Pre-condition • There are existing trust relations between SMO external exposure functions and the SMOF that is a producer of MnS related to RAN slicing. • SMOF can reach SMO external exposure services (as one of the possibilities, SMOF gets informed of SMO external exposure services capabilities and service end-points through preliminary registration of SMO external exposure functions as a producer of SMO external exposure services in the SMOF responsible for service registration and management). NOTE: The SMOF responsible for service registration and management is not yet formally defined in the SMO architecture. • SMOF playing the role of a producer of MnS related to RAN slicing determines that its MnSs are required to be exposed externally. • SMO Function responsible for external exposure does not have information about the producer of MnSs related to RAN slicing. High level procedure (see figure D.1.4.1.2-1) 1) SMOF playing the role of a producer of MnS related to RAN slicing requests SMO external exposure services to register its MnSs. 2) SMO external exposure services producer creates a registry record for the producer of MnSs related to RAN slicing containing data about available MnSs and additional data (e.g. id, end-point URIs, MnS Producer profile with supported MnSs, load level, heartbeat, etc.). 3) SMO external exposure services producer acknowledges the successful registration or the failure. The SMO external exposure function can subscribe to the SMOF status, or the SMOF sets up pushing of status heartbeats. Figure D.1.4.1.2-1: Flow of registration of MnS related to RAN slicing for external exposure Post-condition SMO external exposure services producer has created the registry record for a producer of MnS related to RAN slicing. D.1.4.1.3 Potential requirements REQ-SL-EXP-FUNxx SMO should support a capability to register MnSs that are required to be exposed externally. REQ-SL-EXP-FUNxx SMO should support a capability to discover the MnS of the function that provides registration for MnSs that are required to be exposed externally. REQ-SL-EXP-FUNxx SMO should support a capability to store, query, update, and deliver information about a producer of MnSs that can be exposed externally. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 61 Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. D.1.4.2 Discovery of producers of MnS related to RAN slicing for external exposure D.1.4.2.1 Background and goal of the use case In order to expose capabilities of MnS related to RAN slicing externally, SMO external exposure functions need to be aware of the producers of these services in SMO. This can be achieved by discovering producers of the respective MnSs by querying the SMOF responsible for service registration and management (which provides a common registration service to MnSs).
NOTE: SMOF responsible for service registration and management is not yet formally defined in SMO architecture. D.1.4.2.2 Description Pre-condition • There are existing trust relations between SMO external exposure function and SMOF responsible for service registration and management. • SMO external exposure function can reach SMOF responsible for service registration and management. • SMOF playing the role of a producer of MnS related to RAN slicing registered its MnSs in SMOF responsible for service registration and management. • SMO external exposure function determines MnSs that are required to be exposed externally. • SMO external exposure function does not have information of the producer of MnSs that are required to be exposed externally. High level procedure (see figure D.1.4.2.2-1) 1) SMO external exposure function requests SMOF responsible for service registration and management to discover a producer of specific MnSs by providing filter criteria. 2) SMOF responsible for service registration and management applies filter criteria to the existing registry records for MnS producers. 3) SMOF responsible for service registration and management provides response with details on producers of matching MnSs. Figure D.1.4.2.2-1: Flow of discovery of MnS related to RAN slicing for external exposure SMO/ E_NSSMS_P SMO external exposure functions SMO Functions Registry Capabilities producer of MnS related to RAN slicing MnS No Record Record created 0 1 3 2 service registration and management SMOF MnS 0 Record exists MnS ETSI ETSI TS 104 041 V11.0.0 (2025-03) 62 Post-condition SMO external exposure function has obtained the registry records for a producer of the MnSs that are required to be exposed externally as per requested filter criteria. D.1.4.2.3 Potential Requirements REQ-SL-EXP-FUNxx SMO should support a capability for an authorized SMOF to discover MnSs that are required to be exposed externally. REQ-SL-EXP-FUNxx SMO should support a capability for an authorized SMOF to obtain information about MnSs that are required to be exposed externally based on criteria specified by that SMOF. Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. D.1.4.3 Registration of eMnS Consumer D.1.4.3.1 Background and goal of the use case A consumer outside the trust and administrative boundary of the SMO needs to be authenticated and authorized using the identity data of the consumer in order to allow acess to exposed management services. Consumer identity data needs to be registered at SMO. Traditionally, in the industry, registration is performed manually at producer side. It can also be performed automatically. Automatic registration presumes mechanisms for secured communication, for instance, certificate check and ecryption based on server (SMO) certificate. Registration capabilities also presume management of records containing consumer account data including but not limited to consumer credentials, consumer roles and rights it is authorized to. NOTE: In case of manual registration, consumer details corresponding to the registration are created through SMO operational procedures initiated by a human operator. By completion of registration procedures exposed management services consumer is provided with credentials and is aware of authentication method and access rights. D.1.4.3.2 Description Pre-condition • eMnS Consumer can reach SMO external exposure. 
• SMO Function responsible for external exposure does not have a consumer account data record for the eMnS Consumer. High level procedure (see figure D.1.4.3.2-1) 1) eMnS Consumer requests SMO external exposure services to register itself and provides information about the intended scope of management services to consume. 2) SMO external exposure services producer performs an eligibility check based on the information provided by the eMnS Consumer and makes a decision on request approval. 3) In case of a positive result, SMO external exposure services producer creates a registry record for the eMnS Consumer containing data about the intended and authorized management services to consume, credentials and authentication method. 4) SMO external exposure services producer acknowledges the successful registration and provides credentials data to the eMnS Consumer. The eMnS Consumer can subscribe to the status of the SMO external exposure function, or the SMO external exposure function sets up pushing of status heartbeats. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 63 Figure D.1.4.3.2-1: Flow of registration of consumer of eMnS related to RAN slicing Post-condition SMO external exposure function has created the consumer account data record for the eMnS Consumer and the eMnS Consumer has received a registration confirmation with the required data. D.1.4.3.3 Potential requirements REQ-SL-EXP-FUNxx SMO should support a capability to register a consumer of exposed management services. REQ-SL-EXP-FUNxx SMO should support a capability to manage the lifecycle of the account data record (creation, modification, deletion) of a consumer of exposed management services. REQ-SL-EXP-FUNxx SMO should support a capability to manage credentials, related roles and authorized access rights of a consumer of exposed management services. REQ-SL-EXP-FUNxx SMO should support a capability for an authorized SMOF to store, query, update, and deliver information about a consumer of exposed management services. Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. D.1.4.4 Authentication and Authorization of eMnS Consumer D.1.4.4.1 Background and goal of the use case In order for a consumer of exposed management services to be authenticated and authorized to consume the intended scope of MnSs, SMO external exposure functions need to support authentication and authorization process capabilities relying on the identity and authorized rights for the eMnS Consumer created during its registration. NOTE: Authentication procedures shall comply with all the security requirements specified in the ORAN SFG: Security Protocols Specifications [24]. D.1.4.4.2 Description Pre-condition • There are existing trust relations between SMO external exposure functions and the eMnS Consumer. • eMnS Consumer can reach SMO external exposure. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 64 • SMO Function responsible for external exposure has a consumer account data record for the eMnS Consumer including credentials, roles and authorized access rights. High level procedure (see figure D.1.4.4.2-1) 1) eMnS Consumer initiates authentication by providing its credentials and information on security capabilities to the SMO external exposure function.
2) SMO external exposure function selects the security method and performs mutual authentication with the eMnS Consumer. 3) After receiving a successful authentication response, the eMnS Consumer can request permission to access certain exposed management services (otherwise the decision on access permission is made based on previously stored data). 4) SMO external exposure function checks the access rights in the registry record for the eMnS Consumer and sends an authorization response containing an access token and a scope defining the exposed management services allowed for the eMnS Consumer. Figure D.1.4.4.2-1: Flow of authentication and authorization of eMnS Consumer Post-condition SMO external exposure function has authenticated and authorized the eMnS Consumer to access the allowed scope of exposed management services. D.1.4.4.3 Potential requirements REQ-SL-EXP-FUNxx SMO should support a capability for mutual authentication between SMO and consumer of exposed management services. REQ-SL-EXP-FUNxx SMO should support a capability for authorization of consumer of exposed management services. REQ-SL-EXP-FUNxx SMO should support an internal to SMO capability to create, read, update and delete authorization policies related to exposed management services. Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 65 D.1.4.5 Publishing of eMnS D.1.4.5.1 Background and goal of the use case For an eMnS Consumer to consume an eMnS, the eMnS needs to be defined, created, and activated in SMO as available for requests from outside the administrative and trust boundaries of SMO. It is also necessary to implement and apply access constraint policies (e.g. access rights, request rate limitations) to the eMnS in accordance with the previous business agreement between E_NSSMS_C and E_NSSMS_P. The eMnS definition relies on the capabilities of MnSs related to RAN slicing that are registered in SMO external exposure services. The definition includes sufficient information for consuming the eMnS (e.g. public endpoint URI, version, metadata, protocol, authentication method, resource URIs, operations and their parameters, data format). Depending on the use case, an eMnS can be associated with an MnS either directly or indirectly. In case of indirect association, an eMnS can be an aggregation of a set of MnSs, of a subset of MnS capabilities or of a group of several MnS subsets. An eMnS can also be an abstraction formed by transformation to specific external data models, mapping, filtering and enrichment of the data and operations of supporting MnSs; in that case publishing can also cover how the eMnS is formed. NOTE: For the given use case, direct association of eMnS to MnS is assumed. In general, service publishing includes manual activities that can be streamlined using various DevOps automation tools in conjunction with workflow management systems. D.1.4.5.2 Description Pre-condition • Business agreement between E_NSSMS_C and E_NSSMS_P exists. • SMO external exposure function uses account information of the eMnS consumer to authenticate and authorize it. • SMO external exposure function has information about the scope of management services the eMnS Consumer intends to consume.
• SMO external exposure function has the registry records for producers of MnS related to RAN slicing that can exposed externally. High level procedure 1) SMO administrator gets a task to make a new eMnS available for the E_NSSMS_C. 2) SMO administrator using various DevOps automation tools defines and create eMnS deployment package within SMO external exposure functions. 3) SMO administrator following business agreement information, information in eMnS consumer account data record and eMnS consumer intended scope of consumed management services defines and stores within SMO external exposure function access constraint policies to be applied to the eMnS consumer for the newly created eMnS. 4) SMO administrator deploys and activates an instance of eMnS for it to be available for for the requests from the outside the administrative and trust boundaries of SMO and task of eMnS publishing is reported as completed. Post-condition • eMnS deployment package is prepared and created within SMO external exposure functions. • Access constraint policies to be applied to the eMnS consumer for the newly created eMnS are prepared and created within SMO external exposure functions. • An instance of eMnS based on eMnS definition package is deployed and activated within SMO and is available for the consumption by the eMnS consumer from the outside of the administrative and trust boundaries of SMO. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 66 NOTE: Depending on the progress of De-Coupled SMO or specific implementation need, these capabilities can reside in one of the SMO Functions or realized with support of rApps. D.1.4.5.3 Potential requirements REQ-SL-EXP-FUNxx SMO should support a capability to manage lifecycle (creation, modification, deletion) of the exposed management service deployment package that includes sufficient information for consuming that exposed management service (e.g. including but not limited to public endpoint URI, version, metadata, protocol, authentication method, resource URIs, operations and their parameters, data format). REQ-SL-EXP-FUNxx SMO should support a capability to manage lifecycle of the access constraint policies (access rights, requests rate limits and other) to be applied to the exposed management service consumer for the exposed management service consumption. REQ-SL-EXP-FUNxx SMO should support a capability to manage lifecycle (creation, modification, deletion) of the exposed management service instance. Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. D.1.4.6 Discovery of eMnS D.1.4.6.1 Background and goal of the use case For the consumer of exposed management services to be aware of eMnS that are available for consumption, the external exposure function of the SMO needs to be able to support the capability and process of eMnS discovery. For eMnS to be discovered it needs first to be published. Discovery is provided to the authorized consumers of exposed management services. Upon discovery, the consumer of exposed management services obtains information on producers and available instances of eMnSs, including service access information for each producer. It can also contain specific instructions on how requests data are expected to be secured and formatted. 
A consumer of exposed management can discover available eMnS with a specific query containing filter criteria to SMO external exposure function or by getting a notification from SMO external exposure function on available eMnS matching notification subscription criteria. NOTE: Access to discovery capabilities of SMO can be in itself a subject for the access constraint policies. D.1.4.6.2 Description Pre-condition • There are existing trust relations between SMO external exposure function and eMnS Consumer. • eMnS Consumer can reach SMO external exposure. • SMO external exposure function has consumer account data record for eMnS Consumer including credentials, roles and authorized access rights. • eMnS instances are published. • eMnS Consumer does not have information about published eMnS instances or deems the previously stored information needs to be updated. High level procedure (see figure D.1.4.6.2-1) 1) eMnS Consumer requests SMO external exposure function to discover a producer of specific eMnSs by providing filter criteria. 2) SMO external exposure function applies filter criteria to the existing instances of eMnSs and its producers. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 67 3) SMO external exposure function provides a response with information on producers of matching eMnSs and available instances of eMnSs. Figure D.1.4.6.2-1: Flow of discovery of eMnS by eMnS Consumer Post-condition eMnS Consumer has an updated information about published eMnS instances it is able to consume. D.1.4.6.3 Potential requirements REQ-SL-EXP-FUNxx SMO should support a capability for an authorized consumer of exposed management services to discover published exposed management services. REQ-SL-EXP-FUNxx SMO should support a capability for an authorized consumer of exposed management services to obtain information about published exposed management services based on criteria specified by that consumer of exposed management services. REQ-SL-EXP-FUNxx SMO can support capability for an authorized consumer of exposed management services to subscribe to discovery events, get notifications (for example: update, removal or creation of exposed management service instance in the scope of subscription) and to request health check. Editor's note: The above requirements are intentionally not yet numbered and will be numbered if/when they become normative requirements. SMO/ E_NSSMS_P SMO external exposure functions SMO Functions Registry Capabilities producer of MnS related to RAN slicing MnS MnS MnS SMO external exposure services E_NSSMS_C External consumer of MnS related to RAN slicing 1 2 3 0 No Record/ Obsolete 0 Record exists Record created / updated ETSI ETSI TS 104 041 V11.0.0 (2025-03) 68 Annex E (informative): Change history Date Version Information about changes 2020.03.11 01.00 • First version with slicing architecture of O-RAN. Describes at a high level the O-RAN slicing related use cases, requirements and architecture along with slicing related impact to O-RAN functions and interfaces. 2020.07.16 02.00 • Detailed definition of slice management and provisioning along with related requirements. • Addition of multi-vendor slices use case. • Addition of long-term NSSI optimization use case. 2020.11.13 03.00 • Updates of requirements to capture use case impacts. • Additional requirements for multi-vendor slices use case. • Addition of ONAP deployment options to Annex section. 2021.03.13 04.00 • Addition of transport network slicing sub-section. 
• Addition of two detailed procedures; O-NSSI (O-RAN Network Slice Subnet Instance) allocation and O-NSSI deallocation. 2021.07.19 05.00 • Addition of O-NSSI (O-RAN Network Slice Subnet Instance) modification procedure. • Addition of an Annex on transport network slicing use cases and roadmap. 2021.11.23 06.00 • Updates to NSSI resource optimization use case. • Addition of an Annex section for slicing terminology awareness. 2022.04.04 07.00 • Updates to O-NSSI allocation and deallocation procedures to reflect WG6 developments. • Updates to Annex B to capture the latest approach and the updated phases for WG9 Transport Network slicing aspects. 2022.08.01 08.00 • Initial content related to multi-operator RAN slice subnet management use case. 2022.11.18 09.00 • Additional content related to multi-operator RAN slice subnet management use case. 2023.03.24 10.00 • Additional content related to exposure of slice management capabilities for multi-operator use cases. 2023.07.27 11.00 • Updates for O-RAN Drafting Rules (ODR) compliancy. • Addition of a sub-use case for multi-vendor slice management service exposure. ETSI ETSI TS 104 041 V11.0.0 (2025-03) 69 History Document history V11.0.0 March 2025 Publication
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
1 Scope
The present document details measures which may be taken to improve the energy efficiency within operators sites and data centres for broadband deployment. Clauses 2 and 3 contain references, definitions and abbreviations which relate to this part; similar information will be included in the corresponding clauses of the other parts, thus ensuring that each document can be used on a "stand-alone" basis. Within the present document: • clause 4 introduces data centre concepts including those specifically related to network operators; • clause 5 develops the concept of Key Performance Indicators (KPI), introduced in TS 105 174-1 [13], to enable consistent monitoring of energy efficiency; • clause 6 details the approaches that may be employed to improve energy efficiency within the information technology infrastructure; • clause 7 details the approaches that may be employed to improve energy efficiency within the environmental control systems; • clause 8 details the approaches that may be employed to improve energy efficiency via the physical infrastructure of the buildings; • clause 9 details the approaches that may be employed to improve energy efficiency within the power distribution system; • clause 10 provides a summary of energy efficiency approaches within existing data centres; • clause 11 provides a summary of energy efficiency approaches within new data centres and introduces wider issues concerning their location; • clause 12 contains the conformance mechanisms of the present document; • clause 13 contains the recommendations of the present document; • clause 14 introduces future opportunities for improvements of energy efficiency; • annex A provides indications of the first order effect of applying the approaches outlined in clauses 6, 7 and 9. This will enable the proper implementation of services, applications and content on an energy efficient infrastructure, though it is not the goal of this multi-part deliverable to provide detailed standardized solutions for network architecture. The present document focuses on energy efficiency. The CO2 footprint is not taken in account in the present document. Two separate aspects of energy efficiency are considered as shown in figure 1: • actions to improve energy efficiency in existing data centres in the short or medium term; • actions to improve energy efficiency in new data centres, in medium or long term. The domains under study are: • in the Information Technology (IT) infrastructure: all aspects of the technical infrastructure in the data centre, including servers, storage arrays, backup libraries and network equipment including routers, switches, etc.; • in the IT operational strategy: all consolidation initiatives, such as virtualization, physical or logical consolidations, usage of specific software and processes; • in the technical environment: all aspects concerning energy usage, cooling and, more generally, all disciplines involved in the technical environment of the data centre. ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 8 Figure 1: Aspects of data centres under consideration
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
2 References
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. • For a specific reference, subsequent revisions do not apply. • Non-specific reference may be made only to a complete document or a part thereof and only in the following cases: - if it is accepted that it will be possible to use all future changes of the referenced document for the purposes of the referring document; - for informative references. Referenced documents which are not found to be publicly available in the expected location might be found at http://docbox.etsi.org/Reference. NOTE: While any hyperlinks included in this clause were valid at the time of publication ETSI cannot guarantee their long term validity.
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
2.1 Normative references
The following referenced documents are indispensable for the application of the present document. For dated references, only the edition cited applies. For non-specific references, the latest edition of the referenced document (including any amendments) applies. [1] ANSI/TIA-942: "Telecommunications Infrastructure Standard for Data Centres". [2] Uptime Institute: "Tier Classifications Define Site Infrastructure Performance". [3] Johannesburg: "Datacenter Dynamics Research Key Findings" August 2008. [4] European Commission: "DG-JRC Code of Conduct on Data Centres Energy Efficiency". [5] "Best Practices for the EU Code of Conduct on Data Centres". ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 9 [6] CENELEC EN 50173-2: "Information technology - Generic cabling systems - Part 2: Office premises". [7] CENELEC EN 50173-5: "Information technology - Generic cabling systems - Part 5: Data centres". [8] CENELEC EN 50174-1: "Information technology - Cabling installation - Part 1: Installation specification and quality assurance". [9] CENELEC EN 50174-2: "Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings". [10] High performance buildings: "UPS report (Ecos Consulting-Epri Solutions)". [11] ETSI EN 300 019-1-3: "Environmental Engineering (EE); Environmental conditions and environmental tests for telecommunications equipment; Part 1-3: Classification of environmental conditions; Stationary use at weatherprotected locations". [12] ETSI EN 300 132-3: "Environmental Engineering (EE); Power supply interface at the input to telecommunications equipment; Part 3: Operated by rectified current source, alternating current source or direct current source up to 400 V". [13] ETSI TS 105 174-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment - Energy Efficiency and Key Performance Indicators; Part 1: Overview, common and generic aspects".
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
2.2 Informative references
The following referenced documents are not essential to the use of the present document but they assist the user with regard to a particular subject area. For non-specific references, the latest version of the referenced document (including any amendments) applies. [i.1] ETSI TR 102 489: "Environmental Engineering (EE); European telecommunications standard for equipment practice; Thermal Management Guidance for equipment and its deployment".
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
3 Definitions and abbreviations
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
3.1 Definitions
For the purposes of the present document, the following terms and definitions apply: application: single program or a set of several programs executing a function or a service availability: time or period during which the application or the service has to be operational NOTE: Availability is one of the criticality criteria. blade server: server chassis housing multiple thin, modular electronic circuit boards, known as server blades NOTE: Each blade is a server in its own right, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional fibre channel host bus adaptor (HBA) and other input/output (IO) ports. computer room: closed, secured and environmentally controlled room in which IT equipment is operating criticality: level given to an application or service, linked to the impact on the enterprise in case of a crash NOTE: The stronger the impact, the more critical the application or service. data centre: centralized repository for the storage, management, and dissemination of data and information organized around a particular body of knowledge or pertaining to a particular business ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 10 Data Centre Infrastructure Efficiency (DCIE): reciprocal of the PUE, that is "IT equipment power" divided by "total facility power", expressed as a percentage NOTE: DCIE improves as it approaches 100 %. Disaster Recovery Plan (DRP): all processes (technical, organizational, people) to be launched in case of a continuity disruption disk array: cabinet containing physical disks energy efficiency: search, in existing DCs or for future DCs, for all tracks and actions allowing energy needs and costs to be minimized NOTE: The key drivers are economic: to decrease the energy bill by increasing the efficiency of all equipment and minimizing power losses. green data centre: in addition to energy efficiency, the "Green" approach will focus on the carbon footprint NOTE 1: Energy efficiency is one way to decrease CO2 emissions, but it is not the only one. NOTE 2: This is more a "sustainable development" objective than an economic one; the key indicator is the carbon footprint. Today, this concept is still not clearly defined, especially if we know that data centres are not direct producers of CO2 but indirect ones, due to their energy needs. If the power comes from renewable energies (hydraulic, solar, etc.) or nuclear (not so green for the earth, but not producing CO2), the carbon footprint of the data centre is low. But if the energy comes from coal or fuel oil, the CO2 emissions are high.
information technology equipment: equipment such as computers, servers, mainframes, calculators and all storage devices as arrays, libraries, tape robots together with routers and switches within the local area networks IT equipment power: total power needed for operate servers, racks, disk arrays, libraries, network telecommunications equipment (such as routers and switches), equipment used for monitoring the data centre (PC, laptops, terminals and workstations) and network telecommunications-specific equipment (such as DSLAM and BTS) logical consolidation ratio: number of application instances per operating system image logical server: one single instance of operating system mainframe: high-performance computer used for large-scale computing purposes that require greater availability and security than a smaller-scale machine can offer network telecommunications equipment: equipment providing direct connection to core and/or access networks including switches, DSLAM, BTS operator site: premises accommodating network telecommunications equipment providing direct connection to the core and access networks and which may also accommodate information technology equipment physical server: box containing supplies for energy, mother board, central processing unit, memory, slots Power Usage Effectiveness (PUE): metric used to determine the energy efficiency of a data centre that is determined by "Total facility power" divided by "IT equipment power", expressed as a ratio (PUE is expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1) Recovery Point Objective (RPO): maximum allowed data loss Recovery Time Objective (RTO): maximum authorized time during application or service can be stopped server: computer program that provides services to other computer programs (and their users) in the same or other computers total computing load: total computing power in the data centre, that can be evaluated by taking vendors specifications of computational power of each model of server multiplied by the number of servers (transactions per minute is one measure of total computing power) total facility power: total power used by all power delivery components (such as uninterruptible power supplies, switches, power distribution units, batteries and transformers), cooling system components (such as chillers, computer room air conditioning units, pumps, fans, engines) and the non-technical energy (such as building lighting) ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 11 TPC Benchmark C (TPC-C): On-Line Transaction Processing (OLTP) benchmark measured in transactions per minute (TPMc) NOTE 1: TPC-C is more complex than previous OLTP benchmarks such as TPC-A because of its multiple transaction types, more complex database and overall execution structure. TPC-C involves a mix of five concurrent transactions of different types and complexity either executed on-line or queued for deferred execution. The database comprises nine types of tables with a wide range of record and population sizes. NOTE 2: TPC-C simulates a complete computing environment where a population of users executes transactions against a database. The benchmark is centred around the principal activities (transactions) of an order- entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. 
While the benchmark portrays the activity of a wholesale supplier, TPC-C is not limited to the activity of any particular business segment, but, rather represents any industry that manages, sells, or distributes a product or service. utility computing: service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer needs NOTE: Like other types of "on-demand computing" (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. This approach is becoming increasingly common in enterprise computing and is sometimes used for the consumer market as well, for internet service, web-site access, file sharing, and other applications. Virtual Machine (VM): emulation of a physical server on a shared infrastructure NOTE: Virtual machine embeds Operating System, specific softwares and application. virtual server: "piece" of physical server dedicated to run a "virtual machine virtualization: software that separates applications from the physical hardware on which they run, allowing a "piece" of physical server to support one application, instead of requiring a full server virtualization ratio: number of Virtual Machines per server
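Since PUE and DCIE are central to the KPIs developed in clause 5, the following minimal Python sketch (informative only; the numeric values are purely illustrative) shows how the two metrics defined above relate to each other.

def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    # Power Usage Effectiveness: "total facility power" divided by "IT equipment power".
    return total_facility_power_kw / it_equipment_power_kw

def dcie(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    # Data Centre Infrastructure Efficiency: reciprocal of PUE, expressed as a percentage.
    return 100.0 * it_equipment_power_kw / total_facility_power_kw

# Illustrative figures only: 1 000 kW drawn by the whole facility,
# of which 400 kW reaches the IT equipment.
print(pue(1000.0, 400.0))    # 2.5, typical of the legacy data centres discussed in clause 4.3
print(dcie(1000.0, 400.0))   # 40.0 (%), improving as it approaches 100 %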
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
3.2 Abbreviations
For the purposes of the present document, the following abbreviations apply: AC Alternative Curent ADSL Asymetric Digital Suscriber Line AS Application Server ATTM Access Transmission Terminal and Multiplexing B2B Business To Business B2C Business To Customer BTS Base Transceiver Station CPU Central Processing Unit CRAC Computer Room Air Conditioning CRIP Club des Responsables d’Infrastructure et de Production DC Data Centre DCIE Data Centre Infrastructure Efficiency DRP Disaster Recovery Plan DSLAM Digital Subscriber Line Access Multiplexer HBA Host Bus Adaptor HQE Haute Qualité Energétique HVDC High Voltage Direct Current ICT Information Communication Technology IEC International Electrotechnical Commission IO Input Output IS Information Systems ISP Internet Service Provider IT Information Technology ITIL IT Information Library ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 12 KPI Key Performance Indicator LEED Leadership in Energy and Environmental Design M2M Machine To Machine MVS Proprietary Operating System for IBM Mainframes servers NGDC New Generation Data Centre NGN Next Generation Network OLTP On-Line Transaction Processing OS Operating System PDU Power Distribution Unit POD Performance Optimzed Datacenter PUE Power Usage Effectiveness RISC Reduced Instruction Set Computer RPO Recovery Point Objective RTO Recovery Time Objective SAN Storage Area Network SLA Service Level Agreement TCO Total Cost of Ownership TPC-C TPC Benchmark C TPM Transaction Per Minute TPMc transaction per minute - count TV TeleVision UPS Uninterruptible Power Supply VM Virtual Machine VMS Proprietary Operating System for DEC Mainframe Servers VOD Video On Demand VOIP Voice Over IP WAS Web Access Server
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4 Overview of data centres
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.1 Types of data centres
There are a number of different types of data centre: • a network data centre has the primary purpose of the delivery and management of broadband services to the operator's customers. To enable their functionality, all network data centres must be connected to at least one core network operator site. For reasons of network resilience, data centres will invariably be connected to more than one operator site and to several other data centres. Data Centres may serve core networks operated by several network operators, thus enabling traffic between customers of different network operators; • an enterprise data centre has functions and connectivity similar to those of a network data centre but has the primary purpose of the delivery and management of services to its employees and customers; • a co-location data centre is one in which multiple customers locate their own network, server and storage equipment and have the ability to interconnect to a variety of telecommunications and other network service providers. The support infrastructure of the building (such as power distribution, security, environmental control and housekeeping services) is provided as a service by the data centre operator; • a co-hosting data centre is one in which multiple customers are provided with access to network, server and storage equipment on which they operate their own services/applications and have the ability to interconnect to a variety of telecommunications and other network service providers. Both the information technology equipment and the support infrastructure of the building are provided as a service by the data centre operator. This clause will identify and explain the elements of the network sub-systems employed in broadband deployment. ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 13
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2 Tiering of data centres
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2.1 Tiers and criticality
Several levels of data centres have been defined, based on the criticality of the applications or business processes, which determines the global Recovery Time Objective (RTO). The lower the RTO, the more the data centre has to be supported by the use of redundant equipment in both the technical environment and IT infrastructure domains. A number of schemes defining levels of data centres have been developed; these are considered in the following clauses.
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2.2 ANSI/TIA-942
ANSI/TIA-942 [1] defines requirements for reliability and availability of data centres, including the associated redundant support infrastructures, based on four "tiers". Network data centres are assumed to at least meet the requirements of Tier 3.
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2.3 Uptime Institute
The Uptime Institute [2] defines an alternative system of "Tiers" based upon business objectives and acceptable downtime as shown in table 1. The Tier determines the redundancy of energy and cooling equipment as indicated in table 1 and shown in figure 2 and has some significant consequences on energy costs. Table 1: Uptime Institute Tiers Tier Impact of failure Design criteria Downtime (maximum) 1 Internal company impact Mostly cash-based Limited on-line presence Low dependence on IT Downtime perceived as tolerable inconvenience Single path for power and cooling distribution No redundant components 28,8 hours/year 2 Business critical applications Multiple servers Telephone system vital to business Dependent on e-mail Single path for power and cooling distribution Redundant components 22,0 hours/year 3 World-wide presence Majority of revenue from on-line business VoIP telephone system High dependence on IT High cost of downtime Multiple power and cooling distribution paths but only one path active Redundant components; concurrently maintainable 1,6 hours/year 4 Strategic or mission critical business Majority of revenue from electronic transactions Business model entirely dependent on IT Multiple active power and cooling distribution paths; redundant components; fault 0,4 hours/year ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 14 Figure 2: Uptime Institute Tier energy paths and redundancy scheme
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2.4 Criticality levels
These levels as shown in table 2 are proposed by the Siska Hennessey Group and offers 10 levels of criticality, from 98 % estimated availability (175,2 hours of downtime/year) for a C1 level to C10 level (99,99999999 %, which corresponds to 0,0031 s of annual downtime). Levels above C4 are not considered to be achievable with available technologies. For the purpose of comparison, the Uptime Institute Tiers of clause 4.2.3 lie between C2 and C4 criticality levels as proposed by the Siska Hennessey Group. Table 2: Criticality levels proposed by Siska Hennessey Group Tier % availability Annual downtime Uptime Institute Tier Status C1 98 175,2 hours Achievable C2 99 87,6 hours Tiers 1-2 Achievable C3 99,9 8,76 hours Tiers 3-3+ Achievable C4 99,99 53 minutes Tier 4 Achievable C5 99,999 5,3 minutes Not achievable (see note) C6 99,9999 31 seconds Not achievable (see note) C7 99,99999 3,1 seconds Not achievable (see note) C8 99,999999 0,31 seconds Not achievable (see note) C9 99,9999999 0,031 seconds Not achievable (see note) C10 99.99999999 0,0031 seconds Not achievable (see note) NOTE: Not considered to be achievable with current technologies.
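The annual downtime figures quoted in tables 1 and 2 follow directly from the availability percentages over an 8 760-hour year; the following Python sketch (informative) reproduces them.

def annual_downtime_hours(availability_percent: float, hours_per_year: float = 8760.0) -> float:
    # Maximum annual downtime implied by a given availability percentage.
    return (1.0 - availability_percent / 100.0) * hours_per_year

print(annual_downtime_hours(98.0))          # 175.2 hours/year (criticality level C1)
print(annual_downtime_hours(99.9))          # 8.76 hours/year (C3, comparable to Uptime Institute Tier 3)
print(annual_downtime_hours(99.99) * 60.0)  # about 53 minutes/year (C4, comparable to Tier 4)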
a525c3531c3b8f5d14495cfdffcb2180
105 174-2-2
4.2.5 Tiers and costs
The capital expenditure (Capex) and operational expenditure (Opex) of new data centres increase with the tier level. Figure 3 shows Capex and Opex (normalized to 100 for the Uptime Institute Tier 1) as a function of Uptime Institute tier. ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 15 Figure 3: Uptime Institute Tiers and expenditure Capex, which includes building, design and facilities (such as energy, cooling and fire detection), shows the most significant increase between Tier 2 and Tier 3 due to the creation of a fully redundant, concurrently maintainable facility including the power and cooling infrastructure. The Uptime Institute assumes that power requirement density (W/m2) increases with each Tier level as shown in table 3 (normalized to 100 for the Uptime Institute Tier 1). Consequently, the Opex related to energy consumption would increase accordingly, with some adjustment (for example, an additional 10 % to 20 %) for the inefficiencies in the redundant power and cooling equipment. Table 3: Uptime Institute Tiers and power density ratios Tier Power density ratio 1 100 2 170 3 200 4 250 Opex also increases significantly between Tier 2 and 3 since the power distribution and cooling infrastructure runs at less than 50 % utilization, allowing for a system component failure without impacting service. There will be some additional Opex costs associated with the inefficiencies in the redundant power distribution and cooling plant, but the primary determinant of Opex cost will be the amount and density of the information technology deployed in the data centre.
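A minimal sketch (informative) of the reasoning above: relative energy-related Opex can be approximated by scaling the Tier 1 baseline by the power density ratio of table 3 and by an assumed overhead for the redundant power and cooling plant. The 15 % overhead used here is an assumption within the 10 % to 20 % range mentioned above, not a figure defined by the present document.

# Power density ratios from table 3 (Tier 1 normalized to 100).
POWER_DENSITY_RATIO = {1: 100, 2: 170, 3: 200, 4: 250}

def relative_energy_opex(tier: int, redundancy_overhead: float = 0.15) -> float:
    # Energy-related Opex relative to Tier 1. The overhead models the inefficiency of
    # redundant power and cooling equipment and is only applied where redundant
    # components exist (Tier 2 and above); both choices are simplifying assumptions.
    overhead = redundancy_overhead if tier >= 2 else 0.0
    return POWER_DENSITY_RATIO[tier] * (1.0 + overhead)

for tier in sorted(POWER_DENSITY_RATIO):
    print(tier, relative_energy_opex(tier))   # 100.0, 195.5, 230.0, 287.5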
4.3 Issues faced by data centres
4.3.1 General
Clause 4.3 reviews the situation in existing data centres and the issues that all enterprises, including network operators, are facing now or will face in the near future. There are several types of data centre, from main strategic buildings running enterprise "mission critical" applications, for which maximum security and guaranteed service continuity are mandatory, to technical sites or computer rooms, for which the same level of integrity is not required (see table 4). This has a direct and significant consequence on energy costs, due to the redundancy of the technical environment and the IT infrastructure.

Table 4: Uptime Institute Tiers and mission criticality

Strategic data centre: mission critical applications; Disaster Recovery Plan (DRP): campus dual-site; Uptime Institute Tier: 3+/4
Secondary data centre: business critical/internal impact applications; DRP: remote site; Uptime Institute Tier: 3
Local data centre: proximity equipment or controller; DRP: equipment redundancy; Uptime Institute Tier: 1/2
4.3.2 Current issues
4.3.2.1 Overview
The information in table 5 summarizes the results obtained from an internal benchmarking exercise undertaken by the enterprise members of the Club des Responsables d'Infrastructure et de Production (CRIP).
NOTE: CRIP is a French organization representing major companies such as those in the banking, telecommunications, insurance, car manufacturing and general industrial sectors.

Table 5: Average values of actual situation (CRIP source)

Aspect of data centre design (values given for data centres with 100-200 servers / > 200 servers):
- Floor space for IT equipment (m2): 139 / 2 405
- Average total floor space of the data centre (m2): 2 987 / 4 538
- Average power consumption (W/m2): 86 / 655
- Autonomy following total failure of external electrical supply (days): 10
Percentage of those surveyed:
- Redundancy of electrical systems: 67 / 94
- Redundancy of cooling systems: 91 / 91
- Redundancy of telecommunications networks and rooms: 58 / 90
- Existence of local disaster recovery plan (DRP): 46
- Existence of campus mode DRP (dual sites within 10 km): 22
- Existence of metropolitan DRP (sites within 10 km to 100 km): 26
- Existence of continental DRP (sites separated by more than 100 km): 18
- Existence of effective DRP: 51
- Supervision and control room backup: 40
4.3.2.2 Principle issues
An unsatisfactory situation exists in legacy data centres as a result of historical policies for the provision of servers, often with each application having its own dedicated physical server, sometimes as a result of running older operating systems that do not allow virtualization features. As a result, these servers may have a Central Processing Unit (CPU) usage of only 10 % to 20 %, resulting in very low energy efficiency. One of the reasons for this has been the lack of effective management of server capacity, another consequence of which is that servers may not be removed from service when they are no longer required. The overall result is that many data centres are now at their limits in terms of energy, cooling and floor space. Research surveying benchmarked major enterprises [3] indicates that the most significant concerns in existing data centres are:
• lack of energy;
• lack of cooling;
• absence of an upgrade path due to new environmental legislation and other constraints;
• energy costs;
• new generation hardware, more efficient but creating areas with high energy density (from 0,7 kW/m2 to 20 kW and potentially 40 kW per rack in 2010);
• average PUE (see clause 5.3.1) of 2,5 to 2,8;
• low usage of servers, especially Windows and Linux (10 % to 20 %).
Nevertheless, unless improvements in energy efficiency are implemented, it is clear that energy costs as a proportion of the Total Cost of Ownership (TCO) will increase from a typical level of 10 % to 50 %, meaning that energy costs (Opex) will exceed annual IT Capex.
4.3.2.3 Operator data centres
Historically, network data centres have often migrated into existing operator sites which are typically located in urban areas. The primary power supply to these locations was often not designed for the high levels of energy usage required by the technology now employed. These buildings and their infrastructure were designed to accommodate network telecommunications equipment which had a power usage density several orders of magnitude lower than the modern information technology equipment that has replaced it. Modern building technology is capable of achieving far greater efficiency both in floor space utilization and energy usage; hence it is unlikely that the overall performance of legacy buildings could ever be made to approach that of purpose-built data centre complexes. It is, therefore, probably necessary to consider these as separate cases when comparing energy performance. Additionally, these existing buildings often have a shortage of floor space that is difficult to increase due to commercial, building and planning constraints in urban areas; this, in turn, forces increased concentrations of processing capability. Legislative and environmental factors place severe constraints on the provision of the additional cooling equipment that becomes necessary. As energy costs continue to rise and concerns regarding its availability increase, it will become even more necessary to employ new generation hardware with greater processing efficiency. These factors require new strategies and practices to be employed in the design and operation of data centres, particularly in relation to energy efficiency.
4.3.3 Evolution and future trends
4.3.3.1 Power and cooling demands of IT equipment
The technology road-maps of the main IT equipment (servers and/or storage) vendors show that predicted power consumption values (in terms of kW/m2) are not aligned with the capabilities of the majority of computer rooms within data centre facilities (both in terms of power provisioning and cooling capacity).
NOTE 1: The great majority of current data centres are not adapted to meet the energy needs required to host these new technologies (Source: CRIP Members internal benchmark on data centre trends - February 2008).
New server technologies (such as blades, chassis and Unix mainframes) have power requirements that exceed the capacity of computer rooms. Examples of such increases are from 0,5 kW/m2 or 0,7 kW/m2 today to 4 kW/m2 or 6 kW/m2 in the next two years, and much more (10 kW/m2) in the next five years.
NOTE 2: Gartner Group stated in 2006: "50 % of existing data centres are becoming obsolete by the end of 2008 in terms of floor space, energy and cooling potential".
Such increases make the concept of kW/m2 irrelevant as a design criterion and introduce a new evaluation method based on kW per rack, forcing designers to consider high density areas within data centres with the necessary electrical power and cooling systems. These areas may be specially configured areas within a traditional computer room (see figure 17 and figure 18), allowing the remaining space to accommodate low-density equipment such as robotics, backup libraries, network switches, etc. Alternatively, the required conditions may be created within closed and secured pre-racked environments (including electrical power management features, dedicated liquid cooling systems and containing all the necessary connections to external networks), which allows their use outside a computer room (see clause 7.2.2.1.5).
4.3.3.2 New projects
Some projects or services, especially in the telecommunications and Internet Service Provider (ISP) world, need huge computing power, linked to the natural growth of their activities or to global consolidation initiatives. Additionally, new services (VOD, VOIP, TV over ADSL, etc.), electronic exchanges with customers (B2C), suppliers or partners (B2B) and, in the near future, between machines (M2M) will impose new constraints in terms of connectivity, availability, security and random workload absorption. An overall approach has to be taken to IT in the data centre, enabling sharing policies and the ability to respond immediately to application needs, with the required level of service, in a cost effective manner.
4.3.3.3 Data centre consolidation programmes
A recent trend is the consolidation of the existing data centres of major organizations into fewer, major "Critical Data Centres". This has a potential impact on energy costs since "Critical Data Centres" are usually constructed at the Tier 3+ or Tier 4 level. However, it is possible to have some areas or computer rooms of a lower Tier level (1 - 2) in a data centre of a higher tier (3 - 4). If the data centre is of Tier 3 or 4, overall redundancy of power supplies will be implemented, but only the Tier 3 or 4 computer rooms will require fully redundant power and cooling systems; other equipment (applications) may be installed in computer rooms with a lower Tier level (as shown in figure 4).

Figure 4: Mixed Tier-level data centre

This approach may reduce overall power consumption. In addition, it may be possible to operate the different computer rooms at different temperatures (for example, information technology equipment may be segregated from network telecommunication equipment), allowing further reductions in power consumption (see clause 7.2.3).
4.3.3.4 Environmental impacts
Regulations frequently require a prolonged case-by-case study by the authorities, which can introduce significant delays to the planning process. These constraints make it very difficult, if not impossible, to implement projects for the expansion of an existing data centre in towns where the expansion would increase heat dissipation or impose significant energy requirements.
4.4 The new context
4.4.1 Energy consumption and energy efficiency
The model "one application, one physical architecture", each with its own servers, storage, is becoming obsolete and is being replaced by a new model based on shared IT components and mutualisation of technical infrastructure. In the near future, the data centre will be a true "production factory", with automation and industrialization and will become fully virtualized as described in clause 6. ETSI ETSI TS 105 174-2-2 V1.1.1 (2009-10) 19 New "energy efficient "equipment in all domains of IT, new cooling technologies (including solutions that only become practical as the power densities increase) and other solutions to manage power consumption more efficiently will all act to enable a reduction in energy costs for a given level of service. However, the greatest savings will only be obtained if other initiatives reflecting the "utility computing" concept are fully implemented (such as consolidation, automated provisioning of servers (network or storage) and recurrent operations). The search for energy efficiency has to cover all disciplines and not only focus on the technical environment.
4.4.2 Factors impacting energy efficiency
The factors shown in table 6 contribute to poor energy efficiency and, consequently, high energy consumption.

Table 6: Principle factors leading to poor energy efficiency

Power distribution systems:
• power distribution units and/or transformers operating well below their full load capacities;
• N+1 or 2N redundant designs, which result in under-utilization of components;
• decreased efficiency of Uninterruptible Power Supply (UPS) equipment when run at low loads.
Cooling systems:
• air conditioners forced to consume extra power to drive air at high pressures over long distances;
• pumps with flow rate automatically adjusted by valves (which reduces the pump efficiency);
• N+1 or 2N redundant designs, which result in under-utilization of components.
Physical infrastructure:
• under-floor "noodleware" (tangled cabling) that contributes to inefficiency by forcing cooling devices to work harder to remove the existing heat load; this can lead to temperature differences, and high-heat load areas may receive inadequate cooling;
• lack of internal computer room design.
IT infrastructure:
• low usage of existing servers (10 % to 20 % CPU), especially in the X86 world;
• lack of a capacity management process for the technical environment and IT;
• physical servers dedicated to applications; no consolidation, lack of a sharing policy;
• data redundancy, generating a lot of storage capacity;
• IT equipment in active mode 24/24, 7/7 but only used at certain hours of the day and/or on certain days;
• old generations of servers, with a low computing power/electrical consumption ratio;
• lack of functional cartography, generating a lot of applications, data duplications, backups and bit rate for exchanges, which contribute to a dramatic increase in the TCO (number of physical components, software fees, people, floor space, etc.) and energy consumption.

The current situation is more generally the consequence of significant growth in IT equipment needs, due to the natural growth of enterprise business and the creation of new services. Another factor increasing the IT load is that the functional cartography is often very complex and generates a lot of applications. Sometimes, the best way to make the maximum savings on energy is to decrease the number of applications and minimize data duplications.
4.4.3 On-going initiatives
Many organizations have introduced initiatives in response to the principle concerns identified within clause 4.3.2.2. These cover the following areas:
• IT infrastructure (discussed in detail in clause 6):
- consolidation of existing assets to decrease the number of physical components in computer rooms, comprising:
- storage consolidation: concentrating data on to shared arrays instead of using storage capacity dedicated to specific applications; this requires the development of a Storage Area Network (SAN) policy;
- server consolidation: initiatives are now in progress in many enterprises, particularly for servers using X86 technology running Windows or Linux operating systems, since these are more numerous than those using other operating systems (Unix, MVS, VMS). These servers are those of Tier 1 (presentation) and Tier 2 (application servers) in the commonly used three tier software architecture model (outlined in clause 6.2). The consolidation of these servers is primarily implemented with:
- new generation hardware such as blade or other racked servers to build a shared and common technical infrastructure;
- virtualization software to move an application on to a "virtual server"; several applications can be consolidated on the same hardware, using virtual servers, with a 60 % to 80 % computational load, instead of only 10 % to 20 % for a non-virtualized server.
These actions not only limit a dramatic increase in energy needs and costs but also reduce the time before the new generation data centres become operational. In the majority of cases, this consolidation is "physical" (with or without virtualization) using mainframes, racked servers, "pizza box" or "blade server" technologies, due to their power capacity/floor space ratio.
- process automation for more precise provisioning of resources, introduction of new virtual servers and dynamic management of their workload and scalability;
- capacity management for storage and servers, to make better use of existing resources;
• cooling systems (discussed in detail in clause 7):
- increasing the ambient temperature;
- free air cooling;
- water cooling;
- disabling air-humidifiers within areas that only contain equipment meeting the requirements of EN 300 019-1-3 [11];
• power distribution systems (discussed in detail in clause 9):
- High Voltage Direct Current (HVDC) supplies for IT equipment (for new data centres).
5 Energy efficiency standards and metrics
5.1 Review of activities outside ETSI
5.1.1 EU Code of Conduct on Data Centres Energy Efficiency
The European Union Code of Conduct [4] provides the opportunity for operators of data centres to implement practices intended to reduce the energy consumption of their data centres. The energy efficiency best practices [5] that are employed are documented together with the actual recorded energy consumption measured at specific points in the data centre.
5.2 Energy consumption in data centres
Of the total energy used in a data centre, the principal areas of consumption, shown schematically in figure 5, are:
• power distribution to information technology equipment and network telecommunications equipment in the computer room;
• environmental control (for example, cooling and humidity) applied to the computer room;
• lighting and equipment in offices associated with the data centre;
• lighting for the computer room.

Figure 5: Schematic of energy consumption and wastage in data centres

With reference to figure 5:
• the proportion of the energy delivered to the information technology equipment is W - E1 - E2 %, where:
- W (%) = energy consumption at the input to the UPS;
- E1 (%) = energy wasted within/by the UPS;
- E2 (%) = energy wasted within/by the PDU;
• the energy consumed by the environmental control equipment is X %.
Unless otherwise stated, the standard model used in the present document is as follows:
• the % power required for IT equipment = 45 (based on W = 60 and E1 + E2 = 15 in figure 5);
• the % power required for the cooling systems = 37 (that is, X = 37 in figure 5);
• the % power required for building facilities = 3 (that is, Y + Z = 3 in figure 5).
5.3 Energy Efficiency Key Performance Indicators (KPIs)
5.3.1 Power Usage Effectiveness (PUE) or Data Centre Infrastructure Efficiency (DCIE)
5.3.1.1 General
With reference to figure 5, PUE is defined as 100/(W - E1 - E2). The European Union Code of Conduct [4] recognizes the DCIE (the inverse of the PUE, expressed as a percentage) as shown in table 7.

Table 7: PUE and DCIE conversion

PUE 4,0 = DCIE 25 %
PUE 3,0 = DCIE 33,3 %
PUE 2,5 = DCIE 40 %
PUE 2,2 = DCIE 45 %
PUE 2,0 = DCIE 50 %
PUE 1,8 = DCIE 56 %
PUE 1,6 = DCIE 62,5 %
PUE 1,5 = DCIE 66,7 %
PUE 1,4 = DCIE 71,4 %
PUE 1,3 = DCIE 76,9 %

The standard model of clause 5.2 produces a PUE of 2,22 and a DCIE of 45 %. Figure 6 indicates the PUE and DCIE figures for high efficiency, medium efficiency and low efficiency data centres. In general, data centres built in the last five years have an operational PUE of between 2 and 2,5 (that is, DCIE values of 40 % to 50 %). Some older data centres have operational PUE values greater than 3 (DCIE values less than 33,3 %).
NOTE: The word "efficient" needs to be replaced with the word "efficiency".

Figure 6: Data centre energy efficiency spectrum and PUE/DCIE values

The latest Tier 3+ or 4 data centres may have PUE targets of as low as 1,4. Other data centres of Tier 2 that use free water or air cooling and are equipped with the latest generations of UPS, pods (see clause 7.2.2.1.5), high-efficiency racks and servers are capable of localized PUE targets of as low as 1,2 in specific areas.
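The relationship between the quantities in figure 5, the PUE and the DCIE can be expressed directly. The following informative Python sketch (an illustration, not part of the specification) computes both metrics from the percentages W, E1 and E2 and reproduces the standard model of clause 5.2:

def pue(w: float, e1: float, e2: float) -> float:
    """PUE = 100 / (W - E1 - E2), with W, E1, E2 as percentages from figure 5."""
    return 100.0 / (w - e1 - e2)

def dcie(w: float, e1: float, e2: float) -> float:
    """DCIE is the inverse of the PUE, expressed as a percentage."""
    return 100.0 / pue(w, e1, e2)

# Standard model of clause 5.2: W = 60 and E1 + E2 = 15. The 10/5 split of
# E1 and E2 below is illustrative; only their sum is specified.
print(round(pue(60.0, 10.0, 5.0), 2))   # 2.22
print(round(dcie(60.0, 10.0, 5.0), 1))  # 45.0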
5.3.1.2 PUE for new data centres
For a data centre under construction, the KPIs of PUE (or DCIE) are appropriate (see clause 12.2). A data centre under construction can be designed to have a specified PUE (or DCIE) and, following construction, the actual PUE (or DCIE) can be monitored against the design value. However, it should be recognized that any reduction of W (by means of improvement of the information technology or network telecommunications equipment or its usage, and by reduction of waste in the power distribution system, E1 or E2) without an equivalent reduction in the primary energy consumption parameters X, Y and Z will lead to an increase in PUE, indicating a worsening in energy efficiency rather than recognising a reduction in energy usage.
The design strategies for new buildings can be chosen in order to optimize their initial PUE and with the objective of maintaining that PUE during their subsequent growth, operation and evolution. As described in clause 7, the selection of data centre location, building engineering strategies and system selection can substantially reduce the energy consumption required for environmental control. As described in clause 8, effective planning of pathways and spaces in accordance with EN 50174-1 [8] and the data centre specific aspects of EN 50174-2 [9] can maximize the energy efficiency of environmental control systems. As described in clause 6, internal design studies can influence the usage of equipment (servers, storage and networking). In addition, appropriate management tools used in conjunction with effective process organization and automation can significantly reduce the overall energy consumption of the information technology equipment. The introduction of high efficiency power distribution components as described in clause 9 can reduce the value of W.
Using these approaches in combination, the typical design PUE of new buildings will reduce over the next few years from the current value of 2,0 to 1,6 (2010-2015) and, with the implementation of measures outlined in clause 14, can be expected to reach 1,3. It is possible to specify even lower PUE values for dedicated areas within computer rooms but it is unlikely that, using the approaches detailed in the present document, the PUE values for the overall data centre will be much lower than the values shown.
Table 8 gives several steps towards better efficiency. Some of them are directly achievable by the data centre owner or the IT owner, but others depend on technological evolutions.

Table 8: Progressive improvement in PUE

2005-2008 (PUE 2,5 - 2,0):
• many stand-alone servers;
• physical dedicated architecture per application;
• no virtualization on Open systems;
• obsolete technologies and OS;
• manual provisioning of resources;
• lack of space, energy or cooling in computer rooms.
2008-2010 (PUE 1,8):
• infrastructure policy (server farms, Unix mainframes);
• storage consolidation - virtualization - tiering policy;
• physical server consolidation;
• virtualization technologies for servers and storage;
• automatic provisioning for servers;
• operation process automation;
• capacity management for servers and storage;
• need for 2 kW/m2 to 4 kW/m2.
2010-2015 (PUE 1,6):
• energy efficient data centre building (LEED);
• own production of renewable energy (wind, solar, hydraulic, etc.);
• new generation cooling appliances - free cooling;
• auto-cooled IT hardware;
• energy efficient servers (blade farms, mainframes);
• specific software to manage energy capacity;
• software for thermal measurement and modelling in computer rooms;
• massive application consolidations - IS rationalization;
• full data centre process automation.
5.3.1.3 PUE in existing data centres
There is little opportunity within existing buildings to reduce energy consumption by modifications to building structures or to make substantial changes to the environmental control systems without incurring significant costs and operational disruption. Small improvements may be achieved by limited changes to environmental control systems and general improvements in energy efficiency in power distribution components and ancillary areas, thereby reducing the values of E1, E2 and W in figure 5. Instead, reduction of overall energy consumption may be achieved by, where possible, implementing effective planning of pathways and spaces in accordance with EN 50174-1 [8] and the data centre specific aspects of EN 50174-2 [9] in order to maximize the energy efficiency of environmental control systems. Procurement of more energy efficient IT equipment together with consolidation/virtualization/process automation initiatives represents the only realistic solution for a reduction in energy consumption in existing buildings for a given level of service. However, these actions without an equivalent reduction in the primary energy consumption parameters X, Y and Z, as shown in figure 5, will lead to an increase in PUE, indicating a worsening in energy efficiency rather than recognising a reduction in energy usage. PUE is hard to evaluate without a set of tools generally missing in the majority of legacy data centres. As mentioned above, some necessary primary actions on infrastructure, such as virtualization, will have a negative effect on PUE. PUE may be the most relevant indicator to give a global view of efficiency, but it is not the most appropriate KPI for measuring the improvement of energy efficiency in existing data centres. Conformance to the present document (see clause 12.1) for existing data centres requires the use of a KPI other than PUE.
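The effect of IT-side improvements on PUE can be illustrated numerically. The following informative Python sketch applies the standard model of clause 5.2 to a hypothetical 1 000 kW facility; the 40 % reduction in IT load is an assumed figure used only to show why PUE worsens even though total energy consumption falls:

def pue_from_loads(it_kw: float, overhead_kw: float) -> float:
    """PUE expressed as total facility power divided by power delivered to IT."""
    return (it_kw + overhead_kw) / it_kw

# Standard model applied to a hypothetical 1 000 kW facility: 450 kW reaches
# the IT equipment, 550 kW is overhead (UPS/PDU losses, cooling, facilities).
print(round(pue_from_loads(450.0, 550.0), 2))        # 2.22

# Virtualization reduces the IT load by an assumed 40 % while the cooling and
# power distribution overhead is left unchanged.
print(round(pue_from_loads(450.0 * 0.6, 550.0), 2))  # 3.04 - PUE worsens
print(450.0 + 550.0, 450.0 * 0.6 + 550.0)            # 1000.0 kW vs 820.0 kW - total energy still falls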
5.3.2 Other KPIs
5.3.2.1 Energy efficiency KPI
A number of these KPIs exist, including:
• electrical power/floor space ratio expressed in kW/m2; typically, this KPI is used for legacy data centres and/or low density areas in computer rooms;
• the ratio of total energy consumption to total computational load;
• computing power/electrical power expressed as TPM-C/kW; this KPI gives a density of computing potential per kW and is useful for racks and/or high density areas in computer rooms;
• total energy required for the data centre per hour;
• total energy consumption per year.
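As an informative illustration (not part of the specification), the first and third KPIs above can be computed as follows; the input values are arbitrary examples:

def power_density(total_power_kw: float, floor_space_m2: float) -> float:
    """Electrical power / floor space ratio (kW/m2)."""
    return total_power_kw / floor_space_m2

def computing_density(tpmc: float, power_kw: float) -> float:
    """Computing power / electrical power (TPM-C/kW)."""
    return tpmc / power_kw

# Arbitrary example values, for illustration only
print(power_density(600.0, 500.0))          # 1.2 kW/m2
print(computing_density(1_500_000, 40.0))   # 37 500.0 TPM-C/kW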
5.3.2.2 Consolidation KPI
A number of KPIs exist for IT infrastructure consolidation, including:
• number of physical servers;
• number of virtual servers;
• virtualization ratio;
• number of deployed operating system images (logical servers) - one physical server can contain several logical servers: this KPI is a principal measure of logical consolidation;
• logical consolidation ratio;
• number of applications; this is a KPI only for application consolidation and de-commissioning (a relation has to be made between the application, the physical components and the technical environment savings achieved by the consolidation);
• average computational load per family (RISC, X86) - this KPI evaluates the efficiency of servers, in terms of computational load, with the objective of having the highest usage of CPU power in order to minimize the number of servers;
• number of disk arrays.
A number of KPIs exist for physical storage consolidation, including:
• number of SAN ports;
• Tbytes per m2;
• ratio of occupied storage space/m2 - this KPI gives storage density;
• average array allocation ratio - this KPI is similar to average computational load, providing the ratio of space allocated in the array.
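A minimal, informative Python sketch of how two of the server consolidation KPIs listed above might be derived from an inventory; the Inventory structure and the definition of the logical consolidation ratio are assumptions made for illustration, since the present document does not define these ratios formally:

from dataclasses import dataclass

@dataclass
class Inventory:
    physical_servers: int
    virtual_servers: int
    os_images: int  # deployed operating system images (logical servers)

def virtualization_ratio(inv: Inventory) -> float:
    """Virtual servers hosted per physical server."""
    return inv.virtual_servers / inv.physical_servers

def logical_consolidation_ratio(inv: Inventory) -> float:
    """One plausible definition: OS images per physical server."""
    return inv.os_images / inv.physical_servers

inv = Inventory(physical_servers=50, virtual_servers=600, os_images=480)
print(virtualization_ratio(inv))         # 12.0
print(logical_consolidation_ratio(inv))  # 9.6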
5.3.2.3 Data collection
The production of the KPIs in clause 5.3.2.2 requires the collection and aggregation of a wide range of data as shown in table 9.

Table 9: Data required for the production of consolidation KPIs

Vendor characteristics of all equipment in the data centre:
• vendor;
• model, type;
• number of CPUs, cores;
• computational power;
• electrical consumption (idle, full load);
• heat dissipation;
• weight, size.
Inventory database covering all equipment of the data centre:
• per data centre;
• per computer room;
• per business process;
• per application.
Database containing measurement values during business activities and during periods of non-activity:
• electrical consumption: per data centre, per computer room, per business process, per application, per end-user;
• computational needs: per computer room, per business process, per application, per user;
• cooling needs: per computer room, per business process, per application, per user.
6 Increasing the energy efficiency of IT infrastructures
6.1 General
Indications of the impact of some of the actions in this domain of energy efficiency are shown in annex A.
6.2 3 Tier Software Architecture Model
Figure 7 shows the 3 tier software architecture model adopted in data centre environments. This clause makes reference to the Tiers of this model - and these tiers should not be confused with those discussed in clause 4.

Figure 7: Typical architecture of 3 Tier model application
6.3 Energy efficiency solutions
6.3.1 Obsolete equipment
This involves the identification, turning-off and removal of all equipment without any activity such as old servers, modems and routers. This typically represents a small percentage of the installed equipment (possibly 5 %) but decommissioning of this equipment provides an immediate reduction in energy consumption without any reduction in service levels.
6.3.2 Replacement equipment
This involves the replacement of existing equipment from previous generations of technology with the most recent, more energy efficient IT equipment. The choice of server equipment should be directed by its ability to run virtualized operating systems. Blade server farms offer an excellent ratio of power consumption to computing power in a limited space. High-end servers or mainframes are, and for the foreseeable future will remain, necessary for the processing of large databases.
6.3.3 Power and capacity management
6.3.3.1 General
There are two separate rapid routes by which reductions of power consumption may be achieved by providing more efficient usage of existing resources within existing IT infrastructures without the need for changes to hardware. The routes are described as: • power management (see clause 6.3.3.2); • processing capacity management (see clause 6.3.3.3).
6.3.3.2 Power management
6.3.3.2.1 Activation of basic power management features
This involves the activation of any power management features within existing equipment. The application of dynamic allocation of equipment resources (see clause 6.3.3.3.5) provides additional beneficial effects on power management.
6.3.3.2.2 Activation of "sleep" mode
This involves the activation of sleep mode (that is, not a system shut-down of the equipment) during periods without application activity at certain times of the day, week or month, and can be applied to a variety of equipment depending upon its role within the 3 tier software architecture model of clause 6.2. It may even be possible to consider a full system shut-down of certain pieces of equipment. These solutions are applicable to all servers that are not in continuous use (such as backup and development servers) and to servers in Tiers 1 or 2 of the software architecture model of clause 6.2. Typically, these servers or applications respect a schedule of the type shown in table 10.

Table 10: Server schedules

08h00 - 20h00: activity TP; Tier 1: user connections; Tier 2: application; Tier 3: database connections
20h00 - 08h00: activity backup/batches; Tier 1: any connection; Tier 2: inactive; Tier 3: backup/batch activity
Week-ends: no activity; Tier 1: any connection; Tier 2: inactive; Tier 3: backup/batch activity

An example of the potential savings is shown below.
EXAMPLE: An active, last generation X86 mono- or bi-processor server has a typical mean consumption of 240 W. The same server has a typical consumption of 80 W in "sleep" mode and 0 W when turned off. In table 11, 200 such servers were identified that could be turned off or put in sleep mode:
• for 8 hours per day;
• during weekends and public holidays.
Table 11 shows that the potential energy savings from activating sleep mode would be 31 % and from shut-down would be 46 %.

Table 11: Example saving calculation

No. of week-ends per year: W = 52
No. of public holidays per year: P = 9
No. of working week-days: D = 365 - 2W - P = 252
Total energy-reduction hours: R = 8D + 24(2W + P) = 4 730
Total energy consumption (without action): 8 760 x 240 W per server = 420 480 kWh for the 200 servers
Energy reduction with sleep mode: 160(8 760 - R) per server = 128 960 kWh (30,7 %)
Energy reduction with shutdown: 240(8 760 - R) per server = 193 440 kWh (46 %)

Figure 8: Example savings schematic (total annual consumption compared with consumption when servers are in sleep mode or shut down)
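The calculation in table 11 can be reproduced programmatically. The following informative Python sketch applies the formulas as given in the table; small differences from the tabulated figures arise from the rounding of R in the source:

ACTIVE_W, SLEEP_W = 240, 80   # typical consumption of an X86 server (active / sleep)
SERVERS = 200
HOURS_PER_YEAR = 8_760

weekends, holidays = 52, 9
working_days = 365 - 2 * weekends - holidays                   # 252
reduction_hours = 8 * working_days + 24 * (2 * weekends + holidays)

baseline_kwh = SERVERS * ACTIVE_W * HOURS_PER_YEAR / 1000      # 420 480 kWh

# Formulas as given in table 11
sleep_saving_kwh = SERVERS * (ACTIVE_W - SLEEP_W) * (HOURS_PER_YEAR - reduction_hours) / 1000
shutdown_saving_kwh = SERVERS * ACTIVE_W * (HOURS_PER_YEAR - reduction_hours) / 1000

print(baseline_kwh)                                                  # 420480.0
print(round(sleep_saving_kwh), round(100 * sleep_saving_kwh / baseline_kwh))        # ~129 000 kWh, ~31 %
print(round(shutdown_saving_kwh), round(100 * shutdown_saving_kwh / baseline_kwh))  # ~193 500 kWh, ~46 %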
6.3.3.2.3 Reduction of energy consumption of environmental control equipment
It may also be possible to reduce the energy consumption for environmental control, but the level of savings depends upon the type of cooling employed. If it is not possible to dynamically adjust the cooling air-flow rate, any savings would be insignificant. However, if the cooling air-flow rate can be adjusted dynamically, then the energy used to cool the servers could be reduced by up to 50 %.
6.3.3.3 Capacity management
6.3.3.3.1 General
Capacity management is the ongoing, operational process of estimating and allocating space, environmental needs, computer hardware, software and connection infrastructure resources to reflect the dynamic nature of data centre users and interactions. As shown in figure 9, capacity management addresses the following questions:
• is the data centre able to host new applications or services, or to support the growth of the activity?
• what is the capacity in terms of energy, space and cooling?
• what is the capacity in terms of storage, CPU, memory, I/O, ports, etc.?
Capacity management provides an exhaustive view of the real needs in terms of computational power and/or environmental capabilities through continuous management, measurement and monitoring of server and application activities. The objective of capacity management is to ensure that new capacity is added just in time to meet the anticipated need, but not so early that resources go unused for a long period. Successful capacity management, using analytical modelling tools (responding to "what will happen if" scenarios), implements trade-offs between present and future needs that prove to be the most cost-efficient overall. The emergence of new technologies, together with changes to business strategies and forecasts, requires capacity management to be under continual review. Effective capacity management supports the use of products that are modular, scalable and also stable and predictable in terms of support and upgrades over the life of the product. Capacity management has the following objectives:
• prediction and anticipation of future needs of the business due to both natural growth and new projects;
• implementation of actions on IT or the environment to provide adequate resources;
• adjustment of infrastructure usage to the real needs of the business, preventing waste due to over-sizing of applications;
• determination of equipment usage;
• preparation of consolidation initiatives (see clause 6.3.4).

Figure 9: Capacity management
6.3.3.3.2 Environmental capacity management
This requires the measurement and subsequent management of electrical, cooling and space needs. In many cases this information is obtained manually, directly by the data centre personnel. However, the best method is to apply software solutions.
6.3.3.3.3 Storage
This involves the use of shared data storage, active data compression and data de-duplication in order to maximize the utilization of storage capacity. The implementation of thin provisioning for storage, allocating just the right amount of disk space, is critical to the management of storage capacity.
6.3.3.3.4 Servers
This involves the use of existing equipment when additional server capacity is required. This approach is a step towards the consolidation initiatives of clause 6.3.4.
6.3.3.3.5 On-demand scalability for on-line business
This requires the implementation of pre-packaged virtual environments, including all the logical components necessary to run the application, and a "utility computing" tool to distribute them across the infrastructure taking account of, for example, the number of connections to the service (that is, one new server provisioned for every 200 connections). A critical aspect is that the automated system has to be able to remove the additional capacity as soon as the number of connections decreases. Virtualization is the main key to this, as each new server installed is a Virtual Machine.
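A minimal, informative Python sketch of the scaling rule described above (one virtual server per 200 connections, released again as demand falls); the function names and thresholds are illustrative assumptions, not part of the specification:

import math

CONNECTIONS_PER_VM = 200  # threshold taken from the example in the text above

def required_vms(active_connections: int, minimum_vms: int = 1) -> int:
    """Number of pre-packaged virtual environments needed for the current load."""
    return max(minimum_vms, math.ceil(active_connections / CONNECTIONS_PER_VM))

def scaling_action(current_vms: int, active_connections: int) -> int:
    """Positive result = VMs to provision; negative result = VMs to release."""
    return required_vms(active_connections) - current_vms

print(scaling_action(current_vms=2, active_connections=750))  # +2: provision two more VMs
print(scaling_action(current_vms=4, active_connections=300))  # -2: release two VMs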
6.3.4 Consolidation initiatives
6.3.4.1 Consolidation of servers
The consolidation of processing within existing servers is the best route towards reducing energy costs for a given level of service. The result of consolidation is a reduction in the number of servers, which has a direct impact on the IT infrastructure power requirements and a corresponding effect on the requirements for cooling and floor space. There are a number of types of consolidation that are covered in this clause:
• physical consolidation;
• virtualization;
• logical consolidation;
• application or Information Systems (IS) rationalization.
6.3.4.2 Physical consolidation
6.3.4.2.1 The process
Physical consolidation involves the gathering of stand-alone hardware within a physically more powerful container, as shown in figure 10, and can be achieved without using virtualization if the server technology allows partitioning features.

Figure 10: Physical consolidation - Virtualization
6.3.4.2.2 The effects
A physical consolidation programme has the following effects:
• reduction in the number of physical components (servers, storage arrays, robotics);
• savings on floor space, maintenance costs, cooling and power;
• Capex required for new hardware.
6.3.4.3 Virtualization
6.3.4.3.1 The process
Virtualization is a method of physical consolidation, has to be implemented on new generation servers, and is a pre-requisite for a shared infrastructure policy. Physical consolidation alone has no effect on the number of "logical" servers (that is, reducing the number of hardware units has no effect on the number of Operating System (OS) images or on software licences); its effect on the global IT TCO is therefore not significant.
6.3.4.3.2 The effects
Under certain conditions, virtualization can deliver energy reductions of 80 % on a selected set of servers.
6.3.4.3.3 Reduction of energy consumption of IT infrastructure
Table 12 provides a methodology to evaluate energy savings for a virtualized panel of servers. Other indirect savings could also be evaluated if the virtualization affects the cooling requirements in the computer room. In the majority of cases, virtualization will not have a positive effect on the PUE of the data centre, but it contributes to a reduction in total energy consumption.
NOTE: Any savings will be ineffective unless the old servers are electrically shut down.

Table 12: Virtualization savings profile

Number of servers to virtualize: X
Mean energy consumption per server (watt): W1
Total consumption in watt (X x W1): C1
Virtualization ratio: R
New server number (number of blades or boxes): X / R
Average consumption per VM, including storage (watt): W2
New consumption (X x W2): C2
Savings on energy (watt): S1 = C1 - C2
Savings on energy (%): S2 = S1 / C1

The following figures are provided by a major telecommunications organization that launched a virtualization project on X86 servers in production. The results are significant since, in addition to virtualization, logical consolidation (see clause 6.3.4.4) was also applied by reducing the number of operating systems. Generally, mean values for energy savings using virtual environments are 40 % to 60 % of the energy consumed by the servers before virtualization. In this example, the aim was to consolidate the many physical servers (170) of one multi-server application into new generation technology servers associated with a virtualization tool. The legacy servers were 2-CPU Intel X86 "racked" servers from the previous generation, each containing one image of an operating system and one application instance. The new servers were Intel X86 4-CPU, 2-core blade servers, racked. A consolidation ratio of 12 (meaning that one new physical server contains at least 12 virtual servers) was achieved. Figure 11 shows the savings in electrical consumption that resulted (from 49 300 W before virtualization to 5 358 W after virtualization).

Figure 11: Electrical savings with virtualization
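An informative Python sketch of the table 12 methodology, using the figures from the example above (170 servers, 49 300 W before virtualization, 5 358 W after) to check the resulting savings; the per-server values W1 and W2 are derived from those totals and are not separately stated in the source:

def virtualization_savings(x: int, w1: float, w2: float, r: float):
    """Table 12 methodology: X servers at W1 watts each, consolidated with
    virtualization ratio R into VMs consuming W2 watts each (incl. storage)."""
    c1 = x * w1              # total consumption before virtualization (watt)
    c2 = x * w2              # total consumption after virtualization (watt)
    new_servers = x / r      # number of blades or boxes required
    s1 = c1 - c2             # savings on energy (watt)
    s2 = 100.0 * s1 / c1     # savings on energy (%)
    return c1, c2, new_servers, s1, s2

c1, c2, new_servers, s1, s2 = virtualization_savings(170, 49_300 / 170, 5_358 / 170, 12)
print(round(c1), round(c2), round(new_servers, 1), round(s1), round(s2))
# 49300 5358 14.2 43942 89  (i.e. roughly 89 % savings on the virtualized panel)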
6.3.4.3.4 Reduction of energy consumption of environmental control equipment
See clause 6.3.3.2.3.
6.3.4.4 Logical consolidation
6.3.4.4.1 The process
Logical consolidation aims to decrease the number of logical servers (operating system images), as shown in figure 12, and so make savings on licence fees in addition to reducing the number of physical servers.

Figure 12: Logical consolidation

This is recommended for existing servers and can, without any Capex, provide better usage of legacy equipment. However, the latest generation of servers is recommended when creating a new data centre. Virtualization and logical consolidation can be implemented in tandem, providing a cumulative benefit. There are three methods of achieving logical consolidation:
• Homogeneous: only servers containing the same logical components of the same application (same web access servers (WAS), web server, etc.) are consolidated in the same OS.
• Semi-homogeneous: only servers containing the same logical components, but from different applications (same WAS, web server, etc.), are consolidated in the same OS.
• Heterogeneous: different logical components, from different applications (this method is not the most popular and is implemented only in certain cases, with specific rules).
In all cases, logical consolidation has a pre-requisite of defining consolidation rules and criteria that address, amongst other issues:
• application sensitivity;
• application type;
• daily schedule;
• resource usage measurement (CPU, inputs/outputs (IO), memory, etc.);
• links with other applications.
6.3.4.4.2 The effects
• increased computational load of existing servers;
• reduction in the number of logical servers (OS instances) through a strong sharing policy, reducing software and maintenance fees;
• improved physical consolidation score, meaning fewer servers and all the related benefits in energy, cooling, etc.;
• medium effect on the global IT TCO, due to savings on licences.
6.3.4.5 Application consolidation
6.3.4.5.1 General
This is the highest level of consolidation initiative. It addresses the problem not from the infrastructure layer but from the business-process layer. As shown in figure 13, it is a "top down" approach, in contrast to the "bottom-up" approach of physical or logical consolidation. This is not the easiest route, but it is the most profitable in terms of savings in all domains.
6.3.4.5.2 The process
There are several ways to undertake application consolidation but, in all cases, the process has to be led by business owners and developers.

Figure 13: "Top Down" approach for application consolidation

In the majority of cases, application consolidation is not driven only by energy savings. Nevertheless, it is the best way to achieve the maximum reduction in energy consumption since it addresses the primary causes of energy consumption inflation: too many applications and too much complexity, redundancy, data duplication, etc. Such a programme is more commonly implemented to positively impact TCO, improve quality of service, reduce "time to market" and enhance security.
6.3.4.5.3 The effects
• Build business process-oriented or service-oriented dedicated infrastructures, integrating scalability, availability, agility, etc.
• Can be achieved by consolidating applications from the same vendor, from the same business owner or from the same process onto one dedicated infrastructure.
• Boost physical and logical consolidation scores by decreasing complexity and redundancies.
• Develop a consolidated and unique vision of the applications relating to the same process in order to propose one SLA per process.

7 Reducing the energy demand of environmental control systems
7.1 General
This clause describes the approaches that may be employed to reduce the energy demand of environmental control systems within the data centre, which typically represents 35 % to 40 % of total energy consumption in an Uptime Institute Tier 3 data centre. Some of the approaches are applicable to legacy data centres while others are more likely to be applied in new data centres. Further details are provided in clauses 10 and 11. The present document adds to the information provided in TR 102 489 [i.1]. Indications of the impact of some of the actions in this domain of energy efficiency are shown in annex A.
7.2 Energy reduction solutions
7.2.1 Measurement of thermal behaviour
In order to enable reductions in energy usage it is important to determine the thermal patterns within the data centre by using available thermal measurement software tools. These may be deployed without significant Capex since the costs are restricted to the fees for the software and the Opex for the installation and customization.
7.2.2 Improvement of cooling efficiency
7.2.2.1 Zonal approaches to thermal isolation
7.2.2.1.1 General
The typical design in most legacy data centres is that the Computer Room Air Conditioning (CRAC) units force cold air under a raised floor and into the cabinets, and draw off the hot air leaving the top of the cabinets. However, if the hot air and cold air become mixed, poor airflow (aeraulic) management results, producing hot spots within the computer room. This can be shown by taking infra-red photographs of the room. Some simple and low cost actions can be taken to avoid this, which generate non-negligible savings on cooling and have a positive effect on the PUE.
7.2.2.1.2 Hot aisle and cold aisle segregation
This approach creates areas within the data centre that are designated as "hot aisles" and "cold aisles". Rows of cabinets are arranged so that the fronts of the cabinets face each other (cold aisles) and, in the adjacent rows, the rears of the cabinets face each other (hot aisles). This segregation is shown in figure 14. Cold air is drawn in through the front of the cabinets (from the cold aisles) and expelled through the rear of the cabinets into the hot aisles. Infrastructure design, installation and operational procedures are critical to maximizing the benefit of basic hot aisle and cold aisle solutions (see clause 8).