2.2 Informative references
The following referenced documents are not essential to the use of the present document but they assist the user with regard to a particular subject area. For non-specific references, the latest version of the referenced document (including any amendments) applies.

[i.1] European Commission: "DG-JRC Code of Conduct on Energy Consumption of Broadband Equipment".
[i.2] CENELEC EN 50173-1: "Information technology - Generic cabling systems - Part 1: General requirements".
[i.3] CENELEC EN 50173-2: "Information technology - Generic cabling systems - Part 2: Office premises".
[i.4] CENELEC EN 50173-5: "Information technology - Generic cabling systems - Part 5: Data centres".
[i.5] CENELEC EN 50174-1: "Information technology - Cabling installation - Part 1: Installation specification and quality assurance".
[i.6] CENELEC EN 50174-2: "Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings".
[i.7] CENELEC EN 50174-3: "Information technology - Cabling installation - Part 3: Installation planning and practices outside buildings".
[i.8] ETSI TS 102 973: "Access Terminals, Transmission and Multiplexing (ATTM); Network Termination (NT) in Next Generation Network architectures".
[i.9] ISO/IEC 24764: "Information technology - Generic cabling systems for data centres".
[i.10] ETSI TS 105 174-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment - Energy Efficiency and Key Performance Indicators; Part 1: Overview, common and generic aspects".
[i.11] ETSI TS 105 174-2-2: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment - Energy Efficiency and Key Performance Indicators; Part 2: Network sites; Sub-part 2: Data centres".
[i.12] ETSI TR 105 174-5-2: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment - Energy Efficiency and Key Performance Indicators; Part 5: Customer network infrastructures; Sub-part 2: Office premises (single-tenant)".
[i.13] ETSI TR 105 174-4: "Access, Terminals, Transmission and Multiplexing (ATTM); Broadband Deployment - Energy Efficiency and Key Performance Indicators; Part 4: Access networks".
3 Definitions and abbreviations
3.1 Definitions
For the purposes of the present document, the following terms and definitions apply:

application: system, with its associated transmission method, that is supported by telecommunications cabling (this corresponds to a Layer One application in the OSI 7-layer model)

Broadcast Communication Technology (BCT) application: system, with its associated transmission method, using the HF band (3 MHz to 30 MHz), the VHF band (30 MHz to 300 MHz) and the UHF band (300 MHz to 3 000 MHz), dedicated to the transmission of sound radio, TV and two-way data services, as well as for in-home inter-networking
NOTE: See EN 50173-1 [i.2], modified.

BCT service: transmission of sound radio, TV and two-way data
NOTE: See EN 50173-1 [i.2], modified.

Control, Command and Communications in Building (CCCB) application: system, with its associated transmission method, dedicated to providing appliance control and building control
NOTE: See EN 50173-1 [i.2], modified.

CCCB services: appliance control and building control
NOTE: See EN 50173-1 [i.2], modified.

Information Communication Technology (ICT) applications: system, with its associated transmission method, for the communication of information

ICT services: creation, communication, dissemination, storage and management of information

network convergence: ability of a network, by virtue of the applications it supports, to deliver multiple ICT, BCT and CCCB services
3.2 Abbreviations
For the purposes of the present document, the following abbreviations apply:
BCT Broadcast Communication Technology
CCCB Control, Command and Communications in Building
CGIC ETSI CLC Co-ordination Group on Installations and Cabling
DSL Digital Subscriber Line
ENI External Network Interface
ENTI External Network Termination Interface
EO Equipment Outlet
FTTB Fibre To The Building
HF High Frequency
ICT Information and Communication Technology
KPI Key Performance Indicator
LDP Local Distribution Point
MD Main Distributor
NGN Next Generation Network
OIE Operator Independent Equipment
OSE Operator Specific Equipment
UHF Ultra High Frequency
VHF Very High Frequency
ZD Zone Distributor
4 Customer networks in data centres
4.1 Overview of data centre network infrastructures
4.1.1 General
Customer data centres range in size from relatively small areas contained in offices or industrial premises (perhaps containing only a telephony switch and a small number of servers) to sophisticated enterprise data centres of a size and complexity equivalent to those described in TR 105 174-5-2 [i.12]. However, in virtually all cases the energy consumption of information technology equipment is the major element of the overall energy consumption of the area designated as the data centre.
4.1.2 Network convergence
Within customer data centres, the range of networks has, in the past, reflected the diversity of the services, with ICT services being delivered over a variety of cabling infrastructures for specified computer-to-computer or computer-to-peripheral connections. The networks within data centres include the Ethernet solutions of the type found in other premises for the delivery of ICT services. However, other networks, such as Fibre Channel, are also used in data centres for specific purposes in specific areas. These networks are supported by the transmission performance offered by generic cabling (see clause 4.2.1). CENELEC TC215 has developed EN 50173-5 [i.4], covering the design and specification of generic cabling within data centres, which can be applied within all data centres, although not as a complete replacement for all cabling.
4.2 Infrastructure standardization activities
4.2.1 Generic cabling designs in accordance with EN 50173-5
NOTE: EN 50173-5 [i.4], published in 2007, has a similar scope, and is intended to be technically equivalent, to ISO/IEC 24764 [i.9] produced by ISO/IEC JTC1 SC25.
4.2.1.1 Infrastructure layers
EN 50173-5 [i.4] specifies a single-layer infrastructure for data centre cabling as shown in figure 1.

Figure 1: "Data centre"-specific cabling infrastructure of EN 50173-5 [i.4]
(figure not reproduced: it shows the ENI, BEF, MD, ZD, LDP and EO elements of the data centre, together with a distributor and an equipment/telecommunications room in accordance with EN 50173-1)

Within the area designated as the data centre, the infrastructure is fed from a Main Distributor (MD) which is connected to one or more Zone Distributors (ZD). Information technology equipment may be housed at any distributor as required by the networked application. The final distribution to the servers and other peripheral equipment is from the ZD to the Equipment Outlets (EO). A Local Distribution Point (LDP) may be installed between an EO and a ZD to provide a point of administration, but does not house active information technology equipment. Connections from the data centre are:
• external: via the External Network Interfaces (ENI), allowing connection of either an MD or a ZD to the access network;
• internal: via connection to a generic cabling distributor as relevant to the type of premises within which the data centre is housed (if appropriate). The computer room may contain generic cabling for general ICT services within the room, which would be designed in accordance with EN 50173-2 [i.3] as described in TR 105 174-5-2 [i.12].

The hierarchical nature of the data centre cabling infrastructure is shown in figure 2 (as in EN 50173-5 [i.4]).
Figure 2: Cabling topology of EN 50173-5 [i.4]
(figure not reproduced: it shows the network access, main distribution and zone distribution cabling subsystems linking the ENIs, MD, ZDs, LDPs and EOs, with optional cables and a distributor in accordance with EN 50173-1)
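The MD-to-ZD-to-EO hierarchy described above (with an optional LDP between a ZD and its EOs) can be sketched as a simple data model. This is an illustrative sketch only; the class names and structure are ours, not part of EN 50173-5.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the hierarchy described in the text:
# MD -> ZD -> (optional LDP) -> EO. Names are hypothetical, not normative.

@dataclass
class EquipmentOutlet:
    name: str

@dataclass
class ZoneDistributor:
    name: str
    outlets: List[EquipmentOutlet] = field(default_factory=list)
    via_ldp: bool = False  # an LDP may sit between the ZD and its EOs

@dataclass
class MainDistributor:
    name: str
    zones: List[ZoneDistributor] = field(default_factory=list)

# A small example topology: one MD feeding two ZDs.
md = MainDistributor("MD", [
    ZoneDistributor("ZD1", [EquipmentOutlet("EO1"), EquipmentOutlet("EO2")], via_ldp=True),
    ZoneDistributor("ZD2", [EquipmentOutlet("EO3")]),
])
total_outlets = sum(len(zd.outlets) for zd in md.zones)
```

The single-layer character of the infrastructure shows up here as the fixed depth of the model: every EO is reached through exactly one ZD fed from the MD.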
4.2.1.2 Cabling
The main and zone distribution cabling subsystems of figure 2 shall be provided by either balanced or optical fibre cabling. EN 50173-5 [i.4] Amendment 1:2009 requires that the balanced cabling is capable of providing a minimum transmission performance (defined as Class EA of EN 50173-1 [i.2]) which is capable of supporting 10BASE-T, 100BASE-T, 1000BASE-T and 10GBASE-T applications. EN 50173-5 [i.4] Amendment 1:2009 also requires that, where the main or zone distribution cabling subsystems do not exceed 300 metres in length, the minimum performance of the cabled multimode optical fibre is Category OM3, which is capable of supporting 10GBASE-SR/SW applications.
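The two minimum-performance requirements above can be captured in a small lookup, useful for instance when checking a planned channel. This is an illustrative sketch of the rules as stated in the text, not part of EN 50173-5 itself.

```python
# Applications the text states are supported by Class EA balanced cabling.
BALANCED_CLASS_EA = {"10BASE-T", "100BASE-T", "1000BASE-T", "10GBASE-T"}

def om3_supports_10gbase_sr(channel_length_m: float) -> bool:
    """Per the text, cabled OM3 multimode fibre supports 10GBASE-SR/SW
    applications where the main or zone distribution cabling subsystem
    does not exceed 300 metres in length."""
    return channel_length_m <= 300.0

# A 250 m zone distribution channel on OM3 fibre is within the stated limit.
assert om3_supports_10gbase_sr(250)
# A 320 m channel is not covered by the stated OM3 requirement.
assert not om3_supports_10gbase_sr(320)
```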
4.2.2 Cabling installation in accordance with EN 50174 standards
EN 50174-1 [i.5], EN 50174-2 [i.6] and EN 50174-3 [i.7] contain requirements and recommendations for the specification, quality assurance, planning and installation practices that apply to all information technology cabling media in all premises. Clause 8 of EN 50174-2 [i.6] specifies the additional/amended requirements and recommendations that apply within the buildings containing data centres.
4.3 Network access infrastructure
The connection between the operator's access network and the customer's data centre is made at the ENI as shown in figure 1 (or the equivalent in non-generic cabling). The ENI is an interface within the premises which is provided for connection to the access network. The physical nature of this interface is specified in EN 50173-5 [i.4] for both balanced and optical fibre cabling. The ENI is the boundary between the network access cabling and the access network, as shown by the dotted line in figure 3.

Figure 3: Network access cabling and equipment
(figure not reproduced: it shows the access network connected, via the ENTI and either OIE or OSE distribution equipment, to the network access cabling of EN 50173-5, with the service interfaces marked)

The network telecommunications equipment typically comprises a passive interface (ENTI) and an optional item of apparatus. The apparatus may be specific to the network operator (OSE) or may be operator independent (OIE), as described in the following examples:
• OIE: DSL modem, FTTB modem (where an interoperability standard exists).
NOTE: See TS 102 973 [i.8].
• OSE: FTTB modem (where no interoperability standard exists).

The OSE is part of the access network whereas the OIE is part of the customer premises infrastructure. In most cases the OIE, or some part of it, may be powered from the access network. In some cases the OSE may be powered from the customer premises. For this reason, the energy efficiency of the access network takes into account any power required to maintain the functionality at the service interface, whether or not it is part of the access network (and is covered in TR 105 174-4 [i.13]). The EU Code of Conduct on Energy Consumption of Broadband Equipment [i.1] provides a framework for ensuring the operational energy efficiency of network telecommunications equipment.
5 Energy efficiency
5.1 General
TS 105 174-2-2 [i.11] provides information regarding Key Performance Indicators (KPIs) for the energy efficiency of data centres. Although the customer data centre is outside the scope of TS 105 174-2-2 [i.11], its requirements and recommendations may be applied to these customer networks and their associated infrastructure. The comparative viability of implementing such solutions will depend on the size and nature of the customer data centre.

Document history: V1.1.1, October 2009: Publication.
1 Scope
The present document addresses the opportunities and challenges offered by the use of lamp-posts to provide facilities supporting services required by sustainable digital multiservice cities and communities. The replacement of existing luminaires by LED light sources offers an opportunity to increase the functionality provided by the lamp-posts - beginning with improved operational control of the lighting provided. However, additional functionality can be supported by simultaneous installation of an electronics package to enable the lamp-post to host sensing devices. The present document describes the functions to be supported by this package together with consideration of power supply to any hosted sensing devices. A more comprehensive replacement approach includes the incorporation of 5G services by the separate installation of wireless network components acting as a Remote Radio Unit (RRU). The present document describes the technical challenges associated with the physical installation, provision of power, cabling and other infrastructures necessary to meet the required level of availability for these services.
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found at https://docbox.etsi.org/Reference/.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are necessary for the application of the present document.

[1] EN 40-1:1991: "Lighting columns; Part 1: Definitions and Terms" (produced by CEN).
[2] ETSI EN 303 472 (V1.1.1): "Environmental Engineering (EE); Energy Efficiency measurement methodology and metrics for RAN equipment".
[3] IEC 60050-601: "International Electrotechnical Vocabulary (IEV) - Part 601: Generation, transmission and distribution of electricity - General".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are not necessary for the application of the present document but they assist the user with regard to a particular subject area.

[i.1] EN 50173-1: "Information technology - Generic cabling systems - General requirements" (produced by CENELEC).
[i.2] EN 50174-3: "Information technology - Cabling installation - Installation planning and practices outside buildings - General requirements" (produced by CENELEC).
[i.3] HD 60364 series: "Electrical Installations for Buildings" (produced by CENELEC).
[i.4] IEC 62368-3: "Audio/video, information and communication technology equipment - Safety - Part 3: DC power transfer through information technology communication cabling".
[i.5] IEEE 802.3bt™: "IEEE Standard for Ethernet Amendment 2: Physical Layer and Management Parameters for Power over Ethernet over 4 pairs".
[i.6] IEEE 802.3cg™: "10Mb/s Single Pair Ethernet".
[i.7] Recommendation ITU-T G.652: "Characteristics of a single-mode optical fibre and cable".
[i.8] Recommendation ITU-T G.657: "Characteristics of a bending-loss insensitive single-mode optical fibre and cable".
[i.9] Recommendation ITU-T K.50: "Safe limits for operating voltages and currents in telecommunication systems powered over the network".
[i.10] IEC 61140: "Protection Against Electric Shock - Common Aspects for Installation and Equipment".
[i.11] IoTUK group: "The Future of Street Lighting".
NOTE: Available at https://iotuk.org.uk/wp-content/uploads/2017/04/The-Future-of-Street-Lighting.pdf.
[i.12] IEC 60479-2: "Effects of current on human beings and livestock - Part 2: Special aspects".
[i.13] ETSI TS 110 174-1: "Access, Terminals, Transmission and Multiplexing (ATTM); Sustainable Digital Multiservice Cities (SDMC); Broadband Deployment and Energy Management; Part 1: Overview, common and generic aspects of societal and technical pillars for sustainability".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:

backhaul (network): fixed network interconnecting the BaseBand Units (BBUs), collecting/distributing data traffic from/to those BBUs, to/from core network access points

Base Station (BS): Network Telecommunications Equipment (NTE) which serves one or more cells within a coverage area of a mobile access network

big data: structured, semi-structured and unstructured data that has the potential to be mined for information and used in machine learning projects and other advanced analytics applications

core network: functional elements (that is, equipment and infrastructure) that enable communication between operator sites (OSs) or equivalent ICT sites

Enhanced Mobile Broadband: one of three primary 5G New Radio (NR) use cases defined by the 3GPP as part of its SMARTER (Study on New Services and Markets Technology Enablers) project

fast data: application of big data analytics to smaller data sets in near-real or real-time in order to solve a problem or create business value
NOTE: The goal of fast data is to quickly gather and mine structured and unstructured data so that action can be taken. As the flood of data from sensors, actuators and Machine-to-Machine (M2M) communication in the IoT continues to grow, it has become more important than ever for organizations to identify what data is time-sensitive and should be acted upon right away and what data can sit in a database or data lake until there is a reason to mine it.

front-haul (network): network interconnecting the BaseBand Units (BBUs) or antennas connected to them, collecting/distributing data traffic from/to those BBUs, to/from Remote Radio Units (RRUs)

lamp-post: lighting column and the lantern(s) it supports

lantern: protective case for a light fitting

lighting column: support intended to hold one or more lanterns, consisting of one or more parts: a post, possibly an extension piece and, if necessary, a bracket
NOTE 1: It does not include columns for catenary lighting.
NOTE 2: SOURCE: EN 40-1:1991 [1], clause 2.1.

low voltage: set of voltage levels used for the distribution of electricity and whose upper limit is generally accepted to be 1 000 V for alternating current
NOTE 1: 1 500 V for direct current.
NOTE 2: SOURCE: IEC 60050-601 [3].

Massive IoT: applications that are less latency sensitive and have relatively low throughput requirements, but require a huge volume of low-cost, low-energy-consumption devices on a network with excellent coverage

mid-haul (network): network interconnecting the BaseBand Units (BBUs) to/from antennas which provide wireless connections to Remote Radio Units (RRUs)

Network Telecommunications Equipment (NTE): equipment between the boundaries of, and dedicated to providing direct connection to, core and/or access networks

Radio Access Network (RAN): telecommunications network in which the access to the network (connection between user equipment and network) is implemented over the air interface
NOTE: SOURCE: ETSI EN 303 472 [2].

safety class: class rating in electrical appliances
NOTE 1: See IEC 61140 [i.10].
NOTE 2: Class I is based on the presence of an earthing terminal while Class II, also known as double insulated, does not need an earthing terminal.

urban data platform: facility to integrate the large amount of data in cities, including energy, transport, crowdsourced data, etc., and provide a holistic view of the information with the aim of improvement and development of innovative smart city services
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
3GPP 3rd Generation Partnership Project
5G Fifth Generation
AC Alternating Current
AWG American Wire Gauge
BBU BaseBand Unit
BS Base Station
CPRI Common Public Radio Interface
C-RAN Centralized Radio Access Network
DC Direct Current
eCPRI evolved Common Public Radio Interface
eMBB enhanced Mobile BroadBand
EU End Users
FWA Fixed Wireless Access
IEEE Institute of Electrical and Electronics Engineers
IoT Internet of Things
IT Information Technology
LED Light Emitting Diode
LiFi Light Fidelity (wireless technology)
LoRa™ Long Range (wireless technology)
LTE-M Long Term Evolution for Machines
LV Low Voltage
LVDC Low Voltage Direct Current
M2M Machine-to-Machine
MIMO Multiple Input-Multiple Output
mmWave millimetre Wave
MNO Mobile Network Operator
NB-IoT Narrow Band Internet of Things
NFV Network Functions Virtualisation
NR New Radio
NSP Network Service Platform
PA Power Amplifier
PoE Power over Ethernet
PtMP Point to MultiPoint
PtP Point to Point
QoS Quality of Service
RAN Radio Access Network
RF Radio Frequency
RFT-C Remote Feeding Telecommunication - Current limited
RFT-V Remote Feeding Telecommunication - Voltage limited
RRU Remote Radio Unit
URLLC Ultra-Reliable and Low Latency Communications
UPS Uninterruptable Power System
USB Universal Serial Bus
V-RAN Virtual Radio Access Network
VAC Voltage Alternating Current
VCO Voltage-Controlled Oscillator
VDC Voltage Direct Current
WiFi® Wireless Fidelity (wireless technology)
4 The path towards Smart street lighting
4.1 General
It is estimated that there are more than 60 million lamp-posts, or equivalent structures, supporting lanterns providing lighting for roads and other spaces across Europe.

NOTE: The figures in the present document show conventional lamp-posts but should be considered to represent any form of supporting structure for lanterns.

The current trend to replace the lights within the lanterns with LED technology offers considerable benefits to the community which are outside the scope of the present document. However, the replacement process offers the opportunity to make other changes to the components within the lamp-post to enable the provision of additional services of both direct and indirect benefit to the community. Typical examples of such services are shown in Figure 2.

Figure 2: Examples of lamp-post service provisioning

Services of direct benefit to the community would be "smart" lighting, environmental sensing, image sensing, signage and sound. The power and data enabling these services to be operated could be provided over the infrastructure already used to deliver power to the lamp-posts. Alternatively, the data could be provided over connections to existing wireless networks of third-party operators. Independent of its delivery mechanism, the data provided to and from the lamp-post is used directly by the community, and the cost of producing, transporting and interpreting that data is borne by the community. Indirect benefit to the community results from the revenue-earning opportunity of sharing the lamp-post, as part of a widely distributed infrastructure, with third-party providers such as those offering wireless telecommunications and vehicle charging. The demands for availability of data and power differ between such third-party services and also differ from those of the primary function of the lamp-post and the other services described above.
The present document specifically addresses the use of lamp-posts to host "direct benefit" services relating to sensing devices and "indirect benefit" services relating to the provision of 5G connectivity between End Users (EUs) and the Radio Access Network (RAN), via the RRUs mounted on the poles and the onward connectivity to the BaseBand Unit (BBU). The main advantages offered by lamp-posts for 5G connectivity are:
• a well-defined and ubiquitous distribution within urban environments which matches the demands for radio coverage from the RRU - providing reduced deployment costs and timescales;
• a height which facilitates propagation of the radio signal - both extending the coverage radius of each cell and minimizing the impairment produced by large vehicles such as public transport and goods vehicles.

However, the dramatic differences in the requirements for the supply of data and power to the lamp-posts for sensing devices as compared to 5G connectivity should not be underestimated. Table 2 provides a non-exhaustive list of the service groups and the detailed applications that could be supported by the 5G RRUs hosted by the lamp-posts; those applications are differentiated as "Massive IoT", "enhanced Mobile BroadBand (eMBB)" and "Ultra-Reliable and Low Latency Communications (URLLC)".

(Figure 2 shows: vehicle charging; environmental sensing - air quality, noise, flooding; "smart" lighting - LED, photocell control, dimming, on-demand; image sensing - proximity, footfall monitoring, parking monitoring, public security; sound - music, alerts; signage - directions, traffic information, civic information, advertising; and 5G services.)

Table 2: Service areas and applications
• Infotainment: Gaming (Ultra High Definition); Video (Ultra High Definition); Virtual reality; Augmented reality; Smart gadgets (toys, smartwatches, etc.); Robotics (home).
• Home: Energy management; Smart sensors (gas, electricity, water, etc.); Appliance control; Intrusion detection; Remote video watching; Security issues detection (leaks, fire, etc.).
• Smart city: Utility monitoring (gas, electricity, water, etc.); Street smart light poles; Public safety watching; Traffic control; Parking management; Waste management.
• Health: Road and buildings status; Fall detection; Remote diagnostic; Health monitoring; Robotic surgery.
• Environment: Medication (management); Air quality; Water quality; Noise measurement; Radiations; Energy use; Leakages (floods, chemical, etc.).
• Industry: Drone watching; Asset and stock management; Robotic control - production automation; Production control and safety.
• Agriculture: Machine monitoring; Soil monitoring (water, nutriments, etc.); Crop yield; Storage yield management.
• Transports: Green house monitoring; Traffic regulation; Remote diagnosis; Autonomous vehicles management; Watching drone management.

A document entitled "The Future of Street Lighting" [i.11], published by the IoTUK group, refers to the evolution of the functions of lamp-posts as follows:
• Stage 1: Switching to LED bulbs;
• Stage 2: Connected street lighting;
• Stage 3: New service development.

The present document adopts these terms. Clauses 4.2, 4.3 and 4.4 explain the meaning and boundaries of each stage of evolution.
4.2 Stage 1: Switching to LED bulbs
Stage 1 is simply the replacement of the existing-technology lighting fixtures with those using LED technologies. LEDs offer longer lifetimes, lower energy consumption and reduced maintenance costs. Savings on energy consumption are estimated to be 50 %. This is not of direct interest in the present document except that it defines an opportunity to initiate the other changes to the lamp-post functionality offered in Stage 2 (clause 4.3) and Stage 3 (clause 4.4).
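The scale of the stated saving can be illustrated with a back-of-the-envelope calculation. The wattages and burn time below are assumed example values for a single lamp-post, not figures taken from the present document.

```python
# Hypothetical illustration of a 50 % energy saving from an LED retrofit.
# All input figures are assumed example values, not from the document.
conventional_w = 150.0   # assumed power of the existing luminaire
led_w = 75.0             # assumed power of the LED replacement
hours_per_year = 4000    # assumed annual burn time

saving_kwh = (conventional_w - led_w) * hours_per_year / 1000.0
saving_pct = (conventional_w - led_w) / conventional_w * 100.0
print(saving_kwh, saving_pct)  # 300.0 kWh per lamp-post per year, 50.0 %
```

Multiplied across the estimated 60 million European lamp-posts mentioned in clause 4.1, even modest per-pole savings motivate the Stage 1 replacement programme.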
4.3 Stage 2: Connected street lighting
Stage 2 overlays Stage 1 with limited-bandwidth network connectivity and control systems to:
• remotely monitor lighting performance: maintenance costs are reduced by detecting and raising a service alert when there is a problem with an LED;
• change light levels to match ambient light levels: street lights are switched on when fog or rain creates low daylight levels, or dimmed when there is too much reflected glare (e.g. from snow cover);
• change light levels to match local activity: street lights integrated with motion sensors are switched on when pedestrians or cars pass;
• change light levels to alert the public: public safety personnel can increase lighting levels, or have lights flash, at locations where accidents or emergencies have occurred;
• measure energy consumption: measuring the consumption of each lamp-post.

This level of control uses an "urban data platform" to monitor and manage the performance of the lighting provided on the lamp-post. The network connectivity varies and includes both wired solutions and wireless connections, and the data represents a form of the Massive IoT mentioned in Table 2. The present document adds to this concept by specifying in clause 5.1 the functionality of the electronic circuitry necessary to provide a connection to this urban data platform from sensors attached to lamp-posts, providing data relating to:
• intermittent polling of climatic conditions:
- temperature;
- pressure;
- humidity;
- precipitation;
- wind;
- ultraviolet (UVA/UVB) radiation;
• intermittent polling of environmental Key Performance Indicators:
- noise;
- air quality: carbon dioxide; nitrogen dioxide; fine particulate matter;
• continuous monitoring:
- instances of peak noise (e.g. gun-shot);
- video surveillance (e.g. traffic control).

It is recognized that not all lamp-posts will host the same (or any) sensors, but the majority of the above sensors are subject to strict requirements for their location.
The electronic circuitry above is required to be able to communicate directly with the urban data platform using either the network solutions employed to manage the lighting, the 5G network described in clause 4.4, or other networks (e.g. 2G, 3G, 4G, LoRa™ and SIGFOX™).
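The split above between intermittently polled readings and continuously monitored events can be sketched as follows. The sensor names and the publish interface to the urban data platform are hypothetical placeholders; the present document does not specify either.

```python
import time

# Sketch of the Stage 2 data flows described in the text: intermittent
# polling of climatic/environmental sensors versus continuous monitoring
# that pushes events (e.g. peak noise) immediately. Names are illustrative.

POLLED_SENSORS = ["temperature", "humidity", "noise", "air_quality"]

def publish(platform, reading):
    """Stand-in for the wired or wireless uplink to the urban data platform."""
    platform.append(reading)

def poll_cycle(platform, read_sensor, now=time.time):
    """One intermittent polling pass over the attached sensors."""
    for name in POLLED_SENSORS:
        publish(platform, {"sensor": name, "value": read_sensor(name), "t": now()})

def on_peak_event(platform, name, value, now=time.time):
    """Continuous-monitoring path: events are pushed as soon as they occur."""
    publish(platform, {"sensor": name, "value": value, "event": True, "t": now()})
```

In a battery-powered deployment, the polling interval of `poll_cycle` would be stretched as far as the application allows, which is one reason the low-power wide-area technologies listed above are attractive.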
4.4 Stage 3: New service development
Stage 3 represents the full migration of the lamp-post infrastructure to support 5G connectivity to EUs, supporting as required the Massive IoT, eMBB and URLLC applications listed in Table 2. While existing lamp-posts may be served by a power supply suitable to support Massive IoT applications, the existing power supplies may not be continuously available. This may be adequate for the electronic circuitry of the package of electronics meeting the objectives of Stage 2 evolution, but is not adequate for those of Stage 3. It is a real challenge to provide the number of infrastructural components needed to distribute 5G connectivity (and the widespread installation of sensors) with the associated demands for reliability, security, ubiquity and Quality of Service (QoS). The costs and complexity of such deployments, involving many different stakeholders, can slow the 5G network implementation because:
• many more RRUs are required due to the short wavelengths used to enable the service demands, and RRUs need to be installed closer to each other and to the EUs compared to 3G and 4G solutions;
• very few (if any) lamp-posts will be served with a power supply of a quality and/or availability appropriate to meet the demands of 5G RRUs;
• the maintenance of service requires not only provision of power and data to each lamp-post which is separate from the existing provision, but may also require a network design that maintains the required services even if that provision of power and data fails at a given lamp-post.

The presence of multiple access networks and power supplies clearly represents a challenge for demarcation during installation and maintenance. The present document does not address vehicle charging, but the risk to service provision can be considered to be exacerbated if other parties are involved in providing other power supplies to and in the lamp-posts.
5 Functionality and availability
5.1 Stage 2
5.1.1 Functionality
5.1.1.1 Data connection
Figure 3 shows the complete functional set for the sensor circuitry. It is recognized that not all lamp-posts will host the same (or any) sensors, but the installation of a common circuit board capable of hosting all sensors offers advantages in terms of both cost and operational flexibility.

The present document does not specify the type of sensor devices or the interfaces between them and the data amalgamation circuitry. A common set of sensors has to be selected for all lamp-posts in order to define the data amalgamation circuitry shown schematically in Figure 3.

ETSI TS 110 174-2-2 V1.2.1 (2020-11)

Figure 3: Examples of lamp-post service provisioning

The data transport technology has to be able to support either a wired or a wireless connection to the urban data platform, using either the network controlling the lamp-post itself or a third-party operator's network (including the 5G connectivity implemented under Stage 3). Where battery-powered sensors are used, it is common to employ wide area, low power communications systems (e.g. NB-IoT, LTE-M, SIGFOX™, LoRa™) in order to maximize battery life and minimize the disruption and cost of battery replacement.
5.1.1.2 Power supply
There are a number of options, depending on the energy consumption of the sensors and of the communication system, including:

• an integrated battery - where sensors have particularly low energy consumption and communicate very limited amounts of data to the urban data platform;

• a local energy harvesting solution (e.g. solar panel) - a battery back-up is typically required as the energy to be harvested can be erratic;

• the power supply to the lamp-post - a battery providing back-up power is typically required as the supply may often be absent (e.g. during the day, during maintenance) and, in any case, such a solution is only practical for powering equipment with low energy consumption (e.g. < 1 W);

• an ad-hoc remotely fed power supply able to provide both the amount of energy needed (even tens of watts) and the needed service continuity - such systems can be those used for Stage 3.

It is considered that all the sensors on a given lamp-post would not require a power supply of more than 20 W unless specific requirements exist for video camera functionality (e.g. heating of camera enclosures, etc.).

The power provided to the sensors and the protocol used to poll data from the sensors can either be:

• proprietary, with each sensor potentially using a different supply voltage and interface protocol - this requires each sensor to be specified before the electronics package can be defined and designed; or

• implemented via existing standards such as Universal Serial Bus (USB), IEEE 802.3bt [i.5] or IEEE 802.3cg [i.6] - this provides much greater flexibility in terms of changing the type of sensors installed; however, it may result in increased power consumption, and physical dimensions, of the electronics package.

NOTE 1: Certain sensors may operate using power supplied from a local battery.
NOTE 2: Sensors may operate using power supplied from a local energy harvesting solution such as solar cell technology. In such cases, mechanical constraints should be taken into account, in particular the stresses the structure supporting the solar cell places on the lamp-post (see clause 5.1.2).
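As an illustration of the power-budget reasoning above, the following sketch sums the draw of a candidate sensor set against the 20 W ceiling. The per-sensor figures and the 20 % margin reserved for the amalgamation/transport circuitry are assumptions for illustration only, not values from the present document.

```python
# Illustrative sensor power budget for a lamp-post electronics package.
# All per-sensor figures below are assumptions; only the 20 W ceiling
# comes from the text above.
SENSOR_POWER_W = {
    "temperature": 0.1,
    "humidity": 0.1,
    "pressure": 0.1,
    "particulate_matter": 1.5,
    "noise": 0.5,
    "video_camera": 6.0,
}

BUDGET_W = 20.0  # suggested per-lamp-post ceiling (see text)

def within_budget(selected, margin=0.2):
    """Sum the draw of the selected sensors and check it against the budget,
    reserving `margin` of the budget for the data amalgamation and data
    transport circuitry (an assumed allowance)."""
    total = sum(SENSOR_POWER_W[name] for name in selected)
    return total, total <= BUDGET_W * (1 - margin)

total, ok = within_budget(["temperature", "humidity", "video_camera"])
```

Even the full sensor set stays well under the ceiling in this sketch, which supports the observation that the 20 W figure is only exceeded where special video camera requirements (e.g. enclosure heating) exist.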
5.1.2 Availability
The first element of availability to be addressed is the physical capability of the population of lamp-posts in a given city to support the mass of the sensors and any local power supply solutions. It should be noted that additional mass can change the behaviour of the lamp-post, cause collapse and represent a safety risk when a transverse load is applied (e.g. resulting from wind or traffic accidents).

Typical lamp-posts are fed from 230 VAC. There is no obvious problem with obtaining a power supply adequate to serve the sensor package via an AC-DC converter connected to an unswitched input to the lamp-post (not the output of the on/off switch for the light). The availability of the existing power supply to the lamp-post should be assessed to determine its impact on the functionality of the sensor package, taking the following into consideration:

• the majority of the data obtained via the sensors is not required on a continuous basis and, where necessary, a local battery may be applicable to maintain service following a power supply failure;

• if video surveillance (or another service with a continuous data feed) is to be supported, then the deployment plan should maintain the supply of data from adjacent lamp-posts.
5.2 Stage 3
5.2.1 Functionality
5.2.1.1 Data connection - front-haul and mid-haul networks
RRUs hosted on lamp-posts acting as micro-cells are suitable for a coverage area radius of 50 to 200 metres, while RRUs hosted on lamp-posts acting as mini-macro cells could cover an average area radius of 100 to 500 metres.

Data connection to an RRU on a lamp-post from a BBU is, as shown in Figure 4, via:

• optical fibre cabling as a front-haul technology, either in Point to Point (PtP), Point to MultiPoint (PtMP) or cascaded configurations, using the Common Public Radio Interface (CPRI) or an equivalent solution (see clause A.3);

• a mmWave (air interface) front-haul technology from an antenna connected to the BBU with a cabled mid-haul connection.

The BBU provides connection to the backhaul network. It is installed in a centralized location (see clause A.2.2) and provides service to multiple RRUs.

Figure 4: Data connection between BBU and RRU

Optical fibre front-haul (as shown in the schematic of Figure 5) offers the largest capacity support and scalability but requires some excavation for both data and power supply cabling unless aerial routes can be employed. The availability of existing underground pathways can represent a significant asset, limiting the amount of excavation necessary by the Mobile Network Operator (MNO).

mmWave solutions (as shown in the schematic of Figure 6) avoid the complexities during installation and operation of such fixed data cabling installations but require the installation of antennae at the BBU and RRU and also require some excavation for power supply cabling unless aerial routes can be employed. The transmission performance is determined by signal loss between the BBU and the RRU produced by trees, rain, fog, distance, snow and oscillation of the lamp-post due to excessive wind conditions.
Figure 5: Schematic of separate cabled data and power supply pathways

Figure 6: Schematic of wireless data and cabled power supply pathways
5.2.1.2 Power supply
Independent of the means of delivery of data to the RRU, each RRU requires a power supply. This has to be separate from the power supply to the lamp-post that supports the lights and the sensor electronics of clause 5.1. This segregation is critical to ensure that any maintenance or breakdown of a lamp-post will not affect the 5G network service continuity. Some excavation, independent of the front-haul technology applied, is required to ensure the provision of an adequate power supply (see Figure 5 and Figure 6).
5.2.2 Availability
5.2.2.1 General
The first element of availability to be addressed is the physical capability of the population of lamp-posts in a given city to support the mass of an RRU. It is recognized that not all light fittings are accommodated on street-mounted lamp-posts as shown in Figures 5 and 6 of the present document and some will be attached to, or suspended between, buildings and other structures. While the present document does not specify the physical characteristics of the RRU, it is probable that the mass of the RRU (typically in the range 5-20 kg) will be greater than that of any Stage 2 sensor implementation. Annex A provides information on the options and trade-offs for re-distributing the processing electronics of the RRU, which can reduce the mass of the RRUs.

The overall availability of service provided to a given EU is a combination of the availability of the data connection and the availability of the power supply to an RRU or a group of RRUs. The availability, and quality, of power from the utility grid fed directly to a lamp-post is unlikely to be adequate for the provision of 5G services. Indeed, it is at times of failure of the grid power supply that the services provided by the 5G network are of critical value. Unless proper powering and connectivity architectures are implemented, a single point of failure (such as damage to a connectivity or powering pathway caused by civil works on the road) can cause a widespread loss of service to multiple RRUs.

NOTE: The following explains why a direct power feed (i.e. without redundancy) would not be acceptable. For eMBB services, overall availability will be required to be in excess of "three nines" (99,9 %). This equates to 31 536 seconds of downtime per year.
This can be misleading since such downtime can be one period of 526 minutes of service failure or 100 periods of approximately 315 seconds. As an example, if a failure of the power supply to the RRU of 1 second could result in shutdown and the additional time for the RRU to recover its intended functionality was at least 5 minutes, then only 105 power outages per year would result in failure to meet the overall availability of 99,9 %. This would require the availability of the power supply to be a minimum of 99,99997 (i.e. better than "6 nines").

For URLLC services, such as autonomous driving, overall availability will be required to be in excess of "six nines" (99,9999 %). This equates to 31,5 seconds of downtime per year. If the additional time for an RRU to recover its intended functionality was at least 5 minutes, then no power outages could be allowed, i.e. the power availability would have to be 100 %.

It is therefore necessary to consider the provision of redundant architectures including:

• service to EUs using groups of RRUs fed from multiple BBUs;

• power supplies to the groups of RRUs from different sources;

• local back-up power to the RRU, dependent upon the type of power supply implemented.

The demand for increased levels of availability of both data and power supply can be met by implementing a ring approach to increase the diversity of connection, as shown schematically in Figure 7. This reduces the risk of disruption due to accidental damage to any one of the pathways comprising the ring.

Figure 7: Schematic of ring implementation of data and cabled power supply pathways
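The downtime arithmetic in the note above can be reproduced with a short calculation. Only the availability targets and the 5-minute recovery figure come from the text; the helper names are illustrative.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31 536 000 s

def downtime_budget_s(availability):
    """Allowed downtime per year (seconds) for a given availability target."""
    return SECONDS_PER_YEAR * (1 - availability)

def max_outages(availability, recovery_s, outage_s=1):
    """Largest number of short power outages per year that still meets the
    service availability target, when each outage of `outage_s` seconds also
    costs `recovery_s` seconds of RRU restart time."""
    per_event = outage_s + recovery_s
    return int(downtime_budget_s(availability) // per_event)

# eMBB: "three nines" with a 5-minute (300 s) recovery per 1 s outage.
embb = max_outages(0.999, recovery_s=300)      # 104: a 105th outage breaches it
# URLLC: "six nines" leaves no room for any outage with the same recovery time.
urllc = max_outages(0.999999, recovery_s=300)  # 0
```

This confirms the two conclusions in the note: the eMBB budget tolerates at most around a hundred short outages per year, while the URLLC budget tolerates none.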
5.2.2.2 Data connection
Examples of redundant data pathways are shown in Figure 5, where adjacent RRUs are fed from different BBUs, and Figure 6, where each RRU is fed by multiple mmWave antennae (also served by multiple BBUs). Both solutions are enhanced using the applicable ring structure of Figure 7.
5.2.2.3 Power supply
The provision of a power supply to the many RRUs in a reliable and cost-effective way is a major challenge. Figure 5 and Figure 6 show the RRUs on each side of the road being fed by separate power supplies from the two BBU networks. It should be noted that centralized power supplies could also be accommodated in other ad-hoc sites where required due to the topology, the availability of cabling and/or the reduction of the losses within the power supply infrastructure. Both solutions are enhanced using the ring structure of Figure 7. Independent of the power supply implemented, it is important that a local overload does not cause a loss of power supply to other equipment.

Clause 9 describes alternatives for the supply of power to RRUs but it is clear that:

• the reliability of power and the power quality supplied by the grid will be inadequate in most cases;

• the installation of local battery-based Uninterruptible Power Systems (UPS) for an individual RRU will be problematic due to the size, weight and difficulties of obtaining relevant permissions - and carries an associated risk of theft and vandalism;

• the installation of larger items of local back-up power, such as diesel generators for groups of RRUs, will be problematic due to public concerns regarding pollution and noise - and carries an associated risk of theft and vandalism.

With regard to BBUs, the cluster approach adopted in Figure 5 and Figure 6 implies larger facilities which could adopt both battery- and generator-based UPS, which can be protected against the risks of theft and vandalism by, for example, hosting them in a Central Office or equivalent structure.
6 RRU infrastructure
6.1 General
As shown schematically in Figure 8, the principal components of the RRU are the power supply converter, the Power Amplifier (PA), the Radio Frequency (RF) transceiver and the antenna.

Figure 8 shows the implementation for cabled front-haul technology (as discussed in this clause) and includes the opto-electronic converter. A mmWave technology implementation replaces the opto-electronic converter with an additional antenna and a mmWave converter.

Figure 8: RRU architecture using optical fibre front-haul technology
6.2 Power supply converter
The power supply converter adapts the input power provision to the needs of the electronic circuitry of the RRU. It includes any needed AC-DC and DC-DC electronics and incorporates overvoltage protection against induction from lightning strikes in the vicinity.
6.3 Power amplifier
The PA amplifies the electrical signals exchanged with the opto-electronic converter before passing them to/from the RF transceiver. It also amplifies the signals coming from the RF transceiver before they are transmitted over the air interface by the antenna.
6.4 RF transceiver
The RF transceiver consists of an intermediate frequency and baseband interface and the following functions:

• modulation/demodulation of the signals;

• Voltage Controlled Oscillators (VCO) and mixers;

• digital to analog conversion;

• analog to digital conversion;

• low noise amplification (gain, clock, etc.).
7 RRU installation
7.1 General
This clause describes the RRU installation modes and the lamp-post design requirements for mounting an RRU. It also defines the mechanical interface and adequate load capabilities for the lamp-post migration project.

When planning to mount an RRU, the lamp-post should be evaluated on a series of indicators, for example wind load, snow load, ice load, etc., to ensure the safety of the lamp-post in extreme conditions such as strong wind, earthquake, etc. As shown in Figure 9, the typical installation modes are top-mounted and side-mounted. The mounting height should not be less than 2,5 m to optimize antenna coverage, and the minimum wall thickness of a lamp-post is 4 mm to avoid any physical security risks to the installation.

Local earthing through earth electrodes could be installed among the protection measures against excessive mains leakage from the equipment (in case of fault). This applies to Safety Class I equipment. The use of Safety Class II (double insulation) equipment typically avoids the need for local earthing.

Overvoltages, due to induction from lightning strikes in the vicinity, can reach the equipment through the power wiring or the backhaul connection, so the equipment needs to be protected appropriately. As good (low resistance) earthing may not be available due to cost and operational issues, solutions for protecting the equipment include the use of transversal (wire to wire) protection circuits (e.g. GDT, Trisil, Transil, Tranzorb), together with increased resistibility to overvoltages of the input circuits of the equipment.

Within the range of ±60° in the horizontal direction and ±30° in the vertical direction of the RRU antenna normal line, and within 2 m of the radiation direction of the antenna, metal shielding should be avoided as it would adversely affect the RRU radio coverage. In some cases, two or three RRUs can be mounted on a lamp-post as needed, after evaluating the feasibility.

Figure 9: RRU installation modes
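The antenna keep-clear region described above (±60° horizontal, ±30° vertical, within 2 m of the antenna normal line) can be expressed as a simple geometric test. This is an illustrative sketch for planning checks, not a normative rule of the present document.

```python
def in_keep_clear_zone(distance_m, azimuth_deg, elevation_deg):
    """True if a point, expressed relative to the antenna normal line, lies in
    the region where metal shielding should be avoided: within 2 m of the
    antenna, within ±60° horizontally and ±30° vertically."""
    return (distance_m <= 2.0
            and abs(azimuth_deg) <= 60.0
            and abs(elevation_deg) <= 30.0)

# A metal object 1,5 m away at 45° horizontal would impair coverage...
blocking = in_keep_clear_zone(1.5, 45.0, 10.0)   # True
# ...while the same object 2,5 m away is outside the keep-clear region.
clear = in_keep_clear_zone(2.5, 45.0, 10.0)      # False
```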
7.2 RRU top-mounted installation
The RRU top-mounted installation can use a mechanical flange interface reserved for connection to the 5G RRUs as part of the lamp-post migration project (as in clause C.1). It should guarantee adequate wind load resistance, together with other indicators, to ensure the safety of the lamp-post in extreme weather. The flange should support 360° horizontal rotation adjustment for changing the RRU coverage direction according to the network deployment requirements.
7.3 RRU side-mounted installation
The RRU side-mounted installation can fit directly on an existing lamp-post (as in clause C.2). It requires evaluating the feasibility of drilling holes (typically of 20 mm diameter) for power cables and optical fibres, to enable routing of the RRU's cables and fibres through the interior of the pole. Otherwise, cables and fibres can be installed externally to the pole using hose clamps, but such a solution could be sub-optimal due to its visual impact and reduced robustness. In addition, the mounting kits for installing RRUs can support angle adjustment to optimize coverage.
7.4 Cover and concealing Box
A cover can be designed for mounting multiple RRUs (e.g. 2 RRUs on one lamp-post) and to protect the antenna from external mechanical stresses that could come from branches of trees in some scenarios. Additionally, the cover could provide concealment of the RRU as well as mechanical protection. The design of the concealing box can be complex as it needs to satisfy challenging factors (e.g. heat dissipation, waterproofing, reduced size). Furthermore, it adds weight and wind-related stresses to the lamp-post body, so, if there is no particular mechanical protection or concealment demand, it could be convenient to mount the RRU directly on the lamp-post.

The cover should be coordinated with the lamp-post and allow flexible adjustment of the RRU's coverage angle. It should use non-metallic materials to avoid signal shielding and its colour should be consistent with the lamp-post. It should be designed to grant sufficient heat dissipation, and waterproofing should be considered as well.
8 RRU energy consumption
8.1 General
The micro-cell concept as implemented by RRUs hosted on lamp-posts, with support of Multiple Input-Multiple Output (MIMO) technologies, can have a typical energy consumption in excess of 100 W. The mini-macro cell as implemented by RRUs hosted on lamp-posts, with identical MIMO technologies, can have a typical power consumption of about 400 W. However, this is not a universal requirement and depends upon the specific implementation of the RRU by the MNO. With the development of RRU solutions, lower power consumption can allow a wider range of remote powering options (technologies and costs) to be considered. Lower consumption also allows fresh air cooling without the need for any active measures, which reduces the energy use, the size and weight of the RRU and avoids acoustic noise that could annoy people living in the vicinity of the lamp-post.
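For rough planning, the consumption figures above translate into annual energy as follows. The helper is an illustrative sketch which assumes the RRU draws its typical power continuously; real consumption varies with traffic load.

```python
HOURS_PER_YEAR = 365 * 24  # 8 760 h

def annual_energy_kwh(power_w):
    """Annual energy (kWh) drawn by an always-on RRU at constant power
    (a simplifying assumption; actual draw varies with traffic)."""
    return power_w * HOURS_PER_YEAR / 1000.0

micro_kwh = annual_energy_kwh(100)       # micro-cell figure from the text
mini_macro_kwh = annual_energy_kwh(400)  # mini-macro cell figure
```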
8.2 Power supply converter
An AC-DC or DC-DC voltage converter is required to provide the necessary voltage supplies to the RRU components. The power supply converter protects the RRU from any overvoltages due to induction from lightning strikes in the vicinity. As good (low resistance) earthing may not be available at all lamp-posts due to cost and operational issues, the power supply converter could be required to implement reinforced isolation, adequate for Overvoltage Category IV (8-10 kV and combination test wave 1,2/50 µs).
8.3 Opto-electronic converter
The opto-electronic converter provides:

• opto-electronic conversion: optical signals from the front-haul connection to electrical signals for processing by the PA;

• electro-optic conversion: electrical signals from the PA to optical signals for transmission to the BBU.

Several factors will influence the optical transceiver operation, such as the technology used, the required output power and the operating conditions.
8.4 Power amplifier
Modelling the energy consumption of a PA is based on the following parameters:

• the output transmitted power of the antenna;

• the output power of the PA;

• the share of the maximum bandwidth that an antenna uses, i.e. the actual number of physical resource blocks occupying a certain bandwidth for transmission.

The PA is the primary energy consumer. Generally, PAs have low efficiencies in the range of antenna transmission powers and high frequencies (many GHz) employed for the micro-cell application of 5G networks.
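A minimal sketch of such a PA energy model, built from the parameters listed above, could look as follows. The 20 % efficiency and the bandwidth-share scaling are illustrative assumptions consistent with the low-efficiency observation above, not values from the present document.

```python
def pa_input_power_w(p_out_w, efficiency, bandwidth_share=1.0):
    """Electrical power drawn by the PA for a given RF output power.
    `efficiency` is the assumed PA efficiency at the operating point and
    `bandwidth_share` the fraction of physical resource blocks in use."""
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return (p_out_w * bandwidth_share) / efficiency

# 10 W of RF output at an assumed 20 % efficiency, with half of the
# resource blocks occupied, draws 25 W of input power.
p_in = pa_input_power_w(10.0, efficiency=0.20, bandwidth_share=0.5)
```

The low assumed efficiency makes the point of the paragraph above: the PA's electrical draw is several times its RF output, which is why it dominates the RRU energy budget.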
8.5 Antenna
The antenna does not directly influence the energy consumption as it is a purely passive element. It affects it indirectly as, depending on the antenna characteristics (the antenna gain and radiation pattern), more or less transmit power could be required. The antenna can be integrated into the RRU. Further consumption could be required in the case of support of Multiple Input-Multiple Output (MIMO) technologies.
9 Power supply provision
9.1 Power from the grid
Figure 5 and Figure 6 show two separate power supply feeds on each side of the road in order to reduce the risk of service disruption due to failure of one set of RRUs. With the development of 5G aiming at easy deployment of wireless networks, the mainstream 5G-related RRUs can typically support AC or DC power feeding. For most MNOs, obtaining an AC power supply from the grid is the most obvious and apparently simple solution for powering the RRU. However, power from the AC grid is not able to support services requiring high availability (e.g. URLLC services), so their delivery will require more resilient powering solutions (e.g. adding energy back-up or ad-hoc DC remote powering).

The large number of 5G RRUs mounted on lamp-posts requires substantial planning and project management because:

• the installation of such power supplies from the grid to all the lamp-posts represents a large undertaking by the responsible utility;

• obtaining a grid connection from existing buildings and getting municipal approval for hundreds or thousands of installations means negotiations with owners, tenants and suppliers;

• connecting the lamp-post to the nearest point of access to the grid may require extensive digging, implying high costs and disturbance to traffic and the population in general.

Enclosures have to be installed to accommodate the connection to the grid, metering and protection elements and, in all cases, the delays in power provisioning cannot be underestimated. Continuity of grid supply can be interrupted by:

• technical influences by other third parties;

• lightning (causing protection switches to trip);

• extreme events such as floods, storms, etc., which can cause disruption to telecommunications service provision at a time when it is most needed by the community and public services.
Furthermore, grid supply normally suffers severe short-term outages and variations of supply voltage which can disrupt the function of the RRU and, as mentioned in clause 5.2.2.1, cause the equipment within the RRU to reboot, negatively impacting the required QoS. The reliability and power quality from the grid are unlikely to be adequate for RRUs when they are supporting URLLC services. In such cases, MNOs would be required to solve the issue by equipping each RRU with a battery-based UPS on the lamp-post in urban locations or by providing power through an ad-hoc remote powering solution.
9.2 DC power feeding from centralized sites
9.2.1 General
Figure 5 and Figure 7 show the use of data and power supply cabling serving each lamp-post. The data and power are served from centralized locations co-located with the groups of BBUs. It should be noted that centralized power supplies could also be accommodated in other ad-hoc sites where required due to the topology, the availability of cabling and/or the reduction of the losses within the power supply infrastructure. This allows any battery- or generator-based UPS equipment to be co-located with the power serving equipment, offering potential cost and management advantages. In all cases, it has to be assumed that the power cabling (dedicated or hybrid with data circuits) is a new installation and does not re-use any existing infrastructure. The pathways are discussed in clause 10.

There are a number of power supply solutions which could use Low Voltage Direct Current (LVDC) rather than AC feeding. This could simplify the power conversion circuitry within the RRU (see clause 6.2). These are discussed in the following clauses. When applying remote powering techniques, appropriate protection is required to protect the telecommunications equipment against voltage surges and overvoltages on the power supply circuits caused by external events (e.g. lightning strikes in the vicinity).
9.2.2 Remote powering at 38-72 VDC
The equipment of the RRU is designed to work with the most common powering architectures found in Central Offices to supply Network Telecommunications Equipment (NTE), with operating voltages in the range 38-72 VDC. Feeding DC voltages within this range enables the use of common and lower cost equipment. The use of voltages below 60 VDC incurs less demanding safety requirements and eases installation and maintenance. However, within legacy telecommunications cabling, the need to limit the power losses resulting from the relatively high currents required restricts this type of solution to maximum distances in the range 200-300 m.
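The distance limit quoted above follows from resistive losses in the feed. The sketch below estimates the power dissipated in a two-conductor copper feed; the 100 W load and the 1,5 mm² conductor cross-section are illustrative assumptions, not values from the present document.

```python
COPPER_RESISTIVITY = 1.72e-8  # ohm·m at 20 °C

def loop_loss_w(load_w, feed_v, length_m, area_mm2):
    """Power dissipated in a two-conductor copper feed delivering `load_w`
    at `feed_v` over `length_m` of route (the current flows out and back,
    so the resistive path is twice the route length)."""
    current_a = load_w / feed_v
    resistance_ohm = COPPER_RESISTIVITY * (2.0 * length_m) / (area_mm2 * 1e-6)
    return current_a ** 2 * resistance_ohm

# An assumed 100 W load fed at 48 VDC over 300 m of 1.5 mm² conductors
# loses roughly 30 W in the cable alone.
loss = loop_loss_w(100.0, 48.0, 300.0, 1.5)
```

Losing a third of the delivered power in the cable at 300 m illustrates why low-voltage feeding is impractical much beyond the 200-300 m range without larger conductors or higher voltages.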
9.2.3 Remote powering in accordance with IEEE 802.3 applications
Sometimes referred to as Power over Ethernet (PoE), IEEE 802.3bt [i.5] specifies remote power feeding over 2 and 4 balanced pairs of cables of Category 5 and above (as specified in EN 50173-1 [i.1]). In addition, IEEE 802.3cg [i.6] specifies remote power feeding over a variety of 1-pair balanced cables. Both implementations use voltages below 60 VDC. Whereas the power feeding of IEEE 802.3bt [i.5] provides up to 71 W at the remote equipment using 4 balanced pairs, the maximum transmission distance is only 100 m. By comparison, IEEE 802.3cg [i.6] is specified to deliver:

• 14 W at 300 m and 2 W at 1 000 m using conductors of 0,51 mm diameter (24 AWG);

• 14 W at 1 000 m using conductors of 1,6 mm diameter (14 AWG).

This may offer opportunities for the direct supply of power to the RRU from the BBU sites. The technology can use multiple pairs to deliver more power to a device.
9.2.4 Higher voltage DC power feeding
9.2.4.1 RFT-C and RFT-V
Recommendation ITU-T K.50 [i.9] specifies the operation of remote power feeding of telecommunications equipment using voltage-limited (RFT-V) and current-limited (RFT-C) solutions capable of supplying up to 100 W per powering circuit in accordance with IEC 62368-3 [i.4]. Both RFT-C and RFT-V can use multiple powering circuits to deliver more power to a device. RFT-V can utilize multiple copper pairs in each circuit to reduce power losses.
9.2.4.2 Other solutions
A number of solutions exist where the power supply is implemented using conductors of similar diameters to those of clauses 9.2.2 and 9.2.3 but at voltages of up to and including 400 VDC.

The simplest solution is to employ a true DC voltage of approximately 400 VDC. The background to this approach is the forecast growth of NTE in Central Offices with operating voltages of 380/400 VDC. The RRU only requires a simple DC-DC conversion to the voltage(s) required by the equipment accommodated at the lamp-post. Operation of power supply cabling at 400 VDC requires specific safety procedures to be employed during installation and maintenance.

A more complex, proprietary, solution which avoids the safety considerations mentioned above features digital DC transmission. The power available at the Central Office is converted to a digital "signal" in excess of 300 VDC which is turned on and off at a frequency enabling detection of faults (and associated removal of power) on a timescale consistent with human safety as defined in IEC 60479-2 [i.12]. The duty cycle of the digital signal is such that the average DC voltage is approximately 240 VDC.

In both cases, the higher voltage requires considerably less current to deliver the required power levels, which allows the use of smaller conductors without major power losses. The true DC solution has economic advantages while the digital solution obviates some or all of the safety concerns.
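The relationship between the pulsed voltage and its average in the digital DC solution is simple duty-cycle arithmetic, which can be sketched as follows. The 300 V peak below is an assumption chosen only to reproduce the approximately 240 VDC average mentioned above; the actual proprietary waveform parameters are not specified in the present document.

```python
def average_dc_voltage(peak_v, duty_cycle):
    """Average voltage of the pulsed ("digital") DC waveform: on at `peak_v`
    for `duty_cycle` of each period, off for the remainder."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty_cycle must be in [0, 1]")
    return peak_v * duty_cycle

# An assumed 300 V peak at an 80 % duty cycle reproduces the ~240 VDC
# average mentioned above.
avg_v = average_dc_voltage(300.0, 0.8)
```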
9.3 Hybrid data and power supply cabling
All of the solutions of clause 9.2 enable the use of comparatively small conductors and are able to supply a variety of power levels over a range of distances. This offers the opportunity for the provision of both data and power supply elements within a "hybrid" cable construction feeding the RRUs from the BBU sites. The alternative is to install two separate cables in the same pathways, but this risks entanglement of the cables, preventing maintenance of both. The connection of the power supply of the lamp-post to the hybrid cable may be PtP or PtMP, in a bus-like structure, depending on the powering architecture. The hybrid cable comprises single mode optical fibre (in accordance with Recommendation ITU-T G.652 [i.7] or Recommendation ITU-T G.657 [i.8]) to supply the data and metallic conductors to supply the required power.
9.4 Earthing
The installation should not assume the presence of a protective earth at each lamp-post. In addition, any protective earth that is present may not provide an effective functional earth for the sensor circuitry of RRU equipment installed on the lamp-post.

ETSI TS 110 174-2-2 V1.2.1 (2020-11)
10 Accessing the lamp-posts
10.1 Existing pathways
10.1.1 General
Civil works represent the majority of the cost of deploying a telecommunications network outside buildings. Excavation to install new pathway systems (conduit, etc.) and cables is not only costly but can also be a source of delay, owing to the difficulty of obtaining the necessary permissions and the resulting operational restrictions imposed to avoid disruption to traffic and the general population. The availability of existing pathways, underground or overhead, that accommodate the existing power supplies to lamp-posts provides an opportunity for shared use, enabling lower-cost deployment for MNOs and an economic opportunity for cities to rent or lease the available real estate to the MNOs. Installation and operational solutions are required to guarantee the coexistence and safeguarding of the cables (both data and power supply) of the telecommunications network and those of the existing lighting infrastructures.
10.1.2 Underground services
Underground cable management systems providing the existing power supplies to lamp-posts are typically conduits (ducts) of 80-100 mm diameter, and these are frequently under-utilized, containing only one or two power circuits. The use of free space in such conduits (ducts) for the provision of data and power to support the RRUs would simplify and significantly increase the pace of 5G deployment. However, there is a risk of damage to both the existing and new cables, and agreement would have to be reached between the parties involved. Safety concerns can arise for MNOs when accessing assets where lamp-post powering is present or, conversely, for those managing the lighting circuits who would access assets where data and power supply cabling is installed. Such safety concerns can be mitigated by appropriate segregation. A preferred way to achieve segregation is the installation, into the existing conduits, of sub-conduits (as shown in Figure 9) dedicated to the 5G power supply (and, if used, data) cables. The sub-conduit provides segregation from the Low Voltage (LV) cables of the existing lighting circuits in accordance with national implementations of the HD 60364 series standards [i.3].

Figure 9: Installation of sub-ducts to provide segregation

The sub-ducts should be routed to maintenance holes dedicated to the telecommunications infrastructure. This approach enables installation of the RRU cables when needed, without requiring multiple operations on the assets, with substantial benefits in flexibility, overall reliability and cost.
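The headroom left in such a conduit can be estimated from cross-sectional areas. The sketch below is illustrative: apart from the 80-100 mm duct size mentioned above, the cable and sub-duct diameters (and the ~40 % fill-ratio rule of thumb) are assumptions, not values from the present document.

```python
# Rough duct-fill sketch (hypothetical occupant diameters): checks whether a
# sub-duct for the 5G cabling still leaves room alongside the existing
# lighting circuits in an 80-100 mm conduit.
import math

def area_mm2(d_mm: float) -> float:
    return math.pi * (d_mm / 2) ** 2

def fill_ratio(duct_d_mm: float, occupant_d_mm) -> float:
    """Fraction of the duct cross-section occupied by the listed cables/sub-ducts."""
    return sum(area_mm2(d) for d in occupant_d_mm) / area_mm2(duct_d_mm)

# Hypothetical case: 100 mm duct, two 18 mm lighting cables, one 32 mm sub-duct.
ratio = fill_ratio(100, [18, 18, 32])
print(f"fill ratio: {ratio:.0%}")  # well below the ~40 % often used as a limit
```

In this assumed configuration the occupants take up roughly 17 % of the duct cross-section, supporting the observation that such conduits are frequently under-utilized.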
10.1.3 Overhead services
Existing aerial cabling pathways serving the lamp-posts could accept the addition of data and power supply cables to feed the RRUs. EN 50174-3 [i.2] specifies requirements for the installation of aerial cables on shared infrastructures.
10.2 New underground pathways
EN 50174-3 [i.2] specifies the depths of underground pathways for telecommunications cables in footpaths and roads (including parking areas). In addition, EN 50174-3 [i.2] states that installation of cables at depths less than those specified results in those installations being treated as sacrificial (i.e. subject to heightened risk of damage by other service providers). However, the requirements and recommendations of EN 50174-3 [i.2] are always subject to local and national regulations, which may amend these depths. The installation of new underground pathways requires detailed knowledge of the underground environment, and of the other services within it, in order to minimize contractual risk.

Annex A (informative): The evolution of Radio Access Network architectures

A.1 Introduction

This annex describes the evolution of Radio Access Network architectures to support the next generation of mobile networks. These details are primarily in the domain of the MNOs and are not directly within the scope of the present document. However, the information may be helpful in understanding the consequences of such evolution for community infrastructures, such as lamp-posts, which will be critical to maintaining network provision in the EU. Existing 4G RAN architectures are mainly based on Base Stations (BSs) connected via optical fibre cabling or wireless links to the core network at a Central Office, as shown in the schematic of Figure A.1. The BS hosts the antenna, the RRU and the baseband section dedicated to signal processing (the BBU). The connection between the RRU and the baseband components is commonly referred to as front-hauling, to differentiate it from backhauling, which is the connection from the BS to the core network.
Figure A.1: Traditional RAN (distributed BBU architecture)

A.2 Centralized and virtual Radio Access Networks

A.2.1 General

Clause A.2.2 describes a Centralized RAN (C-RAN) architecture which is already used (although not at a large scale) in existing 4G installations. Clause A.2.3 describes a Virtual RAN (V-RAN) architecture. The majority of MNOs consider that the future 5G architecture will differ from the 4G approach and will be based on Virtual RAN (V-RAN) solutions.

A.2.2 C-RAN

C-RAN solutions divide the BS functions by moving the BBUs to an appropriate centralized location (e.g. a Central Office) as shown in the schematics in Figure A.2. The BBUs, in addition to being centralized, are designed to coordinate with each other in order to optimize the performance of the access network. This results in the BS being simplified and only containing the RRU. The BBU and RRU remain as dedicated NTE, provided by telecommunications vendors. The pooling of BBUs in a centralized location presents several operational, hardware and spectrum efficiency advantages:

• installation is simpler and quicker, with a reduced footprint, owing to the use of small RRUs;
• energy consumption is reduced by avoiding the losses introduced by coaxial feeders between the antenna and the RRU, and by the fact that cooling is no longer needed at antenna sites;
• availability of the BBUs is improved by the UPS and other back-up power provision at the Central Office;
• radio performance is improved due to the very low latency in the protocol between the co-located BBUs, enabling higher capacities and improved cell performance;
• data overhead is reduced by avoiding the secured protocols on the backhaul from the BS to the Central Office.

In addition, pooling of BBUs simplifies network management and upgrades while reducing troubleshooting and maintenance costs due to a reduction in BS visits.
Figure A.2: C-RAN and traditional front-hauling architecture
(Figure A.2 annotations: front-hauling over CPRI; bit rate 2,5-10 Gb/s per 20 MHz of wireless capacity; acceptable delay 100-250 μs (~10-20 km); centralization and coordination using proprietary hardware.)

A.2.3 V-RAN

In the case of V-RAN, as shown in Figure A.3, the BBU functions will be performed by software on generic Information Technology (IT) server equipment using the same rules and processes as those currently existing in IT data centres. This migration to software-based solutions is generally termed Network Functions Virtualisation (NFV). V-RAN builds upon the advantages of C-RAN by introducing more efficient front-haul technologies (substantially based on Ethernet) to limit transport investments, and by the use of IT technologies and NFV solutions, with advantages in terms of scalability, cost and resilience.

Figure A.3: V-RAN and innovative front-hauling architecture

Virtualized architectures can provide increased levels of resource sharing, agility and scalability while decreasing deployment time. Also, the use of generic IT servers to host the BS components will provide improved energy management and reduce energy consumption. This provides an opportunity to operate a Network Service Platform (NSP), making mobile telecommunications functions "a service" in the same way that Cloud Computing platforms do in the IT environment. Using statistical multiplexing, an NSP can perform the same tasks with less, and cheaper, hardware.

NOTE: As an example, NFV MANO is a framework developed by a working group within the ETSI Industry Specification Group for NFV (ETSI ISG NFV).
The highest degree of mutualization is achieved with a fully centralized BBU approach, because processing of the lower layers in a BBU constitutes a large part of the computational effort and can be performed with specific hardware. Further expansion of capacity can also be achieved using radio cells, creating heterogeneous networks: mini-macro cells and even small cells, for example of micro- and pico- type, can be connected to a V-RAN architecture while maintaining the benefits of coordination and optimized performance. In this way Central Offices absorb the functions of data centres. However, for some specific applications such as URLLC, data centres will have to be located close to the RRUs they serve. In such a case, the term Multi-access Edge Computing is used.

The solutions employed will be the choice of the MNO, but the community will have to provide infrastructures (e.g. lamp-posts, traffic lights, buildings, etc.) to host the RRUs. Identifying locations at which to install such RRUs, together with the development of cost- and functionally-efficient connectivity and powering architectures, will be among the biggest challenges for the development of mobile networks.

A.3 Front-haul

As described in clause 5.2.1.1, the front-haul infrastructure can be an optical fibre connection or, under certain conditions, a wireless connection.
(Figure A.3 annotations: front-hauling over Ethernet; bit rate 180 Mb/s per 20 MHz of wireless capacity; acceptable delay 3-6 ms (~40-80 km); centralization and coordination using off-the-shelf hardware.)
The choice of the technology, the protocol and the location of the different functions of the BBU (the functional split) will be made by the MNO and will be based on criteria such as:

• the specific use case;
• the distance between the BBU and the RRU;
• the required bandwidth;
• latency constraints;
• the impact of loss of service.

Several possibilities exist for front-hauling; the most common solution at this time is CPRI. In C-RAN architectures, the front-haul carries the signal between the BBU and RRU. Consequently, C-RAN architectures require optical transport networks capable of carrying multi-gigabit radio signals in real time. In V-RAN architectures, part of the signal processing is undertaken in the RRU, reducing the demands on the front-haul network. This allows consideration of mmWave solutions.

To be compliant with the latency recommendation, a passive front-haul link should not exceed a distance of between 10 km and 80 km (depending on the RAN implementation, as indicated in Figure A.2 and Figure A.3). The presence of active equipment within the front-haul could introduce additional latencies and thereby reduce this distance. Baseband modules and radio-frequency modules are connected via a fibre-optic connection or, sometimes, via a specially designed radio bridge, with an interface intended to carry and reconstruct the radio signal. Every carrier of 20 MHz of radio spectrum requires at least 2,5 Gb/s on the front-haul. There are also constraints on latency (of the order of 100 μs for C-RAN and 1 ms for V-RAN). When MIMO techniques are used, the front-haul capacity needs to be multiplied accordingly.
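The capacity and reach figures above can be checked with a back-of-envelope calculation. In the sketch below, the 2,5 Gb/s-per-20 MHz CPRI rate and the latency budgets come from the text; the site configuration (3 carriers, 4x4 MIMO) and the treatment of the budget as round-trip propagation only are assumptions.

```python
# Front-haul sizing sketch. Assumptions (not from the spec): a 3-carrier,
# 4x4 MIMO site, and a latency budget spent entirely on round-trip
# propagation (no equipment or processing delay).

FIBRE_KM_PER_US = 0.2  # ~200 000 km/s propagation speed in silica fibre

def cpri_capacity_gbps(carriers_20mhz, antenna_ports, gbps_per_carrier=2.5):
    """CPRI demand scales with carrier count and, for MIMO, with antenna ports."""
    return carriers_20mhz * antenna_ports * gbps_per_carrier

def max_reach_km(latency_budget_us):
    """Reach if the whole budget is round-trip fibre propagation."""
    return (latency_budget_us / 2) * FIBRE_KM_PER_US

print(cpri_capacity_gbps(3, 4))  # 30.0 Gb/s for the assumed site
print(max_reach_km(100))         # 10.0 km for a 100 us C-RAN budget
```

Under these assumptions, the 100-250 μs C-RAN budgets map to roughly the 10-20 km range quoted in Figure A.2, and any active equipment in the path shrinks the reach further; the 30 Gb/s figure for a single assumed site illustrates why plain CPRI struggles at multi-cell, multi-frequency sites.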
Multi-cell and multi-frequency BS sites therefore require very high transmission capacities, which make it difficult for CPRI, originally defined for applications in a different context, to realize an efficient 5G RAN.

NOTE: A new interface, eCPRI (evolved CPRI), has now been proposed, defined by the IEEE Next Generation Fronthaul Interface (IEEE 1914) working group.

Research and prototyping activities are focusing on the split between BBU and RRU to determine the most efficient division of functionality in order to reduce bandwidth requirements (bringing them to values close to the capacity transported) and latency (in the region of milliseconds), while retaining as much as possible of the performance of traditional RAN architectures. The best functional split solutions enable high mobile network performance to be achieved using Ethernet-based transport for front-hauling. This also supports the transition from C-RAN to V-RAN, where the commercial equipment used in NFV architectures is Ethernet-based.

Annex B (informative): Void

Annex C (informative): Example and Interface of RRU Installation

C.1 Top-mounted flange interface

An example of lamp-posts with RRUs mounted in the top position is shown in Figure C.1. A flange interface can be reserved on the top of the lamp-post. The three waist (i.e. slotted) holes of the flange are connected to the lamp-post landscaping cover, allowing the horizontal angle of the RRUs inside to be adjusted. The typical dimensions of the physical flange interface are shown in Figure C.2; the inner diameter of the flange should be not less than 60 mm, in order to reserve enough space for the distribution of the RRU's power cables and optical fibres inside. However, the actual dimensions of the top physical flange interface should be determined according to the whole lamp-post design.
Figure C.1: Example of lamp-posts with RRUs mounted in the top position

Figure C.2: Dimensions of the flange interface
(1) Flange (2) Screw hole (3) Nut

C.2 Side-mounted interface

Another position in which to install an RRU is on the side of a lamp-post. Such a solution is normally adopted either for specific aiming of the coverage area or where the shape of the lamp-post does not allow the RRU to be mounted in the top position (as in Figure C.3). The RRU side-mounted interface is shown in Figure C.4. The appropriate diameter of the drilled holes is 20 mm, to enable provision of the RRU's cables and fibres through the internal part of the pole. Drilled holes require the application of waterproofing and anti-corrosion treatment after the RRU is mounted.

Figure C.3: Example of lamp-posts not allowing RRUs to be mounted in the top position

Figure C.4: Side-mounted schematic diagram

History

Document history
V1.1.1  June 2019      Publication
V1.2.1  November 2020  Publication
1 Scope
The present document specifies the high-level O-RAN slicing-related use cases, requirements and architecture. While some of the requirements are derived from the use cases, some relevant SDO requirements are also captured, as they have an impact on O-RAN functions. Along with the requirements and the reference slicing architecture, slicing-related impacts on O-RAN functions and interfaces are captured as well.
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found in the ETSI docbox.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are necessary for the application of the present document.

[1] ETSI TS 122 261: "5G; Service requirements for the 5G system (3GPP TS 22.261 version 18.16.0 Release 18)", Release 16, December 2019.
[2] ETSI TS 123 501: "5G; System architecture for the 5G System (5GS) (3GPP TS 23.501 version 18.8.0 Release 18)", Release 16, December 2019.
[3] ETSI TS 128 526: "LTE; Telecommunication management; Life Cycle Management (LCM) for mobile networks that include virtualized network functions; Procedures (3GPP TS 28.526 version 18.1.0 Release 18)", Release 15, December 2018.
[4] ETSI TS 128 531: "5G; Management and orchestration; Provisioning (3GPP TS 28.531 version 18.8.0 Release 18)", Release 16, March 2020.
[5] ETSI TS 128 532: "5G; Management and orchestration; Generic management services (3GPP TS 28.532 version 18.5.0 Release 18)", April 2021.
[6] Void.
[7] ETSI TS 128 541: "5G; Management and orchestration; 5G Network Resource Model (NRM); Stage 2 and stage 3 (3GPP TS 28.541 version 18.9.0 Release 18)", January 2020.
[8] ETSI TS 128 552: "5G; Management and orchestration; 5G performance measurements (3GPP TS 28.552 version 18.9.0 Release 18)", Release 16, January 2020.
[9] 3GPP TS 32.300: "Technical Specification Group Services and System Aspects; Telecommunication management; Configuration Management (CM); Name convention for managed objects", Release 16, July 2020.
[10] ETSI TS 138 300: "5G; NR; NR and NG-RAN Overall description; Stage-2 (3GPP TS 38.300 version 18.4.0 Release 18)", January 2020.
[11] ETSI GS NFV 003: "Network Functions Virtualisation (NFV); Terminology for Main Concepts in NFV", V1.4.1, August 2018.
[12] Void.
[13] O-RAN.WG1.O1-Interface.0-v04.00: "O-RAN Operations and Maintenance Interface Specification".

ETSI TS 104 041 V11.0.0 (2025-03)

[14] Void.
[15] Void.
[16] Void.
[17] Void.
[18] O-RAN.WG6.ORCH-USE-CASES-v04.00: "Orchestration Use Cases and Requirements for O-RAN Virtualized RAN".
[19] Void.
[20] Void.
[21] Void.
[22] Void.
[23] ETSI TS 128 530: "5G; Management and orchestration; Concepts, use cases and requirements (3GPP TS 28.530 version 18.2.0 Release 18)", Release 17, September 2022.
[24] ORAN SFG: "Security Protocols Specifications".
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are not necessary for the application of the present document but they assist the user with regard to a particular subject area.

[i.1] ETSI TR 121 905: "Digital cellular telecommunications system (Phase 2+); Universal Mobile Telecommunications System (UMTS); LTE; Vocabulary for 3GPP Specifications (3GPP TR 21.905 version 9.4.0 Release 9)".
[i.2] ETSI TS 123 003: "Digital cellular telecommunications system (Phase 2+); Universal Mobile Telecommunications System (UMTS); Numbering, addressing and identification (3GPP TS 23.003 version 9.9.0 Release 9)", Release 16, June 2021.
[i.3] 3GPP TR 28.801: "Study on management and orchestration of network slicing for next generation network", Release 15, January 2018.
[i.4] O-RAN_Study_ORAN_Slicing_Technical_Report.v02.00: "Study on O-RAN Slicing", Technical Report.
[i.5] ETSI TS 123 222: "LTE; 5G; Common API Framework for 3GPP Northbound APIs (3GPP TS 23.222 version 18.7.0 Release 18)", Release 17, June 2021.
[i.6] ETSI TS 129 222: "LTE; 5G; Common API Framework for 3GPP Northbound APIs (3GPP TS 29.222 version 18.7.0 Release 18)", Release 17, December 2021.
[i.7] 3GPP TR 28.824: "Study on network slice management capability exposure", Release 17, July 2022.
[i.8] 3GPP TR 28.811: "Management and orchestration; Study on Network Slice Management Enhancement", Release 17, December 2021.
[i.9] 3GPP TR 23.700-99: "Study on Network Slice Capability Exposure for Application Layer Enablement", Release 18, September 2022.
[i.10] Slicenet: https://5g-ppp.eu/slicenet/.
[i.11] NGMN Alliance: "5G P1 Requirements & Architecture; Description of Network Slicing Concept", Version 1.0.8, September 2016.
[i.12] O-RAN.WG1.OAM-Architecture-v04.00: "O-RAN WG1 Operations and Maintenance Architecture".
[i.13] O-RAN.WG2.A1.GA&P-v01.00: "O-RAN Working Group 2; A1 interface: General Aspects and Principles".
[i.14] O-RAN.WG3.E2GAP-v01.00: "O-RAN Working Group 3; Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles".
[i.15] O-RAN.WG6.CAD-V02.01: "Cloud Architecture and Deployment Scenarios for O-RAN Virtualized RAN".
[i.16] O-RAN.WG9.XPSAAS-v01.00: "O-RAN Open Transport Working Group 9 Xhaul Packet Switched Architectures and Solutions".
[i.17] O-RAN.WG9.XTRP-REQ-v01.00: "O-RAN Open Xhaul Transport Working Group 9 Xhaul Transport Network Requirements".
[i.18] ETSI TS 128 533: "5G; Management and orchestration; Architecture framework (3GPP TS 28.533 version 18.4.0 Release 18)", Release 17, March 2022.
[i.19] O-RAN.WG9.XTRP-MGT.0-v04.00: "O-RAN Open X-haul Transport Working Group Management interfaces for Transport Network Elements".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in [i.1] and the following apply:

NOTE: A term defined in the present document takes precedence over the definition of the same term, if any, in [i.1].

A1: interface between the non-RT RIC and the near-RT RIC to enable policy-driven guidance of near-RT RIC applications/functions, and to support AI/ML workflow

A1 Enrichment Information: information utilized by the near-RT RIC that is collected or derived at the SMO/non-RT RIC, either from non-network data sources or from network functions themselves

A1 policy: type of declarative policy expressed using formal statements that enable the non-RT RIC function in the SMO to guide the near-RT RIC function, and hence the RAN, towards better fulfilment of the RAN intent

E2: interface connecting the near-RT RIC and one or more O-CU-CPs, one or more O-CU-UPs, and one or more O-DUs

E2 Node: logical node terminating the E2 interface

NOTE: In the present document, the O-RAN nodes terminating the E2 interface are:
- for NR access: O-CU-CP, O-CU-UP, O-DU or any combination;
- for E-UTRA access: O-eNB.

near-RT RIC: O-RAN near-real-time RAN Intelligent Controller: a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions over the E2 interface

non-RT RIC: O-RAN non-real-time RAN Intelligent Controller: a logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in the near-RT RIC

Network Service: composition of Network Function(s) and/or Network Service(s), as specified in ETSI GS NFV 003 [11]

Network Slice: set of run-time network functions, and resources to run these network functions, forming a complete instantiated logical network to meet certain network characteristics required by the Service Instance(s)

NOTE: "End-to-End" is often associated with the term Network Slice, but it is deliberately excluded from this term because the "ends" change based on the perspective of the user and are relative.

Network Slice Instance (NSI): set of network functions and the resources for these network functions which are arranged and configured, forming a complete logical network to meet certain network characteristics, as specified in 3GPP TS 23.501 [2]

Network Slice Subnet MOI (NSSI): generic recursive collection or set allowing non-restricted grouping of Managed Functions and EP_Transport endpoint(s)

O-CU: O-RAN Central Unit

O-RAN Central Unit - Control Plane (O-CU-CP): logical node hosting the RRC and the control plane part of the PDCP protocol

O-RAN Central Unit - User Plane (O-CU-UP): logical node hosting the user plane part of the PDCP protocol and the SDAP protocol

O-RAN Distributed Unit (O-DU): logical node hosting the RLC/MAC/High-PHY layers, based on a lower layer functional split

O-RAN Radio Unit (O-RU): logical node hosting the Low-PHY layer and RF processing, based on a lower layer functional split

O1: interface between the Service Management and Orchestration Framework and the O-RAN managed elements

O2: interface between the Service Management and Orchestration Framework and the O-Cloud

Radio Access Network (RAN): in terms of the present document, any component below the near-RT RIC per the O-RAN architecture, including O-CU/O-DU/O-RU

resource isolation: regime of resource management in which a resource used by one network slice instance cannot be shared with another network slice instance

Single Network Slice Selection Assistance Information (S-NSSAI): comprises an SST (Slice/Service Type) and an optional SD (Slice Differentiator) field (3GPP TS 28.541 [7], clause 4.3.37)
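As a worked illustration of the S-NSSAI structure (SST plus optional SD), the helper below packs and unpacks the fields assuming the usual 3GPP encoding (3GPP TS 23.003): an 8-bit SST and a 24-bit SD, with an all-ones SD conventionally meaning "no SD". The function names are illustrative, not defined by O-RAN.

```python
# Minimal S-NSSAI helper, assuming the 3GPP TS 23.003 layout: 8-bit SST in
# the top byte, optional 24-bit SD below it, 0xFFFFFF meaning "no SD".
# Illustrative sketch; not part of the present document's normative text.

NO_SD = 0xFFFFFF

def pack_s_nssai(sst: int, sd: int = NO_SD) -> int:
    """Pack SST and SD into a single 32-bit value."""
    if not 0 <= sst <= 0xFF or not 0 <= sd <= 0xFFFFFF:
        raise ValueError("SST is 8 bits, SD is 24 bits")
    return (sst << 24) | sd

def unpack_s_nssai(value: int):
    """Return (sst, sd), with sd None when the 'no SD' marker is present."""
    sst, sd = value >> 24, value & 0xFFFFFF
    return sst, (None if sd == NO_SD else sd)

# SST 1 (eMBB) with SD 0x000101; SST 2 (URLLC) with no SD.
assert unpack_s_nssai(pack_s_nssai(1, 0x000101)) == (1, 0x000101)
assert unpack_s_nssai(pack_s_nssai(2)) == (2, None)
```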
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in [i.1] and the following apply:

NOTE: An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in [i.1].

AEF   API Exposing Function
AI/ML   Artificial Intelligence/Machine Learning
AMF   API Management Function
APF   API Publishing Function
API   Application Program Interface
ARP   Allocation Retention Priority
BSS   Base Station System
CAPIF   Common API Framework
CCF   CAPIF Core Function
CM   Configuration Management
CN   Core Network
CoS   Class of Service
CSC   Communication Service Customer
CSP   Communication Service Provider
DC   Data Center
DDoS   Distributed Denial of Service
DRB   Data Radio Bearer
DSCP   Differentiated Services Code Point
EGMF   Exposure Governance Management Function
eMBB   Enhanced Mobile BroadBand
eMnS   Exposed Management Service
eNB   eNodeB (applies to LTE)
E-UTRA   Enhanced Universal Terrestrial Radio Access
FCAPS   Fault, Configuration, Accounting, Performance, Security
FH   FrontHaul
FM   Fault Management
gNB   gNodeB (applies to NR)
GSMA   Global System for Mobile communications Association
IM/DM   Information Model/Data Model
IOC   Information Object Class
KPI   Key Performance Indicator
KQI   Key Quality Indicator
LCM   Life Cycle Management
MAC   Media Access Control
MANO   Management and Network Orchestration
MBB   Mobile BroadBand
MD   Management Domain
MH   Midhaul
MIB   Management Information Base
MIoT   Massive Internet of Things
ML   Machine Learning
mMTC   massive Machine Type Communications
MnF   Management Function
MNO   Mobile Network Operator
MnS   Management Service
MOI   Managed Object Instance
MORAN   Multi-Operator Radio Access Network
MVNO   Mobile Virtual Network Operator
Near-RT RIC   O-RAN Near-Real-Time RIC
NF   Network Function
NFMF   Network Function Management Function
NFO   Network Function Orchestration
NG-RAN   Next Generation - Radio Access Network
NMS   Network Management System
Non-RT RIC   O-RAN Non-Real-Time RIC
NOP   Network Operator
NRM   Network Resource Model
NS   Network Service
NSaaS   Network Slice as a Service
NSC   Network Slice Consumer
NSCALE   Network Slice Capability Exposure for Application Layer Enablement
NSCE-S   Network Slice Capability Enablement Server
NSI   Network Slice Instance
NSMF   Network Slice Management Function
NSSaaS   Network Slice Subnet as a Service
NSSAI   Network Slice Selection Assistance Information
NSSI   Network Slice Subnet Instance
NSSMF   Network Slice Subnet Management Function
OAM   Operations, Administration and Maintenance
O-CU   O-RAN Central Unit
O-DU   O-RAN Distributed Unit
ONAP   Open Network Automation Platform
O-NSSI   O-RAN Network Slice Subnet Instance
O-RU   O-RAN Radio Unit
OSS   Operations Support System
PDCP   Packet Data Convergence Protocol
PDN   Packet Data Network
PDU   Protocol Data Unit
PHY   Physical
PLMN   Public Land Mobile Network
PM   Performance Management
PNF   Physical Network Function
PRB   Physical Resource Block
QoS   Quality of Service
RF   Radio Frequency
RIC   O-RAN RAN Intelligent Controller
RLC   Radio Link Control
RRC   Radio Resource Control
RRM   Radio Resource Management
SBA   Service Based Architecture
SBMA   Service Based Management Architecture
SD   Slice Differentiator
SDAP   Service Data Adaptation Protocol
SDO   Standards Developing Organizations (EXAMPLE: 3GPP, ETSI, ONAP, O-RAN)
SLA   Service Level Agreement
SLS   Service Level Specification
SMF   Session Management Function
SMO   Service Management and Orchestration
SMOF   SMO Function
SST   Slice/Service Type
TN   Transport Network
TNE   Transport Network Element
ToS   Type of Service
UE   User Equipment
UPF   User Plane Function
URLLC   Ultra-Reliable Low Latency Communications
VLAN   Virtual Local Area Network
VNF   Virtual Network Function
VPN   Virtual Private Network
4 Slicing Overview
Network Slicing is expected to play a critical role in 5G networks because of the variety of use cases and services that 5G will support. It allows a network operator to provide services tailored to customers' requirements. A network slice is defined as a logical network with a bundle of specified network services over a common network infrastructure. A single physical network is sliced into multiple virtual networks that can support different service types over a single RAN. 3GPP has standardized four different service types: eMBB, URLLC, MIoT and V2X [i.2]. 3GPP defined the 5G architecture and procedures covering network slicing and related concepts in Release 15. Furthermore, management and orchestration of 5G networks featuring slicing was defined in the 3GPP specifications. Other standards groups, e.g. GSMA, ETSI NFV-MANO, ETSI ZSM and ONAP, focus on different aspects of network slicing. Further information regarding network slicing and other SDOs' contributions is discussed in the Study on O-RAN Slicing Technical Report [i.4].

A sample RAN slicing deployment of O-RAN network functions, based on the selected initial deployment option (option B, as described in [i.15]), is shown in figure 4-1, with some of the network functions shared between RAN slice subnets (such as O-CU-CP, O-DU and O-RU) and some network functions dedicated to a particular RAN slice subnet (such as O-CU-UP).

Figure 4-1: Example O-RAN Slicing Deployment
5 High-Level O-RAN Slicing Use Cases
5.1 O-RAN Slicing Use Cases
This clause contains the high-level O-RAN slicing use cases that O-RAN is expected to support. The slicing requirements include those derived from the specified use cases. Additional use cases will be added, as prioritized by the O-RAN community, in future versions of the present document. It should be noted that not all of the use cases presented here are currently supported by O-RAN specifications; these use cases will be addressed in future O-RAN work.

5.2 O-RAN Slice Subnet Management and Provisioning Use Cases
5.2.0 O-RAN Slice Subnet Management and Provisioning
Network slicing is conceived as an end-to-end feature that includes the core network, the transport network and the RAN. Although 3GPP started defining network slicing support with Release 15, slicing in O-RAN needs to be further addressed in line with 3GPP to achieve deployable network slicing in an open RAN environment. Management aspects of network slicing, such as Network Slice Instances (NSI) and Network Slice Subnet Instances (NSSI), are defined by 3GPP. While an NSI refers to an end-to-end network slice, an NSSI refers to a part of it, such as a RAN slice subnet. The provisioning of network slicing includes four phases: preparation, commissioning, operation and decommissioning. The NSI/NSSI provisioning operations include:
- Create an NSI/NSSI;
- Activate an NSI/NSSI;
- Deactivate an NSI/NSSI;
- Modify an NSI/NSSI;
- Terminate an NSI/NSSI.
3GPP TS 28.531 [4], clause 4.1 shall be used for further details of NSI and NSSI lifecycle management and provisioning. This clause provides the use cases and procedures necessary for O-RAN Slice Subnet Management in line with the 3GPP Slice Management framework. For this purpose, use cases such as slice subnet creation, activation, modification, deactivation, termination, configuration and feasibility check are considered for the O-RAN architecture.
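The provisioning operations listed above can be sketched as a minimal lifecycle state machine. The class and state names below are illustrative assumptions for this sketch, not normative terms from 3GPP TS 28.531.

```python
# Minimal sketch of the NSI/NSSI provisioning operations as a lifecycle
# state machine. State names ("inactive", "active", "terminated") are
# illustrative, not normative.

class Nssi:
    """Tracks the lifecycle of a network slice subnet instance."""

    def __init__(self, nssi_id, requirements):
        # Create: the instance exists but does not yet carry traffic.
        self.nssi_id = nssi_id
        self.requirements = dict(requirements)
        self.state = "inactive"

    def activate(self):
        if self.state != "inactive":
            raise RuntimeError("only an inactive NSSI can be activated")
        self.state = "active"

    def deactivate(self):
        if self.state != "active":
            raise RuntimeError("only an active NSSI can be deactivated")
        self.state = "inactive"

    def modify(self, new_requirements):
        # Modify: update the slice subnet related requirements in place.
        self.requirements.update(new_requirements)

    def terminate(self):
        if self.state == "active":
            raise RuntimeError("deactivate the NSSI before terminating it")
        self.state = "terminated"
```

The ordering constraint mirrors the deactivation pre-condition of the termination use case in clause 5.2.5: an active NSSI has to be deactivated before it can be terminated.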
5.2.1 O-RAN Slice Subnet Instance Creation
The context of the O-RAN Slice Subnet Instance Creation Use Case is captured in table 5.2.1-1.

Table 5.2.1-1: O-RAN Slice Subnet Instance Creation Use Case

Goal: Creation of a new O-RAN network slice subnet instance (O-NSSI), or use of an existing O-NSSI, to satisfy the RAN slice subnet related requirements (3GPP TS 28.531 [4], clause 5.1.2).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-Cloud M&O, who acts as O-Cloud management and orchestration provider within SMO.
- Non-RT RIC.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is aware of O-Cloud M&O to manage the lifecycle of VNFs and the interconnection between the VNFs and PNFs.
Pre-conditions: VNF packages for virtualized O-RAN network functions to be included in the O-RAN slice subnet instance have already been on-boarded.
Begins when: NSSMS_P receives a request for a network slice subnet instance. The request contains network slice subnet related requirements.
Step 1 (M): NSSMS_P checks the feasibility of the request, based on the received network slice subnet related requirements. (Uses: O-RAN Slice Subnet Feasibility Check.)
Step 2 (M): NSSMS_P decides to create a new O-NSSI or use an existing O-NSSI.
Step 3 (M): If an existing O-NSSI is to be used, NSSMS_P should trigger modification of the existing O-NSSI to satisfy the network slice subnet related requirements; go to step 11. Otherwise, NSSMS_P triggers creation of a new O-NSSI; continue with step 4. (Uses: O-RAN Slice Subnet Instance Modification use case.)
Step 4 (M): NSSMS_P derives the requirements for the constituent NSSI(s).
Step 5 (O): If the required O-NSSI contains constituent NSSI(s) managed by other NSSMS_P(s), NSSMS_P can trigger creation of the respective constituent NSSI(s) through the other NSSMS_P(s) which manage them. In that case, NSSMS_P receives the constituent NSSI information from the other NSSMS_P(s) and associates the constituent NSSI(s) with the required O-NSSI. (Uses: (O-RAN) Slice Subnet Instance Creation use case, to create constituent (O-)NSSI(s) managed by other NSSMS_P(s).)
Step 6 (M): NSSMS_P determines the service related requirements and triggers a service request to O-Cloud M&O for instantiation of virtual O-RAN network functions and virtual links within the determined O-Cloud(s). Based on the service request, O-Cloud M&O performs the corresponding NF instantiation procedures and virtual link establishment.
Step 7 (M): NSSMS_P associates the service response received from O-Cloud M&O with the corresponding O-NSSI.
Step 8 (M): NSSMS_P uses the (O-RAN) NF provisioning service exposed by NFMS_P to configure the (O-)NSSI constituents.
Step 9 (M): NSSMS_P configures the O-NSSI MOI with each constituent (O-)NSSI MOI identifier.
Step 10 (M): NSSMS_P triggers the O-RAN TN Manager coordination procedure to establish necessary links, such as for A1, E2, and midhaul and fronthaul connectivity.
Step 11 (M): NSSMS_P notifies Non-RT RIC with the network slice subnet requirements and the respective O-NSSI information.
Step 12 (M): NSSMS_P notifies NSSMS_C with the resulting status of this process and the relevant O-NSSI information.
Ends when: The O-NSSI and relevant O-RAN NFs are created, and Non-RT RIC is configured with the slice requirements and O-NSSI information.
Exceptions: One of the steps identified above fails.
Post-conditions: The O-NSSI is ready to satisfy the network slice subnet related requirements.
Traceability: REQ-SL-FUN14, REQ-SL-FUN20 - REQ-SL-FUN27.
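Steps 2-3 of the table (reuse an existing O-NSSI or create a new one) can be sketched as follows. The matching criterion — a simple comparison of requirements against advertised capabilities — and the dictionary layout are assumptions made purely for illustration.

```python
# Hedged sketch of the "reuse or create" decision in steps 2-3 of
# table 5.2.1-1. The capability model is illustrative, not normative.

def select_or_create_nssi(requirements, existing_nssis):
    """Return (nssi, reused) for the given slice subnet requirements."""
    for nssi in existing_nssis:
        capacity = nssi["capabilities"]
        # Reuse only if every requirement is met by the existing subnet;
        # the NSSMS_P would then trigger modification (go to step 11).
        if all(capacity.get(k, 0) >= v for k, v in requirements.items()):
            return nssi, True
    # Otherwise create a new O-NSSI (steps 4 onwards).
    new_nssi = {"id": f"o-nssi-{len(existing_nssis) + 1}",
                "capabilities": dict(requirements)}
    return new_nssi, False
```

In a real NSSMS_P the decision would also weigh isolation constraints and sharing policy, not only raw capacity.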
5.2.2 O-RAN Slice Subnet Instance Activation
The context of the O-RAN Slice Subnet Instance Activation Use Case is captured in table 5.2.2-1.

Table 5.2.2-1: O-RAN Slice Subnet Instance Activation Use Case

Goal: Activation of an O-RAN network slice subnet instance (O-NSSI) (3GPP TS 28.531 [4], clause 5.1.10).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is providing services to authorized consumers.
Pre-conditions: An O-NSSI has already been created and it is inactive. As an example, the O-NSSI can contain inactive O-RAN NFs:
- Near-RT RIC is installed and active.
- O-CU-CP is installed (i.e. operationalState = enabled), but not yet activated (i.e. administrativeState = locked).
- O-CU-UP is installed (i.e. operationalState = enabled), but not yet activated (i.e. administrativeState = locked).
- O-DU is installed (i.e. operationalState = enabled), but not yet activated (i.e. administrativeState = locked) (3GPP TS 28.541 [7], figure B.2.2).
- O-RU is physically installed (i.e. operationalState = enabled), but not yet activated (i.e. administrativeState = locked). See note 1.
Begins when: NSSMS_P decides to activate the O-NSSI based on the received network slice subnet related request from its authorized consumer NSSMS_C.
Step 1 (M): NSSMS_P receives a request from NSSMS_C to activate the O-NSSI (via the NSSI Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.2-1) to change the administrativeState of the O-NSSI to unlocked).
Step 2 (M): NSSMS_P identifies the inactive constituents within the O-NSSI and decides to activate those constituents, which can be NFs or NSSIs. As an example, NSSMS_P activates the following inactive O-RAN NF constituents:
- the O-CU-CP NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to activate the O-CU-CP by changing the administrativeState of the O-CU-CP to unlocked. See note 2.
- the O-CU-UP NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to activate the O-CU-UP by changing the administrativeState of the O-CU-UP to unlocked. See notes 3 and 4.
- the O-DU NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to activate the O-DU by changing the administrativeState of the O-DU to unlocked. See notes 5 and 6.
- the O-RU NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to activate the O-RU by changing the administrativeState of the O-RU to unlocked. See note 7.
Step 3 (M): NSSMS_P receives notifications indicating that all inactive NSSI constituents are activated. (NSSMS_P is notified by the respective NFMS_P(s), which invoke the NF Provisioning data report service with notification notifyMOIAttributeValueChanges (3GPP TS 28.531 [4], table 6.3-1), that the O-CU-CP, O-CU-UP, O-DU and O-RU have been activated.)
Step 4 (M): NSSMS_P changes the administrativeState of the O-NSSI to unlocked, and invokes the NSSI Provisioning data report service with notification notifyMOIAttributeValueChanges (3GPP TS 28.531 [4], table 6.2-1) to notify NSSMS_C that the O-NSSI has been activated.
Ends when: The O-NSSI is activated.
Exceptions: One of the steps identified above fails.
Post-conditions: The O-NSSI is in operation.
Traceability: REQ-SL-FUN25, REQ-SL-FUN28.

NOTE 1: The O-CU-CP, O-CU-UP, O-DU and O-RU are not shared by another O-NSSI, since they are not yet activated. Near-RT RIC has already been activated for other services.
NOTE 2: O-CU-CP starts to establish the E2 interface connection with Near-RT RIC.
NOTE 3: O-CU-UP starts to establish the E2 interface connection with Near-RT RIC.
NOTE 4: The E1 interface connection will be established between O-CU-CP and O-CU-UP.
NOTE 5: O-DU starts to establish the F1 interface connection with O-CU (3GPP TS 28.541 [7], Annex A.1).
NOTE 6: O-DU starts to establish the E2 interface connection with Near-RT RIC.
NOTE 7: O-RU starts to establish the M-plane interface connection with O-DU (O-RAN.WG1.O1-Interface.0-v04.00 [13]).
5.2.3 O-RAN Slice Subnet Instance Modification
The context of the O-RAN Slice Subnet Instance Modification Use Case is captured in table 5.2.3-1.

Table 5.2.3-1: O-RAN Slice Subnet Instance Modification Use Case

Goal: Modification of an existing O-NSSI to satisfy O-RAN slice subnet related requirements (3GPP TS 28.531 [4], clause 5.1.9).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-Cloud M&O, who acts as O-Cloud management and orchestration provider within SMO.
- Non-RT RIC.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is aware of O-Cloud M&O to manage the lifecycle of VNFs and the interconnection between the VNFs and PNFs.
Pre-conditions: VNF packages for virtualized O-RAN network functions to be included in the O-RAN slice subnet instance have already been on-boarded.
Begins when: NSSMS_P receives a request for modification of an existing O-NSSI. The request contains new network slice subnet related requirements.
Step 1 (M): NSSMS_P checks the feasibility of the request, based on the received network slice subnet related modification requirements. If the modification requirements can be satisfied, go to step 2; else go to step 8. (Uses: O-RAN Slice Subnet Feasibility Check.)
Step 2 (M): NSSMS_P decomposes the O-NSSI modification request into modification requests for each (O-)NSSI constituent.
Step 3 (O): If the required O-NSSI contains constituent (O-)NSSI(s) managed by other NSSMS_P(s), NSSMS_P can trigger modification of the respective constituent (O-)NSSI(s) through the other NSSMS_P(s) which manage them. In that case, NSSMS_P receives the constituent (O-)NSSI information from the other NSSMS_P(s) and associates the constituent (O-)NSSI(s) with the required O-NSSI. (Uses: (O-RAN) Slice Subnet Instance Modification use case, to modify constituent (O-)NSSI(s) managed by other NSSMS_P(s).)
Step 4 (O): If the O-NSSI contains virtualized part(s), NSSMS_P triggers a service modification request to O-Cloud M&O for scaling, updating, instantiation, etc. of virtual O-RAN network functions and virtual links within the determined O-Cloud(s).
Step 5 (O): If the O-NSSI consists of NF instances, NSSMS_P uses the NF provisioning service exposed by NFMS_P to (re-)configure the (O-)NSSI constituents.
Step 6 (O): If the NSSI contains a TN part, NSSMS_P triggers the O-RAN TN Manager coordination procedure to establish/modify necessary links, such as for A1, E2, midhaul and fronthaul connectivity.
Step 7 (M): NSSMS_P notifies Non-RT RIC with the updated network slice subnet requirements and the respective O-NSSI information.
Step 8 (M): NSSMS_P notifies NSSMS_C with the resulting status of this process and the relevant O-NSSI information.
Ends when: The O-NSSI and relevant O-RAN NFs are modified, and Non-RT RIC is configured with the modified slice requirements and O-NSSI information.
Exceptions: One of the steps identified above fails.
Post-conditions: The O-NSSI is ready to satisfy the updated network slice subnet related requirements.
Traceability: REQ-SL-FUN24 - REQ-SL-FUN27, REQ-SL-FUN29, REQ-SL-FUN30.
5.2.4 O-RAN Slice Subnet Instance Deactivation
The context of the O-RAN Slice Subnet Instance Deactivation Use Case is captured in table 5.2.4-1.

Table 5.2.4-1: O-RAN Slice Subnet Instance Deactivation Use Case

Goal: Deactivation of an O-RAN network slice subnet instance (O-NSSI) (3GPP TS 28.531 [4], clause 5.1.11).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is providing services to authorized consumers.
Pre-conditions: An O-NSSI has already been created and it is in active state. As an example, the existing O-NSSI includes active O-CU-CP, O-CU-UP, O-DU and O-RU O-RAN NFs, where the administrativeState is unlocked (3GPP TS 28.541 [7], figure B.2.2). See note 1.
Begins when: NSSMS_P decides to deactivate the O-NSSI based on the received network slice subnet related request from its authorized consumer NSSMS_C.
Step 1 (M): NSSMS_P receives a request from NSSMS_C to deactivate the O-NSSI (via the NSSI Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.2-1) to request NSSMS_P to deactivate the O-NSSI, that is, to change the administrativeState of the O-NSSI to locked).
Step 2 (M): NSSMS_P identifies the active constituents (e.g. NSSI, NF) of the NSSI and decides to deactivate those constituents. As an example, NSSMS_P finds that the O-NSSI contains active non-shared O-RAN NFs, and deactivates:
- the O-CU-CP NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to deactivate the O-CU-CP by changing the administrativeState of the O-CU-CP to locked. See notes 2 and 3.
- the O-CU-UP NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to deactivate the O-CU-UP by changing the administrativeState of the O-CU-UP to locked. See note 4.
- the O-DU NF constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to deactivate the O-DU by changing the administrativeState of the O-DU to locked. See notes 5 and 6.
- the O-RU constituent, and invokes the NF Provisioning service with operation modifyMOIAttributes (3GPP TS 28.531 [4], table 6.3-1) to request NFMS_P to deactivate the O-RU by changing the administrativeState of the O-RU to locked. See note 7.
Step 3 (M): NSSMS_P receives notifications indicating that all active non-shared NSSI constituents are deactivated. (NSSMS_P is notified by the respective NFMS_P(s), which invoke the NF Provisioning data report service with notification notifyMOIAttributeValueChanges (3GPP TS 28.531 [4], table 6.3-1), that the O-CU-CP, O-CU-UP, O-DU and O-RU have been deactivated.)
Step 4 (M): NSSMS_P changes the administrativeState of the O-NSSI to locked, and invokes the NSSI Provisioning data report service with notification notifyMOIAttributeValueChanges (3GPP TS 28.531 [4], table 6.2-1) to notify NSSMS_C that the O-NSSI has been deactivated.
Ends when: The O-NSSI is deactivated.
Exceptions: One of the steps identified above fails.
Post-conditions: The O-NSSI is inactive.
Traceability: REQ-SL-FUN25, REQ-SL-FUN31.

NOTE 1: The O-CU-CP, O-CU-UP, O-DU and O-RU are not shared by another O-NSSI.
NOTE 2: O-CU-CP starts to terminate the E2 interface connection with Near-RT RIC.
NOTE 3: The E1 interface connection will be released between O-CU-CP and O-CU-UP.
NOTE 4: O-CU-UP starts to terminate the E2 interface connection with Near-RT RIC.
NOTE 5: O-DU starts to terminate the F1 interface connection with O-CU (3GPP TS 28.541 [7], Annex A.1).
NOTE 6: O-DU starts to terminate the E2 interface connection with Near-RT RIC.
NOTE 7: O-RU starts to terminate the M-plane interface connection with O-DU (O-RAN.WG1.O1-Interface.0-v04.00 [13]).
5.2.5 O-RAN Slice Subnet Instance Termination
The context of the O-RAN Slice Subnet Instance Termination Use Case is captured in table 5.2.5-1.

Table 5.2.5-1: O-RAN Slice Subnet Instance Termination Use Case

Goal: Termination or disassociation of an existing O-RAN network slice subnet instance (O-NSSI) (3GPP TS 28.531 [4], clause 5.1.4).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- O-Cloud M&O, who acts as O-Cloud management and orchestration provider within SMO.
- Non-RT RIC.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is aware of O-Cloud M&O to manage the lifecycle of VNFs and the interconnection between the VNFs and PNFs.
Pre-conditions: The O-NSSI exists and it is in inactive state.
Begins when: NSSMS_P receives a request for an O-RAN network slice subnet instance indicating that the O-NSSI is no longer needed. The request contains the network slice subnet identifier.
Step 1 (M): NSSMS_P checks whether the O-NSSI is a shared network slice subnet instance. If the O-NSSI is shared, the O-NSSI will be disassociated via the O-NSSI slice subnet instance modification use case; go to step 5. If the O-NSSI is not shared, the O-NSSI will be terminated; go to step 2. (Uses: O-RAN Slice Subnet Instance Modification use case.)
Step 2 (M): If the O-NSSI consists of constituent NSSIs that are not managed directly by the NSSMS_P, it sends a request to the other NSSMS_P(s) indicating that the constituent NSSIs are no longer needed for the O-NSSI. (Uses: O-RAN Slice Subnet Instance Termination use case.)
Step 3 (O): NSSMS_P triggers a service termination request to O-Cloud M&O for removal of non-shared virtual O-RAN network functions.
Step 4 (O): If the O-NSSI includes constituent transport links, NSSMS_P triggers the O-RAN TN Manager coordination procedure.
Step 5 (M): NSSMS_P notifies Non-RT RIC that the O-NSSI has been terminated.
Step 6 (M): NSSMS_P notifies NSSMS_C with the resulting status of this process and the relevant O-NSSI information.
Ends when: All the steps identified above are successfully completed.
Exceptions: One of the steps identified above fails.
Post-conditions: The O-NSSI has been terminated or disassociated and Non-RT RIC is notified.
Traceability: REQ-SL-FUN21, REQ-SL-FUN23, REQ-SL-FUN27, REQ-SL-FUN32 - REQ-SL-FUN34.
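The branch in step 1 of the table — disassociate a shared O-NSSI, terminate a non-shared one — can be sketched as below. The dictionary layout (a set of consuming NSIs plus a state field) is an assumption for this sketch.

```python
# Illustrative sketch of step 1 of table 5.2.5-1: a shared O-NSSI is only
# disassociated from the requesting consumer; a non-shared O-NSSI is
# terminated outright.

def terminate_or_disassociate(nssi, requester):
    """Return the action taken for a termination request from `requester`."""
    nssi["consumers"].discard(requester)
    if nssi["consumers"]:
        # Still used by other consumers: keep the subnet, disassociate only.
        return "disassociated"
    nssi["state"] = "terminated"
    return "terminated"
```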
5.2.6 O-RAN Slice Subnet Instance Configuration
The context of the O-RAN Slice Subnet Instance Configuration Use Case is captured in table 5.2.6-1.

Table 5.2.6-1: O-RAN Slice Subnet Instance Configuration Use Case

Goal: Configuration of an O-NSSI (3GPP TS 28.531 [4], clause 5.1.13).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_P is providing services to authorized consumers. NSSMS_P is aware of the respective NSSMS_P(s) and NFMS_P(s) which manage the constituent NSSI(s) and NF(s).
Pre-conditions: The O-NSSI exists.
Begins when: NSSMS_C triggers the (re-)configuration of an O-NSSI and its constituents.
Step 1 (M): NSSMS_P receives a request from NSSMS_C with slice subnet (re-)configuration information for (re-)configuration of an O-NSSI.
Step 2 (M): NSSMS_P decomposes the received slice subnet configuration information and prepares CM requests for each constituent, if necessary and applicable.
Step 3 (O): NSSMS_P configures the constituent O-NSSI(s) if they are managed directly by the NSSMS_P.
Step 4 (O): If the O-NSSI contains constituent NSSI(s) managed by other NSSMS_P(s), NSSMS_P triggers configuration of the respective constituent NSSI(s) through the NSSMS_P(s) which manage them. (Uses: (O-RAN) Slice Subnet Configuration use case.)
Step 5 (O): If the required O-NSSI contains constituent O-RAN NF(s) managed by NFMS_P(s), NSSMS_P triggers configuration requests with the corresponding slice subnet configuration information through the NFMS_P(s) which manage the constituent O-RAN NF(s).
Step 6 (M): NSSMS_P sends the configuration result to the NSSMS_C, which might be based on results received from other CM service providers.
Ends when: All the steps identified above are successfully completed.
Exceptions: One of the steps identified above fails.
Post-conditions: The required (re-)configuration is accomplished at the corresponding constituent(s).
Traceability: REQ-SL-FUN24, REQ-SL-FUN25.
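The decomposition in step 2 of the table can be sketched as a filter that gives each constituent only the configuration parameters it supports. The parameter and constituent names are invented for illustration; they are not taken from the NRM.

```python
# Hedged sketch of step 2 of table 5.2.6-1: split a slice subnet
# configuration into per-constituent CM requests.

def decompose_configuration(slice_config, constituents):
    """Map constituent id -> the subset of slice_config it should receive.

    `constituents` maps each constituent id to the set of parameter names
    that constituent supports (an assumption for this sketch).
    """
    cm_requests = {}
    for cid, supported in constituents.items():
        params = {k: v for k, v in slice_config.items() if k in supported}
        if params:  # only constituents with applicable parameters get a request
            cm_requests[cid] = params
    return cm_requests
```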
5.2.7 O-RAN Slice Subnet Feasibility Check
The context of the O-RAN Slice Subnet Feasibility Check is captured in table 5.2.7-1.

Table 5.2.7-1: O-RAN Slice Subnet Feasibility Check

Goal: To check the feasibility of provisioning an O-RAN network slice subnet instance (O-NSSI), to determine whether the O-NSSI requirements can be satisfied (e.g. in terms of resources) (3GPP TS 28.531 [4], clause 5.1.21).
Actors and Roles:
- NSSMS_C such as NSMF, who acts as an example network slice subnet management service consumer.
- NSSMS_P such as NSSMF, who acts as an example network slice subnet management service provider.
- NFMS_P such as SMO OAM Functions or NFMF, who acts as an example network function management service provider.
- O-Cloud M&O, who acts as O-Cloud management and orchestration provider within SMO.
- Non-RT RIC.
- O-RAN Network Functions: NFs such as Near-RT RIC, O-CU-CP, O-CU-UP, O-DU and O-RU.
Assumptions: NSSMS_C has decided to check the feasibility of provisioning an O-NSSI based on, for example, an internal decision or an external service request.
Pre-conditions: Network slice subnet requirements have been derived or received by the network slice subnet management service consumer NSSMS_C.
Begins when: NSSMS_P receives the request to evaluate the feasibility of an O-NSSI according to the network slice subnet requirements. (Uses: O-RAN Slice Subnet Instance Creation, O-RAN Slice Subnet Instance Modification.)
Step 1 (M): NSSMS_P identifies the network slice subnet constituents according to the requirements, e.g. network services to be requested from O-Cloud M&O.
Step 2 (O): For the purpose of checking the feasibility of provisioning an O-NSSI, NSSMS_P can obtain information from the SMO and Non-RT RIC (e.g. load level information, resource usage information from management data analytics services).
Step 3 (M): NSSMS_P sends enquiries with reservation requests to O-Cloud M&O to determine the availability of network constituents, e.g. network services, network functions.
Step 4 (O): For the purpose of checking the feasibility of transport network links, NSSMS_P can obtain information from the TN Manager.
Ends when: NSSMS_P provides the feasibility check results to NSSMS_C. If provisioning of the O-NSSI is feasible, information about the reserved resources can also be provided.
Exceptions: One of the mandatory steps fails.
Post-conditions: N/A.
Traceability: REQ-SL-FUN35 - REQ-SL-FUN37.
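The core of the feasibility check — compare required resources against what O-Cloud M&O reports as available and, if feasible, reserve them — can be sketched as below. The resource names are assumptions made for illustration.

```python
# Illustrative sketch of the feasibility check of table 5.2.7-1.

def check_feasibility(required, available):
    """Return (feasible, reservation) for the given resource requirements.

    `required` and `available` map resource names to quantities; the
    reservation echoes the requirements only when every one can be met.
    """
    feasible = all(available.get(res, 0) >= qty for res, qty in required.items())
    reservation = dict(required) if feasible else {}
    return feasible, reservation
```

A real NSSMS_P would issue the reservation to O-Cloud M&O (step 3) rather than compute it locally; this sketch only shows the decision logic.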
5.3 O-RAN Slicing Use Cases
5.3.1 Use Case 1: RAN Slice SLA Assurance
In the 5G era, network slicing is a prominent feature which provides end-to-end connectivity and data processing tailored to specific business requirements. These requirements include customizable network capabilities such as the support of very high data rates, traffic densities, service availability and very low latency. According to 5G standardization efforts, the 5G system can support the needs of a business through the specification of several service needs such as data rate, traffic capacity, user density, latency, reliability and availability. These capabilities are always provided based on a Service Level Agreement (SLA) between the mobile operator and the business customer, which has raised interest in mechanisms to assure slice SLAs and prevent their possible violations. O-RAN's open interfaces and AI/ML-based architecture will enable such challenging mechanisms to be implemented and help pave the way for operators to realize the opportunities of network slicing in an efficient manner.

The RAN slice SLA assurance scenario involves Non-RT RIC, Near-RT RIC, E2 Nodes and SMO interaction. The scenario starts with the retrieval of RAN-specific slice SLAs/requirements (possibly within SMO or from NSSMF, depending on operator deployment options). Based on slice-specific performance measurements from E2 Nodes, Non-RT RIC and Near-RT RIC can fine-tune RAN behavior, aligned with O-RAN architectural roles, to assure RAN slice SLAs. Non-RT RIC monitors long-term trends and patterns in RAN slice subnets' performance, and employs AI/ML methods to perform corrective actions through SMO (e.g. reconfiguration via O1) or via creation of A1 policies. Non-RT RIC can also construct/train relevant AI/ML models to be deployed at Near-RT RIC. A1 policies possibly include scope identifiers (e.g. S-NSSAI) and statements such as KPI targets. Near-RT RIC, in turn, enables optimized RAN actions through execution of the deployed AI/ML models in near-real-time, considering both the O1 configuration (e.g. static RRM policies) and the received A1 policies, as well as received slice-specific E2 measurements.

An overview of the RAN Slice SLA Assurance use case is given in figure 5.3.1-1.

Figure 5.3.1-1: RAN Slice SLA Assurance use case overview

The functions provided by the entities for RAN slice SLA assurance are listed in more detail below:
1) SMO:
a) Provides information about the slice topology and the SLA associated with the slice to the Non-RT RIC.
2) Non-RT RIC:
a) Retrieves RAN slice SLA targets from the respective entities such as SMO or NSSMF.
b) Long-term monitoring of RAN slice subnet performance measurements.
c) Training of potential ML models to be deployed in Near-RT RIC for optimized slice assurance.
d) Supports deployment and update of AI/ML models into Near-RT RIC.
e) Sends A1 policies and enrichment information to Near-RT RIC to drive slice assurance.
f) Sends O1 reconfiguration requests to SMO for slow-loop slice assurance.
3) Near-RT RIC:
a) Near-real-time monitoring of slice-specific RAN performance measurements.
b) Supports deployment and execution of the AI/ML models from Non-RT RIC.
c) Supports interpretation and execution of policies from Non-RT RIC.
d) Performs optimized RAN (E2) actions to achieve RAN slice subnet requirements based on the O1 configuration, A1 policies and E2 reports.
4) E2 Nodes (O-CU-CP, O-CU-UP, O-DU):
a) Support slice assurance actions such as slice-aware resource allocation, prioritization, etc.
b) Support slice-specific performance measurements through O1.
c) Support slice-specific performance reports through E2.
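An A1 policy with an S-NSSAI scope and KPI-target statements, together with the check a slice assurance loop might apply to slice-specific measurements, can be sketched as below. The field names are illustrative assumptions, not the normative A1 policy type schema.

```python
# Sketch of an A1 policy for slice SLA assurance: a scope identifier
# (S-NSSAI) plus KPI-target statements. Field names are illustrative.

a1_policy = {
    "scope": {"sliceId": {"sst": 1, "sd": "000001"}},  # example eMBB S-NSSAI
    "statement": {"dl_throughput_mbps": 100, "ul_throughput_mbps": 20},
}

def sla_violations(policy, measurements):
    """Return the KPIs whose measured value falls below the policy target."""
    targets = policy["statement"]
    return [kpi for kpi, target in targets.items()
            if measurements.get(kpi, 0) < target]
```

In the use case above, a non-empty violation list would trigger corrective action: an O1 reconfiguration in the slow loop, or an E2 action driven by the Near-RT RIC in the fast loop.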
5.3.2 Use Case 2: Multi-vendor Slices
This use case enables multiple slices with functions provided by multiple vendors, for example slice #1, composed of DU(s) and CU(s) provided by vendor B, and slice #2, composed of DU(s) and CU(s) provided by vendor C (see figure 5.3.2-1).

Figure 5.3.2-1: Multi-vendor Slices

When providing multiple slices, it is assumed that a suitable vO-DU/scheduler and vO-CU treat each slice respectively. A vendor who provides vO-DU and vO-CU functions can have the strength of a customized scheduler for a certain service. With the accomplishment of multi-vendor circumstances, the following benefits can be expected:

1) More flexible and faster time-to-market deployment: Operators can maximize their options to choose suitable vO-DU/scheduler and vO-CU functions to offer various slices. For example, some vendors can have a strength in a scheduler for eMBB service while others have a strength in a scheduler for URLLC service. Or, vendor A can provide vO-DU/scheduler and vO-CU functions suitable for URLLC earlier than vendor B, so operators can choose the vO-DU and vO-CU functions from vendor A to meet their service requirements. Also, when an operator wants to add a new service/slice, new functions from a new vendor can be introduced with less consideration for existing vendors if a multi-vendor circumstance is realized. This can help expand vendors' business opportunities rapidly.

2) Flexible deployment when sharing RAN equipment among operators: When operators want to share RAN equipment and resources, the RAN vendors and the placement of each RAN function can differ. If a multi-vendor circumstance is introduced, it can relax restrictions among operators on sharing RAN equipment and resources. This can help expand opportunities for reaching RAN sharing agreements among operators. With the expansion of RAN sharing, operators' CAPEX and OPEX can be optimized, enabling additional investment opportunities.

3) Reduced supply chain risk: If an existing vendor providing a certain pair of vO-DU and vO-CU functions withdraws from the market due to business reasons, operators can alternatively deploy new vO-DU and vO-CU functions from other vendors under this multi-vendor circumstance. This can reduce a risk to operators' business continuity.

To realize multi-vendor slices, some coordination between vO-DUs/vO-CUs will be required, since radio resources shall be assigned properly and without any conflicts. Depending on the different service goals and the potential impact on the O-RAN architecture, a required coordination scheme needs to be determined. The possible cases are:

1) Loose coordination through the O1/E2/A1 interfaces (case 1 in figure 5.3.2-2).
2) Moderate coordination through the X2/F1 interfaces (case 2 in figure 5.3.2-2).
3) Tight coordination through a new interface between vO-DUs (case 3 in figure 5.3.2-2).

Figure 5.3.2-2: Multi-vendor Slices Coordination Scheme Options

In case 1, a resource allocation between slices or vO-DUs/vO-CUs is provisioned through the O1/A1/E2 interfaces, and each pair of vO-DU and vO-CU allocates radio resources to each customer within the radio resources allocated by Near-RT RIC and/or Non-RT RIC. In case 2, a resource allocation can be negotiated between slices or vO-DUs/vO-CUs through X2 and F1 after being provisioned through the O1/E2/A1 interfaces. The negotiation period will be several seconds due to the periodicity of X2 and F1 message exchanges between the vO-CU(s). If a more adaptive radio resource allocation is needed (case 3), more frequent negotiation would be required. This can potentially be achieved via an interface or API extension between the vO-DU(s).
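The conflict-free property of case 1 (loose coordination) can be sketched as a static partition: radio resources are provisioned per slice over O1/A1/E2, and each vendor's vO-DU schedules only within its share. The share model and PRB budget below are assumptions for illustration.

```python
# Hedged sketch of case 1 loose coordination: a static radio-resource
# split between vendor slices, so per-slice schedulers cannot conflict.

def partition_prbs(total_prbs, slice_shares):
    """Split a PRB budget between slices; shares are fractions summing to <= 1."""
    if sum(slice_shares.values()) > 1.0:
        raise ValueError("slice shares over-commit the radio resources")
    return {slice_id: int(total_prbs * share)
            for slice_id, share in slice_shares.items()}
```

Cases 2 and 3 would replace this static split with periodic renegotiation over X2/F1 or over a new inter-vO-DU interface, at increasing frequency.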
5.3.3 Use Case 3: NSSI Resource Allocation Optimization
5G networks are becoming increasingly complex with the densification of millimeter wave small cells and various new services, such as eMBB (enhanced Mobile Broadband), URLLC (Ultra Reliable Low Latency Communications), and mMTC (massive Machine Type Communications), characterized respectively by high-speed high-data-volume transmission, low-speed ultra-low-latency transmission, and infrequent low-data-volume transmission from a huge number of emerging smart devices. It is a challenging task for 5G networks to allocate resources dynamically and efficiently among multiple network nodes to support these various services. eMBB, URLLC, and mMTC services in 5G are typically realized as NSI(s) (Network Slice Instance(s)); therefore, the resources allocated to an NSSI (Network Slice Subnet Instance) supporting the O-RAN nodes can be optimized according to the service requirements.

As the new 5G services have different characteristics, the network traffic tends to be sporadic, with different usage patterns in terms of time, location, UE distribution, and types of applications. For example, most IoT sensor applications can run during off-peak hours or weekends, while special events, such as sports games or concerts, can cause traffic demand to shoot up at certain times and locations. Therefore, the NSSI resource allocation optimization function trains an AI/ML model on the large volume of performance data collected over days, weeks, and months from O-RAN nodes. It then uses the AI/ML model to predict the traffic demand patterns of the 5G network at different times and locations for each network slice subnet, and automatically re-allocates the network resources before network issues surface.
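As a toy illustration of the predict-and-reallocate idea (not the normative behaviour; function names and the naive same-hour-of-day average are assumptions for illustration only), the sketch below forecasts per-slice PRB demand from a history of hourly samples and proposes quotas proportional to the forecast:

```python
def forecast_demand(history: dict[str, list[float]], hour: int,
                    period: int = 24) -> dict[str, float]:
    """Predict each slice's PRB demand for a given hour of day by averaging
    the samples observed at that same hour across the collected history."""
    out = {}
    for slice_id, samples in history.items():
        same_hour = [v for i, v in enumerate(samples) if i % period == hour]
        out[slice_id] = sum(same_hour) / len(same_hour) if same_hour else 0.0
    return out


def propose_quotas(forecast: dict[str, float], total_prb: int) -> dict[str, int]:
    """Split the PRB budget across slices in proportion to forecast demand."""
    total = sum(forecast.values()) or 1.0
    return {s: int(total_prb * d / total) for s, d in forecast.items()}


# 48 hours of synthetic samples: eMBB peaks in daytime, mMTC at night.
history = {
    "eMBB": [80.0 if 8 <= h % 24 <= 20 else 20.0 for h in range(48)],
    "mMTC": [10.0 if 8 <= h % 24 <= 20 else 40.0 for h in range(48)],
}
print(propose_quotas(forecast_demand(history, hour=12), total_prb=100))
# → {'eMBB': 88, 'mMTC': 11}
```

A production function would replace the same-hour average with a trained AI/ML model and would additionally honour the per-slice quota policy constraints described below.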
The resource quota policies associated with the RAN NFs (E2 Nodes) included in the respective NSSIs enable 5G network providers to optimize or prioritize the utilization of the RAN resources across slices, and support the flexibility to share resources optimally across critical service slices during resource surplus or scarcity. For example, an NSSI allocated for a premium service can receive a larger share of the resources than a slice allocated for a standard/best-effort service. Another example is the allocation of additional resources for emergency services. An important consideration here is that the NSSI resource quota policies focus on maximization of resource utilization across the NSSIs. The resource quota policies can be used as a constraint for resource allocation that defines the range of resources that can be allocated per slice. One use case for applying such a constraint is an analysis and decision based on the history of resource allocation failures reflected in the RAN node measurements; here, a resource quota policy can be provisioned to control the minimum, maximum, and dedicated resources to be allocated based on the historical pattern.

The NSSI resource allocation optimization on the Non-RT RIC is shown in figure 5.3.3-1 and can consist of the following steps:

1) Monitoring: monitor the radio network(s) by collecting data via the O1 interface, including the following performance measurements that are measured per S-NSSAI (3GPP TS 28.552 [8] shall apply):
- DL PRB used for data traffic (3GPP TS 28.552 [8], clause 5.1.1.2.5 shall apply).
- UL PRB used for data traffic (3GPP TS 28.552 [8], clause 5.1.1.2.7 shall apply).
- Average DL UE throughput in gNB (3GPP TS 28.552 [8], clause 5.1.1.3.1 shall apply).
- Average UL UE throughput in gNB (3GPP TS 28.552 [8], clause 5.1.1.3.3 shall apply).
- Number of PDU Sessions requested to setup (3GPP TS 28.552 [8], clause 5.1.1.5.1 shall apply).
- Number of PDU Sessions successfully setup (3GPP TS 28.552 [8], clause 5.1.1.5.2 shall apply).
- Distribution of DL UE throughput in gNB (3GPP TS 28.552 [8], clause 5.1.1.3.2 shall apply).
- Distribution of UL UE throughput in gNB (3GPP TS 28.552 [8], clause 5.1.1.3.4 shall apply).
- Number of DRBs successfully setup (3GPP TS 28.552 [8], clause 5.1.1.10.2 shall apply).

2) Analysis & Decision: consisting of the following steps:
- Utilize AI/ML models to analyze the measurements and predict the future traffic demand, including the RRMPolicyRatio IOC limits, for each NSSI for a given time and location.
- Determine the actions needed to add or reduce the resources (e.g. capacity, VNF resources, slice subnet attributes (3GPP TS 28.541 [7] shall apply), etc.) for the RAN NFs (E2 Nodes) included in the respective NSSI at the given time and location.

3) Execution: execute the following actions to reallocate the NSSI resources:
3a. Re-configure the NSSI attributes, including RRMPolicyRatio IOCs (3GPP TS 28.541 [7] shall apply), via the OAM Functions in the SMO, which use the O1 interface to configure the E2 Nodes.
3b. Update the cloud resources via the O2 interface.

Figure 5.3.3-1: The realization of NSSI resource allocation optimization over Non-Real Time RIC

The more detailed functions provided by the entities for NSSI resource optimization are listed as follows:

1) SMO:
a) Pre-provision the default NSSI resource quota policy as a constraint for NSSI resource allocation optimization.
This information is optionally used by the Non-RT RIC in case the resource quota to be allocated per slice is not specified during slice creation, and for conflict resolution at times of resource scarcity.

2) Non-RT RIC:
a) Collect the performance measurements related to NSSI resource usage from the O-RAN nodes via the O1 interface.
b) Train the AI/ML model based on the analysis of historical performance measurements, to predict the traffic demand patterns of the NSSI at different times and locations.
c) Determine the time/date and locations (i.e. which O-RAN nodes) at which to add or reduce the resources (e.g. capacity, VNF resources, slice subnet attributes (3GPP TS 28.541 [7]), RRMPolicyRatio IOC, etc.) for a given NSSI based on inference.
d) Perform the following action(s) to optimize the NSSI resource allocation, at the time determined by the model:
i. Re-configure the NSSI attributes via the O1 interface.
ii. Update the cloud resources via the O2 interface.

3) E2 Nodes (O-CU-CP, O-CU-UP, O-DU, and O-RU):
a) Support the performance measurement collection with the required granularity over the O1 interface.
b) Support the configuration related to the NSSI resource allocation update over the O1 interface.
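The re-configuration of NSSI attributes described above can be pictured with a minimal sketch. Only the RRMPolicyRatio attribute names (rRMPolicyMinRatio, rRMPolicyMaxRatio, rRMPolicyDedicatedRatio) come from 3GPP TS 28.541 [7]; the function, its parameters, and the clamping logic are illustrative assumptions. A predicted per-slice demand share is converted into RRMPolicyRatio percentage values, constrained by the pre-provisioned quota policy:

```python
def rrm_policy_ratio(predicted_share: float, quota_min: int, quota_max: int,
                     dedicated: int, headroom: int = 10) -> dict:
    """Map a predicted per-slice demand share (0..1) to RRMPolicyRatio
    attribute values (percentages), clamped by the quota policy constraint."""
    target = round(predicted_share * 100)
    # Guaranteed share: the predicted demand, kept inside [quota_min, quota_max].
    min_ratio = max(quota_min, min(target, quota_max))
    # Hard cap: a little headroom above the guarantee, never above the quota.
    max_ratio = min(quota_max, min_ratio + headroom)
    return {
        "rRMPolicyMinRatio": min_ratio,                        # guaranteed share
        "rRMPolicyMaxRatio": max_ratio,                        # hard cap
        "rRMPolicyDedicatedRatio": min(dedicated, min_ratio),  # exclusive share
    }


# Predicted 70 % demand, but the quota policy caps this slice at 60 %.
print(rrm_policy_ratio(0.70, quota_min=20, quota_max=60, dedicated=10))
# → {'rRMPolicyMinRatio': 60, 'rRMPolicyMaxRatio': 60, 'rRMPolicyDedicatedRatio': 10}
```

The resulting attribute values would then be written to the E2 Nodes via the O1 interface (step d) i. above); the clamping step is where the SMO's pre-provisioned quota policy acts as the allocation constraint.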
6 O-RAN Slicing Principles and Requirements