ETSI TS 104 008 V1.1.1 (2026-01)

5.2.2 Comprehensive Knowledge Base

Organizations shall possess and maintain a comprehensive knowledge base relevant to the AI systems under assessment, which shall include:

a) Understanding of conformity requirements: Organizations shall possess and maintain a deep and current understanding of the specific conformity requirements applicable to the AI systems. This understanding shall encompass relevant standards (e.g. ISO/IEC standards, including potentially relevant aspects from ISO/IEC 42001 [1] on AI management systems, and ETSI standards), regulations (e.g. the EU AI Act [i.9]), sector-specific rules, and ethical guidelines.

b) Knowledge of standards and regulations linkage: Familiarity with how international and regional standards relate to and support regulatory requirements. This capability shall support the correct interpretation and application of requirements, particularly when aiming for compliance via Harmonised European Standards (HEN).

c) Understanding of AI system risk classification: The ability to correctly classify the organization's AI systems according to applicable regulatory frameworks (e.g. the risk categories in the EU AI Act [i.9]: unacceptable, high, limited, and minimal risk). This classification shall inform the scope, depth, and rigor of the CABCA implementation and the associated risk management strategies.

d) Synthesis of previous assessment results: Consolidated learnings from prior system evaluations, including identified non-conformities and corrective actions, to inform subsequent requirements and measurements.

e) AI literacy program: An organization-wide AI literacy program for personnel involved in the development, operation, assurance, and oversight of AI systems, proportionate to role and risk.
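Requirement c) can be made concrete in code. The sketch below, which is illustrative only (the `RiskTier` enum, the `USE_CASE_TIERS` mapping, and the example use cases are assumptions, not part of this specification), shows how a risk-tier classification per the EU AI Act categories might be encoded so that it can inform the depth of the CABCA implementation; a real classification has to follow the legal text, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories named in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping from intended-use descriptors to risk tiers,
# for illustration only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_system(intended_use: str) -> RiskTier:
    """Return the tier for a known use case, defaulting conservatively
    to HIGH so that an unclassified system gets the most rigorous scope."""
    return USE_CASE_TIERS.get(intended_use, RiskTier.HIGH)
```

The conservative default reflects the clause's intent that classification drives the rigor of the assessment: when in doubt, assess at the stricter level.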
5.2.3 Technical and Operational Expertise

Organizations shall ensure they possess the necessary technical and operational expertise, including:

a) Operationalization skills: Demonstrated ability to translate high-level conformity specifications and requirements (from standards, regulations, and ethical guidelines) into specific, actionable, measurable, and machine-readable metrics suitable for continuous assessment within the CABCA framework. This shall involve expertise in both the AI domain and compliance/audit methodologies.

b) AI-related technical expertise: A thorough understanding of the design, development, deployment, and operational principles of the AI systems being assessed. This shall include in-depth knowledge of the algorithms employed, data management practices (including governance), model training and validation, and system architecture.
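To make "machine-readable metrics" tangible, the following minimal sketch shows one possible shape for such a metric: a structure that traces back to a requirement, carries a threshold, and can be evaluated automatically. The `Metric` class and the example requirement identifier are hypothetical; this specification does not prescribe a data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A machine-readable metric derived from a high-level requirement."""
    requirement_id: str      # clause of the conformity specification it traces to
    name: str                # what is measured
    threshold: float         # pass/fail boundary
    higher_is_better: bool = True

    def passes(self, measured: float) -> bool:
        """Evaluate a raw measurement against the threshold."""
        if self.higher_is_better:
            return measured >= self.threshold
        return measured <= self.threshold

# Example: operationalizing a fairness requirement as a bound on the
# demographic-parity gap (identifier and threshold are illustrative).
parity_gap = Metric("REQ-FAIRNESS-01", "demographic_parity_gap",
                    0.05, higher_is_better=False)
```

Keeping the `requirement_id` on the metric preserves traceability from each automated check back to the conformity specification it operationalizes.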
5.2.4 Infrastructure and Methodological Framework

Organizations shall have in place an adequate infrastructure and methodological framework, which shall include:

a) Continuous monitoring infrastructure: A robust and reliable technological infrastructure capable of supporting continuous monitoring, data collection, evidence processing, and automated analysis as required by CABCA.

b) Awareness of auditing and compliance standards: Organizational knowledge of established auditing processes and compliance standards relevant to technology systems and, where applicable, AI-specific assurance techniques (potentially referencing practices from ISO/IEC 42001 [1] or other relevant audit standards).
5.2.5 Risk Management and Stakeholder Engagement

Organizations shall demonstrate effective risk management and stakeholder engagement capabilities, including:

a) Risk management proficiency: Proficiency in identifying, assessing, and mitigating risks associated with AI systems, specifically tailored to the context of continuous auditing and conformity assessment as performed under CABCA.

b) Stakeholder engagement strategy: A defined approach for identifying relevant internal and external stakeholders, understanding their requirements and concerns related to AI conformity, and comprehending how CABCA outcomes impact them. This shall form the basis for effective communication and trust-building.

c) Risk ownership: Named owners shall be assigned for each assessed AI system, with the authority to allocate resources and to accept or mitigate risks; ownership shall be recorded with the assessment status and persisted (see clause 9.3).
5.3 Basic Assumptions of CABCA
5.3.1 General

This clause describes the fundamental assumptions that underpin the CABCA framework and that are equally essential for quality management systems, including post-market monitoring and compliance with ISO 9001:2015 [3] (Quality management systems). These assumptions relate to the nature and characteristics of AI systems, the applicability and transparency of standards and regulations, and the independence and objectivity required in the auditing process.
5.3.2 AI System Heterogeneity and Dynamism

CABCA assumes that while AI systems vary widely in complexity, function, and application domain, they are subject to common, universal requirements. These requirements are often derived from high-level ethical principles, such as fairness, transparency, technical robustness, and accountability, which are foundational to ethical guidelines [i.10] and regulatory frameworks like the EU AI Act [i.9]. For instance, this could mean ensuring a hiring AI is free from gender bias (fairness), providing a clear rationale for a credit scoring decision (transparency), verifying that a self-driving system performs reliably in adverse weather conditions (technical robustness), and establishing a clear audit trail to address system-induced errors (accountability). This assumption acknowledges the diverse nature of AI systems while emphasizing a consistent approach to evaluating their adherence to fundamental requirements.

AI systems are dynamic, evolving over time due to data variability, algorithm updates, and operational environment changes. CABCA is adaptable to these evolutions by running frequent reassessment cycles that automatically collect fresh evidence, re-evaluate metrics, and update the conformity status on a regular basis. These iterative cycles allow CABCA to adapt immediately to system changes, ensuring compliance remains current, and thus trustworthy, throughout the AI system's lifecycle.
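The reassessment cycle described above (collect fresh evidence, re-evaluate metrics, update the conformity status) can be sketched as a small driver function. This is a minimal illustration, not a normative implementation; the callable names and the "conformant"/"non-conformant" status strings are assumptions.

```python
import time

def assessment_cycle(collect_evidence, evaluate_metrics, publish_status):
    """Run one CABCA reassessment cycle:
    gather evidence -> evaluate metrics -> record conformity status."""
    evidence = collect_evidence()           # fresh artifacts (logs, samples, ...)
    findings = evaluate_metrics(evidence)   # metric name -> bool (pass/fail)
    status = "conformant" if all(findings.values()) else "non-conformant"
    publish_status(status, findings)        # persist/report the updated status
    return status

def run_continuously(cycle, interval_s, max_cycles):
    """Drive repeated cycles on a fixed schedule (a proactive,
    time-based trigger)."""
    for _ in range(max_cycles):
        cycle()
        time.sleep(interval_s)
```

In practice the scheduler would also react to event-driven triggers (data drift, performance degradation); the fixed interval here stands in for the simplest proactive trigger.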
5.3.3 Universal Applicability and Transparency

Building on the existence of common requirements, CABCA further assumes that the specific standards, regulations, and ethical guidelines selected for assessment can be consistently interpreted and practically operationalized (i.e. translated into measurable metrics and verifiable tests) for any type of AI system, ensuring the assessment framework's broad applicability.

Transparency and accountability are central to this approach. CABCA operationalizes these principles by continuously collecting and documenting evidence of system behaviour, decision logic, and performance metrics. This structured, traceable documentation, combined with automated reporting and clear mapping to compliance requirements, ensures that AI operations remain visible and auditable. As a result, CABCA enables stakeholders to understand, verify, and hold accountable both the AI systems and their operators for decisions and outcomes.
5.3.4 Independence and Objectivity in Auditing

The integrity and credibility of CABCA rely on maintaining independence and objectivity throughout the auditing process, whether internal or external. This is not only an assumption but a fundamental requirement supported by several facets of the CABCA framework.

Independence is fostered through several mechanisms: clearly defined roles (Auditee and Auditing Party, see clause 6.3), strict reliance on objective evidence derived from metrics and measurements (see clause 9.1), the option of third-party assessment modes (see clause 6.2), and the overall transparency of the process and its outcomes (see clauses 5.4.3 and 9.4.2). Together, these mechanisms safeguard the impartiality of auditing and assessment under CABCA, thereby strengthening trust and reliability in the resulting conformity assessments. While stated here as a fundamental requirement, the specific implementation details of these mechanisms are elaborated in the referenced clauses: roles (clause 6.3), assessment modes (clause 6.2), evidence from measurements (clause 9.1), and transparent reporting (clause 9.4).
5.4 Principles of CABCA
5.4.1 General

CABCA is founded on three guiding principles: ongoing conformity, stakeholder trust, and adaptability [i.7], [i.8]. The rationale for their selection is as follows:

• Ongoing conformity was chosen to directly counter the limitations of traditional "point-in-time" audits, which are inadequate for the dynamic nature of AI. Since AI systems evolve continuously with new data and model updates, this principle establishes the necessity of continuous assessment to ensure compliance is actively maintained throughout the system's lifecycle, not just verified at a single moment.

• Stakeholder trust was selected because the complexity and societal impact of AI systems demand more than technical compliance; they require demonstrable trustworthiness. This principle mandates a transparent, verifiable, and communicative assessment process designed to build and maintain confidence among all stakeholders, including regulators, customers, and the public, which is crucial for the acceptance and responsible deployment of AI.

• Adaptability was included to ensure the long-term viability of the CABCA framework in a rapidly changing technological and regulatory landscape. This principle enables the methodology to evolve alongside new AI techniques, emerging risks, and updated standards, ensuring it remains relevant and effective without requiring fundamental redesign.

These principles offer distinct advantages over traditional periodic audits and self-assessment methods. They ensure a more standardized yet flexible approach to conformity assessment: standardization is achieved through a common methodological framework and process, while flexibility is provided by the scoping and operationalization phases that adapt the assessment to specific systems and risks.

These principles build confidence among stakeholders, provide an infrastructure for organizations to continually improve and adapt to new compliance challenges, and are central to maintaining transparency and demonstrating an ongoing commitment to quality and compliance.
5.4.2 Ongoing conformity

Traditional conformity assessment methodologies often offer a "snapshot" of compliance at a particular moment in time. While this approach may suffice for static or slowly evolving systems, it is inadequate for rapidly changing environments, particularly in fields such as machine learning. CABCA shifts this paradigm by focusing on the continuous auditing of an organization's AI systems' adherence to standards. This principle aligns closely with the core process of Continuous Assessment, which leverages real-time data to maintain an uninterrupted view of the compliance status. This approach not only provides a more nuanced and current view of compliance but also integrates seamlessly with risk-based quality requirements and risk management strategies.
5.4.3 Stakeholder trust

Trust is a crucial element for any organization's success, especially in today's rapidly changing landscape. The CABCA methodology places a high value on transparency and open communication with stakeholders, which in turn fosters trust. As detailed in clause 5.1, CABCA achieves this by establishing foundational trust in the audit method itself when stakeholders review and approve the Operationalization Specification. This ensures the "how" of the assessment is agreed upon before it begins.

Consistent and continuous auditing and reporting (as described in clause 9.1) keep stakeholders updated, enhancing awareness and understanding of the organization's compliance activities and status. This continuous flow of reliable information boosts confidence among stakeholders, allowing them to make well-informed decisions. This principle also incorporates the benefits of a greater focus on stakeholder communication, ensuring that all parties involved have a clear understanding of the organization's commitment to maintaining ongoing compliance with relevant standards.
5.4.4 Adaptability

CABCA is designed for adaptability, enabling it to keep pace with changes in AI systems, emerging risks, and the external regulatory landscape. This principle is crucial for dynamically evolving systems such as machine learning, where static auditing methods fall short. It is put into practice through the Operationalization process, which allows organizations to redefine the audit's scope and metrics in response to technological advancements or new regulations. This approach supports continuous improvement, ensuring that compliance processes remain effective over time.
6 Description of the CABCA Process Execution
6.1 General

The execution of a CABCA cycle involves several distinct phases, driven by specific triggers and resulting in a documented conformity status. The overall process flow, including variations based on the assessment mode, is visualised in Figure 6.1-1.

An assessment cycle can be considered a complete update of the conformity status, including measurement, evidence collection, and reporting. The process assumes that the initial Operationalization (A1) has been completed. As a result of this phase, the Operationalization Specification shall be available to configure the subsequent assessment steps.

Figure 6.1-1: CABCA process flow diagram

Each Assessment Cycle shall be initiated by one or more defined Assessment Cycle Triggers. These triggers are classified into two main categories: proactive triggers, which are planned or scheduled (e.g. time-based reviews, system updates), and reactive triggers, which are event-driven and respond to emergent risks (e.g. significant data drift, performance degradation, or detected security anomalies). Regardless of the specific conformity assessment path chosen, this trigger shall lead into the "Common CABCA Execution Steps", which use the Operationalization Specification as their configuration:

• A2: Continuous Evidence Gathering & Measurement: This is an automated, ongoing process that collects data from system artifacts (e.g. log files, model parameters, data samples) using programmed measurements. The output of this step shall be the raw Measurement Results, which serve as the primary input for the next step.

• A3: Automated Analysis & Findings Mapping: This step shall take the Measurement Results from A2 as input. These results are automatically compared against the pre-defined thresholds and requirements documented in the Operationalization Specification (A1). The outcome of this step shall be the Analysed Findings, an artifact that identifies any deviations or non-conformities.

• A4: Continuous Reporting & Status Updates: This step shall take the Analysed Findings from A3 as input and consolidate them into a structured, final output.

The common execution steps shall culminate in the generation of an Assessment Report and its associated Evidence, which consists of the raw data and artifacts (e.g. log files, test results, metric values) that substantiate the findings in the Assessment Report. This artifact shall serve as the input for the subsequent assessment path chosen in clause 6.2. The cycle concludes at the Assessment Cycle End. The overall methodology supports A5: Iterative Follow-up & Monitoring, leveraging the continuous evidence to potentially trigger subsequent assessment cycles.
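The A2-to-A4 data flow above can be sketched in a few lines. The sketch below is illustrative only: the `SPEC` table stands in for a real Operationalization Specification, and the metric names and thresholds are assumptions, not values from this document.

```python
# Hypothetical excerpt of an Operationalization Specification (A1):
# metric name -> (threshold, comparator against that threshold).
SPEC = {
    "accuracy":       (0.95,  lambda value, t: value >= t),
    "max_latency_ms": (200.0, lambda value, t: value <= t),
}

def analyse(measurements):
    """A3: map raw Measurement Results to Analysed Findings by comparing
    each value against the thresholds in the specification."""
    return {
        name: {"value": value, "conform": SPEC[name][1](value, SPEC[name][0])}
        for name, value in measurements.items()
    }

def report(findings):
    """A4: consolidate Analysed Findings into a structured Assessment
    Report, with an overall status and per-metric detail as evidence."""
    return {
        "overall_conform": all(f["conform"] for f in findings.values()),
        "findings": findings,
    }
```

A2 itself is environment-specific (log scraping, model probes, data sampling) and is represented here only by the `measurements` dictionary passed into `analyse`.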
6.2 Process Variations Based on CABCA Modes
6.2.1 Self-Assessment Path

In the Self-Assessment Path, the AI system provider acts as the auditing party to internally validate conformity. The primary input for this path shall be the Assessment Report. As shown in Figure 6.2-1, the AI system provider shall directly proceed to "Provider Reviews Results/Report". Based on this internal review against the established requirements, the provider shall then move to "Provider Records Conformity Status". The outcome of this path shall be the "Self-Assessment Status Recorded", an artifact that documents the system's compliance and can serve as an input for the Certification Path.

Figure 6.2-1: Self-Assessment Path
6.2.2 Third-Party Assessment Path

The Third-Party Assessment Path involves an independent external body and uses the Assessment Report as its primary input. The process, shown in Figure 6.2-2, begins with the step "Provider Makes Results/Report Available to Third-Party", which the AI system provider shall execute. The external body shall then proceed to "Third-Party Accesses Results/Report for Independent Assessment". The subsequent steps of the third-party assessment, including the determination of conformity, may occur externally. CABCA provides the structured Assessment Report and its underlying Evidence to facilitate this external evaluation. To achieve a fully automated assessment, the provider can expose the assessment results and evidence through a secure API, allowing the third party's systems to programmatically and continuously ingest the data and update the assessment status automatically. The direct output of the CABCA process in this path shall be the "Third-Party Assessment Status Available".

Figure 6.2-2: Third-Party Assessment Path
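The automated ingestion described above can be sketched from the third party's side. The specification does not define the API, so transport is abstracted here behind a `fetch_report` callable (in practice an authenticated HTTPS request); the payload fields (`system_id`, `overall_conform`, `findings`) are assumptions about what the provider's endpoint returns.

```python
import json

def ingest_assessment(fetch_report, update_status):
    """Third-party side of the automated path: pull the provider's latest
    Assessment Report over the secure interface (abstracted as
    `fetch_report`) and update the independent assessment status."""
    payload = json.loads(fetch_report())  # JSON body from the provider's API
    status = "conform" if payload["overall_conform"] else "non-conform"
    update_status(payload["system_id"], status, payload["findings"])
    return status
```

Run periodically (or on provider-side webhooks), this loop is what turns a one-off external review into a continuously updated third-party assessment status.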
6.2.3 Certification Path

CABCA facilitates the optional Certification Path by providing the continuous stream of evidence generated through either the Self-Assessment or Third-Party Assessment modes. As detailed in Figure 6.2-3, the final Assessment Report from one of these preceding paths shall serve as the auditable proof of ongoing compliance necessary for obtaining and maintaining certification. This report, which includes a formal conformity status designated as either "Self-Assessment Status Recorded" or "Third-Party Assessment Status Available", is the primary input for this path. A "Certification Body Audits" this report and its supporting evidence to conduct its own assessment.

By providing programmatic access to the continuous stream of evidence, this path can be highly automated, enabling a "living certificate" where the Certification Body's systems continuously monitor compliance, potentially triggering automated updates, renewals, or revocations based on real-time data. This automated process is governed by a clear division of responsibilities: the AI Provider is liable for the continuous provision and integrity of the evidence via a secure interface, while the Certification Body is liable for the impartial and correct execution of the automated assessment and decision logic as codified in the certification scheme. The process culminates in a formal "Certification Outcome", which may be the issuance of a certificate or mark, or its revocation.

Figure 6.2-3: Certification Path
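The "living certificate" decision logic could, under one possible certification scheme, look like the sketch below. The three states, and in particular the `GRACE` period after which a stalled evidence stream suspends rather than revokes the certificate, are assumptions of this illustration, not rules from this specification; a real scheme codifies its own states and timings.

```python
from datetime import datetime, timedelta, timezone

# Assumed scheme parameter: how long the evidence stream may be silent
# before the certificate is suspended rather than kept valid.
GRACE = timedelta(days=7)

def certificate_state(overall_conform, last_evidence_at, now=None):
    """Decide the living-certificate state from the evidence stream:
    'valid' while conform, 'revoked' on non-conformity, 'suspended'
    when the evidence has gone stale."""
    now = now or datetime.now(timezone.utc)
    if now - last_evidence_at > GRACE:
        return "suspended"   # interrupted provision of evidence
    return "valid" if overall_conform else "revoked"
```

Separating "stale evidence" from "non-conform evidence" mirrors the division of responsibilities in the text: the first is a provider-side failure to supply evidence, the second a finding of the Certification Body's decision logic.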
6.3 Roles in Process Execution
6.3.1 General

The CABCA process execution shall involve distinct roles with specific responsibilities as defined in clauses 6.3.2 to 6.3.4. The allocation of these roles shall be dependent on the CABCA mode (Self-assessment, Third-party assessment, or Certification support) being implemented.
6.3.2 Auditee

The entity acting as the Auditee, typically the AI System Provider, shall be responsible for the AI system undergoing CABCA.

Core Responsibilities (Applicable in all Modes):

a) The Auditee shall define the scope of the CABCA implementation, including the specific AI system(s) and the applicable conformity specifications (Scoping).

b) The Auditee shall lead and document the Operationalization phase (as defined in clause 8), which includes translating conformity specifications into measurable metrics, defining measurements, and establishing assessment criteria (Operationalization).

c) The Auditee shall implement and maintain the technical infrastructure necessary to execute measurements and checks as defined during Operationalization.

d) The Auditee shall ensure that system artifacts required for measurements are generated and made available to the measurement processes (Operationalization).

e) The Auditee shall oversee the automated execution of the continuous assessment cycles, including the "Execute Measurements & Checks" and "Generate Assessment Results / Evidence Report" steps (Continuous Assessment).

f) The Auditee shall perform internal reviews of assessment results and evidence reports (Continuous Assessment).

g) The Auditee shall implement and manage a review and update procedure to refine the CABCA process, operationalization, and measurement setup based on assessment outcomes and changes in conformity specifications or the AI system (Scoping and Operationalization).

h) The Auditee shall ensure the continuous operation of its CABCA loop to maintain ongoing conformity (Continuous Assessment).

Mode-Specific Responsibilities:

a) In Self-Assessment mode, the Auditee shall record the conformity status of the AI system based on internal reviews.

b) In the context of Third-Party Assessment or Certification, the Auditee shall make the assessment results and evidence reports available to the designated Auditing Party or Certification Body.
6.3.3 Auditing Party

The entity acting as the Auditing Party shall be responsible for conducting the assessment of the AI system's conformity. The Auditing Party shall be the Auditee itself in Self-Assessment mode, or an independent external body in Third-Party Assessment or Certification modes:

a) The Auditing Party shall conduct assessments based on the evidence provided by the Auditee and according to the rules and criteria defined in the operationalization documentation and conformity specifications (Continuous Assessment).

b) The Auditing Party shall, where applicable (e.g. third-party mode), verify or formally accept the scope and operationalization defined by the Auditee (Scoping and Operationalization).

c) The Auditing Party shall participate in or inform the review and update procedure by providing feedback on the assessment process and findings (Operationalization and Continuous Assessment).

d) The Auditing Party shall maintain independence and objectivity in its assessment activities, consistent with the principles outlined in clause 5.3.4.

Mode-Specific Responsibilities:

a) In Self-Assessment mode, the Auditee, acting as the Auditing Party, shall review assessment results/reports and record the conformity status.

b) In Third-Party Assessment mode, the external Auditing Party shall access the results/reports provided by the Auditee and conduct an independent assessment to determine conformity.

c) For Certification, the Auditing Party shall be an accredited Certification Body, which shall conduct its assessment based on the evidence and operationalization documentation provided by the Auditee.
6.3.4 Entity for Attestation of Conformity

The entity responsible for attesting to the conformity status of the AI system shall provide the official conformity statement to stakeholders:

a) The Entity for Attestation of Conformity shall communicate the confirmed conformity status to relevant stakeholders.

b) In Self-Assessment mode, the Auditee shall act as the Entity for Attestation of Conformity, based on its recorded conformity status.

c) In Third-Party Assessment mode, the external Auditing Party shall typically provide the attestation of conformity (e.g. an audit opinion or report).

d) In Certification mode, the accredited Certification Body, acting as the Auditing Party, shall be the Entity for Attestation of Conformity, issuing a formal certificate if conformity is determined.

e) The attestation provided shall be based on the outcomes of the CABCA process, as recorded by the Auditee (in self-assessment) or determined by the external Auditing Party/Certification Body.
7 Scoping (Identification of Conformity Specifications)
7.1 General

Before the CABCA methodology can be operationalized, an organization should conduct a foundational Scoping process. This process involves the systematic identification, selection, and compilation of all relevant requirements that apply to a specific AI system. This activity is a fundamental practice in any robust compliance or quality management framework and is not unique to CABCA. It is the standard process of determining the applicable legal, technical, and ethical landscape for a product or system.

The primary goal of the Scoping phase is to produce a definitive and auditable Conformity Specification. This document consolidates the applicable requirements from various sources, such as legislation, harmonised standards, industry best practices, and internal policies. The resulting Conformity Specification serves as the authoritative and formal input for the subsequent Operationalization phase (see clause 8), where these high-level requirements are translated into a concrete, measurable, and continuously auditable framework.

7.2 Sources and Integration Criteria for Conformity Specifications
7.2.1 Sources of Conformity Specifications

The Scoping process relies on identifying and selecting appropriate conformity specifications from various sources. This clause first identifies the diverse sources from which these specifications can originate and then details the essential criteria a specification should meet to be selected for inclusion in the final Conformity Specification (clause 7.2.2).

Conformity specifications, identified during the Scoping phase, are foundational documents that consolidate the requirements an AI system should meet. These can be international standards from ISO, IEC and ITU, European documents from ETSI, CEN and CENELEC, as well as other authoritative sources that outline the conditions or guidelines that shall be met. The sources of these specifications can vary widely, depending on factors such as the industry, market standards, and specific organizational needs.

In cases where requirements from these diverse sources overlap or conflict, they shall be harmonized using a defined hierarchy of authority, where binding legal obligations prevail over harmonised standards, which in turn prevail over industry best practices and internal policies. This ensures that a single, coherent, and non-contradictory set of requirements is consolidated into the final Conformity Specification for operationalization.

Examples of sources include:

• Legislative Requirements: Federal, state, or local laws that mandate certain types of conformity, such as the legislative requirements within the European AI Act and the corresponding standardization request.
• International Standards: Standards that often serve as a framework for conformity in various sectors.
• (Harmonised) European Standards.
• National Standards: Developed by national organizations such as NIST in the United States.
• Industry Guidelines: Established best practices within specific industries.
• Internal Policies: Guidelines set within an organization.
• Quality Assurance Protocols: These could be part of internal governance or subject to external audits.
• Product Certifications: Such as UL, CE, or Common Criteria labels that certify a product meets certain standards.
• Military Standards (MIL-STD): These are relevant to military applications.
• Healthcare Regulations: Such as HIPAA in the United States.
• Financial Regulations: Sarbanes-Oxley for corporate governance, GDPR for data protection, and so on.
• Vendor Agreements: Requirements from vendors that organizations shall adhere to.
• Ethical Guidelines: Ethical principles and frameworks, such as fairness, transparency, and accountability, that apply specifically to AI development and deployment.
• Third-Party Audits: These may also dictate conformity requirements.
• Consumer Protection Laws: Regulations to ensure AI-driven consumer products meet safety and reliability standards, as well as disclosure requirements around data collection and usage.

Among these sources, harmonised standards (HS) are becoming particularly significant, especially driven by legislation such as the European AI Act [i.9]. Developed by the European Standardization Organizations (ESOs: CEN, CENELEC and ETSI) on the basis of formal requests (mandates) from the European Commission, HS serve a critical function: they translate the high-level, legally binding requirements of legislation (such as the essential requirements in the AI Act) into detailed, technical specifications.

The key value of HS for frameworks like CABCA lies in their technical granularity and actionability. Unlike the broader legal text, HS provide concrete engineering details, potentially specifying:

• Precise test methods and procedures.
• Quantitative metrics and performance thresholds (e.g. for accuracy, robustness, bias detection).
• Specific requirements for data governance (e.g. dataset characteristics, provenance checks).
• Detailed documentation structures and content requirements.
• References to underlying international standards (e.g. from ISO/IEC).

This level of detail directly addresses CABCA's need for precision and clarity. While CABCA is designed to be adaptable, its effectiveness in enabling continuous auditing against clear, measurable benchmarks is significantly enhanced when operating with the precise, actionable requirements defined in harmonised standards.
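The hierarchy-of-authority harmonization described above (binding law over harmonised standards over industry practice over internal policy) can be sketched as a simple resolution pass. The source-category names, the `topic` grouping key, and the example requirements below are illustrative assumptions, not part of this specification.

```python
# Lower rank = higher authority, per the assumed hierarchy:
# binding law > harmonised standards > industry practice > internal policy.
PRECEDENCE = {"legal": 0, "harmonised_standard": 1, "industry": 2, "internal": 3}

def harmonise(requirements):
    """Resolve overlapping requirements: for each topic, keep the
    requirement from the most authoritative source so that the final
    Conformity Specification is coherent and non-contradictory."""
    resolved = {}
    for req in requirements:  # each req: {"topic", "source", "text"}
        current = resolved.get(req["topic"])
        if current is None or PRECEDENCE[req["source"]] < PRECEDENCE[current["source"]]:
            resolved[req["topic"]] = req
    return resolved
```

In practice conflict detection is rarely a pure key match on a topic string; the sketch only shows the precedence rule, not the requirement-comparison step.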
7.2.2 Criteria for Integrating Conformity Specifications in CABCA

For a source document to be included in the final Conformity Specification and be effectively used by the CABCA framework, it should meet the following minimum criteria:

• Operationalizability: The conformity specification, which represents the quality objective to be achieved (for example, an essential requirement from a HEN), shall be translatable into practical steps for risk management, quality assurance, and other operational activities, depending on the quality goals to be achieved.
• Coverage of key areas: At a minimum, the conformity specifications shall cover the essential aspects (e.g. essential requirements in European legislation) regarding the quality goals relevant to the system or industry under CABCA audit.
• Credibility: The conformity specification depicting the quality objectives to be achieved shall have a recognized degree of authority within its respective industry.
• Clear licensing and usage policies: There shall be explicit terms defining how the specification can be used.
• Capability to provide metrics: The conformity specification shall either directly provide measurable parameters or be interpretable in a way that allows such metrics to be derived.
• Auditability: The conformity specification shall contain elements that are auditable on a continuous basis, aligning with CABCA's methodology.

8 Operationalization (Planning & Risk Assessment, Automation Setup)
8.1 Process of Operationalization
8.1.1 General

Operationalization is the cornerstone of the CABCA methodology. It is the process that takes the "Conformity Specification", as defined in the Scoping phase (see clause 7), and transforms its abstract, high-level requirements into concrete, measurable, and auditable requirements and metrics for a specific AI system and its data. This process is also informed by the results of prior assessments to allow for iterative refinement.

This process ensures that regulatory, ethical, and quality-related intentions are preserved and implemented throughout the AI system's lifecycle. It begins with aligning the operationalization process with the requirements of the chosen conformity specifications. This alignment ensures that the specifications are operationalizable, credible, comprehensive, and applicable to the AI system. It establishes a foundation that respects both the intention of the specifications and the functional capabilities of the AI system. For example, the EU AI Act applies mostly to high-risk AI systems and General-Purpose AI systems, emphasizing the importance of the AI system's intended use.

Figure 8.1-1 illustrates the operationalization process, which consists of two interconnected flows. The forward Operationalization arrow shows the progression through four key stages: from Dimensions and Risks to Requirements & Metrics and finally Measurement. The blue boxes at the bottom specify the context and inputs for each corresponding stage, starting with Conformity Specifications and ending with the specific AI implementation. The reverse "Assessment" arrow indicates a feedback flow, where results from the measurement phase are used to evaluate the choices made in the preceding stages.
Figure 8.1-1: The Operationalization Process in CABCA

The following clauses present a stepwise breakdown of this transformation process, guided by specific goals and justified by their role in effective auditing and continuous conformity assessment.
8.1.2 Identification of Quality Dimensions
8.1.2.1 Requirements for Defining Quality Dimensions

Quality dimensions are the foundation of the operationalization process. They define the key aspects of an AI system under which specific requirements should be evaluated to ensure alignment with conformity specifications. These dimensions reflect societal, regulatory, and technical expectations and serve as entry points for further risk analysis and metric definition:
• The operationalization process shall identify quality dimensions relevant to the selected conformity specifications.
• These dimensions shall include, but are not limited to, avoidance of unwanted bias, accuracy, robustness, and cybersecurity.
• Identified quality dimensions shall act as high-level categories for auditing and evaluating system performance and conformity.
8.1.2.2 Criteria for Defining Quality Dimensions

The definition of quality dimensions shall adhere to the following criteria:
• Granularity: Quality dimensions shall be defined in a way that supports decomposition into manageable subcomponents for targeted assessment.
• Suitability: Quality dimensions shall be adaptable to evolving technologies, application contexts, or business scenarios, as permitted by the conformity specification.
• Coverage & traceability: Dimensions shall cover the relevant aspects required by the selected conformity specifications and enable traceability to those sources.
• Operationalizability: Dimensions shall be definable in terms that allow derivation of measurable requirements and metrics.
8.1.3 Identification of Risks
8.1.3.1 Requirements for Defining Risks

Risks linked to each quality dimension need to be documented to contextualize potential harms or non-conformities. This step ensures that all assessments are risk-informed and traceable to their underlying rationale in the conformity specification:
• For each identified quality dimension, the operationalization process shall identify and document associated risks by referencing the conformity specifications.
• The documented risks shall form the basis for developing risk mitigation strategies.
• Each identified risk shall be addressed by one or more defined requirements or mitigation actions.
8.1.3.2 Criteria for Identifying Risks

The identification and documentation of risks shall adhere to the following criteria:
• Mitigatability: Risks shall be evaluated based on the ability to implement risk mitigation strategies. Alternatively, if a risk is accepted, it is not part of the assessment.
• Risk traceability: Each mitigation action shall be demonstrably traceable to its originating risk and source specification.
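The risk documentation described above can be sketched as a small data model. This is an illustrative example only: the class and field names (`Risk`, `quality_dimension`, `source_clause`, `mitigations`) are assumptions, not normative element names from the present document. It shows how the acceptance of a risk and its traceability to a quality dimension, a source clause, and at least one mitigation can be checked programmatically.

```python
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    identifier: str
    description: str

@dataclass
class Risk:
    identifier: str
    quality_dimension: str      # e.g. "accuracy", "robustness"
    source_clause: str          # reference into the conformity specification
    accepted: bool = False      # accepted risks are excluded from the assessment
    mitigations: list = field(default_factory=list)

def risks_for_assessment(risks):
    """Return the risks that enter the assessment: not accepted and
    demonstrably linked to a source clause and at least one mitigation."""
    return [r for r in risks
            if not r.accepted and r.mitigations and r.source_clause]

# Hypothetical entries for illustration:
bias = Risk("R-01", "avoidance of unwanted bias", "EU AI Act Art. 10",
            mitigations=[Mitigation("M-01", "balanced training data")])
latency = Risk("R-02", "robustness", "internal SLA", accepted=True)
print([r.identifier for r in risks_for_assessment([bias, latency])])  # ['R-01']
```

A check like `risks_for_assessment` operationalizes the mitigatability criterion: an accepted risk simply drops out of the set of items that generate requirements.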
8.1.4 Deriving Measurable Compliance Requirements and Metrics

8.1.4.1 Requirements for Deriving Measurable Compliance Requirements and Metrics

To assess risks effectively, concrete and measurable compliance requirements should be derived. These requirements should be directly traceable to risks and quality dimensions, and measurable using well-defined metrics. This formalization ensures that assessments are actionable and reproducible:
• The operationalization process shall define specific, measurable, and auditable requirements for the AI system under assessment, derived from the identified risks and their corresponding mitigation strategies:
 - Each requirement shall be traceable to an identified risk and its associated quality dimension.
 - Each requirement shall be directly linked to the initial conformity specification.
• For each requirement, the operationalization process shall establish one or more corresponding metrics:
 - Each metric shall be a concrete, measurable parameter that reflects the AI system's adherence to the specific requirement.
8.1.4.2 Criteria for Deriving Measurable Compliance Requirements

The derivation of measurable compliance requirements shall adhere to the following criteria:
• Relevance: Requirements shall be directly relevant to the AI system's intended purpose and application context.
• Clarity: Requirements shall be defined in a clear and unambiguous manner to prevent misinterpretation.
• Measurability: Each requirement shall be linked to one or more metrics that enable its quantitative evaluation.
• Traceability: Each requirement shall be traceable back to its originating risk and source specification.
• Frequency: The evaluation frequency for each requirement shall be explicitly specified.
• Threshold justification: For metrics with pass/fail or target thresholds, the acceptance threshold shall be justified by intended use and deployment context and referenced to the applicable risk classification.
8.1.4.3 Criteria for Deriving Metrics

The derivation of metrics shall adhere to the following criteria:
• Precision: Metrics shall be defined with sufficient clarity and precision to ensure unambiguous interpretation.
• Relevance: Each metric shall be directly and demonstrably related to the requirement and quality dimension being measured.
• Unit specification: A specific unit of measurement shall be defined for each metric's value.
• Measurability: Each metric shall be capable of being reliably and practically measured through a defined process.
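The requirement-to-metric linkage and the criteria above (unit specification, measurability, traceability, frequency, threshold justification) can be illustrated with a minimal sketch. Field names such as `risk_id`, `unit`, `threshold`, and `frequency` are assumptions for illustration, not element names defined by the present document.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    unit: str                      # unit specification criterion
    threshold: float               # acceptance threshold (to be justified)
    higher_is_better: bool = True  # direction of the comparison

@dataclass
class Requirement:
    identifier: str
    risk_id: str      # traceability to the originating risk
    frequency: str    # e.g. "on-deployment", "daily"
    metrics: list     # one or more metrics enabling quantitative evaluation

def is_satisfied(metric, value):
    """Compare a measured value against the metric's acceptance threshold."""
    if metric.higher_is_better:
        return value >= metric.threshold
    return value <= metric.threshold

# Hypothetical example: an accuracy requirement traced to risk R-01.
acc = Metric("top1_accuracy", unit="ratio", threshold=0.95)
req = Requirement("REQ-ACC-01", risk_id="R-01",
                  frequency="on-deployment", metrics=[acc])
print(is_satisfied(acc, 0.97))  # True
```

Because every `Requirement` carries a `risk_id` and every `Metric` carries a unit and threshold, an automated assessment engine can both evaluate the metric and reconstruct the traceability chain demanded above.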
8.1.5 Implementing Measurements
8.1.5.1 Requirements for Implementing Measurements

To enable automated and continuous assessment, concrete measurement methods should be defined for each metric. These methods serve as the procedural and technical basis for conformity checks and allow repeatable evaluations across time and system versions:
• For each defined metric, the operationalization process shall define and document the method(s) for its measurement:
 - The documentation for each measurement method shall specify the techniques, tools, or procedures to be used.
 - The defined measurement methods shall collectively establish the framework for the continuous, automated assessment of compliance.
8.1.5.2 Criteria for Implementing Measurements

The implementation of measurements shall adhere to the following criteria:
• Automation: Measurements shall be implemented with a high degree of tool-based automation to ensure repeatability and efficiency.
• Output unit consistency: The output unit of a measurement shall be consistent with the unit defined for the corresponding metric.
• Automated result collection: The collection of measurement results shall be automated to enable continuous auditing.
• Evidence retention: The retention period for measurement results, which serve as evidence, shall be explicitly specified.
• Measurement uncertainty & repeatability: Known sources of uncertainty (including non-determinism in tools and the AI system) shall be identified; variability should be quantified or otherwise documented (e.g. standard deviation, confidence intervals); repeatability considerations shall be described.
• Tool accountability: If the testing/evaluation tool contains AI components, the uncertainty assessment shall explicitly cover those components.
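A measurement wrapper that addresses the automation, unit-consistency, and uncertainty criteria above can be sketched as follows. This is an illustrative assumption, not a normative implementation: the measurement is repeated to expose non-determinism, and the result record keeps the unit plus a simple variability estimate alongside the value.

```python
import statistics

def run_measurement(measure_fn, runs=5):
    """Execute a (possibly non-deterministic) measurement several times and
    return value, unit, and standard deviation for the evidence record."""
    values = [measure_fn() for _ in range(runs)]
    return {
        "value": statistics.mean(values),
        "unit": "ratio",                      # must match the metric's unit
        "std_dev": statistics.stdev(values),  # documented uncertainty
        "runs": runs,                         # repeatability information
    }

# Deterministic stand-in for a real accuracy evaluation tool:
result = run_measurement(lambda: 0.96)
print(result["value"], result["std_dev"])  # 0.96 0.0
```

In a real setup, `measure_fn` would invoke the test suite or evaluation tool; keeping the standard deviation and run count in the same record as the value means the evidence already documents its own repeatability.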
8.2 The Operationalization Specification
8.2.1 General

To address both human and machine processing needs, the Operationalization Specification is captured as a single, structured, machine-readable specification. From this source of truth, human-readable documentation is rendered to inform stakeholders, while the specification itself is used directly to configure automated assessments. Both forms inherently adhere to common principles such as traceability, ensuring consistency between the documented rationale and its automated assessments.

Given the complexity of multi-vendor AI systems, where different components such as data sources, foundation models, and hosting might be managed by different entities, this specification also supports nested operationalization. This mechanism allows the final system provider's assessment to programmatically incorporate the conformity status of a third-party component. It is achieved by defining a measurement that queries the component supplier's assessment results via a machine-readable Assessment Interface. This structure creates a recursive "chain of assurance" and ensures clarity in roles and responsibilities across all levels of system operation, as each entity attests to the conformity of its own contribution.
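The nested "chain of assurance" can be sketched as a measurement whose implementation queries a component supplier's machine-readable Assessment Interface. The interface shape and field names below are assumptions for illustration; a real interface would typically be an authenticated network API rather than a local callback.

```python
import json

def query_supplier_assessment(fetch):
    """fetch() returns the supplier's latest assessment result as JSON;
    this nested measurement reduces it to a conformity verdict that the
    final system provider's assessment can incorporate."""
    doc = json.loads(fetch())
    return doc.get("conformity_status") == "compliant"

# Stand-in for a call to the supplier's Assessment Interface:
supplier_response = json.dumps({
    "component": "foundation-model-x",          # hypothetical component name
    "conformity_status": "compliant",
    "assessed_at": "2026-01-15T00:00:00Z",
})
print(query_supplier_assessment(lambda: supplier_response))  # True
```

Because the supplier's own assessment may in turn query its sub-suppliers the same way, the structure is naturally recursive: each entity attests only to its own contribution, and the verdicts compose upward.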
8.2.2 Requirements for the Operationalization Specification

The operationalization specification shall adhere to a defined structure that translates high-level Conformity Specifications into specific documentation items. This structure shall facilitate the creation of actionable, machine-readable records for continuous assessment and auditing by meeting the following requirements:
a) The specification shall be based on a formal documentation model that defines the required information elements for all documentation items, including but not limited to, Conformity Specifications, Dimensions, Requirements, Metrics, and Measurements. A core set of common information elements shall be identified for all items, which may be supplemented by specific elements where necessary.
b) The specification shall ensure that each documentation item is systematically described with the necessary information elements to enable automated evaluation and consistently capture all necessary descriptive and operational details.
c) The specification shall be structured to support machine readability and automation, enabling it to function directly as a configuration file for automated assessment engines and monitoring tools. This shall be achieved through the use of standardized information elements (such as Name, Assessor, AssessmentInterface, Frequency) to allow for automated parsing, integration with assessment tools, and the execution of continuous monitoring cycles.
d) The specification shall capture essential metadata required for robust auditability and traceability. Information elements (such as Name, Comment, ConfidentialityFlag, and Assessor) shall be used to allow auditors to verify the assessment process, responsibilities, evidence trails, and rationale. The documentation shall also support a hierarchical structure of items, linked by information elements (such as names), to ensure unambiguous linkage between them.
e) The specification shall ensure that the consistent capture of information facilitates stakeholder understanding, comparison, and review, aligning with recognized quality aspects of documentation.
f) The documentation structure shall support visualization using an Entity-Relationship (ER) diagram or an equivalent visual technique, as exemplified in Figure 8.2-1.
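A minimal, assumed shape for such a machine-readable specification acting as a configuration file is sketched below. The information element names (Name, Metric, Assessor, Frequency, AssessmentInterface) follow the examples given in the requirements above; the overall JSON layout and the URL are illustrative, not normative.

```python
import json

spec_json = """
{
  "Measurements": [
    {"Name": "measure_top1_accuracy",
     "Metric": "top1_accuracy",
     "Assessor": "internal-qa",
     "Frequency": "on-deployment",
     "AssessmentInterface": "https://example.invalid/assess"}
  ]
}
"""

# An assessment engine can parse the specification and schedule directly
# from it, which is what makes the document double as a configuration file.
spec = json.loads(spec_json)
for m in spec["Measurements"]:
    print(m["Name"], "->", m["Frequency"])
```

The same parsed structure serves both consumers named in clause 8.2.1: monitoring tools read it for scheduling, while a documentation renderer can walk it to produce the human-readable view.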
8.2.3 Criteria for the Operationalization Specification

The structuring of the Operationalization Specification shall adhere to the following criteria, which ensure the overall quality and utility of the documentation:
a) Transparency: The specification shall clearly outline the process of translating Conformity Specifications into operational steps, thus eliminating ambiguity. This includes clearly articulating the operationalization methods used, providing precise interpretations of Conformity Specifications to reduce regulatory uncertainties, and clearly stating quality requirements, expectations, and the scope of each assessment.
b) Traceability: The specification shall allow each metric, requirement, and decision to be traced back to the original Conformity Specifications and their corresponding quality dimensions.
c) Configurability: The specification shall enable a straightforward translation into automated assessments by being structured to act as a configuration file.
d) Nested documentation capability: The specification shall ensure that operationalization documentation can be nested and can also nest documentation from other vendors, providing a comprehensive and integrated view of multi-vendor AI system components.

To visually represent how these criteria are met, the structure of the Operationalization Specification is depicted in Figure 8.2-1. This Entity-Relationship (ER) diagram illustrates the relationships between the core components, showing how high-level Conformity Specifications are linked down through Quality Dimensions, Requirements, and Metrics to the specific Measurements, thereby ensuring end-to-end traceability.

Figure 8.2-1: ER Diagram of Operationalization Specification Structure
9 Continuous Assessment Process
9.1 Continuous Evidence Gathering & Measurement

CABCA is designed to provide an ongoing assessment of AI systems to ensure they meet conformity specifications and quality goals, and manage risks effectively. This clause explains the steps involved in the CABCA assessment, focusing on how the operationalization documentation serves as a configuration file for the assessment.

Initial setup: Using operationalization documentation
Before initiating the assessment process, the operationalization documentation, which lays down dimensions, requirements, metrics, and measurements, is used to set up the continuous assessment. This documentation enables automatic interpretation of audit criteria, metrics, and the corresponding measurements.

Frequency of assessments
The frequency for each assessment varies based on individual needs and is specified for each requirement within the configuration file. The assessment process is invoked every time a new model is deployed. During the operation phase, the frequencies for assessment are explicitly defined within the operationalization documentation.

1) Measurements based on artifacts of ongoing AI operation: Artifacts such as log files, model weights, and data samples are produced or utilized throughout the ML lifecycle and serve as inputs for these measurements. Some artifacts yield results directly through parsing, while others require comprehensive test suites (e.g. as described in ETSI TR 103 910 [i.6]) to obtain accurate data (see Figure 9.1-1).

2) Measurement: The artifacts are analyzed and processed to produce metric values, which are then prepared for transmission to the auditing entity. This enables real-time or frequency-based evaluations, triggered either after specific events (e.g. model deployment) or at predefined intervals tied to individual quality requirements. This corresponds to the "Measurement" phase in the diagram (see Figure 9.1-1).
3) Assessment based on measurement results – Mapping to Requirements and Conformity Specifications: The received measurement data are automatically evaluated, using the operationalization documentation as a configuration file, against predefined metric thresholds that reflect the AI system's quality goals. This automation makes the evaluation both rapid and fully consistent with the conformity specifications (see Figure 9.1-1).

4) Report generation: The resulting report shows which quality goals are met by mapping each measurement result to the relevant parts of the Conformity Specifications and indicating the degree of satisfaction. It is shared with stakeholders at varying levels of detail, for example, a concise executive summary for leadership and a technical annex for engineers and auditors, to communicate the AI system's current compliance status. This aligns with the "Report Generation" phase in the diagram (see Figure 9.1-1).

Figure 9.1-1: Continuous Evidence Gathering and Measurement
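Steps 3 and 4 can be sketched as a small evaluation loop: measurement results are compared against the thresholds carried in the operationalization documentation and aggregated into a report. The function names, metric names, and report layout below are illustrative assumptions.

```python
def assess(measurements, thresholds):
    """Map each measurement result to pass/fail against its threshold
    (assuming, for simplicity, that higher values are better)."""
    return {name: value >= thresholds[name]
            for name, value in measurements.items()}

def report(results):
    """Aggregate pass/fail results into a stakeholder-facing summary."""
    met = sum(results.values())
    return {
        "summary": f"{met}/{len(results)} quality goals met",
        "details": results,   # per-requirement degree of satisfaction
        "conformity_status": "compliant" if met == len(results) else "non-compliant",
    }

# Hypothetical thresholds drawn from the operationalization documentation:
thresholds = {"top1_accuracy": 0.95, "robustness_score": 0.90}
measurements = {"top1_accuracy": 0.97, "robustness_score": 0.92}
print(report(assess(measurements, thresholds))["conformity_status"])  # compliant
```

The `details` field preserves the per-requirement verdicts, so the same report object can back both the executive summary and the technical annex mentioned above.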
9.2 Assessment Evidence
9.2.1 General

The continuous assessment process within CABCA shall utilize evidence derived from operational outputs, system states, and records:
a) Relevant operational outputs, system states, and records, along with their pertinent characteristics (information elements), shall be systematically captured and structured to constitute documentation items for assessment purposes.
b) The identification of these documentation items, and ensuring the availability of their underlying sources, shall be performed during the Operationalization phase (as defined in clause 8).
c) The documentation items, determined during planning based on the defined measurements, shall provide the necessary verifiable evidence for the continuous assessment process.
d) The underlying sources (e.g. log entries, model parameters) shall serve as inputs for measurements.
e) The structured record of these sources and their measurements shall constitute the verifiable documentation item used to evaluate conformity against specified requirements.
f) The operationalization documentation shall specify the documentation items to be used as evidence for each defined measurement.
g) For different stages of the AI system lifecycle, such as development/training and operation, specific documentation items shall be identified and utilized as evidence. These documentation items shall include, but are not limited to, those listed in clauses 9.2.2 and 9.2.3.
9.2.2 Sources of Evidence During Development and Training

Evidence for the development and training stages of an AI system shall include, as applicable:
a) Documentation of Log Files: This shall capture information elements related to system logs, application logs, and audit logs reflecting operational behaviour and interactions during development.
b) Documentation of Model Weights: This shall capture information elements detailing the parameters of machine learning models that are trained.
c) Documentation of Data Samples: This shall capture information elements describing instances of data used in model training, including input data, processed data, and output data.
d) Documentation of Code Repositories: This shall capture information elements related to the source code and scripts used in developing the AI system.
e) Documentation of Configuration Files: This shall capture information elements detailing settings and parameters for model training and development processes.
f) Documentation of Test Results: This shall capture information elements detailing the outcomes of tests, such as unit tests and integration tests.
g) Documentation of Hyperparameters: This shall capture information elements describing the set of parameters that govern the training process of machine learning models.
9.2.3 Sources of Evidence During Operation

Evidence for the operational stage of an AI system shall include, as applicable:
a) Documentation of Log Files: This shall capture information elements from system, application, and audit logs detailing operational behaviour in a live environment.
b) Documentation of Loaded Model Weights: This shall capture information elements detailing the parameters of the deployed machine learning models.
c) Documentation of Inference Data: This shall capture information elements describing data used or generated by the AI systems during operation, including user input, system outputs, and intermediate processing data.
d) Documentation of Performance Metrics: This shall capture information elements detailing data on system performance, including accuracy, latency, throughput, and error rates during operation.
e) Documentation of Security Certificates and Logs: This shall capture information elements related to encryption certificates, security logs, and breach reports.
f) Documentation of Incident Reports: This shall capture information elements documenting any operational incidents or anomalies.
g) Documentation of Change Logs: This shall capture information elements recording updates, bug fixes, and feature additions post-deployment.
h) Documentation of Configuration Files: This shall capture information elements detailing operational settings and parameters under which the AI systems function post-deployment.
9.3 Persistence of Assessment Results

The results of each CABCA assessment shall be stored persistently for future reference, quality tracking, and for proving compliance in audits. The scope of this persistence shall include:
• Measurement results: All metrics, along with their corresponding values, are systematically stored. This includes data on system performance, measures on avoidance of unwanted bias, accuracy, and other critical indicators essential for evaluating the system's alignment with specified standards.
• Evaluation outcomes: The outcomes of the comparisons between metrics and pre-defined quality goals are archived.
• Assessment reports: The final reports generated after each assessment are saved.

The retention of these artifacts is governed by a policy that aligns with industry standards, legal requirements, and organizational guidelines. This policy is periodically reviewed and updated to remain congruent with the evolving technological and regulatory landscape.

Audit requirements for persistence: The following audit requirements shall be considered:
• Searchability: Stored data is organized to facilitate easy and efficient retrieval. This includes advanced search capabilities to handle complex queries relating to the system's performance and compliance history.
• Security: Ensure that the stored data is encrypted and accessible only by authorized personnel.
• Traceability: Each stored item shall be traceable back to the specific assessment cycle and corresponding operationalization documentation.
• Automated archiving: An automated archiving system shall be in place to manage the lifecycle of stored data.
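Persistent, searchable storage with traceability to the assessment cycle can be sketched with a relational store. The schema and column names below are illustrative assumptions; an in-memory SQLite database stands in for durable, encrypted storage, which a real deployment would require.

```python
import sqlite3

# In-memory stand-in for durable, access-controlled storage:
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE results (
    cycle_id TEXT,      -- traceability to the assessment cycle
    metric TEXT,
    value REAL,
    outcome TEXT,       -- evaluation outcome (pass/fail)
    recorded_at TEXT)""")
con.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
            ("2026-01-A", "top1_accuracy", 0.97, "pass",
             "2026-01-15T10:00:00Z"))
con.commit()

# Searchability: complex queries over the compliance history, e.g. all
# passing results for a given metric.
rows = con.execute(
    "SELECT cycle_id, value FROM results WHERE metric = ? AND outcome = 'pass'",
    ("top1_accuracy",)).fetchall()
print(rows)  # [('2026-01-A', 0.97)]
```

Keeping `cycle_id` on every row is what makes each stored item traceable back to a specific assessment cycle and, through that cycle, to the operationalization documentation in force at the time.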
9.4 Updating Conformity Status
9.4.1 Conformity Status Updates

The outcome of quality assessments in CABCA is a critical component, involving a detailed evaluation of measurement results. These results shall be derived from various artifacts such as log files and model weights, and shall be assessed against predefined values. These values shall be based on expert knowledge and risk assessments integral to CABCA's methodology.

When measurement results align with the predefined values, a conformity status shall be issued, confirming the AI system's compliance with the required standards. Conversely, if the results deviate from these values, the conformity status shall be either revoked or adjusted. This highlights areas needing improvement and signals the necessity for enhanced risk management strategies.

An integral part of this process is the ongoing revision and updating of quality requirements. Changes in these requirements can significantly impact the assessment outcomes, reflecting CABCA's dynamic approach to conformity in rapidly evolving technological environments.
9.4.2 Transparency in Reporting and Updates

Transparency is a cornerstone of CABCA, especially in the documentation and communication of assessment outcomes to stakeholders. This process shall entail:
• Clear system identification: Including unambiguous references to the AI system name, type, and additional identification details for full traceability.
• Provider information: Documenting the name and address of the AI system provider or their authorized representative.
• Compliance statement: A direct statement indicating the AI system's conformity with applicable standards, regulations, and, where relevant, data protection requirements.
• Reference to standards: Mentioning any relevant harmonised standards or specifications that have been used to declare conformity.

This documentation guides the automated assessment process and enhances stakeholder understanding of the conformity measures, ensuring that every operationalization decision is communicated transparently.
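The transparency elements listed above can be collected into a single machine-readable record. The keys below mirror the bullet list but are illustrative assumptions, not normative element names; the example system and address are hypothetical (the provider name is borrowed from the annex's ACME use case).

```python
def conformity_declaration(system, provider, status, standards):
    """Assemble the transparency elements into one reportable record."""
    return {
        "system_identification": system,   # name, type, further identifiers
        "provider": provider,              # name and address
        "compliance_statement": status,
        "referenced_standards": standards, # harmonised standards used
    }

decl = conformity_declaration(
    system={"name": "support-assistant", "type": "NLP", "version": "1.2"},
    provider={"name": "ACME Support Solutions", "address": "Example Str. 1"},
    status="conforms to applicable requirements",
    standards=["ETSI TS 104 008"],
)
print(sorted(decl.keys()))
```

Emitting this record alongside each assessment report keeps the stakeholder-facing communication consistent with the automated evidence behind it.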
9.4.3 Effective Communication with Stakeholders

Communication in CABCA is designed to be clear, consistent, and tailored to the needs of various stakeholders. This involves defining specific communication channels and the scope of each, ensuring that stakeholders receive the most relevant and up-to-date information. The frequency and granularity of communication updates are carefully determined to keep stakeholders informed about the AI system's compliance status without overwhelming them with unnecessary details.
9.4.4 Building and Maintaining Stakeholder Trust

CABCA strengthens stakeholder trust by maintaining a steady flow of reliable information about the AI system's compliance status. This is achieved through the regular publication of Assessment Reports and updates on any changes in the conformity status. By ensuring open and clear communication and providing objective evidence of compliance through third-party audits when applicable, CABCA not only maintains but also bolsters stakeholder confidence in the organization's commitment to adhering to relevant standards and regulations regarding their AI systems.

10 Documentation of the CABCA Process and its Outcome
10.1 General

CABCA directly supports the documentation obligations under the EU Artificial Intelligence Act (AI Act) by enabling traceable, evidence-based assessments of AI systems against conformity requirements. It operationalizes continuous conformity monitoring and supports the production of living documentation that reflects the current compliance status, audit readiness, and risk posture of an AI application across its lifecycle. As such, CABCA is designed to address the technical documentation obligations under the EU AI Act [i.9]. It supports:
• Article 11 and Annex IV documentation obligations (technical documentation);
• Article 72 (post-market monitoring system);
• Article 9 (risk management system);
• Article 15 (accuracy, robustness, and cybersecurity);
• Article 17 (quality management system, when relevant).

The CABCA documentation shall follow the structure, principles, and stakeholder-oriented perspective defined in ETSI TR 104 119 [i.11]. As per that document, CABCA documentation shall be:
• Modular, separating concerns across assessment specification, execution, and results.
• Traceable, enabling linkage from high-level conformity requirements to low-level metrics and evidence.
• Stakeholder-sensitive, supporting differentiated documentation profiles.
• Lifecycle-integrated, covering static, dynamic, and continuous assessment across development and operational phases.
10.2 CABCA Documentation Items
10.2.1 General

In line with ETSI TR 104 119 [i.11], documentation items refer to the specific artifacts, workflows, or components of CABCA that require structured and traceable documentation to support transparency, accountability, and regulatory compliance. Importantly, a documentation item is not the document itself, but rather the subject of documentation. Each documentation item is a source for generating information elements that describe its properties, context, and compliance-relevant attributes, such as performance metrics, system attributes, or measurement results.

CABCA shall organize its main documentation items to reflect this structure, ensuring clear traceability, stakeholder relevance, and alignment with legal obligations, particularly those defined by the EU AI Act [i.9].
10.2.2 Conformity Specification

This documentation item, which is the output of the Scoping process (clause 7), shall serve as the basis for documenting the normative and contextual foundation of CABCA. Documentation derived from the conformity specification shall include the following information elements:
• Sources of requirements: Legal (e.g. EU AI Act), regulatory (e.g. ISO/IEC 42001 [1], ETSI ENs), domain-specific or internal (e.g. automotive safety standards, medical AI best practices), as outlined in clause 7.2.1.
• Structured conformity requirements: Organized by lifecycle phase and risk domain, assigned unique identifiers, linked to stakeholder concerns, and tagged with applicability and criticality, as outlined in clause 8.2.
10.2.3 Operationalization Specification

This documentation item shall serve as a basis to document how conformity requirements are translated into measurable quality constructs and embedded into the continuous assurance cycle. Documentation derived from the operationalization specification shall include the following information elements:
• Quality dimensions: Derived from stakeholder expectations and the conformity specification.
• Risk identification and treatment: Aligned with the quality dimensions.
• Metrics and measurement models: Mapped from quality dimensions, justified by relevance (i.e. risks), and defined with thresholds.
• Technical setup and infrastructure: Describes CABCA's measurement and monitoring architecture.
• Monitoring protocols: Frequency, granularity, triggers, and escalation mechanisms.
700e2f9cc025d565da15725694fe1bf2 | 104 008 | 10.2.4 Measurement Results and Evidence | This documentation item shall serve as a basis to document the results of assessment activities performed through CABCA's measurement pipeline. It shall include the following information elements: • Assessment outputs: Metric results contextualized by time and version. • Interpretation and scoring: Deviation from expectation, anomaly detection and interpretation of deviations. • Conformity status: Compliant / partially compliant / non-compliant. • Evidence artefacts: Logs, data excerpts, explainability reports and statistical summaries. • Timeliness and relevance: Versioned and timestamped, aligned with audit requirements. • Verdict: Provide a comprehensive conclusion on the system's compliance status in relation to the set quality goals.
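Timestamped, versioned evidence can be sketched as a self-describing record. The layout and field names below are illustrative assumptions; the content hash is one way to support the integrity checks and data provenance called for in clause 10.3:

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(metric_key, value, status, artefact_bytes, version="v1"):
    """Build an illustrative timestamped, versioned evidence entry.

    The SHA-256 digest of the raw artefact lets an auditor later verify
    that the stored evidence has not been altered.
    """
    return {
        "metric": metric_key,
        "value": value,
        "status": status,  # e.g. compliant / partially compliant / non-compliant
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "sha256": hashlib.sha256(artefact_bytes).hexdigest(),
    }

rec = evidence_record("accuracy", 0.94, "compliant", b'{"eval_run": 42}')
print(json.dumps(rec, indent=2))
```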
700e2f9cc025d565da15725694fe1bf2 | 104 008 | 10.3 Traceability and Consistency | CABCA shall ensure bidirectional traceability throughout the documentation stack: • Results ↔ Metrics ↔ Quality Dimensions ↔ Risks ↔ Conformity Requirements. • All artefacts are version-controlled and machine/human-readable. • Integrity checks support data provenance and auditability.
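A minimal sketch of this bidirectional chain, using illustrative identifiers, shows how a single result can be traced up to its conformity requirement and back down again:

```python
# Each pair records one link of the chain
# Results <-> Metrics <-> Quality Dimensions <-> Risks <-> Conformity Requirements.
# All identifiers are illustrative.
upward = {
    "RES-001": "M-ACC",        # result -> metric
    "M-ACC": "QD-ACCURACY",    # metric -> quality dimension
    "QD-ACCURACY": "RISK-07",  # quality dimension -> risk
    "RISK-07": "CR-001",       # risk -> conformity requirement
}
downward = {v: k for k, v in upward.items()}  # the reverse direction

def trace(item, links):
    """Follow the chain from `item` as far as the links allow."""
    chain = [item]
    while chain[-1] in links:
        chain.append(links[chain[-1]])
    return chain

print(trace("RES-001", upward))   # up to the conformity requirement
print(trace("CR-001", downward))  # back down to the measurement result
```

In practice each link would carry version identifiers so that the chain stays consistent across artefact updates.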
700e2f9cc025d565da15725694fe1bf2 | 104 008 | 10.4 Stakeholder Documentation Profiles | CABCA shall support customized documentation profiles for different stakeholders as mandated by ETSI TR 104 119 [i.11]. Table 10.4-1: Documentation Profiles for different Stakeholders Stakeholder Documentation Items AI Provider (Developer) All documentation items (i.e. conformity specification, operationalization specification and measurement results and evidence). Full traceability across all layers, with design decisions and version control. Conformity Assessment Body Structured conformity mapping, test protocols, results, and audit logs. Regulatory Authority Conformity status, risk deviations, and legal requirement mappings. AI Operator Monitoring dashboards and alerts. End User/Impacted Person Public summaries on fairness, transparency, and safety. Annex A (informative): Examples A.1 General This annex provides two illustrative, end-to-end use cases that demonstrate the practical application of the Continuous Auditing-Based Conformity Assessment (CABCA) framework. The examples walk through the core phases of Scoping (clause 7), Operationalization (clause 8), and Continuous Assessment (clause 9). Each use case begins with a description of the application scenario and its associated risks. It then follows the CABCA process to show how high-level compliance goals are systematically translated into specific, automated measurements and verifiable reports. This approach is designed to showcase the complete, traceable path from a conformity requirement to its continuous, evidence-based assessment, as detailed throughout the present document. A.2 PII Leakage Control for a Support Ticket Assistant A.2.1 Use Case Description Application Scenario and Risk: ACME Support Solutions develops an AI-powered assistant to help its customer support teams.
The AI system reads incoming customer support tickets, summarizes them, and suggests draft replies for the human agent. A primary risk is that the AI model might inadvertently include Personally Identifiable Information (PII), such as names, email addresses, or phone numbers, in its generated summaries or system logs. If this sensitive information is stored insecurely or included in a reply, it would constitute a significant data breach and a violation of privacy regulations like GDPR, leading to legal penalties and loss of customer trust. A.2.2 CABCA Implementation Example A.2.2.1 General The following clauses illustrate how the CABCA framework is applied to mitigate the risk of PII leakage. The process begins with Scoping to define the conformity requirements, proceeds to Operationalization to create a measurable and auditable specification, and concludes with an example of a Continuous Assessment Report. A.2.2.2 Scoping and Operationalization Specification This example details the creation of the Operationalization Specification, which translates the high-level goal of preventing PII leaks into a concrete, machine-readable format for continuous auditing. The specification is built by defining the Conformity Specification, Quality Dimensions, Requirements, Metrics and Measurements. The information is presented in Tables A.2-1 to A.2-5 below. Each component includes its attributes and a rationale explaining how it was derived in the context of the CABCA process. Table A.2-1: Exemplary Conformity Specification for PII Leakage Control Item Component Attribute Value / Description Conformity Specification Name "CS-PII-01" Comment "Consolidated requirement to prevent clear-text personal data in outputs and logs."
ConfidentialityFlag true Assessor "Compliance" AssessmentInterface "doc://conformity/CS-PII-01" sources ["EU_AI_Act_Art15", "GDPR_Art25"] Explanation: This Scoping artifact (clauses 7.1, 7.2.1) consolidates applicable sources like GDPR Article 25 ("Data protection by design") and the EU AI Act's robustness requirements (Art. 15) into a single, actionable goal. It is selected based on the criteria in clause 7.2.2 and serves as the formal input for Operationalization and the documentation item in clause 10.2.2. Table A.2-2: Exemplary Quality Dimension Specification for PII Leakage Control Item Component Attribute Value / Description Quality Dimensions Name "QD-PII" Comment "Dimensions relevant to CS-PII-01." ConfidentialityFlag false Assessor "Compliance" AssessmentInterface "doc://dimensions/QD-PII" dimensions ["privacy", "accountability"] Explanation: In accordance with clause 8.1.2, the high-level requirement is broken down into quality dimensions. Preventing data leaks is a core "privacy" concern, while the ability to prove that the system is functioning correctly relates to "accountability". These dimensions structure the subsequent risk analysis and metric definition. Table A.2-3: Exemplary Requirements Specification for PII Leakage Control Item Component Attribute Value / Description Requirement Name "R-PII-01" Comment "No clear-text PII in answers or logs; traceable to CS-PII-01." ConfidentialityFlag false Assessor "Compliance" AssessmentInterface "cfg://req/R-PII-01" Frequency "weekly && on_deploy" fromConformitySpecification "CS-PII-01" qualityDimensionsRef "QD-PII" traceToSource ["GDPR_Art25"] Explanation: The requirement is defined to be measurable and auditable (clause 8.1.4.1) and includes an explicit evaluation frequency (clause 8.1.4.2). The frequency is set to run weekly for ongoing monitoring and also on_deploy, as a new model version could introduce unexpected behaviour and should be checked immediately.
Full traceability is maintained (clause 10.3). Table A.2-4: Exemplary Metric Specification for PII Leakage Control Item Component Attribute Value / Description Metrics Name "M-PII-LEAK-RATE" Comment "Proportion of outputs/log entries with detected PII." ConfidentialityFlag false Assessor "Quality Engineering" AssessmentInterface "cfg://metric/M-PII-LEAK-RATE" metric key: "pii_leak_rate", unit: "proportion", threshold: "<= 0,5 %" requirementRef "R-PII-01" Explanation: The metric defines a precise unit and threshold (clause 8.1.4.3) and is structured for machine-readable assessment (clause 8.2.2). While a zero-tolerance threshold (0 %) is ideal, it is often impractical due to potential false positives in detection tools. The engineering team documented that a leak rate below 0,5 % represents an acceptable level of risk. Table A.2-5: Exemplary Measurement Documentation for PII Leakage Control Item Component Attribute Value / Description Measurement Name "MEAS-PII-SCAN" Comment "Combined NER and pattern scanning over outputs and logs." ConfidentialityFlag true Assessor "MLOps" AssessmentInterface "runner://measurements/m_pii_scan" method ["ner_based_pii_scan", "pattern_scan"] inputArtifacts ["prod_logs/*.jsonl", "sampled_outputs/*.txt"] outputUnit "proportion" automation true evidenceRetentionDays 180 metricRef "M-PII-LEAK-RATE" Explanation: The measurement specifies the technique, input artifacts, and automation details (clause 8.1.5). The inputs align with evidence sources (clause 9.2), and the retention period supports persistence requirements (clause 9.3). The MLOps team chose a hybrid approach of a NER model (to find names) and pattern scanning (for emails/phones) to achieve better detection coverage. A.2.2.3 Continuous Assessment Report Table A.2-6 shows an excerpt from an Assessment Report, the final artifact of the Continuous Assessment Cycle (clause 9).
This report is automatically generated, providing a verifiable record of the system's conformity status for the specified period. Table A.2-6: Exemplary Assessment Report Excerpt for PII Leakage Control Attribute Value / Description system_id "TicketClassifier v1.4" provider "ACME GmbH" window "2025-07-01..2025-08-01" requirementResults requirementRef: "R-PII-01", metricRef: "M-PII-LEAK-RATE", measurementRef: "MEAS-PII-SCAN", value: "0.3%", threshold: "<= 0.5%", status: "pass", evidence: ["evid/pii_scan_2025-08-01.json"] conformity_status "conformant" standards_refs ["EU AI Act Art.15", "GDPR Art.25"] timestamp "2025-08-01T09:15Z" Explanation: This report demonstrates a successful "pass" case. The measured value of 0,3 % is below the specified threshold of 0,5 %. The report maps the measurement result back to the requirement (clause 9.1), persists the evidence link (clause 9.3), and clearly communicates the "conformant" status to stakeholders (clause 9.4). It forms a key part of the "Measurement Results and Evidence" documentation item (clause 10.2.4), providing end-to-end traceability. A.3 Use Case: Data Drift Monitoring for Demand Forecasting A.3.1 Use Case Description Application Scenario and Risk: ACME Retail Analytics Ltd. provides a demand forecasting AI model to a large e-commerce client. The model predicts sales for thousands of products for the upcoming week, helping the client optimize inventory. The primary risk is data drift. The model was trained on historical sales data, but if customer behaviour changes (e.g. due to a new trend, a competitor's campaign, or economic shifts), the live production data will no longer resemble the training data. If this drift goes undetected, the model's predictions will become inaccurate, leading to costly business errors like overstocking (wasted capital) or stockouts (lost sales).
A.3.2 CABCA Implementation Example A.3.2.1 General The following clauses illustrate the application of the CABCA framework to manage the risk of data drift. The example covers the creation of the Operationalization Specification and demonstrates how a non-conformity is handled in a Continuous Assessment Report. A.3.2.2 Scoping and Operationalization Specification This example details the Operationalization Specification for monitoring data drift. It translates the high-level goal of maintaining model robustness and accuracy into a concrete, measurable and automated process. See Table A.3-1. Table A.3-1: Exemplary Conformity Specification for Data Drift Monitoring Item Component Attribute Value / Description Conformity Specification Name "CS-DRIFT-01" Comment "Continuous monitoring of data/model drift affecting prediction quality." ConfidentialityFlag false Assessor "Quality Management" AssessmentInterface "doc://conformity/CS-DRIFT-01" sources ["EU_AI_Act_Art9", "EU_AI_Act_Art15"] Explanation: The Quality Management team identified data drift as a major operational risk falling under the EU AI Act's Article 9 (Risk Management System) and Article 15 (Accuracy, Robustness). This Scoping artifact consolidates these legal drivers into a focused compliance objective (clauses 7.1 and 7.2). Table A.3-2: Exemplary Quality Dimension Specification for Data Drift Monitoring Item Component Attribute Value / Description Quality Dimensions Name "QD-DRIFT" Comment "Dimensions for drift control." ConfidentialityFlag false Assessor "Quality Management" AssessmentInterface "doc://dimensions/QD-DRIFT" dimensions ["robustness", "accuracy"] Explanation: Significant data drift directly impacts the model's "robustness" (its ability to handle new data) and its "accuracy" (the quality of its predictions). These dimensions were chosen as they directly map to the business risk and the conformity specification (clause 8.1.2).
Table A.3-3: Exemplary Requirement Specification for Data Drift Monitoring Item Component Attribute Value / Description Requirement Name "R-DRIFT-01" Comment "Significant population drift should be detected and addressed." ConfidentialityFlag false Assessor "Quality Engineering" AssessmentInterface "cfg://req/R-DRIFT-01" Frequency "daily && on_deploy" fromConformitySpecification "CS-DRIFT-01" qualityDimensionsRef "QD-DRIFT" traceToSource ["EU_AI_Act_Art9"] Explanation: This requirement translates the abstract risk of drift into a concrete, testable rule. The daily frequency is chosen because market conditions can change quickly in retail. This adheres to the criteria in clause 8.1.4. Table A.3-4: Exemplary Metrics Specification for Data Drift Monitoring Item Component Attribute Value / Description Metrics Name "M-PSI" Comment "Population Stability Index as drift indicator." ConfidentialityFlag false Assessor "Quality Engineering" AssessmentInterface "cfg://metric/M-PSI" metric key: "population_stability_index", unit: "index", threshold: "<= 0.20" requirementRef "R-DRIFT-01" Explanation: The Population Stability Index (PSI) is an industry-standard metric for data drift. A common rule is that PSI < 0,1 is stable, 0,1 - 0,25 is moderate drift, and > 0,25 is major drift. The team set a threshold of 0,20 as the trigger for an alert, providing a clear, unambiguous metric (clause 8.1.4.3). Table A.3-5: Exemplary Measurement Documentation for Data Drift Monitoring Item Component Attribute Value / Description Measurement Name "MEAS-PSI" Comment "Compare training vs. production feature distributions."
ConfidentialityFlag true Assessor "MLOps" AssessmentInterface "runner://measurements/m_psi" method "compare(train_feature_dist, prod_feature_dist)" inputArtifacts ["train_stats/*.json", "prod_stats/daily/*.json"] outputUnit "index" automation true evidenceRetentionDays 365 metricRef "M-PSI" Explanation: The implementation statistically compares feature distributions between the original training data and recent production data, providing an automated and repeatable measurement process (clause 8.1.5). A.3.2.3 Continuous Assessment Report The following excerpt from an Assessment Report demonstrates a "fail" case where data drift was detected, triggering a non-conformity status and a corrective action plan. Table A.3-6: Exemplary Assessment Report Excerpt for Data Drift Monitoring Attribute Value / Description system_id "DemandForecaster v3.2" provider "ACME Retail Analytics Ltd." window "2025-08-01..2025-08-07" requirementResults requirementRef: "R-DRIFT-01", metricRef: "M-PSI", measurementRef: "MEAS-PSI", value: 0.27, threshold: "<= 0.20", status: "fail", evidence: ["evid/psi_2025-08-07.json"] nonconformities id: "NC-DRIFT-20250807", description: "PSI above threshold for three consecutive days.", corrective_action: { due: "2025-08-15", plan: ["retrain_on_recent_data", "update_feature_normalization"]} conformity_status "non-conformant (temporary)" standards_refs ["EU AI Act Art.9", "EU AI Act Art.15"] timestamp "2025-08-07T06:30Z" Explanation: This report shows a "fail" case. The measured PSI value of 0,27 exceeds the threshold of 0,20, triggering a non-conformity. The system status is updated to "non-conformant," and the report automatically documents the issue and outlines a corrective action plan (retrain the model). This demonstrates the full feedback loop of CABCA: detect, report, and plan for remediation, ensuring risks are actively managed (clauses 9.1 and 9.4).
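The PSI measurement and the resulting pass/fail verdict can be sketched end to end. The bin proportions below are invented for illustration, and the report fields are a simplified subset of Table A.3-6; the PSI formula itself is the standard sum over bins of (actual - expected) * ln(actual / expected):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index over two vectors of bin proportions.

    Zero bins are floored at `eps` so the logarithm stays finite.
    """
    total = 0.0
    for p, q in zip(expected, actual):
        p, q = max(p, eps), max(q, eps)
        total += (q - p) * math.log(q / p)
    return total

def requirement_result(metric_ref, value, threshold):
    """Simplified report entry: compare the measured value to the threshold."""
    status = "pass" if value <= threshold else "fail"
    entry = {"metricRef": metric_ref, "value": round(value, 2),
             "threshold": f"<= {threshold}", "status": status}
    if status == "fail":  # attach a corrective action plan on failure
        entry["corrective_action"] = ["retrain_on_recent_data"]
    return entry

train = [0.25, 0.25, 0.25, 0.25]  # training feature distribution (4 bins)
prod = [0.10, 0.15, 0.25, 0.50]   # drifted production distribution
print(requirement_result("M-PSI", psi(train, prod), 0.20))  # status: fail
```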
Note on traceability (both examples): Each chain follows Results ↔ Metrics ↔ Requirement ↔ QualityDimensions/ConformitySpecification as required by clause 10.3, and all items are machine/human-readable to support automated execution and stakeholder review (clauses 8.2.2, 8.2.3, 10.1 and 10.2). History Document history V1.1.1 January 2026 Publication
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 1 Scope | The present document provides the Implementation Conformance Statement (ICS) pro forma for testing Server implementations for compliance to the Mission Critical Services over LTE protocol requirements defined by 3GPP, and in accordance with the relevant guidance given in ISO/IEC 9646-1 [i.3] and ISO/IEC 9646-7 [2]. The present document specifies the recommended applicability statements for the test cases included in ETSI TS 104 149-1 [4]. These applicability statements are based on the features implemented in the Server. The present document is valid for Mission Critical Data (MCData) Servers implemented according to 3GPP releases starting from Release 14 up to the Release indicated on the cover page of the present document. The present document does not specify applicability or ICS for protocol conformance testing for the EPS (LTE) bearers which carry the Mission Critical Services data sent or received by the Client and/or the Server. These are defined in ETSI TS 136 523-2 [i.11] (3GPP TS 36.523-2).
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 2 References | |
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 2.1 Normative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found in the ETSI docbox. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity. The following referenced documents are necessary for the application of the present document. [1] ETSI TS 136 579-5: "LTE; Mission Critical (MC) services over LTE; Part 5: Abstract test suite (ATS) (3GPP TS 36.579-5)". [2] ISO/IEC 9646-7: "Information technology - Open systems interconnection - Conformance testing methodology and framework - Part 7: Implementation Conformance Statements". [3] ETSI TS 123 282: "LTE; Functional architecture and information flows to support Mission Critical Data (MCData); Stage 2 (3GPP TS 23.282)". [4] ETSI TS 104 149-1: "Mission Critical (MC) services; Mission Critical Data (MCData) Application Server (AS) Protocol conformance specification for server-to-client interface; Part 1: Test structure, configurations, conformance requirement and test purposes". |
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 2.2 Informative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity. The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's understanding, but are not required for conformance to the present document. [i.1] ETSI TR 121 905: "Digital cellular telecommunications system (Phase 2+) (GSM); Universal Mobile Telecommunications System (UMTS); LTE; 5G; Vocabulary for 3GPP Specifications (3GPP TR 21.905)". ETSI ETSI TS 104 149-2 V1.1.1 (2026-02) 7 [i.2] ETSI TS 136 579-1: "LTE; Mission Critical (MC) services over LTE; Part 1: Common test environment (3GPP TS 36.579-1)". [i.3] ISO/IEC 9646-1: "Information technology - Open Systems Interconnection - Conformance testing methodology and framework - Part 1: General concepts". [i.4] ETSI TS 136 579-4: "Mission Critical (MC) services over LTE; Part 4: Test Applicability and Implementation Conformance Statement (ICS) (3GPP TS 36.579-4)". [i.5] ETSI TS 136 579-6: "LTE; Mission Critical (MC) services over LTE; Part 6: Mission Critical Video (MCVideo) User Equipment (UE) Protocol conformance specification (3GPP TS 36.579-6)". [i.6] ETSI TS 136 579-7: "LTE; Mission Critical (MC) services over LTE; Part 7: Mission Critical Data (MCData) User Equipment (UE) Protocol conformance specification (3GPP TS 36.579-7)". [i.7] 3GPP TS 36.579-8: "Mission Critical (MC) services over LTE; Part 8: Mission Critical Video (MCVideo) Server Application conformance specification".
[i.8] 3GPP TS 36.579-9: "Mission Critical (MC) services over LTE; Part 9: Mission Critical Data (MCData) Server Application conformance specification". [i.9] ETSI TS 136 579-2: "LTE; Mission Critical (MC) services over LTE; Part 2: Mission Critical Push To Talk (MCPTT) User Equipment (UE) Protocol conformance specification (3GPP TS 36.579-2)". [i.10] ETSI TS 136 579-3: "LTE; Mission Critical (MC) services over LTE; Part 3: Mission Critical Push To Talk (MCPTT) Server Application conformance specification (3GPP TS 36.579-3)". [i.11] ETSI TS 136 523-2: "LTE; Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Packet Core (EPC); User Equipment (UE) conformance specification; Part 2: Implementation Conformance Statement (ICS) proforma specification (3GPP TS 36.523-2)". |
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 3 Definition of terms, symbols and abbreviations | |
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 3.1 Terms | For the purposes of the present document, the terms given in ETSI TR 121 905 [i.1] (3GPP TR 21.905) and the following apply: NOTE: A term defined in the present document takes precedence over the definition of the same term, if any, in ETSI TR 121 905 [i.1] (3GPP TR 21.905). Moreover, some terms defined in ISO/IEC 9646-1 [i.3] and ISO/IEC 9646-7 [2] are explicitly included below with small modifications to reflect the terminology used in 3GPP. ICS pro forma: document, in the form of a questionnaire, which when completed for an implementation or system becomes an ICS Implementation Conformance Statement (ICS): statement made by the supplier of an implementation or system claimed to conform to a given specification, stating which capabilities have been implemented Implementation eXtra Information for Testing (IXIT): statement made by a supplier or implementer of a UEUT which contains or references all of the information (in addition to that given in the ICS) related to the UEUT and its testing environment, which will enable the test laboratory to run an appropriate test suite against the UEUT IUT containing MCX Client: statement identifying which entity, and associated requirements, from the MCX service architecture is the subject of testing NOTE: Depending on the ETSI TS 136 579-5 [1] (3GPP TS 36.579-5) test model being used, the LTE UE (with the MCX Client installed) is considered as the IUT (MCX EUTRA test model), or, only the MCX Client is considered as the IUT (MCX IPCAN test model). In both cases the SUT is the UE, communicating with the SS over the Uu radio interface.
IXIT pro forma: document, in the form of a questionnaire, which when completed for a UEUT becomes an IXIT Protocol Implementation Conformance Statement (PICS): ICS for an implementation or system claimed to conform to a given protocol specification Protocol Implementation eXtra Information for Testing (PIXIT): IXIT related to testing for conformance to a given protocol specification static conformance review: review of the extent to which the static conformance requirements are claimed to be supported by the UEUT, by comparing the answers in the ICS(s) with the static conformance requirements expressed in the relevant specification(s)
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 3.2 Symbols | Void. |
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 3.3 Abbreviations | For the purposes of the present document, the abbreviations given in ETSI TR 121 905 [i.1] (3GPP TR 21.905) and the following apply: NOTE: An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in ETSI TR 121 905 [i.1] (3GPP TR 21.905). EUTRA Evolved UMTS Terrestrial Radio Access ICS Implementation Conformance Statement IPCAN IP Connectivity Access Network IUT Implementation Under Test IXIT Implementation eXtra Information for Testing MC Mission Critical MCData Mission Critical Data MCPTT Mission Critical Push To Talk MCVideo Mission Critical Video MCX Mission Critical X NOTE: With X = PTT or X= Video or X= Data. SS System Simulator SUT System Under Test TC Test Case |
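Clause 4 below expresses test case applicability as Boolean conditions over ICS item references of the form "table/item" (e.g. A.4.1-1/1). A minimal sketch of how such conditions can be evaluated mechanically follows; the yes/no answers are illustrative assumptions, not a real completed ICS:

```python
# Supplier's ICS answers, keyed by "table/item" reference. Illustrative only.
ics_answers = {
    "A.4.1-1/1": True,   # e.g. "MCData Server supported"
    "A.4.1-1/2": False,
    "A.4.2-1/1": True,   # e.g. "SDS supported"
    "A.4.2-1/2": False,  # e.g. "FD not supported"
}

def condition(*items):
    """IF item1 AND item2 ... THEN R ELSE N/A (the Table 4-1a pattern)."""
    return "R" if all(ics_answers[i] for i in items) else "N/A"

# The four MCData Server conditions from Table 4-1a:
print("CC01:", condition("A.4.1-1/1"))               # R
print("CC02:", condition("A.4.1-1/1", "A.4.2-1/1"))  # R
print("CC03:", condition("A.4.1-1/1", "A.4.2-1/2"))  # N/A
print("CC04:", condition("A.4.1-1/2"))               # N/A
```

With answers like these, a test campaign tool could derive the recommended test set automatically from the completed ICS.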
941346d7292bbb3cbcd6a2c87ca69d21 | 104 149-2 | 4 Recommended Test Case Applicability | The applicability of each individual test is identified in Table 4-1 (MCData Server). This is just a recommendation based on the purpose for which the test case was written. The applicability of every test is formally expressed by the use of Boolean expressions that are based on parameters (ICS) included in annex A of the present document. Additional information related to the Test Case (TC), e.g. affecting its dynamic behaviour or its execution, may be provided as well. The columns in Table 4-1 have the following meaning: Clause The clause column indicates the clause number in ETSI TS 104 149-1 [4] which contains the test body. Title The title column describes the name of the test and contains the clause title of the clause in ETSI TS 104 149-1 [4] which contains the test body. Release The release column indicates the earliest release from which the test case is applicable. In some specific cases it may indicate the release(s) for which the TC is only applicable. NOTE 1: Some exceptions to this interpretation may be indicated in Notes in column 'Number of TC Executions'. Applicability - Condition The following notations are used for the applicability column: R recommended - the test case is recommended O optional - the test case is optional N/A not applicable - in the given context, the test case is not recommended. Ci conditional - the test is recommended ("R") or not ("N/A") depending on the support of other items. "i" is an integer identifying a unique conditional status expression which is defined immediately following the table. For nested conditional expressions, the syntax "IF ... THEN (IF ... THEN ... ELSE...) ELSE ..." is used to avoid ambiguities. NOTE 2: The conditions are defined in Table 4-1a (MCData Server). To avoid ambiguity for the MCData Server testing conditions the notation of CCi is used.
Applicability - Comments This column contains a verbal description of the condition. Additional Information - Specific ICS This column contains the mnemonics of ICS(s) affecting the dynamic behaviour of the TC. NOTE 3: ICS items specified in other test specifications can be referenced to avoid redundant definitions. Additional Information - Specific IXIT This column contains the mnemonics of IXIT(s) affecting the dynamic behaviour of the TC. IXITs are defined in ETSI TS 136 579-5 [1] (3GPP TS 36.579-5). Additional Information - Number of TC Executions This column contains, wherever applicable, the number of TC executions recommended for certification purposes. It may also contain other information, e.g. exceptions to the release applicable to the test. Clarifying notes, when available, are listed in dedicated tables with table numbers having the suffix "b". Table 4-1: Applicability of MCData Server tests and additional information for testing Clause TC Title Release Applicability Condition Comment Additional Information Specific ICS Specific IXIT Number of TC Executions 5 MCData Server - MCData Client Configuration 5.1 MCDATA Server - MCDATA Client / Configuration / Authentication / User Authorization / UE Configuration / User Profile / Key generation Rel-15 CC01 IUT is MCData Server 5.2 MCDATA Server - MCDATA Client / Configuration / Group Creation / Group Regroup Creation / Group Regroup Teardown Rel-15 CC04 IUT is MCData Server 5.3 MCDATA Server - MCDATA Client / Configuration / Group Affiliation / Implicit Affiliation / Remote change / De-affiliation / Home MCDATA system Rel-15 CC01 IUT is MCData Server 5.4 MCDATA Server - MCDATA Client / Configuration / Pre-established Session Establishment / Pre-established Session Modification / Pre-established Session Release Rel-16 CC01 IUT is MCData Server 5.5 MCDATA Server - MCDATA Client / Configuration / Determination of MCDATA Service Settings / Current Active MCDATA Settings /
De-subscribe Rel-15 CC01 IUT is MCData Server 5.6 MCDATA Server - MCDATA Client / Configuration / Download CSK Rel-16 CC01 IUT is MCData Server 5.7 MCDATA Server - MCDATA Client / Configuration / Functional Alias / Functional alias status determination / Activate functional alias / Deactivate functional alias Rel-16 CC01 IUT is MCData Server 6 MCData Server - MCData Client operation 6.1 Short Data Service 6.1.1 MCDATA Server - MCDATA Client / Short Data Service (SDS) / Standalone SDS Using Signalling Control Plane / One-to-one Standalone SDS Rel-15 CC02 IUT is MCData Server 6.1.2 MCDATA Server - MCDATA Client / Short Data Service (SDS) / Standalone SDS Using Media Plane / Group Standalone SDS Rel-15 CC02 IUT is MCData Server 6.1.3 MCDATA Server - MCDATA Client / Short Data Service (SDS) / SDS Session / One-to-one SDS Session Rel-15 CC02 IUT is MCData Server 6.1.4 MCDATA Server - MCDATA Client / Short Data Service (SDS) / Standalone SDS Using Media Plane / One-to-one Standalone SDS / Pre-established session Rel-16 CC02 IUT is MCData Server 6.2 File Distribution 6.2.1 MCDATA Server - MCDATA Client / File Distribution (FD) / FD Using HTTP / One-to-one Standalone FD / Non-Mandatory Download / FILE DOWNLOAD REQUEST ACCEPTED / FILE DOWNLOAD COMPLETED / FILE DOWNLOAD REQUEST REJECTED / FILE DOWNLOAD DEFERRED Rel-15 CC03 IUT is MCData Server 6.2.2 MCDATA Server - MCDATA Client / File Distribution (FD) / FD Using HTTP / Group Standalone FD / Mandatory Download / Without Disposition Request Rel-15 CC03 IUT is MCData Server 6.2.3 MCDATA Server - MCDATA Client / File Distribution (FD) / FD Using Media Plane / One-to-one Standalone FD Rel-15 CC03 IUT is MCData Server 6.2.4 MCDATA Server - MCDATA Client / File Distribution (FD) / Accessing list of deferred data group communications Rel-15 CC03 IUT is MCData
Server 6.3 Enhanced Status (ES) 6.3.1 MCDATA Server - MCDATA Client / Enhanced Status (ES) Rel-15 CC02 IUT is MCData Server Table 4-1a: Applicability of tests Conditions MCData Server CC01 IF A.4.1-1/1 THEN R ELSE N/A CC02 IF A.4.1-1/1 AND A.4.2-1/1 THEN R ELSE N/A CC03 IF A.4.1-1/1 AND A.4.2-1/2 THEN R ELSE N/A CC04 IF A.4.1-1/2 THEN R ELSE N/A Annex A (normative): ICS pro forma for Mission Critical Services A.0 The right to copy Notwithstanding the provisions of the copyright clause related to the text of the present document, ETSI grants that users of the present document may freely reproduce the ICS pro forma for Mission Critical Services in this Annex so that it can be used for its intended purposes and may further publish the completed ICS pro forma for Mission Critical Services. A.1 Guidance for completing the ICS pro forma A.1.1 Purposes and structure The purpose of this ICS pro forma is to provide a mechanism whereby a supplier of an implementation of the requirements defined in relevant specifications may provide information about the implementation in a standardized manner. The ICS pro forma is subdivided into clauses for the following categories of information: • instructions for completing the ICS pro forma; • identification of the implementation; • identification of the protocol; • ICS pro forma tables (for example: Client implementation, Server implementation, etc.). A.1.2 Abbreviations and conventions The ICS pro forma contained in this annex is comprised of information in tabular form in accordance with the guidelines presented in ISO/IEC 9646-7 [2]. Item column The item column contains a number which identifies the item in the table. Item description column The item description column describes in free text each respective item (e.g. parameters, timers, etc.). It implicitly means "is <item description> supported by the implementation?".
Reference column The reference column gives reference to the relevant ETSI core specifications. Release column The release column indicates the earliest release from which the capability or option is relevant. Mnemonic column The Mnemonic column contains mnemonic identifiers for each item. ETSI ETSI TS 104 149-2 V1.1.1 (2026-02) 13 Comments column This column is left blank for particular use by the reader of the present document. References to items For each possible item answer (answer in the support column) within the ICS pro forma there exists a unique reference, used, for example, in the conditional expressions. It is defined as the table identifier, followed by a solidus character "/", followed by the item number in the table. If there is more than one support column in a table, the columns shall be discriminated by letters (a, b, etc.), respectively. A.1.3 Instructions for completing the ICS pro forma The supplier of the implementation may complete the ICS pro forma in each of the spaces provided. More detailed instructions are given at the beginning of the different clauses of the ICS pro forma. A.2 Identification of the MCData Server Equipment A.2.0 The right to copy Identification of the MCData Server should be filled in so as to provide as much detail as possible regarding version numbers and configuration options. The product supplier information and client information should both be filled in if they are different. A person who can answer queries regarding information supplied in the ICS should be named as the contact person. A.2.1 Date of the statement ......................................................................................................................................................................................... 
A.2.2 MCData Server under test identification MCData Server under test name: ......................................................................................................................................................................................... ......................................................................................................................................................................................... Hardware configuration: ......................................................................................................................................................................................... ......................................................................................................................................................................................... ......................................................................................................................................................................................... Software configuration: ......................................................................................................................................................................................... ......................................................................................................................................................................................... ......................................................................................................................................................................................... ETSI ETSI TS 104 149-2 V1.1.1 (2026-02) 14 A.2.3 Product supplier Name: ......................................................................................................................................................................................... 
Address: ......................................................................................................................................................................................... ......................................................................................................................................................................................... ......................................................................................................................................................................................... Telephone number: ......................................................................................................................................................................................... Facsimile number: ......................................................................................................................................................................................... E-mail address: ......................................................................................................................................................................................... Additional information: ......................................................................................................................................................................................... ......................................................................................................................................................................................... ......................................................................................................................................................................................... 
A.2.4 The Organization responsible for the Product testing Name: ......................................................................................................................................................................................... Address: ......................................................................................................................................................................................... ......................................................................................................................................................................................... ......................................................................................................................................................................................... Telephone number: ......................................................................................................................................................................................... Facsimile number: ......................................................................................................................................................................................... E-mail address: ......................................................................................................................................................................................... ETSI ETSI TS 104 149-2 V1.1.1 (2026-02) 15 Additional information: ......................................................................................................................................................................................... ......................................................................................................................................................................................... 
......................................................................................................................................................................................... A.2.5 ICS contact person Name: ......................................................................................................................................................................................... Telephone number: ......................................................................................................................................................................................... Facsimile number: ......................................................................................................................................................................................... E-mail address: ......................................................................................................................................................................................... Additional information: ......................................................................................................................................................................................... ......................................................................................................................................................................................... A.3 Identification of the protocol This ICS pro forma applies to the ETSI standards listed in the normative references clause of the present document. 
A.4 ICS pro forma tables

A.4.1 Implementation Types

Table A.4.1-1: Mission Critical Services general functionality (Item / Functionality / Reference / Release / Mnemonic)
1 MCData Server; ETSI TS 123 282 (3GPP TS 23.282); Rel-15; pc_MCDataServer
2 MCData Server performing GMS function; ETSI TS 123 282 (3GPP TS 23.282); Rel-15; pc_MCDataServer_GMS

A.4.2 Additional information

Table A.4.2-1: Additional information (Item / Additional information / Reference / Release / Mnemonic)
1 Support of MCData Short Data Service (SDS); ETSI TS 123 282 (3GPP TS 23.282); Rel-14; pc_MCData_SDS
2 Support of MCData File Distribution (FD); ETSI TS 123 282 (3GPP TS 23.282); Rel-14; pc_MCData_FD

History: V1.1.1, February 2026, Publication |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 1 Scope | The present document specifies the Abstract Test Suite (ATS) and partial Protocol Implementation eXtra Information for Testing (PIXIT) pro forma for the test specification for the IP/ICMP Translation Algorithm as specified in IETF RFC 7915 [1], in compliance with the relevant requirements and in accordance with the relevant guidance given in ISO/IEC 9646-7 [i.2] and ETSI ETS 300 406 [5]. The test notation used in the ATS is TTCN-3 (see ETSI ES 201 873-1 [i.3]). The following test specification and design considerations can be found in the body of the present document: • the overall test suite structure; • the testing architecture; • the test methods and port definitions; • the test configurations; • TTCN styles and conventions; • the partial PIXIT pro forma; • the modules containing the TTCN-3 ATS. Annex A provides the partial Protocol Implementation eXtra Information for Testing (PIXIT) pro forma. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 2 References | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 2.1 Normative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found in the ETSI docbox. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity. The following referenced documents are necessary for the application of the present document. [1] IETF RFC 7915: "IP/ICMP Translation Algorithm". [2] ETSI TS 104 153-1: "Core Network and Interoperability Testing (INT); Conformance Testing for IP/ICMP Translation Algorithm; (IETF RFC 7915); Part 1: Protocol Implementation Conformance Statement (PICS)". [3] ETSI TS 104 153-2: "Core Network and Interoperability Testing (INT); Conformance Testing for IP/ICMP Translation Algorithm; (IETF RFC 7915); Part 2: Test Suite Structure (TSS) and Test Purposes (TP)". [4] ISO/IEC 9646-6: "Information technology β Open Systems Interconnection β Conformance testing methodology and framework β Part 6: Protocol profile test specification". [5] ETSI ETS 300 406: "Methods for testing and Specification (MTS); Protocol and profile conformance testing specifications; Standardization methodology". ETSI ETSI TS 104 153-3 V1.1.1 (2026-02) 6 |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 2.2 Informative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity. The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's understanding, but are not required for conformance to the present document. [i.1] ISO/IEC 9646-1: "Information technology β Open Systems Interconnection β Conformance testing methodology and framework β Part 1: General concepts". [i.2] ISO/IEC 9646-7: "Information technology β Open Systems Interconnection β Conformance testing methodology and framework β Part 7: Implementation Conformance Statements". [i.3] ETSI ES 201 873-1: "Methods for Testing and Specification (MTS); The Testing and Test Control Notation version 3; Part 1: TTCN-3 Core Language". |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 3 Definition of terms, symbols and abbreviations | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 3.1 Terms | For the purposes of the present document, the terms given in ISO/IEC 9646-7 [i.2] and IETF RFC 7915 [1] apply. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 3.2 Symbols | Void. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 3.3 Abbreviations | For the purposes of the present document, the abbreviations given in ISO/IEC 9646-1 [i.1], ISO/IEC 9646-6 [4], ISO/IEC 9646-7 [i.2], IETF RFC 7915 [1] and the following apply: ATM Abstract Test Method ATS Abstract Test Suite CF Configuration GE Generation of ICMPv4/ICMPv6 Error messages ICMP Internet Control Message Protocol ICMPE translating ICMPv4/ICMPv6 Error messages ICMPH translating ICMPv4/ICMPv6 Headers ICMPv4 Internet Control Message Protocol version 4 ICMPv6 Internet Control Message Protocol version 6 IP Internet Protocol IPHDR translating IPv4/IPv6 Headers IPv4 Internet Protocol version 4 IPv6 Internet Protocol version 6 IUT Implementation Under Test SIIT Stateless IP/ICMP Translation SUT System Under Test TLH Transport-Layer Header translation TS Test System WTT knowing When To Translate XLAT Translator ETSI ETSI TS 104 153-3 V1.1.1 (2026-02) 7 |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 4 Abstract Test Method (ATM) | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 4.1 Introduction | This clause describes the ATM used to test the IP/ICMP Translation Algorithm at the translator. |
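For orientation on what the translator under test does, the following minimal sketch shows stateless IPv4/IPv6 address embedding. It assumes the RFC 6052 well-known prefix 64:ff9b::/96, a common SIIT deployment choice that is not mandated by the present document:

```python
# Illustrative sketch only: stateless IPv4 <-> IPv6 address mapping as
# commonly deployed with SIIT translators. Assumption: the RFC 6052
# well-known prefix 64:ff9b::/96 is used (deployments may use a
# network-specific prefix instead).
import ipaddress

WKP = ipaddress.IPv6Network("64:ff9b::/96")

def to_ipv6(v4: str) -> str:
    """Embed an IPv4 address in the low 32 bits of the /96 prefix."""
    v4int = int(ipaddress.IPv4Address(v4))
    return str(ipaddress.IPv6Address(int(WKP.network_address) | v4int))

def to_ipv4(v6: str) -> str:
    """Extract the embedded IPv4 address (low 32 bits)."""
    return str(ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF))

print(to_ipv6("192.0.2.1"))          # 64:ff9b::c000:201
print(to_ipv4("64:ff9b::c000:201"))  # 192.0.2.1
```

The header and ICMP translation rules tested by this ATS are those of IETF RFC 7915 [1]; the address-mapping scheme above is only context for reading the test configuration.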
a32e74849246f00b60e7850e364e069e | 104 153-3 | 4.2 Test architecture | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 4.2.1 Test method | Void. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 4.2.2 Test machine configuration | The following configurations are simplified to highlight the tested interface and the entities involved. Figure 1: Test configuration CF_XLAT_SIIT |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5 ATS conventions | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.1 Introduction | The ATS conventions are intended to give a better understanding of the ATS, but they also describe the conventions made for the development of the ATS. These conventions shall be considered during any later maintenance or further development of the ATS. The ATS conventions contain two clauses: the testing conventions and the naming conventions. The naming conventions describe the structure of the naming of all ATS elements. To define the ATS, the guidelines of ETSI ETS 300 406 [5] were considered. (Figure 1 in clause 4.2.2 depicts the XLAT IUT between the IPv4 node and IPv6 node emulated by the TS.) |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.2 Testing conventions | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.2.1 Test cases Preamble and Postamble | As described in ETSI TS 104 153-2 [3], the test tool is required to emulate both an IPv4 node and an IPv6 node. It has to handle traffic translation in two directions: IPv4-to-IPv6 (T46) and IPv6-to-IPv4 (T64). For that reason, the test case preambles and postambles are named as follows: IUT is an XLAT (example TC_XLAT_T46_IPHDR_01): f_cf_XLAT_T46_IPHDR_Up, f_cf_XLAT_T46_IPHDR_Down. IUT is an XLAT (example TC_XLAT_T64_IPHDR_01): f_cf_XLAT_T64_IPHDR_Up, f_cf_XLAT_T64_IPHDR_Down. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.3 Naming conventions | |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.3.1 General guidelines | The naming conventions are based on the following underlying principles: • In most cases, identifiers should be prefixed with a short alphabetic string (specified in table 1) indicating the type of TTCN-3 element it represents. • Suffixes should not be used except in those specific cases identified in table 1. • Prefixes and suffixes should be separated from the body of the identifier with an underscore ("_"). EXAMPLE 1: c_sixteen, t_wait_max. • Only module names, data type names and module parameters should begin with an upper-case letter. All other names (i.e. the part of the identifier following the prefix) should begin with a lower-case letter. • The start of second and subsequent words in an identifier should be indicated by capitalizing the first character. Underscores should not be used for this purpose. EXAMPLE 2: f_send_Location_Reporting_Control. Table 1 specifies the naming guidelines for each element of the TTCN-3 language indicating the recommended prefix, suffixes (if any) and capitalization. 
Table 1: TTCN-3 naming convention
• Module: upper-case initial letter; prefix e.g. "LibNGAP_"; example: LibNGAP_Interface
• TSS grouping: all upper-case letters; no prefix; example: NGAP_gNB_PDU
• Message template: lower-case initial letter; prefix "m_"; example: m_E_PDUSetupRequest
• Message template with wildcard or matching expression: lower-case initial letter; prefix "mw_"; example: mw_cause
• Port instance: upper-case initial letter; no prefix; example: NGAPPort
• Constant: lower-case initial letter; prefix "c_"; example: c_maxRetransmission
• Function: lower-case initial letter; prefix "f_"; example: f_recv_NGAP_PDU()
• Altstep: lower-case initial letter; prefix "a_"; example: a_receive()
• Variable: lower-case initial letter; prefix "v_"; example: v_basicId
• PICS values: all upper-case letters; prefix "PICS_"; example: PICS_NGAP_AMF_IUT (see note)
• PIXIT values: all upper-case letters; prefix "PX_"; example: PX_NGAP_AMF_ETS_PORT (see note)
• Parameterization: lower-case initial letter; prefix "p_"; example: p_servedPLMNs
• Enumerated value: lower-case initial letter; prefix "e_"; example: e_preamble
No suffixes are used for any of these language elements.
NOTE: In the PICS and PIXIT cases it is acceptable to use underscore as a word delimiter. |
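The prefix rules of table 1 lend themselves to mechanical checking during ATS maintenance. A minimal sketch (in Python rather than TTCN-3, covering only a few element kinds, and purely illustrative) might look like:

```python
import re

# Hypothetical helper: check a TTCN-3 identifier against a few of the
# table 1 prefix conventions. Not an exhaustive or normative checker.
PREFIX = {
    "template": "m_",
    "template_matching": "mw_",
    "constant": "c_",
    "function": "f_",
    "altstep": "a_",
    "variable": "v_",
    "parameter": "p_",
    "enumerated": "e_",
}

def follows_convention(kind: str, name: str) -> bool:
    """True if the name carries the expected prefix followed by a letter."""
    prefix = PREFIX[kind]
    return re.fullmatch(re.escape(prefix) + r"[A-Za-z]\w*", name) is not None

print(follows_convention("function", "f_recv_NGAP_PDU"))      # True
print(follows_convention("constant", "C_max"))                # False (wrong case)
print(follows_convention("template", "m_E_PDUSetupRequest"))  # True
```

Such a check could be run over a TTCN-3 module's identifier list as a lightweight lint step; the authoritative rules remain those of table 1.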
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.3.2 Test case grouping | The ATS structure is based on the Test Purposes for the IP/ICMP Translation Algorithm as defined in ETSI TS 104 153-2 [3]. |
a32e74849246f00b60e7850e364e069e | 104 153-3 | 5.3.3 Test case identifiers | The test cases have been divided according to the functionalities into several groups. The test case names are built up according to the following scheme.

Table 2: TC identifier naming convention scheme
Identifier: TC_<interface or protocol>_<direction>_<scope>_<nn>
• TC: Test Case prefix, fixed to "TC"
• <interface or protocol>: stateless IP/ICMP translation (SIIT) algorithm, i.e. "XLAT"
• <direction>: from IPv4 to IPv6 (T46) or from IPv6 to IPv4 (T64)
• <scope> = group: IPHDR (Translating IPv4/IPv6 Headers), ICMPH (Translating ICMPv4/ICMPv6 Headers), ICMPE (Translating ICMPv4/ICMPv6 Error Messages), GE (Generation of ICMPv4/ICMPv6 Error Messages), TLH (Transport-Layer Header Translation), WTT (Knowing When to Translate)
• <nn>: sequential number (01 to 99)
NOTE: This naming scheme results in a one-to-one correspondence between the test purpose identifiers as defined in ETSI TS 104 153-2 [3] and the test case identifiers. The TP identifier of the test case TC_xxx_01 is TP_xxx_01.

Annex A (normative): SIIT partial PIXIT pro forma

A.1 The right to copy
Notwithstanding the provisions of the copyright clause related to the text of the present document, ETSI grants that users of the present document may freely reproduce the partial PIXIT pro forma in this annex so that it can be used for its intended purposes and may further publish the completed partial PIXIT. The PIXIT pro forma is based on ISO/IEC 9646-6 [4]; any additional information which may be needed can be found in that International Standard. 
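The identifier scheme of clause 5.3.3 above can be sketched as a small builder and TC-to-TP mapper. This is illustrative only; the group and direction codes are taken from table 2, and the helper names are invented:

```python
import re

# Codes from table 2 of the present document.
GROUPS = {"IPHDR", "ICMPH", "ICMPE", "GE", "TLH", "WTT"}
DIRECTIONS = {"T46", "T64"}

def tc_id(direction: str, scope: str, nn: int) -> str:
    """Build a test case identifier following the table 2 scheme."""
    if direction not in DIRECTIONS or scope not in GROUPS or not 1 <= nn <= 99:
        raise ValueError("invalid TC identifier components")
    return f"TC_XLAT_{direction}_{scope}_{nn:02d}"

def tp_id(tc: str) -> str:
    """One-to-one TC <-> TP mapping: TC_xxx_nn corresponds to TP_xxx_nn."""
    m = re.fullmatch(r"TC_(XLAT_(?:T46|T64)_[A-Z]+_\d{2})", tc)
    if m is None:
        raise ValueError(f"not a valid TC identifier: {tc}")
    return "TP_" + m.group(1)

print(tc_id("T46", "IPHDR", 1))     # TC_XLAT_T46_IPHDR_01
print(tp_id("TC_XLAT_T64_TLH_03"))  # TP_XLAT_T64_TLH_03
```

The one-to-one correspondence with the test purposes of ETSI TS 104 153-2 [3] is exactly the string substitution performed by tp_id above.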
A.2 Identification summary Table A.1 PIXIT Number: Test Laboratory Name: Date of Issue: Issued to: A.3 ATS summary Table A.2 Protocol Specification: IETF RFC 7915 Protocol to be tested: SIIT ATS Specification: ETSI TS 104 153-2 Abstract Test Method: ETSI TS 104 153-3, clause 4 A.4 Test laboratory Table A.3 Test Laboratory Identification: Test Laboratory Manager: Means of Testing: SAP Address: A.5 Client identification Table A.4 Client Identification: Client Test manager: Test Facilities required: ETSI ETSI TS 104 153-3 V1.1.1 (2026-02) 11 A.6 SUT Table A.5 Name: Version: SCS Number: Machine configuration: Operating System Identification: IUT Identification: PICS Reference for IUT: Limitations of the SUT: Environmental Conditions: A.7 Protocol layer information A.7.1 Protocol identification Table A.6 Name: IETF RFC 7915: "IP/ICMP Translation Algorithm" Version: PICS References: ETSI TS 104 153-1 A.8 PIXIT items A.8.1 Introduction Tables in this clause need to be filled by the IUT Manufacturer to specify how the IUT needs to be configured with IUT specific values or describe IUT specific procedures required for complete testing of the IUT. Each PIXIT item corresponds to a Module Parameter of the ATS. 
A.8.2 Port and Address items

Table A.7: Test system ports and addresses
1 PX_XLAT_T46_IPADDR (Charstring): IP address of the XLAT test system connected to the IPv4 node
2 PX_XLAT_T46_PORT (Integer): Port number of the XLAT test system connected to the IPv4 node
3 PX_XLAT_T64_IPADDR (Charstring): IP address of the XLAT test system connected to the IPv6 node
4 PX_XLAT_T64_PORT (Integer): Port number of the XLAT test system connected to the IPv6 node

Table A.8: SUT ports and addresses
1 PX_XLAT_SUT_T46_IPADDR (Charstring): IP address of the XLAT system under test connected to the IPv4 node
2 PX_XLAT_SUT_T46_PORT (Integer): Port number of the XLAT system under test connected to the IPv4 node
3 PX_XLAT_SUT_T64_IPADDR (Charstring): IP address of the XLAT system under test connected to the IPv6 node
4 PX_XLAT_SUT_T64_PORT (Integer): Port number of the XLAT system under test connected to the IPv6 node

History: V1.1.1, February 2026, Publication |
4df6e2656e25a6f3246b032b84b99976 | 104 161 | 1 Scope | The present document is a transposition of Recommendation ITU-T Q.4077 [1] without modifications. |
4df6e2656e25a6f3246b032b84b99976 | 104 161 | 2 References | |
4df6e2656e25a6f3246b032b84b99976 | 104 161 | 2.1 Normative references | References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found in the ETSI docbox. NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity. The following referenced documents are necessary for the application of the present document. [1] Recommendation ITU-T Q.4077 (04/2025): "Testbed as a service application program interfaces descriptions and interoperability requirements". |