5.1 Measured services
When requesting information about the performance of a car, we are provided with figures for top speed, acceleration, maximum load, mileage, service intervals, etc. These performance figures apply to the tested car as a whole, i.e. at the system level or top level of the tested system. However, the measured performance of a car is the result of the car's design and of the performance of the various components involved in producing its services. Examples of car components contributing to measured powerfulness values are the engine, the transmission system, the electrical system, etc. To design the performance of the car we need to measure and evaluate the performance of its components and how the components interact. A similar approach can be applied to a computer system. At the system (application) level we measure the performance of system service delivery, such as transaction processing capacity or the response time of various services.
5.2 Measured components
As with a car, the measured performance of a computer system service is not atomic, but the end result of the performance of many levels of processing services or components. The performance of application services depends on the performance of the middleware services they request. The performance of middleware services depends on the operating system services they request. The performance of operating system services depends on the performance of the hardware component services they request, etc. We do not need to know the performance of every component involved in delivering an application service in order to measure the performance of that service. However, performance is built from the inside out: to design an application that meets defined performance requirements, we need to measure and control the performance of all components.
5.3 Service concepts
A distributed system provides its services to users over a published interface. If a service is general enough it can be used as a shared service by multiple applications. Access to a service in a distributed system is open to any client that has the authority to use the service and is authenticated as a valid user. The rationale of this concept is the reuse of software as an on-line service.
5.3.1 Service and component performance
An application service is normally resolved by a set of internal services. The measured performance of a system service is the aggregated result of all components (hardware and software) involved in and contributing to the result. The performance of an application service depends on the performance of the middleware services it requests. The performance of middleware services depends on the operating system services they request. The performance of operating system services depends on the performance of the hardware component services they request, etc. This chain can basically be followed all the way down to the execution of CPU instructions.
ETSI TR 101 577 V1.1.1 (2011-12)
5.3.2 Service topology and topology performance
The service topology describes how an application service depends on other application services to resolve its task. Performance tests of service topology focus on the responsiveness of distributed service processing components and cover, for example:
• tests of latency in accessing distributed services; and
• tests of capacity in accessing distributed services.
A distributed system is not only built from several layers of services; each layer of services may also be distributed across a large number of system components (servers). The system topology describes how the system components are interconnected and what is required to traverse the system between any two components. For example, registration of an IMS user is initiated by the UE sending a REGISTER request on the IMS Gm interface (SIP) to the user's serving CSCF (S-CSCF). The S-CSCF in turn requests services from the operator's HSS to identify and authenticate the user and to set up secured IPsec channels to the UE. This is achieved by sending a request on the IMS Cx interface (Diameter) to the HSS.
Figure 6: Example of service topology from IMS
5.4 Service characteristics
Service characteristics are service attributes that determine how performance tests of a service should be designed:
• service initiation characteristics;
• service duration characteristics;
• service resource and load characteristics;
• service design characteristics; and
• service flow characteristics.
The purpose of specifying the characteristics of each service is to design correct performance test cases. A well written specification of the characteristics of a tested service improves the understanding of what to measure and how.
5.4.1 Service initiation characteristics
Service initiation characteristics describe how a service is invoked. There are two types of services in this context:
• pulled services; and
• pushed services.
Pulled services
A pulled service is initiated by a Client in a service request and responded to by the service in one or more response messages.
Pushed services
A pushed service is initiated by a Service in a service request to a subscribing Client and responded to by the Client in one or more response messages. A pushed service is typically triggered by an event causing a state change in the service. The service is distributed to any Client with a pending subscription to the service. An example of a pushed service is the following: an IMS user with an active publication sends a PUBLISH of a status change to the publication server. The publication server updates the publication and sends NOTIFY messages to all active subscribers of the publication.
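The PUBLISH/NOTIFY pattern above can be sketched as a minimal publish-subscribe loop. This is an illustrative sketch only; the class and method names are hypothetical and do not represent the SIP protocol machinery itself:

```python
class PublicationServer:
    # Minimal sketch of the PUBLISH/NOTIFY pattern described above.
    # Class and method names are illustrative, not part of SIP itself.
    def __init__(self):
        self._subscribers = []
        self._state = None

    def subscribe(self, callback):
        # Register a client with a pending subscription to the publication.
        self._subscribers.append(callback)

    def publish(self, new_state):
        # Update the publication, then push a notification to every
        # active subscriber (the "pushed service" initiation).
        self._state = new_state
        for notify in self._subscribers:
            notify(new_state)

received = []
server = PublicationServer()
server.subscribe(received.append)
server.publish("busy")
print(received)  # ['busy']
```

The key property for performance testing is visible here: one PUBLISH fans out into one NOTIFY per active subscriber, so the notification load scales with the subscriber count, not with the publication rate alone.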
5.4.2 Service duration characteristics
Service duration characteristics describe the combination of stateless or stateful services and the service duration:
• services with short duration;
• services with variable duration; and
• services with long duration.
System services with short duration
For system services with short duration, a short response time is essential. Such services are usually stateless. A service request of this kind usually has a timer at application protocol level that terminates the request if no response message can be returned within a standard response time limit. Examples of services with short duration are any type of simple request-response service, such as web browsing or a Google search.
System services with variable duration
For system services with variable duration, the requested service has no time constraints, and the duration consequently changes from case to case. Services with variable duration are stateful. An example of a service with variable duration is a call, where the ring time and/or hold time (the actual duration of the conversation) varies from call to call.
System services with long duration
For system services with long duration, the requested service is usually a prerequisite for other subsequent services during a user session and consequently lasts until the list of subsequent services is finished. Such services are stateful. A service with long duration usually has a timer, for security reasons, that expires when no activities are registered during a specified time. An example of a service with long duration is a user session or a subscription to presence status.
Figure 7: Examples of different service durations
5.4.3 Service resource and load characteristics
Service resource characteristics describe the mix of requirements on the following hardware resources:
• processing (CPU) requirements;
• storage (memory) requirements; and
• transmission (bandwidth) requirements.
Service load characteristics describe the type of system load caused by the resource profile and the duration of a service. The load profile has two values: services causing active load and services causing passive load.
Services causing active load
A service with short or variable duration, such as a web transaction or streaming multimedia in a call, typically causes an active load on system resources. A service causing active load on a system is characterized by:
• high requirements on processing (CPU) or transmission resources; and
• variable requirements on memory space.
The processing capacity for services causing active load is limited by processing and transmission resources.
Services causing passive load
A service with long duration, such as a user session or a user subscription, typically causes a passive load on system resources. A service causing passive load on a system is characterized by:
• low requirements on processing (CPU) or transmission resources; and
• memory space, usually related to the context of the service, that is occupied throughout the duration of the service, which can be long.
The processing capacity for services causing passive load is limited by the memory available for services. Even small amounts of memory per service request can add up to large demands on memory: a registration service in an IMS system where each pending user registration occupies 25 kilobytes will need 25 gigabytes of memory for one million concurrently registered users.
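The passive-load memory demand is a simple product, which makes it easy to budget. A minimal sketch (the function name is hypothetical), reproducing the IMS registration arithmetic from the text:

```python
def passive_load_memory_bytes(concurrent_requests, bytes_per_request):
    # Total memory occupied by service contexts pending concurrently
    # (passive load): contexts in progress times context size.
    return concurrent_requests * bytes_per_request

# The IMS registration example from the text: 25 kB of context per
# pending registration, one million concurrently registered users.
demand = passive_load_memory_bytes(1_000_000, 25_000)
print(demand)  # 25000000000 bytes, i.e. 25 gigabytes
```

Because the demand grows linearly with the number of pending contexts, doubling the registered-user population doubles the memory requirement regardless of CPU headroom.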
5.4.4 Service design characteristics
Service design characteristics describe the complexity of a service. There are many types of service constructions; in this context we will look at the following types:
• single step services;
• multi step services; and
• composite services.
Single step services
A single step service contains a single request with related responses on a specified interface.
Multi step services
A multi step service contains several requests with related responses on a specified interface.
Composite services
A composite service contains several logically separate subservices, where each subservice has a defined interface with a set of one or more requests and their related responses, i.e. a service design containing several logically separate subservices using separate interfaces.
Figure 8: Examples of different service designs
5.4.5 Service flow characteristics
Service flow characteristics describe how a service is communicated. Two types of service flows are discussed in the present document:
• transaction services; and
• streamed services.
Figure 9: Examples of a transaction service and a streamed service
Transaction services
A transaction service is communicated in a relatively limited number of interactions between client and server. Transaction services are often tied to some kind of processing of centralized services or databases.
Streamed services
A streamed service is communicated in a continuous flow of interactions between client and server that can last from a few seconds up to several hours or more. A streamed service can be bidirectional, such as a multimedia call between two persons, or unidirectional, such as an IPTV media transfer. The performance requirements and performance attributes of a streamed service are quite different from those of a transaction service.
5.5 Service Interfaces
The services of a system are accessible on one or more system interfaces, where different services might use different interfaces. An interface between the system under test and the performance test tools can be an Application Programming Interface (an API) or a Communications Protocol Interface.
5.5.1 Application Programming Interfaces (API)
An Application Programming Interface provides a library of functions for a call based dialogue between the tested system and the test tools. An API hides the actual network between the client and the server, which is in most cases a Communications Protocol Interface.
5.5.2 Communication Protocol Interfaces
A Communications Protocol Interface is a protocol stack with protocols from the following three OSI layers:
1) application layer protocols (OSI layer 7);
2) transport layer protocols (OSI layer 4); and
3) network layer protocols (OSI layer 3).
Examples of application layer protocols are HTTP, SOAP, SIP, Radius, Diameter, DHCP, etc., or subsets thereof. Examples of transport layer protocols are TCP, UDP, SCTP, etc. Examples of network layer protocols are IP (IPv4 and/or IPv6), IPsec, etc. The client requests a service from the server by sending a message formatted according to the application layer protocol used by the Communication Protocol Interface. A Communications Protocol Interface usually defines a subset of the used application layer protocol.
Figure 10: Test Tools where the SUT interface is an API (left), or a Communication Protocol Interface (right)
6 Performance measurement data objectives and attributes
6.1 Performance metric objectives
The following set of characteristics applies to good and useable performance metrics:
• understandable;
• reliable;
• accurate;
• repeatable;
• linear;
• consistent; and
• computable.
Understandability characteristics
Measured values for a good performance metric are easy to interpret and understand. Understandability is an important characteristic of performance metrics used in presentations and reports.
Reliability characteristics
A performance metric is reliable if the values measured in a performance test are in accordance with the values measured in real production. Such performance metrics can be trusted. Reliability is an important characteristic for performance metrics used as input to other performance metrics.
Accuracy characteristics
A performance metric is accurate, or precise, if the measured values are very close to the real values. The actual deviations from the real values express the precision of a performance metric. Accuracy and precision are important characteristics for performance metrics used as input to other performance metrics. Computed performance metrics based on input from performance metrics with varying accuracy and precision are of questionable value.
Repeatability characteristics
A good performance metric delivers the same value every time a performance test is repeated identically. Repeatability is an important characteristic for performance metrics used in regression testing.
Linearity characteristics
A performance metric is linear if its values change proportionally to changes in the measured object. This makes the performance metric easier to understand. Another aspect of linearity is that mixing linear and non-linear performance metrics in computations of other performance metrics is of questionable value. Linearity is therefore an important characteristic for performance metrics used as input to other performance metrics.
Consistency characteristics
A performance metric is consistent if the metric units and the definitions of those units are the same on different systems. When the units of a metric are not consistent across platforms, performance metric values cannot be compared. Consistency is therefore an important characteristic for performance metrics used in benchmarks.
Computability characteristics
A good performance metric has precise definitions of measurement units, measurement accuracy, and measurement methods. This is a prerequisite for using the measurement variable in the various computations of performance. Computability is therefore an important characteristic for performance metrics used as input to other performance metrics.
6.2 Measurement data attribute sets
To support and verify the performance metric objectives above, performance measurement data have several sets of attributes describing different aspects of what they represent and how they can be used:
• processing attributes or Metric types;
• identification attributes or Metric identifiers;
• unit attributes or Metric units; and
• conditional attributes or measurement conditions (making measurements reproducible).
Figure 11: Measurement data attribute sets
6.3 Processing attributes or Metric types
Processing attributes describe how a metric variable value is derived, making it understandable and reproducible. Metrics can be based on:
• raw performance data;
• normalized performance data;
• transformed performance data; and
• composite performance data.
6.3.1 Metrics based on raw performance data
Raw performance metrics are performance data collected during a performance test and recorded in native form, i.e. data are not yet processed or formatted in any way, such as collected response time values for a specific type of transaction.
6.3.2 Metrics based on normalized performance data
Normalized performance metrics are performance data transformed to a common norm, for example transactions per second or rejected requests per million service requests.
Time based metrics
In graphs, performance metrics are usually displayed with metric values on the Y-axis and recording time on the X-axis. We call this type of presentation time based metrics, for example variations in response time during a test.
Value based metrics
Presentation of metrics can also be based on figures other than time, i.e. show other variables on the X-axis. Presentation of metrics is value based when the metric values are displayed on the X-axis and the frequencies of those values are displayed on the Y-axis. An example of a value based metric is a response time distribution, usually described in a histogram with response time intervals on the X-axis and the frequency or percentage of each response time interval on the Y-axis.
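Both ideas can be sketched in a few lines: normalizing a raw counter to a per-second rate, and bucketing raw samples into the value based histogram described above. Function names and the bucket convention (fixed-width, lower-bound keyed) are illustrative assumptions:

```python
def normalize_rate(event_count, period_seconds):
    # Normalize a raw event counter to a common norm: events per second.
    return event_count / period_seconds

def histogram(samples, bucket_width):
    # Value based metric: frequency of samples per value interval,
    # keyed by the lower bound of each fixed-width interval.
    buckets = {}
    for s in samples:
        lo = int(s // bucket_width) * bucket_width
        buckets[lo] = buckets.get(lo, 0) + 1
    return dict(sorted(buckets.items()))

print(normalize_rate(129_600, 60))          # 2160.0 requests/second
print(histogram([12, 14, 27, 31, 33], 10))  # {10: 2, 20: 1, 30: 2}
```

Plotting the histogram keys on the X-axis and the counts on the Y-axis gives exactly the value based presentation the text describes.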
6.3.3 Metrics based on transformed performance data
Transformed performance metrics are performance metrics processed into other logically related performance metrics, such as response time data transformed into average response time metrics or standard deviation of response time values.
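The transformation named in the text (raw response times into an average and a standard deviation) is a one-step computation with the standard library; the function name and dictionary keys are illustrative:

```python
import statistics

def transform(response_times_ms):
    # Transform raw response time samples into related derived metrics.
    return {
        "average_ms": statistics.mean(response_times_ms),
        "stdev_ms": statistics.stdev(response_times_ms),
    }

print(transform([20, 30, 40, 30, 30]))  # average 30, sample stdev ~7.07
```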
6.3.4 Metrics based on composite performance data
Composite performance metrics are based on the processing of multiple performance data sources. Input to composite performance metrics can be any kind of computable performance data. An example of a composite performance metric is resource usage per processed request of a service.
6.4 Identification attributes or Metric identifiers
Identification attributes contain reference values that together make collected performance measurement data unique. There are three types of identification attributes:
• measurement type;
• measurement point; and
• measurement recording time.
Performance data without identification attribute values are of questionable value, with no references to what they represent or why.
6.4.1 Measurement type
Measurement type identifies the type or name of collected measurement data.
6.4.2 Measurement points
Measurement points identify where performance data are captured. There are two types of measurement points:
1) external; and
2) internal.
Figure 12: External and Internal Measurement Points
External measurement points
External measurement points are data collection locations outside the SUT, usually at the test tools. At an external measurement point, performance data are collected about the flow of requested services of different types and the related service responses. The data can cover requested services and all types of responses to requested services, including failures such as timeouts or closed connections. In the context of identifying performance measurement data, external measurement points also include processes producing composite performance metrics based on multiple sources of recorded performance data.
Internal measurement points
Internal measurement points are data collection locations inside the SUT. Data collection at internal measurement points is usually done by probes managed by the test tools. At an internal measurement point, performance data are collected about how resources are managed under different load conditions inside the SUT. Internal measurement points can be global for a server, or local for a process group. They can also be located inside a process group, capturing data about resource management in application code, such as queues, object instances, etc.
6.4.3 Measurement recording time
Measurement recording time is usually a high resolution timestamp identifying at what point in time a performance measurement value was recorded. The measurement recording time can be relative or absolute.
Relative time
Relative time shows elapsed time since the start of a performance test (time zero). There are several reasons for applying relative time in performance tests:
• Relative time enables a simple way of comparing different runs of the same performance test. For instance, it is easy to see whether certain behaviour appears after a certain period of time in all performance tests.
• Relative time makes it easy to calculate the elapsed time between different events in a performance test.
Absolute time
Absolute time is the calendar time of a measurement recording. Absolute time is preferred when there is no need to compare different test runs or perform other kinds of analysis of product behaviour, such as when monitoring service production. In monitoring service production it is important to see at what time different situations happen and when they reappear, such as a situation repeated at 2:30 every working day. In some situations, such as when a test project is geographically distributed across locations in different time zones, it is convenient to use absolute time from a single time zone, such as GMT, for all test sites.
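The relative/absolute distinction maps directly onto a monotonic clock anchored at test start versus calendar time. A minimal sketch (the class name is hypothetical):

```python
import time

class RelativeClock:
    # Timestamps relative to the start of the performance test (time zero).
    # Built on a monotonic clock so recordings are unaffected by calendar
    # adjustments during the test.
    def __init__(self):
        self._t0 = time.monotonic()

    def now(self):
        return time.monotonic() - self._t0

clock = RelativeClock()     # created at time zero of the test
relative_ts = clock.now()   # seconds elapsed since test start
absolute_ts = time.time()   # calendar time; use one zone (e.g. GMT) across sites
```

Recording both forms per sample, or the absolute start time plus relative offsets, lets one presentation be derived from the other after the test.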
6.5 Unit attributes or Metric formats
Unit attributes make performance data comparable, accurate, and computable. Performance data have several attributes describing different aspects of what a measurement figure represents:
• unit type attributes;
• unit resolution attributes;
• unit accuracy attributes; and
• unit recording attributes.
Unit type attributes
Examples of unit type attributes are elapsed time, timestamps, counters, percentages, or other units.
Unit resolution attributes
Examples of unit resolution attributes are time in days, hours, seconds, or milliseconds.
Unit accuracy attributes
Examples of unit accuracy attributes are deviations from correct values (+/- %).
Unit recording attributes
Examples of unit recording attributes are accumulated or instantaneous values.
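The recording attribute matters when processing the data: an accumulated counter series must be differenced to recover per-interval values, whereas instantaneous samples can be used directly. A sketch of that conversion (the function name is illustrative):

```python
def per_interval(accumulated):
    # Turn an accumulated (cumulative) counter series into the
    # per-recording-interval increments it represents.
    return [b - a for a, b in zip(accumulated, accumulated[1:])]

# Accumulated request counter sampled at the end of four recording periods:
print(per_interval([0, 2160, 4310, 6470]))  # [2160, 2150, 2160]
```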
6.6 Conditional attributes
Conditional attributes are references to the stated and actual conditions that apply to collected performance measurement data and are consequently important for making performance data reproducible. There are two types of conditional attributes:
• requested conditions; and
• actual conditions.
6.6.1 Requested conditions
Requested condition attributes are links to requested measurement conditions, i.e. stated requirements on SUT and Test System conditions for capturing performance data.
External conditions
External conditions describe what the SUT should be exposed to during a performance test, i.e. the workload specifications.
Internal conditions
Internal conditions describe expected situations inside the SUT during a performance test, such as maximum CPU load, memory usage, etc.
6.6.2 Actual conditions
Actual conditions are links to collected measurement data about actual measurement conditions during a performance test. Actual measurement conditions include metrics such as load deviations - the differences between intended load and actual load during a performance test.
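The load deviation mentioned above is a straightforward comparison of intended against achieved load; a sketch, with a hypothetical function name and percentage convention:

```python
def load_deviation_percent(intended_rate, actual_rate):
    # Deviation of actual load from intended load during a test,
    # expressed as a percentage of the intended load.
    return 100.0 * (actual_rate - intended_rate) / intended_rate

# Test tool aimed for 2000 requests/second but achieved 1950:
print(load_deviation_percent(2000, 1950))  # -2.5 (2.5 % below intended)
```

Storing this alongside the measured metrics documents whether the requested conditions were actually met, which is what makes the measurement reproducible.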
7 Abstract performance metrics
7.1 Abstract performance metrics and performance categories
Abstract performance metrics are formal representations of performance characteristics and, in this context, are classified into three categories: Powerfulness, Reliability, and Efficiency.
7.2 Abstract powerfulness metrics
Abstract powerfulness metrics express measurements of volume and speed of service production. Abstract powerfulness metrics have subcategories for Capacity, Responsiveness, and Scalability.
7.2.1 Capacity metrics and related attributes
Capacity metrics express the maximum number of service requests handled by a SUT per time unit, where the time unit is usually one second.
Sustained arrival capacity
Definition: A performance metric for the maximum number of service requests of some kind that can be accepted by the SUT per time unit continuously.
Measurement unit: Counter value as frequency per time unit: Number of requests/second.
Resolution: 1 request/second.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Recorded value normalized to number of requests per second.
Example: Sustained arrival capacity of service "xx" is 2 160 requests/second.
Peak arrival capacity
Definition: A performance metric for the maximum arrival rate of service requests during a specified period of time.
Measurement unit: Counter value as frequency per time unit: Number of requests/second.
Resolution: 1 request/second.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Recorded value normalized to number of requests per second.
Example: Peak arrival capacity of service "xx" is 3 160 requests/second for up to 15 seconds.
Sustained throughput capacity
Definition: A performance metric for the maximum number of service requests of some kind that can be completed by the SUT per time unit continuously.
Measurement unit: Counter value as frequency per time unit: Number of requests/second.
Resolution: 1 request/second.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Recorded value normalized to number of requests per second.
Example: Sustained throughput capacity of service "xx" is 2 160 requests/second.
In progress active load capacity
Definition: A performance metric for the maximum number of active load service requests of some kind concurrently in progress.
Measurement unit: Counter: Number of concurrent requests.
Resolution: 1 request.
Accuracy:
Measurement recording: Number of requests per second and average response time.
Processing: Calculated as number of requests per second multiplied by average response time.
Example: In progress active load capacity of service "xx" is 300 requests.
In progress passive load capacity
Definition: A performance metric for the maximum number of passive load service requests of some kind concurrently in progress.
Measurement unit: Counter: Number of concurrent requests.
Resolution: 1 request.
Accuracy:
Measurement recording: Accumulated value of service requests in progress per recording period.
Processing:
Example: In progress passive load capacity of service "xx" is 30 000 requests.
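The in-progress active load figure follows Little's law: the number of requests concurrently in the system equals the arrival rate multiplied by the average time each request stays there (the average response time). A sketch with illustrative names and numbers:

```python
def in_progress_active_load(arrival_rate_per_s, avg_response_time_s):
    # Little's law: concurrent requests in progress =
    # arrival rate x average time each request spends in the system.
    return arrival_rate_per_s * avg_response_time_s

# Hypothetical figures: 3000 requests/second at 0.1 s average response time.
print(in_progress_active_load(3000, 0.1))  # ~300 concurrent requests
```

Note the units check out: (requests/second) x seconds = requests, a dimensionless concurrency count.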
7.2.2 Responsiveness metrics and related attributes
Responsiveness metrics express some kind of time delay of a measured service. The metric units for responsiveness cover response time, response time percentiles, different kinds of latency, etc.
Average response time
Definition: A performance metric for the average elapsed time from sending a request until receiving a response.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Accumulated response time value per recording period divided by number of completed requests.
Example: Average response time for service "xx" is 30 milliseconds.
Response time percentiles
Definition: A performance metric for the maximum response time recorded for a specified percentile of all service requests. A response time percentile of 90 % shows the maximum response time for 90 % of all service requests. A response time percentile can be calculated for the entire performance test period or a part thereof.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: The highest recorded response time value for the specified percentage of processed service requests.
Example: The maximum response time for the 90th percentile of service "xx" is 40 milliseconds. The maximum response time for the 95th percentile of service "xx" is 120 milliseconds.
Distribution latency
Definition: A performance metric for the delay from receiving a request until passing the request on to the next processing instance.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Accumulated latency time value per recording period divided by number of completed requests.
Example: Average distribution latency time for service "xx" is 30 milliseconds.
Notification latency
Definition: A performance metric for the delay in notifying a subscriber of a change in a subscribed object.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Accumulated latency time value per recording period divided by number of completed requests.
Example: Average notification latency time for service "xx" is 12 milliseconds.
Disk access latency
Definition: A performance metric for the average time to position the head on the correct cylinder, track, and sector of a disk, i.e. the delay in positioning a disk for a read or write operation.
Measurement unit: Elapsed time in milliseconds.
Resolution: 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Calculated value from the time to position the disk head on the desired cylinder plus the time to rotate the disk half a turn.
Example: Average disk access latency time is 4 milliseconds.
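The percentile processing rule ("the highest recorded response time value for the specified percentage of requests") corresponds to the nearest-rank method; the text does not mandate a particular interpolation convention, so this sketch picks that one:

```python
import math

def percentile(samples, pct):
    # Nearest-rank percentile: the highest response time within the
    # given percentage of all samples, sorted ascending.
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical recorded response times in milliseconds:
times_ms = [10, 12, 15, 18, 20, 25, 30, 35, 40, 120]
print(percentile(times_ms, 90), percentile(times_ms, 95))  # 40 120
```

The sample data illustrates why percentiles complement the average: a single 120 ms outlier barely moves the mean but dominates the 95th percentile.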
7.2.3 Scalability metrics and related attributes
Scalability metrics express the relation between hardware resource increases and related service capacity increases. Scalability metrics can be measured for a single type of hardware resource or a balanced mix of hardware resources. The measurement units for scalability metrics are service capacity increases in absolute numbers. The measurement units for capacity increases can also be percentage values; however, any percentage value depends on the current service capacity level if a fixed quantity of resources is added.

Service capacity per additional Processing Unit
Definition: A performance metric for the increase of service capacity of a kind by adding a Processing Unit, such as a server, a CPU, or more cores per CPU. The metric value is expressed as additional service requests processed per time unit. This scalability metric applies to services with active load.
Measurement unit: Counter value as frequency per time unit: Number of requests/second.
Resolution: 1 request/second.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Processed as Sustained throughput capacity for the SUT including the additional processing unit reduced by Sustained throughput capacity for the SUT without the additional processing unit.
Example: Service capacity per additional Processing Unit for service "xx" is 850 requests/second.

Service capacity per additional Memory Unit
Definition: A performance metric for the increase of service capacity of a kind by adding a Memory Unit, such as a DIMM. This scalability metric applies to services with passive load.
Measurement unit: Counter: Number of requests.
Resolution: 1 request.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Processed as In progress passive load capacity for the SUT including the additional memory unit reduced by In progress passive load capacity for the SUT without the additional memory unit.
Example: Service capacity per additional Memory Unit for service "xx" is 2 000 requests.
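The processing rule for both scalability metrics — capacity with the additional unit reduced by capacity without it — is a plain difference. A minimal sketch with invented figures (not from the report):

```python
def capacity_per_additional_unit(throughput_with_unit, throughput_without_unit):
    # Sustained throughput (or in-progress passive load) capacity including
    # the additional unit, reduced by the capacity measured without it.
    return throughput_with_unit - throughput_without_unit

capacity_per_additional_unit(4850, 4000)  # 850 requests/second
```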
7.3 Abstract reliability metrics
Abstract reliability metrics express measurements of how predictable a system's service production is. Abstract reliability metrics have subcategories for Quality-of-Service, Stability, Availability, Robustness, Recovery, and Correctness.
7.3.1 Quality-of-Service metrics and related attributes
Quality-of-Service metrics are closely related to stability and availability metrics for service production. The measurement units for Quality-of-Service metrics are events per number of service requests (such as 1 000 000) or events per production time unit (usually one year).

Service fail rate
Definition: A performance metric for frequencies of denied services.
Measurement unit: Counter value as frequency per Mega-Service-Requests (MSR): Decimal value.
Resolution: 0,1 request/MSR.
Accuracy:
Measurement recording: Accumulated value for the entire performance test time.
Processing: A composite metric value based on the number of rejected requests and the total number of requests.
Example: The Service fail rate for service "xx" is 4,2 requests/MSR.
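Normalization per MSR, as used in the example above, scales an event count to the number of events that would occur in one million service requests. A small sketch (function name and figures are illustrative, not from the report):

```python
def rate_per_msr(event_count, total_requests):
    # Normalize an event count to events per Mega-Service-Requests (MSR),
    # i.e. events per 1 000 000 service requests.
    return event_count * 1_000_000 / total_requests

rate_per_msr(21, 5_000_000)  # 4.2 events/MSR
```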
7.3.2 Stability metrics and related attributes
Stability metrics express changes in measured performance figures for powerfulness characteristics and/or efficiency characteristics over long periods of time. Changes in performance figures are measured in two ways:
1) The response time spread, i.e. the difference between the highest and lowest response time values. A high value for response time spread means that actual service response time is unpredictable and therefore unstable.
2) The response time trend, i.e. an indication that response time values are increasing over time. An increasing response time trend is an indicator of increasing resource usage over time.

Service response time spread
Definition: A performance metric for response time variations over a long period of constant load. The service response time spread is a metric for the difference between the shortest and longest average response time values during a test. A small service response time spread value is an indicator of a system with a stable service production. A large service response time spread value is an indicator of a system with unpredictable behaviour.
Measurement unit: Elapsed time in milliseconds: Response time difference.
Resolution: 1 millisecond.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: A calculated metric value based on the highest and lowest average response time values per recording period of time and entire performance test time.
Example: The maximum response time spread for service "xx" is 13 milliseconds.

Service resource usage spread
Definition: A performance metric for resource usage variations over a long period of constant load. The service resource usage spread is a measure of the difference between the highest and lowest average resource usage values during a test. A small service resource usage spread value is an indicator of a system with a stable service production. A large service resource usage spread value is an indicator of a system with unpredictable behaviour.
Measurement unit: Resource usage difference: Depending on type of resource.
Resolution: Depending on type of resource.
Accuracy: Depending on type of resource.
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: A calculated metric value based on the highest and lowest resource usage values per recording period of time and entire performance test time.
Example: The maximum CPU usage spread for service "xx" is 2 milliseconds.

Service response time trends
Definition: A performance metric for response time trends over a long period of constant load. The trend may indicate an increasing or a decreasing service response time. The service response time trend is presented as a probability figure in the range 0,0 to 1,0, where 0,0 indicates no trend and 1,0 a very strong trend.
Measurement unit: Probability value: Decimal value in the range 0,0 to 1,0.
Resolution: Five decimals.
Accuracy:
Measurement recording: Average response time value per recording period of time.
Processing: A metric value based on a recorded sequence of average response time values that covers the entire test period.
Example: The response time trend for service "xx" is 0,02, i.e. no trend identified.

Service resource usage trends
Definition: A performance metric for resource usage trends over a long period of constant load. The trend may indicate an increasing or a decreasing service resource usage. The service resource usage trend is presented as a probability figure in the range 0,0 to 1,0, where 0,0 indicates no trend and 1,0 a very strong trend.
Measurement unit: Probability value: Decimal value in the range 0,0 to 1,0.
Resolution: Five decimals.
Accuracy:
Measurement recording: Average resource usage value per recording period of time.
Processing: A metric value based on a recorded sequence of average resource usage values that covers the entire test period.
Example: The resource usage trend for service "xx" is 0,02, i.e. no trend identified.
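The spread is a plain max-minus-min over the per-period averages. The report does not prescribe a statistical method for the 0,0-to-1,0 trend figure; one simple stand-in, assumed here for illustration, is the absolute Pearson correlation between the recording index and the recorded value.

```python
def response_time_spread(period_averages_ms):
    # Difference between the highest and lowest recorded average values.
    return max(period_averages_ms) - min(period_averages_ms)

def trend_strength(period_averages_ms):
    # One possible trend figure in 0.0..1.0: the absolute Pearson
    # correlation between recording index and recorded value.
    # (An assumed method - the report leaves the statistics open.)
    n = len(period_averages_ms)
    mean_x = (n - 1) / 2
    mean_y = sum(period_averages_ms) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(period_averages_ms))
    var_x = sum((x - mean_x) ** 2 for x in range(n))
    var_y = sum((y - mean_y) ** 2 for y in period_averages_ms)
    if var_x == 0 or var_y == 0:
        return 0.0
    return abs(cov / (var_x * var_y) ** 0.5)

response_time_spread([30, 31, 29, 42, 30])  # 13 milliseconds
```

A steadily climbing sequence yields a figure near 1,0; noise around a flat level yields a figure near 0,0.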
7.3.3 Availability metrics and related attributes
Availability metrics for software services express frequency rates of service request errors, frequency rates of correctly processed service requests, or probabilities of service request errors or correctly processed service requests. The measurement units for frequency rates are number of events per KSR (Kilo Service Requests) or MSR (Mega Service Requests). The measurement units for probabilities are percentage values. Availability metrics for hardware express estimated operational time for a device. The measurement unit for hardware operability is hours.

Service rejection rate
Definition: A performance metric for frequencies of rejected service requests. The Service Rejection Rate is based on the number of rejected requests and the total number of requests. The Service Rejection Rate is normalized per MSR (Mega Service Requests).
Measurement unit: Counter value as frequency per Mega-Service-Requests (MSR): Decimal value.
Resolution: 0,1 request/MSR.
Accuracy:
Measurement recording: Accumulated number of rejected requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of rejected requests and the number of processed requests.
Example: The Service Rejection Rate for service "xx" is 0,4/MSR.

Service rejection probability
Definition: A performance metric for the probability that a service request is rejected. The Service Rejection Probability is based on the number of rejected requests and the total number of processed requests, expressed as a percentage value.
Measurement unit: Percentage value.
Resolution: 0,0001.
Accuracy:
Measurement recording: Accumulated number of rejected requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of rejected requests and the number of processed requests.
Example: The Service Rejection Probability for service "xx" is 0,0002 %.

Service acceptance rate
Definition: A performance metric for frequencies of accepted service requests. The Service Acceptance Rate is based on the number of accepted requests and the total number of processed requests. The Service Acceptance Rate is normalized per MSR (Mega Service Requests).
Measurement unit: Counter value as frequency per Mega-Service-Requests (MSR): Decimal value.
Resolution: 0,1 request/MSR.
Accuracy:
Measurement recording: Accumulated number of accepted requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of accepted requests and the number of processed requests.
Example: The Service Acceptance Rate for service "xx" is 999 993,2 per MSR.

Service acceptance probability
Definition: A performance metric for the probability that a service request is accepted. The Service Acceptance Probability is based on the number of accepted requests and the total number of processed requests, expressed as a percentage value.
Measurement unit: Percentage value.
Resolution: 0,0001.
Accuracy:
Measurement recording: Accumulated number of accepted requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of accepted requests and the total number of processed requests.
Example: The Service Acceptance Probability for service "xx" is 99,9999 %.

Mean Time Between Failures (MTBF)
Definition: A performance metric for the mean time between failures of a hardware component or a hardware system. Mean time between failures is a statistical metric value expressed in number of hours.
Measurement unit: Elapsed time.
Resolution: Hours.
Accuracy: Depending on measured component.
Measurement recording: Depending on measured component.
Processing: A statistical value based on a number of observations, depending on the measured component.
Example: The MTBF figure for disk "xx" is 220 000 hours.
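The probability metrics above are plain percentages of the processed-request total, and an MTBF estimate is commonly derived as accumulated operating time divided by the number of observed failures. A sketch with invented figures (function names are mine, not from the report):

```python
def service_probability_percent(matching_requests, processed_requests):
    # Probability, as a percentage, that a request falls in the counted
    # category (e.g. accepted or rejected requests).
    return 100.0 * matching_requests / processed_requests

def mtbf_hours(total_operating_hours, observed_failures):
    # Common MTBF estimate: accumulated operating time across the observed
    # population divided by the number of failures (an assumed method -
    # the report only says the value is statistical).
    return total_operating_hours / observed_failures

service_probability_percent(999_999, 1_000_000)  # 99.9999 %
mtbf_hours(660_000, 3)                           # 220000.0 hours
```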
7.3.4 Robustness metrics and related attributes
Robustness metrics express consequences in service production due to a specified condition or set of conditions. Robustness metrics can be calculated for service capacity, service responsiveness, or service availability.

Service capacity impact
Definition: A performance metric for the impact on service capacity due to a specified condition or set of conditions, expressed as a percentage value of total capacity decrease.
Measurement unit: Percentage value.
Resolution: 0,1 %.
Accuracy:
Measurement recording: Accumulated per recording period of time and entire performance test time.
Processing: The capacity reduction is processed as Sustained throughput capacity for the SUT under normal conditions reduced by Sustained throughput capacity for the SUT when the tested conditions apply. The Service capacity impact is then calculated as the capacity reduction as a percentage of Sustained throughput capacity for the SUT under normal conditions.
Example: The Service capacity impact on service "xx" is a reduction of 47,2 %.

Service responsiveness impact
Definition: A performance metric for the impact on service responsiveness due to a specified condition or set of conditions, expressed as a percentage value of average response time increase.
Measurement unit: Percentage value.
Resolution: 0,1 %.
Accuracy:
Measurement recording: Accumulated per recording period of time and entire performance test time.
Processing: The response time increase is processed as Average response time for the SUT when the tested conditions apply reduced by Average response time for the SUT under normal conditions. The Service responsiveness impact is then calculated as the response time increase as a percentage value of Average response time for the SUT under normal conditions.
Example: The Service responsiveness impact on service "xx" is a response time increase of 215,0 %.

Service availability impact
Definition: A performance metric for the impact on service availability due to a specified condition or set of conditions, expressed as a percentage value of service rejection increase.
Measurement unit: Percentage value.
Resolution: 0,1 %.
Accuracy:
Measurement recording: Accumulated per recording period of time and entire performance test time.
Processing: The service rejection increase is processed as the Service Rejection Rate when the tested conditions apply reduced by the Service Rejection Rate under normal conditions. The Service availability impact is then derived from the service rejection increase as a percentage value of the Service Rejection Rate under normal conditions.
Example: The Service availability impact on service "xx" is a service rejection increase of 32,4 %.
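All three impact metrics follow the same pattern: the difference between the degraded and normal value, expressed as a percentage of the normal value. A sketch with invented figures (names and numbers are illustrative, not from the report):

```python
def capacity_impact_percent(normal_throughput, degraded_throughput):
    # Capacity reduction as a percentage of throughput under normal conditions.
    return 100.0 * (normal_throughput - degraded_throughput) / normal_throughput

def responsiveness_impact_percent(normal_response_ms, degraded_response_ms):
    # Response time increase as a percentage of the normal response time.
    return 100.0 * (degraded_response_ms - normal_response_ms) / normal_response_ms

capacity_impact_percent(1000, 528)     # 47.2 % reduction
responsiveness_impact_percent(20, 63)  # 215.0 % increase
```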
7.3.5 Recovery metrics and related attributes
Recovery metrics express different aspects of recovery from a specified situation or set of situations. Recovery metrics can be calculated for detection of a situation, correction of the consequences of a situation, and restart after completed correction. Correction could be anything from hardware repair or replacement to software or data recovery.

Detection time
Definition: A performance metric for the time to identify a specified condition or set of conditions.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy: Depending on situation.
Measurement recording: Measured time from when the SUT is set to the stated conditions until log messages are recorded.
Processing: No processing requirements.
Example: Average time to detect a situation of type "xx" is 2 seconds.

Partial system restart time
Definition: A performance metric for partial restart of system services after an outage.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy: Depending on situation.
Measurement recording: Measured time from when the service becomes unavailable until service request traffic can be resumed.
Processing: No processing requirements.
Example: Average time to restart after a situation of type "xx" is 25 seconds.

Total system restart time
Definition: A performance metric for restart of a system after an outage that requires a total restart.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy: Depending on situation.
Measurement recording: Measured time from when the SUT becomes unavailable until service request traffic can be resumed.
Processing: No processing requirements.
Example: Average time to do a total restart of the system is 325 seconds.

Application restart time
Definition: A performance metric for the time to restart a system after system software updates.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy: Depending on situation.
Measurement recording: Measured time from when the SUT update is ready until service request traffic can be resumed.
Processing: No processing requirements.
Example: Average time to do a total restart of the system after a software update is 75 seconds.

Service take-over time
Definition: A performance metric for restart of system services on a backup system.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy: Depending on situation.
Measurement recording: Measured time from when the SUT becomes unavailable until service request traffic can be resumed.
Processing: No processing requirements.
Example: Average time to resume operations on a backup system is 175 seconds.
7.3.6 Correctness metrics and related attributes
Correctness metrics express frequency rates of service request errors, frequency rates of correctly processed service requests, or probabilities of service request errors or correctly processed service requests. The measurement units for frequency rates are number of events per KSR (Kilo Service Requests) or MSR (Mega Service Requests). The measurement units for probabilities are percentage values.

Service error rate
Definition: A performance metric for frequencies of incorrectly processed service requests. The Service Error Rate is based on the number of incorrectly processed requests and the total number of processed requests. The Service Error Rate is normalized per MSR (Mega Service Requests).
Measurement unit: Counter value as frequency per Mega-Service-Requests (MSR): Decimal value.
Resolution: 0,1 request/MSR.
Accuracy:
Measurement recording: Accumulated number of incorrectly processed requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of incorrectly processed requests and the number of processed requests.
Example: The Service Error Rate for service "xx" is 0,4/MSR.

Service correctness rate
Definition: A performance metric for frequencies of correctly processed service requests. The Service Correctness Rate is based on the number of correctly processed requests and the total number of processed requests. The Service Correctness Rate is normalized per MSR (Mega Service Requests).
Measurement unit: Counter value as frequency per Mega-Service-Requests (MSR): Decimal value.
Resolution: 0,1 request/MSR.
Accuracy:
Measurement recording: Accumulated number of correctly processed requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of correctly processed requests and the number of processed requests.
Example: The Service Correctness Rate for service "xx" is 999 998,5/MSR.

Service error probability
Definition: A performance metric for the probability that a service request is incorrectly processed. The Service Error Probability is based on the number of incorrectly processed requests and the total number of processed requests, expressed as a percentage value.
Measurement unit: Percentage value.
Resolution: 0,0001.
Accuracy:
Measurement recording: Accumulated number of incorrectly processed requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of incorrectly processed requests and the number of processed requests.
Example: The Service Error Probability for service "xx" is 0,0001 %.

Service correctness probability
Definition: A performance metric for the probability that a service request is correctly processed. The Service Correctness Probability is based on the number of correctly processed requests and the total number of processed requests, expressed as a percentage value.
Measurement unit: Percentage value.
Resolution: 0,0001.
Accuracy:
Measurement recording: Accumulated number of correctly processed requests and accumulated number of processed requests for the total test execution time.
Processing: Composite metric value based on the number of correctly processed requests and the number of processed requests.
Example: The Service Correctness Probability for service "xx" is 99,9999 %.
7.4 Abstract efficiency metrics
Abstract efficiency metrics express measurements of service production dependencies on resources and service production utilization of resources. Efficiency is measured on:
1) Service level, i.e. the tested application; and
2) Platform level, i.e. the hardware and software supporting the tested application.
Abstract efficiency metrics have subcategories for Service resource usage, Service resource linearity, Service resource scalability, Platform resource utilization, Platform resource distribution, and Platform resource scalability.
7.4.1 Service resource usage metrics and related attributes
Service resource usage metrics express resource usage per processed service request or per fixed amount of service requests of a kind. The measurement unit is in absolute figures or as a percentage value of available resources. Service resource usage is usually calculated per processed service request or per batch thereof, such as 1 000 service requests. Measured resources can be physical (hardware) or logical (software).

CPU usage per service request
Definition: A performance metric for used CPU resources per processed service request of some kind.
Measurement unit: CPU usage in time.
Resolution: Milliseconds or microseconds, depending on service type.
Accuracy: Depending on service type.
Measurement recording: Accumulated value for CPU usage and number of service requests.
Processing: Composite metric value based on the accumulated CPU time and the total number of processed requests.
Example: CPU usage per service request for service "xx" is 1,23 milliseconds.

Memory usage per service request
Definition: A performance metric for used memory resources per processed service request of some kind.
Measurement unit: Memory used.
Resolution: Number of KB (kilobytes).
Accuracy: Depending on service type.
Measurement recording: Instantaneous value for allocated memory and number of concurrent service requests.
Processing: Instantaneous value for allocated memory resources related to the concurrent number of processed requests.
Example: Memory usage per service request for service "xx" is 270 KB.
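The CPU usage metric is the accumulated CPU time spread over the processed requests. A minimal sketch with invented figures (not from the report):

```python
def cpu_usage_per_request(accumulated_cpu_time_ms, processed_requests):
    # Accumulated CPU time divided by the total number of processed requests.
    return accumulated_cpu_time_ms / processed_requests

cpu_usage_per_request(12_300, 10_000)  # 1.23 milliseconds per request
```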
7.4.2 Service resource linearity metrics and related attributes
The Service resource linearity characteristics are indicators of a system's ability to use a constant amount of resources for the production of a service regardless of the actual load level on the system. Service resource linearity metrics express the probability of an identified trend in resource usage correlated with an increase in service request rate. The measurement unit for a trend is a probability value for the correctness of the trend, where 0 % means no identifiable trend and 100 % means a guaranteed or reliable trend.

CPU usage trend
Definition: A performance metric for CPU usage trends when the number of service requests per time unit increases. The trend may indicate an increasing or a decreasing CPU usage. The CPU usage trend is presented as a probability figure in the range 0,0 to 1,0, where 0,0 indicates no trend and 1,0 a very strong trend.
Measurement unit: Probability of trend: Percentage value.
Resolution: 0,00001.
Accuracy:
Measurement recording: Average CPU usage value per recording period of time.
Processing: A metric value based on a recorded sequence of average CPU usage values that covers the entire test period.
Example: The probability of an identified CPU usage trend for service "xx" is 2,0 %.

Memory usage trend
Definition: A performance metric for memory usage trends when the number of service requests per time unit increases. The trend may indicate an increasing or a decreasing memory usage. The memory usage trend is presented as a probability figure in the range 0,0 to 1,0, where 0,0 indicates no trend and 1,0 a very strong trend.
Measurement unit: Probability of trend: Percentage value.
Resolution: 0,00001.
Accuracy:
Measurement recording: Average memory usage value per recording period of time.
Processing: A metric value based on a recorded sequence of average memory usage values that covers the entire test period.
Example: The probability of an identified memory usage trend for service "xx" is 2,0 %.
7.4.3 Service resource scalability characteristics
The Service resource scalability characteristics are indicators of a system's ability to utilize additional platform resources for increased production of a service. Service resource scalability metrics express the relation between hardware resource increases and related service capacity increases. Service resource scalability metrics can be measured for a single type of hardware resource or a balanced mix of hardware resources. The measurement units for scalability metrics are service capacity increases in absolute numbers. The measurement units for capacity increases can also be percentage values; however, any percentage value depends on the current service capacity level if a fixed quantity of resources is added.

Service capacity per additional Processing Unit
Definition: A performance metric for the increase of service capacity of a kind by adding a Processing Unit, such as a server, a CPU, or more cores per CPU. The metric value is expressed as additional service requests processed per time unit. This scalability metric applies to services with active load.
Measurement unit: Counter value as frequency per time unit: Number of requests/second.
Resolution: 1 request/second.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Processed as Sustained throughput capacity for the SUT including the additional processing unit reduced by Sustained throughput capacity for the SUT without the additional processing unit.
Example: Service capacity per additional Processing Unit for service "xx" is 850 requests/second.

Service capacity per additional Memory Unit
Definition: A performance metric for the increase of service capacity of a kind by adding a Memory Unit, such as a DIMM. This scalability metric applies to services with passive load.
Measurement unit: Counter: Number of requests.
Resolution: 1 request.
Accuracy:
Measurement recording: Accumulated value per recording period of time and entire performance test time.
Processing: Processed as In progress passive load capacity for the SUT including the additional memory unit reduced by In progress passive load capacity for the SUT without the additional memory unit.
Example: Service capacity per additional Memory Unit for service "xx" is 1 500 requests.
7.4.4 Platform resource utilization metrics and related attributes
Platform resource utilization metrics express the utilization level as a percentage value of the resource total or as a quotient between used resources, such as CPU and memory.

Platform resource utilization profile
Definition: A performance metric for the utilization of various resources at Sustained throughput capacity of a service "xx" or a mix of services. To be comparable, the utilization of each resource is expressed as a percentage value of the total resource.
Measurement unit: Resource usage level: Percentage value.
Resolution: 0,1.
Accuracy:
Measurement recording: Resource utilization percentage value at Sustained throughput capacity for service "xx". The resource utilization percentage value is recorded for a set of resources.
Processing: Convert the resource usage level or resource usage quantity to a percentage value of the resource total.
Example: Resource utilization levels of 82 % to 93 % depending on service.

CPU-to-Memory usage ratio
Definition: A performance metric for the resource usage ratio between CPU and memory for a service or a service mix, calculated as the quotient of the CPU usage level divided by the memory usage level.
Measurement unit: Quotient of CPU usage level (%) and memory usage level (%): Decimal value.
Resolution: 0,00001.
Accuracy:
Measurement recording: Average CPU usage values and average memory usage values recorded during the entire test period.
Processing: A metric value based on a recorded sequence of average CPU usage values and average memory usage values that covers the entire test period.
Example: The CPU-to-memory ratio for service "xx" is 0,85.

Memory-to-CPU usage ratio
Definition: A performance metric for the resource usage ratio between memory and CPU for a service or a service mix, calculated as the quotient of the memory usage level divided by the CPU usage level.
Measurement unit: Quotient of memory usage level (%) and CPU usage level (%): Decimal value.
Resolution: 0,00001.
Accuracy:
Measurement recording: Average CPU usage values and average memory usage values recorded during the entire test period.
Processing: A metric value based on a recorded sequence of average memory usage values and average CPU usage values that covers the entire test period.
Example: The Memory-to-CPU ratio for service "xx" is 0,90.
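The usage ratio is a plain quotient of the two average utilization levels. A minimal sketch with invented figures (not from the report); the Memory-to-CPU ratio for the same measurement period is simply the reciprocal.

```python
def cpu_to_memory_ratio(average_cpu_percent, average_memory_percent):
    # Quotient of the average CPU usage level (%) and the average
    # memory usage level (%) over the test period.
    return average_cpu_percent / average_memory_percent

cpu_to_memory_ratio(68.0, 80.0)  # 0.85
```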
7.4.5 Platform resource distribution metrics and related attributes
The platform resource distribution characteristics are indicators of how fast and evenly platform resources are distributed to requesting services. Platform resource distribution can be measured for an individual service or a mix of services. The platform resource distribution characteristics discussed here are of two kinds:
1) Demand driven resource distribution characteristics are indicators of a system's ability to distribute available resources to demanding service requests. A performance metric for demand driven resource distribution is Average queuing time for requested resource; and
2) Outage driven resource distribution characteristics are indicators of a system's ability to redistribute available resources to demanding service requests after various outage situations. The objective of outage driven resource distribution is to minimize the effects of various outage situations. A performance metric for outage driven resource distribution is Average latency time for resource redistribution.
Average queuing time for requested resource
Definition: A performance metric for the average elapsed time spent waiting for requested resources.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Average queue length to service and processing time in service.
Processing: Average queuing time is calculated as (average queue length - 1) x processing time.
Example: Average queuing time for resource "aa" for service "xx" is 3 milliseconds.
Average latency time for resource redistribution
Definition: A performance metric for the average elapsed time spent waiting for resources to be redistributed after an outage.
Measurement unit: Elapsed time in seconds or milliseconds.
Resolution: 1 second or 1 millisecond.
Accuracy:
Measurement recording: Average queue length to service and processing time in service.
Processing: Average latency time is calculated as (average queue length - 1) x processing time.
Example: Average latency time for resource redistribution for resource "aa" for service "xx" is 3 milliseconds.
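The "Processing" rule above can be sketched as a small helper; a minimal illustration only, with a hypothetical function name:

```python
def average_queuing_time(avg_queue_length: float, processing_time: float) -> float:
    """Average queuing time for a requested resource, per the Processing
    rule above: (average queue length - 1) x processing time in service.
    The '- 1' excludes the request currently being served from the wait."""
    if avg_queue_length <= 1:
        return 0.0  # nothing ahead in the queue, so no queuing time
    return (avg_queue_length - 1) * processing_time
```

For example, an average queue length of 4 with a processing time of 1 ms gives an average queuing time of 3 ms, matching the example above.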
7.4.6 Platform resource scalability metrics and related attributes
The Platform resource scalability characteristics are indicators of the level to which additional system resources can be utilized for production of a service or a mix of services, i.e. resource utilization applied to additional resources. Platform resource scalability can be measured for an individual service or a mix of services. The highest possible value for Platform resource scalability is of course 100 % of additional resources. However, there are limitations to what can be reached, set by the Platform resource utilization measured before the addition of resources. Few systems have a perfectly linear Platform resource scalability. The effect of adding more resources of some kind is in reality limited by the capacity of other related resources. For example, the effect of adding more processing power and memory to a system is in reality limited by the transmission capacity between CPU and memory.
ETSI TR 101 577 V1.1.1 (2011-12)
Utilization level per additional Processing Unit
Definition: Performance attribute for the predicted utilization of an added Processing Unit, such as a server, a CPU, or more cores per CPU. This scalability metric applies to services with active load.
Measurement unit: Resource utilization level: percentage value.
Resolution: 0,1.
Accuracy:
Measurement recording: Resource utilization percentage value at maximum throughput for service "xx". The percentage value is recorded for a set of frequently used services.
Processing: Convert resource usage level or resource usage quantity to a percentage value of the resource total.
Example: Utilization level per additional Processing Unit 82,2 % to 93,7 % depending on service.
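The "Processing" conversion above, and one way the utilization of an added unit might be estimated, can be sketched as follows. Both function names are hypothetical, and `scaling_efficiency_percent` is one possible estimator (measured throughput gain relative to the ideal linear gain), not a method prescribed by this report:

```python
def utilization_percent(used: float, total: float) -> float:
    """Processing step above: convert a resource usage quantity
    to a percentage value of the resource total."""
    return 100.0 * used / total

def scaling_efficiency_percent(throughput_before: float, throughput_after: float,
                               units_before: int, units_after: int) -> float:
    """Hypothetical estimate of the utilization level of added Processing
    Units: the measured throughput gain as a percentage of the ideal
    (perfectly linear) gain expected from the added units."""
    ideal_gain = throughput_before * (units_after - units_before) / units_before
    return 100.0 * (throughput_after - throughput_before) / ideal_gain
```

For instance, doubling one server to two on a system that handled 1 000 requests/s, and measuring 1 850 requests/s afterwards, gives 85 % utilization of the added unit.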
8 Performance data processing
8.1 Steps in performance data processing
A major part of performance testing is the processing of all performance data collected during the performance test. Performance data is processed in the following sequence of steps:
1) Collection and storage of raw performance data.
2) Condensation and normalization of raw performance data.
3) Performance data computations.
4) Evaluation of performance data.
5) Presentation of performance data.
This is an abstract flow of performance data processing. The text does not address whether performance data is processed and presented during or after execution of a performance test, since this is regarded as a test tool implementation issue in this context.
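The five steps above can be sketched end to end. These are illustrative, hypothetical stand-ins for each step; real performance test tools are far more elaborate:

```python
raw = [12.1, 11.8, 250.4, 12.3, 11.9, 12.0]  # 1) collected raw response times (ms)

def condense(samples, bucket=3):
    """2) Condensation: keep one average per bucket instead of every sample."""
    return [sum(samples[i:i + bucket]) / bucket
            for i in range(0, len(samples), bucket)]

def compute(condensed):
    """3) Computation: derive requested metrics from the condensed data."""
    return {"mean_ms": sum(condensed) / len(condensed), "max_ms": max(condensed)}

def evaluate(metrics, goal_max_ms=100.0):
    """4) Evaluation: rate the metrics against a stated performance goal."""
    return "passed" if metrics["max_ms"] <= goal_max_ms else "not passed"

metrics = compute(condense(raw))
summary = f"mean={metrics['mean_ms']:.1f} ms, verdict={evaluate(metrics)}"  # 5) presentation
```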
8.2 Time series of performance data
Performance data as discussed in the following text represents sequences of measurement values (usually long sequences) recorded at different times during the measurement period. We call these time series of measurement values.
Figure 13: A time series of measurement values
Management of time series of measurement data is critical to the usability of a performance test tool.
8.3 Collection and storage of raw performance data
Collection and storage of raw performance data is the first processing step. It is performed during execution of the performance test. Raw performance data means the performance data is still in its native form as it was collected and has not yet been processed in any way. In reality this means that there is a response time measurement recorded for every response to a service request. Therefore raw performance data occupies lots of space and needs to be condensed. Additionally, raw performance data observations are hard to visualize in a graph. A plot of millions of response time recordings usually looks like someone has spilled ink on a piece of paper. This is another reason for the processing of performance data in the following steps.
8.4 Condensation and normalization of raw performance data
Condensation and normalization of raw performance data is the second processing step. It can be performed during execution of the performance test or after the performance test has completed. This processing step is mandatory during the performance test execution if a performance test monitoring tool is used. Condensation of performance data usually reduces the amount of data to a small fraction of the raw performance data. Normalization of performance data is transformation to a common norm, for example transactions per second or rejected requests per million service requests, etc.
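The two normalizations named above are simple ratio conversions; a minimal sketch with hypothetical function names:

```python
def transactions_per_second(transaction_count: int, elapsed_seconds: float) -> float:
    """Normalize a raw transaction count to the common norm transactions/second."""
    return transaction_count / elapsed_seconds

def rejected_per_million(rejected: int, total_requests: int) -> float:
    """Normalize rejected requests to rejections per million service requests."""
    return 1_000_000 * rejected / total_requests
```

For example, 1 800 transactions recorded over 60 seconds normalize to 30 transactions per second, and 3 rejections out of 600 000 requests normalize to 5 rejections per million.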
8.5 Performance data computations
Performance data computation is the third processing step. All requested performance metrics requiring some kind of computation are processed in this step. The following are three areas of performance data computations:
• trend analysis;
• comparisons of regression tests; and
• computations of composite performance metrics.
8.5.1 Trend analysis
Trend analysis of performance data is usually done for stability and availability tests. The purpose is to find traces of undesired behaviour that will cause severe disturbances in production if not handled at an early stage.
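One simple form such trend analysis might take is a least-squares slope over equally spaced, condensed measurement values; a sketch only, not a method prescribed by this report:

```python
def trend_slope(values):
    """Least-squares slope over equally spaced measurement values, e.g. one
    condensed response time per interval of a long stability test. A slope
    persistently above zero is a trace of undesired behaviour (for instance
    a slow resource leak) worth handling at an early stage."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    denominator = sum((x - mean_x) ** 2 for x in range(n))
    return numerator / denominator
```

A response time series of 10, 12, 14, 16 ms yields a slope of 2 ms per interval, i.e. a steadily degrading system, while a flat series yields a slope of zero.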
8.5.2 Comparisons of regression tests
Comparisons of regression tests are usually done to verify improvements in performance of a service.
8.5.3 Computations of composite performance metrics
Computation of composite performance metrics is the processing of performance metrics based on multiple sources of recorded performance data. One example of a composite performance metric is resource usage per processed request of a service.
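The example composite metric combines two independently recorded data sources; a minimal sketch with a hypothetical function name:

```python
def cpu_seconds_per_request(cpu_busy_seconds: float, processed_requests: int) -> float:
    """Composite metric: combine two independently recorded sources,
    platform resource usage (CPU-seconds) and throughput (processed
    requests), into resource usage per processed request."""
    return cpu_busy_seconds / processed_requests
```

For example, 12 CPU-seconds consumed while processing 4 000 requests gives 3 ms of CPU time per request.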
8.6 Evaluation of performance data
Evaluation of performance data is the fourth processing step. In this step measured performance metrics are rated according to a set of rules expressing stated or desired performance goals. Evaluation of performance data can also be performed on the output from comparisons of regression tests. An evaluation of performance data results in some kind of verdict on the tested system's performance.
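A rule set of the kind described above can be sketched as a mapping from metric names to limits. The goal values and names here are hypothetical examples, not requirements from this report:

```python
# Hypothetical rule set expressing stated performance goals as upper limits.
GOALS = {"response_time_ms": 200.0, "cpu_utilization_pct": 80.0}

def rate(measured: dict, goals: dict = GOALS) -> dict:
    """Rate measured performance metrics against the goal rules and
    produce a verdict together with the list of failed metrics."""
    failed = [name for name, limit in goals.items() if measured[name] > limit]
    return {"verdict": "passed" if not failed else "not passed", "failed": failed}
```

As the report notes, real verdicts are rarely this binary; a practical evaluation would weigh many such rules together.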
8.7 Presentation of performance data
Presentation of performance data is the fifth processing step. It converts processed and evaluated performance data into easy-to-understand presentations, such as diagrams, tabular formats, or something else.
9 General performance test concepts
9.1 Performance tests
Performance tests collect performance data that indicates the behaviour of a System Under Test under specified conditions in a controlled environment. The goal of a performance test can be to find capacity limits of a system, to test a system's ability to deliver services regardless of time, or to examine many other performance characteristics. Performance measurements may cover an almost infinite number of performance characteristics of a system, but for practical reasons performance tests are in most cases limited to a carefully selected set of performance characteristics. The conditions for captured performance data are created by performance test tools generating artificial load on the tested system in the form of realistic service requests from simulated users.
9.2 Performance tests and system life cycle phases
Performance testing applies to all life cycle phases of a system, from the first design steps throughout real production of services. There are two main groups of performance tests:
1) pre-deployment performance tests; and
2) post-deployment performance tests.
9.2.1 Pre-deployment performance test applications
Performance cannot be added to a system after it is designed and implemented. A system has to be designed and built for good performance to achieve stated performance goals. A general principle is therefore to start work on performance issues as early as possible during system design and development. Pre-deployment performance testing takes place during system design and development and includes tests of:
1) intended performance goals;
2) system design;
3) system implementation; and
4) system integration.
Performance tests of intended performance goals
Performance tests of intended performance goals are done to set realistic performance goals, i.e. to test whether intended performance goals are possible to reach and, if not, how they should be changed. The purpose is to transform intended performance goals into stated performance goals.
Performance tests of system design
Performance tests of system design are done to verify or modify stated performance objectives. The test results are also input to performance goals for implementation of individual elements of the system.
Performance tests of system implementation
Performance tests of system implementations are done to maintain control over stated performance objectives in developed code. The focus of performance tests of system implementation is the powerfulness and efficiency of developed code. Performance tests of reliability characteristics are of little value during this phase since the developed code is not stable.
Performance tests of system integration
Performance tests of system integration are done to verify that the measured performance of a system is maintained when the system gets integrated with other related systems. Performance test objectives during system integration cover all specified performance characteristics of the system.
9.2.2 Post-deployment performance test applications
Post-deployment performance tests are measurements of a deliverable or a delivered system. Post-deployment performance tests include:
1) benchmarking or system evaluation;
2) performance tests of system delivery; and
3) performance tests of service production.
Benchmarking or system evaluation
Benchmarking is performance testing of a system based on a suite of standardized performance tests. The main purpose of a performance benchmark is to produce metrics that can be rated and compared with the metric values produced by other systems using the same benchmark.
Performance tests of system delivery
Performance tests of system delivery are done to verify that stated performance requirements for a delivered system are met when the system is integrated with other systems on the installation site. Performance test objectives in system delivery cover all specified performance characteristics of the system.
Performance tests of service production (performance monitoring)
Performance tests of service production (also referred to as performance monitoring) are done to verify that produced services are in accordance with stated quality requirements (Quality of Service). Performance monitoring of service production can be:
• reactive; or
• proactive.
Reactive performance monitoring
Reactive performance monitoring aims at detecting and acting on situations after they have happened. Actions are based on situations identified from analysis of log files of different kinds from the system and service production.
Proactive performance monitoring
Proactive performance monitoring aims at detecting and acting on identified situations or trends that might evolve into severe disturbances or a disaster in service production, before the situations get critical.
9.3 Performance test objectives
Performance test objectives describe the reasons for doing performance tests. Performance test objectives can be confirmative or explorative. A performance test execution can be both confirmative and explorative.
9.3.1 Confirmative performance tests
Confirmative performance tests are done to verify stated performance objectives. A confirmative performance test case can be to verify required throughput capacity for a specific service or mix of services. Regression testing is another case of confirmative performance testing. Examples of confirmative performance tests:
• Has the tested system processing capacity for the specified load?
• Does the system respond fast enough under specified load conditions?
• Does the system handle requested services continuously?
• Does the system deliver correct results under heavy load?
9.3.2 Explorative performance tests
Explorative performance tests are done to get an understanding of the behaviour of a system under specified conditions, or to find its performance limitations. An example of an explorative performance test could be to find the maximum throughput capacity for a specific service or mix of services under specified conditions, such as maximum CPU load. Examples of explorative performance tests:
• Does the system respond fast enough under specified load conditions?
• Has the system processing limitations due to lack of certain resources?
• Is the system's responsiveness continuously predictable?
• Can the system's processing capacity be expanded with more hardware?
• Has the system processing bottlenecks limiting the capacity?
• Is the system configured to fully utilize the hardware platform?
• Can the system manage various production critical situations such as DOS attacks?
• How long does it take to recover from a partial or a full restart?
9.4 Performance objectives and performance requirements
Performance objective is a term for desired performance goals that a system should meet. Performance objectives should at least cover stated performance requirements, if specified. Performance objectives can be defined as absolute performance figures or as performance figures relative to stated performance requirements or the measured performance of other systems. Relative performance figures are usually percentage values, such as 30 % higher capacity than brand "XYZ" or 20 % reduction of response time for service "ABC". Performance requirement is a term for a performance figure a system is expected to meet to be approved. Performance objectives and performance requirements can be stated for a range of objects, from a whole system or a subsystem down to individual services.
9.5 Performance measurement conditions
Performance measurement conditions specify the circumstances under which performance data can be recorded during a performance test. Performance measurement conditions can be external, internal, or both.
9.5.1 External measurement conditions
External measurement conditions describe what a tested system (SUT) should be exposed to during a performance test. External measurement conditions include types of service requests, volumes of service requests (traffic rates), duration of service request volumes, and volumes of simulated entities or users requesting services.
9.5.2 Internal measurement conditions
Internal measurement conditions describe expected situations inside a tested system during a performance test, such as resource usage levels of CPU, memory, etc.
9.5.3 Example
If, for example, an explorative performance test case is to find the maximum system throughput of a specific service at 80 % CPU load, there are two test conditions - one internal and one external:
1) external test condition: system load from service requests of the specified type; and
2) internal test condition: a measured CPU load of 80 %.
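A test tool might check that both conditions hold before recording a throughput sample. The sketch below is hypothetical (the function name and the 2 % tolerance are illustrative choices, not values from this report):

```python
def conditions_met(measured_cpu_pct: float, active_service_requests: int,
                   target_cpu_pct: float = 80.0, tolerance_pct: float = 2.0) -> bool:
    """Check both measurement conditions of the example before recording a
    throughput sample: the external condition (load from service requests of
    the specified type is present) and the internal condition (measured CPU
    load at the 80 % target, within a chosen tolerance)."""
    external_ok = active_service_requests > 0
    internal_ok = abs(measured_cpu_pct - target_cpu_pct) <= tolerance_pct
    return external_ok and internal_ok
```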
9.6 Performance targets
Performance targets describe expected performance goals in a validating performance test.
9.7 Performance measurements standards
Performance measurement standards are generally accepted specifications for how to measure and evaluate some kind of performance on a standardized system or an architectural standard for a system.
9.8 Some performance test characteristics
9.8.1 Test coverage
Performance tests cover the application services with potential performance implications, i.e. the most frequent, the most critical, or the most demanding services of an application. Performance tests normally measure the performance of system services that have passed functional tests (positive test cases). Measuring the performance of non-working system requests (negative test cases) is regarded as beside the point. However, in some explorative cases performance tests can be positive test cases driven to the point of non-working service requests, by increasing the load level beyond what the tested system can handle.
9.8.2 Test purposes
A performance test is either explorative with the purpose to find performance limits of a tested system, or confirmative with the purpose to verify that one or more stated performance requirements are met.
9.8.3 Test cases
Performance tests of a system require a smaller number of test cases than functional tests.
9.8.4 Test concurrency
Performance tests block the test environment, i.e. no other activities can be performed on a tested system during an ongoing performance test.
9.8.5 Test resources
Performance tests are usually demanding on hardware resources and consequently expensive to set up. The main reason for this is that performance tests are exclusive, i.e. no other activities are allowed on a System Under Test during performance tests. In particular, performance tests of availability and stability are demanding, since the test time might last up to several weeks, during which no other activities can be performed on the System Under Test.
9.8.6 Test execution and test case
A single performance test collects performance data that cover many performance test cases.
9.8.7 Test execution time
The execution time of a single performance test varies from some minutes up to several weeks, depending on the type of test.
9.8.8 Recorded test data
A performance test records massive amounts of measurement data that needs to be managed wisely.
9.8.9 Test data evaluation and test results
A performance test result is rarely binary (passed/not passed). On the contrary, performance measurement results are complicated to understand and require experienced and careful handling to give an understandable and useful verdict.
10 Performance test environment
10.1 Test environment concepts
The Performance Test Environment contains the hardware and software components required to run performance tests. When operational for performance tests, the Performance Test Environment is called a Test Bed or a Test Site.
Figure 14: A general view of a Test Site with Test Beds, Test Systems (TS) and Systems Under Test (SUTs)
10.1.1 Test Bed concepts
A Test Bed contains hardware and software components that:
1) constitute the Test System (TS);
2) constitute the System Under Test (SUT); and
3) connect the Test System to the System Under Test.
The System Under Test and the Test System are usually installed on physically separated equipment. A Test Bed contains the physical interface between the Performance Test Tools and the System Under Test, i.e. network components such as switches, routers and other components corresponding to layer 1 of the OSI model. The Test Bed supports the Logical Interface between the Performance Test Tools (TS) and the System Under Test (SUT). The Logical Interface can be an Application Programming Interface (API) or a Communication Protocol Interface corresponding to layer 7 of the OSI model. Both interfaces are in most cases IP based communication services, but other interfaces such as SS7 can be requested.
10.1.2 Test Site concepts
A performance test requires exclusive access to the System Under Test. Consequently any concurrent performance test is done on a separate SUT. Large development projects usually use several performance Test Beds to enable all required performance tests to be done within the given time limits of the performance test project. A Test Site is a test location with equipment that:
1) enables two or more Test Beds to be configured and work concurrently; and
2) allows equipment to be reassigned between the supported Test Beds, i.e. each Test Bed can be equipped differently from performance test to performance test.
10.2 System Under Test concepts
10.2.1 System Under Test components
A System Under Test or SUT is the set of hardware and software components constituting the tested system in a performance measurement. A System Under Test is composed of two parts:
1) Tested Components (TC); and
2) Supporting Components (SC).
The reason for the decomposition is that a System Under Test will report different performance figures depending on the set of Supporting Components it is tested on. This applies to all systems not dedicated to a specific platform.
Figure 15: Components of a SUT
Tested Components
The Tested Components are, in the context of a distributed system, the requested services on a System Under Test.
Supporting Components
The Supporting Components are all hardware and software components required to enable performance tests of the Tested Components. Typical Supporting Components are:
1) middleware software, such as database software or application platform software, etc.;
2) operating system software; and
3) hardware, such as servers, disk systems, load balancing equipment, etc.
The Supporting Components are regarded as Tested Components when the System Under Test is able to run on one specific set of Supporting Components only. In those cases there is only one set of measured performance results. The Supporting Components are regarded as a test condition when the System Under Test is able to run on multiple sets of Supporting Components. In such cases measured performance results refer to the used set of Supporting Components.
10.2.2 Borders of a System Under Test
A System Under Test has two types of borders interfacing the Test System components:
1) front-end borders; and
2) back-end borders.
Front-end borders
The front-end borders are the intersections between the System Under Test and the Test System's Service Requesting Tools. The front-end borders contain the interfaces for incoming service requests.
Back-end borders
A System Under Test may provide services where requests are passed on to an external unit, sometimes called a service responding device. In order to test such services the Test System contains Service Responding Tools simulating such units. The back-end borders contain the interfaces between the Test System and the System Under Test for outgoing service requests.
Figure 16: Example of a SUT with front-end border SIP/Gm (left side) and back-end border SIP/Mw (right side)
10.2.3 System Under Test replacements
In a distributed system services are usually available in a client-server relation. The party requesting a service is called a client and the party providing the requested service is called a server. A server can in turn act as a client requesting services. These services can be internal services or external services shared with other application systems. Such internal or external services can in some cases be replaced by Service Simulation Tools providing identical services to the tested system (see also clause 10.3.3).
10.3 Test System concepts
10.3.1 Performance Test Tools
A complete Performance Test Tool is a set of hardware and software components that can handle the following tasks:
1) expose the SUT to a set of (load) conditions, under which performance measurement data are captured;
2) transform captured performance measurement data into desired performance metrics about the SUT;
3) monitor and display captured and processed performance measurement data on-line during an on-going test;
4) evaluate performance test results; and
5) present evaluated performance test results.
The first task is handled by Service Handling Tools, Service Simulation Tools, and Performance Data Recording Tools. The second task is handled by Performance Data Processing Tools. The third task is handled by Performance Test Monitoring Tools. The fourth task is handled by Performance Evaluation Tools. The fifth task is handled by Performance Presentation Tools.
10.3.2 Service handling tools
Service handling tools are the interfaces to the SUT for system services specified in the performance test cases. Service handling tools interact with the SUT in two ways:
1) as service requesting tools; and
2) as service responding tools.
Figure 17: Example of a Test Bed with Service Requesting and Service Responding Tools
Service Requesting Tools
Service requesting tools, usually called load generators, send service requests to the SUT according to the test specifications. When the SUT has an API interface, the service requesting tool simulates an application requesting services over the Application Programming Interface. When the SUT has a protocol interface, the service requesting tool simulates a device or a system requesting services over the protocol. Regardless of the SUT interface, the service requests are in performance tests referred to as client requests for services from the SUT.
Service Responding Tools
There are system services that connect a requesting client to one or more counterparts (usually called terminating devices) outside the tested system. Terminating devices are usually simulated by test tools receiving and responding to requests in the test bed. Such services normally require a peer-to-peer protocol, such as SIP or Diameter, where communicating devices are able to act concurrently as clients initiating service requests and as servers responding to service requests. Performance test tools interfacing peer-to-peer protocols are able to send service requests to the SUT and receive requests from the SUT concurrently.
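The core of a service requesting tool can be sketched as an open-loop load generator. This is a deliberately minimal, hypothetical illustration: `send_request` stands in for the SUT's API or protocol interface, and requests are issued synchronously, whereas real load generators issue them concurrently and record each response time:

```python
import time

def run_load(send_request, rate_per_second: float, duration_seconds: float):
    """Issue service requests at a fixed rate for a fixed duration, as a
    Service Requesting Tool (load generator) would, collecting responses."""
    responses = []
    for i in range(int(rate_per_second * duration_seconds)):
        responses.append(send_request(i))   # simulated client request to the SUT
        time.sleep(1.0 / rate_per_second)   # pace requests to the target rate
    return responses
```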
10.3.3 Service Simulation Tools
The SUT contains all components required to resolve requested services, whenever a service is tested. SUT components with well defined services and interfaces can be replaced by Service Simulation Tools. There are several purposes served by Service Simulation Tools, such as:
1) reduction of the costs of a Test Bed. Service Simulation Tools are usually much cheaper than the replaced units;
2) shortening the time to build a Test Bed. Service Simulation Tools are usually less complex and easier to install; and
3) reducing the complexity of building a Test Bed. Service Simulation Tools are usually less complex to use.
Example: Registration of an IMS user is handled by two components in the IMS architecture, the S-CSCF and the HSS. When testing the capacity of an S-CSCF to handle registration requests, a real HSS can be replaced by a Service Simulation Tool acting as an HSS when accessed by the S-CSCF.
Figure 18: Example of a SUT, with (a) a real HSS and (b) a simulated HSS
Service Simulation Tools also enable new possibilities to measure performance. By replacing an HSS with a test tool simulating the HSS services we can measure the time spent on processing a registration request in an S-CSCF, since the time spent processing a registration request in an HSS is controlled by the test tool.
10.3.4 Performance data recording tools
A main function of performance test tools is to capture and save performance data. Performance data can be captured externally and internally with respect to the SUT.
External performance recording tools
External performance data are measurements of how the SUT responds to requests from the Test System's Service Requesting Tools. External performance data are captured by the Test System's Service Requesting Tools and the Test System's Service Responding Tools (if any) and recorded by the Test System's Data Recorder tools.
Internal performance recording tools (Probes)
Internal performance data are measurements of how the SUT handles service requests from the Service Requesting Tools internally. Internal performance data are captured by probes running inside the SUT and recorded by the Test System's Data Recorder tools. The probes are managed by the Test System.
Figure 19: External and internal performance recording tools
10.3.5 Performance test monitoring tools
Performance Test Monitoring tools enable captured measurement data to be processed and viewed in real time or semi-real time during execution of a performance test.
Figure 20: Performance test monitoring tools
The purpose of a Performance Test Monitoring tool is to make information about the progress of an on-going test instantly available and consequently to improve the control of tests with long execution times. For example, if a performance test of stability and availability characteristics is planned to run one week but for some reason fails after three hours, it is a waste of time to let the test continue for the remaining 165 hours. Performance Test Monitoring tools can usually also be set to trigger on specified conditions. Alerts about such situations are sent to other monitoring systems for further processing. Monitoring tools can also in many cases send alerts as SMS messages to remote units.
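A trigger of the kind described above can be sketched as a scan over condensed monitoring samples. The function name, sample layout, and limits here are hypothetical illustrations:

```python
def check_alerts(samples, max_error_rate=0.01, max_response_ms=500.0):
    """Monitoring trigger sketch: scan condensed monitoring samples of
    (elapsed seconds, error rate, response time) and emit an alert string for
    every breach of a configured condition, to be forwarded to other
    monitoring systems or sent on as SMS messages."""
    alerts = []
    for t, error_rate, response_ms in samples:
        if error_rate > max_error_rate:
            alerts.append(f"t={t}s: error rate {error_rate:.2%} above limit")
        if response_ms > max_response_ms:
            alerts.append(f"t={t}s: response time {response_ms} ms above limit")
    return alerts
```

Such a trigger, evaluated on each condensation interval, is what lets a week-long stability test be stopped three hours in rather than 165 hours too late.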
10.3.6 Performance data processing tools
Performance Data Processing tools transform measurement data into metric values describing requested performance characteristics of a system.