Towards a UML profile for software product lines. Tewfik Ziadi, Loïc Hélouët, Jean-Marc Jézéquel. Proceedings of the Fifth International Workshop on Product Family Engineering (PFE-5), 2003, Rennes, France. HAL Id: hal-00794817 (https://inria.hal.science/hal-00794817), submitted on 26 Feb 2013.

Towards a UML Profile for Software Product Lines

Tewfik Ziadi¹, Loïc Hélouët¹, and Jean-Marc Jézéquel²

¹ IRISA-INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France ({tziadi,lhelouet}@irisa.fr)
² IRISA-Rennes 1 University, Campus de Beaulieu, 35042 Rennes Cedex, France (jezequel@irisa.fr)

**Abstract.** This paper proposes a UML profile for software product lines. The profile includes stereotypes, tagged values, and structural constraints, and makes it possible to define product line (PL) models with variabilities. Product derivation consists of generating product models from PL models. The derivation should preserve and enforce a set of constraints, which are specified using OCL.

## 1 Introduction

The Unified Modeling Language (UML) [5] is a standard for object-oriented analysis and design. It defines a set of notations (gathered in diagrams) to describe different aspects of a system: use cases, sequence diagrams, class diagrams, component diagrams, and statecharts are examples of these notations.
A UML profile contains stereotypes, tagged values, and constraints that can be used to extend the UML metamodel. Software product line engineering aims at improving productivity and decreasing realization times by gathering the analysis, design, and implementation activities of a family of systems. Variabilities are characteristics that may vary from one product to another, and the main challenge in the software product line (PL) approach is to model and implement these variabilities.

Even if the product line approach is a new paradigm, managing variability in software systems is not a new problem, and some design and programming techniques already allow variability to be handled. Outside the product line context, however, variability concerns a single product: it is an inherent part of one piece of software and is resolved after the product is delivered to customers and loaded into its final execution environment. In the product line context, variability should be explicitly specified and is part of the product line itself. In contrast to single-product variability, PL variability is resolved before the software product is delivered to customers. [1] calls the variability included in a single product "runtime variability" and PL variability "development time variability".

UML includes techniques such as inheritance, cardinality ranges, and class templates that allow the description of variability in a single product, i.e., variability that is specified in the product models and resolved at run time. Beyond that, it is interesting to use UML to specify and model not only one product but a set of products. In this case, the UML models should be considered as reference models from which product models can be derived and created; this variability corresponds to product line variability. In this paper we consider this type of*

* This work has been partially supported by the FAMILIES European project, Eureka Σ! 2023 Program, ITEA project ip 02009.
variability, and we use UML extension mechanisms to specify product line variability in UML class diagrams and sequence diagrams. A set of stereotypes, tagged values, and structural constraints is defined and gathered in a UML profile for PL.

The paper is organized as follows: Section 2 presents the profile for PL in terms of stereotypes, tagged values, and constraints; Section 3 presents the use of this profile to derive product models from the PL; Section 4 presents related work; and Section 5 concludes this work.

## 2 A UML Profile for Product Lines

The extensions proposed here for PL are defined on UML 2.0 [5] and concern only UML class diagrams and sequence diagrams. We use an ad hoc example to illustrate our extensions: a digital camera PL. A digital camera comprises an interface, a memory, a sensor, a display, and possibly a compressor. The main variability in this example concerns the presence of the compressor, the format of images supported by the memory (which can be parameterized), and the interface supported. We distinguish three types of interfaces: Interface 1, Interface 2, and Interface 3.

### 2.1 Extensions for Class Diagrams

UML class diagrams are used to describe the structure of the system in terms of classes and their relationships. In the context of product lines, two types of variability are introduced and modeled using stereotypes.

**Stereotypes**

- **Optionality.** Optionality in PLs means that some features are optional for the PL members, i.e., they can be omitted in some products. The stereotype <<optional>> is used to specify optionality in UML class diagrams. Optionality can concern classes, packages, attributes, or operations, so the <<optional>> stereotype is applied to the Classifier, Package, and Feature metaclasses.
- **Variation.** We model variation points using UML inheritance and stereotypes: each variation point is defined by an abstract class and a set of subclasses.
The abstract class is defined with the stereotype <<variation>> and each subclass is stereotyped <<variant>>. A specific product can choose one or more variants. These two stereotypes extend the Classifier metaclass. The alternative variability defined in feature-driven approaches is a particular case of our variation variability, where each product should choose one and only one variant. This can be modeled using OCL (Object Constraint Language) [10] as a mutual exclusion constraint between variants; the mutual exclusion constraint is presented in Section 3.

**Constraints.** A UML profile also includes constraints that define structural rules for all models using the defined stereotypes. An example of such a profile constraint concerns the stereotype <<variant>>: every class stereotyped <<variant>> should have exactly one ancestor stereotyped <<variation>>. This can be formalized in OCL as follows:

```
context <<variant>>
inv: self.supertype->select(oclIsKindOf(Variation))->size() = 1
```

**Example.** Figure 1 shows the class diagram of the camera PL example. The Compressor class is defined with the stereotype <<optional>> to indicate that some camera products do not support the compression feature. The camera interface is defined as an abstract class with three concrete subclasses: Interface 1, Interface 2, and Interface 3. A specific product can support one or more interfaces, so the stereotype <<variation>> is added to the abstract class Interface, and all its subclasses are defined with the stereotype <<variant>>. Notice that the class diagram of the camera PL includes a class template Memory with a parameter that indicates the supported image format; this type of variability is resolved at run time and is included in all camera products.

**Fig. 1.** The class diagram of the Camera PL.
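As an illustration only (not part of the profile itself), the variation point and optionality of Figure 1 map naturally onto plain Java inheritance: the <<variation>> class becomes an abstract class, each <<variant>> a concrete subclass, and a derived product instantiates only the variants it selected. The method and field names below are assumptions made to keep the sketch self-contained.

```java
// Hypothetical Java rendering of the camera PL structure from Figure 1.

// <<variation>>: the abstract variation point
abstract class Interface {
    abstract String describe();
}

// <<variant>> subclasses: a specific product selects one or more of them
class Interface1 extends Interface {
    String describe() { return "Interface 1"; }
}

class Interface2 extends Interface {
    String describe() { return "Interface 2"; }
}

// <<optional>>: present only in products with the compression feature
class Compressor {
    byte[] compress(byte[] data) { return data; } // placeholder behavior
}

public class CameraDemo {
    public static void main(String[] args) {
        Interface chosen = new Interface1(); // product-specific choice
        System.out.println(chosen.describe());
    }
}
```

A product that omits the compression feature simply never references `Compressor`, mirroring the removal of <<optional>> elements during derivation.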
### 2.2 UML Extensions for Sequence Diagrams

In addition to class diagrams, UML includes other diagrams that describe other aspects of systems. Sequence diagrams model the possible interactions in a system. They are generally used to capture requirements, but can also be used to document a system or to produce tests. UML 2.0 [5] makes sequence diagrams very similar to the ITU standard MSC (Message Sequence Chart) [7]. It introduces new mechanisms, especially interaction operators such as alternative, sequence, and loop, which denote, respectively, a choice, a sequencing, and a repetition of interactions. [11] proposes three constructs to introduce variability in MSC; in this subsection we formalize this proposition as extensions of the UML 2.0 metamodel for sequence diagrams. Before describing these extensions, we briefly present sequence diagrams as defined in the UML 2.0 metamodel.

**Sequence Diagrams in UML 2.0.** Figure 2 summarizes the part of the UML 2.0 metamodel that concerns sequence diagrams (interested readers can consult [5] for a complete description). The *Interaction* metaclass refers to the unit of behavior that focuses on the observable exchanges of information between a set of objects in the sequence diagram. The *Lifeline* metaclass refers to an object in the interaction. *InteractionFragment* is a piece of an interaction; the aggregation between *Interaction* and *InteractionFragment* specifies composite interactions, in the sense that an interaction can enclose other sub-interactions. The *CombinedFragment* defines a set of interaction operators that can be used to combine a set of *InteractionOperand*s; all possible operators are defined in the enumeration *InteractionOperator*. The *EventOccurrence* metaclass refers to events that occur on a specific lifeline; these events can be the sending or receiving of messages, among other kinds.
The notation for an interaction in a sequence diagram is a solid-outline rectangle (see Figure 3 for an example); the keyword *sd*, followed by the name of the interaction, appears in a pentagon in the upper left corner of the rectangle.

**Stereotypes and Tagged Values.** Variability for sequence diagrams is introduced in terms of three constructs: optionality, variation, and virtuality. In what follows we formalize these constructs using stereotypes and tagged values on the UML 2.0 metamodel.

- **Optionality.** Optionality for sequence diagrams covers two aspects: optionality of objects in the interaction, and optionality of interactions themselves. Optionality of an object is specified using the stereotype <<optionalLifeline>>, which extends the *Lifeline* metaclass. Optional interactions are specified by the stereotype <<optionalInteraction>>, which extends the *Interaction* metaclass.
- **Variation.** A variation point in a PL sequence diagram means that, for a given product, only one interaction variant defined by the variation point will be present in the derived sequence diagram. Since an *Interaction* can enclose a set of sub-interactions, the variation mechanism is specified by two stereotypes, <<variation>> and <<variant>>, both of which extend the *Interaction* metaclass. To distinguish different variants in the same sequence diagram, we associate with each interaction stereotyped <<variant>> a tagged value {variation = Variation} that indicates its enclosing variation point (the enclosing interaction stereotyped <<variation>>).
- **Virtuality.** A virtual part in a sequence diagram means that this part can be redefined for each product by another sequence diagram. The virtual part is defined using the stereotype <<virtual>>, which extends the *Interaction* metaclass.

An interaction can be both a variation point and a variant of another variation point at the same time.
This means that it is enclosed in an interaction stereotyped <<variation>> and at the same time encloses its own set of interaction variants. In this situation, the interaction is defined with both stereotypes <<variation>> and <<variant>>.

**Constraints.** Structural constraints can be associated with the stereotypes and the tagged value defined above. For example, the constraint that each interaction stereotyped <<variant>> should be enclosed in one and only one interaction stereotyped <<variation>> can be formalized in OCL as an invariant on the <<variant>> stereotype:

```
context <<variant>>
inv: self.enclosingInteraction->select(oclIsKindOf(Variation))->size() = 1
```

**Fig. 3.** The Capture sequence diagram.

**Example.** Figure 3 shows the CapturePL sequence diagram for the camera PL example. It illustrates the interaction to capture data and store it into the memory. This sequence diagram includes two types of variability: the presence of the Compressor object and the variation in the Store interaction. The Compressor lifeline is defined as optional, and the Store interaction (stereotyped <<variation>>) defines two interaction variants, Var1 and Var2 (stereotyped <<variant>>), to store data into the memory: the first stores data after compression, the second stores it without compression. The tagged value {variation = Store} is added to both interaction variants.
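For illustration only, the two Store variants of the CapturePL interaction can also be viewed behaviorally: each <<variant>> corresponds to one interchangeable implementation of a common storing behavior, and product derivation picks exactly one. The interface name, method names, and the stand-in "compression" below are assumptions, not part of the profile.

```java
import java.util.Arrays;

// Hypothetical sketch: each <<variant>> of the Store variation point
// becomes one implementation of a shared Store behavior.
interface Store {
    byte[] store(byte[] data);
}

// Var1: compress before storing (requires the optional Compressor)
class StoreCompressed implements Store {
    public byte[] store(byte[] data) {
        // Stand-in for real compression: keep every second byte.
        byte[] out = new byte[(data.length + 1) / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = data[2 * i];
        return out;
    }
}

// Var2: store the raw data unchanged
class StoreRaw implements Store {
    public byte[] store(byte[] data) {
        return Arrays.copyOf(data, data.length);
    }
}

public class StoreDemo {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};
        // A derived product would hold exactly one of these variants.
        System.out.println(new StoreCompressed().store(data).length);
        System.out.println(new StoreRaw().store(data).length);
    }
}
```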
**Table 1.** Stereotypes and tagged values of the UML profile for PL.

| Stereotype / tagged value | Applies to | Description |
|---|---|---|
| <<optional>> | Classifier, Package, Feature | Indicates that the element (classifier, package, or feature) is optional. |
| <<variation>> | Classifier | Indicates that the abstract class represents a variation point with a set of variants. |
| <<variant>> | Classifier | Indicates that a class is a variant of a variation point. |
| <<optionalLifeline>> | Lifeline | Indicates that the lifeline in the sequence diagram is optional. |
| <<optionalInteraction>> | Interaction | Indicates that the behavior described by the interaction is optional. |
| <<variation>> | Interaction | Indicates that the interaction is a variation point with two or more interaction variants. |
| <<variant>> | Interaction | Indicates that the interaction is a variant behavior in the context of a variation interaction. |
| <<virtual>> | Interaction | Indicates that the interaction is a virtual part. |
| {variation = Variation} | <<variant>> | Indicates the variation point related to this variant. |

### 2.3 The Profile Structure

Table 1 summarizes the stereotypes and tagged values defined in the UML profile for PL. Figure 4 illustrates the structure of the proposed profile (we follow the notations for profiles as defined in [5]): stereotypes are defined as classes stereotyped <<stereotype>>, UML metaclasses as classes stereotyped <<metaclass>>, and tagged values as attributes of the defined stereotypes.
The extensions proposed for class diagrams are defined in the staticAspect package, and those for sequence diagrams are gathered in the dynamicAspect package. The <<variant>> stereotype in staticAspect (respectively in dynamicAspect) inherits from the <<optional>> stereotype (respectively from the <<optionalInteraction>> stereotype). This means that each variant is optional too.

## 3 From PL Models to Product Models

A UML profile includes not only stereotypes, tagged values, and constraints but also a set of operational rules that define how the profile can be used. These rules concern, for example, code generation from models that conform to the profile, or model transformations. For the PL profile, this part can be used to define product derivation as a model transformation. A product derivation consists of generating, from the PL models, the UML models of each product. The product line architecture is defined as a standard architecture with a set of constraints [2], and PL constraints guide the derivation process. In what follows we present two types of PL constraints: generic constraints, which apply to all PLs, and specific constraints, which concern one specific PL. We show how these constraints should be taken into account by the derivation process.

### 3.1 Generic Constraints

The introduction of variability, and especially of optionality (specified by the <<optional>> stereotype), into UML class diagrams improves genericity but can generate incoherences. For example, if a non-optional element depends on an optional one, the derivation can produce an incomplete product model. The derivation process should therefore preserve the coherence of the derived products. [12] proposes to formalize coherence constraints using OCL; constraints that concern any PL model are called generic constraints. An example of such a constraint is the dependency constraint, which forces non-optional elements to depend only on non-optional elements. A dependency in UML specifies a "requires" relationship between two or more elements. It is represented in the UML metamodel [5] by the metaclass Dependency, which relates a set of suppliers to a set of clients. An example of a UML dependency is "usage", which appears when a package uses another one. The dependency constraint is specified in OCL as an invariant on the Dependency metaclass:

```
context Dependency
inv: self.supplier->exists(S : ModelElement | S.isStereotyped('optional'))
     implies self.client->forAll(C : ModelElement | C.isStereotyped('optional'))
```

Since the <<variant>> stereotype inherits from the <<optional>> one (see Figure 4), the dependency constraint also applies to variants, in the sense that a non-optional element cannot depend on a variant. The generic constraints may be seen as well-formedness rules for the UML-modeled product line.

### 3.2 Specific Constraints

A fundamental characteristic of product lines is that not all elements are compatible: the selection of one element may disable (or enable) the selection of others. For example, in the CapturePL sequence diagram of Figure 3, the choice of the variant *Var1* in a specific product requires the presence of the Compressor object. Dependencies between PL model elements are called specific constraints. They are associated with a specific product line and are evaluated on all products derived from that PL, so another challenge for product derivation is to ensure specific constraints in the derived products. These constraints can be formalized as OCL meta-level constraints [12]. The following constraint specifies the presence dependency in the CapturePL sequence diagram between the interaction variant *Var1* and the *Compressor* lifeline.
That is, the presence of the interaction variant *Var1* requires the presence of the *Compressor* lifeline. It is added as an invariant on the Interaction metaclass:

```
context Interaction
inv: self.fragment->exists(f : InteractionFragment | f.name = 'Var1')
     implies self.lifeline->exists(l : Lifeline | l.name = 'Compressor')
```

In addition to the presence constraint, specific constraints include the mutual exclusion constraint, which expresses, for a specific PL model, that two optional classes cannot be present in the same product. For example, the mutual exclusion of two optional classes *C1* and *C2* in a specific PL is expressed in OCL as an invariant on the Model metaclass:

```
context Model
inv: (self.presenceClass('C1') implies not self.presenceClass('C2'))
     and (self.presenceClass('C2') implies not self.presenceClass('C1'))
```

1. `isStereotyped(S) : boolean` is an auxiliary OCL operation indicating whether an element is stereotyped by the string *S*.
2. `presenceClass(C) : boolean` is an auxiliary OCL operation indicating whether a class named *C* is present in a specific UML model.

### 3.3 Product Model Derivation

Product derivation consists of generating, from the PL models (UML class diagrams and sequence diagrams), the models of each product. Product line models should satisfy the generic constraints before the derivation, while the derived product models should satisfy the specific constraints. In other words, the generic constraints are the pre-conditions and the specific constraints the post-conditions of the derivation process:

```
DeriveProduct(PLModel : Model) : Model
pre:  -- check generic constraints on PLModel
post: -- check specific constraints on the derived product model
```

Figure 5 shows the derived class diagram for a camera product.
This product does not support the compression feature and supports only Interface 1 and Interface 2. It is obtained from the PL class diagram by removing the classes Compressor and Interface 3. Figure 6 shows the derived Capture sequence diagram for this camera product; it is obtained by removing the Compressor lifeline and by choosing the Var2 interaction (the Var1 interaction is removed).

**Fig. 5.** The derived class diagram for a specific camera product.

## 4 Related Work

Much work has studied the modeling of PL variability using UML. [4] uses UML extension mechanisms to specify variability in UML diagrams; however, apart from an <<optional>> stereotype for UML statecharts and sequence diagrams, these extensions mainly focus on the static aspects of the PL architecture. To model the dynamic aspects of PLs, we have proposed three constructs to specify variability in sequence diagrams. KobrA [1] is a method that combines product line engineering and component-based software development. It uses UML to specify components; variability is introduced in KobrA components using the <<variant>> stereotype, which models any feature that is not common to all products. [3] proposes a set of UML extensions to describe product line variability, but they concern only UML class diagrams. While we use OCL to model specific constraints, [3] models them using two stereotypes, <<require>> and <<mutex>>, for the presence and mutual exclusion constraints respectively. [9] proposes notations for product lines, gathered in a profile called UML-F; this profile is in fact defined for frameworks and concerns only the static aspects of the product line. [8] proposes a metamodel based on UML for product line architectures, in which variability is introduced only in terms of alternatives.

## 5 Conclusion

In this paper, we have proposed a set of extensions as a UML profile for PL.
These extensions concern UML class diagrams and sequence diagrams, and are defined on the UML 2.0 metamodel. This profile is not yet implemented, and we have only proposed some initial constraints; the profile should be refined with more. We intend to implement it with UMLAUT [6], a framework for building tools dedicated to the manipulation of models described using UML. A new version of the UMLAUT framework is currently under construction in the Triskell³ team, based on MTL (Model Transformation Language), an extension of OCL with the MOF (Meta-Object Facility) architecture and side-effect features; it allows us to describe the derivation process at the meta-level and to check OCL constraints. The MTL language can be used to define the derivation process.

³ www.irisa.fr/triskell

References
Chapter 7 Single-Dimensional Arrays

7.1 Introduction
7.2 Array Basics
7.3 Case Study: Analyzing Numbers
7.4 Case Study: Deck of Cards
7.5 Copying Arrays
7.6 Passing Arrays to Methods
7.7 Returning an Array from a Method
7.8 Case Study: Counting the Occurrences of Each Letter
7.9 Variable-Length Argument Lists
7.10 Searching Arrays
7.11 Sorting Arrays
7.12 The Arrays Class
7.13 Command-Line Arguments

Objectives

- To describe why arrays are necessary in programming (§7.1).
- To declare array reference variables and create arrays (§§7.2.1–7.2.2).
- To obtain the array size using `arrayRefVar.length` and know the default values in an array (§7.2.3).
- To access array elements using indexes (§7.2.4).
- To declare, create, and initialize an array using an array initializer (§7.2.5).
- To program common array operations (displaying arrays, summing all elements, finding the minimum and maximum elements, random shuffling, and shifting elements) (§7.2.6).
- To simplify programming using foreach loops (§7.2.7).
- To apply arrays in application development (AnalyzeNumbers, DeckOfCards) (§§7.3–7.4).
- To copy contents from one array to another (§7.5).
- To develop and invoke methods with array arguments and return values (§§7.6–7.8).
- To define a method with a variable-length argument list (§7.9).
- To search elements using the linear (§7.10.1) or binary (§7.10.2) search algorithm.
- To sort an array using the selection sort approach (§7.11).
- To use the methods in the `java.util.Arrays` class (§7.12).
- To pass arguments to the `main` method from the command line (§7.13).

### 7.1 Introduction

- An array is a data structure that stores a fixed-size sequential collection of elements of the same type.

### 7.2 Array Basics

- An array is used to store a collection of data, but it is often more useful to think of an array as a collection of variables of the same type.
- This section introduces how to declare array variables, create arrays, and process arrays.

### 7.2.1 Declaring Array Variables

- Here is the syntax for declaring an array variable:

```java
dataType[] arrayRefVar;
```

- The following code snippet is an example of this syntax:

```java
double[] myList;
```

### 7.2.2 Creating Arrays

- Declaring an array variable doesn't allocate any space in memory for the array; only a storage location for the reference to an array is created. If a variable doesn't reference an array, the value of the variable is `null`.
- You can create an array by using the `new` operator with the following syntax:

```java
arrayRefVar = new dataType[arraySize];
```

- This statement does two things: (1) it creates an array using `new dataType[arraySize]`; (2) it assigns the reference of the newly created array to the variable `arrayRefVar`.
- Declaring an array variable, creating an array, and assigning the reference of the array to the variable can be combined in one statement, as follows:

```java
dataType[] arrayRefVar = new dataType[arraySize];
```

- Here is an example of such a statement:

```java
double[] myList = new double[10];
```

Figure 7.1: The array `myList` has 10 elements of `double` type and `int` indices from 0 to 9.

- This statement declares an array variable, `myList`, creates an array of ten elements of `double` type, and assigns its reference to `myList`.

**NOTE**

- An array variable that appears to hold an array actually contains a reference to that array. Strictly speaking, an array variable and an array are **different**.

### 7.2.3 Array Size and Default Values

- When space for an array is allocated, the array size must be given to specify the number of elements that can be stored in it. The size of an array cannot be changed after the array is created.
- The size can be obtained using `arrayRefVar.length`. For example, `myList.length` is 10.
• When an array is created, its elements are assigned the default value of 0 for the numeric primitive data types, ‘\u0000’ for char types, and `false` for Boolean types. 7.2.4 Accessing Array Elements - The array elements are accessed through an index. - The array indices are 0-based, they start from 0 to $arrayRefVar.length-1$. - In the example, myList holds ten double values and the indices from 0 to 9. The element $myList[9]$ represents the last element in the array. - After an array is created, an indexed variable can be used in the same way as a regular variable. For example: ```java myList[2] = myList[0] + myList[1]; // adds the values of the 1st and 2nd elements into the 3rd one for (int i = 0; i < myList.length; i++) // the loop assigns 0 to myList[0] .. and 9 to myList[9] myList[i] = i; // 1 to myList[1] .. ``` 7.2.5 Array Initializers - Java has a shorthand notation, known as the array initializer that combines declaring an array, creating an array and initializing it at the same time. ```java double[] myList = {1.9, 2.9, 3.4, 3.5}; ``` - This shorthand notation is equivalent to the following statements: ```java double[] myList = new double[4]; myList[0] = 1.9; myList[1] = 2.9; myList[2] = 3.4; myList[3] = 3.5; ``` Caution - Using the shorthand notation, you have to declare, create, and initialize the array all in one statement. Splitting it would cause a syntax error. For example, the following is wrong: ```java double[] myList; myList = {1.9, 2.9, 3.4, 3.5}; ``` ### 7.2.6 Processing Arrays - When processing array elements, you will often use a `for` loop. Here are the reasons why: 1. All of the elements in an array are of the same type. They are evenly processed in the same fashion by repeatedly using a loop. 2. Since the size of the array is known, it is natural to use a `for` loop. - Here are some examples of processing arrays: 1. Initializing arrays with input values 2. Initializing arrays with random values 3. Printing arrays 4. 
Summing all elements 5. Finding the largest element 6. Finding the smallest index of the largest element 7. Random shuffling 8. Shifting elements

### 7.2.7 Foreach Loops
- JDK 1.5 introduced a new for loop that enables you to traverse the complete array sequentially without using an index variable. For example, the following code displays all elements in the array `myList`:
```java
for (double u : myList)
    System.out.println(u);
```
- In general, the syntax is
```java
for (elementType element : arrayRefVar) {
    // Process the value
}
```
- You still have to use an index variable if you wish to traverse the array in a different order or change the elements in the array.

7.3 Case Study: Analyzing Numbers
- Read numbers from user input, compute their average, and find out how many numbers are above the average.

LISTING 7.1 AnalyzeNumbers.java
```java
public class AnalyzeNumbers {
    public static void main(String[] args) {
        java.util.Scanner input = new java.util.Scanner(System.in);
        System.out.print("Enter the number of items: ");
        int n = input.nextInt();
        double[] numbers = new double[n];
        double sum = 0;

        System.out.print("Enter the numbers: ");
        for (int i = 0; i < n; i++) {
            numbers[i] = input.nextDouble();
            sum += numbers[i];
        }
        double average = sum / n;

        int count = 0; // The number of elements above the average
        for (int i = 0; i < n; i++)
            if (numbers[i] > average)
                count++;

        System.out.println("Average is " + average);
        System.out.println("Number of elements above the average is " + count);
    }
}
```
Enter the number of items: 10
Enter the numbers: 3.4 5 6 1 6.5 7.8 3.5 8.5 6.3 9.5
Average is 5.75
Number of elements above the average is 6

7.4 Case Study: Deck of Cards
- The problem is to write a program that picks four cards randomly from a deck of 52 cards.
All the cards can be represented using an array named deck, filled with the initial values 0 to 51, as follows:
```java
int[] deck = new int[52];

// Initialize cards
for (int i = 0; i < deck.length; i++)
    deck[i] = i;
```
![Array representation of cards](image)
Figure 7.2 52 cards are stored in an array named deck.

cardNumber / 13 determines the suit: 0 = Spades, 1 = Hearts, 2 = Diamonds, 3 = Clubs.
cardNumber % 13 determines the rank: 0 = Ace, 1 through 9 = the ranks 2 through 10, 10 = Jack, 11 = Queen, 12 = King.

Figure 7.3 cardNumber identifies a card's suit and rank number.

LISTING 7.2 DeckOfCards.java
```java
public class DeckOfCards {
    public static void main(String[] args) {
        int[] deck = new int[52];
        String[] suits = {"Spades", "Hearts", "Diamonds", "Clubs"};
        String[] ranks = {"Ace", "2", "3", "4", "5", "6", "7", "8",
                          "9", "10", "Jack", "Queen", "King"};

        // Initialize cards
        for (int i = 0; i < deck.length; i++)
            deck[i] = i;

        // Shuffle the cards
        for (int i = 0; i < deck.length; i++) {
            // Generate an index randomly
            int index = (int)(Math.random() * deck.length);
            int temp = deck[i];
            deck[i] = deck[index];
            deck[index] = temp;
        }

        // Display the first four cards
        for (int i = 0; i < 4; i++) {
            String suit = suits[deck[i] / 13];
            String rank = ranks[deck[i] % 13];
            System.out.println("Card number " + deck[i] + ": " + rank + " of " + suit);
        }
    }
}
```
Card number 6: 7 of Spades
Card number 48: 10 of Clubs
Card number 11: Queen of Spades
Card number 24: Queen of Hearts

7.5 Copying Arrays
- Often, in a program, you need to duplicate an array or a part of an array. In such cases you could attempt to use the assignment statement (=), as follows:
```
list2 = list1;
```
- This statement does **not** copy the contents of the array referenced by `list1` to `list2`, but merely copies the reference value from `list1` to `list2`. After this statement, `list1` and `list2` refer to the same array, as shown below.
![Diagram](image)
**FIGURE 7.4** Before the assignment, `list1` and `list2` point to separate memory locations.
After the assignment, the reference of the `list1` array is passed to `list2`.
- The array previously referenced by `list2` is no longer referenced; it becomes garbage, which will be automatically collected by the **Java Virtual Machine**. This process is called **garbage collection**.
- You can use assignment statements to copy primitive data type variables, but not arrays. Assigning one array variable to another actually copies one reference to another and makes both variables point to the **same memory location**.

There are three ways to copy arrays:
  o Use a **loop** to copy individual elements one by one.
  o Use the static **arraycopy** method in the `System` class.
  o Use the **clone** method to copy arrays; this will be introduced in Chapter 13, Abstract Classes and Interfaces.

Using a **loop**:
```java
int[] sourceArray = {2, 3, 1, 5, 10};
int[] targetArray = new int[sourceArray.length];
for (int i = 0; i < sourceArray.length; i++)
    targetArray[i] = sourceArray[i];
```
**The arraycopy method:**
```java
System.arraycopy(sourceArray, srcPos, targetArray, tarPos, length);
```
Example:
```java
System.arraycopy(sourceArray, 0, targetArray, 0, sourceArray.length);
```
- The number of elements copied from `sourceArray` to `targetArray` is indicated by `length`.
- The `arraycopy` method does not allocate memory space for the target array. The target array must have already been created with its memory space allocated.
- After the copying takes place, `targetArray` and `sourceArray` have the same content but **independent** memory locations.

7.6 Passing Arrays to Methods
- The following method displays the elements of an int array:
```java
public static void printArray(int[] array) {
    for (int i = 0; i < array.length; i++) {
        System.out.print(array[i] + " ");
    }
}
```
The following invokes the method to display 3, 1, 2, 6, 4, and 2.
```java
int[] list = {3, 1, 2, 6, 4, 2};
printArray(list);
```
```java
printArray(new int[]{3, 1, 2, 6, 4, 2}); // anonymous array: no explicit reference variable for the array
```
- Java uses pass-by-value to pass arguments to a method. There are important differences between passing the values of variables of primitive data types and passing arrays.
- For an argument of a primitive type, the argument's value is passed.
- For an argument of an array type, the value of the argument contains a reference to an array; this reference is passed to the method.
```java
public class Test {
    public static void main(String[] args) {
        int x = 1;             // x represents an int value
        int[] y = new int[10]; // y represents an array of int values

        m(x, y); // Invoke m with arguments x and y

        System.out.println("x is " + x);
        System.out.println("y[0] is " + y[0]);
    }

    public static void m(int number, int[] numbers) {
        number = 1001;     // Assign a new value to number
        numbers[0] = 5555; // Assign a new value to numbers[0]
    }
}
```
x is 1
y[0] is 5555

- y and numbers refer to the same array, although y and numbers are independent variables.
- When invoking m(x, y), the values of x and y are passed to number and numbers.
- Since y contains the reference value to the array, numbers now contains the same reference value to the same array.
- The JVM stores the array in an area of memory called the **heap**, which is used for dynamic memory allocation, where blocks of memory are allocated and freed in an arbitrary order.

FIGURE 7.5 The primitive type value in x is passed to number, and the reference value in y is passed to numbers.

LISTING 7.3 TestPassArray.java: Passing Arrays as Arguments
- For a parameter of an array type, the value of the parameter contains a reference to an array; this reference is passed to the method. Any changes to the array that occur inside the method body will affect the original array that was passed as the argument.
- Example: write two methods for swapping elements in an array.
The first method, named `swap`, fails to swap two int arguments. The second method, named `swapFirstTwoInArray`, successfully swaps the first two elements in the array argument.
```java
public class TestPassArray {
    /** Main method */
    public static void main(String[] args) {
        int[] a = {1, 2};

        // Swap elements using the swap method
        System.out.println("Before invoking swap");
        System.out.println("array is {" + a[0] + ", " + a[1] + "}");
        swap(a[0], a[1]);
        System.out.println("After invoking swap");
        System.out.println("array is {" + a[0] + ", " + a[1] + "}");

        // Swap elements using the swapFirstTwoInArray method
        System.out.println("Before invoking swapFirstTwoInArray");
        System.out.println("array is {" + a[0] + ", " + a[1] + "}");
        swapFirstTwoInArray(a);
        System.out.println("After invoking swapFirstTwoInArray");
        System.out.println("array is {" + a[0] + ", " + a[1] + "}");
    }

    /** Swap two variables */
    public static void swap(int n1, int n2) {
        int temp = n1;
        n1 = n2;
        n2 = temp;
    }

    /** Swap the first two elements in the array */
    public static void swapFirstTwoInArray(int[] array) {
        int temp = array[0];
        array[0] = array[1];
        array[1] = temp;
    }
}
```
Before invoking swap
array is {1, 2}
After invoking swap
array is {1, 2}
Before invoking swapFirstTwoInArray
array is {1, 2}
After invoking swapFirstTwoInArray
array is {2, 1}

- The first method doesn't work: the two elements are not swapped using the `swap` method.
- The second method works: the two elements are actually swapped using the `swapFirstTwoInArray` method.
- Since the arguments in the first method are of a primitive type, the values of `a[0]` and `a[1]` are passed to `n1` and `n2` inside the method when invoking `swap(a[0], a[1])`.
- The memory locations for `n1` and `n2` are independent of the ones for `a[0]` and `a[1]`.
- The contents of the array are not affected by this call.
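A practical consequence of the pass-by-reference-value behavior discussed above: when a method must not modify the caller's array, pass it a copy. A minimal sketch (the class and method names here are illustrative, not from the chapter's listings):

```java
public class DefensiveCopy {
    /** Doubles every element of the array it receives */
    public static void doubleAll(int[] data) {
        for (int i = 0; i < data.length; i++)
            data[i] *= 2;
    }

    public static void main(String[] args) {
        int[] original = {1, 2, 3};

        // Copy element by element, then pass the copy, so the
        // method's changes never reach 'original'
        int[] copy = new int[original.length];
        for (int i = 0; i < original.length; i++)
            copy[i] = original[i];

        doubleAll(copy);
        System.out.println(original[0]); // prints 1: the original is untouched
        System.out.println(copy[0]);     // prints 2
    }
}
```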
![When passing an array to a method, the reference of the array is passed to the method.](image)
**FIGURE 7.6** When passing an array to a method, the reference of the array is passed to the method.

- The parameter in the `swapFirstTwoInArray` method is an array.
- As shown above, the reference of the array is passed to the method.
- Thus, the variables `a` (outside the method) and `array` (inside the method) both refer to the same array in the same memory location.
- Therefore, swapping `array[0]` with `array[1]` inside the method `swapFirstTwoInArray` is the same as swapping `a[0]` with `a[1]` outside of the method.

7.7 Returning an Array from a Method
- You can pass arrays when invoking a method; a method may also return an array.
- For example, the method below returns an array that is the reversal of another array:
```java
public static int[] reverse(int[] list) {
    int[] result = new int[list.length]; // creates the new array result

    // copies elements from array list to array result, in reverse order
    for (int i = 0, j = result.length - 1; i < list.length; i++, j--) {
        result[j] = list[i];
    }
    return result;
}
```
- The following statements produce a new array list2 with elements 6, 5, 4, 3, 2, 1:
```java
int[] list1 = new int[]{1, 2, 3, 4, 5, 6};
int[] list2 = reverse(list1);
```
7.8 Case Study: Counting the Occurrences of Each Letter
- Generate 100 lowercase letters randomly and assign them to an array of characters.
- Count the occurrences of each letter in the array.
LISTING 7.4 CountLettersInArray.java
```java
/* Output
The lowercase letters are:
e n n e v n s f w x i u b x w w m y v h
o c j d y t b e c p w q h e w d u v t q
p c d k q m v j o k n u x w f c b p p n
z t x f e m o g n o y y l b s b h f a h
t e i f a h f x l e y u i w v g

The occurrences of each letter are:
2 a 5 b 4 c 4 d 7 e 6 f 3 g 5 h 3 i 2 j
2 k 2 l 3 m 5 n 4 o 4 p 3 q 0 r 2 s 4 t
4 u 7 v 8 w 5 x 5 y 1 z
*/
public class CountLettersInArray {
    /** Main method */
    public static void main(String[] args) {
        // Declare and create an array
        char[] chars = createArray();

        // Display the array
        System.out.println("The lowercase letters are:");
        displayArray(chars);

        // Count the occurrences of each letter
        int[] counts = countLetters(chars);

        // Display counts
        System.out.println();
        System.out.println("The occurrences of each letter are:");
        displayCounts(counts);
    }

    /** Create an array of characters */
    public static char[] createArray() {
        // Declare an array of characters and create it
        char[] chars = new char[100];

        // Create lowercase letters randomly and assign
        // them to the array
        for (int i = 0; i < chars.length; i++)
            chars[i] = RandomCharacter.getRandomLowerCaseLetter();

        // Return the array
        return chars;
    }

    /** Display the array of characters */
    public static void displayArray(char[] chars) {
        // Display the characters in the array, 20 on each line
        for (int i = 0; i < chars.length; i++) {
            if ((i + 1) % 20 == 0)
                System.out.println(chars[i] + " ");
            else
                System.out.print(chars[i] + " ");
        }
    }

    /** Count the occurrences of each letter */
    public static int[] countLetters(char[] chars) {
        // Declare and create an array of 26 int
        int[] counts = new int[26];

        // For each lowercase letter in the array, count it
        for (int i = 0; i < chars.length; i++)
            counts[chars[i] - 'a']++;

        return counts;
    }

    /** Display counts */
    public static void displayCounts(int[] counts) {
        for (int i = 0; i < counts.length; i++) {
            if ((i + 1) % 10 == 0)
                System.out.println(counts[i] + " " + (char)(i + 'a'));
            else
                System.out.print(counts[i] + " " + (char)(i + 'a') + " ");
        }
    }
}
```
FIGURE 7.8 (a) An array of 100 characters is created when executing createArray. (b) This array is returned and assigned to the variable chars in the main method.

(The figure shows the activation records: while createArray executes, its local variable chars references the array of 100 characters; after the return, the main method's variable chars references the same array.)

7.9 Variable-Length Argument Lists
- A variable number of arguments of the same type can be passed to a method and treated as an array.

LISTING 7.5 VarArgsDemo.java
```java
public class VarArgsDemo {
    public static void main(String[] args) {
        printMax(34, 3, 3, 2, 56.5);
        printMax(new double[]{1, 2, 3});
    }

    public static void printMax(double... numbers) {
        if (numbers.length == 0) {
            System.out.println("No argument passed");
            return;
        }

        double result = numbers[0];
        for (int i = 1; i < numbers.length; i++)
            if (numbers[i] > result)
                result = numbers[i];

        System.out.println("The max value is " + result);
    }
}
```
The max value is 56.5
The max value is 3.0

7.10 Searching Arrays
- Searching is the process of looking for a specific element in an array; for example, discovering whether a certain score is included in a list of scores. Searching is a common task in computer programming.
- There are many algorithms and data structures devoted to searching. In this section, two commonly used approaches are discussed: linear search and binary search.

7.10.1 The Linear Search Approach
- The linear search approach compares the key element, key, sequentially with each element in the array list. The method continues to do so until the key matches an element in the list or the list is exhausted without a match being found.
- If a match is made, the linear search returns the index of the element in the array that matches the key.
If no match is found, the search returns -1.

(Figure: linear search compares key with list[0], list[1], ... in turn until a match is found or the list is exhausted.)

```java
public class LinearSearch {
    /** The method for finding a key in the list */
    public static int linearSearch(int[] list, int key) {
        for (int i = 0; i < list.length; i++)
            if (key == list[i])
                return i;
        return -1;
    }
}
```
- The linear search method compares the key with each element in the array.
```java
int[] list = {1, 4, 4, 2, 5, -3, 6, 2};
int i = LinearSearch.linearSearch(list, 4);  // Returns 1
int j = LinearSearch.linearSearch(list, -4); // Returns -1
int k = LinearSearch.linearSearch(list, -3); // Returns 5
```
7.10.2 The Binary Search Approach
- For binary search to work, the elements in the array must already be ordered. Without loss of generality, assume that the array is in ascending order, for example:

2 4 7 10 11 45 50 59 60 66 69 70 79

- The binary search first compares the key with the element in the middle of the array.
- If the key is less than the middle element, you only need to search for the key in the first half of the array.
- If the key is equal to the middle element, the search ends with a match.
- If the key is greater than the middle element, you only need to search for the key in the second half of the array.
- The binarySearch method returns the index of the element in the list that matches the search key if it is contained in the list. Otherwise, it returns -(insertion point + 1).
- The insertion point is the point at which the key would be inserted into the list.

![Binary search diagram](image)
FIGURE 7.9 Binary search eliminates half of the list from further consideration after each comparison.
LISTING 7.7 BinarySearch.java
```java
public class BinarySearch {
    /** Use binary search to find the key in the list */
    public static int binarySearch(int[] list, int key) {
        int low = 0;
        int high = list.length - 1;

        while (high >= low) {
            int mid = (low + high) / 2;
            if (key < list[mid])
                high = mid - 1;
            else if (key == list[mid])
                return mid;
            else
                low = mid + 1;
        }

        return -(low + 1); // Now high < low; the key is not found
    }
}
```
- To better understand this method, trace it with the following statements and identify low and high when the method returns.
```java
int[] list = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66, 69, 70, 79};
int i = BinarySearch.binarySearch(list, 2);  // Returns 0
int j = BinarySearch.binarySearch(list, 11); // Returns 4
int k = BinarySearch.binarySearch(list, 12); // Returns -6
int l = BinarySearch.binarySearch(list, 1);  // Returns -1
int m = BinarySearch.binarySearch(list, 3);  // Returns -2
```
<table> <thead> <tr> <th>Method</th> <th>Low</th> <th>High</th> <th>Value Returned</th> </tr> </thead> <tbody> <tr> <td>binarySearch(list, 2)</td> <td>0</td> <td>1</td> <td>0</td> </tr> <tr> <td>binarySearch(list, 11)</td> <td>3</td> <td>5</td> <td>4</td> </tr> <tr> <td>binarySearch(list, 12)</td> <td>5</td> <td>4</td> <td>-6</td> </tr> <tr> <td>binarySearch(list, 1)</td> <td>0</td> <td>-1</td> <td>-1</td> </tr> <tr> <td>binarySearch(list, 3)</td> <td>1</td> <td>0</td> <td>-2</td> </tr> </tbody> </table>

### 7.11 Sorting Arrays
- Sorting, like searching, is also a common task in computer programming. Many different algorithms have been developed for sorting. This section introduces a simple, intuitive sorting algorithm: selection sort.
- **Selection sort** finds the smallest number in the list and places it first. It then finds the smallest number remaining and places it second, and so on until the list contains only a single number.
![Selection Sort Diagram](image)
**FIGURE 7.11** Selection sort repeatedly selects the smallest number and swaps it with the first number in the list.

LISTING 7.8 SelectionSort.java
```java
public class SelectionSort {
    /** The method for sorting the numbers */
    public static void selectionSort(double[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            // Find the minimum in list[i..list.length-1]
            double currentMin = list[i];
            int currentMinIndex = i;

            for (int j = i + 1; j < list.length; j++) {
                if (currentMin > list[j]) {
                    currentMin = list[j];
                    currentMinIndex = j;
                }
            }

            // Swap list[i] with list[currentMinIndex] if necessary
            if (currentMinIndex != i) {
                list[currentMinIndex] = list[i];
                list[i] = currentMin;
            }
        }
    }
}
```
- To understand this method better, trace it with the following statements:
```java
double[] list = {1, 9, 4.5, 6.6, 5.7, -4.5};
SelectionSort.selectionSort(list);
// list is now {-4.5, 1.0, 4.5, 5.7, 6.6, 9.0}
```
7.12 The Arrays Class
- The `Arrays.binarySearch` Method: Since binary search is frequently used in programming, Java provides several overloaded binarySearch methods for searching for a key in an array of int, double, char, short, long, and float in the `java.util.Arrays` class. For example, the following code searches for keys in an array of numbers and an array of characters.
```java
int[] list = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66, 69, 70, 79};
System.out.println("Index is " +
    java.util.Arrays.binarySearch(list, 11)); // displays "Index is 4"

char[] chars = {'a', 'c', 'g', 'x', 'y', 'z'};
System.out.println("Index is " +
    java.util.Arrays.binarySearch(chars, 't')); // the insertion point is 3, so the return
                                                // value is -3 - 1; displays "Index is -4"
```
- For the binarySearch method to work, the array must be pre-sorted in increasing order.
- The `Arrays.sort` Method: Since sorting is frequently used in programming, Java provides several overloaded sort methods for sorting an array of int, double, char, short, long, and float in the `java.util.Arrays` class.
For example, the following code sorts an array of numbers and an array of characters.
```java
double[] numbers = {6.0, 4.4, 1.9, 2.9, 3.4, 3.5};
java.util.Arrays.sort(numbers); // numbers is now {1.9, 2.9, 3.4, 3.5, 4.4, 6.0}

char[] chars = {'a', 'A', '4', 'F', 'D', 'P'};
java.util.Arrays.sort(chars);   // chars is now {'4', 'A', 'D', 'F', 'P', 'a'}
```
7.13 Command-Line Arguments
- The main method can receive string arguments from the command line.
- In the main method, get the arguments from args[0], args[1], ..., args[n], which correspond to arg0, arg1, ..., argn on the command line.
```
java Calculator 2 + 3
```
**LISTING 7.9 Calculator.java**
- Problem: Write a program that performs binary operations on integers. The program receives three parameters: two integer operands and an operator.
```java
public class Calculator {
    /** Main method */
    public static void main(String[] args) {
        // Check the number of strings passed
        if (args.length != 3) {
            System.out.println(
                "Usage: java Calculator operand1 operator operand2");
            System.exit(0);
        }

        // The result of the operation
        int result = 0;

        // Determine the operator ('.' stands for multiplication,
        // because '*' would be expanded by the command shell)
        switch (args[1].charAt(0)) {
            case '+': result = Integer.parseInt(args[0]) +
                               Integer.parseInt(args[2]);
                      break;
            case '-': result = Integer.parseInt(args[0]) -
                               Integer.parseInt(args[2]);
                      break;
            case '.': result = Integer.parseInt(args[0]) *
                               Integer.parseInt(args[2]);
                      break;
            case '/': result = Integer.parseInt(args[0]) /
                               Integer.parseInt(args[2]);
        }

        // Display result
        System.out.println(args[0] + ' ' + args[1] + ' ' + args[2]
            + " = " + result);
    }
}
```
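As a closing sketch tying Sections 7.10 through 7.12 together: sort first, then search, and use a negative return value to recover the insertion point. The variable names below are illustrative:

```java
import java.util.Arrays;

public class SortAndSearch {
    public static void main(String[] args) {
        int[] list = {59, 2, 45, 10, 7};
        Arrays.sort(list); // list is now {2, 7, 10, 45, 59}

        int found = Arrays.binarySearch(list, 10);
        System.out.println(found); // prints 2

        int missing = Arrays.binarySearch(list, 20);
        // A negative result encodes the insertion point: -(insertionPoint + 1)
        int insertionPoint = -(missing + 1);
        System.out.println(insertionPoint); // prints 3: 20 would go before 45
    }
}
```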
Oracle Forms & a Service Oriented Architecture (SOA). A Whitepaper from Oracle Inc. June 2007

# Table of Contents

OVERVIEW..........................5
HOW CAN FORMS BE PART OF A SERVICE ORIENTED ARCHITECTURE?..........................5
THE THREE AREAS..........................5
CALLING OUT FROM ORACLE FORMS..........................6
USE OF EXTERNAL SERVICES..........................6
CALLING JAVA..........................6
Web Services..........................11
Basic flow..........................11
Identifying the WSDL file..........................11
Making the proxy..........................11
Package it all up..........................12
Import into Forms..........................12
What if the Web service is secured with a password?..........................13
Business Process Execution Language
(BPEL)..............................................................................14 An example..........................................................................................................................................14 EXPOSING FORMS BUSINESS LOGIC TO THE OUTSIDE WORLD........................................18 OVERVIEW..........................................................................................................................................18 A SIMPLE EXAMPLE........................................................................................................................18 USING THE APPLICATION SERVER'S INFRASTRUCTURE............................................................22 Enterprise Manager............................................................................................................................22 Single sign-on server...........................................................................................................................25 Enterprise User Security....................................................................................................................26 Switching it on....................................................................................................................................26 Defining users in OID..........................................................................................................................28 CONCLUSION.......................................................................................................................................31 TABLE OF FIGURES <table> <thead> <tr> <th>Figure</th> <th>Description</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>FIGURE 1</td> <td>THE JAVA IMPORTER MENU ITEM</td> <td>7</td> </tr> <tr> <td>FIGURE 2</td> <td>JAVA IMPORTER DIALOG</td> <td>7</td> </tr> <tr> <td>FIGURE 3</td> <td>BASIC FLOW</td> <td>11</td> </tr> <tr> <td>FIGURE 4</td> <td>WEB SERVICE PROXY WIZARD</td> <td>12</td> 
</tr> <tr> <td>FIGURE 5</td> <td>BPEL PROCESS FLOW</td> <td>15</td> </tr> <tr> <td>FIGURE 6</td> <td>APPLYING FOR A LOAN</td> <td>16</td> </tr> <tr> <td>FIGURE 7</td> <td>APPROVING THE LOAN</td> <td>16</td> </tr> <tr> <td>FIGURE 8</td> <td>CHOOSING THE BEST OFFER</td> <td>17</td> </tr> <tr> <td>FIGURE 9</td> <td>EXAMPLE FORM</td> <td>18</td> </tr> <tr> <td>FIGURE 10</td> <td>SCOTT IS LOGGED ON AND HE IS NOT THE PRESIDENT SO HE CANNOT SEE KING'S SALARY</td> <td>20</td> </tr> <tr> <td>FIGURE 11</td> <td>COPYING A FUNCTION TO THE DATABASE</td> <td>21</td> </tr> <tr> <td>FIGURE 12</td> <td>FORMS EM HOME PAGE</td> <td>22</td> </tr> <tr> <td>FIGURE 13</td> <td>EM USER SESSIONS PAGE</td> <td>23</td> </tr> <tr> <td>FIGURE 14</td> <td>EM CONFIGURATIONS PAGE</td> <td>23</td> </tr> <tr> <td>FIGURE 15</td> <td>EM TRACE CONFIGURATIONS PAGE</td> <td>24</td> </tr> <tr> <td>FIGURE 16</td> <td>EM ENVIRONMENT PAGE</td> <td>24</td> </tr> <tr> <td>FIGURE 17</td> <td>EM JVM CONTROLLERS PAGE</td> <td>25</td> </tr> <tr> <td>FIGURE 18</td> <td>EM UTILITIES PAGE</td> <td>25</td> </tr> <tr> <td>FIGURE 19</td> <td>SSO FLOW</td> <td>26</td> </tr> <tr> <td>FIGURE 20</td> <td>SWITCHING ON SSO</td> <td>27</td> </tr> <tr> <td>FIGURE 21</td> <td>SINGLE SIGN-ON EXAMPLE</td> <td>27</td> </tr> </tbody> </table> OVERVIEW More and more businesses are looking to the principles of Service Oriented Architecture (SOA) to align their business and I.T. needs. The ability to build services modeled on business functions, reuse and orchestration of common, loosely coupled services and the agility associated with working with modular services, built on recognized standards, are attractive options. Oracle Forms has been very successful in the market place but has traditionally been a monolithic tool. You either used Forms (and Reports and possibly Graphics) and only Forms or you didn't use it. 
In the new world, where disparate and distributed services make up much of new development, Forms has changed from being monolithic to being part of a Service Oriented Architecture. How can Forms be part of a Service Oriented Architecture? Recent versions of Oracle Forms have gained functionality that makes it possible to integrate existing (or new) Forms applications with new or existing development that uses Service Oriented Architecture concepts. With its support for Java and its integration into SOA, Forms provides an incremental approach for developers who need to extend their business platform to JEE. This allows Oracle Forms customers to retain their investment in Oracle Forms while leveraging the opportunities offered by complementary technologies. But how do you actually integrate Forms with a Service Oriented Architecture? How can Forms be a part of SOA? This whitepaper is meant to shed some light on this topic. THE THREE AREAS There are three areas where Oracle Forms can be integrated with a Service Oriented Architecture: - **Use of external services** With functionality recently added to Forms it is now possible to call from Forms to Java, making it feasible to use Web services and BPEL processes. - **Exposure of Oracle Forms business logic to the outside world** In a world of distributed applications, Forms code might need to be moved out of Forms and into a place where it can be used by other applications. This section covers how to achieve that. - **Using the Application Server's infrastructure** Oracle Forms coexists and integrates with Oracle Application Server's infrastructure functionality. Forms' integration with Oracle Single Sign-on and Enterprise Manager is covered in this section. ## CALLING OUT FROM ORACLE FORMS ### Use of external services Oracle Forms' use of external services hinges on recent functionality regarding Java integration. Oracle Forms can call out to Java on the file system.
It can make use of JavaBeans, and its native screen widgets can be customized with custom Java code (the Pluggable Java Component architecture). In this section we are going to touch on the functionality of calling out to Java code residing on the file system. Once that functionality was put in place, Oracle Forms was able to call all kinds of external services, such as Web services, and be part of a BPEL process flow. But let's start at the beginning. ### Calling Java The functionality that makes it possible to call out to Java on the file system is called the Java Importer. It is incorporated into the Forms Builder; it takes a class as input and creates a PL/SQL package that acts as a wrapper around the Java class, making it possible to call Java code from a PL/SQL trigger or function/procedure. The Java Importer scans the local machine in the directories specified in the Registry variable FORMS_BUILDER_CLASSPATH and finds all Java class files. The Importer can scan JAR files only one level deep: it cannot find class files in JAR files that are in turn inside another JAR file. Using the new built-in packages `ora_java` and `jni`, the Importer will take a class definition like the following one and make a PL/SQL wrapper package (the `formstest` package name is inferred from the generated wrapper shown below).

```java
package formstest;

import java.util.Date;

public class Dates {

    public static void main(String[] args) {
        System.out.println(theDate());
    }

    public static final String theDate() {
        return new Date().toString();
    }
}
```

The package will be created with the name of the class.
Each public Java function will have a corresponding PL/SQL function, and each Java function returning `void` will result in a corresponding PL/SQL procedure:

```sql
PACKAGE BODY dates IS

  args JNI.ARGLIST;

  -- Constructor for signature ()V
  FUNCTION new RETURN ORA_JAVA.JOBJECT IS
  BEGIN
    args := NULL;
    RETURN (JNI.NEW_OBJECT('formstest/dates', '()V', args));
  END;

  -- Method: main ([Ljava/lang/String;)V
  PROCEDURE main(a0 ORA_JAVA.JARRAY) IS
  BEGIN
    args := JNI.CREATE_ARG_LIST(1);
    JNI.ADD_OBJECT_ARG(args, a0, '[Ljava/lang/String;');
    JNI.CALL_VOID_METHOD(TRUE, NULL, 'formstest/dates', 'main',
                         '([Ljava/lang/String;)V', args);
  END;

  -- Method: theDate ()Ljava/lang/String;
  FUNCTION theDate RETURN VARCHAR2 IS
  BEGIN
    args := NULL;
    RETURN JNI.CALL_STRING_METHOD(TRUE, NULL, 'formstest/dates', 'theDate',
                                  '()Ljava/lang/String;', args);
  END;

BEGIN
  NULL;
END;
```

For Java classes that are never instantiated (for example, classes that expose only static members), no `new` function is generated. The package code can subsequently be used at runtime to call out to the Java code and return not just scalar values, as in this example, but complex data structures like arrays and Java objects. There are functions in the `ora_java` built-in PL/SQL package to manipulate arrays and complex data types. At runtime the class files need to be accessible to the Forms runtime. To achieve that, you add the full path of either the directory the class file is in, or the JAR file itself, to the CLASSPATH environment variable in the default.env file (see the EM section below for more information). It is also possible to import system classes and thus manipulate Java data types and extract data that can be used in PL/SQL. One such example is the java.lang.Exception class. The following is a partial representation of the PL/SQL package code that results from importing the Exception class.
```plsql
PACKAGE BODY Exception_ IS

  args JNI.ARGLIST;

  -- Constructor for signature (Ljava/lang/Throwable;)V
  FUNCTION new(a0 ORA_JAVA.JOBJECT) RETURN ORA_JAVA.JOBJECT IS
  BEGIN
    args := JNI.CREATE_ARG_LIST(1);
    JNI.ADD_OBJECT_ARG(args, a0, 'java/lang/Throwable');
    RETURN (JNI.NEW_OBJECT('java/lang/Exception',
                           '(Ljava/lang/Throwable;)V', args));
  END;

  -- Method: toString ()Ljava/lang/String;
  FUNCTION toString(obj ORA_JAVA.JOBJECT) RETURN VARCHAR2 IS
  BEGIN
    args := NULL;
    RETURN JNI.CALL_STRING_METHOD(FALSE, obj, 'java/lang/Exception',
                                  'toString', '()Ljava/lang/String;', args);
  END;

BEGIN
  NULL;
END;
```

'Exception' is a reserved word in PL/SQL, so the Importer resolves the potential naming conflict by adding an underscore. The resulting Exception_ package is used in the following PL/SQL function, shown in more depth later in this white paper, when obtaining a currency conversion rate from a Web service. The Exception_.toString call is used to get a character representation of an exception obtained from Java that is of a type PL/SQL cannot understand but Java can.

```plsql
function get_conversion_rate(currency_code varchar2) return number is
  conv  ora_java.jobject;
  rate  ora_java.jobject;
  excep ora_java.jobject;
begin
  conv := currencyConverter.new;
  rate := currencyConverter.getrate(conv, 'USD', currency_code);
  return float_.floatvalue(rate);
exception
  when ora_java.java_error then
    message('Error: ' || ora_java.last_error);
    return 0;
  when ora_java.exception_thrown then
    excep := ora_java.last_exception;
    message('Exception: ' || Exception_.toString(excep));
    return 0;
end;
```

Web Services

A Web service is a piece of code that can accept remote procedure calls using the HTTP protocol and an XML-based data format called SOAP, and return data in the form of XML to the originator. Web services are a major part of any Service Oriented Architecture and make it possible to distribute business logic, exposed as a service, onto servers on the local network or even the Internet.
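Although the generated proxy hides all of this plumbing, it can help to see roughly what travels over the wire. The following is a hand-written Java sketch of a SOAP 1.1 request for a currency-conversion call; the operation name, parameter names and namespace are illustrative assumptions, not taken from any real WSDL:

```java
// Sketch only: builds the kind of SOAP 1.1 envelope a currency-conversion
// service might receive. The service namespace (urn:CurrConv), operation
// (getRate) and element names are assumptions for illustration.
public class SoapSketch {

    /** Builds a minimal SOAP 1.1 envelope for a getRate(from, to) call. */
    public static String buildEnvelope(String fromCurrency, String toCurrency) {
        return ""
            + "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n"
            + "  <soap:Body>\n"
            + "    <getRate xmlns=\"urn:CurrConv\">\n"
            + "      <fromCurrency>" + fromCurrency + "</fromCurrency>\n"
            + "      <toCurrency>" + toCurrency + "</toCurrency>\n"
            + "    </getRate>\n"
            + "  </soap:Body>\n"
            + "</soap:Envelope>\n";
    }

    public static void main(String[] args) {
        // The proxy would POST this document to the service endpoint over HTTP
        // and parse the XML response; here we just print the request.
        System.out.println(buildEnvelope("USD", "EUR"));
    }
}
```

The point is only that a Web service call is ordinary HTTP plus XML; the proxy class described next generates and parses documents of this shape so that neither Forms nor its PL/SQL code ever has to.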
Forms cannot make use of a Web service directly, but now that we know how to call out to Java we can make it happen. With a proxy made in Oracle's JDeveloper we can call any Web service from Forms. Basic flow See this link to learn more about the WSDL file standard. Identifying the WSDL file A Web service is defined by a WSDL (Web Service Description Language) file. It defines how to call the service and what you can expect as return data. Since a Web service is a network service, its definition is also a network resource. The administrator of the Web service you are interested in should be able to tell you how to get the WSDL file URL. Web services may also be published in a Universal Description, Discovery and Integration (UDDI) registry, which acts like a telephone directory of Web services. Making the proxy A proxy, in this context, means a Java class that has been instrumented to know how to call a specific Web service. Oracle's JDeveloper can help us make the proxy. JDeveloper has a wizard that takes a WSDL file as input and creates a Java package that we can subsequently import into Forms, making it possible for our Forms application to call the Web service. Once you have the WSDL file URL you can plug it into the JDeveloper Web Service Proxy Wizard. The screenshot below has a WSDL file already plugged in. ![Create Web Service Proxy](image) **Figure 4: Web service Proxy Wizard** The name of the service defined in this WSDL file is CurrConv and the wizard has picked that up. See this link for exact instructions on how to make a Web service proxy. **Package it all up** See this link to learn how to package up the Web service proxy you have created so that you can import it into Forms. **Import into Forms** Once you have the proxy class you can import it into Forms and call its functions from PL/SQL code.
The operative functions imported from the proxy are listed below:

```sql
package CurrencyConverter /* currconv3.mypackage.CurrConv3Stub */ is
  function new return ora_java.jobject;
  function getRate(obj ora_java.jobject,
                   a0  varchar2,
                   a1  varchar2) return ora_java.jobject;
end;
```

The `getRate` function is the function that we are after. It takes two currency symbols and returns a Java object that we know is a number of type float (the first argument is the class instance and will be described later). The function called from the WHEN-BUTTON-PRESSED trigger is listed below:

```sql
function get_conversion_rate(currency_code varchar2) return number is
  conv  ora_java.jobject;
  rate  ora_java.jobject;
  excep ora_java.jobject;
begin
  conv := currencyConverter.new;
  rate := currencyConverter.getrate(conv, 'USD', currency_code);
  return float_.floatvalue(rate);
exception
  when ora_java.java_error then
    message('Error: ' || ora_java.last_error);
    return 0;
  when ora_java.exception_thrown then
    excep := ora_java.last_exception;
    message('Exception: ' || Exception_.toString(excep));
    return 0;
end;
```

This code delivers the rate used in the application. It takes a currency symbol (in this example we only convert from US dollars) as an argument and returns a PL/SQL number which holds the rate. We first declare three local variables of type JOBJECT (JOBJECT is a type defined in the ora_java package representing a Java object of any kind). The first line after `begin` fetches a reference to an instance of the proxy and stores it in the variable `conv`. The second line gets the rate from that instance. The return clause makes a number from the resulting rate Java object with the help of an imported Java system class, `java.lang.Float`. It has a call, `floatValue`, which converts a float to a number, which is what we need here (Forms cannot handle a Java float directly but it can handle a number). The exception handler is reached if any of the calls out to Java causes (throws is the term used in Java) an exception or an error.
In case of an exception we get the exception with the help of the `ora_java` package call `last_exception`. To actually see it, we call the imported `Exception_` package routine `toString`. **What if the Web service is secured with a password?** This page has a good tutorial on how to change a Web service proxy to use authentication. Business Process Execution Language (BPEL) Now that we have established that Forms can call out to Java and use Web services, we can start looking at BPEL, or Business Process Execution Language. BPEL is an emerging standard for orchestrating disparate and heterogeneous business services into a process flow. Oracle offers a comprehensive BPEL solution, and Forms can take part in a BPEL process flow as a manual process step, as an initiating step, or both. An example In the following example Forms is going to act both as an initiator and as a manual process step. The process is that of a consumer loan. The business flow diagram below outlines what the process looks like. The process first receives an application for a consumer loan. This step is performed in a Forms application that communicates with the BPEL server either through a Web service exposed by the BPEL server or by way of a Java interface supplied by Oracle as a part of the BPEL server. Figure 5: BPEL Process Flow The screenshot above shows the application that the fictitious user Dave uses to send in his application for a car loan. When the user clicks the Submit button the Forms application sets in motion the BPEL process flow described earlier. The process fetches Dave's Social Security Number and then gets his credit rating. As the developers of the application that Dave uses, we don't have to know how this is achieved. That is up to the developer of the BPEL process. We just need to know how to kick off the process. After having fetched Dave's credit rating, the BPEL process collects two offers from two different loan vendors, Star Loans and United Loans.
United Loans uses an automated process that we, in this example, are not interested in. Star Loans, on the other hand, uses a manual process that is implemented in a Forms application. The loan officer at Star Loans queries the BPEL process and sees that an application has arrived from Dave. He/she determines the appropriate interest rate and clicks the Approve button. This kicks off the next step of the process, where Dave has the opportunity to select the best offer. He does that in the same Forms application he used earlier: Dave sees the two loan offers in his application and can select the best one by clicking the Accept button. This will again cause Forms to communicate with the BPEL server and cause the process to conclude. Note the Refresh button. Forms could potentially poll the BPEL server with the help of a Forms timer. This application does not do that. Instead Dave has to manually query the BPEL process. Forms cannot yet (as of version 10.1.2) easily register an interest in a BPEL event and automatically be notified if input is needed from it. In version 11 of Forms we intend to have functionality in place that will make this much easier. For more information on how to achieve BPEL integration and a working example, see this link. EXPOSING FORMS BUSINESS LOGIC TO THE OUTSIDE WORLD Overview So far we have covered how Forms can call out to the outside world. What about the opposite? Can the outside world use existing Forms business logic? Perhaps exposed as a Web service? Or could Forms business logic (which is written in PL/SQL) be called directly? The answer is a qualified yes. The close integration of the user interface and the business logic is what makes development with Forms so simple and intuitive. However, this tight integration of UI code and business logic makes the exposure of the pure business services to outside consumers much more challenging.
It is however possible to move Forms PL/SQL code from Forms to the database and from there expose it to the outside world, either as a database procedure/function or as a PL/SQL Web service. A simple example In this simple example we have a block in Forms with a column labeled Salary that shows the total salary for the employees in the Emp table. The sum is calculated with a POST-QUERY trigger. ![Example form](image) The trigger calls a local PL/SQL function (it is executed by Forms rather than the database):

```plsql
function calc_total_sal return number is
  total      number;
  l_mgr      number;
  l_mgr_name varchar2(30);
  usr        varchar2(200) := get_application_property(username);
begin
  select mgr into l_mgr from emp where empno = :emp.empno;
  select ename into l_mgr_name from emp where mgr is null;
  if (l_mgr is not null) then
    select sal + nvl(comm, 0) into total from emp where empno = :emp.empno;
  elsif (usr <> l_mgr_name) then
    total := -1;
  elsif (usr = l_mgr_name) then
    select sal + nvl(comm, 0) into total from emp where empno = :emp.empno;
  end if;
  return total;
end;
```

The business logic implemented in the function is this: if the employee whose salary is being calculated is not the President, calculate his total salary by adding the sal and comm columns together, taking into account that the comm column can potentially be null. If the row is the President's and the current user is not the President, return -1; otherwise return the total salary for the President (nobody but the President can see the salary of the President of the company). To use this code from the database we need to refactor it. The database does not understand references to any Forms objects, nor does it understand any of the Forms-specific PL/SQL built-ins. In this case we have a call to a Forms built-in, namely get_application_property. We also have a reference to the empno field.
If we take them out and pass them in as parameters, the function now looks like this:

```plsql
function calc_total_sal(l_empno number, usr varchar2) return number is
  total      number;
  l_mgr      number;
  l_mgr_name varchar2(30);
begin
  select mgr into l_mgr from emp where empno = l_empno;
  select ename into l_mgr_name from emp where mgr is null;
  if (l_mgr is not null) then
    select sal + nvl(comm, 0) into total from emp where empno = l_empno;
  elsif (usr <> l_mgr_name) then
    total := -1;
  elsif (usr = l_mgr_name) then
    select sal + nvl(comm, 0) into total from emp where empno = l_empno;
  end if;
  return total;
end;
```

Note that the variables usr and l_empno are now external and have to be passed in. The POST-QUERY trigger has to change accordingly, to say:

```plsql
:emp.total_sal := calc_total_sal(:emp.empno, get_application_property(username));
```

but after that it will continue to function as before. Figure 10: Scott is logged on and he is not the President so he cannot see KING's salary Moving a PL/SQL procedure from a form definition file to the database can be achieved in the Forms Builder by dragging and dropping the PL/SQL unit between a form module and a database node in the Forms navigator. Now that we have the Forms PL/SQL code in the database, we can leverage its business logic in any application that can call a database function. With the help of JDeveloper we can now also make a Web service out of the code, making it possible for environments that cannot directly call into the database, but which can call a Web service, to leverage legacy Forms code. See this web page for more information on how to use JDeveloper to make a Web service from database PL/SQL code. **USING THE APPLICATION SERVER'S INFRASTRUCTURE** When Forms becomes part of a larger setting it needs to be able to participate in application-server-wide functions such as maintenance, management and user authentication. It doesn't make much sense to have one application use its own authentication scheme and all the others use another scheme.
In versions 9 and 10, Forms is a full member of the Application Server infrastructure and is automatically configured both to use the Single Sign-on server (Oracle OID and SSO) and to be managed through Oracle Enterprise Manager.

**Enterprise Manager**

As part of the Oracle Application Server platform, Oracle Forms applications can now be managed remotely through Oracle Enterprise Manager's Application Server Control running in a browser. This is the screen that meets the administrator when he logs onto the Application Server Control console: ![Forms EM Home page](image) In this overview screen the administrator can monitor the overall status of the system and critical metrics such as CPU and memory usage for Forms and its main components.

--- Oracle Forms and SOA. Page 22

In the next screen, called User Sessions, each user's session is listed with crucial metrics such as its CPU and memory usage, IP address, username and trace settings. The configuration section used when starting the application is also shown. If tracing is turned on from here, the trace log can be viewed from this console. Individual user sessions can be terminated from here. Figure 13: EM User Sessions page Figure 14: EM Configurations page In this section you can create, edit and delete configuration settings in three different configuration files: formsweb.cfg, which is the main configuration file for a Forms installation; ftrace.cfg, which is the trace facility configuration file; and the Registry.dat file, which governs the font mapping. Here is a screenshot of the ftrace section: ![Figure 15: EM Trace Configurations page](image1) ![Figure 16: EM Environment page](image2) In the Environment screen the administrator can manipulate environment settings stored in the default.env file that pertain to Forms. JVM Controllers are used to reduce memory requirements when calling Java from the file system, and they are controlled from this screen.
The last screen is used for additional functionality not yet accessible from other screens. **Single sign-on server** Changing Forms to use the Single Sign-on server rather than its own database-based user authentication method is a matter of setting a flag in a configuration file. Once Forms is set up to use SSO, users who have logged on through their Forms application need not reauthenticate themselves when they log in to a Portal or a Java application. SSO-based authentication happens in five steps. In step 1 the user supplies his credentials (username and password). Instead of calling the database to authenticate the user, Forms calls the SSO server, which in turn queries the OID/LDAP server to see if the user has enough privileges. If so, it returns the actual database username and password that will be used by Forms to log on. During the session Forms can access data about the logged-in user from the OID repository through a PL/SQL API. The same single sign-on instance can of course be used by other applications served from the same application server, so users need not be stored in multiple places and need be authenticated only once, namely the first time they log in to any of the applications making use of the same SSO server. **SSO & OID** **Figure 19: SSO flow** **Enterprise User Security** The database can also use the Single Sign-on server as its authentication scheme, with functionality called Enterprise User Security. This makes it possible to store authentication information in a single place, both for access to the database through an application and through direct means, such as SQL*Plus or the new SQL Developer. **Switching it on** Making Forms use the Single Sign-on server is as easy as flipping a switch in the formsweb.cfg file. Figure 20: Switching on SSO Here we are setting the ssoMode switch in Enterprise Manager's Application Server Configuration section to true.
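On disk, the same change amounts to a single parameter in formsweb.cfg. A hedged sketch of what the relevant entry might look like (the section name and any surrounding parameters will vary per installation and named configuration):

```ini
[default]
# Route Forms authentication through the SSO server
# instead of showing the Forms database login dialog
ssoMode=true
```

Setting the parameter in a named configuration section instead of [default] lets individual applications opt in or out of SSO on the same server.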
This is all that is needed to enable SSO in a default Forms installation. When a user starts a Forms application he/she will be met by this SSO login screen instead of the normal Forms login window: ![Sign In Form] Figure 21: Single Sign-on example Defining users in OID SSO users must be defined in OID. That can be done in Portal. Log in as orcladmin after clicking the link in the Application Server welcome screen that is highlighted in the following screenshot: ![Figure 22: Logging on to Portal](image) Click the Administer tab when you arrive at this screen: ![Figure 23: Administer page](image) This will take you to a part of the Portal builder that has a user interface for OID management. To create a new user, click the link in the User portal in the top right-hand corner that is highlighted in the following screenshot: Doing that will open up a screen where you can specify all the data you will need about the user, such as first and last name, username, e-mail, permissions and roles. ![Create New Users](image) **Figure 24: Create new user** ![New User page](image) **Figure 25: New User page** A mandatory part (for Forms users) that is not entirely obvious is the Resource Access Information section: this information will be used by Forms (and Reports, incidentally) when logging on to the database. The user will be authenticated by the information specified in the main screen, but the actual log-in will be done with the account specified in this section. To create a new resource, click the Create Resource button and fill in the name of the resource in this screen. The resource type should be OracleDB. Click the Next button to get to the next screen. CONCLUSION This whitepaper has covered the topic of integrating Oracle Forms with a Service Oriented Architecture.
We had a look at how Forms can call out to Java, making it possible to use Web services and process flows orchestrated by BPEL. We touched on the process of exposing existing Forms code to SOA processes by refactoring the code and moving it to the Oracle Database. Finally we explored how Forms can take advantage of the remote management capabilities of Oracle Enterprise Manager (EM) and of centralized user authentication by way of Oracle Identity Management (SSO and OID).
{"Source-Url": "https://www.oracle.com/technetwork/developer-tools/forms/documentation/forms-soa-wp-1-129441.pdf", "len_cl100k_base": 6718, "olmocr-version": "0.1.50", "pdf-total-pages": 32, "total-fallback-pages": 0, "total-input-tokens": 55828, "total-output-tokens": 7807, "length": "2e12", "weborganizer": {"__label__adult": 0.00024008750915527344, "__label__art_design": 0.000240325927734375, "__label__crime_law": 0.0002684593200683594, "__label__education_jobs": 0.0003859996795654297, "__label__entertainment": 4.094839096069336e-05, "__label__fashion_beauty": 8.7738037109375e-05, "__label__finance_business": 0.0012617111206054688, "__label__food_dining": 0.0002129077911376953, "__label__games": 0.0003445148468017578, "__label__hardware": 0.0005431175231933594, "__label__health": 0.0002014636993408203, "__label__history": 0.00013077259063720703, "__label__home_hobbies": 4.851818084716797e-05, "__label__industrial": 0.0003604888916015625, "__label__literature": 0.00012433528900146484, "__label__politics": 0.00016129016876220703, "__label__religion": 0.00020694732666015625, "__label__science_tech": 0.004852294921875, "__label__social_life": 3.9637088775634766e-05, "__label__software": 0.0189666748046875, "__label__software_dev": 0.970703125, "__label__sports_fitness": 0.0001665353775024414, "__label__transportation": 0.00024819374084472656, "__label__travel": 0.00013339519500732422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31208, 0.01334]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31208, 0.19891]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31208, 0.84699]], "google_gemma-3-12b-it_contains_pii": [[0, 95, false], [95, 3554, null], [3554, 4735, null], [4735, 4735, null], [4735, 6562, null], [6562, 8101, null], [8101, 8378, null], [8378, 9823, null], [9823, 11899, null], [11899, 12645, null], [12645, 14237, null], 
[14237, 15208, null], [15208, 17314, null], [17314, 17994, null], [17994, 18306, null], [18306, 19497, null], [19497, 20232, null], [20232, 21437, null], [21437, 23186, null], [23186, 24051, null], [24051, 24535, null], [24535, 25687, null], [25687, 26139, null], [26139, 26725, null], [26725, 27147, null], [27147, 28660, null], [28660, 29056, null], [29056, 29648, null], [29648, 30146, null], [30146, 30599, null], [30599, 31208, null], [31208, 31208, null]], "google_gemma-3-12b-it_is_public_document": [[0, 95, true], [95, 3554, null], [3554, 4735, null], [4735, 4735, null], [4735, 6562, null], [6562, 8101, null], [8101, 8378, null], [8378, 9823, null], [9823, 11899, null], [11899, 12645, null], [12645, 14237, null], [14237, 15208, null], [15208, 17314, null], [17314, 17994, null], [17994, 18306, null], [18306, 19497, null], [19497, 20232, null], [20232, 21437, null], [21437, 23186, null], [23186, 24051, null], [24051, 24535, null], [24535, 25687, null], [25687, 26139, null], [26139, 26725, null], [26725, 27147, null], [27147, 28660, null], [28660, 29056, null], [29056, 29648, null], [29648, 30146, null], [30146, 30599, null], [30599, 31208, null], [31208, 31208, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31208, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31208, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31208, null]], "pdf_page_numbers": [[0, 95, 1], [95, 3554, 2], [3554, 4735, 3], [4735, 4735, 4], [4735, 6562, 5], [6562, 8101, 6], [8101, 8378, 7], [8378, 9823, 8], [9823, 11899, 9], [11899, 12645, 10], [12645, 14237, 11], [14237, 15208, 12], [15208, 17314, 13], [17314, 17994, 14], [17994, 18306, 15], [18306, 19497, 16], [19497, 20232, 17], [20232, 21437, 18], [21437, 23186, 19], [23186, 24051, 20], [24051, 24535, 21], [24535, 25687, 22], [25687, 26139, 23], [26139, 26725, 24], [26725, 27147, 25], [27147, 28660, 26], [28660, 29056, 27], [29056, 29648, 28], [29648, 30146, 29], [30146, 30599, 30], [30599, 31208, 31], [31208, 31208, 32]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31208, 0.06479]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
f4f9cca51a404f53295d56fa6954f3e66a72272d
CHAPTER V SYSTEM IMPLEMENTATION System implementation is the process that converts the system requirements and design into program code. A suitable development environment thus helps effective implementation of the system. Using the hardware and software defined in the system development phase also helps to speed up the system development. 5.1 VPN Implementation Overview Virtual private networking will be implemented during the Information Technology subject examination to support connections between the candidates taking the examination in Examination Centers throughout the country and the examination web page located on the Examination Syndicate server. In an ordinary web-based on-line examination, candidates will access the examination provider's server to browse the examination web page and submit their responses back to the server. The server will process the responses and score the students' responses. This is the common relationship between a web server and its clients in World Wide Web activities. The Information Technology SPM examination, on the other hand, due to the security reasons discussed in an earlier chapter, requires a secure channel in the Internet connection to be established between the Examination Syndicate server, as the examination provider, and the Examination Centers before the candidates can access the examination web site. Packets of data transmitted between an Examination Center and the Examination Syndicate server will be routed through this predefined channel via the Internet and encapsulated with an encryption algorithm. To establish the VPN connection, a router-to-router VPN implementation will be deployed. The Examination Center Chief Invigilator will initiate the VPN connection to the Examination Syndicate server by making a router-to-router VPN dial-up. The router-to-router VPN will enable all of the workstations in the Examination Center to access the examination web site once the Examination Center is authenticated.
After the router-to-router VPN is established, candidates may access the examination web site folder located on the Examination Syndicate server. As the router-to-router connection routes all IP traffic from the workstations to the other end of the VPN gateway, browsing to other sites through the Internet is automatically blocked. This feature prevents candidates from browsing other Internet sites and stops outsiders from accessing the Examination Center LAN resources. 5.2 VPN Deployment A number of value-added services provided by Microsoft will be used for the VPN deployment in the Information Technology subject on-line examination. All candidates' workstations in the Examination Center and the Examination Syndicate server must be configured to enable the VPN connection between the two locations. As the examination will involve 20 candidate machines in an Examination Center accessing the examination web site at the same time, a router-to-router VPN implementation approach will be used to connect the two networks. The Examination Center server must be set to initiate a VPN connection to the Examination Syndicate. At the same time, the Examination Syndicate server must be set to accept VPN connections from the Examination Center and to grant permission to access the examination folder resources after certain authentication is performed. A proper VPN set-up ensures that data transferred between the two machines is encrypted and traverses a controlled tunnel. The Routing and Remote Access Service (RRAS) add-on for Microsoft Windows NT, together with Proxy Server 2.0, is used to create the VPN across the Internet. To ease congestion during authentication, the Remote Authentication Dial In User Service (RADIUS) authentication service will be installed on the Examination Syndicate PDC server.
The VPN setup for the on-line examination is divided into two phases: a) The Examination Center VPN gateway setup b) The Examination Syndicate VPN gateway setup 5.2.1 The Examination Center VPN Gateway The Examination Center LAN will be set up as a PPTP client by deploying RRAS and Proxy Server 2.0 together on the administrator machine. RRAS can create a router-to-router connection to a Windows NT 4 RRAS-based server. The combination of RRAS and Proxy Server 2.0 enables a single VPN dial-up connection to be shared by the other members of the LAN. Any proxy clients behind the proxy server will also be able to use the PPTP session that has been established, because once the PPTP connection is up, Proxy Server 2.0 treats it just like another network interface. All workstations within the Examination Center can gain access to the examination folder residing on the Examination Syndicate machine once the Examination Syndicate PDC server has authenticated the PPTP dial-up. 5.2.2 RRAS Installation RRAS (formerly code-named Steelhead) is Microsoft's set of enhancements to NT's RAS and Multi-Protocol Routing (MPR) services. Among the significant enhancements that RRAS includes are: a) Routing Information Protocol (RIP) 2.0 b) Open Shortest Path First (OSPF) c) RADIUS client support, with a graphical interface and administration tool d) Demand-dial routing e) PPTP server-to-server connections RRAS will be installed on the Examination Center server and on the Examination Syndicate Windows NT server as router-to-router VPN gateways to establish the PPTP connection during the examination session. To initiate a VPN session, the Chief Invigilator of an Examination Center first establishes an Internet connection; he can then make a PPTP connection to the Examination Syndicate server from the installed RRAS. RRAS can be downloaded free of charge from the Microsoft NT support web site as an add-on to the Windows NT server operating system.
The following are the system requirements prior to RRAS installation: a) Windows NT 4.0 operating system with Service Pack 3 or greater installed b) A 32-bit x86-based microprocessor (such as an Intel 80486/50 or higher, or an Intel Pentium), or a supported RISC-based microprocessor, such as the Digital Alpha systems c) One or more network adapter cards, WAN cards, or modems d) VGA monitor e) A minimum of 40 MB free disk space on the partition that will contain the Routing and Remote Access Service system files f) 16 MB RAM minimum RRAS must be installed on a Windows NT 4.0 or Windows 2000 Server platform, and Service Pack 3 or later must be applied prior to the RRAS installation. It is also suggested to install any LAN and WAN hardware, such as modem or ISDN devices for the PPP connection, before installing RRAS, and then to install the RRAS component downloaded from the Windows NT support web site. 5.2.3 Installation After RRAS is downloaded, run MPRI386.exe and install RRAS to an assigned directory. The RRAS setup routine dialog box will prompt to delete the existing RAS, RIP, SAP, and BOOTP relay agent services if they were already installed. Opt for "yes", as their Registry settings are now part of RRAS. The setup will then prompt for the services that RRAS will install in the system. Three services are available: a) Remote access service b) LAN routing c) Demand-dial routing Select all three services. The figure below shows the services offered by RRAS to be selected by the user. Figure 5.1: RRAS services. After selecting the required services, add ports to the server. To make the Internet connection and the PPTP connection, configure the dial-up port as: a) RAS client b) RAS server c) Demand-dial router Accept the default RAS Server TCP/IP configuration parameters when the window appears: "Let clients access the entire network, and use the Dynamic Host Configuration Protocol (DHCP) to dynamically assign addresses to clients".
5.2.4 Router-to-router Connection The on-line examination will deploy a router-to-router connection to carry examination data packets between the candidates in the Examination Centers and the examination web site located at the Examination Syndicate server. Both servers, the Examination Syndicate server that supplies the examination questions and the Examination Center server where candidates sit for the examination, need to be connected to the Internet. The following tasks must be performed on each server to enable the VPN connection. They are: a) Internet connection through RRAS b) PPTP connection setup c) Defining the user credentials that the servers use for validation d) Writing routing information 5.2.5 Internet Connection A dial-up connection to the Internet Service Provider (ISP) can be set up in the RRAS Administration program. RRAS uses a demand-dial interface to establish a router-to-router VPN connection and forward packets. The demand-dial interface is configured as follows: a) General tab: type the host name and IP address of the VPN server. b) Protocols tab. c) Security tab: select the option "Accept only Microsoft encrypted authentication and require data encryption" to apply encryption to all forwarded packets. d) Credentials for the demand-dial interface. In the Administrative Tools group, right-click LAN and Demand Dial Interfaces to add a new demand-dial interface to connect to the ISP. Select the Add Interface option and name the connection, e.g. Internet Connection. A Dial-up Networking phone book entry will appear; configure the connection to connect to the ISP. After the Internet connection is completed, a new dialog box titled IP Configuration-ISP Connection appears. For now, accept the defaults. As the connection to the ISP is a demand-dial interface, a prompt for the name and password used to establish the connection to the ISP will not appear; they must be provided in the Interface Credentials dialog box instead.
The figure below shows the Interface Credentials setup dialog box. ![Interface Credentials dialog box](image) *Figure 5.2: Interface credential setup* Define the user name, domain, and password for the particular server. This step can be skipped if the server has a dedicated connection to the Internet. ### 5.2.6 PPTP Connection The PPTP connection is a second dial-up connection that must be configured in the system to establish a VPN connection over the Internet. The PPTP connection is the gateway interface that encapsulates PPP packets and applies encryption to them. It also defines the control channel through which the packets will be tunneled. A PPTP connection can be set up in RRAS by right-clicking the LAN and Demand Dial Interfaces option. Select Add Interface and name the interface PPTP connection. In the Protocols and Security dialog box, select Route IP packets on this interface. Also select the two options Add a user account so a remote router can dial in and Authenticate a remote router when dialing out. The figure below shows the Protocols and Security dialog box with the appropriate check boxes selected. ![Protocols and Security dialog box](image) *Figure 5.3: PPTP dial-up Protocols and Security check boxes.* At the prompts, select the RASPPPDM VPN adapter for this connection. When configuring the Examination Center PPTP connection, enter the Internet IP address of the Examination Syndicate server at the prompt for a phone number or address. After defining the PPTP connection, define the user credentials that the Examination Syndicate server and the Examination Center NT server can use to validate themselves against each other. User credentials are the user IDs that RRAS sets up and that the routers use to identify each other. There are two types of PPTP dialing credentials: 1. Dial-out credentials 2. Dial-in credentials The dial-out credentials are the first credentials that must be created while configuring the Examination Syndicate server.
Define a user name and password for the dial-out credentials. The next window will prompt to create dial-in credentials for remote routers connecting in. Set the password for the dial-in authentication; the user name is given by RRAS and cannot be changed. Since the PPTP connection must be configured on both servers, the Examination Syndicate and the Examination Center Windows NT servers, to enable router-to-router communication, the dial-out and dial-in credentials must be assigned accordingly for mutual authentication. 5.2.7 Routing Information There are two methods of writing routing information to the routing table: a) Manual static routes b) Auto-static routes Manual static routes are appropriate for the Examination Center server, as it deals with only a small number of routes to the Examination Syndicate server. The Examination Syndicate server, on the other hand, may need to be configured with auto-static routes, as it has to manage the large number of routes connecting the Examination Centers throughout the country. Auto-static updating refreshes the static routes that are available across the router-to-router VPN connection during an examination session. ![Routing and Remote Admin](image) Figure 5.4: Routing and Remote Admin 5.3 Managing VPN VPN management is concerned with the following activities: a) User accounts b) IP address assignment to clients c) Authentication d) Session logs 5.3.1 Managing Users On a Windows NT server, users of its resources can be created and granted defined permissions by the administrator. A user can log in to the server and access the folders granted to him. Every user is authenticated during login: only a user who keys in a user name and password that match the Windows NT user account database can access the server. Most administrators set up a master account database on the Primary Domain Controller (PDC).
Many organizations set up another server to act as a Backup Domain Controller (BDC) as an alternative for authenticating users should the PDC fail. In an examination environment, the user accounts (the Examination Centers and the candidates) can be located on a different machine from the web server machine that stores the examination questions, to ease congestion. A BDC can be set up by the Examination Syndicate to replicate the Examination Center and candidate account database. 5.3.2 Managing Addresses The Examination Syndicate VPN server must have IP addresses available to allocate to the VPN server's virtual interface and to the Examination Center VPN server (the client) during the IP Control Protocol (IPCP) negotiation phase of connection establishment. The IP address allocated to the VPN client is assigned to the virtual interface of the Examination Center. 5.3.3 Managing Access Permission for remote access to the Examination Syndicate VPN can be configured in the dial-in properties of the user accounts. In the Examination Center dial-in properties, enable the Grant dial-in permission option. 5.3.4 Managing Authentication The Routing and Remote Access Service (RRAS) can be configured to use Windows NT or RADIUS as the authentication provider. As the Information Technology examination will involve many Examination Centers making remote access connections to the Examination Syndicate VPN server at the same time, using a RADIUS server is the best solution to minimize congestion at the server. RADIUS can respond to authentication requests based on its own database, or it can act as a front end to another database server, such as a generic Open Database Connectivity (ODBC) server or a Windows NT 4.0 PDC. On an RRAS PPTP connection initiated by the Chief Invigilator of an Examination Center, the user credentials and the parameters of the connection request are sent as a series of RADIUS messages to a RADIUS server configured at the Examination Syndicate site.
The RADIUS server authenticates the user against its authentication database. It then informs the Examination Syndicate VPN server whether permission is granted or not, together with other applicable connection parameters such as the maximum session time and static IP address assignment. 5.3.5 Session Log Management A session log is a report of the VPN connection sessions carried out. The Examination Syndicate RRAS can be configured to use RADIUS as an accounting provider. RRAS then sends messages to the RADIUS server requesting accounting records at the start of the call, at the end of the call, and at predetermined intervals during the call. 5.4 The Router-to-router Connection Procedure in an On-line Examination The router-to-router VPN is initiated when candidates send an HTTP request to the Examination Syndicate web page during an examination session. The request packets are routed to the Examination Center Windows NT RRAS router and the following process takes place: a) The router checks its routing table and finds a route to the Examination Syndicate server that uses the VPN demand-dial interface. b) Based on the VPN demand-dial interface configuration, the Examination Center router attempts to initiate a router-to-router connection to the IP address of the Examination Syndicate. c) To establish a PPTP-based VPN to the Examination Syndicate server, a TCP connection must be established with the VPN server at the Examination Syndicate site. The VPN establishment packet is created. d) The Examination Center router checks its routing table and finds the Examination Syndicate route using the ISP demand-dial interface. e) The Examination Center router uses its modem to dial and establish a connection with the local ISP. f) The VPN establishment packet is sent to the Examination Syndicate router once the connection to the ISP is made. g) A VPN is negotiated between the Examination Center router and the Examination Syndicate router.
The Examination Center router sends authentication credentials that are verified by the Examination Syndicate router. h) The Examination Syndicate router checks its demand-dial interfaces, finds the one that matches the user name sent during authentication, and changes that interface to a connected state. i) The Examination Syndicate router forwards the candidates' packets across the VPN and the VPN server forwards the packets to the web server. 5.5 VPN Implementation A router-to-router VPN connection was set up and tested in the Information System Lab, Faculty of Computer Science and Information Technology, University of Malaya, on Monday, 22nd of July, 2002. The implementation involved two Windows 2000 servers, efac 07 and efac 11, both configured to enable a router-to-router VPN connection. The figure below shows the efac 07 and efac 11 Windows 2000 servers located in the lab. Efac 07 acted as the examination web server. The VPN dial-up connection was made from the efac 11 computer to the efac 07 computer through the Internet. A successful VPN connection was first recorded at 4.30pm on the same day. ![Diagram showing the configuration of efac 07 and efac 11 servers](image) *Figure 5.5: The two routers involved in the VPN connection implementation in the Information System Lab, University of Malaya.* 5.5.1 VPN Connection Screen Captures The following figures are screen captures taken during the VPN connection. They show the dialog boxes and pop-ups that appeared while initiating and maintaining the VPN connection. Figure 5.6: VPN dial-up connection properties. Figure 5.6 shows the VPN connection configuration properties, which were configured accordingly to initiate the connection. Figure 5.7: User credential dialog box. Figure 5.7 shows the user credential dialog box sent by the efac 07 VPN server to the efac 11 computer, which was trying to initiate a VPN connection to the efac 07 server.
The user must fill in the dialog box, and the input is authenticated by the efac 07 VPN server. ![Routing Interfaces](image) Figure 5.8: Demand-dial interface when connecting to the remote router. The figure above shows the Routing and Remote Access window on the efac 07 VPN server; it shows a remote host connecting through the VPN. The figure below, on the other hand, shows that the VPN connection is accepted and is being registered to the network. Figure 5.9: A Windows pop-up showing the registration of the remote host to the network. Figure 5.10: A Windows pop-up showing that the VPN is connected. Figure 5.10 is a pop-up on the VPN client stating that the VPN is connected. The user of the host may now access the granted folder located on the server at the other end. Figure 5.11: A Windows pop-up showing the VPN connection status. Figure 5.12: A Windows pop-up showing the details of the VPN connection status. Figures 5.11 and 5.12 indicate the status of the VPN connection. Figure 5.11 shows the duration of the connection and the number of bytes received and transmitted. Figure 5.12, on the other hand, shows details of the connection such as the authentication and encryption protocols being used. 5.5.2 Accessing the Examination Main Page The examination main page is located on the efac 07 web server. The figure below shows the efac 07 web server directory tree. The Information Technology examination web page is located in the spm folder. Figure 5.13: Efac 07 web server directory. Figure 5.14 shows the Information Technology examination main page retrieved from the efac 07 web server. Candidates must fill in their user name and password, to be authenticated by the server before they can access the examination question page. ![Image of the Information Technology examination main page] **Figure 5.14: Information Technology examination main page.** Figure 5.15 below shows the Information Technology examination question page.
Candidates respond to this web page, and their answers are sent back to the efac 07 server, through the VPN connection, for marking and scoring. ![Image of the Information Technology examination question page] **Figure 5.15: Information Technology examination web page accessed from the efac 07 web server through the VPN.** 5.6 The On-line Examination Web Page Development In the proposed on-line examination for the Information Technology subject, candidates sit the examination by responding to the interactive web page designed for the examination. The web page is written in a server-side scripting language, which enables the Examination Syndicate server to process the responses posted by the candidates. The server compares each candidate's answer input with the correct answer in the examination database; if the answer matches, a mark is given, and the total of the marks is inserted into the database. 5.6.1 Scripting Language The on-line examination web page was developed using the Hypertext Markup Language (HTML) and the Active Server Pages (ASP) server-side scripting language. ASP processes responses to client requests via the HTTP protocol of the World Wide Web. When a client sends an HTTP request to the server, the server receives the request and directs it to be processed by the appropriate Active Server Page. The Active Server Page does its processing (which often includes interacting with a database), then returns its result to the client in the form of an HTML document to be displayed in the client browser. An ASP web page may include HTML, Dynamic HTML, ActiveX controls, client-side scripts, and Java applets, with VBScript being the de facto language for ASP scripting. [19] ASP can be written using a text editor or a commercial WYSIWYG web editor. In developing this project, the Visual InterDev and Macromedia Dreamweaver web editors were used together with the Notepad text editor.
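The marking step described above (compare each response with the stored correct answer and award one mark per match) can be sketched as follows. The function and field names are illustrative only; the actual ASP code appears later in this chapter.

```javascript
// Illustrative sketch of the server-side marking step: one mark is
// awarded for each response that matches the stored correct answer.
// Comparison is case-insensitive, as the candidate may have clicked
// an answer recorded in either case.
function scoreResponses(responses, correctAnswers) {
  let score = 0;
  for (const [questionNumber, correct] of Object.entries(correctAnswers)) {
    if ((responses[questionNumber] || "").toUpperCase() === correct) {
      score += 1; // matched answer earns one mark
    }
  }
  return score; // the total to be inserted into the database
}
```

For example, responses `{1: "a", 2: "C", 3: "B"}` against correct answers `{1: "A", 2: "C", 3: "D"}` score 2 marks.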
The table below shows the files developed for the Information Technology on-line examination. <table> <thead> <tr> <th>File Name</th> <th>Extension</th> <th>Task</th> </tr> </thead> <tbody> <tr> <td>Login</td> <td>ASP</td> <td>Main page and candidate authentication input.</td> </tr> <tr> <td>Check</td> <td>ASP</td> <td>Verifies the authentication input.</td> </tr> <tr> <td>Destination</td> <td>ASP</td> <td>Routes the browser to the appropriate page: the examination page if authenticated, or back to the login page if the password or user name is incorrect.</td> </tr> <tr> <td>Information technology</td> <td>ASP</td> <td>Displays the examination questions and records the answer response on a mouse click on the chosen answer.</td> </tr> </tbody> </table> *Table 5.1: Web page file description* 5.6.2 Database Connection The database that stores the examination questions and answers and the candidate authentication data was developed using the Microsoft Access database application. Microsoft Access provides an Open Database Connectivity (ODBC) interface, which works well with ASP. Records from the database are retrieved using the Structured Query Language (SQL), a high-level programming language. [20] However, in developing this on-line examination web page, a Data Source Name (DSN) was used to connect to the ODBC interface only when the final version was developed. During the testing phase, a DSN-less database connectivity approach was implemented, because different machines were used for system testing and a new DSN would have had to be defined whenever a different computer was used as the web server. As an alternative, a Server.MapPath approach was applied to implement the DSN-less database connection. 5.6.3 System Coding As described above, ASP together with HTML, VBScript, and JavaScript was used to write the dynamic on-line examination web page for the Information Technology subject examination. References from books and the Internet were gathered to help in developing this web page.
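The portability argument for the DSN-less approach can be sketched as follows. Here `mapPath` is a stand-in for ASP's Server.MapPath, and the directory names are only examples: because the database path is resolved at run time relative to the web root, the same code produces a valid connection string on any test machine, with no DSN to re-register.

```javascript
// Sketch of the DSN-less idea: the Access connection string embeds a
// physical path resolved at run time. mapPath stands in for ASP's
// Server.MapPath; the directories below are invented examples.
function buildConnString(mapPath, dbFile) {
  return "DBQ=" + mapPath(dbFile) +
         ";Driver={Microsoft Access Driver (*.mdb)};DriverId=25;FIL=MSAccess;";
}

// The same code yields a valid string on two differently laid-out servers:
const mapPathLab  = (f) => "C:\\Inetpub\\wwwroot\\spm\\" + f;
const mapPathTest = (f) => "D:\\sites\\exam\\" + f;
```

With a DSN, by contrast, each machine would need the "ESDSN" entry created in its own Control Panel before the page could run.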
[22] The following code carries out the functional parts of the ASP pages. 5.6.3.1 Candidate Authentication Candidate authentication begins in the login.asp file. In this web page, two input boxes were created to receive authentication input from the candidates. A submission button located below the input boxes posts the input data to the check.asp file. The following login form tag defines the destination of the user name and password input:

```html
<Form name=frmLogin METHOD=POST ACTION=check.asp>
```

There are two tags in the table section that retrieve the input for the user name and password. They are: 1. `<Input Type="TEXT" Name="txtUserid">` 2. `<Input Type="PASSWORD" Name="txtPassword">` 5.6.3.2 Define Path to Database In a DSN-less approach, it is important to define the path used to identify the database. The following code defines the database location:

```vbs
Dim DB_CONN_STRING
DB_CONN_STRING = "DBQ=" & Server.MapPath("quiz.mdb") & ";"
DB_CONN_STRING = DB_CONN_STRING & "Driver={Microsoft Access Driver (*.mdb)};"
DB_CONN_STRING = DB_CONN_STRING & "DriverId=25;FIL=MSAccess;"
```

The connection string is dimensioned as DB_CONN_STRING, and the path to the quiz.mdb database is resolved by the Server.MapPath method, using the Microsoft Access driver. Any SQL statement that follows will use this database for manipulation. For the DSN approach, an Open Database Connectivity (ODBC) connection was created in the Control Panel. Simple code is then enough to open this database connection:

```vbs
Dim DB
Set DB = Server.CreateObject("ADODB.Connection")
DB.Open "ESDSN"
```

"ESDSN" is the system data source name defined for the database ODBC connection.
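The verification logic of check.asp itself is not reproduced in this chapter, so the following is a hypothetical sketch of what check.asp and destination.asp do together: look up the posted txtUserid, compare the password, and decide which page the browser is routed to. The candidate-table layout and the page name "informationtechnology.asp" are assumptions for illustration.

```javascript
// Hypothetical sketch of check.asp / destination.asp routing (the real
// ASP code is not shown in the source). The table layout and the page
// name "informationtechnology.asp" are assumed, not taken from the system.
function verifyCandidate(txtUserid, txtPassword, candidateTable) {
  const record = candidateTable[txtUserid];
  if (record && record.password === txtPassword) {
    return "informationtechnology.asp"; // authenticated: examination page
  }
  return "login.asp"; // wrong user name or password: back to the login page
}
```

A real implementation would query the Access database rather than an in-memory table, and would not store passwords in plain text.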
**5.6.3.3 Using the Database or the Hard-Code Option** The ASP files written for this system give the user the option of using the database to retrieve the question items and compare the responses, or of using lines of code ("hard code") to write the individual questions and answers. The database offers the flexibility to add or change items, but the trade-off is an increase in download time. The web page can be retrieved faster by writing hard code, but it takes a lot of work to write the lines of code. The line that selects the option is:

```vbs
Const USE_DB_FOR_INFO = True
```

To use hard-coded values, change the constant to False. 5.6.3.4 Retrieve Question from Database An If…Then…Else statement is used to retrieve the question from the database. If the constant for using the database connection, described above, is set to True, then the connection to the database is opened using an ADODB Connection object:

```vbs
If USE_DB_FOR_INFO Then
    Set cnnQuiz = Server.CreateObject("ADODB.Connection")
    cnnQuiz.Open DB_CONN_STRING
```

After the database connection is opened, a recordset is defined and an SQL statement is written, as in the following example:

```vbs
    Set rsQuiz = Server.CreateObject("ADODB.Recordset")
    rsQuiz.Open "SELECT * FROM questions WHERE quiz_id=" & QUIZ_ID & _
        " AND question_number=" & iQuestionNumber & ";", cnnQuiz
```

5.6.3.5 Retrieve Question The question item is retrieved from the question_text field of the Quiz table:

```vbs
strQuestionText = CStr(rsQuiz.Fields("question_text").Value)
```

5.6.3.6 Retrieve Array of Answers The set of answers must be retrieved after the question text is displayed.
To display the 4 different answer items, an array statement was coded as below:

```vbs
' Get an array of answers
aAnswers = Array(CStr(rsQuiz.Fields("answer_a").Value & ""), _
                 CStr(rsQuiz.Fields("answer_b").Value & ""), _
                 CStr(rsQuiz.Fields("answer_c").Value & ""), _
                 CStr(rsQuiz.Fields("answer_d").Value & ""))
```

After the array is filled, a For…Next statement trims any empty answers from the end of the array:

```vbs
For I = LBound(aAnswers) To UBound(aAnswers)
    If aAnswers(I) = "" Then
        ReDim Preserve aAnswers(I - 1)
        Exit For
    End If
Next
```

The connection to the database must be closed after the task is complete, to avoid errors:

```vbs
rsQuiz.Close
Set rsQuiz = Nothing
cnnQuiz.Close
Set cnnQuiz = Nothing
```

5.6.3.7 Question Number The following script displays the current question number out of the total number of questions:

```
Soalan ke <%= iQuestionNumber %> dari <%= iNumberOfQuestions %><BR>
```

5.6.3.8 Function that Converts a Response to the Appropriate Letter The following function is used to identify the letter of the answer selected by the user when clicking one of the multiple-choice answers. The function converts the input to a string that is compared with the correct_answer field in the question table.
A Select Case construct is used to map the input:

```vbs
<%
' Takes an integer parameter and converts it to the appropriate letter
Function GetLetterFromAnswerNumber(input)
    Dim strTemp
    Select Case input
        Case 0
            strTemp = "A"
        Case 1
            strTemp = "B"
        Case 2
            strTemp = "C"
        Case 3
            strTemp = "D"
    End Select
    GetLetterFromAnswerNumber = strTemp
End Function
%>
```

5.6.3.9 Function to Get the Answer String According to the Last Entered Value of the Particular Question

```vbs
<%
Function GetAnswerFromAnswerString(iQuestionNumber, strAnswers)
    Dim strTemp
    Dim iOffset

    ' Find the location of the question number we want to use
    iOffset = InStrRev(strAnswers, "|" & iQuestionNumber & "|", -1, 1)

    ' Get our answer by using the offset we just found and then moving
    ' right the length of the question indicator to arrive at the
    ' appropriate letter
    strTemp = Mid(strAnswers, iOffset + Len("|" & iQuestionNumber & "|"), 1)

    ' There's no way it should be anything else, but to be sure we
    ' convert it to a string and make sure it's uppercase
    GetAnswerFromAnswerString = UCase(CStr(strTemp))
End Function
%>
```

5.6.3.10 Progress Bar The progress bar is written in VBScript. The length of the filled part of the bar corresponds to the number of questions completed, and increases as more questions are answered. Blue and red images fill the bar to denote the progress:

```vbs
<%
Const BAR_LENGTH = 160

If iQuestionNumber = 1 Then
    ' Since a 0 width is ignored by the browsers we need to
    ' remove the blue image altogether!
    Response.Write "<IMG SRC=""./images/spacer_red.gif"" HEIGHT=""10"" WIDTH="""
    Response.Write BAR_LENGTH
    Response.Write """><BR>"
Else
    Response.Write "<IMG SRC=""./images/spacer_blue.gif"" HEIGHT=""10"" WIDTH="""
    Response.Write (BAR_LENGTH / iNumberOfQuestions) * (iQuestionNumber - 1)
    Response.Write """>"
    Response.Write "<IMG SRC=""./images/spacer_red.gif"" HEIGHT=""10"" WIDTH="""
    Response.Write (BAR_LENGTH / iNumberOfQuestions) * (iNumberOfQuestions - (iQuestionNumber - 1))
    Response.Write """><BR>"
End If
%>
```

5.6.3.11 Scoring Scoring is done by retrieving the correct answers and comparing them with the responses chosen by the candidates. To retrieve the correct answers, the database connection must first be opened:

```vbs
If USE_DB_FOR_INFO Then
    ' Code to use DB!
    ' Create DB connection and connect to the DB
    Set cnnQuiz = Server.CreateObject("ADODB.Connection")
    cnnQuiz.Open DB_CONN_STRING

    ' Create RS and query DB for quiz info
    ' Specify 3, 1 (Static, Read Only)
    Set rsQuiz = Server.CreateObject("ADODB.Recordset")
    rsQuiz.Open "SELECT * FROM questions WHERE quiz_id=" & QUIZ_ID & _
        " ORDER BY question_number;", cnnQuiz, 3, 1
```

The score is represented by iScore, with an initial value of zero. A Do While Not loop is used to compute the score:

```vbs
    iScore = 0
    I = 1
    Do While Not rsQuiz.EOF
        If UCase(CStr(rsQuiz.Fields("correct_answer").Value)) = _
            GetAnswerFromAnswerString(I, strAnswers) Then
            iScore = iScore + 1
            ' This and the Else could be used to output a
            ' correctness status for each question
            ' Also useful for bug hunting!
            'Response.Write "Right" & "<BR>" & vbCrLf
        Else
            'Response.Write "Wrong" & "<BR>" & vbCrLf
            strResults = strResults & I & ", "
        End If
        I = I + 1
        rsQuiz.MoveNext
    Loop

    ' Close and dispose of our DB objects
    rsQuiz.Close
    Set rsQuiz = Nothing
    cnnQuiz.Close
    Set cnnQuiz = Nothing
```

5.6.3.12 Result The score earned by the candidate is added to the mark field of the result table in the spmit database.
The following code retrieves the score and updates the result table:

```vbs
<%
Set cnnQuiz = Server.CreateObject("ADODB.Connection")
cnnQuiz.Open DB_CONN_STRING

Set rsQuiz = Server.CreateObject("ADODB.Recordset")
rsQuiz.Open "result", cnnQuiz, 1, 2
rsQuiz.AddNew
rsQuiz("username") = CStr("session")
rsQuiz("mark") = iScore
rsQuiz.Update

rsQuiz.Close
Set rsQuiz = Nothing
cnnQuiz.Close
Set cnnQuiz = Nothing
%>
```

5.7 Hardware The on-line examination system was developed on a Toshiba Satellite 2590 CDS computer. The following table lists the devices attached to the machine: <table> <thead> <tr> <th>Hardware</th> <th>Capability</th> </tr> </thead> <tbody> <tr> <td>Processor</td> <td>400 MHz</td> </tr> <tr> <td>Hard Disk</td> <td>4.02 GB</td> </tr> <tr> <td>Random Access Memory</td> <td>192 MB</td> </tr> <tr> <td>CD ROM</td> <td>52x speed</td> </tr> <tr> <td>Monitor</td> <td>32-bit SVGA</td> </tr> </tbody> </table> Table 5.2: Hardware used to develop the on-line examination system 5.8 Software Various commercial software packages were used to implement the system, according to the needs of the system development process. They can be categorized into software tools for design, web development, and system documentation. Table 5.3 shows the software used to develop the system. <table> <thead> <tr> <th>Software</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>Visual InterDev</td> <td>Web editor and server-side scripting tool.</td> </tr> <tr> <td>Macromedia Dreamweaver</td> <td>Web editor</td> </tr> <tr> <td>Microsoft Word</td> <td>Documentation</td> </tr> <tr> <td>Microsoft Access</td> <td>Database development tool</td> </tr> <tr> <td>Microsoft Visio</td> <td>Chart design</td> </tr> </tbody> </table> Table 5.3: Software used to develop the on-line examination system
Towards Adaptive Compliance

Jesús García-Galán¹, Liliana Pasquale¹, George Grispos¹, Bashar Nuseibeh¹,²
¹ Lero - The Irish Software Research Centre, University of Limerick, Ireland
² Department of Computing & Communications, The Open University, Milton Keynes, UK

ABSTRACT

Mission critical software is often required to comply with multiple regulations, standards or policies. Recent paradigms, such as cloud computing, also require software to operate in heterogeneous, highly distributed, and changing environments. In these environments, compliance requirements can vary at runtime and traditional compliance management techniques, which are normally applied at design time, may no longer be sufficient. In this paper, we motivate the need for adaptive compliance by illustrating possible compliance concerns determined by runtime variability. We further motivate our work by means of a cloud computing scenario, and present two main contributions. First, we propose and justify a process to support adaptive compliance that extends the traditional compliance management lifecycle with the activities of the Monitor-Analyse-Plan-Execute (MAPE) loop, and enacts adaptation through re-configuration. Second, we explore the literature on software compliance and classify existing work in terms of the activities and concerns of adaptive compliance. In this way, we determine how the literature can support our proposal and what are the open research challenges that need to be addressed in order to fully support adaptive compliance.

CCS Concepts

• General and reference → Surveys and overviews; • Social and professional topics → Technology audits; • Governmental regulations;

Keywords

adaptive compliance, challenges, compliance as a service, self-adaptation

1. INTRODUCTION

With software becoming increasingly pervasive, ensuring compliance with regulations, standards or policies is also becoming increasingly important to foster its wider adoption and acceptability by society and business.
For example, in recent years, compliance with industrial regulations (e.g., Health Insurance Portability and Accountability Act (HIPAA)) and data security standards (e.g., Payment Card Industry Data Security Standard (PCI-DSS) and ISO/IEC 27000-series) has become an essential requirement of some software systems. Non-compliance can result in loss of reputation, financial fines¹ or even criminal prosecution. Within academia, compliance has been examined in the areas of requirements engineering [24], Service Oriented Architecture (SOA) [32], cloud computing [20] and Business Process Management (BPM) [8]. Each of these has tackled compliance from different perspectives, including the interpretation of regulations into compliance requirements [4, 10], compliance checking [1, 23], and the definition of a reference process for compliance management [36, 20]. Ensuring compliance is more challenging in software systems that operate in heterogeneous, highly distributed and changing environments, such as cloud computing services. Cloud providers often deliver their services to clients from different geographical locations that have their own compliance requirements. Providing customised Compliance-as-a-Service could relieve clients of the compliance burden and give providers a significant competitive advantage. However, cloud providers may still face different and multi-jurisdictional compliance requirements. They must also comply with the regulations that apply where their physical infrastructure resides. In a multi-tenancy environment, in which different clients may share computational resources, this could also lead to overlaps and conflicts between different compliance requirements. All this variability may in turn lead to compliance violations. Although compliance at runtime has gained attention recently [2, 12, 19], as far as we are aware, existing techniques are normally applied at design time and are unable to deal with this kind of runtime variability. 
In this paper, we propose adaptive compliance as the capability of a software system to continue to satisfy its compliance requirements, even when runtime variability occurs. We motivate our work by using a Platform as a Service (PaaS) scenario and provide two main contributions. First, we propose and justify a process to support adaptive compliance that extends the traditional compliance management lifecycle with the activities of the Monitor-Analyse-Plan-Execute (MAPE) loop, and achieves adaptation through re-configuration. Second, we explore the literature on software compliance to identify which activities of our process are already supported and which present open research challenges. Our ambition is to motivate the need for adaptive compliance and encourage researchers from the adaptive systems community to address these challenges.

The rest of the paper is organised as follows. Section 2 introduces the main concepts and terminology adopted in software compliance. Section 3 presents a motivating scenario that illustrates the concerns when handling runtime variability in compliance. Section 4 describes our adaptive compliance process and its existing support in the literature. Section 5 describes research challenges related to adaptive compliance. Finally, Section 6 concludes the paper.

---
¹http://www.hhs.gov/about/news/2014/05/07/data-breach-results-48-million-hipaa-settlements.html

2. COMPLIANCE IN SOFTWARE

In the context of information systems, compliance refers to "ensuring that an organisation's software and system conform with multiple laws, regulations and policies" [38]. From this definition we can distinguish two main elements in compliance: the system and the compliance sources that have to be conformed with. Moreover, the system runs in an operating environment that may affect the compliance sources against which the system has to conform. Figure 1 shows these elements and their different dimensions.
The type of compliance sources refers to the kind of rules specified in the source. In particular, a compliance source can include regulations such as HIPAA2, Cybersecurity Information Sharing Act (CISA)3, and General Data Protection Regulation (GDPR)4; standards such as PCI-DSS5 and ISO/IEC 27000 series6; good practices; and internal policies within a particular organisation. Mandate refers to the optional or mandatory character of the compliance source. For example, regulations are compulsory (e.g., HIPAA in the US) while standards and internal policies (e.g., ISO/IEC 27000 series) may not. The abstraction level denotes the level of interpretation necessary to enact the statements mandated by a compliance source in a system. Regulations are usually expressed at a higher level of abstraction than standards and internal policies. For example, GDPR requires companies to prove compliance without suggesting specific mechanisms, while an internal organisation policy may specifically state that rooted Android phones cannot connect to the company’s internal network. Compliance sources apply to particular jurisdictions, territories or spheres of activity. For example, HIPAA applies to US organisations dealing with personal healthcare information, while the GDPR is intended to apply to any organisation delivering services to EU citizens. A system executes compliance controls, which are implementations of the rules defined by a compliance source. For example, section 164.312(a)(2)(iii) within HIPAA mandates the implementation of procedures to log off from an electronic session after a predetermined time of inactivity. A compliance control to address this rule could involve forcing user sessions to expire after five minutes of inactivity. Compliance controls determine the compliance level of a system with regards to its compliance sources. 
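To make the relationship between a rule in a compliance source and a compliance control concrete, the automatic-logoff rule above could be implemented roughly as follows. This is a minimal Python sketch; the class and method names are our own illustration, not taken from any cited system.

```python
import time

# Hypothetical compliance control for the HIPAA 164.312(a)(2)(iii)
# automatic-logoff rule: end sessions after a configurable period
# of inactivity (five minutes in the example from the text).
class AutoLogoffControl:
    def __init__(self, timeout_seconds=300):
        self.timeout = timeout_seconds
        self._last_activity = {}  # session_id -> time of last activity

    def touch(self, session_id, now=None):
        """Record activity on a session."""
        self._last_activity[session_id] = time.time() if now is None else now

    def expired_sessions(self, now=None):
        """Return the sessions that must be logged off to stay compliant."""
        now = time.time() if now is None else now
        return [sid for sid, last in self._last_activity.items()
                if now - last > self.timeout]
```

The control parameter (the timeout) is exactly the part an organisation would tune when mapping the abstract rule onto its system.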
Three de-facto levels of compliance are proposed in the literature [33]: 'full compliance', when all rules are satisfied; 'partial compliance', when all mandatory rules are satisfied; and 'non-compliance', when one or more mandatory rules are not satisfied.

Although the operating environment is highly dependent on the specific application domain, we distinguish two main characterising elements. First, the system users, who can reside and/or operate in different geographical locations and/or spheres of activity, may need to comply with different compliance sources. Second, the physical or virtual infrastructure where a system operates influences the applicable compliance sources and the compliance level that needs to be achieved. For example, IT systems in a US hospital have to comply with HIPAA, and doctors who use personal devices to access patient records become part of the operating environment.

Different reference processes have been proposed in the literature to support compliance management [5, 6, 27, 32]. Figure 2 presents a simplified compliance management lifecycle covering the main activities of these processes. Initially, compliance sources are discovered depending on the system and its operating environment. Compliance sources are then interpreted to extract compliance requirements, which are expressed as rules about actors and their rights and obligations on particular data objects [4]. During development, the requirements are implemented in the system as compliance controls. Finally, compliance requirements and controls are evaluated to determine the compliance level they ensure and to assess how they can be improved, if necessary.

---
2http://www.hhs.gov/hipaa/
4http://ec.europa.eu/justice/data-protection/
5https://www.pcisecuritystandards.org
6http://www.iso.org/iso/catalogue_detail?csnumber=66435

3. MOTIVATING SCENARIO

In this section we present a PaaS scenario, shown schematically in Figure 3, to motivate adaptive compliance and illustrate the main compliance concerns arising from runtime variability. The PaaS provider offers customers a technological stack comprising Database Management Systems (DBMS), run-time environments and various frameworks. The stack is deployed on top of an infrastructure hosted in the United States. Customers can deploy their own software applications using the stack offered by the PaaS provider. The PaaS provider also aims to provide compliance-as-a-service to its clients, that is, the capability to satisfy on demand the compliance needs of its clients.

As shown in Figure 3, initially the provider has to satisfy the compliance requirements of two different clients (Client 1 and Client 2). Client 1 operates in the US and stores patient health records, which are accessed by other third-party organisations. Hence, Client 1 has to comply with HIPAA regulations. Client 2 is also located in the US and handles its clients' credit card information using the PaaS. Therefore, Client 2 has to comply with PCI-DSS. Client 1 and Client 2 have differing compliance requirements, which the PaaS provider must ensure are satisfied.

Although some cloud providers guarantee compliance for particular regulations (e.g., HIPAA compliance by Catalyze\(^7\) and TrueVault\(^8\)), they are usually unprepared to satisfy emerging, differing and varying compliance requirements. New PaaS clients could introduce new compliance demands that need to be traded off against those of existing clients who share the same execution platform. In our scenario, Client 3 is a new client providing financial services in the United States and therefore requires a certifiable degree of privacy and security (e.g., ISO/IEC 27018). Clients can also change their compliance requirements due to changes in the jurisdictions that apply to their services.
For example, Client 3 is considering expanding its operations to Europe and therefore it will need to comply with the European Union’s GDPR, which requires explicit user authorisation for any re-purposing of their personal data. This could result in a direct conflict with CISA, which authorises sharing of personal data with federal institutions. Furthermore, variability in the compliance sources and the operating environment can also affect the compliance requirements and their satisfaction. In the case of compliance sources, Client 1, for example, could be required to comply with CISA in the near future. While compliance sources may rarely evolve, changes in the system and its operating environment can occur more frequently. For example, updates to the DBMS may affect how data encryption is supported and hence, the satisfaction of the compliance requirements. This also includes changes in the physical infrastructure of the service. In this sense, the PaaS provider could move some of its data centres to Europe, which would trigger the need to comply with EU regulations for data retention and management. The variability exposed by this scenario can be summarised by the following main concerns: a) **Awareness.** Runtime variability requires awareness of any changes that take place in the operating environment, the system and the compliance sources which can impact compliance satisfaction. In particular, PaaS clients must be able to elicit and modify their preferences with respect to the compliance sources they need to satisfy. The provider also needs to be aware of infrastructure changes and assess how these changes impact the compliance requirements. b) **Automation:** Compliance-as-a-service requires executing appropriate compliance controls using a dynamic approach. 
This would also involve automating the discovery and interpretation of compliance sources, as well as the identification and remedy of compliance violations and potential conflicts between compliance requirements. c) **Assurance:** The service provider needs to produce assurances about whether or not the compliance level is that required by its clients. These assurances can be provided by collecting data, showing traceability between compliance controls and requirements, or by delivering formal proofs. d) **Performance:** The on-demand nature of cloud computing means that a cloud provider has to respond to changes in a timely manner so as to avoid service outages.

4. ADAPTIVE COMPLIANCE

In this section we present a process to achieve adaptive compliance and a summary of the support provided by the existing literature. This process, shown in Figure 4, extends the compliance management lifecycle of Figure 2 with added support for runtime variability concerns through the MAPE loop. The adaptive compliance process begins with the automated discovery of compliance sources with which the system needs to comply. Factors that influence the discovery of compliance sources include the system's physical location, its sphere of activity and the potential stakeholders. Next, the compliance sources must be interpreted in order to identify the compliance requirements, an activity that demands close collaboration among legal experts, domain experts and software engineers [24]. This activity could benefit from mechanisms to share and reuse compliance requirements, such as a multi-organisation repository. The requirements are subsequently implemented in the system as compliance controls. Since the system is intended to meet differing compliance needs at runtime, the compliance controls should be flexible enough to be enabled, disabled and customised when required.
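The requirement that compliance controls be enabled, disabled and customised at runtime could be sketched as follows. This is illustrative Python only; all class and method names are hypothetical.

```python
# Hypothetical registry of compliance controls that supports the three
# runtime operations named in the text: enable, disable, customise.
class ComplianceControl:
    def __init__(self, name, params=None):
        self.name = name
        self.enabled = True           # controls start enabled by default
        self.params = dict(params or {})

class ControlRegistry:
    def __init__(self):
        self._controls = {}

    def register(self, control):
        self._controls[control.name] = control

    def set_enabled(self, name, enabled):
        self._controls[name].enabled = enabled

    def customise(self, name, **params):
        """Re-parameterise a control without redeploying the system."""
        self._controls[name].params.update(params)

    def active(self):
        """Names of the controls currently in force."""
        return [c.name for c in self._controls.values() if c.enabled]
```

A reconfiguration computed by the adaptation process would then reduce to a sequence of `set_enabled` and `customise` calls.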
---
\(^7\)https://catalyze.io/
\(^8\)https://www.truevault.com/

At runtime, the system must *monitor* its own state, the operating environment and the compliance sources. Any changes in these elements may result in either compliance violations or a reduced compliance level. While changes in compliance sources take place slowly, changes in the system or the operating environment often require a response at runtime. When changes are detected, the system must *analyse* their impact on the compliance level. First, overlaps between the applicable compliance requirements should be analysed, since they might lead to conflicts and consequently compliance violations. Second, the compliance level must be checked and, if compliance violations are found, these must be diagnosed to determine their causes. In that case, the system needs to *plan* a reconfiguration of the compliance controls to improve the compliance level when possible. Finally, the computed reconfiguration must be *executed* in the system, effectively improving the compliance level.

This process requires "live" models, especially at runtime, to enact the Knowledge (K) component of the MAPE loop. Such models must describe the compliance sources and their requirements, the system and its compliance controls, and also the operating environment, including user preferences and the system infrastructure. The relevance of these models depends on the particular activities: some are more important at design time (e.g., the compliance sources, for their discovery and interpretation), while others are necessary at runtime (e.g., the operating environment for monitoring, or the compliance controls for planning).

In the following, we explore how the compliance literature supports the adaptive compliance activities and addresses the concerns presented in Section 3. This analysis allows us to identify topics which have been well discussed, along with gaps leading to research challenges.
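One iteration of such a monitor-analyse-plan-execute cycle can be sketched as a loop skeleton with the four activities as pluggable functions and a shared knowledge store playing the role of the K component. This is purely illustrative and not taken from any of the cited systems.

```python
# Hypothetical skeleton of one MAPE iteration for adaptive compliance.
# `knowledge` is the shared Knowledge (K) component; the four callables
# stand in for whatever concrete monitoring/analysis/planning/execution
# machinery a real system would plug in.
def mape_iteration(knowledge, monitor, analyse, plan, execute):
    changes = monitor(knowledge)               # observe system, environment, sources
    violations = analyse(knowledge, changes)   # check compliance level, diagnose causes
    if violations:
        reconfiguration = plan(knowledge, violations)  # choose a new control configuration
        execute(knowledge, reconfiguration)            # apply it to the running system
    return knowledge
```

In a compliance setting, `plan` would typically output a reconfiguration of the compliance controls (which to enable, disable or re-parameterise), as discussed above.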
Table 1 relates existing approaches supporting the activities with the concerns that these approaches have addressed. Partially addressed areas are shown in light grey, while areas not addressed are shown in dark grey.

### 4.1 Discover

Although discovery is a fundamental activity in the compliance management lifecycle, it has received little attention from the research community. Some studies have highlighted its importance [24, 32], but without defining the factors that determine the applicable compliance sources. Some studies have provided repository tools [3, 15]. Kerrigan and Law [15] describe environmental regulations by means of an XML-based format and facilitate their discovery through searchable concept hierarchies. Boella et al. [3] provide the Eunomos web-based system to manage knowledge about laws and legal concepts in the financial sector. However, in general, discovery lacks automated support and a general taxonomy of factors.

### 4.2 Interpret

The interpretation of compliance sources has been widely covered by the research literature [24]. However, most of the existing work relies on partially or totally manual techniques to extract compliance requirements. These techniques include Semantic Parameterisation [4], goal-based analysis, and CPR (commitment, privilege and right) analysis [29], which have been validated by means of empirical studies. Some authors have focused on compliance requirements variability, and in particular on the multiple possible interpretations of a regulation [3, 9, 31] and the evolution of the requirements [21]. The analysis and reconciliation of potentially conflicting multi-jurisdictional requirements have also received attention [13, 10]. In terms of automation, some research efforts have focused on the description of compliance requirements using different approaches, such as Domain Specific Languages (DSLs) [35], UML [25] or semi-formal representations [36].
A repository of compliance requirements has also been proposed [34], although without real tool support. Assurances demonstrating the correctness of compliance requirements with respect to the sources have been suggested, especially in terms of traceability links [4, 9] and formal proofs [36, 3, 31].

### 4.3 Implement

Compliance implementation has received attention and partial automated support, in particular from the BPM community. Several works have considered configurable compliance controls for business processes, in the form of compliance descriptors [17], business process templates [28] and configurable compliance rules [27]. Implementation automation has been addressed from different perspectives. While some approaches have proposed an automated derivation of compliance controls from the requirement descriptions [35], others have presented repositories of reusable process fragments [30] or compliance rules [27]. Some of these also provide support for implementation assurances by explicitly linking compliance controls and requirements [35], and concepts of the compliance source to the application domain [25]. However, the impact of compliance controls on system performance has been surprisingly neglected.

### 4.4 Monitor

The literature on monitoring is mainly focused on system changes, neglecting the compliance sources and the operating environment. While compliance sources rarely change at runtime, the operating environment does, requiring timely detection and response. Several works have shown awareness of different monitoring factors, such as the system execution [2], the Quality of Service in Service Level Agreements (SLAs) [23], or time, resources and data in business processes [19]. However, the operating environment has only been considered for particular aspects of specific cases in the context of business processes [37]. Most of those works present approaches to automate monitoring in business processes [2, 37] and SOA [23].
### 4.5 Analyse

Compliance analysis is the activity that has attracted the most attention from the research community, especially for checking compliance levels. However, additional awareness of the potential conflicts of multi-jurisdictional requirements [10, 13] or of the impact of the operating environment [16, 37] is necessary. Compliance checking can take place at design time and at runtime. Design-time checking approaches are common for business processes, and rely on a plethora of analytical techniques, such as those based on Petri Nets [1, 26] and temporal logic [1, 7]. Similar approaches have been proposed for general regulatory compliance, by means of inference engines [31] and first-order predicate calculus [15], and for business-to-business interactions [22]. Runtime compliance checking has gained momentum recently, especially in business processes [2, 12, 37] and SOA [2, 23]. Some of these compliance checking approaches also provide diagnosis support (i.e. assurances) for the causes of compliance violations [12, 23, 26]. In general, checking automation and diagnosis are well covered at design time and runtime. However, automated multi-jurisdictional analysis and the consideration of the operating environment during checking remain open challenges. Moreover, the performance of all these approaches has been generally overlooked, although compliance checking has been proven to be NP-complete for business processes [33].

### 4.6 Plan

To the best of our knowledge, the compliance literature presents very few works supporting the remedy of compliance violations. Although some authors have discussed the concept of compliance improvement [11, 5, 18], there is a lack of automated support. Cabanillas et al. [5] state the need to provide recovery capabilities when compliance violations are detected at runtime. Ly et al. [18] discuss the notion of healable compliance violations, i.e.
violations that can be fixed by restructuring the process or inserting additional branches, while Ghose and Koliadis [11] present a partially automated technique, based on structural and semantic patterns, to modify non-compliant processes in order to restore compliance. Our vision to enact compliance improvement is through the reconfiguration of the compliance controls. In this sense, several authors have proposed configuration capabilities for implementing compliance, as shown in Section 4.3. However, none of these present specific techniques to remedy violations when detected.

### 4.7 Execute

The literature has not paid attention to this activity beyond works on configuration capabilities for compliance. In our view, the key concerns of this activity should be the automated execution, in a timely manner, of reconfigurations to provide recovery capabilities when compliance violations are detected.

### 4.8 Models

Most of the research on compliance modelling has focused on compliance sources and requirements.
Multiple approaches have been proposed to represent regulations, using XML notations [15], UML [25] or particular DSLs [31]. Other works describe additional compliance sources, such as SLAs extending the WS-Agreement notation [23], or privacy policies using OWL-DL [14]. There are also numerous proposals to represent compliance requirements, especially by means of formal or semi-formal languages [4, 10, 13] and DSLs [2, 35]. Nonetheless, the description of compliance controls and the operating environment appears to have been overlooked: while only a few works on business processes address the former [17, 30] by means of compliance descriptors and process fragments, the latter is even more neglected [37], as we have presented in the previous sections.

## 5. RESEARCH CHALLENGES

Adaptive compliance poses a number of research challenges, some carried over from "traditional" compliance work, as well as some significant new ones. Open research challenges relevant to adaptive compliance but carried over from previous compliance research include:

1) Compliance sources interpretation. Multiple authors have proposed specific techniques to interpret regulations and extract compliance requirements, especially in requirements engineering. However, existing work is still limited, since this activity usually relies on the specific domain knowledge of the requirements engineers and is performed manually.

2) Multi-jurisdictional requirements. The analysis of overlaps between different regulations has also attracted attention, although existing approaches are only partially automated. In order to detect conflicts among different compliance sources at runtime, adaptive compliance requires more fully automated analysis than current techniques provide.

3) Remedy for compliance violations. Our work also highlighted the ongoing challenge of automated remedies for compliance violations.
Although there are multiple proposals for automated compliance checking and diagnosis, existing mitigation techniques for violations usually require human intervention. Since the adaptive compliance process has to handle violations at runtime, we need to provide the process with automated, dynamic mitigation approaches. Our work also suggests that adaptive compliance raises new research challenges. We identify five main gaps in the literature that must be addressed:

4) Compliance readiness. We define compliance readiness as the capability of a system to foresee and comply with different compliance requirements. This capability requires awareness of the different compliance sources and requirements that may apply to the system or its clients, and a pool of compliance controls, ready to be customised and invoked. Although some authors have envisaged variability in business process rules to respond to variable compliance requirements, those approaches need to be extended to more complex controls and scenarios.

5) Compliance automation. Currently, only compliance checking has been somewhat automated, and even so, often overlooking the impact of the operating environment. Compliance source discovery and interpretation remain to be automated, although there are some promising partial successes [15, 3, 34]. A repository of compliance sources and their different context-dependent interpretations could support this tedious and error-prone activity. Moreover, automating the reconfiguration of compliance controls in order to correct compliance violations is needed.

6) Runtime assurances. Demonstrating compliance is often just as important as actually being compliant. There are several approaches to demonstrate compliance by tracing compliance requirements to regulations and compliance controls.
However, existing support at runtime is patchier, and focuses on the diagnosis of compliance violations, primarily considering systems but not their compliance sources or operating environment. Therefore, we need to extend the diagnosis to these elements, and provide evidence of whether and how reconfigurations really increase compliance levels.

7) Models for compliance controls and operating environment. Our work shows multiple proposals to describe compliance rules and requirements, but very few to describe compliance controls and the operating environment. While the operating environment is highly dependent on the specific application domain, we think that a standard way to describe the compliance controls, their variability, and their impact on the system is necessary.

8) Impact on performance. Surprisingly, existing research has overlooked the effects of compliance on system performance. Since the adaptive compliance process is intended to handle compliance issues at runtime, performance is increasingly important. Therefore, more efficient ways are needed to check compliance, which has already been shown to be an NP-complete problem. Furthermore, empirical studies are needed to assess the impact on system performance of executing compliance controls and monitoring the operating environment.

## 6. CONCLUSIONS AND FUTURE WORK

In this paper, we have attempted to broaden the definition of compliance-as-a-service to include our idea for adaptive compliance. We have proposed a process to achieve adaptive compliance and discussed how existing work can support the various activities of our adaptive compliance process. A short review of the literature has identified that while existing approaches focus on design-time compliance, very little work has examined the increasing run-time variability found in compliance sources, systems and their operational environment.
Furthermore, our review was used to identify several future research challenges which need to be addressed in order to fulfill our vision for adaptive compliance. Although one of our main ambitions is to automate as much of the compliance process as is feasible, there are limits to this. Currently, we are better equipped to automate the checking and enforcement of compliance rules. However, not all of our proposed activities, nor all compliance rules, can be fully automated. For example, a rule that specifies the behaviour of a security incident response team may need to be crafted manually, and its enforcement depends on enforcing human behaviour. This requires a discussion of the "human-in-the-loop" aspects of adaptive compliance. Our future work will include conducting a more in-depth analysis of the literature to further investigate the compliance process gaps identified in this paper. The objective of this in-depth literature review would be to expand our understanding of related areas such as reconfiguration planning and execution in adaptive systems. As one of our main ambitions is to automate as much of the compliance process as possible, future work will need to examine which parts of the process can be automated and to what extent. This work will also look to address issues with automation through runtime reconfiguration. Finally, future work will examine how our idea of adaptive compliance can be extended into other domains, such as the Internet of Things, where a myriad of heterogeneous and potentially untrusted devices interact. That could provide an additional and important cyber-physical perspective to adaptive compliance.

Acknowledgements

We acknowledge SFI grant 10/CE/11855 and ERC Advanced Grant no. 291652 (ASAP). We also thank Mark McGloin for later discussions on software compliance.
An Approach for Evolution-Driven Method Engineering

Jolita Ralyté*, Colette Rolland**, Mohamed Ben Ayed**

* Université de Genève, CUI, Rue de Général Dufour, 24, CH-1211 Genève 4, Switzerland. ralyte@cui.unige.ch
** Université Paris 1 Sorbonne, CRI, 90, rue de Tolbiac, 75634 Paris cedex 13, France. rolland@univ-paris1.fr; mohamed.benayed@malix.univ-paris1.fr

Abstract. The paper considers the evolutionary perspective of method engineering. It presents an approach for method engineering based on the evolution of an existing method, model or meta-model into a new one satisfying a different engineering objective. This approach proposes several strategies to evolve the initial paradigm model into a new one and provides guidelines supporting these strategies. The approach has been evaluated in the Franco-Japanese research project around the Lyee methodology. A new model, called the Lyee User Requirements Model, has been obtained as an abstraction of the Lyee Software Requirements Model. The paper illustrates this evolution case.

1. Introduction

In this paper we consider Method Engineering (ME) from the evolutionary point of view. In other words, we look for an approach supporting the evolution of an existing method, model or meta-model in order to obtain a new one better adapted to a given engineering situation and/or satisfying a different engineering objective. We consider such a method evolution as situation-driven and relate our work to the area of Situational Method Engineering (SME) [Welke92], which focuses on project-specific method construction. The approach that we propose in this paper is based on some initial modelling idea, expressed as a model or a meta-model that we call the 'paradigm model', and supports the evolution of this paradigm model into a brand-new model satisfying another engineering objective. That is why we call this approach Evolution-Driven Method Engineering.
We capture in it our experience accumulated in method engineering and especially in the meta-modelling domain. The hypothesis of this approach is that a new method is obtained either by abstracting from an existing model or by instantiating a meta-model. Hence, this approach can be situated between the traditional 'from scratch' ME and the assembly-based SME [Harmsen97, Brinkkemper98, Ralyté01]. We use the Map formalism proposed in [Rolland99, Benjamen99] to express the process model of our approach for Evolution-Driven Method Engineering. Map provides a representation system that allows combining multiple ways of working into one complex process model. It is based on a non-deterministic ordering of two fundamental concepts: intentions and strategies. An intention represents a goal that can be achieved by the performance of the process. It refers to a task (activity) that is a part of the process and is expressed at the intentional level. A strategy represents the manner in which the intention can be achieved. Therefore, the map is a directed labelled graph with nodes representing intentions and labelled edges expressing strategies. The directed nature of the map identifies which intention can follow a given one. A map includes two specific intentions, Start and Stop, to begin and end the process respectively. There are several paths from Start to Stop in the map, because several different strategies can be proposed to achieve the intentions. A map therefore includes several process models that are selected dynamically as the process proceeds, depending on the current situation. An intention achievement guideline is associated with every triplet <source intention, target intention, strategy>, providing advice on fulfilling the target intention following the strategy, given that the source intention has been achieved. Furthermore, this guideline can be refined as an entire map at a lower level of granularity.
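The Map structure described above can be sketched as a small data structure: intentions are nodes, strategies are labelled edges, and a guideline is attached to each <source, target, strategy> triplet. The class and method names below are our own illustration, not part of the formalism:

```python
from collections import defaultdict

class Map:
    """A directed labelled graph of intentions (nodes) and strategies (edges)."""

    def __init__(self):
        self.intentions = {"Start", "Stop"}
        self.edges = defaultdict(list)   # source -> [(target, strategy), ...]
        self.guidelines = {}             # (source, target, strategy) -> guideline

    def add_section(self, source, target, strategy, guideline):
        # One <source, target, strategy> triplet with its achievement guideline.
        self.intentions.update({source, target})
        self.edges[source].append((target, strategy))
        self.guidelines[(source, target, strategy)] = guideline

    def strategies(self, source, target):
        """All strategies offered to reach `target` from `source`."""
        return [s for (t, s) in self.edges[source] if t == target]

m = Map()
m.add_section("Start", "Construct a product model", "Abstraction",
              "Abstract elements of the paradigm model")
m.add_section("Start", "Construct a product model", "Instantiation",
              "Instantiate a selected meta-model")
```

Several strategies between the same pair of intentions give the multiple paths from Start to Stop mentioned above, and each triplet's guideline could itself be replaced by a nested `Map` to model refinement.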
We have evaluated our approach in the Franco-Japanese collaborative research project Lyee¹. The aim of this project was to develop a methodology supporting software development in two steps: requirements engineering and code generation. The latter was already supported by the LyeeAll CASE tool [Negoro01a,b], which generates programs provided that a set of well-formatted software requirements is given. The Lyee Software Requirements Model (LSRM) expresses these requirements in rather low-level terms such as screen layouts and database accesses. Moreover, they are influenced by LyeeAll internals such as the Lyee identification policy of program variables, the generated program structure and the Lyee program execution control mechanism. Experience with LyeeAll has shown the need to acquire software requirements from relatively high-level user-centric requirements. For this reason, we have decided to evolve the Lyee methodology. We have used the existing LSRM as a baseline paradigm model for the construction of the more abstract Lyee User Requirements Model (LURM). In the next section we outline our process model for Evolution-Driven ME. Section 3 details the Abstraction strategy for method product model construction, whereas section 4 describes the Pattern-based strategy for method process model definition. Both strategies are illustrated by the creation of the LURM product and process models respectively. Section 5 draws some conclusions and discusses our future work.

¹ Lyee, which stands for Governmental Methodology for Software Provider, is a methodology for software development used for the implementation of business software applications. Lyee was invented by Fumio Negoro.

2. Process Model for Evolution-Driven Method Engineering

Our approach for Evolution-Driven ME uses meta-modelling as its underlying method engineering technique. Meta-modelling is known as a technique to capture knowledge about methods.
It is a basis for understanding, comparing, evaluating and engineering methods. One of the results obtained by the meta-modelling community is the definition of any method as composed of a product model and a process model [Prakash99]. A product model defines a set of concepts, their properties and relationships that are needed to express the outcome of a process. A process model comprises a set of goals, activities and guidelines to support the process goal achievement and the action execution. Therefore, method construction following the meta-modelling technique is centred on the definition of these two models. This is reflected in the map representing the process model for Evolution-Driven ME (Figure 1) by two core intentions (the nodes of the map): Construct a product model and Construct a process model.

![Figure 1. Process Model for Evolution-Driven Method Engineering.](image)

A number of product meta-models [Grundy96, Hofstede93, Prakash02, Saeki94, Plihon96] as well as process meta-models [Jarke99, Rolland95, Rolland99] are available, and our approach is based on some of them. This is shown in Figure 1 by the several different strategies (the labelled edges) to achieve each of the two core intentions. The construction of the product model depends on the ME goal, which could be to construct a method:
- by raising (or lowering) the level of abstraction of a given model,
- by instantiating a selected meta-model,
- by adapting a meta-model to some specific circumstances,
- by adapting a model.

Each of these cases defines a strategy to Construct a product model, namely the Abstraction, Instantiation, Adaptation and Utilisation strategies. Each of them is supported by a guideline that consists in defining the various product model elements, such as objects, links and properties, in a different manner. In our example, we use the Lyee Software Requirements Model (LSRM) as the baseline paradigm model for the construction of the more abstract Lyee User Requirements Model (LURM).
In this case, the Abstraction strategy is the most appropriate one to Construct a product model, as the ME goal is to raise the level of abstraction of the LSRM. For this reason, in the next section we detail and illustrate the guideline supporting product model construction following the Abstraction strategy. This guideline is based on the abstraction of different elements from the paradigm model (product and/or process model) into elements of the new product model, and the refinement of the obtained elements until the new product model becomes satisfactory. The process model must conform to the product model: process steps, activities and actions always refer to some product model parts in order to construct, refine or transform them. This is the reason why, in the map of Figure 1, the intention to Construct a process model follows the one to Construct a product model. A process model can take multiple different forms. It could be a simple informal guideline, a set of ordered actions or activities to carry out, a set of process patterns to be followed, etc. In our Evolution-Driven process model (Figure 1) we propose four strategies to Construct a process model: Simple, Context-driven, Pattern-driven and Strategy-driven.
- The Simple strategy is useful to describe an uncomplicated process model that can be expressed as a textual description or a set of actions to execute.
- The Context-driven process model is based on the NATURE process modelling formalism [Jarke99, Rolland95]. According to this formalism, a process model can be expressed as a hierarchy of contexts. A context is viewed as a couple <situation, intention>. The situation represents the part of the product undergoing the process and the intention reflects the goal to be achieved in this situation.
- A process model obtained following the Pattern-driven strategy takes the form of a Catalogue of Patterns.
Each pattern identifies a generic problem, which could occur quite often in the product model construction, and proposes a generic solution applicable every time the problem appears. A generic solution is expressed as a set of steps for resolving the corresponding problem.
- Finally, the Strategy-driven process model, also called the Map [Rolland99, Benjamen99] (see the introduction of this paper), makes it possible to combine several process models into one complex process model.

The process model of the LURM was defined following the Pattern-driven strategy. A set of patterns has been defined to take into account different situations in the user requirements definition. Each pattern provides advice to capture and formulate requirements. Section 4 presents in detail and illustrates the guideline supporting the Pattern-driven strategy for the process model construction.

3. Abstraction-Based Product Model Construction

The Abstraction strategy for product model construction consists in defining a new product model representing a level of abstraction higher than that of its paradigm model. As a consequence, the objective of the corresponding guideline is to support the construction of a product model as an abstraction of another model (product, process, or both). This guideline is also expressed by a map, shown in Figure 2.

![Abstraction-Based Product Model Construction](image)

As the product model construction consists in the definition of its elements (objects, properties, links), there is only one core intention in this map, called Define product element. The achievement of this intention is supported by a set of strategies. Two strategies, named Product-driven abstraction and Process-driven abstraction, are provided to start the construction process. The first one deals with the paradigm product model whereas the second one is based on the paradigm process model.
The *Product-driven abstraction* consists in analysing the paradigm product model, identifying elements that could be represented by more abstract elements in the new model, and defining these abstract elements. The *Process-driven abstraction* proposes to analyse the paradigm process model and to abstract some of its activities into upper-level ones. The product elements referenced by these more abstract activities must be integrated into the product model under construction. The concepts obtained following this strategy have to match concepts (or a collection of concepts) of the paradigm product model. The *Top-down mapping strategy* can be applied to ensure this. The *Generalisation, Specialisation, Aggregation and Decomposition* strategies are used to refine the model under construction, whereas the *Linking* strategy helps to connect elements of this model obtained by applying different abstraction strategies. In order to illustrate the abstraction-based product model construction, we first present our paradigm model, the Lyee Software Requirements Model, depicted in Figure 3.

![Figure 3. The Lyee Software Requirements Model (LSRM).](image_url)

The central concept in the LSRM is called a *Word*. A *Word* corresponds to a program variable: input words represent values captured from the external world whereas output words are produced by the system by applying specific formulae. The Lyee processing mechanism applies a formula to obtain an output word from the given input words. The execution of formulae is controlled by the *Process Route Diagram (PRD)*. A *PRD* is composed of *Scenario Functions (SFs)*, which are composed of *Pallets*, which in turn are made of *Vectors*. To control the execution of the generated program, Lyee generates its own *Words*, such as *Action words* and *Routing words*.
*Action words* are used to control physical Input/Output exchanges in a Lyee program; they implement application actions such as reading a screen, submitting a query to a database, opening or closing a file, etc. *Routing words* are used to distribute the control over the various *SF*s of a *PRD*. In order to comply with the LSRM paradigm, the LURM should be centred on a notion that abstracts from the concept of *Word*. Obviously, the *Words* required by the Lyee processing mechanism are not relevant at this level; the concern is only with *Domain words*. For that reason, the LSRM concept *Domain word* is abstracted into the LURM concept *Item* following the *Product-driven abstraction strategy*. The *Specialisation strategy* is applied in order to specialise the *Item* into *Output* and *Input* to match the LSRM, which distinguishes between the input and output words used in its processing mechanism. An *Output* is produced by the system whereas an *Input* is captured from the user. In the same manner, the *Input* is specialised into *Active* and *Passive*. The former triggers the system actions whereas the latter represents values captured from the user. Next we analyse the LSRM process model. The paradigm process model deals with the generation of the Lyee program structure. The result of the obtained program execution must fit the user's requirements. In other words, it must allow the user to satisfy one of their goals. For that reason, at the upper user requirements level we need to reason with concepts that identify these user goals and express how the user interacts with the system in order to achieve them. The *Process-driven abstraction strategy* allows us to define the notion of *Interaction*, representing the exchanges between the user and the system from the user's viewpoint. An interaction is goal-driven in the sense that the user asks the system to achieve the goal he/she has in mind without knowing how the system will do it.
As a result, we associate an *Interaction goal* with each *Interaction*. The complexity of the interaction goal defines the complexity of the corresponding interaction. If the interaction goal can be decomposed into several atomic goals, the corresponding interaction can also be decomposed. Consequently, we specialise the interaction into Atomic and Compound using the Specialisation strategy. Now we need to define how the Interaction concept can be mapped to the concepts defined in the lower-level LSRM product model. None of the LSRM concepts corresponds directly to the LURM interaction. However, the Top-down mapping strategy suggests that an interaction could be expressed as a combination of items that match the LSRM Domain word concept. An Atomic interaction delineates a number of input and output data: the user provides some input and receives the output that corresponds to the expected result. Therefore, the Decomposition strategy helps us to decompose every Interaction into four kinds of Items that we call \( W_{\text{input}} \), \( W_{\text{output}} \), \( W_{\text{result}} \) and \( W_{\text{end}} \). Each of them represents:
- \( W_{\text{input}} \): the input provided by the user,
- \( W_{\text{result}} \): the result of the goal achievement,
- \( W_{\text{output}} \): the output displayed to the user,
- \( W_{\text{end}} \): the end point of the interaction.

Then we consider the concept of Logical unit (from the LSRM), which represents a coherent set of words used in the same processing (reading or writing) and constrained by the same physical device (database, file, screens, etc.) used by the program. The concept of Defined abstracts this notion in order to aggregate logically related Items processed together and constrained by the same conceptual device. One Defined can be specialised into one or more Logical units. For example, one Defined corresponding to a conceptual screen can be implemented by two physical screens requiring four Logical units.
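The decomposition above can be rendered as simple data structures. This is a reading of the paper's concepts in our own terms (the "Withdraw cash" interaction is a hypothetical example, not taken from the Lyee project): an atomic Interaction carries the four kinds of Items, and a Defined groups logically related Items tied to one conceptual device.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """LURM Item, abstracting an LSRM Domain word."""
    name: str
    kind: str   # "W_input", "W_output", "W_result" or "W_end"

@dataclass
class AtomicInteraction:
    goal: str
    items: list = field(default_factory=list)

    def items_of(self, kind: str) -> list:
        return [i for i in self.items if i.kind == kind]

@dataclass
class Defined:
    """Aggregates logically related Items constrained by one conceptual device."""
    device: str
    items: list = field(default_factory=list)

withdraw = AtomicInteraction(
    goal="Withdraw cash",
    items=[Item("amount", "W_input"),
           Item("balance_ok", "W_result"),
           Item("receipt", "W_output"),
           Item("done", "W_end")])

screen = Defined(device="conceptual screen",
                 items=withdraw.items_of("W_input") + withdraw.items_of("W_output"))
```

Specialising one `Defined` into several Logical units, as in the two-physical-screens example above, would then be a mapping from this single object to multiple lower-level LSRM structures.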
To summarise, the Product-driven abstraction strategy followed by the Linking strategy allows us to create the Defined concept and to connect it with the Items composing it. Similarly, the concept of PSG, the Precedence Succedence Graph, was obtained by abstraction of the PRD concept from the paradigm product model. A PSG specifies the ordering conditions between Defineds as the PRD does with Words. The Decomposition strategy was applied to represent the structure of the PSG as a graph composed of Links and Nodes. Following the Top-down mapping strategy, we recognise that the Link matches the LSRM InterSF concept, which captures the different links between the Scenario Functions in a PRD, whereas the Node corresponds to the Scenario Function concept. Using the Specialisation strategy, the Link was specialised into Duplex, Continuous and Multiplex, whereas the Node was specialised into Begin, End and Intermediate. Every Defined is an intermediate link in at least one PSG. Figure 4 summarises the abstraction process from the lower-level LSRM to the upper-level LURM.

4. Pattern-Based Process Model Construction

The Pattern-based process model construction strategy is based on the concept of pattern, which was introduced by Alexander in architecture [Alexander77] and borrowed by IT engineers to capture software design knowledge [Gamma94, Coad96, Coplien95, Fowler97], as well as by method engineers to capture reusable method knowledge [Rolland96, Deneckere98]. According to Alexander, a pattern refers to 'a problem which occurs again and again in our environment and describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice'. The key idea of a pattern is thus to associate a problem with its solution in a well-identified context. Figure 5 shows the pattern meta-model. The problem refers to the situation in which the pattern can be applied and the goal to achieve in this situation.
The situation is characterised by a set of product elements. The solution is represented by a set of steps to realise in order to resolve the problem. A pattern can be simple or compound. The solution of a compound pattern contains steps which call other patterns and are named pattern steps, in contrast to stand-alone steps, which are executed directly. ![Figure 5. Pattern meta-model.](image) The process model for pattern construction is defined by a map based on two core intentions, Identify a pattern and Construct a pattern (Figure 6). To Identify a pattern means to identify a generic problem. As shown in Figure 6, the problem identification can be based on the discovery of a typical situation or of a generic goal in the method context. The two cases are supported by the Situation-based and Goal-driven strategies, respectively. The Aggregation strategy allows us to combine several patterns into a compound one in order to propose solutions for complex problems, whereas the Decomposition strategy deals with the identification of sub-problems, which could also be considered as generic ones. The identification of a new pattern situation leads us to consider that there must be another pattern creating this situation. This case is supported by the Precedence strategy. To Construct a pattern means to formalise its problem (the situation and the goal), to define the solution to this problem as a set of steps to execute, to define its template and to give some examples of its application. Two strategies, named Product-driven and Goal-driven, are provided for this purpose (Figure 6). The guideline supporting the Product-driven strategy is based on the transformation of the product elements from the pattern situation into the product element defined as the pattern target (pattern goal target). The Goal-driven strategy deals with the reduction of the pattern goal into a set of atomic actions to realise in order to achieve this goal.
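A minimal sketch of the pattern meta-model (Figure 5), with illustrative names of our own: a compound pattern is one whose solution contains at least one pattern step calling another pattern, alongside stand-alone steps that are executed directly:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Step:
    """A stand-alone step: an action executed directly."""
    action: str

@dataclass
class Pattern:
    """Problem = (situation, goal); Solution = an ordered list of steps."""
    name: str
    situation: str
    goal: str
    solution: List[Union[Step, "PatternStep"]] = field(default_factory=list)

    def is_compound(self) -> bool:
        # Compound iff at least one step delegates to another pattern.
        return any(isinstance(s, PatternStep) for s in self.solution)

@dataclass
class PatternStep:
    """A step that calls another pattern (only in compound patterns)."""
    callee: Pattern

p2 = Pattern("P2 Immediate Start", "input captured from the user",
             "Formulate To Start requirement",
             [Step("create the Defined"), Step("create the PSG")])
p9 = Pattern("P9 Simple Composition", "the interaction goal is atomic",
             "Formulate requirement for an atomic interaction",
             [PatternStep(p2)])
assert not p2.is_compound() and p9.is_compound()
```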
The Succedence strategy regards the result product obtained by applying an already defined pattern as a potential situation for the definition of another pattern. In order to define the patterns supporting LURM construction, we need to identify typical situations (the problem) in the Lyee user requirements capture (the context) and to define the corresponding guidelines (the solution) assisting in the requirements elicitation and formulation. As shown in Figure 6, we can start the pattern identification process following one of two strategies: Goal-driven or Situation-based. The guidelines supporting these two strategies complement each other, and there is no pre-established order in which to apply them. In our case, we start the pattern identification process following the Goal-driven strategy and consider the core LURM objective ‘to define user requirements’. As stated in the previous section, the LURM defines user requirements as user-system interactions. Therefore, we base our reasoning on the notion of atomic interaction and investigate the possibility of identifying generic activities for requirements capture within this context. We deduce that the requirements capture related to an atomic interaction comprises four activities that can be considered as four potential pattern goals:
- to start the interaction (Formulate To Start requirement),
- to perform the action (Formulate To Act requirement),
- to prepare the output (Formulate To Output requirement) and,
- to end the interaction (Formulate To End requirement).
Each of these activities is linked to the item typology introduced in Section 3, as each activity is associated with one type of Item:
- the Formulate To Start requirement deals with the capture of $W_{input}$,
- the Formulate To Act requirement is concerned with the calculation of $W_{result}$,
- the Formulate To Output requirement helps in eliciting and defining $W_{output}$,
- finally, the Formulate To End requirement considers $W_{end}$.
Each requirement activity is concerned with the elicitation and definition of these Items, their grouping in Defineds and the positioning of those in the PSG of the interaction. Next, we select the Situation-based strategy to Identify a pattern (Figure 6) and consider the possible situations in which these goals are relevant. For instance, we distinguish two different situations dealing with the capture of $W_{input}$: either the input value does not exist and is directly captured from the user, or it exists in a database or a file and is captured from this container. As a consequence, we identify two patterns having the same goal, Formulate To Start requirement, but dealing with different situations, Input capture from the user and Input capture from the internal device. We call these two patterns Immediate Start and Prerequisite for Start, respectively. In the same manner we identify two generic situations for each of the four generic goals and thus identify eight generic patterns. Table 1 characterises the discovered patterns. Each of these 8 patterns deals with one single requirement activity, whereas to obtain the complete set of requirements for a given problem, the requirements engineer has to perform each of the four activities (‘To Start’, ‘To Act’, ‘To Output’ and ‘To End’) exactly once. To provide advice on this, a new pattern, P9, is introduced thanks to the Composition strategy.
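The goal-and-situation indexing of Table 1 can be sketched as a simple lookup; the short situation discriminators used as keys here are illustrative shorthand of ours, not terms from the method:

```python
# Catalogue of the eight elementary patterns, keyed by
# (requirement activity, situation discriminator).
CATALOGUE = {
    ("To Start", "input captured from the user"): "P2 Immediate Start",
    ("To Start", "input retrieved from a database or file"): "P3 Prerequisite for Start",
    ("To Act", "simple formula"): "P1 Simple Word",
    ("To Act", "complex formula"): "P8 Complex Word",
    ("To Output", "no obstacle"): "P6 Single Output",
    ("To Output", "possible obstacles"): "P7 Multiple Output",
    ("To End", "no internal activity"): "P4 Simple End",
    ("To End", "internal activity required"): "P5 Compound End",
}

def select_pattern(goal: str, situation: str) -> str:
    """For a given goal the two situations are exclusive,
    so the lookup is unambiguous."""
    return CATALOGUE[(goal, situation)]

assert select_pattern("To Start", "input captured from the user") == "P2 Immediate Start"
```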
The Succedence strategy for pattern identification leads us to consider the construction of a compound interaction, which can be based on the iteration of an atomic interaction creation, that is, on the iteration of pattern P9. As a result, we identify a new pattern for compound interaction formulation that we call \textit{P10 Complex Composition} (Table 1). \begin{table}[h] \centering \begin{tabular}{|l|l|l|} \hline \textbf{Goal} & \textbf{Situation Characterisation} & \textbf{Pattern name} \\ \hline Formulate To Start requirement & \(W_{\text{input}}\) are captured directly from the user. & P2 Immediate Start \\ Formulate To Start requirement & \(W_{\text{input}}\) are retrieved from a database or a file. & P3 Prerequisite for Start \\ Formulate To Act requirement & \(W_{\text{result}}\) are calculated by a simple formula, which does not require the calculation of intermediate words. & P1 Simple Word \\ Formulate To Act requirement & \(W_{\text{result}}\) are calculated by a complex formula, which requires the calculation of intermediate words and possibly access to data in a file or a database. & P8 Complex Word \\ Formulate To Output requirement & There is no obstacle either in the capture of \(W_{\text{input}}\) or in the production of \(W_{\text{result}}\). & P6 Single Output \\ Formulate To Output requirement & A number of different cases of output production shall be considered due to possible obstacles either in the capture of \(W_{\text{input}}\) or in the production of \(W_{\text{result}}\). & P7 Multiple Output \\ Formulate To End requirement & The interaction ends normally without additional internal activity. & P4 Simple End \\ Formulate To End requirement & Some internal activity shall be performed, such as storing part or all of \(W_{\text{output}}\). & P5 Compound End \\ Formulate requirement for an atomic interaction & The interaction goal is atomic.
& P9 Simple Composition \\ Formulate requirement for a compound interaction & The interaction goal is compound. & P10 Complex Composition \\ \hline \end{tabular} \caption{Characterisation of the identified patterns.} \end{table} Let us now illustrate the construction of a pattern solution. In our example, the pattern solution takes the form of a sequence of rules to be applied by the engineer. Each of them mentions an action to perform, such as ‘\textit{construct a hierarchy of intermediate words involved in the calculation of the result word}’. Most of these actions identify a requirement, i.e. refer to an element of the meta-model: \textit{Defined}, \textit{Item}, \textit{Node} and \textit{Link} in the \textit{PSG}, as for example ‘\textit{introduce a defined of type screen}’. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{pattern_p2.png} \caption{Pattern P2 : Immediate Start.} \end{figure} As an example, we propose the construction of pattern P2 following the \textit{Product-driven} strategy. The objective of this pattern is to prepare a user-system interaction. The \textit{Product-driven} strategy advises to instantiate the meta-model elements necessary to achieve the pattern goal. In this case we need to instantiate the meta-model elements Defined, Item and PSG, which are necessary for the capture of the input values. As a consequence, the actions to perform are:
- to create the Defined for the capture of the necessary input values,
- to define an Item for each input value,
- to link the Items to the Defined,
- to type the Items as Input and Passive, and
- to create the PSG.
Next we need to define the pattern template. The pattern template is an instance of the meta-model representing the configuration of concepts to be instantiated in any application.
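The actions of pattern P2 listed above can be sketched as follows; the data shapes and field names are illustrative assumptions, not part of the Lyee meta-model:

```python
# Minimal sketch of applying pattern P2's solution steps.
def apply_p2(input_names):
    # 1. Create the Defined for the capture of the input values.
    defined = {"name": "S_input", "device": "screen", "items": []}
    # 2-4. Define an Item for each input value, link it to the Defined,
    #      and type it as Input and Passive.
    for n in input_names:
        defined["items"].append({"name": n, "type": ("Input", "Passive")})
    # 5. Create the PSG: Begin node, Continuous link, Intermediate node.
    psg = {"nodes": ["Begin", defined["name"]],
           "links": [("Begin", "Continuous", defined["name"])]}
    return defined, psg

d, psg = apply_p2(["customer_id", "order_date"])
assert len(d["items"]) == 2 and psg["links"][0][1] == "Continuous"
```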
In the case of pattern P2, a PSG must be created containing a Begin node, a Continuous link and an Intermediate node corresponding to the Defined of type screen (called S_input) composed of the elicited Items. Figure 7 shows the pattern P2, its problem, solution and template. In the same manner we construct all the patterns from P1 to P8. The pattern P9 can be constructed following the Goal-driven strategy, which advises to decompose the principal goal into sub-goals until atomic actions are obtained. Thus, the objective of the pattern P9 ‘Formulate requirement for an atomic interaction’ can be decomposed into four sub-goals, ‘Formulate To Start requirement’, ‘Formulate To Act requirement’, ‘Formulate To Output requirement’ and ‘Formulate To End requirement’, in this order. As there are always two candidate patterns for achieving each sub-goal, it is necessary to examine the situation first. As pattern situations are exclusive, the choice of the relevant pattern to apply is easy. The obtained pattern is a compound one. It is shown in Figure 8. <table> <thead> <tr> <th>Pattern P9 : Simple Composition</th> </tr> </thead> <tbody> <tr> <td><strong>Problem:</strong></td> </tr> <tr> <td>&lt; goal: Formulate requirement for an atomic interaction &gt;</td> </tr> <tr> <td>&lt; situation: The interaction goal is atomic &gt;</td> </tr> <tr> <td><strong>Solution:</strong></td> </tr> <tr> <td>Formulate requirement for an atomic interaction</td> </tr> <tr> <td>1. Formulate To Start requirement</td> </tr> <tr> <td>Determine the situation to Start</td> </tr> <tr> <td>Apply pattern P2 or P3</td> </tr> <tr> <td>2. Formulate To Act requirement</td> </tr> <tr> <td>Determine the situation to Act</td> </tr> <tr> <td>Apply pattern P1 or P8</td> </tr> <tr> <td>3. Formulate To Output requirement</td> </tr> <tr> <td>Determine the situation to Output</td> </tr> <tr> <td>Apply pattern P6 or P7</td> </tr> <tr> <td>4.
Formulate To End requirement</td> </tr> <tr> <td>Determine the situation to End</td> </tr> <tr> <td>Apply pattern P4 or P5</td> </tr> </tbody> </table> Figure 8. Pattern P9: Simple Composition. Finally, the pattern P10 deals with the compound interaction. The goal to be achieved is to get a complete and coherent requirement formulation for a compound interaction. This pattern should give advice on how to decompose a compound interaction into atomic interactions to which the pattern P9 should be applied. In fact, the pattern helps in recognising, in the first place, that the interaction is not an atomic one. Each of the ten patterns captures a requirement situation and guides the formulation of the requirement in compliance with the requirement meta-model. The ten patterns will be applied again and again in different software projects using Lyee. Even though actual situations differ from one project to another, each of them should match one pattern situation, and the pattern will bring the core solution to the requirements capture problem raised by this situation. 5. Conclusion In this paper we propose an approach for evolution-driven method engineering. Evolution in this case means that we start method engineering with an existing paradigm model (model or meta-model) and obtain a new model (or meta-model) by abstracting, transforming, adapting or instantiating this paradigm model. Our process model for evolution-driven ME captures these various evolution ways as different strategies to create the product part of the model under construction. The construction of the corresponding process part is also supported by a set of strategies, the selection of which depends on the process nature and complexity. Every strategy is supported by a guideline assisting the method engineer in his or her method evolution task.
The flexibility offered by the map formalism that we use to express our Evolution-Driven ME process model allows us to include other ways of method evolution in a rather simple manner. They can be integrated as additional strategies to satisfy the intentions Construct a product model and Construct a process model. In this paper we presented the evaluation of our approach through the construction of the LURM as an evolution of the LSRM. The Abstraction strategy has been used to Construct a product model, while the Pattern-driven strategy was applied to Construct a process model. We presented these two strategies in more detail and illustrated their application. Our future work is to evaluate the other proposed method evolution strategies and to validate them through real projects. References [Rolland96] Rolland, C., Prakash, N., A proposal for context-specific method engineering, IFIP WG 8.1 Conf. on Method Engineering, Chapman and Hall, pp. 191-208, Atlanta, Georgia, USA, 1996.
Requirements-Driven Mediation for Collaborative Security. Conference or Workshop Item. Published version: http://dx.doi.org/10.1145/2593929.2593938. © 2014 ACM. Accepted Manuscript. Amel Bennaceur\textsuperscript{1}, Arosha Bandara\textsuperscript{1}, Michael Jackson\textsuperscript{1}, Wei Liu\textsuperscript{2}, Lionel Montrieux\textsuperscript{1}, Thein Than Tun\textsuperscript{1}, Yijun Yu\textsuperscript{1}, Bashar Nuseibeh\textsuperscript{1,3} \textsuperscript{1} The Open University, Milton Keynes, UK \textsuperscript{2} Wuhan Institute of Technology, Wuhan, China \textsuperscript{3} Lero - The Irish Software Engineering Research Centre, Limerick, Ireland ABSTRACT Security is concerned with the protection of assets from intentional harm. Secure systems provide capabilities that enable such protection to satisfy some security requirements. In a world increasingly populated with mobile and ubiquitous computing technology, the scope and boundary of security systems can be uncertain and can change. A single functional component, or even multiple components individually, are often insufficient to satisfy complex security requirements on their own. Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment.
Collaborative security, which we propose in this paper, aims to exploit the selection and deployment of multiple, potentially heterogeneous, software-intensive components that collaborate in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities. However, the components that need to collaborate may not have been designed and implemented to interact with one another collaboratively. To address this, we propose a novel framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components that may be needed in order to achieve collaborative security. We elaborate on each of these framework elements, focusing in particular on the challenges and opportunities afforded by (1) the ability to capture, represent, and reason about the capabilities of different software components and their operational context, and (2) the ability of components to be selected and mediated at runtime in order to satisfy the security requirements. We illustrate our vision through a collaborative robotic implementation, and suggest some areas for future work. Keywords: Security requirements, mediation, collaborative adaptation. General Terms: Security. 1. INTRODUCTION As our reliance on digital, connected devices increases, so does our need for security. Secure systems must provide the necessary capabilities to protect assets from intentional harm. These systems rely on an explicit definition of their security requirements to describe precisely which actions in a system are allowed and which ones are prohibited [21]. Once security requirements are specified, it becomes possible to concentrate on the security controls by which these security requirements can be satisfied.
The high degree of dynamism and the inherent heterogeneity of mobile and ubiquitous computing environments make security requirements hard to satisfy. First, the components and the assets of the system might be mobile (e.g., smartphones and personal trackers), making the boundary of the system ill-defined and uncertain. These components are also often developed independently, and may only be accessed through particular interfaces and specific protocols. For example, the protocol used by a lighting system is likely to be different from that used by a surveillance camera for video recording. In addition, the physical environment must also be taken into account. For example, a surveillance camera may only be able to capture video when there is enough light. Therefore, the light may need to be turned on when the camera needs to operate in the dark. It is difficult to design and develop the security controls necessary to protect assets when the operational context is continually changing and no exact assumptions about the environment can be made. The complexity of the security controls necessary to satisfy security requirements in mobile and ubiquitous computing environments means that they are unlikely to be implemented by a single component, or even by multiple components in isolation. For example, a smart home may be equipped with a whole range of electronic devices that can help keep the homeowner and their property safe from intruders. If the light is switched on without the door being opened, this can indicate an intrusion and should trigger the alarm. The lighting system, the door lock and the alarm each provide a specific capability; put together, these capabilities allow the realisation of the security control necessary to protect the house.
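As a sketch, the intrusion rule from the smart-home example above can be expressed over a stream of events; the event names are assumptions made for illustration:

```python
# Rule from the example: a light switched on without the door having
# been opened suggests an intrusion and should trigger the alarm.
def check_intrusion(events):
    door_opened = False
    for e in events:
        if e == "door_opened":
            door_opened = True
        elif e == "light_on" and not door_opened:
            return True  # intrusion suspected: trigger the alarm
    return False

assert check_intrusion(["light_on"]) is True
assert check_intrusion(["door_opened", "light_on"]) is False
```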
In this paper we propose a novel framework for collaborative security that leverages the capabilities of multiple components available in the environment in order to deploy the necessary security controls and satisfy security requirements. To realise collaborative security, we must be able to answer several questions:
- How do we capture, represent, and reason about the capabilities of components in ubiquitous computing environments?
- Which components should collaborate to satisfy security requirements?
- How do we make components collaborate?
- Can we ensure that the collaboration satisfies properties such as correctness, safety, and minimality?
In taking initial steps towards answering these questions, we build upon adaptive security [20] and collaborative adaptation [23]. The former focuses on identifying the security controls necessary to keep security requirements satisfied despite changes in the environment. The latter concentrates on the mechanisms necessary to make multiple components collaborate. However, this collaboration is often hampered by differences in the implementations of independently developed components. To address these differences without changing the components, intermediary software components, called mediators [22], systematically compensate for the differences in the implementations of these components. We propose an initial realisation of collaborative security whereby the security controls are identified using adaptive security techniques and their deployment is performed by first selecting the appropriate components, according to their capabilities, and then synthesising the mediators that enable them to interact successfully. The selection and mediation of multiple components as a means to enact security controls, rather than to achieve integration or interoperability only, raises a number of challenges for collaborative security research. The paper is structured as follows.
Section 2 presents our framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components. Section 3 describes the key techniques used in adaptive security to select, analyse, and deploy security controls. Section 4 examines the principal techniques used in collaborative adaptation to select and mediate components in order to satisfy some functional requirements. Section 5 introduces an implementation of collaborative security by combining and unifying techniques from adaptive security and collaborative adaptation. It also illustrates its application through a robotics example. Section 6 discusses the open research questions. 2. A FRAMEWORK FOR COLLABORATIVE SECURITY Collaborative security aims to exploit the collaboration of multiple, heterogeneous, software-intensive components in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities. A framework for collaborative security bridges the gap between the security requirements and the components available in the environment. Our framework revolves around three concepts: security controls, capabilities, and mediators. Security controls specify the mechanisms that need to be deployed in order to enforce security requirements. According to the value of the assets to protect, the potential threats, and knowledge of the system over time—the sort of attacks that have worked, their consequences, and how they were stopped—we must determine the appropriate security controls. The deployment of these security controls is achieved through the collaboration of multiple components with different capabilities. Central to this framework is the notion of capabilities. The capability of a component models what a component can do, how it can do it, and under which conditions.
In order to enable the analysis and reasoning necessary for meeting security requirements, we must be able to capture, represent, and reason about the capabilities of the components. Capturing capabilities allows us to discover what the components available in the environment can do. Representing capabilities allows us to describe the features of the components explicitly. Reasoning about capabilities allows us to select the components that need to collaborate in order to realise the appropriate security controls. As the selected components can be developed independently, mediators are used to enable them to interoperate. Mediators enable heterogeneous components to interoperate by reconciling the differences in their implementations. As the components that need to interact are not known a priori, we must dynamically synthesise and deploy the appropriate mediators to enable the selected components to collaborate in order to realise the security controls. ![Collaborative Security Framework](image) **Figure 1: Collaborative Security Framework** Figure 1 depicts the main elements of the collaborative security framework and the functions associated with each of them. In subsequent sections, we elaborate on the opportunities and challenges provided by adaptive security and collaborative adaptation to support each of these elements. Specifically, adaptive security is concerned with the analysis of security requirements and the operational context in order to determine the security controls that must be deployed to keep a system secure. Collaborative adaptation is concerned with the selection of multiple components, according to their capabilities, and their mediation, in order to realise adaptations that cannot be handled using a single component, or multiple components in isolation. 3. ADAPTIVE SECURITY Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment.
An adaptive security solution specifies how to: (i) monitor the environment and evaluate the properties of the operational context, (ii) determine the adequate security controls enabling the satisfaction of security requirements according to the properties of the environment, and (iii) deploy these security controls. Figure 2 illustrates the principal elements of an adaptive security solution, which are described in the following. **Monitoring the environment.** The system must be aware of the properties of its operational environment in order to adapt its protection accordingly. Monitoring provides the mechanisms that collect, aggregate, filter and report details collected from the environment [7]. In a security context, the focus of monitoring is on the assets and their values as well as on potential threats and vulnerabilities. Besides techniques that detect violations of the security requirements [15], it is also possible to detect threats by calculating the likelihood of a potential violation of security requirements based on the correlation between past events [17]. **Determination of security controls.** Deciding what security controls are adequate is difficult because security threats are diverse and often unpredictable, and because the security controls selected often impact usability, performance, and other quality attributes of a system. Hence, the determination of security controls is a multi-objective and cost-sensitive decision-making problem [24]. A requirement-driven approach for adaptive security enables analysing and reasoning about the costs and benefits of the security controls. For example, Salehie et al. [20] propose an approach in which a runtime model that combines goals, threats, and assets models is used to evaluate the cost and benefit of applying each security control and choosing the most appropriate one. **Deployment of security controls.**
As most modern software systems are distributed and increasingly connected, the deployment of a security control may necessitate intervention at different parts of the system. DISCOA [8] defines a model-driven approach to support the deployment of security controls using pointcuts over the architectural model of a specific system. Design-time security patterns are increasingly used to provide flexible and effective solutions for incorporating security mechanisms into software systems [9]. However, security controls need not be enacted using a single component but rather through the collaboration of multiple components available in the environment. In the following section, we therefore describe techniques to make components collaborate, albeit to satisfy functional requirements. 4. COLLABORATIVE ADAPTATION Collaborative adaptation aims to address complex adaptations that cannot be handled by a single component, or by multiple components in isolation. A collaborative adaptation solution specifies how to: (i) represent capabilities in order to provide an explicit description of the components and enable reasoning about their collaboration, (ii) capture the capabilities of the components available in the environment, and (iii) synthesise and deploy the appropriate mediators that enable the selected components to interoperate even though they have heterogeneous implementations. Figure 3 illustrates the principal elements of a collaborative adaptation solution. **Representing and reasoning about capabilities.** Since a capability models a component, its representation depends on the kind of analysis and reasoning that need to be performed on the associated component. Tropos [4] defines a capability as “the ability of an actor of defining, choosing and executing a plan for the fulfillment of a goal, given certain world conditions and in presence of a specific event”.
In this definition, capabilities are used in the context of an agent-oriented software development methodology rather than in a runtime environment. We regard the capability of a component as independent of the plan it has to execute, which may change at runtime. In the Semantic Web Services domain [18], a capability describes what the service does, i.e. the functionality it provides to its clients. It is described using the inputs, outputs, pre-, and post-conditions of the service, all of which are associated with concepts in some domain ontology. In addition, a process model specifies how the service achieves its capability and a service grounding describes the information necessary to invoke the service. An ontology-based description of services has many advantages: (i) it promotes the semantic matching between clients’ requests and available services, (ii) it eases the composition of services by making explicit the input, output, pre- and post-conditions of the services as well as their behaviours, and (iii) it facilitates interoperability by formalising both the meaning of the input/output and the behaviour of services. Nevertheless, solutions based on process algebra and automata have proven more suitable for modelling and analysing the behaviour of components. Hence, they are often used to specify, formally, the behaviour of components (and connectors) in a software architecture. **Capturing capabilities.** In ubiquitous computing environments, components often advertise their presence using standard discovery protocols (e.g., UPnP-SSDP, Bonjour, and Jini) [21]. However, these discovery protocols often only provide the syntactic interfaces rather than a rich capability representation. Consequently, learning techniques are often used to infer additional information about the components and complete their capabilities.
The additional information can include the semantics of the interface of a component [3], its behaviour [16], or its non-functional properties [10]. **Synthesis of mediators.** Mediators enable heterogeneous components to interoperate in a non-intrusive way, i.e. without changing the internal implementation of these components. Mediation research has thus far focused primarily on design-time activities [13]. There is however a shift towards runtime synthesis of mediators. Furthermore, the complexity of software systems is such that it is difficult to develop ‘correct’ mediators manually, i.e. mediators that guarantee that the components interact without errors (e.g., deadlocks) and terminate successfully. Inverardi and Tivoli [12] propose an approach to compute a mediator that composes a set of pre-defined patterns in order to guarantee that the interaction of components is deadlock-free. Cavallaro et al. [6] combine assembly methods with pairwise mediators to enable the satisfaction of functional requirements. The former consider the structural constraints and specify a coarse-grained composition of components based on the functionality provided or required by each component, while the latter enforce this composition despite the behavioural differences that may exist between each pair of components, given the correspondence between the interfaces of the components. However, in environments where there is little or no knowledge about the components that are going to meet and interact, the complete generation of suitable mediators must happen at runtime, whereas existing approaches assume that the correspondence between the interfaces of components is given, or that some mediation patterns are known a priori and composed at runtime.
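For illustration, a pairwise mediator of the kind described above can be sketched as follows. This is a minimal, hypothetical sketch: the action names, the stub provider, and the correspondence table are invented, and the correspondence is assumed to be given, as these approaches require.

```python
# Hypothetical action names; the correspondence table is assumed given,
# as in the pairwise-mediator approaches discussed above.
CORRESPONDENCE = {
    "getPicture": "takePhoto",   # action required by A -> action provided by B
    "getPosition": "readGPS",
}

class PairwiseMediator:
    """Forwards each action required by one component as the
    corresponding action understood by the other component."""
    def __init__(self, correspondence, provider):
        self.correspondence = correspondence
        self.provider = provider             # callable: action name -> result

    def handle(self, required_action):
        provided = self.correspondence.get(required_action)
        if provided is None:
            raise ValueError(f"no correspondence for '{required_action}'")
        return self.provider(provided)

# A stub standing in for the providing component's interface.
provider_impl = {"takePhoto": "jpeg-bytes", "readGPS": (48.11, -1.64)}
mediator = PairwiseMediator(CORRESPONDENCE, provider_impl.__getitem__)
print(mediator.handle("getPicture"))  # -> jpeg-bytes
```

A real pairwise mediator would, in addition, translate the data carried by each message and buffer or reorder messages to reconcile behavioural differences; here only the action renaming is shown.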
In addition, the synthesised mediators solve the differences between components only at the application level while, in a world increasingly populated with mobile and ubiquitous computing technology, the differences between components span both the application and middleware layers. **Deployment of mediators.** As a conceptual paradigm that facilitates the communication and coordination of distributed components despite the differences in hardware and operating systems, middleware has often been used as an enabler for collaborative adaptation. For example, M² [23] introduces a message-based collaboration protocol to implement collaborative adaptation and defines an interface to plug legacy components. Nevertheless, when collaboration takes place at runtime, it is necessary to execute mediators that are automatically synthesised for that purpose. Starlink [5] is a runtime framework that executes mediators specified using a domain-specific language that describes the behaviour of each mediator and the data translations that it must perform. Starlink also hides the heterogeneity of middleware protocols by generating, at runtime, the appropriate parsers and composers that translate network messages into and from the actions of the domain-specific language. However, collaborative adaptation targets integration and interoperability and is agnostic to security concerns. In other words, collaborative adaptation solutions cannot readily be used to satisfy security requirements as they do not explicitly reason about assets and their values as well as potential threats and vulnerabilities of the system. 5. TOWARDS COLLABORATIVE SECURITY We propose an initial solution for collaborative security, called collaborative adaptive security, based on our work on requirement-driven adaptive security, which is supported by the SecuriTAS tool [19], and dynamic synthesis of mediators in ubiquitous environments, which is supported by the MICS tool [1].
Collaborative adaptive security enables the selection and mediation of components to deploy the adequate security controls in order to keep the system secure in a changing operational context. Figure 4 illustrates this solution. **Figure 4: Collaborative Adaptive Security** To determine the adequate security controls, SecuriTAS maintains a runtime model that combines goal, asset, and threat models, and uses it to compute the utility of each security control. The security goals are associated with pre-defined security controls. Assets are linked to the security goals and associated threats. Once the security controls are identified, we consider the capabilities of the available components and make them collaborate to realise these security controls. The capabilities of components are represented using a combination of ontologies and transition systems. Ontologies are used to describe the high-level functionality the component requires from or provides to its environment, and to define the semantics of the actions of its interface. Transition systems are used to specify the behaviour of the component formally. To determine which components need to collaborate to enact the selected security control, we must match the goal model against the high-level functionalities of the available components. For the moment, let us consider that the leaves of the goal model correspond to the high-level functionalities of the available components. However, the collaboration between independently-developed components is often hampered by differences in their interfaces and behaviours. Therefore, mediators are synthesised which systematically compensate for these differences by mapping the interfaces of the components and coordinating their behaviours. The automated synthesis of mediators, implemented by MICS, is performed in several steps.
The first step is interface matching, which identifies the semantic correspondence between the actions required by one component and those provided by the others. We incorporate the use of ontology reasoning within constraint solvers, by defining an encoding of the ontology relations using arithmetic operators supported by widespread solvers, and use it to perform interface matching efficiently. For each identified correspondence, we generate an associated matching process that performs the necessary translations between the actions of the components’ interfaces. The second step is the synthesis of correct-by-construction mediators. To do so, we analyse the behaviours of components so as to generate the mediator that combines the matching processes in a way that guarantees that the components progress and reach their final states without errors. The synthesised mediator is the most general component that ensures freedom from both communication mismatches and deadlocks in the composition of the components [2]. The last step consists in making the synthesised mediator concrete by addressing all the remaining differences in the interaction of the components. To do so, we compute the translation functions necessary to reconcile the differences in the syntax of the input/output data used by the components and coordinate the different interaction patterns that can be used by middleware solutions. Hence, mediation is tackled from the application to the middleware layer in an integrated way. The mediators we synthesise act as: (i) translators by ensuring the meaningful exchange of information between components, (ii) controllers by coordinating the behaviours of the components to ensure the absence of errors in their interaction, and (iii) middleware by enabling the interaction of components across the network so that each component receives the data it expects at the right moment and in the right format.
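The first two synthesis steps can be sketched, under strong simplifications, as follows. The toy ontology, action names, and component models are all hypothetical; actual interface matching in MICS relies on constraint solving over ontology encodings, and the behavioural analysis is considerably richer than this plain reachability check.

```python
# A drastically simplified stand-in for the synthesis steps above
# (hypothetical models, not the MICS algorithms).

# Toy ontology: each concept mapped to itself plus all its ancestors.
ONTOLOGY = {
    "Command": {"Command"},
    "Move": {"Move", "Command"},
}

def subsumes(general, specific):
    return general in ONTOLOGY[specific]

def match_interfaces(required, provided):
    """Step 1, interface matching: map each required action to a provided
    action whose ontology concept is related to the required concept."""
    mapping = {}
    for r_act, r_concept in required.items():
        for p_act, p_concept in provided.items():
            if subsumes(p_concept, r_concept) or subsumes(r_concept, p_concept):
                mapping[r_act] = p_act
                break
    return mapping

def reachable_deadlocks(lts_a, lts_b, mapping, start, finals):
    """Step 2, behavioural analysis: explore the mediated synchronous
    product of two transition systems ({state: {action: next_state}})
    and report non-final states where no joint move is enabled."""
    seen, stack, deadlocks = set(), [start], []
    while stack:
        sa, sb = stack.pop()
        if (sa, sb) in seen:
            continue
        seen.add((sa, sb))
        moves = [(next_a, lts_b[sb][mapping[act]])
                 for act, next_a in lts_a.get(sa, {}).items()
                 if mapping.get(act) in lts_b.get(sb, {})]
        if not moves and (sa, sb) not in finals:
            deadlocks.append((sa, sb))   # stuck before reaching a final state
        stack.extend(moves)
    return deadlocks

# One component requires a 'move' action; the other provides 'driveCommand'.
required = {"move": "Move"}
provided = {"driveCommand": "Move"}
mapping = match_interfaces(required, provided)   # {'move': 'driveCommand'}
nao = {"n0": {"move": "n1"}, "n1": {}}
irobot = {"i0": {"driveCommand": "i1"}, "i1": {}}
print(reachable_deadlocks(nao, irobot, mapping, ("n0", "i0"),
                          finals={("n1", "i1")}))  # -> []  (deadlock-free)
```

An empty result means every reachable state of the mediated composition can either progress or is final, which is the (simplified) correctness condition the synthesised mediator must guarantee.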
We have been experimenting with collaborative adaptive security through an early prototype demonstrator using two robots: a programmable autonomous vacuum cleaner (iRobot Create) and a humanoid robot (NAO). The two robots need to collaborate in order to secure a particular area in our laboratory. The iRobot Create has a simple capability that consists of executing the moving commands it receives. NAO has the object-recognition capability and can indicate to iRobot Create the area in which it can move. Both iRobot Create and NAO rely on discovery protocols to advertise their presence in the environment; the former uses Bluetooth discovery while the latter uses Bonjour. Nevertheless, putting iRobot Create and NAO in the same environment is not enough to satisfy the security requirements, as they cannot interact with one another spontaneously. What is needed is a mediator that makes them collaborate in order to realise the security requirement. Therefore, a mediator is synthesised which exchanges messages with both robots through their specific interfaces, using the iRobot Create Open Interface to communicate with iRobot Create, and NAOqi to communicate with NAO; and coordinates the behaviours of the components by first receiving the messages from NAO then sending the commands to iRobot Create. **Analysis.** To present collaborative security more precisely, we describe it using Jackson and Zave’s framework for requirements engineering [14]. Let $E$ denote environment properties, $R_s$ denote security requirements, and $S$ denote the specification of a software system that satisfies the security requirement $R_s$, i.e. $S, E \models R_s$. When the environment properties evolve into $E'$, the specification of the software system $S$, and the software system itself, may no longer satisfy the security requirement $R_s$, i.e. $S, E' \not\models R_s$. As a result, the system must be adapted so as to keep the security requirements satisfied.
To ensure that the security requirement $R_s$ remains satisfied in the environment $E'$, adaptive security transforms the specification of the software system $S$ into a specification $S'$ such that $S', E' \models R_s$. $S'$ is obtained from $S$ through the deployment of the adequate security controls. Nevertheless, the specification $S'$ need not be achieved using a single software component; rather, we may take advantage of the capabilities of the different components available in the environment and make them collaborate so as to realise $S'$. Let $\mathcal{C} = \{C_1, \ldots, C_n\}$ denote the set of the capabilities of the components available in the environment $E'$. Note that $\mathcal{C}$ may include the component(s) of $S$ as well. None of the components available in the environment is able to satisfy the security requirement $R_s$, i.e. $\forall C_i \in \mathcal{C} : C_i, E' \not\models R_s$. It might be the case that even the conjunction of multiple components cannot satisfy the security requirement $R_s$, i.e. $\forall C' \subseteq \mathcal{C} : C', E' \not\models R_s$. Collaborative security seeks a subset of components $C''$, together with the specification $M$ of the associated mediator(s), such that $C'', M, E' \models R_s$. 6. A RESEARCH AGENDA The increasing ubiquity of connected devices both challenges and supports security. The challenges arise from the frequent and unpredictable changes in the environment, in assets under protection and their values, and the discovery of new threats and vulnerabilities. The support comes from the plethora of devices that can be composed in a multitude of ways in order to protect valuable assets. Collaborative security aims to address new or changing threats by enabling the runtime deployment of adequate security controls through the collaboration of the components available in the operational environment.

¹ http://www.irobot.com/us/learn/Educators/Create.aspx
² http://www.aldebaran-robotics.com/en/
In this paper, we suggested that collaborative security may be realised by combining techniques from adaptive security and collaborative adaptation. Nevertheless, many challenges remain. The choice of security controls need not be specified at design time but may be elicited according to the capabilities available in the environment. We envision inferring, at runtime, new security controls that a developer or a user did not consider at design time. We are also considering the use of security arguments to drive the satisfaction of security requirements [11]. We are thus investigating if argumentation can be used to determine the necessary security controls. Attempting to construct a security satisfaction argument exposes trust assumptions and oversights within the system that can affect security. At runtime, we can evaluate these assumptions to decide if they might be relaxed or invalidated. In addition, as ubiquitous computing and cyber-physical systems can influence their environment, they might change it in order to validate the necessary assumptions that would satisfy the security requirements. Moreover, the trustworthiness of the individual components may help determine the components that need to collaborate in order to achieve security. Finally, while collaboration can help maintain security requirements, how can we prevent it from introducing vulnerabilities? In other words, how do we synthesise mediators that constrain the collaborative behaviour so as to disable anti-goals and prevent attacks from succeeding? We believe that collaborative security is a fertile research area, with both potential and challenges, and we invite other researchers to collaborate with us in addressing some of these challenges. 7. ACKNOWLEDGMENTS We acknowledge SFI grant 10/CE/I1855 and ERC Advanced Grant no. 291652 (ASAP). 8. REFERENCES
{"Source-Url": "http://oro.open.ac.uk/39643/1/seams14_88_cameraReady2.pdf", "len_cl100k_base": 5454, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21259, "total-output-tokens": 7236, "length": "2e12", "weborganizer": {"__label__adult": 0.0003211498260498047, "__label__art_design": 0.00039768218994140625, "__label__crime_law": 0.0006022453308105469, "__label__education_jobs": 0.0007653236389160156, "__label__entertainment": 7.992982864379883e-05, "__label__fashion_beauty": 0.0001456737518310547, "__label__finance_business": 0.00032138824462890625, "__label__food_dining": 0.00030541419982910156, "__label__games": 0.0005998611450195312, "__label__hardware": 0.0006918907165527344, "__label__health": 0.0004754066467285156, "__label__history": 0.0002083778381347656, "__label__home_hobbies": 9.047985076904296e-05, "__label__industrial": 0.0003898143768310547, "__label__literature": 0.00032520294189453125, "__label__politics": 0.00030732154846191406, "__label__religion": 0.00035834312438964844, "__label__science_tech": 0.050811767578125, "__label__social_life": 0.0001437664031982422, "__label__software": 0.0123138427734375, "__label__software_dev": 0.9296875, "__label__sports_fitness": 0.00025844573974609375, "__label__transportation": 0.0003848075866699219, "__label__travel": 0.00016951560974121094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34604, 0.02399]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34604, 0.61268]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34604, 0.90255]], "google_gemma-3-12b-it_contains_pii": [[0, 797, false], [797, 5683, null], [5683, 11200, null], [11200, 15743, null], [15743, 21632, null], [21632, 28593, null], [28593, 34604, null]], "google_gemma-3-12b-it_is_public_document": [[0, 797, true], [797, 5683, null], [5683, 11200, null], [11200, 15743, 
null], [15743, 21632, null], [21632, 28593, null], [28593, 34604, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34604, null]], "pdf_page_numbers": [[0, 797, 1], [797, 5683, 2], [5683, 11200, 3], [11200, 15743, 4], [15743, 21632, 5], [21632, 28593, 6], [28593, 34604, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34604, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
860e70838c832f8d462714fbf7542da716711b13
Abstract. Challenges that are hardly technical have been reported to occur during enterprise architecture development. To address those caused by ineffective collaboration between architects and organisation stakeholders, we are developing a method referred to as Collaborative Evaluation of Enterprise Architecture Design Alternatives (CEADA). Our method aims at enabling effective execution of collaborative tasks during enterprise architecture creation. In earlier work, initial requirements for CEADA were defined and collaboration engineering was used to design a collaboration process for CEADA. Following design science research guidelines, the initial models (describing the requirements and the design of the collaboration process) were analytically evaluated using structured walkthroughs with enterprise architects. The models were then refined and analytically re-evaluated using structured walkthroughs with skilled facilitators and enterprise architects. This paper presents findings from the analytical re-evaluation of the refined model (describing only the requirements), and reports how it was further refined. Keywords: Enterprise Architecture Creation, Collaboration 1 Introduction Organisations are often challenged by rapid and complex changes that occur in their business environment [14, 20], yet they should be capable of adapting swiftly to such changes [14]. Adapting to some organisational changes may imply a major redesign of an organisation’s structure, business processes, IT applications, and technical infrastructure [10]. (Enterprise) architecture then comes in handy to manage the complexity [10, 14, 21, 25] and inflexibility associated with an organisation’s business processes, information systems, and technical infrastructure [19].
Based on the definitions of (enterprise) architecture given by IEEE [6], ArchiMate [10], and The Open Group Architecture Framework (TOGAF) [24], we define enterprise architecture as a normative means to direct enterprise transformations. Normative means can take the shape of principles, views, or high level architecture models, whose role is to be a normative instrument during the intended transformation. Although enterprise architecture development generally involves creating, applying, and maintaining the architecture to realise its planned purpose [14], this research focuses on creating enterprise architecture. Enterprise architecting often involves challenges that are hardly technical but are associated with political, project management, and organisational issues and weaknesses [7]. Although politics is a potential risk that, in modern business environments, can lock an organisation into a rigid posture [17], addressing political issues is beyond the scope of this research. We instead focus on how some challenges associated with project management and organisational weaknesses can be overcome during enterprise architecture creation. From [7], examples of such challenges include: choosing a suitable framework and model; limited modeling tools to allow alignment of business operations and processes; making enterprise architecture models understandable by stakeholders; inability of architecture modeling languages to represent dynamics of a system and enable end-to-end performance analysis of an architecture; and the unrecognised role of an enterprise architect among executives of organisations accustomed to reactive and proactive decision making. Moreover, the problematic nature of collaboration between architects and stakeholders during the architecture effort leads to delivery of complex and abstract enterprise architecture models that are hardly usable in practice [19].
The stakeholder world and technical design world are quite disjointed, hence the need for an architect to foster connections that will yield output that appropriately fulfills stakeholders’ interests [11]. Therefore, fulfilling all stakeholders’ interests implies addressing some aspects in the methodology for designing architectures [10]. Although literature (e.g. [9, 10, 11, 21, 24]) reveals efforts towards improving the architecting methodology, managing collaborative activities in enterprise architecture creation remains superficially addressed. From the call (by [1, 2]) for openness in the architecture process, it can be deduced that guidelines for enterprise architecting need to be strengthened with collaborative and interactive tasks. In [13], with the aim of addressing this need, we introduced a method to support collaborative decision making in architecture creation, i.e. Collaborative Evaluation of Enterprise Architecture Design Alternatives (CEADA). The initial requirements for CEADA were defined, and collaboration engineering was used to design a collaboration process to address those requirements. Following the design science research guidelines in [5], the initial models (describing the requirements and the design of the collaboration process to address them) were analytically evaluated using structured walkthroughs with enterprise architects. Output from the analytical evaluation was used to refine the initial models. It was vital to further expose the refined models to skilled facilitators (since CEADA addresses collaboration aspects) and to enterprise architects (for further evaluation). Hence, an analytical re-evaluation (using structured walkthroughs) was conducted with skilled facilitators and enterprise architects. This paper presents findings from the analytical re-evaluation of the refined models, and reports how the requirements for CEADA were further refined. The remainder of this paper is structured as follows.
Section 2 reports existing work on improving enterprise architecting. Section 3 briefly describes requirements for CEADA, while section 4 presents their analytical re-evaluation and section 5 their further refinement. Section 6 concludes the paper. 2 Existing Work on Improving Enterprise Architecting This section reports efforts towards improving the architecting methodology. In [21] an analysis of several enterprise architecture approaches is given, as well as insights into selecting and creating an appropriate architecture approach for an organisation. Since several architecture frameworks specify architecture products and are silent about the way of creating them, TOGAF defines a detailed method (Architecture Development Method - ADM) for architecture development and for defining an organisation-specific architecture approach [24]. TOGAF recommends several techniques for executing ADM tasks; however, details of executing some collaborative tasks are not given. In [10], ArchiMate is defined to enable visualisation and analysis of architectures, since there was no detailed architecture modeling language for expressing business processes and their IT support in an easily understandable way. Moreover, ArchiMate complements TOGAF by offering concepts that enable creation of consistent integrated architecture models, that aptly communicate TOGAF architecture views, and enable communication and decision making across all organisation domains [9].
In [19], in order to enable architects to understand what stakeholders expect from them, the following are reported as essential attributes for the architecture function: (1) explicitly demarcated stakeholders’ roles within the architecture function, at different hierarchical and functional organisation levels; (2) willingness of architects to think along with stakeholders and understand their goals and problems so as to provide the best solutions; (3) architects to possess individual skills that enhance effective communication with stakeholders; (4) architects to have a long-term view and a realistic opinion about the organisation and realisation of its business and IT strategy. These attributes call for creation and deployment of (or adaptation of existing) techniques that enhance effective collaboration (among actors) into enterprise architecting. Deployment of collaborative measures in enterprise architecting is seen in the definition of architecture principles. For example in [12, 15, 16], approaches are presented for enabling formulation of architecture principles in a collaborative setting. Principles guide enterprise architecting, they justify decisions made on architecture components, they should be linked to stakeholders and their concerns [15], and they represent general requirements (functional and constructional) for a class of systems [25]. Other efforts in improving the way principles are defined include the following. In [3], it is demonstrated how the basic logical principles of Object Role Modeling and Object Role Calculus can be used to systematically formulate architecture principles and improve their quality.
In [16], an Enterprise Engineering framework is defined to support: definition of principles in a specific and measurable way, so that they can effectively constrain design space; effective and efficient assessment of the impact of principle(s); and detection of possible contradictions in principles, so that they can be adequately prioritised/clarified. In [24], criteria of good architecture principles are defined. Enterprise architecture products generally include “visualizations, graphics, models, and/or narrative that depicts the enterprise environment and design” [21]; they are not limited to principles [14]. These products describe the enterprise architecture decisions taken, and offer an organisation-wide approach for communicating and enforcing such decisions [19]. During the design of these products, the architect needs to communicate with all key stakeholders and balance their needs and constraints so as to acquire a feasible and acceptable enterprise architecture design [19]. In our view, this requires that during enterprise architecture creation, design alternatives for these products be collaboratively generated and evaluated, such that feasible and acceptable ones are selected. Since enterprise architecture literature hardly reveals an explicit way of achieving this, we are devising CEADA for that cause. 3 Requirements for CEADA Method In design science, artifacts for solving organisational problems are created based on pre-existing theories, frameworks, instruments, constructs, models, methods, and instantiations [5]. Using this paradigm, the initial requirements for a method (i.e. CEADA) to support collaborative decision making in enterprise architecture creation were defined (as reported in [13]). This was done by using the causality analysis theory and adapting the generic decision making process defined by Simon in [22] to enterprise architecture.
The initial model (describing the requirements) was analytically evaluated using structured walkthroughs with enterprise architects, and the feedback was used to refine the requirements. This section briefly describes the refined requirements for CEADA, as illustrated in figure 1 using the Business Process Modeling Notation (BPMN). These requirements were analytically re-evaluated as discussed in section 4.

Fig. 1. Decomposition of Requirements for CEADA

The generic decision making process involves three phases, i.e.: Intelligence, investigating an environment for intervention opportunities; Design, devising possible courses of action (decision alternatives) to solve the problem or improve the environment; and Choice, choosing a particular course of action (decision alternative) from those available [22]. In figure 1, steps 0 and 1 depict intelligence, steps 2 and 3 depict design, and step 4 depicts choice. At step 0, CEADA should enable enterprise architects to collaborate with senior management so as to determine the organisation’s problem scope, external constraints from regulatory authorities, the purpose of the architecture effort, high level design specifications, and the key stakeholders to participate in the subsequent collaboration required in architecture creation. Given that each stakeholder pursues specific objectives (depending on his/her role in the architecture function, the organisation level at which (s)he operates, and the aspect area (s)he focuses on), there are extensive and potentially conflicting stakeholders’ expectations that are hard to satisfactorily address [19]. This calls for the need to seek a shared conceptualisation and understanding of the organisation’s problem and solution aspects between architects and stakeholders (and also among stakeholders).
Besides, this shared understanding is the basis for enterprise evolution [14,23], and it facilitates negotiation [26], which is vital during the evaluation and selection of architecture design alternatives. Therefore, at step 1 CEADA should enable the creation of a shared conceptualisation and understanding of the output from step 0. It should also enable architects and stakeholders to identify, evaluate, and agree on quality criteria for evaluating design alternatives. At step 2 CEADA should enable the identification, elaboration, and validation of architecture design alternatives. At step 3 the method should enable collaborative evaluation of feasible architecture design alternatives, while at step 4 it should enable collaborative selection of appropriate and efficient architecture design alternatives. The purpose of steps 0, 1, and 2 is to gradually build consensus among stakeholders (on the solution aspects) so that they can effectively evaluate design alternatives and select appropriate and efficient ones.

4 Analytical Re-Evaluation of Requirements for CEADA

In design science, evolving artifacts are evaluated using methods that are observational, analytical, experimental, descriptive, or testing-oriented [5]. This section reports how the requirements described in section 3 were analytically re-evaluated using bilateral structured walkthroughs with skilled facilitators and enterprise architects. Walkthroughs are one of the methods used for analytical evaluation [18]. A walkthrough is a step-by-step review and discussion, with practitioner(s), of the activities that make up a process, to reveal errors that are likely to hinder the effectiveness and efficiency of the process in realising its intended plan [8]. The aim of using walkthroughs was to have enterprise architects and skilled facilitators identify and eliminate faults and ambiguities in the requirements for CEADA. Each walkthrough session (with a duration of at most two hours) involved two actors, i.e.
the researcher and either an enterprise architect or skilled facilitator. The agenda was: (1) the researcher explained the aim of the research and the role of the architect or facilitator in the walkthrough (i.e. to comment on the relevance of CEADA in practice, review its requirements, identify faults and ambiguities in them, and give insights on how to eliminate them); and (2) a step by step discussion of CEADA requirements. In each session the researcher took notes which were later studied and used to refine CEADA requirements (as shown in section 5). Output from the three walkthrough sessions is outlined in tables 1 and 2 and described in sections 4.1, 4.2 and 4.3. ### Table 1. Summary of Analytical Re-Evaluation of Requirements <table> <thead> <tr> <th>CEADA Activities</th> <th>Enterprise Architect</th> <th>Facilitator</th> <th>Architect and Facilitator</th> </tr> </thead> <tbody> <tr> <td>0.1 Define organisation problem scope</td> <td>Identifying problem scope and external constraints are vital activities as they are key inputs to visioning and strategy development in a business transformation initiative</td> <td>Interviews are not a suitable way to achieve these tasks, instead group support system can be used</td> <td>These activities are important because they yield the first set of design principles</td> </tr> <tr> <td></td> <td>Factors like business requirements, business strategy and objectives are vital inputs when defining the problem scope. 
At this level, detailed information on these factors may not be available but there should be pointers to them, in order to define a clear problem scope</td> <td>Pre-existing data files and models (developed using other applications) can be used along with the group support system to enable informed and successful discussions of these aspects</td> <td>They relate to sponsor meetings in the ASE concept</td> </tr> <tr> <td></td> <td>Fixed external legal constraints guide the formulation of solution aspects</td> <td>They address collaboration aspects when developing IAF artifacts</td> <td>In practice, ASE is used to address collaboration aspects when developing IAF artifacts</td> </tr> <tr> <td>0.2 Identify external design constraints</td> <td></td> <td></td> <td>There is need to ensure that management acknowledges the relevance of the subsequent collaborative work and its cost and time implications.</td> </tr> <tr> <td>0.3 Define purpose of the architecture effort</td> <td>defining purpose of the architecture effort is based on a clear problem scope</td> <td>relevant</td> <td></td> </tr> <tr> <td>0.4 Define high level design specifications</td> <td>should be global or high level specifications of the solution, and should not be confused with low level implementation (design) specifications</td> <td>explicitly define the type of design alternatives that CEADA is addressing</td> <td></td> </tr> <tr> <td>0.5 Select key stakeholders to participate in subsequent collaboration efforts</td> <td>relevant</td> <td>relevant</td> <td></td> </tr> <tr> <td>0.6 Reveal calendar of events for architecture effort &amp; expectations of architect team &amp; key stakeholders</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> ### 4.1 Walkthrough with Enterprise Architect At step 0, defining problem scope and external design constraints are significant because they are key inputs to strategy development in a business transformation. 
However, figure 1 does not indicate the key inputs for defining a clear problem scope, i.e. business requirements, strategy, and objectives. A clear problem scope is useful for determining the purpose of the architecture effort. Also, considering fixed external constraints at step 0 is vital, as such constraints guide the formulation of solution aspects. High level design specifications should be global specifications of the solution, and should not be confused with low level implementation (design) specifications. Requirements (or activities) in step 0, and others in step 1 of the model, are useful for strategy elaboration in a business transformation process. Output from step 0 can be useful for defining criteria for evaluating design alternatives; however, selecting a method for evaluating design alternatives is not the role of stakeholders. Additionally, there is a need to clarify the type of evaluation criteria that should be defined at step 1. This is because evaluation criteria fall into four categories, i.e. business criteria, governance criteria, operational criteria, and architectural criteria. Different stakeholders are crucial in defining these criteria: e.g. senior management stakeholders are crucial for defining business criteria but not the other criteria categories; the IT manager and members of the operational department are crucial for defining governance and operational criteria; while architectural criteria are defined by architects. Criteria categories can then be merged and their interrelationships defined, in order to enable stakeholders to evaluate and select alternatives in an informed way. In steps 2, 3, and 4 of the requirements, it should be made explicit which type of (design) alternatives CEADA aims to address. If, for example, an organisation’s strategy is to expand its operations to country X, then at least two types of design alternatives can be identified.
(1) Business solution alternatives (high level alternatives), which are alternative ways in which the organisation can execute its strategy, e.g. by: taking over an already existing company in country X, merging with an already existing company in country X, or starting a completely new branch in country X. (2) Implementation (low level) design alternatives, which are design alternatives for each of the three solution alternatives (identified in (1)) for executing the business strategy in country X. This implies that once one solution alternative has been chosen, its implementation (low level) design alternatives are identified and evaluated. Therefore, the model should clearly indicate whether CEADA aims to address business solution alternatives or implementation level design alternatives. It should also be noted that each of these types of alternatives requires a different approach to formulation and evaluation. For example, low level design alternatives are too technical for stakeholders to be involved in their formulation and evaluation; instead, stakeholders should be involved in identifying and evaluating business solution alternatives, and in defining requirements for the enterprise architecture.

### 4.2 Walkthrough with Facilitator

Interviews are not a suitable way of achieving the goal of the requirements at step 0, as proposed. Instead of interviews, collaboration engineering or group decision support can be used in step 0 to obtain all required information from senior management, especially if more than three stakeholders are involved. Applying collaboration engineering or group decision support should not be restricted to steps 1, 2, 3, and 4 only, as proposed. Moreover, pre-existing data files and models (developed using other applications) can be used along with group decision support software at step 0.
Besides, models that portray the dynamics of the organisation’s problem and the intended solution can be constructed at step 0, with facilitation support from collaboration engineering. Pre-existing data files and models that were developed (using other applications) before or during step 0 (or any other step) can be used when executing other requirements in the method. Such a way of working will enable informed discussions when defining the problem scope, external constraints, purpose of the architecture effort, and design specifications. At step 1, the sharing and categorisation of concerns can be improved by classifying concerns into: concerns associated with the problem scope; and concerns associated with design specifications or negotiable constraints.

### 4.3 Walkthrough with Enterprise Architect and Facilitator

In practice, a generic approach known as the Accelerated Solutions Environment (ASE), documented in [4], is used in large group interventions at the start of a business (transformation) initiative, to create commitment, agreement, and approval by aligning critical stakeholders. It has also been used to undertake collaborative activities when developing artifacts using the Integrated Architecture Framework (IAF). ASE addresses problems that are complex (in scope, market, politics) and involve a number of key stakeholders (e.g. 30-110) who have divergent interests and views. It is more than a traditional facilitated workshop, and involves intensive collaborative work over a duration of three days (without group support systems). Output obtained from a three-day ASE event is used by architects to create a comprehensive high level solution (e.g. a high level architecture description), which is later translated into a low level detailed solution. Prior to the three-day event, several sponsor meetings are held with company executives and the architect team.
The sponsor meetings aim at developing content on the objectives of the three-day event, the input information needed for the success of the event, and the expected output from the event; and at selecting the type of stakeholders to be invited to the event and the standards to be used. The three-day event is managed by a team of skilled facilitators, who design their facilitation intervention strategies based on the desired end results of the event. The event generally follows a Scan-Focus-Act cycle. The Scan phase involves seeking a common language and understanding of aspects. This involves: a plenary presentation for all invited participants (stakeholders); followed by parallel sessions known as knowledge bursts (of 15-20 minutes), in which participants work in small groups that focus on problem solving and learning new skills; followed by parallel small-group presentations of the aspects addressed and learned in the knowledge bursts; and completed with a plenary brainstorming session (led by a facilitator) on all aspects learned in this phase. The Focus phase is assignment-driven and aims at finding solutions. Participant groups handle domain-specific aspects by answering questions and developing content for a given domain. Different scenarios are sought, stretched, evaluated, and validated to get the desired products and to gain stakeholders’ commitment. The Act phase involves building group alignment and implementation plans for the defined aspects. ASE concepts can be used to detail the CEADA requirements (or to devise a process for their execution), and the two approaches would complement each other and yield improved collaboration in the architecture function. For example, requirements at step 0 are important because they yield the first set of design principles for an initiative, and in ASE design principles are obtained through sponsor meetings. Some requirements at step 1 (e.g.
sharing concerns on problem and solution aspects, defining criteria) are executed in ASE using the “take-a-panel” technique, while others are executed using the “share-a-panel” technique. Take-a-panel involves dividing participants into small groups to concentrate on problem solving and learn new skills (in short knowledge burst sessions), whereas share-a-panel involves turns in which each participant explains his or her own concepts to the members of his or her group. Requirements at steps 2, 3, and 4 are executed in ASE using the Focus phase, in which scenarios are sought, stretched, evaluated, validated, and integrated into a first draft of the solution.

5 Further Refinement of Requirements for CEADA

In design science, feedback from evaluating an evolving artifact is used to refine it so as to increase its utility [5]. Feedback from the analytical re-evaluation (in section 4) was used to further refine the CEADA requirements, as shown in figure 2. At step 1 the appended requirements are: 1 (define purpose of session); 2 (define basic information on business requirements, strategy, and objectives); and 7 (seek consensus on whether the scope of the problem and its intended solution require a collaborative effort). This is because table 1 indicated that the requirements at step 1 ought to be executed in a collaborative session (with senior management and enterprise architects) rather than through interviews. It was also indicated that basic information on business requirements, strategy, and objectives is essential for defining a clear problem scope and its solution. Moreover, it was indicated that there is a need to ensure that senior management acknowledges the relevance of a collaborative effort (in enterprise architecting, so as to achieve the intended solution), since this has cost and time implications.
At step 2, the requirements regarding defining evaluation criteria were moved to step 3, because the input information for defining evaluation criteria would be obtained during execution of step 3. Step 3 was inserted to ensure that CEADA will enable the definition of requirements for the enterprise architecture, and of quality criteria for evaluating architecture design alternatives. This is because table 2 indicated that CEADA should enable stakeholders to define requirements for the enterprise architecture, and that quality criteria should be categorised into four categories, of which stakeholders should define the business, governance, and operational criteria. Step 4 was inserted to ensure that CEADA will enable stakeholders to formulate possible solution scenarios and refine quality criteria. This is because it was indicated that stakeholders should participate in identifying business solution alternatives rather than in formulating and evaluating the technical architecture design alternatives. This is also why step 5 has been defined as a black box session, to be conducted mainly by enterprise architects, since it involves translating the identified (or formulated) solution scenarios into proper enterprise architecture design alternatives. Step 5 requirements include: defining architectural quality criteria and merging them with the business, governance, and operational criteria defined at step 4; translating solution scenarios into design alternatives; elaborating and validating design alternatives; identifying a method to analyse design alternatives; and analysing design alternatives. Step 6, a collaborative session, has been decomposed into: defining the purpose of the session; enterprise architects explaining the implications of the analysed architecture design alternatives to business stakeholders; seeking shared understanding of these implications among key stakeholders; and guiding stakeholders to select the appropriate and efficient alternatives.
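To summarise, the refined step structure described above can be encoded as a small sketch. This is a hypothetical illustration of ours (not an artifact of CEADA itself); the step numbers, modes, and focus descriptions follow our reading of the refinement:

```python
# Hypothetical encoding of the refined CEADA steps described above,
# distinguishing collaborative sessions from the architects-only
# "black box" step (step 5).
REFINED_STEPS = [
    (1, "collaborative", "scope the problem with senior management; confirm need for a collaborative effort"),
    (2, "collaborative", "build shared understanding of problem and solution aspects"),
    (3, "collaborative", "define architecture requirements and business/governance/operational criteria"),
    (4, "collaborative", "formulate possible solution scenarios; refine quality criteria"),
    (5, "black box", "architects translate scenarios into design alternatives and analyse them"),
    (6, "collaborative", "explain implications of analysed alternatives; select appropriate and efficient ones"),
]

def steps_by_mode(mode):
    """Return the step numbers executed in the given mode."""
    return [num for num, m, _ in REFINED_STEPS if m == mode]
```

Such an encoding makes explicit that only step 5 excludes business stakeholders, while all other steps are collaborative sessions.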
6 Conclusions and Future Work

This paper presented the analytical re-evaluation and further refinement of the requirements for realising collaborative decision making during enterprise architecture creation. Currently, we are undertaking questionnaire surveys (with enterprise architects) aimed at further validating these requirements and capturing more practical insights into them. The analytical re-evaluation also offered suggestions on how these requirements can be executed. For example, we have used ASE techniques (learnt from the walkthroughs) to improve the design of the collaboration process that has been formulated (using collaboration engineering) to address the CEADA requirements. Preparations are ongoing for an experimental evaluation of this collaboration process using a fictitious case. After several experimental iterations have been done on the collaboration process, it will be evaluated using a real organisation case.

Acknowledgements: We are very grateful to Richard Bredero, Karin Blum, Arnold van Overeem, Claudia Steghuis, Hans Mulder, Raymond Slot, Tommes Snels, and Mark van der Waals for their valuable contributions and practical insights into our research.

References
December 2004

Sharing and Access Right Delegation for Confidential Documents: A Practical Solution

S.M. Yiu *University of Hong Kong*
SW Yiu *Hydra Limited, Wanchai, Hong Kong*
L. Lee *University of Hong Kong*
Eric Li *University of Hong Kong*
Michael Yip *University of Hong Kong*

Recommended Citation: [http://aisel.aisnet.org/acis2004/85](http://aisel.aisnet.org/acis2004/85)

Abstract

This paper addresses a practical problem in Document Management Systems for which no existing solution is available in the market. To store confidential documents in a Document Management System, a common approach is to keep only the encrypted version of the documents, to ensure the confidentiality of their contents. However, how to share these encrypted documents and how to delegate the access rights to these documents are not straightforward, even though these operations are common in most companies. In this paper, we discuss the issues related to this problem and provide a practical and easy-to-implement solution. Our solution has been shown to be feasible by a prototype implementation. We also show how to extend our solution to take advantage of the hierarchical structure of a company to make it more scalable. As a remark, this problem was initiated by a local company based on its actual needs to fulfill a set of business objectives and requirements.

Keywords: Document management systems, access right delegation, encrypted documents, public-key infrastructure

1. INTRODUCTION

Document Management Systems (e.g.
Bertino 1990) are Information Systems specially designed for handling and managing documents. In these systems, documents are usually stored in a database for easy and efficient retrieval. This kind of system is becoming popular, as it helps to save storage space for paper records and can enhance the communication and dissemination of information. In real applications, the management of these documents is complicated by practical requirements. Not all documents are accessible to everyone in the company. Some are strictly confidential, and extra security measures must be taken to protect the contents of these documents. On the other hand, sharing of these documents is unavoidable, and in most companies it is also common for a manager to delegate his/her access rights to these documents to a subordinate to help handle the documents. Consider the following scenario. In a company, let $D$ be a confidential document (e.g. a bid document) that should only be accessible by some senior staff, say Bob, Mary, and John. However, John is on a business trip and would like to delegate his access right to his subordinate May for handling the document. Of course, later on, when John returns, he would like to revoke May’s access right to $D$. It is not straightforward how the sharing as well as the delegation of access rights can be done while keeping the contents of the documents as secure as possible. At first glance, this problem is related to access control. One of the common techniques is to make use of an Access Control List (ACL) (Barkley 1997, Li et al. 2002, Qian 2001, Stiegler 1979), which states clearly which user is allowed to access which resources (documents), together with the allowable operations (e.g. read, write, delete), in the form of a data structure (e.g. a table). Whenever a user wants to access a document, the system will check the ACL to see if the user is granted the access.
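The ACL check just described can be sketched as follows. This is an illustrative sketch of ours; the class and method names are hypothetical, not taken from any particular product:

```python
# Minimal sketch of an Access Control List: for each document it records
# which user may perform which operations (read, write, delete).
class AccessControlList:
    def __init__(self):
        # maps document id -> {user: set of allowed operations}
        self._entries = {}

    def grant(self, doc, user, ops):
        """Add (or extend) an ACL entry for a user on a document."""
        self._entries.setdefault(doc, {}).setdefault(user, set()).update(ops)

    def revoke(self, doc, user):
        """Remove a user's ACL entry for a document, if present."""
        self._entries.get(doc, {}).pop(user, None)

    def is_allowed(self, doc, user, op):
        """The access check: is this operation granted to this user?"""
        return op in self._entries.get(doc, {}).get(user, set())

# The scenario above: D is accessible by Bob, Mary, and John.
acl = AccessControlList()
for user in ("Bob", "Mary", "John"):
    acl.grant("D", user, {"read", "write", "delete"})
acl.grant("D", "May", {"read"})   # John delegates read access to May
acl.revoke("D", "May")            # revoked when John returns
```

As the grant/revoke calls show, sharing and revocation reduce to adding and removing entries, which is why an ACL alone is simple to administer.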
In fact, using an ACL, sharing, delegation, and revocation can be done easily by adding and removing the corresponding user in the ACL. In our example, the ACL will probably contain four entries for the document $D$, for Bob, Mary, John, and May. When John comes back from his trip, the entry of May for $D$ will be deleted. However, from the security point of view, an ACL alone does not provide a satisfactory solution to the problem. The reasons are as follows. If only the plain version of a document is stored in the database, attackers may be able to get hold of the document if they can break into the system. In fact, if the document is transmitted through the network, an attacker can easily obtain a copy of the document without breaking into the database. For highly confidential documents, we certainly need another level of security. Also, the ACL is usually manipulated by the database administrator, who has full control of and access rights to all documents in the database. So, the security of the system relies on a single person, which violates the security principle that duties should be separated to avoid collusion of users. To conclude, the use of an ACL alone does not solve the problem. On the other hand, to ensure the confidentiality of the contents of the documents, one can use encryption. That is, we only store the encrypted version of the documents, so that even if attackers are able to get hold of the documents, it is difficult for them to understand the contents without knowledge of the corresponding decryption key. To perform encryption, one can make use of the Public Key Infrastructure (PKI). For more information about PKI, see Kelm (2000), Kent and Polk (1995). In PKI, each entity has a public and private key pair. The private key is supposed to be kept secret by the user, and the public key is authenticated by a trusted certificate authority (CA).
To encrypt a document for a user, say Bob, we can use Bob’s public key, and the encrypted document can then only be decrypted using Bob’s private key. However, using encryption alone also has problems. For example, in our case, we may need to encrypt the document $D$ four times, each time with the public key of Bob, Mary, John, and May respectively, and thus store four different encrypted versions of the document. It is obvious that this solution is not scalable and wastes a lot of storage space. Also, compared to symmetric key encryption and decryption, in which the same key is used for encryption and decryption (e.g. the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), see NIST (1999, 2001)), public-key encryption and decryption are slow and may not be appropriate for daily operations. Note that a pure symmetric encryption approach is not appropriate either, as the key distribution and management process would be rather complicated. **Our Contributions:** In this paper, we combine the techniques of ACL and encryption to provide a feasible solution for the problem of sharing and access right delegation for confidential documents. Our solution provides two levels of security to the system and divides the responsibility between two parties, the database administrator (DBA) and the security officer (SO). To speed up the encryption and decryption process, we make use of the concept of “session keys”, such that the same document is only encrypted once, with a symmetric key encryption algorithm, using the session key. For each legitimate user who can access the document, we encrypt the corresponding session key using his/her public key. In other words, a user must have the appropriate private key to decrypt the session key and must be granted access to the document in the ACL before he/she can actually access the document.
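The session-key scheme just outlined can be sketched as follows. This is a toy illustration of ours: the SHA-256-based XOR keystream stands in for a real symmetric cipher such as AES, and the key wrapping stands in for public-key encryption of the session key (in this toy, a user’s “public” and “private” keys are the same bytes); none of it is cryptographically secure.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a symmetric cipher: XOR with a SHA-256 keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def store_document(plaintext: bytes, readers: dict) -> dict:
    """Encrypt the document ONCE under a fresh session key, then wrap the
    session key separately for each legitimate reader's key."""
    session_key = secrets.token_bytes(32)
    return {
        "ciphertext": xor_stream(session_key, plaintext),
        "wrapped_keys": {user: xor_stream(key, session_key)
                         for user, key in readers.items()},
    }

def read_document(record: dict, user: str, priv: bytes) -> bytes:
    """Unwrap the user's copy of the session key, then decrypt the document."""
    session_key = xor_stream(priv, record["wrapped_keys"][user])
    return xor_stream(session_key, record["ciphertext"])

# The scenario above: D is stored once, readable by Bob, Mary, and John.
keys = {u: secrets.token_bytes(32) for u in ("Bob", "Mary", "John")}
record = store_document(b"confidential bid document", keys)
```

The point of the structure is that only one ciphertext of the document is stored, regardless of how many readers there are; adding a reader means wrapping one 32-byte session key, not re-encrypting the whole document.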
In this way, separation of duties is achieved; namely, the DBA can only manipulate the ACL, and is not allowed to touch any of the session keys, while the SO is responsible for generating the session keys, and is not allowed to touch the ACL. So, unless the two parties collude, neither of them can break the system and gain access to the plain version of the documents. The rest of the paper is organized as follows. Section 2 highlights the major requirements for the access right delegation and sharing of confidential documents. These requirements are based on the actual requirements of a local company. Our proposed solution is given in Section 3. We show in Section 4 how our basic scheme can be modified to take advantage of the hierarchical structure of a company, to make it more scalable for large-sized companies. Section 5 concludes and discusses some possible future work. **Remarks:** There is no existing solution in the market that can solve the problem. Existing database and software packages only provide encryption functions and allow users to store an encrypted version of a document in the database. However, how the documents can be shared and how access right delegation can be performed to satisfy our requirements (see Section 2) are not addressed. On the other hand, there are other works on delegation (e.g. Gasser and McDermott 1990, Mambo et al. 1996a, Mambo et al. 1996b, Varadharajan et al. 1991, Ruan and Varadharajan 2004). However, it is not trivial how these approaches can be applied in our case, as most of them focus on the delegation of a signing right only. In fact, most of these schemes are complicated, and their practicality is still an open question. So, it is desirable to have a more practical and easy-to-implement solution.

2. REQUIREMENTS

In this section, we list the assumptions and the requirements related to the sharing and access right delegation of a set of confidential documents.
The list is extracted from the requirement specifications of a local company. Before giving the details of the requirements, we first distinguish two concepts in our application: delegation and access right granting. In the absence of encryption requirements, delegation can simply be achieved via access right granting. In the case of encrypted document delegation, as in our application, a delegatee needs both the proper access right and the decryption key in order to access the delegated document. There are three major assumptions for this system. Firstly, each user and each document in the system is assigned a security level. In general, if the security level of a user is lower than that of a document, the user is not allowed to access the document unless the user has been delegated to do so (see the requirements below for details). For simplicity, in this paper, we assume only two security levels: confidential and unclassified, where unclassified documents can be accessed by every user. In real applications, there can be more levels. Secondly, each user has generated a public key and a private key under a PKI. The public key is stored in a certificate signed by the Certificate Authority (CA) and is accessible by everyone through a database. The private key is supposed to be stored securely by the user; for example, it can be stored in a smart card or another hardware token. Thirdly, there are three types of access rights for documents: read, update, and delete. In the real application, there are two more types, downgrade and upgrade, which modify the security levels of documents. The owner (creator) of a document is automatically assigned all three types of access rights to the document. Next, we present the list of requirements, focusing mainly on those related to confidential documents.

Requirements

- Confidential documents must be stored and transmitted in encrypted form.
- The same (confidential) document can be accessible by multiple users*.
- The system should support delegation of access rights at the global level and the document level. To make sure that delegation is done in a restricted manner, a list of pre-defined delegatee(s) is set for each user (the delegator). Delegation can only be done to pre-defined delegatee(s), and any change to the list of pre-defined delegatee(s) must go through a dedicated procedure. In fact, in our application, this delegation requirement can be used to support the "acting" operation which is common in major companies.
- Global level delegation: The delegator can delegate all or selected access rights (read, update, and delete) to his/her delegatee(s). The delegation applies to all documents accessible by the delegator, including documents created in the future. However, the delegatee can only access documents with a security level not greater than that of the delegatee. This requirement fits a real-case scenario in which a manager usually delegates his/her secretary to handle most of the documents.
- Document level delegation: The delegatee can be granted access rights on a per-document basis, even if the document has a higher security level than that of the delegatee.
- Delegation is transferable. For example, if Bob delegates his access rights to John, then John can further delegate his access rights to Mary. In other words, suppose a document $D$ is accessible only by Bob and by neither John nor Mary; after the delegation, both John and Mary can access $D$.
- Delegation is revocable. The revocation is done through the whole delegation tree. In other words, if Bob has delegated his right to John and John has further delegated this right to Mary, then when Bob revokes the delegation to John, the delegation to Mary should be revoked as well.

* Note that the issue of concurrent access, e.g.
the read/write conflict, is out of the scope of this paper and can be handled by standard techniques and the underlying database management system. Our main focus is to derive a scheme so that the same encrypted document is accessible by multiple users without creating different copies of the encrypted document.

- The system should allow a user A to assign access rights of a document to multiple users, provided that user A has the corresponding access rights (not obtained by delegation) and the other users have a security level higher than or equal to that of the document.

Note that not all requirements are discussed in the paper; we only highlight the critical ones that are related to the design of our solution. For example, the requirement that multiple versions of documents should be supported is omitted from our discussion. In fact, the scheme presented in the paper can be modified to satisfy these additional requirements, but the details are out of the scope of this paper.

3. OUR PROPOSED SOLUTION

In this section, we describe our proposed solution. We first highlight its key features and then provide details on the database design and the procedures.

**Session key encryption for documents**: To avoid encrypting the same document more than once, we make use of session keys. For each document, we generate a session key and encrypt the document with it. For each user who is allowed to access the document, we encrypt the session key with the public key of that user. In this way, only a user who holds the corresponding private key can extract the session key and read the document.

**Access control table and delegation table**: We use two tables in the database to store the access control list and the delegation information. These two tables control access to the documents in the database. Together with the private key of the user, this gives two levels of security.
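The session-key mechanism just described can be sketched end-to-end in a few lines. The sketch below is illustrative only: the XOR keystream stands in for the symmetric cipher (e.g. triple DES/AES) and the per-user key wrapping stands in for public-key encryption; all keys and names are invented.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (SHA-256 in counter mode). Illustration only --
    the paper assumes real algorithms such as triple DES for this step."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# The document is encrypted ONCE, under a fresh session key ...
document = b"confidential contract text"
session_key = secrets.token_bytes(32)
ciphertext = keystream_xor(session_key, document)

# ... and only the small session key is wrapped per legitimate user
# (a stand-in for encrypting it under each user's public key).
user_keys = {"Bob": secrets.token_bytes(32), "Mary": secrets.token_bytes(32)}
wrapped = {u: keystream_xor(k, session_key) for u, k in user_keys.items()}

# A user first unwraps the session key with his/her own key,
# then decrypts the single stored ciphertext.
recovered_key = keystream_xor(user_keys["Bob"], wrapped["Bob"])
assert keystream_xor(recovered_key, ciphertext) == document
```

Note that only one ciphertext of the document is ever stored, however many users are granted access; adding a user adds only one small wrapped key.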
**Separation of duties**: In our solution, there are two independent entities involved, the database administrator (DBA) and the security officer (SO). The DBA has full control of and access to the access control table and the delegation table; however, the DBA is not given the session key to any of the documents. In other words, the DBA alone cannot read the contents of the documents. On the other hand, the SO keeps the encrypted session keys for all documents, but the SO does not have an access entry in either the access control table or the delegation table. In other words, neither the DBA nor the SO alone is able to read the contents of the documents.

Figure 1: System Architecture

3.1 DETAILS

Figure 1 shows the system architecture of the design. Users access the system using a web browser. Since we have to make sure that only the encrypted versions of the documents are transmitted over the network, encryption and decryption are done on the client side, which carries out the necessary cryptographic operations through a Java Crypto Engine. On the other hand, the application servers work with database servers that keep the access control table and the delegation table for the following functions: authentication, authorization, and document handling. Note that there is an administrator console for the security officer to perform SO-specific functions, which are not discussed in detail in this paper. The system keeps two tables for controlling access to the documents in the database: the access control table and the delegation table. The structures of these two tables are shown in Figure 2. For the access control table, if Bob has an access right to the document \( D \), then there is a record for Bob and \( D \) in the table. For the delegation table, if Peter is a pre-defined delegatee of Bob, then there is a record for Peter and Bob in the table.
For global level delegation, we can set the Doc_ID to some pre-defined value, say \(-1\), and the delegated access rights will be stored in the same record. When the system starts, records for pre-defined delegatees are created in the delegation table with all access rights set to “No”. For document level delegation, a separate record with the related ID of the document will be created in the table after the delegation has been performed. <table> <thead> <tr> <th>Access Control Table</th> <th>Data Type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>A_ID</td> <td>Number</td> <td>Primary key</td> </tr> <tr> <td>Doc_ID</td> <td>Number</td> <td>Document unique ID</td> </tr> <tr> <td>User_ID</td> <td>Number</td> <td>User unique ID</td> </tr> <tr> <td>read</td> <td>Yes/No</td> <td>Grant read access?</td> </tr> <tr> <td>upd</td> <td>Yes/No</td> <td>Grant update access?</td> </tr> <tr> <td>del</td> <td>Yes/No</td> <td>Grant delete access?</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Delegation Table</th> <th>Data Type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>D_ID</td> <td>Number</td> <td>Primary key</td> </tr> <tr> <td>Valid_from</td> <td>Date</td> <td>Starting date of delegation</td> </tr> <tr> <td>Valid_to</td> <td>Date</td> <td>Ending date of delegation</td> </tr> <tr> <td>Delegator_id</td> <td>Number</td> <td>Delegator ID</td> </tr> <tr> <td>Delegatee_id</td> <td>Number</td> <td>Delegatee ID</td> </tr> <tr> <td>read</td> <td>Yes/No</td> <td>Grant read access?</td> </tr> <tr> <td>upd</td> <td>Yes/No</td> <td>Grant update access?</td> </tr> <tr> <td>del</td> <td>Yes/No</td> <td>Grant delete access?</td> </tr> <tr> <td>Doc_ID</td> <td>Number</td> <td>Document ID (for document level delegation)</td> </tr> </tbody> </table> **User Bob creates a document \( D \):** On the client side, a session key is generated. The document is encrypted by the session key using a symmetric key encryption algorithm (e.g. triple DES). 
Access rights are granted to the appropriate users, and the corresponding entries are created in the access control table. For each of these users (including the SO and all the pre-defined delegatees of these users), we encrypt the session key with his/her public key. These encrypted session keys are also stored in the database. Note that the SO holds encrypted versions of all session keys, and that no session key ever exists as plain text in the system.

**User Bob grants an access right of document \( D \) to the user Peter:** Recall that this operation can only be done if Bob has the corresponding access right to document \( D \) and Peter has a security level greater than or equal to that of \( D \). An entry for Peter and \( D \) with the appropriate access right is created in the access control table. Bob's copy of the session key for \( D \) is decrypted with his private key, and the session key is then encrypted with the public keys of Peter and all of Peter's pre-defined delegatees.

**User Bob delegates an access right of document \( D \) to the user Peter (document level delegation):** A new entry for Peter and \( D \) with the appropriate access right is created in the delegation table. The session key for \( D \) is encrypted with Peter's public key and the public keys of all of Peter's pre-defined delegatees.

**User Bob delegates an access right globally to the user Peter (global level delegation):** Note that in this case, Peter must be a pre-defined delegatee of Bob. The entry for Bob and Peter in the delegation table is updated, and the session keys of all documents accessible by Bob are encrypted with Peter's public key. Note that we can trade off security against efficiency by creating the encrypted session keys of all documents accessible by Bob for Peter when the system starts.
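The bookkeeping behind the grant and delegation operations above can be sketched with the two tables of Figure 2 in SQLite. Column names follow the paper; the IDs, the `can_read` helper, and its two-step lookup (access control table first, then delegation table) are illustrative.

```python
import sqlite3

# In-memory sketch of the two control tables of Figure 2.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE access_control (
    A_ID    INTEGER PRIMARY KEY,
    Doc_ID  INTEGER,
    User_ID INTEGER,
    read TEXT, upd TEXT, del TEXT);
CREATE TABLE delegation (
    D_ID INTEGER PRIMARY KEY,
    Valid_from TEXT, Valid_to TEXT,
    Delegator_id INTEGER, Delegatee_id INTEGER,
    read TEXT, upd TEXT, del TEXT,
    Doc_ID INTEGER);   -- -1 encodes a global-level delegation
""")

# Bob (user 1) creates document 10: he gets all three rights in the ACL.
db.execute("INSERT INTO access_control VALUES (1, 10, 1, 'Yes', 'Yes', 'Yes')")
# Bob delegates read on document 10 to Peter (user 2): document-level entry.
db.execute("INSERT INTO delegation VALUES "
           "(1, '2004-01-01', '2004-12-31', 1, 2, 'Yes', 'No', 'No', 10)")

def can_read(user_id, doc_id):
    """Two-step check: the access control table first, then delegation."""
    if db.execute("SELECT 1 FROM access_control "
                  "WHERE User_ID=? AND Doc_ID=? AND read='Yes'",
                  (user_id, doc_id)).fetchone():
        return True
    return db.execute("SELECT 1 FROM delegation "
                      "WHERE Delegatee_id=? AND read='Yes' AND Doc_ID IN (?, -1)",
                      (user_id, doc_id)).fetchone() is not None

print(can_read(1, 10), can_read(2, 10), can_read(3, 10))  # True True False
```

Passing the access check only yields the encrypted document; the second security level (holding a private key that unwraps the session key) is enforced outside the database.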
**To check if the user Bob can access document $D$:** We first consult the access control table to see if there is an entry for Bob and $D$ with the appropriate access right. Otherwise, we check the delegation table to see if Bob has been delegated an appropriate access right to $D$. If so, the encrypted version of $D$ can be sent to Bob. Recall that Bob can only read the content of $D$ if he can provide his private key to decrypt the document; the decryption is done on the client side so that the document is transmitted only in its encrypted form.

**User Bob revokes the delegated access right from the user Peter:** We update the corresponding record in the delegation table. Then, for security purposes, we delete all relevant encrypted session keys. (For efficiency purposes, we could instead choose to keep these encrypted session keys.) Due to space limitations, not all functions are listed and discussed in detail. For example, there are functions designed for the security officer (SO), such as updating the list of pre-defined delegatees of a user, and other functions, such as deleting a document, that we do not discuss in the paper. However, the functions discussed above show how our solution works. Note also that for messages sent from the client to the server, we can make use of digital signatures or hash values to ensure integrity.

3.2 IMPLEMENTATION

We have implemented our solution. Figure 3 shows the software architecture of our implementation, a standard 3-tier architecture. The web server is the Apache Group HTTP Server with SSL support from OpenSSL, the application server is the Jakarta Tomcat Server, and the database server is the Oracle 8i Enterprise Database. The system is programmed on the Java 2 Platform, Enterprise Edition (J2EE). For the communication between clients and servers, Java Applets are used on the client side, and JavaServer Pages (JSP) are used to present the Applets to clients.
Java Servlets and JavaBeans are used on the server side for business logic and interaction with the Oracle database. The Java Cryptography Extension (JCE) is used for cryptographic operations; Bouncy Castle, one of the service providers suggested by Sun, is used as a service provider with the standard JCE. Figures 4 and 5 show some screen shots of the prototype. For testing, we created a medium-sized company with about 50 users in our prototype. Our preliminary study shows that the performance of the system is good (each of the operations can be done in about 15 seconds). From the testing, we realized that if the delegation chain is long, the performance of the system degrades, as the system needs to follow the chain and perform updates for each user on the chain. Fortunately, long delegation chains are not likely to occur frequently, so in general the performance of the system is reasonable.

4. MODELING COMPANY HIERARCHY USING VIRTUAL USER CONCEPT

The solution presented in Section 3 has a scalability issue for companies with many users. If the company is big, say with 1000 users, and there is a document that should be accessible by a majority of the users, then the corresponding session key has to be encrypted hundreds of times, and the number of encrypted session keys to be stored can be huge. In this section, we extend our basic scheme to take advantage of the hierarchical structure of a company to make it more scalable. We make use of the concept of "virtual user".
Figure 4: Delegation (Global-level)

<table> <thead> <tr> <th>Delegatee</th> <th>Read</th> <th>Update</th> <th>Delete</th> <th>Reset</th> <th>Grant Delegation</th> </tr> </thead> <tbody> <tr> <td>clyip2</td> <td>✔</td> <td></td> <td></td> <td></td> <td>Grant Delegation</td> </tr> <tr> <td>lklee</td> <td>✔</td> <td></td> <td></td> <td></td> <td>Grant Delegation</td> </tr> </tbody> </table>

Figure 5: Delegation (Document-level)

<table> <thead> <tr> <th>Delegatee</th> <th>Read</th> <th>Update</th> <th>Delete</th> <th>Reset</th> <th>Grant Delegation</th> </tr> </thead> <tbody> <tr> <td>lklee</td> <td></td> <td>✔</td> <td></td> <td></td> <td>Grant Delegation</td> </tr> <tr> <td>clyip2</td> <td>✔</td> <td></td> <td></td> <td></td> <td>Grant Delegation</td> </tr> </tbody> </table>

Figure 6 shows an example of a company hierarchy. We assume that the users in a company can be grouped into teams, that teams can be grouped under different departments, and that all departments are under the same root, "Company". We call these logical units (teams, departments, company) "groups" and use the virtual user concept to represent each group. In other words, each group is regarded as a virtual user and is assigned a public and private key pair. Documents can then be made accessible by a group (that is, a virtual user). Conceptually, all entities under a group should be able to access the documents accessible by that group, subject to the security levels of the individual entity and of the document to be accessed. The links between components in the company's hierarchical structure are modelled as parent-child relations between virtual users, and between virtual users and real users (i.e. staff of the company). These parent-child relations can easily be captured in a database table. With groups modelled as virtual users, granting access rights of a document to a virtual user (e.g.
a department) is essentially the same as granting access rights of a document to a real user (i.e. a staff member of the company). Since every virtual user has its own public/private key pair, storing and managing these key pairs is an issue. Key management for the public keys of virtual users is the same as for real users (i.e. the public key certificate of each virtual user is stored in the database so that everyone in the company can access it). For the private keys of virtual users, however, a special management approach is needed. Consider again the example shown in Figure 6. If a user creates a new document $Y$ and grants all access rights of document $Y$ to Department $B$, then staff $S$ should also have all access rights of document $Y$, since staff $S$ belongs to Department $B$. However, the database will not hold an encrypted session key of document $Y$ corresponding to staff $S$, only one corresponding to Department $B$. In other words, staff $S$ needs the private key of Department $B$ in order to decrypt document $Y$. In view of this, we want a scheme in which the private keys of all virtual users are stored in encrypted form in the database. The encrypted private key of a virtual user can be accessed and decrypted by a real user only if he/she has a "belongs-to" relationship with that virtual user in the hierarchical model. In cases in which policy prohibits the storage of encryption keys on hard disk, a hardware encryption module can be used.

Figure 6: An example company hierarchy. Virtual users: Company X, Dept. A, Dept. B and Team 1. Real users: Staff $R$ and Staff $S$.

**Parent-child relations:**

<table> <thead> <tr> <th>Parent</th> <th>Child</th> </tr> </thead> <tbody> <tr> <td>Company X</td> <td>Dept. A</td> </tr> <tr> <td>Company X</td> <td>Dept. B</td> </tr> <tr> <td>Dept. A</td> <td>Team 1</td> </tr> <tr> <td>Dept. B</td> <td>Staff S</td> </tr> <tr> <td>Team 1</td> <td>Staff R</td> </tr> <tr> <td>Team 1</td> <td>Staff S</td> </tr> </tbody> </table>

With the above key management scheme, if a real user has a "belongs-to" relationship with a virtual user (e.g. user $A$ belongs to Team 1, and Team 1 belongs to Department $B$), then the real user $A$ will be able to access and decrypt the encrypted private keys of the virtual users Team 1 and Department $B$; thus the real user $A$ can decrypt the documents accessible by both Team 1 and Department $B$, provided $A$ has the right security level. Our key management scheme works as follows. By default, the system always has a virtual user representing the whole company. When the Security Officer adds a group (i.e. a virtual user) to the system, the group is added as a child of one or more of the other groups in the system. For example, at the very beginning, there is only the default virtual user (i.e. the company) in the system; when a new virtual user, say Department $A$, is added, it automatically becomes a child of the company. After that, suppose another new virtual user, Team 1, is added; Team 1 may become a child of the company, of Department $A$, or of both. Therefore, every virtual user (except the default one) is guaranteed to have at least one parent. Whenever a new virtual user is added, the public key certificate of the new virtual user is stored in the database, whilst the private key of the new virtual user is encrypted with the public key of the Security Officer and stored in the database. Besides that, the encrypted private key of the parent of the new virtual user (i.e. the copy encrypted with the Security Officer's public key when the parent was added to the system) is retrieved from the database.
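Resolving which private keys a real user can unlock then amounts to a transitive upward walk over the parent-child table. A minimal sketch (relations taken from the Figure 6 example; the function name is invented):

```python
# Parent-child relations from Figure 6, as (parent, child) pairs.
RELATIONS = [
    ("Company X", "Dept. A"), ("Company X", "Dept. B"),
    ("Dept. A", "Team 1"), ("Dept. B", "Staff S"),
    ("Team 1", "Staff R"), ("Team 1", "Staff S"),
]

def reachable_keys(user):
    """All virtual users whose (encrypted) private keys `user` can unlock,
    by following the chain of parent links upward: the user's key opens the
    parents' keys, those open the grandparents' keys, and so on."""
    parents = {}
    for p, c in RELATIONS:
        parents.setdefault(c, set()).add(p)
    found, frontier = set(), {user}
    while frontier:
        nxt = set()
        for node in frontier:
            for p in parents.get(node, ()):
                if p not in found:
                    found.add(p)
                    nxt.add(p)
        frontier = nxt
    return found

print(sorted(reachable_keys("Staff S")))
```

For Staff $S$ this yields Dept. B, Team 1, Dept. A and Company X, so Staff $S$ can decrypt documents made accessible to any of these groups (still subject to security levels); Staff $R$, who is only under Team 1, cannot unlock Dept. B's key.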
After that, the encrypted private key of the parent is decrypted with the Security Officer's private key, and the private key of the parent is then encrypted with the public key of the new virtual user and stored in the database. As a result, every virtual user (except the default one) in the system has the encrypted private keys of its parents. Adding a real user (i.e. a staff member) to the system is similar to adding a virtual user, except that the private key of the real user is not encrypted with the Security Officer's public key and stored in the database.

5. CONCLUSIONS

In this paper, we discuss a practical security-related problem in document management systems. In particular, we consider how sharing and access right delegation of encrypted documents can be done in such systems. The problem we studied is based on the requirements from a client of a local company. There is no existing solution in the market that solves the problem. We propose a feasible and practical solution that combines the techniques of access control lists (ACL) and session-key symmetric encryption. We have implemented our solution as a prototype; preliminary testing shows that the performance of our system is quite good for a medium-sized company with about 50 users. To make the solution more scalable, we also show how to take advantage of the hierarchical structure of a company by introducing the concept of virtual users. For future work, there are a number of possible directions. Firstly, our solution is a first step towards solving the problem; one obvious direction is to design a better and more secure scheme. Secondly, there are no appropriate practical indexing schemes available for indexing encrypted documents that allow efficient keyword search. Some of the related works include Boneh et al. 2004, Golle et al.
2004, and Song et al. 2000. However, it is not obvious how their proposed schemes can be applied in a document management system, or whether they are practical and scalable.

REFERENCES

COPYRIGHT

[S.M. Yiu, Russell S.W. Yiu, L.K. Lee, Eric K.Y. Li, Michael C.L. Yip] © 2004. The authors assign to ACIS and educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to ACIS to publish this document in full in the Conference Papers and Proceedings. Those documents may be published on the World Wide Web, CD-ROM, in printed form, and on mirror sites on the World Wide Web. Any other usage is prohibited without the express permission of the authors.
Software Components — difficulties with acquisition

Benneth CHRISTIANSSON
Karlstad University, Division for Information Technology
Benneth.Christiansson@kau.se

Abstract. Many industry observers have characterized the problems associated with software development as a crisis. We have all seen and read about spectacular software failures that have occurred over the past decade. But we believe that it is more accurate to describe the problem as a chronic affliction rather than a crisis. A crisis is a turning point where things can't get any worse, only better. A chronic affliction indicates rather that the problems are here to stay and may be something we have to learn to live with and accept. This affliction concerns not only software failure but also software development and the growing amount of already built software that demands maintenance. One possible solution to this chronic affliction may lie in the use of software components, i.e. developing software systems by joining a number of essentially standardized software components. These components can be developed in house or by a third party, like COTS components (Components Off The Shelf). In this paper we focus on the process of acquiring software components to be used as parts of software systems. This is a nontrivial and very essential issue regarding software component usage. The paper starts with a definition of the term "software component"; this establishes a foundation for the paper and also indicates some of the problems with acquisition. The following chapter describes a process for developing systems consisting of software components; this process identifies an acquisition phase, which is further elaborated in the next section of the paper. Some problems and issues regarding acquisition are raised and discussed. The paper ends with a summary and a discussion of future research in this area.

1.
Introduction

Software systems constitute an essential part of most companies' business backbone or infrastructure, and they become increasingly complex. In today's industry, enterprises have to continuously adjust and improve their business practices to maintain a competitive edge [1][2]. Such changes to business practices often raise requirements for changes in their software systems and the need for new systems. It is in this context that being able to assemble or adapt software systems from reusable components proves useful. Experience has shown that, even with advanced technological support, it is in general not an easy task to assemble software components into systems [3][4]. A major issue of concern is mismatches between the components in the context of an assembled system, especially when the mismatches are not easily identifiable. The hard-to-identify mismatches are largely due to the fact that the functionality of the components is not clearly described or understood [5]. Most commercially available software components are delivered in binary form; we have to rely on a component's interface description to understand what its functionality is. Even with a component's development documentation available, people would certainly prefer, or can only afford, to explore its interface description rather than digesting its development details. Furthermore, interface descriptions in natural language do not provide the level of precision required for component understanding, and have therefore resulted in the above-mentioned mismatches. Although the motivation for widespread use of COTS products is cost savings, there are many more unknowns that must be addressed from a business perspective, for instance the unforeseen expense needed to keep appropriate "wrappings" on COTS products throughout the entire maintenance phase [6]. How should a program manager react if a commercial approach results in a higher life-cycle cost?
What business case should be made in that circumstance? [7]. In this paper we have chosen to focus on the lack of ways to describe and identify component behavior, and more specifically only regarding the acquisition of components. This is a nontrivial and very essential issue regarding software component usage.

2. The software component

The term software component isn't easy to define; it does not have a clear-cut definition in the software development community, and its meaning fluctuates. This paper does not focus on the issue of defining the term software component, even though such a definition is needed and hopefully soon to be agreed upon. Instead we adopt the definition made by Christiansson [8], which is based on a survey and synthesis of several more or less well-established definitions. We start our discussion concerning the meaning of the term with two quotations:

"A component is a unit of software of precise purpose sold to the application developer community…with the primary benefit of eliminating a majority of the programming the buyer must perform to develop one or more function points…" [9]

"A component is a reusable piece of software in binary form that can be plugged into other components from other vendors with relatively little effort" [10]

These definitions, we believe, illustrate several of the more basic criteria, such as a software component's binary shape and its ability to connect to other software components without reconstruction. However, these definitions do not cover issues such as identifying, maintaining and refining software components. Therefore we suggest a more mature definition of the concept of a software component. In this paper we will use a definition made by Christiansson [8]; this quotation is translated from Swedish.
“A software component: is independent and reusable; provides a defined functionality using a specific interface; can affect/be affected by other software components; should have a specification (in which the software component is described on a high level of abstraction); can have multiple implementations, meaning that the same component can be implemented in several programming languages; and can have several executable (binary) forms, i.e. the same component can be executed in different operating systems.”

The fact that a component is independent and reusable means that a component can be used without other components present; the services provided by the component should be accessible without any external help except for the software glue and the necessary run-time environment. A component can affect and be affected by other software components. This means that two components can “work together” and, as a whole, create a greater service than when used separately. Figure 1 illustrates a software component with a context.

Figure 1. A software component with a context

The need for a documented specification for a software component is obvious if one considers the process of acquiring a component: how can one find a software component if one does not have something to look for? This is maybe the single factor that can best decrease the gap between the formal and informal parts when developing a software system. If there are documented specifications, these can be described in such a way that they are useful when dealing with the development of the informal part of a software system, and the component as such is then directly applicable in the formal part [8]. This notion of documented specifications can be elaborated to incorporate the need for standardization when specifying software components.
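Two of the characteristics above, a defined functionality exposed through a specific interface and multiple possible implementations of the same component, can be illustrated with a small sketch. This example is ours, not part of the cited definition; all names (`SpellChecker` and so on) are invented for illustration.

```python
from abc import ABC, abstractmethod

# The "specification" side: an abstract interface describing what the
# component does, independent of any concrete implementation.
class SpellChecker(ABC):
    @abstractmethod
    def check(self, word: str) -> bool:
        """Return True if the word is spelled correctly."""

# Two interchangeable implementations of the same component concept.
class SetBasedSpellChecker(SpellChecker):
    def __init__(self, dictionary):
        self._words = set(w.lower() for w in dictionary)

    def check(self, word):
        return word.lower() in self._words

class AlwaysOkSpellChecker(SpellChecker):
    # A trivial stub implementation, e.g. for testing glue code.
    def check(self, word):
        return True

# Client code depends only on the interface, so any implementation
# can be "plugged in" without reconstruction.
def count_misspellings(checker: SpellChecker, text: str) -> int:
    return sum(1 for w in text.split() if not checker.check(w))

checker = SetBasedSpellChecker(["hello", "world"])
print(count_misspellings(checker, "hello wrld"))  # → 1
```

The client function never names a concrete class, which is what allows a component to have several implementations behind one interface.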
Vaughn [11] implies that:

"A standard approach to building and using components must be set and universally practiced if the software engineering community is to reap the benefits…of reusable software components…"

With this quotation we want to stress that standardization of software component specifications will be one of the next issues in solving the fundamental problems of software development [12].

3. The composition of component-based software systems

A component-based software system is a more complex phenomenon than a traditional monolith-based software system. A component-based software system can be regarded as consisting of sections on three different “levels” [7]. The innermost level is the component architecture, i.e. the components themselves and the necessary glue code to enable their collaboration. The intermediate level is the application architecture, i.e. the grouping of cooperating components into software applications. The outer level is the software system architecture, i.e. the software systems which the different applications support or of which they consist. These “levels” are illustrated by Figure 2.

Figure 2. The composition of a component-based software system

Each of the levels should be represented by a separate architecture. When developing software systems using software components, one has to be aware of these different architectures. A specific software component has to “fit” the architecture chosen for the system [13]. This is a condition both for acquiring components and for developing them from scratch. The different architectures define how a specific software component has to be designed and constructed. Component-based development will have to take all of these architectural issues into account, and the development process will of course be affected by them.
Component architecture is very important to take into account, as a component architecture may apply to a single application or to a wider context. This wider context can be a set of applications serving a particular business process [14]. The fact that components can, and should, be viewed from different perspectives is important for the quality of the component and, in the long run, for the system which is composed of the components. A software system can be composed of components from different producers, and different consumers can use the components in different combinations to fulfil the demands of their specific software systems. This also implies that the same component may have different lifecycles depending on from which perspective you look at the component.

4. Component-based development: the phases

This section is based on the development process described in [7]; you may find a detailed description and motivation of this process there.

4.1 Requirements analysis and definition

The analysis phase involves identifying and making understandable the demands and requirements that should be met by the information system. In this phase the system's boundaries should be defined and clearly specified. Using a component-based approach, analysis will also include specifying components. The components will collaborate to provide the functionality defined as the system. To be able to do this there is a need to define the domain or system architecture that will enable component collaboration. Analysis is thus a task with three dimensions in CBD: the first dimension is capturing the system requirements and defining the system boundaries; the second is defining the component architecture to enable component collaboration; and the third is defining component requirements to enable acquisition or development of the required components.
4.2 Component acquisition

In the following quotation Vigder and Dean [6] define the acquisition phase, where the acronym COTS stands for Components Off The Shelf:

“A survey of COTS components available in the marketplace must be performed, and criteria established for selecting the appropriate components. The criteria range across a broad spectrum including run-time characteristics, documentation, vendor support, etc.”

This quotation shows that it is advisable to survey existing components on the market. To carry through this search and make identification possible, the component needs to be specified, preferably in a standardized manner. If the search for a specific component results in a component that only partly fulfils the specification on which the acquisition is based, there are two alternatives: either the component is adapted to the specification, or the specification is adapted to the component. The latter alternative may be considered radical, but there are advocates who point out the advantages of this approach [5]. The acquisition phase should start by examining the contents of the internal component repository. If the component cannot be found there, an external search is carried out among component manufacturers. If the search does not produce a satisfactory result, the component requirements can be used to initiate component development, either by refining existing components or by completely new development; see Figure 3. In this figure the triangles represent activities, the rhombi indicate choices and the arrows show the flow of work.

Figure 3. A flow chart showing the acquisition process [8].

If the acquisition phase has resulted in the development of a component, whether from new production or by refinement, it needs to be designed. This design should harmonize with traditional software design.
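As a rough sketch, the acquisition flow of Figure 3 (internal repository first, then the external marketplace, then development by refinement or from scratch) can be expressed as a simple decision procedure. The `Repository` class and all names below are invented for illustration and are not part of the cited process description.

```python
class Repository:
    """A toy component repository keyed by a specification string."""
    def __init__(self, components):
        self._components = components  # dict: specification -> component name

    def find(self, spec):
        return self._components.get(spec)

def refine(component, spec):
    # Stand-in for refining an existing, partially matching component.
    return f"{component} refined for {spec}"

def develop_from_scratch(spec):
    # Stand-in for completely new development.
    return f"new component for {spec}"

def acquire_component(spec, internal_repo, marketplace, near_match=None):
    # 1. Search the internal component repository.
    found = internal_repo.find(spec)
    if found:
        return found
    # 2. External search among component manufacturers.
    found = marketplace.find(spec)
    if found:
        return found
    # 3. Initiate development: refine a partial match, or build from scratch.
    if near_match:
        return refine(near_match, spec)
    return develop_from_scratch(spec)

internal = Repository({"logging": "InHouseLogger"})
market = Repository({"spell-check": "AcmeSpellChecker"})
print(acquire_component("logging", internal, market))      # → InHouseLogger
print(acquire_component("spell-check", internal, market))  # → AcmeSpellChecker
print(acquire_component("payment", internal, market))      # → new component for payment
```

The ordering of the three steps encodes the preference in the text: reuse what is in house, then buy, and only develop as a last resort.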
However, compared with traditional software design, some differences should be identified:

- In the design phase there should be a clearly expressed specification of the component [8];
- The design of components may be executed independently of, and in parallel with, the configuration of other components;
- The design should be accomplished so that reuse can be achieved;
- When components are designed, they must be adapted to a given component communication standard; and
- To be able to reuse a component, it has to be designed in a more general way than a solution tailored for a unique situation. Since the purpose of a component is to be reused, it requires adaptability, and this added adaptability will increase the size and complexity of the component.

However, to establish whether a component is unique or not, the design phase should initially be replaced by an acquisition phase for the components that were identified in the analysis phase. This is where the importance of using a standardized mode of expression for component specification is evident. If a standard is adopted, components can be selected in the acquisition phase that partly or completely meet the requested specification.

4.3 Implementation

In the implementation phase, the design is transformed into software. In a traditional sense, this is done by building the required software or purchasing it from a vendor. With component-based development, the implementation instead proceeds from design to assembly: rather than creating software with traditional software development tools, finished software components are assembled to make up the system that was visualized in the design phase. This, according to some proponents of component-based development, will lead to a greatly reduced workload; the notion of pluggable components is often used here. A pluggable component is a component that can connect to other components using a mutual communication standard.
Programmers no longer need to design complete systems from scratch, but can use existing components for the assembly. It should be mentioned, however, that some components will still need to be designed: those that are business critical or unique to a specific situation. Some components will require refinement to fit into a given solution.

4.4 Implementation testing

Another important distinction, compared to traditional software development, is the need for a comprehensive initial testing phase before using a component. When a component is acquired, it should be tested before it is integrated. A new component should be tested to check and verify that it performs the functions it should according to its specification. Depending on the degree of documentation that comes with the component, it may even be necessary to perform tests in order to understand what the component will do and how it should be integrated. Tests may also be conducted to achieve a better understanding and knowledge of a given component. When a component has been acquired, it should be checked for performance fulfillment, to make sure it performs the task it was designed to do, and to ascertain that it is adequately reliable. Insufficient reliability could mean that all, or part of, the software system stops functioning or produces improper results.

4.5 Integration

Integration means that the implemented and acquired components are put together into the software system defined in the analysis phase. According to advocates of component-based systems development, no great resources are needed, as all it takes is a plug-in process, obviously on the condition that you have the necessary basic system architecture and use the correct communication standard for component collaboration. An important aspect of the integration phase is that it is not possible to discover all the effects of using a component until the integration is done.
This means that the integration phase too should to a large extent involve testing, although here the tests will be based on the effects that the components have on each other.

4.6 Integration and system testing

The integration and system testing phase consists of two major activities. The first is the already mentioned integration tests to evaluate and examine component interoperability. The second subjects the integrated software system to a series of tests, to identify and eliminate defects and unwanted side effects in the system, and to verify and check the quality of the system. A test phase does not automatically lead to all defects and shortcomings being eliminated, but it means that a certain level of reliability is achieved. The test phase should be tied to a given component rather than to a given software system. Also, when a component is integrated with other components, the integration should be tested [5]. This will ensure that the acquired component does not contain any defects that will affect the functionality and reliability of other components. In order to perform a test, you have to know not only what to test and how to perform the test, but also what the system is expected to accomplish. In traditional systems development, this implies that the results from the analysis and design phases are used to make up the required test cases. A problem in achieving the necessary test cases is that the analysis results must be translated into concrete tests during the implementation phase. With component-based development this can be made easier, as each component should have a specification describing what it is expected to perform, from which test cases can be designed.

5. Difficulties with acquisition

In this section we focus on the acquisition of already existing components. The acquisition can be internal (in-house components) or external (purchased in a marketplace). Component acquisition is a nontrivial task.
As described in the previous section, we identify a need to structure and preferably standardize the way we describe components in order to enable acquisition. Data, functional, and behavioral models (represented in a variety of different notations) can be created to describe what a particular component or application must accomplish. Written specifications can then be used to describe these components or applications. The result is a complete description of requirements. Ideally, the requirements, in the form of specifications, are analyzed to determine those elements of the model that point to existing reusable components. The problem is extracting information from the requirements in a form that can lead to “specification matching”, i.e. the possibility of identifying existing components from the documented requirements. Components should be defined and stored in three different states: specifications, implementations, and binary executables. Ideally these components are an engineered description of a product from previous applications. The specifications, we suggest, should be stored in the form of reuse suggestions, which should contain directions for retrieving reusable components on the basis of their description and for composing and tailoring them after retrieval. Bellinzona et al. [15] describe one very formal but possible way of doing this. A reusable software component can be described in many ways; one formal way encompasses what Tracz [16] has called the 3C model: concept, content, and context. The concept of a software component is a description of what the component does [17]. The interface to the component is fully described, and the semantics (expressed as pre- and post-conditions) are identified. The concept should communicate the intent of the component. The content of a component describes how the concept is realized.
In essence, the content is information that is hidden from casual users and need be known only to those who intend to modify the component. The question we raise is how this information should be expressed. Today there are three major areas of research regarding component specification: library and information science methods, artificial intelligence methods, and hypertext systems. The vast majority of the work done to date suggests the use of library science methods for component classification. We do not prefer one of these strategies over another; instead we argue that the design of this description is the major difficulty concerning component acquisition. Christiansson [7] describes this as a need for component specifications (see the previous section of this paper). All three of these methods rest on the foundation of rigorous and formal ways of defining component behavior and design. We believe that this is one important aspect of component specification, but not always a very pragmatic and useful way of specifying. We argue for the use of Langefors' [18] formal and informal ways of defining information systems. We need two different specifications, or at least two parts of one specification: one that describes the component in a formal way (this is where the vast majority of research is being done today) and one that describes the component in an informal way. This more informal specification, we believe, can be used for defining demands as well as for acquisition and distribution purposes. For this more informal specification to be useful in the global marketplace, we need to standardize the way we express it. We believe that, in the same way as for instance CORBA and COM supply standards for component construction, we need standards for component specification, concerning both the formal and the informal parts of the specification.
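Specification matching against a repository is often approximated today by keyword overlap. The following sketch (with invented repository contents and an invented threshold) illustrates such a matcher, and also hints at why it is weak: it sees only keywords, not behavior.

```python
# A naive keyword-based specification matcher: each stored component
# is described by a set of keywords, and a query specification is
# matched by keyword overlap. (All example data is invented.)
repository = {
    "AcmeSpellChecker": {"spelling", "dictionary", "text"},
    "FastSorter":       {"sort", "list", "ordering"},
    "PdfRenderer":      {"pdf", "render", "document"},
}

def match_specification(required_keywords, repo, threshold=0.5):
    """Return (component, score) pairs whose keyword overlap with the
    requirement covers at least `threshold` of the required keywords."""
    hits = []
    for name, keywords in repo.items():
        overlap = len(required_keywords & keywords) / len(required_keywords)
        if overlap >= threshold:
            hits.append((name, overlap))
    # Best matches first.
    return sorted(hits, key=lambda h: -h[1])

print(match_specification({"sort", "list"}, repository))
# → [('FastSorter', 1.0)]
```

A component whose behavior matches but whose description uses different vocabulary is simply never found, which is the core of the argument for richer, standardized specifications.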
In today's software industry, automated tools are used to browse a repository in an attempt to match the requirements noted in the current specification with those described for existing reusable components. Characterizations of functions and keywords are used to help find potentially reusable components. If specification matching yields components that fit the needs of the current application, the designer can extract these components from a reuse library (repository) and use them in the design of new systems. If components cannot be found, they may be acquired through a third party or developed in house. To be able to acquire components from a third party, one needs to be able to express requirements in a way that enables a global search through the marketplace for components. To be able to perform this search, the component needs to be described in such a way that a global search is possible. We argue that function characterization and keywords are weak tools for this; we need more mature ways of describing and searching for components. We argue for the need for specific languages and notations, preferably one globally standardized notation (such as, for instance, UML).

6. Summary and future research

A software component has a set of characteristics:

“A software component:
- is independent and reusable;
- provides a defined functionality using a specific interface;
- can affect/be affected by other software components;
- should have a specification (in which the software component is described on a high level of abstraction);
- can have multiple implementations, meaning that the same component can be implemented in several programming languages; and
- can have several executable (binary) forms, i.e. the same component can be executed in different operating systems.”

The composition of a component-based software system can be described as consisting of three different levels or architectures. The innermost level is the component architecture, i.e.
the components themselves and the necessary glue code to enable their collaboration. The intermediate level is the application architecture, i.e. the grouping of cooperating components into software applications. The outer level is the software system architecture, i.e. the software systems which the different applications support or of which they consist. The specific architecture will affect the criteria used for acquisition: one has to acquire components that fit the architecture in which they are intended to be used. Component acquisition is a nontrivial task. As described in the previous sections, we identify a need to structure and preferably standardize the way we describe components in order to enable acquisition. Data, functional, and behavioral models (represented in a variety of different notations) can be created to describe what a particular application must accomplish. Written specifications are then used to describe these models. The design of this description is a major difficulty concerning component acquisition. The methods proposed can be categorized into three major areas: library and information science methods, artificial intelligence methods, and hypertext systems. We argue for the use of Langefors' [18] formal and informal ways of defining information systems. We need two different specifications, or at least two parts of one specification: one that describes the component in a formal way and one that describes the component in an informal way. This more informal specification, we believe, can be used for defining demands as well as for acquisition and distribution purposes. For this more informal specification to be useful in the global marketplace, we need to standardize the way we express it. One area for future research is creating languages/notations for describing informal component specifications. In this area we intend to do our future research.

References
Phase Guided Profiling for Fast Cache Modeling Andreas Sembrant, David Black-Schaffer and Erik Hagersten Uppsala University, Department of Information Technology P.O. Box 337, SE-751 05 Uppsala, Sweden {andreas.sembrant, david.black-schaffer, eh}@it.uu.se ABSTRACT Statistical cache models are powerful tools for understanding application behavior as a function of cache allocation. However, previous techniques have modeled only the average application behavior, which hides the effect of program variations over time. Without detailed time-based information, transient behavior, such as exceeding bandwidth or cache capacity, may be missed. Yet these events, while short, often play a disproportionate role and are critical to understanding program behavior. In this work we extend earlier techniques to incorporate program phase information when collecting runtime profiling data. This allows us to model an application’s cache miss ratio as a function of its cache allocation over time. To reduce overhead and improve accuracy we use online phase detection and phase-guided profiling. The phase-guided profiling reduces overhead by more intelligently selecting portions of the application to sample, while accuracy is improved by combining samples from different instances of the same phase. The result is a new technique that accurately models the time-varying behavior of an application’s miss ratio as a function of its cache allocation on modern hardware. By leveraging phase-guided profiling, this work both improves on the accuracy of previous techniques and reduces the overhead. 1. INTRODUCTION The goal of this work is to develop and explore methods for understanding a program’s cache behavior over time and as a function of its cache allocation. Such information is important for understanding performance [24], resource sharing [15, 9], and scheduling [16]. 
In particular, the ability to analyze a program’s behavior as a function of its cache allocation is essential for modern systems with shared caches where the cache allocation can change dynamically. This requirement makes it difficult to use data from hardware performance counters, as they only provide information for one particular cache allocation. The low overhead statistical cache model, StatCache, developed by Berg and Hagersten [5, 6], can estimate the miss ratio for caches of arbitrary size. It answers the question: what is an application’s miss ratio if it receives $x$ amount of cache? StatCache has been used to estimate shared miss ratios for multi-threaded applications [7] and co-scheduled applications [15], and forms the basis of a commercial code optimization product [1]. However, these existing models only report the application’s average miss ratio, which can be misleading. Consider, for example, an application whose miss ratio is high enough to exceed the system bandwidth for a short portion of its execution. In such an application, the average miss ratio would fail to indicate that the application is at all bandwidth bound. The simplest way to extend these methods to handle program phases is to divide the program execution into many windows and profile each window. This approach has the downside of a significant increase in overhead from having to sample all portions of the application’s execution. To combat this, periodic profiling can be used, wherein only a randomly selected subset of all windows are profiled. Unfortunately, the number of profiled windows must still be high to capture short application phases. A more intelligent solution is to use phase-guided profiling [31, 27]. In this approach, a phase-detection algorithm is used to select only a small part of each phase to profile, and this data is then used for subsequent instances of the same phase. 
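As a rough illustration, the phase-guided selection just described can be sketched as follows. This is a toy model of the idea only: `profile_window` stands in for actual sampling and none of these names come from ScarPhase or StatCache.

```python
def profile_window(window):
    # Stand-in for expensive profiling work (e.g. collecting reuse
    # distance samples for this execution window).
    return {"window": window}

def phase_guided_profile(phase_trace, windows_per_phase=1):
    """Profile only the first `windows_per_phase` windows of each phase;
    later instances of the same phase reuse the stored profile data."""
    profiles = {}             # phase id -> list of profiled windows
    per_window = []
    for window, phase in enumerate(phase_trace):
        if len(profiles.setdefault(phase, [])) < windows_per_phase:
            profiles[phase].append(profile_window(window))  # sample this window
        per_window.append(profiles[phase][0])               # reuse phase data
    return profiles, per_window

# A trace with three phases: only 3 of the 9 windows are ever profiled.
trace = ["A", "A", "B", "B", "A", "C", "C", "B", "A"]
profiles, _ = phase_guided_profile(trace)
print(sum(len(v) for v in profiles.values()))  # → 3
```

The saving grows with the number of repeated phase instances, which is why long-running applications with stable phase behavior benefit most.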
This minimizes the number of profiled windows by avoiding redundantly sampling windows from the same phase. For such an approach to be generally applicable, it must have the following properties:

1) It should not require custom hardware support;
2) It should have minimal runtime overhead without loss of accuracy and fidelity;
3) It should be transparent and non-intrusive (e.g., require no recompilation of the analyzed program and work with dynamically generated code);
4) It should be architecturally independent (e.g., not affected by system load and able to model different cache sizes); and, finally,
5) It should be fully automatic (e.g., users should not have to adjust settings for each application).

To accomplish this, we leverage the ScarPhase (Sample-based Classification and Analysis for Runtime Phases) library developed during our previous work with phase classification [31]. This provides us with low-overhead (2%) runtime detection of program phases. We then combine this phase information with the StatCache [5] statistical cache model to accurately model application cache behavior as a function of time and allocated cache. The main contributions of this paper are:

- A method for accurately modeling cache behavior (miss ratio as a function of cache allocation) over time.
- An efficient method for capturing program cache behavior on modern hardware by integrating the StatCache cache model and the ScarPhase phase detection library.
- A comparison with previous statistical cache modeling methods demonstrating improved accuracy (39%) and efficiency (6×).
- An analysis of the impact of different types of intra-phase variations on phase-guided memory profiling.

2. CACHE BEHAVIOR OVER TIME

An application’s cache behavior, in this case its miss ratio, varies due to both program phases and changes in cache allocation.
To illustrate this, Figures 1 and 2 plot the miss ratio (intensity) over time (x-axis) as a function of cache size (y-axis), for the complete execution of the gcc/166 and bzip2/chicken benchmarks, respectively. The darker the points, the higher the miss ratio. The y-axis (cache size) indicates how the application’s miss ratio is affected by its cache allocation. The x-axis (time) shows the intrinsic phase behavior of the application. The vertical bar marked “Average” on the right shows the application’s overall average miss ratio as a function of cache size. And, finally, the bars above the graph indicate the phases detected by ScarPhase, with smaller phases grouped together in white for clarity. The top figures (1a and 2a) show reference results from a simulation using the Pin [8] instrumentation toolkit, and the bottom figures (1b and 2b) show results from online profiling.

Figure 2: Miss ratio (intensity) as a function of time (x-axis) and cache allocation (y-axis) for the whole execution of bzip2/chicken on a Nehalem machine. The average miss ratio for the whole execution is shown on the right. The detected execution phases are shown above, with shorter phases shown in white for clarity. The top figure (2a) shows results from a reference simulation and the bottom (2b) from online profiling. The boxed area from 0 to 80B instructions highlights the importance of using hardware-independent information for determining phases: when run with 1MB or more of cache allocation, bzip2/chicken appears to have a single phase up to 80B instructions based on cache miss ratio. However, at lower allocations, or using hardware-independent metrics, distinct phases can be clearly seen.

The benefits of time-based information are clearly visible in Figure 1. While the graph shows that there are two periods in gcc/166’s execution with a very high miss ratio at 2MB of allocated cache (phase E at 32 and 70 billion instructions), the overall average miss ratio appears far less severe.
With the more fine-grained phase information, the correct portion of the application can be targeted for optimization. From this information we can also see the limitations of defining phases based on hardware-specific information, such as performance counters. For example, if miss ratio were used to define phases, a machine with 2MB or more of cache would group the first 80 billion instructions of bzip2/chicken into one phase, as the miss ratio is constant. (See the red box in Figure 2b.) However, if the application’s cache allocation were decreased, due to resource sharing, for example, its behavior would change, thereby revealing different phases. This demonstrates the importance of finding phases that are architecturally independent properties of the application.

These examples show how important it is to consider both the intrinsic program phase behavior as well as the impact of program cache allocation when examining application behavior. With this more detailed information we can analyze how various runtime optimizations [10, 19] and scheduling decisions [16, 36] will affect the cache performance. For example, migrating gcc/166 to a core with a smaller cache for phase D could potentially save energy without sacrificing performance. However, trying to migrate bzip2/chicken between a large-cache core for phase B and a small-cache core for phases C and D would entail many more relocations and might not be beneficial.

## 3. STATISTICAL CACHE MODELING

StatCache [5, 6] is a low-overhead statistical cache model. It can estimate the miss ratio of random replacement caches of arbitrary sizes. In this section we give an overview of the model and discuss how program phase behavior affects its accuracy.

### 3.1 Reuse Distance

The input to the StatCache model is cache line reuse distance data. A reuse distance is defined to be the number of memory accesses between two accesses to the same cache line.
For example, if the processor first accesses cache line A, then B and C, and finally A again, the reuse distance of the second access to A would be two. It is important to note that reuse distance counts all memory accesses. This is different from stack distance [26], where only the number of unique memory accesses is counted. As a result, measuring reuse distance requires far less bookkeeping.

### 3.2 The StatCache Cache Model

The reuse distance distribution can be transformed into a miss ratio distribution using the StatCache [5, 6] cache model. StatCache first sorts the reuse distances of the memory accesses into buckets, \( h_i \), where \( h_i \) is the number of memory accesses with a reuse distance of \( i \). Then, the following equation is solved for the miss ratio \( R \):

\[ R \cdot N = h_1 f(R) + h_2 f(2R) + h_3 f(3R) + \cdots \]

where \( N \) is the number of reuse distance samples, i.e., \( N = h_1 + h_2 + h_3 + \cdots \), and \( f(n) \) is a function that gives the probability that a cache line has been evicted from the cache if we know that it was in the cache \( n \) cache misses ago. With random replacement the function \( f(n) \) is:

\[ f(n) = 1 - (1 - 1/L)^n \]

where \( L \) is the number of cache lines in the cache. The cache size is \( L \) times the cache line size. We can then model caches of arbitrary sizes by changing \( L \). The StatCache model can be readily extended to model LRU caches (StatStack [14]) without changing the input data.

### 3.3 Program Phases

The StatCache model works very well for single-phase applications, and its accuracy improves with the number of samples. Indeed, Equation 1 assumes a constant miss ratio across the reuse distance samples. If the behavior is constant for the application, a phase-oblivious overall miss ratio can be determined by simply applying the model to all samples at once. However, as we observed in the previous section, the miss ratio can vary significantly over time.
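To make Equations 1 and 2 concrete, the following is a minimal sketch, not the paper's implementation: it builds a reuse-distance histogram from a cache-line access trace and solves Equation 1 for \( R \) by fixed-point iteration. The iteration scheme and all names (`reuse_histogram`, `statcache_miss_ratio`) are our own illustrative choices.

```python
from collections import defaultdict

def reuse_histogram(trace):
    """Histogram h[d] = number of accesses with reuse distance d, where the
    reuse distance is the number of accesses between two accesses to the
    same cache line (so the trace A, B, C, A gives the second A distance 2)."""
    h = defaultdict(int)
    last_seen = {}
    for t, line in enumerate(trace):
        if line in last_seen:
            h[t - last_seen[line] - 1] += 1
        last_seen[line] = t
    return h

def statcache_miss_ratio(h, num_lines, iters=100):
    """Solve Eq. 1, R*N = sum_i h[i]*f(i*R), using the random-replacement
    f(n) = 1 - (1 - 1/L)^n from Eq. 2, by fixed-point iteration from R=1."""
    n = sum(h.values())
    f = lambda x: 1.0 - (1.0 - 1.0 / num_lines) ** x
    r = 1.0  # start from the all-misses solution and iterate downward
    for _ in range(iters):
        r = sum(cnt * f(d * r) for d, cnt in h.items()) / n
    return r
```

One histogram suffices for any cache size: accesses with long reuse distances are almost always misses in a tiny cache (small `num_lines`) but are absorbed by a large one, which is the point of modeling arbitrary sizes by only changing \( L \).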
As Berg and Hagersten [5, 6] had no means to detect phases, they instead gathered samples in short bursts, where each burst was short enough for the miss ratio to remain approximately constant. They then applied the model to each burst separately and averaged the model output to determine the overall miss ratio. This method improves accuracy by ensuring that the miss ratio is approximately constant across the samples given to the model.

The work presented here is phase aware, and groups samples within the same phase together. The model is then applied to all samples from each phase separately. The application’s overall miss ratio is then the weighted average of the phases. This approach improves accuracy as the miss ratio is far more constant within phases than across them, and, by combining samples from all instances of a phase, the model has more samples to work with at each point in time.

Figure 3 compares these three methods for gcc/166. Each method uses the same reuse distance samples. The phase-oblivious approach applies the model to the most samples (all of them together), but incorrectly assumes that the underlying miss ratio is constant across all samples. As a result it has the worst accuracy. Both the burst and phase-aware methods apply the model to groups of samples taken from periods with a reasonably constant miss ratio, but the phase-aware approach is able to group more samples together for the model, and thereby produce more accurate results.

### 3.4 Sampler Implementation and Overhead

We have implemented a reuse distance sampler on an Intel Xeon E5620 (Nehalem) machine to provide data to the StatCache model. To minimize the overhead, we use hardware performance counters [18] and page protection to sample and monitor reuses. For the StatCache model to work, it is important that all memory accesses have the same probability of being sampled. We therefore use the executed loads and stores counters to interrupt the program execution at random (exponentially distributed) intervals.
However, when these interrupts occur, a context-dependent number of extra instructions are executed. To avoid biasing the results with this “skid” [2], the counters are set up to interrupt before the target access. After the interrupt, the execution is single-stepped to the desired access. At this point the loads and stores counters are recorded for the access’s pending reuse, and page protection is turned on for the access’s page. Execution then continues until a page fault occurs. If the memory access that caused the page fault belongs to a pending reuse, the loads and stores counters are read and the resulting reuse distance recorded. Otherwise, it was a false positive, i.e., the page protection was turned on because of another cache line that resides on the same page. In the latter case, the page protection is temporarily turned off and the execution is single-stepped past the access, before turning the page protection on again.

In this reuse sampler there are two parts to the overhead. First, the application must be single-stepped to the target access. Depending on the length of the skid, this can entail many context switches. Second, it can be equally time consuming to handle page faults, especially when the number of false positives is high. As both of these overheads are directly related to the number of samples required, it is clearly important to intelligently choose when to sample.

## 4. PHASE GUIDED PROFILING

Phase-guided profiling is a method to reduce the overhead of profiling without sacrificing accuracy, by taking advantage of the (nominally) uniform behavior of each program phase. The idea is to only profile a small part of each phase, and then use that profile for all instances of the same phase. There are two benefits to this approach. First, it removes redundant profiling as only a minimum part of each phase is profiled.
Second, it automatically adapts to the application’s characteristics, thereby eliminating the need to adjust profiling parameters for each application and data set.

### 4.1 Detecting Program Phases

We use the ScarPhase [31] library to detect and classify phases. ScarPhase is an execution-history-based, low-overhead (less than 2%), online phase detection library. Because it is based on the application’s execution history, it detects hardware-independent phases [34, 29]. Such phases can be readily missed by performance-counter-based phase detection, as shown in Figure 2b. To detect phases, ScarPhase monitors executed code, based on the observation that changes in executed code reflect changes in many different metrics [33, 34, 11, 35, 20]. To accomplish this, execution is divided into non-overlapping windows. During each window, hardware performance counters are used to sample conditional branches using Intel PEBS [25, 18]. The address of each branch is hashed into a vector of counters called a conditional branch vector (CBRV), similar to a basic block vector (BBV) [33] but with only conditional branches. Each entry in the vector shows how many times its corresponding conditional branches were sampled during the window. The vectors are then used to determine phases by clustering them together using an online clustering algorithm, such as leader-follower [12]. Windows with similar vectors are then grouped into the same cluster and considered to belong to the same phase.

### 4.2 Phase Guided Profiling

The simplest approach to phase-guided profiling is to only profile one window from each phase, and to use that profile for all other instances of the same phase. This way, only a small part of each phase is profiled, thereby lowering overhead, and, if the behavior within the phase is uniform, the accuracy will not suffer.
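The window-to-phase clustering just described can be sketched as follows. This is a generic leader-follower sketch, not ScarPhase's actual implementation: the Manhattan distance, the fixed similarity threshold, and keeping each cluster's first vector as its leader are all illustrative assumptions.

```python
def leader_follower(vectors, threshold=0.5):
    """Assign each window's (normalized) branch vector to a phase ID with
    online leader-follower clustering: join the closest existing cluster if
    it is within `threshold`, otherwise start a new cluster."""
    leaders = []          # one representative vector per phase
    phase_ids = []
    for v in vectors:
        best, best_d = None, float("inf")
        for pid, leader in enumerate(leaders):
            d = sum(abs(a - b) for a, b in zip(leader, v))  # Manhattan distance
            if d < best_d:
                best, best_d = pid, d
        if best is None or best_d > threshold:
            leaders.append(list(v))           # this window starts a new phase
            phase_ids.append(len(leaders) - 1)
        else:
            phase_ids.append(best)
    return phase_ids
```

The threshold directly embodies the tradeoff discussed later in Section 4.3: too small and sub-phases are split into many clusters, too large and distinct sub-phases are merged.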
As a result, the overhead will be proportional to the number of phases in the application, and the profiling will automatically adapt to the application’s requirements. To illustrate how phase-guided profiling works, we have zoomed in on a short part of gcc/166’s execution in Figure 4. The black triangles show where phase-guided profiling decides to profile, and the white triangles show the same for periodic profiling. Phase-guided profiling places the samples at the beginning of each phase. Periodic profiling, on the other hand, may profile the same phase more than once. Furthermore, the period between the profiles must be short enough to catch all phases. If an application has a mix of short and long phases, the profiling period must be set for the shortest phase to accurately capture the application’s behavior. This results in a high overhead and requires the user to adjust the profiler to the application.

ScarPhase returns the phase ID of the just-executed window and a prediction for the next window [35, 23]. Since the phase ID is only known after the window has been executed, we need to rely on the prediction.
If the predicted phase has not been profiled, we start to sample reuse distances; otherwise, we turn off the profiler and do not sample.¹

## 4.3 Intra-Phase Variations

While the phases detected by ScarPhase have reasonably constant behavior within each phase, some applications exhibit intra-phase variation [33, 32]. To illustrate this, Figure 5 plots the Overall Absolute Miss Ratio Error² for four applications as a function of the number of windows profiled in each phase. For example, at 10 on the x-axis, the y-axis shows the error if only the first ten windows of each phase are profiled. If there were no intra-phase variation, the error would be constant. However, this is clearly not the case. Indeed, we observed three different types of intra-phase variation: transition, periodic, and instance. These are illustrated in Figure 6 and discussed below.

### No Variations

Figure 6a shows the miss ratio over time for gcc/166’s phase D in Figure 1. This is a typical phase for gcc/166 with very little intra-phase variation. Only a small part of each phase needs to be profiled, and we see that the error drops rapidly after seven windows in Figure 5.

### Transition Variations

In Figure 6b, mcf can be seen to have a long transition phase where the behavior slowly changes from one phase to another. The figure shows how the miss ratio slowly increases over time between the two phases.

¹ Most phases span several windows and we only need to profile when we are sure we are in the correct phase. A mis-prediction is thus very uncommon. Furthermore, a mis-prediction is unlikely to affect the accuracy since the phase will be profiled later.

² The Overall Absolute Miss Ratio Error for Figures 5 and 7 is defined as the absolute error from the reference simulation across all cache sizes, as shown in Figure 3.

### Periodic Variations

Figure 6c shows the miss ratio over time for bzip2/chicken’s phase A in Figure 2. The behavior is highly periodic and changes rapidly.
It is actually caused by two sub-phases whose vectors are not different enough to create two separate clusters. This tradeoff between uniformity and the number of phases is a limitation of clustering algorithms. If the settings are too sensitive we will identify more phases than necessary, and if they are too insensitive we combine the wrong sub-phases.

### Instance Variations

In Figure 6d we can see that astar/lakes suffers from a different problem. The two separate instances of the same phase have different average miss ratios. The vectors for both instances are nearly identical, but the cache behavior is different. This can happen when two instances of the same phase operate on different input data. This is the reason why so many windows must be profiled for the error to start to decrease in Figure 5: all windows in the first instance of the phase must be profiled before the second instance can be included in the results. There has been a significant amount of research [32, 4, 22] discussing how changes in the code path are correlated to changes in other metrics.

**Figure 6:** Intra-phase variation in miss ratio for Spec2006. Phases shown with shading. While most phases show little intra-phase variation, as in gcc/166 (6a), the three intra-phase variations identified above explain the improved error characteristics (Figure 7) of randomly selecting windows to profile. Both Transitions (6b) and Periodic (6c) are artifacts of the tradeoff between phase size and the number of phases. The Instance Variations (6d), however, represent data-dependent changes in behavior for the same code path.

**Figure 7:** Choosing windows randomly. Overall Absolute Miss Ratio Error as a function of number of profiled windows in each phase. The windows are randomly selected within each phase.

The intra-phase variations identified above have a significant effect on the accuracy of this method. To make an accurate estimate of the behavior of a phase, it is therefore important to consider all instances of the phase. Figure 7 shows the same metric as in Figure 5, but instead of selecting only the first windows, the windows are randomly selected from the whole phase. For example, when \( x \) is ten, the average is calculated from ten randomly selected windows from the phase. If a phase is less than ten windows long, the whole phase is profiled. The error starts to decrease rapidly for all applications compared to taking the windows in order. It is therefore important to spread samples throughout a phase.

### 4.4 Phase Sampling Implementation

To handle intra-phase variations, we try to profile several windows spread throughout the phase. However, we do not know the length of the phase in advance. We therefore start with a short period to catch the shorter phases, and increase the period with the number of profiled windows until an upper limit is reached. Specifically, we start sampling windows with an exponential distribution, and increase the period by a factor of two after each window until we reach an upper limit. In this way, we can reliably profile both short and long phases while maintaining a good distribution of samples.
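This doubling schedule can be sketched as follows, under the assumption that windows arrive one at a time tagged with a phase ID. The parameter values and the `window_schedule` name are illustrative, not the paper's implementation.

```python
import random

def window_schedule(phase_ids, start_period=1, max_period=64, seed=0):
    """For each window, decide whether to profile it. Per phase: profile the
    first window immediately, then wait an exponentially distributed number
    of windows whose mean doubles after each profile, capped at max_period."""
    rng = random.Random(seed)
    period = {}    # current mean gap (in windows) per phase
    next_at = {}   # window count at which the phase is profiled next
    seen = {}      # windows of this phase seen so far
    profile = []
    for pid in phase_ids:
        seen[pid] = seen.get(pid, 0) + 1
        if pid not in next_at:
            period[pid] = start_period
            next_at[pid] = 0  # always profile a phase's first window
        if seen[pid] > next_at[pid]:
            profile.append(True)
            period[pid] = min(period[pid] * 2, max_period)  # back off
            next_at[pid] = seen[pid] + rng.expovariate(1.0 / period[pid])
        else:
            profile.append(False)
    return profile
```

Short phases are caught by the small initial period, while long phases are profiled ever more sparsely, keeping the per-phase sample count roughly logarithmic in phase length.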
It is worth noting that the runtime overhead is proportional to the number of sampled reuse distances. This is different from traditional profiling and simulation, where the overhead comes from the number of executed instructions. This has two implications for this work. First, spreading out profiling windows across an application’s execution time does not increase overhead. This is because the overhead is per sample, regardless of when the samples are taken, and instructions executed between samples run at native speed. Second, because we capture many profile windows, we are less sensitive to selecting optimal windows [34, 27]. For reference, the data collection and modeling for Figures 1b and 2b took minutes to execute, while the simulation to produce the reference results in Figures 1a and 2a took days.

## 5. EVALUATION

In this section we evaluate and compare the accuracy and performance of StatCache with periodic profiling and phase-guided profiling. We implemented periodic profiling by periodically selecting windows to profile. The model was then independently applied to each window. The behavior over time was then approximated by observing how the behavior changes between the profiled windows. Phase-guided profiling used the ScarPhase library as discussed in Section 4. The memory reuse data was captured online using the memory reuse sampler described in Section 3.4. All benchmarks were run from start to completion with their reference input on an Intel Xeon E5620 (Nehalem) system. Because of the random nature of the sampling, we average the data from 5 runs.

To create the reference data, we simulated the cache behavior for the whole execution using the Pin [8] instrumentation toolkit and the Dinero [13] cache simulator. Pin was used to divide the execution into windows and extract a memory reference trace that was sent to Dinero. After each window, the miss ratio for the window was extracted from the Dinero simulation.
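The phase-aware aggregation from Section 3.3 that underlies this comparison (merge per-window reuse histograms by phase, model each phase once, weight by window count) can be sketched as below; the function names and the pluggable `model` callback are illustrative assumptions, not the paper's code.

```python
def phase_aware_miss_ratio(window_phases, window_histograms, model):
    """Group per-window reuse-distance histograms by phase, apply the cache
    model once per merged histogram, and return the weighted average miss
    ratio, with each phase weighted by the number of windows it covers."""
    merged = {}   # phase id -> merged reuse-distance histogram
    weight = {}   # phase id -> number of windows in that phase
    for pid, hist in zip(window_phases, window_histograms):
        weight[pid] = weight.get(pid, 0) + 1
        bucket = merged.setdefault(pid, {})
        for d, cnt in hist.items():
            bucket[d] = bucket.get(d, 0) + cnt
    total = sum(weight.values())
    return sum(model(merged[p]) * weight[p] / total for p in merged)
```

Merging histograms before modeling is what gives the phase-aware method more samples per model invocation than the burst method, while the weighting recovers the application-wide average.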
We chose the eight applications from SPEC 2006 [17] with the most interesting phase behavior (astar/lakes, bzip2/chicken, bwaves, dealii, gcc/166, mcf, perl/splitmail and xalan) and simulated and modeled each for twelve cache sizes from 1KB to 2MB. The cache sizes were chosen to cover the most interesting changes in cache behavior for the benchmarks.

³ To avoid periodic behavior, the sampled windows are selected at random from an exponential distribution with a fixed period.

The CDF can also give valuable insight into application behavior. For example, if gcc/166 hits the bandwidth limit when it has a miss ratio above 20%, the average would indicate that gcc never hits the limit, while the CDF shows that 7% of the execution would be bandwidth bound.

### 5.2 Sampling Parameters

We chose parameters for the window size and sample rate for periodic and phase-guided profiling to produce similar accuracy on gcc/166. (The exact settings and details on the selection process are found in the appendix.) This benchmark was chosen as the baseline because it has the highest number of phases (most difficult to accurately model) and a short execution (least chance to make up for missed phases). The results of choosing settings to produce similar accuracy for gcc/166 can be seen in Figure 8a. The shaded areas are the average miss ratio CDF +/- one standard deviation. The data shows the CDF for both the periodic and phase-guided methods, as well as the reference simulation. The smaller the shaded area and the closer it is to the reference, the better the accuracy. In this graph both the periodic and the phase-guided methods show similar accuracy (1.34% and 1.29%, respectively), as expected. However, to obtain this degree of accuracy, the periodic method imposes an overhead of 89.3% compared to 32.5% for the phase-guided approach.

### 5.3 Accuracy: Error

Figure 8a is on average off by one percent for phase-guided profiling.
Despite using fewer samples, phase-guided profiling achieves better accuracy. There are two reasons for this. First, phase-guided profiling can combine reuse distances from several windows belonging to the same phase, which reduces the modeling error. Second, phase-guided profiling is better at distributing the samples over the execution: it forces shorter phases to be profiled which would otherwise have been missed. The profile thus represents a larger portion of the execution.

### 5.4 Performance: Overhead

Figure 10 presents the overhead for the benchmarks. Phase-guided profiling demonstrates significantly better performance than periodic profiling. The overhead is on average six times lower (21% compared to 133%) than for periodic profiling. The overhead for periodic profiling would be constant if the cost of a reuse distance sample was the same for all applications, and if the number of samples in each window was fixed. This is, however, not the case. First, the cost of a sample depends on the memory behavior, i.e., the number of false positives (page faults). Second, the sample rate is per memory access. Applications with more memory instructions will therefore collect more samples. This significant improvement is achieved by intelligently deciding when and where to profile. Only the most necessary parts of the execution are chosen, resulting in fewer samples required for similar accuracy. The longer the execution and the fewer phases an application has, the better the phase-guided profiling performance will be compared to periodic profiling.

### 5.5 Summary

The accuracy and overhead results show that the StatCache cache model can be efficiently combined with phase-guided profiling to estimate the miss ratio over time for different cache sizes. The result is both more accurate and faster than periodic profiling.

## 6. RELATED WORK

In this section we discuss work related to cache behavior over time and phase detection. Agaram et al.
[3] looked at memory behavior over time by analyzing performance by data structure. They observed that a stable overall miss ratio can hide important changes. For example, the overall miss ratio can appear stable while the miss ratios for individual data structures change. However, they did not map this behavior to application phases. The ScarPhase phase detection would detect such behavior as separate phases if it was caused by changes in the code path.

Both Agaram et al. and others [33, 21] have observed that phase behavior depends on the size of the sampled windows. Dividing the execution into windows effectively averages the execution: the smaller the windows are, the larger the intra-phase variations will be, and vice versa. This is not a significant issue for this work since we profile several windows from the same phase. The profile for the phase will therefore be much closer to the true behavior than if only a single window was selected. One important feature of this work not found in these other works is that we model arbitrary cache sizes. Focusing on just one cache size can ignore important phase distinctions at other cache sizes, as seen in Figure 2b.

ThreadSpotter [1] is a commercial tool that can detect memory bottlenecks. It leverages the work with reuse distances and statistical cache models from [5, 6] to find memory bottlenecks and provide developers with information on how to improve performance. ThreadSpotter uses exponential back-off to reduce overhead by increasing the time between samples for long-running applications. This allows profiling of both short- and long-running applications. Unfortunately, the method is best suited for average miss ratios. Consider gcc in Figure 1. Exponential back-off might detect phase B, but it would start to merge it with A, C or D in later instances.

Nagpurkar et al. [28] used phase-guided profiling for distributed profiling in embedded devices, where each phase was profiled separately on a different device.
The results showed that phase-guided profiling can reduce communication, computation and energy costs. Their implementation used custom hardware [35] to collect basic block vectors in order to detect phases, and assumed perfect prediction. In this work we use code-based phases to guide reuse distance sampling. Shen et al. [32] turned this approach around, and instead used stack distances [26] to define phases. They argue that their phases are better at predicting memory behavior. While they do not report any overhead numbers, it is clear that using hardware performance counters to sample code execution paths is much cheaper than sampling reuse distances or stack distances.

## 7. CONCLUSIONS

In this paper we have shown the importance of considering both program phases and the effects of cache allocation in understanding application behavior. Phase-aware analysis is required to identify important transient behavior in applications (see Figure 1b), which is obscured by average metrics. The effects of different cache allocations can also have a significant impact on program behavior (see Figure 2b), and ignoring them can lead to incorrect phase classification.

We have also shown the benefits of integrating online phase detection and statistical cache modeling to produce a phase-guided statistical cache analysis tool. By doing so we have improved both performance and accuracy over previous techniques, while also providing more valuable data in the form of time-dependent cache behavior. To further improve the accuracy we investigated different sources of intra-phase variation and described a sampling technique to overcome them. The resulting method has better accuracy than previous statistical cache modeling methods, requires no custom hardware or application modifications, and has an overhead six times lower than previous methods.
Appendix: Selecting Sampling Parameters

One of the goals of this work has been to find methods that are automatic and do not require custom settings for every benchmark or data set. This is important since it allows the user to seamlessly work with different input data and applications without having to adjust the tools for each change. Phase-guided profiling makes most parts of the profiling automatic by adapting to the number and length of phases in the application. Configuring periodic profiling, however, is more difficult, as the sampler does not adapt to the application’s behavior. In general, a good sampler setting should be able to collect information from all phases of an application regardless of the input. In this appendix we show how we selected the settings used in the evaluation to achieve this.

We chose to base our settings on gcc/166 as it is short and has many phases. This makes it a particularly tricky application to profile accurately. Therefore, settings with good accuracy for gcc should produce good results for other applications, but may do so at the cost of higher overhead than necessary. This tradeoff between overhead and accuracy is a problem with fixed sampling strategies in general.

To evaluate the impact of changing the sampling settings on the periodic and phase-guided sampling, we ran the samplers five times and varied the number of samples in each window and the period between the windows. The more samples and the shorter the period, the higher the overhead.

**Figure 11:** Accuracy and overhead for periodic and phase-guided profiling. The error bars indicate the standard deviation in error across the different runs.
As expected, the accuracy tends to improve with the overhead. However, the phase-guided profiling is both more accurate and has a lower overhead across the full range. We can also see that the variance is lower for phase-guided profiling, since it is less sensitive to different settings.

For the evaluation we chose the two settings indicated in Figure 11 with roughly the same accuracy on gcc/166. In both cases the application is divided into windows of 100M instructions. For periodic profiling, every eighth window is profiled with one sample for every 400K memory accesses in the window. For phase-guided profiling, the number of samples in each window is reduced to one per 1M memory accesses. However, the number of samples in each phase is still higher since the phase-guided method is able to combine samples from several windows belonging to the same phase before processing them.

**References**
ABSTRACT SAS maintains a wealth of information about the active SAS session, including information on libraries, tables, files and system options; this information is contained in the Dictionary Tables. Understanding and using these tables will help you build interactive and dynamic applications. Unfortunately, Dictionary Tables are often considered an ‘Advanced’ topic to SAS programmers. This paper will help novice and intermediate SAS programmers get started with their mastery of the Dictionary tables. Ever needed a list of the tables (datasets) in a library? How about the columns (variables) in a table? Need to make sure you reset any titles after you run a report? Got some pesky warning messages in your SAS log you would like to clean up? Sure, you can look them up in the table and column properties in the explorer window. Or you can run a Proc Contents and check the listing. And of course you can ignore the warnings and errors in the SAS log since they ‘almost always appear’. Or you can go to the Dictionary Tables and have your programme find out what libraries are allocated or what columns are available. So, what are Dictionary Tables and where do we access them? Before we proceed, let’s come to some common ground with terminology. In this paper we will talk about tables; for SAS programmers a table is the same as a dataset. Where a dataset has observations a table has rows. Where a dataset has variables a table has columns. You ask “Why do we use this terminology?”. And we try to answer “Because Relational Database Management Systems (RDBMS) use this terminology, and they have always had their own Dictionary Tables. The SAS Dictionary Tables are documented in Proc SQL, so we assume this is why SAS uses the SQL/RDBMS terminology.” WHAT ARE DICTIONARY TABLES? What happens when you start a SAS session? Ever right clicked on a table in the SAS explorer and looked at its properties? Would you like to put basic information about an input file into a report footer? 
Have you been asked to change one of your macros so all of the SAS options are set to the same values they had before your macro was invoked? Ever wonder how you can access some of this information in a programme? The first step in using this information is to understand what it represents; simply, this information represents metadata. In general we think of metadata as “data about data”. That is, whereas the data in a table might represent patient records, the metadata tell us attributes of the table; attributes such as column names, column type (numeric, character), table location etc. The metadata SAS makes available goes beyond the traditional “data about data”; it also includes data about the operating environment. The metadata SAS makes available can be found in the SAS Dictionary Tables. SAS Dictionary Tables are read-only tables created and maintained by SAS; they contain a wealth of information about the current SAS session. The Dictionary Tables are stored in a SAS library called DICTIONARY. Although the SAS Dictionary Tables are read only, they are dynamic in that virtually everything you do in the SAS session causes a change in the Dictionary Tables. When you create a new table, whether in a DATA Step, SQL or another SAS Proc, several Dictionary Tables are immediately updated. When you set a title or footnote, a Dictionary Table is immediately updated. When you create a new macro variable, a Dictionary Table is immediately updated. Set a system option… that’s right, a Dictionary Table is updated. The tables are dynamically updated and always available; you have to do nothing to keep them up-to-date. Now that we have a basic understanding of what the SAS Dictionary Tables represent, metadata, let’s look at the tables that are available. SAS introduced Dictionary Tables in version 6. In SAS v8 the number of Dictionary Tables grew to eleven; these were augmented in SAS v9.1.3 and grew to twenty-two.
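The immediate updating described above is easy to see for yourself. The sketch below (the title text is only an example) sets a title and then queries the TITLES Dictionary Table, which already reflects the change:

```sas
/* set a title, then ask the dictionary which titles are in effect */
title1 'Quarterly Enrollment Report';

proc sql;
  select type, number, text
  from dictionary.titles;   /* the new TITLE1 is already listed */
quit;
```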
As noted above, the Dictionary Tables cover virtually every aspect of the SAS session. The tables below enumerate the Dictionary Tables, first showing the v8 tables, then the new tables in v9.

### SAS V8 DICTIONARY TABLES

<table> <thead> <tr> <th>Table</th> <th>Description</th> </tr> </thead> <tbody>
<tr> <td>CATALOGS</td> <td>Contains information about SAS Catalogs</td> </tr>
<tr> <td>COLUMNS</td> <td>Contains information about variables/columns</td> </tr>
<tr> <td>EXTFILES</td> <td>Contains information about external files</td> </tr>
<tr> <td>INDEXES</td> <td>Contains information about columns participating in indexes</td> </tr>
<tr> <td>MACROS</td> <td>Contains information specific to macros</td> </tr>
<tr> <td>MEMBERS</td> <td>Contains information about all data types (tables, views and catalogs)</td> </tr>
<tr> <td>OPTIONS</td> <td>Current session options</td> </tr>
<tr> <td>STYLES</td> <td>ODS styles</td> </tr>
<tr> <td>TABLES</td> <td>Contains information about tables/datasets</td> </tr>
<tr> <td>TITLES</td> <td>Contains information about titles and footnotes</td> </tr>
<tr> <td>VIEWS</td> <td>Contains information about views</td> </tr>
</tbody> </table>

### ADDITIONAL SAS V9.1.3 DICTIONARY TABLES

<table> <thead> <tr> <th>Table</th> <th>Description</th> </tr> </thead> <tbody>
<tr> <td>CHECK_CONSTRAINTS</td> <td>Contains information about Check constraints</td> </tr>
<tr> <td>CONSTRAINT_COLUMN_USAGE</td> <td>Contains information about Constraint column usage</td> </tr>
<tr> <td>CONSTRAINT_TABLE_USAGE</td> <td>Constraint table usage</td> </tr>
<tr> <td>DICTIONARIES</td> <td>DICTIONARY tables and their columns</td> </tr>
<tr> <td>ENGINES</td> <td>Available engines</td> </tr>
<tr> <td>FORMATS</td> <td>Available formats</td> </tr>
<tr> <td>GOPTIONS</td> <td>SAS/Graph options</td> </tr>
<tr> <td>LIBNAMES</td> <td>LIBNAME information</td> </tr>
<tr> <td>REFERENTIAL_CONSTRAINTS</td> <td>Referential constraints</td> </tr>
<tr> <td>REMEMBER</td> <td>Remembered
information</td> </tr>
<tr> <td>TABLE_CONSTRAINTS</td> <td>Table constraints</td> </tr>
</tbody> </table>

In v6 and v8 you had to go to the SAS documentation to discover which Dictionary Tables were available. In v9 a new table was added, **DICTIONARIES**, which has table and column information on all of the Dictionary Tables. Now, to discover which Dictionary Tables are available you simply have to query the **DICTIONARIES** table; the following SQL snippet shows how to get a list of all available Dictionary Tables:

```sql
select distinct memname
from dictionary.dictionaries;
```

In Figure 1 below (courtesy of CodeCrafters Inc.) we can see the tables and how they are related and organized. This diagram gives a concise picture of the tables and how they fit into the SAS environment. As we can see, many of these dictionary tables contain the metadata about our data (e.g. libnames, members, tables, columns), but as SAS has evolved and added more RDBMS-type capability to its data management strengths, we can see this reflected in new tables being added to the dictionary (e.g. constraint_table_usage, check_constraints). In addition, SAS has provided us with tables with metadata about the session environment (e.g. options, goptions, macros) as well as auxiliary metadata (e.g. styles, formats). Before we look at how we can use these tables, let us look at how we can determine their structure and content. So, how can we see the structure of these tables; that is, what columns are in a table? Proc SQL has a DESCRIBE TABLE command that will display the SQL that was used to create the table; the table structure is displayed in the log window.
For example, Listing 1 shows the SAS code submitted, the SAS log notes, and the table structure of DICTIONARY.TABLES as they appear in the log window:

LISTING 1 – GETTING THE STRUCTURE OF A DICTIONARY TABLE

```sas
27   proc sql;
28   describe table dictionary.tables;
NOTE: SQL table DICTIONARY.TABLES was created like:

create table DICTIONARY.TABLES
  (
   libname char(8) label='Library Name',
   memname char(32) label='Member Name',
   memtype char(8) label='Member Type',
   dbms_memtype char(32) label='DBMS Member Type',
   memlabel char(256) label='Dataset Label',
   typemem char(8) label='Dataset Type',
   crdate num format=DATETIME informat=DATETIME label='Date Created',
  );
```

HOW DO I ACCESS DICTIONARY TABLES?

Now that we know how to find table and column names, how do we access them? There is an automatic library called DICTIONARY, so we access the tables the same way we access any SAS table using SQL. For example, to access the MEMBERS table we would do something like:

```
PROC SQL;
  SELECT * FROM dictionary.members;
QUIT;
```

A VIEW OF THE DICTIONARY

One thing that jumps out at experienced DATA step programmers is the library name DICTIONARY. SAS libname definitions are limited to 8 characters, yet DICTIONARY has 10 characters. This means that the Dictionary Tables cannot be directly accessed by the DATA step or by SAS PROCs, aside from PROC SQL. To access the Dictionary Tables directly you must use PROC SQL. However, that does not mean the metadata are not available to the DATA step or to other SAS PROCs; SAS also provides views in the SASHELP library that can surface the metadata in the DATA step and/or other PROCs. The Dictionary Tables are only directly accessible through PROC SQL, whereas the views are accessible from any SAS proc (including SQL), the DATA step, as well as the SAS explorer window. One of the simplest ways to become familiar with the Dictionary Tables is to open the equivalent SASHELP view in the SAS explorer window.
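Because the SASHELP views are ordinary views, they can be read anywhere a table can. A minimal DATA step sketch (the library name and output dataset name are only examples):

```sas
/* snapshot the metadata for everything in WORK via the SASHELP view */
data work.__worktables;
  set sashelp.vtable;        /* same metadata as dictionary.tables */
  where libname = 'WORK';
run;
```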
In this paper I use the term Dictionary Table even though some of the examples use the equivalent views. The tables below list the SASHELP views along with the SQL that is used to create the view. The first table has the views that are identical to the underlying Dictionary Table, and the second table has those views that provide a subset of the underlying tables. **VIEWS THAT ARE IDENTICAL TO THE UNDERLYING TABLE** <table> <thead> <tr> <th>SASHELP VIEWS</th> <th>Source</th> </tr> </thead> <tbody> <tr> <td>SASHELP.VOPTION</td> <td>select * from dictionary.OPTIONS</td> </tr> <tr> <td>SASHELP.VGOPT</td> <td>select * from dictionary.GOPTIONS</td> </tr> <tr> <td>SASHELP.VTITLE</td> <td>select * from dictionary.TITLES</td> </tr> <tr> <td>SASHELP.VMACRO</td> <td>select * from dictionary.MACROS</td> </tr> <tr> <td>SASHELP.VLIBNAM</td> <td>select * from dictionary.LIBNAMES</td> </tr> <tr> <td>SASHELP.VENGINE</td> <td>select * from dictionary.ENGINES</td> </tr> <tr> <td>SASHELP.VEXTFL</td> <td>select * from dictionary.EXTFILES</td> </tr> <tr> <td>SASHELP.VMEMBER</td> <td>select * from dictionary.MEMBERS</td> </tr> <tr> <td>SASHELP.VTABLE</td> <td>select * from dictionary.TABLES</td> </tr> <tr> <td>SASHELP.VVIEW</td> <td>select * from dictionary.VIEWS</td> </tr> <tr> <td>SASHELP.VCOLUMN</td> <td>select * from dictionary.COLUMNS</td> </tr> <tr> <td>SASHELP.VDICTNRY</td> <td>select * from dictionary.DICTIONARIES</td> </tr> <tr> <td>SASHELP.VINDEX</td> <td>select * from dictionary.INDEXES</td> </tr> <tr> <td>SASHELP.VCATALG</td> <td>select * from dictionary.CATALOGS</td> </tr> <tr> <td>SASHELP.VFORMAT</td> <td>select * from dictionary.FORMATS</td> </tr> <tr> <td>SASHELP.VSTYLE</td> <td>select * from dictionary.STYLES</td> </tr> <tr> <td>SASHELP.VCHKCON</td> <td>select * from dictionary.CHECK_CONSTRAINTS</td> </tr> <tr> <td>SASHELP.VREFCON</td> <td>select * from dictionary.REFERENTIAL_CONSTRAINTS</td> </tr> <tr> <td>SASHELP.VTABCON</td> <td>select * from 
dictionary.TABLE_CONSTRAINTS</td> </tr>
<tr> <td>SASHELP.VCNTABU</td> <td>select * from dictionary.CONSTRAINT_TABLE_USAGE</td> </tr>
<tr> <td>SASHELP.VCNCOU</td> <td>select * from dictionary.CONSTRAINT_COLUMN_USAGE</td> </tr>
<tr> <td>SASHELP.VREMEMB</td> <td>select * from dictionary.REMEMBER</td> </tr>
</tbody> </table>

**VIEWS THAT ARE A SUBSET OF THE UNDERLYING TABLE**

<table> <thead> <tr> <th>View</th> <th>Contents</th> <th>Source</th> </tr> </thead> <tbody>
<tr> <td>SASHELP.VALLOPT</td> <td>Options and Goptions</td> <td>select * from DICTIONARY.OPTIONS union select * from DICTIONARY.GOPTIONS;</td> </tr>
<tr> <td>SASHELP.VCFORMAT</td> <td>Character Formats</td> <td>select fmtname from DICTIONARY.FORMATS where source='C';</td> </tr>
<tr> <td>SASHELP.VSACCES</td> <td>SAS/ACCESS Views</td> <td>select libname, memname from dictionary.members where memtype = 'ACCESS' order by libname, memname;</td> </tr>
<tr> <td>SASHELP.VSCATLG</td> <td>SAS CATALOGS</td> <td>select libname, memname from dictionary.members where memtype = 'CATALOG' order by libname, memname;</td> </tr>
<tr> <td>SASHELP.VSLIB</td> <td>SAS Libraries</td> <td>select distinct(libname), path from dictionary.members order by libname;</td> </tr>
<tr> <td>SASHELP.VSTABLE</td> <td>SAS Data Tables</td> <td>select libname, memname from dictionary.members where memtype = 'DATA' order by libname, memname;</td> </tr>
<tr> <td>SASHELP.VSTABVW</td> <td>SAS Data Tables and Views</td> <td>select libname, memname, memtype from dictionary.members where memtype = 'VIEW' or memtype = 'DATA' order by libname, memname;</td> </tr>
<tr> <td>SASHELP.VSVIEW</td> <td>SAS Views</td> <td>select libname, memname from dictionary.members where
memtype = 'VIEW' order by libname, memname;</td> </tr>
</tbody> </table>

If you open the SASHELP library in the explorer window and right-click on one of the SASHELP views, you can also see the structure of the underlying data. The following figure shows the properties window for SASHELP.VMEMBER; note that the columns are the same as we saw in the results of the DESCRIBE TABLE above:

1. Right-click on one of the views and select “Properties”
2. Select the Columns tab

Later we will see yet another way to see the structure of a Dictionary Table.

DICTIONARY TABLES IN MORE DEPTH

In this section we will look at some of the Dictionary Tables, first looking at the structure of each table and then talking about some of the ways it can be used. The purpose here is not to enumerate each of the columns of each of the tables, but to give a general overview of the tables. We will start with the tables that have the metadata about our data tables and views.

DICTIONARY.MEMBERS

The MEMBERS table contains information about all the library member types – tables, views, and catalogs.

```
create table DICTIONARY.MEMBERS
  (
   libname char(8) label='Library Name',
   memname char(32) label='Member Name',
   memtype char(8) label='Member Type',
   dbms_memtype char(32) label='DBMS Member Type',
   engine char(8) label='Engine Name',
   index char(32) label='Indexes',
   path char(1024) label='Path Name'
  );
```

This table is a general overview of all the libraries allocated in the current SAS session. It can be used to determine the contents of a library, or perhaps to determine the type of a specific member. For example, you can use the `engine` column to determine which version of SAS was used to create the library member, or verify the location of the file by looking at the `path` column. With SAS libraries, the `dbms_memtype` is blank since the member type is described in the `memtype` column. However, for external databases (e.g.
SQL Server/ODBC database), the `memtype` column says “DATA”, and the `dbms_memtype` column tells whether it is a database table (value of “TABLE”) or a database view (value of “VIEW”).

DICTIONARY.TABLES

The TABLES table contains more detailed information about the members SAS thinks are tables/datasets; remember, for some external data sources SAS considers DBMS views as tables.

```
create table DICTIONARY.TABLES
  (
   libname char(8) label='Library Name',
   memname char(32) label='Member Name',
   memtype char(8) label='Member Type',
   dbms_memtype char(32) label='DBMS Member Type',
   memlabel char(256) label='Dataset Label',
   typemem char(8) label='Dataset Type',
   crdate num format=DATETIME informat=DATETIME label='Date Created',
   modate num format=DATETIME informat=DATETIME label='Date Modified',
   nobs num label='Number of Physical Observations',
   obslen num label='Observation Length',
   nvar num label='Number of Variables',
   protect char(3) label='Type of Password Protection',
   compress char(8) label='Compression Routine',
   encrypt char(8) label='Encryption',
   npage num label='Number of Pages',
   filesize num label='Size of File',
   pcompress num label='Percent Compression',
   reuse char(3) label='Reuse Space',
   bufsize num label='Bufsize',
   delobs num label='Number of Deleted Observations',
   nlobs num label='Number of Logical Observations',
   maxvar num label='Longest variable name',
   maxlabel num label='Longest label',
   maxgen num label='Maximum number of generations',
   gen num label='Generation number',
   attr char(3) label='Dataset Attributes',
   indxtype char(9) label='Type of Indexes',
   datarep char(32) label='Data Representation',
   sortname char(8) label='Name of Collating Sequence',
   sorttype char(4) label='Sorting Type',
   sortchar char(8) label='Charset Sorted By',
   reqvector char(24) format=$HEX48 informat=$HEX48 label='Requirements Vector',
   datarepname char(170) label='Data Representation Name',
   encoding char(256) label='Data Encoding',
   audit char(3) label='Audit Trail Active?',
   audit_before
char(3) label='Audit Before Image?',
   audit_admin char(3) label='Audit Admin Image?',
   audit_error char(3) label='Audit Error Image?',
   audit_data char(3) label='Audit Data Image?'
  );
```

The TABLES table is commonly used to get some basic information about the table, such as the number of rows (`nobs`) and/or columns (`nvar`) in the table, or the table creation/modification date. With some external data sources (e.g. ODBC) the `nobs` column is set to zero since the SAS/ACCESS driver cannot return the number of rows in a table. Also, as noted above, some DBMS views are reported in the TABLES table, although the `dbms_memtype` column can be used to determine whether we are looking at a DBMS table or view.

**DICTIONARY.VIEWS**

The VIEWS table contains a more limited set of metadata about the views available. Note that it is reporting on SAS views, not views in external DBMSs.

```sql
create table DICTIONARY.VIEWS
  (
   libname char(8) label='Library Name',
   memname char(32) label='Member Name',
   memtype char(8) label='Member Type',
   engine char(8) label='Engine Name'
  );
```

Besides letting you determine which views are available, by looking at the `engine` column you can determine if the view was created as an SQL view or a DATA step view. This table can be used to list all of the SASHELP views, as will be shown later.

**DICTIONARY.COLUMNS**

The COLUMNS table provides detailed metadata about the columns in all of the tables and views.

```sql
create table DICTIONARY.COLUMNS
  (
   libname char(8) label='Library Name',
   memname char(32) label='Member Name',
   memtype char(8) label='Member Type',
   name char(32) label='Column Name',
   type char(4) label='Column Type',
   length num label='Column Length',
```

This table is commonly used to determine if a column exists (see examples below). It can also be used to verify the column type and format. When looking at the above table layouts we see some common ‘key’ columns - particularly libname and memname.
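As a quick sketch of that column-existence check (the library, table, and column names here are hypothetical), a count can be captured into a macro variable:

```sas
proc sql noprint;
  /* does WORK.PATIENTS have a column named VISITDT? */
  select count(*) into :colExists
  from dictionary.columns
  where libname = 'WORK'
    and memname = 'PATIENTS'
    and upcase(name) = 'VISITDT';
quit;
%put colExists=&colExists;   /* 0 means the column was not found */
```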
By joining the above three tables on these key columns, it is possible to produce a custom report of your data dictionary. In addition to the Dictionary Tables that describe your data, there are tables that describe your SAS session. Let us look at some of these.

**DICTIONARY.OPTIONS**

The OPTIONS table has an entry for each of the SAS options.

```sql
create table DICTIONARY.OPTIONS
  (
   optname char(32) label='Option Name',
   opttype char(8) label='Option type',
   setting char(1024) label='Option Setting',
   optdesc char(160) label='Option Description',
   level char(8) label='Option Location',
   group char(32) label='Option Group'
  );
```

**DICTIONARY.TITLES**

The TITLES table has an entry for each title and footnote line currently in effect. See the example below on how to use this table to save the current titles, and then restore them after running a report.

```sql
create table DICTIONARY.TITLES
  (
   type char(1) label='Title Location',
   number num label='Title Number',
   text char(256) label='Title Text'
  );
```

**DICTIONARY.EXTFILES**

The EXTFILES table has an entry for each external file registered in the session (filerefs).

```sql
create table DICTIONARY.EXTFILES
  (
   fileref char(8) label='Fileref',
   xpath char(1024) label='Path Name'
  );
```

This table is useful when you want to document external data sources/output from a run. Be aware that SAS has a number of filerefs it uses that you do not see in the explorer window; all of these SAS-generated filerefs begin with #LN, so you can easily filter them out.

Now that we have looked at a few of the Dictionary Tables, let us look at some examples of how they could be used.

**LOOKING IT UP IN THE DICTIONARY**

You have developed an outstanding report that everyone wants included in their SAS runs. The problem is your report sets new title text, and some users want their original titles back after your report runs. Well, wrap your report in a macro and add two simple data steps, one before and one after your report (Listing 2).
**LISTING 2 - RESETTING TITLES**

```sas
%macro myreport;
/* use the dictionary to save all of the old titles and footnotes */

/* force the new title for demonstration purposes */
TITLE1 'The Original SAS Title1';
TITLE2 'The Original SAS Title2';

/* the SAS view sashelp.vtitle has the current titles and footnotes */
DATA __oldtitles;
  set sashelp.vtitle;
RUN;

/* add a new title and run the report */
TITLE1 'This is a GREAT REPORT';
TITLE2 'With TWO Title Lines';

PROC SORT DATA=sashelp.shoes OUT=shoes;
  BY region product;
RUN;

PROC PRINT noobs DATA=shoes;
  BY region product;
  ID region product;
  SUMBY product;
  VAR sales returns;
RUN;

/* restore the titles from the dataset */
DATA _null_;
  SET __oldtitles END=done;
  length title $12.;
  length newtext $202.;
  anum = compress(put(number, 2.));
  newtext = '' || text;
  l = length(newtext);
```

Listing 2 can be viewed as a rough template for saving and restoring most settings in the SAS session. First, open the appropriate view (using a WHERE clause if appropriate) to save the current values. Set and use the new values. Finally, in another data step create the macro variables which are used to reset the original values.

A common use of the Dictionary Tables/Views is to identify and/or enumerate the tables and columns available in the session. Let’s take a quick look at some of these metadata.

**FIND THAT COLUMN**

The Dictionary View SASHELP.VCOLUMN has the list of all the columns in all of the tables and views in your current SAS session. We can use this table to create a list of columns that are in multiple tables.
First, we could do it with a simple listing (note the WHERE clause that excludes the MAPS and SASHELP libraries):

**LISTING 3 - SELECTING COLUMNS IN MULTIPLE TABLES (1)**

```sql
proc sql;
  select name, count(*) as occurrences
  from sashelp.vcolumn
  where libname not in ('MAPS', 'SASHELP')
  group by name
  having count(*) > 1
  order by name
  ;
quit;
```

Knowing the columns with multiple occurrences is useful, but it would be more useful to know in which tables the columns belong. With SAS this is where it gets interesting, since there are usually a number of ways to solve the problem. One way is to create tables with the column names and use these to get more data on the columns:

**LISTING 4 - SELECTING COLUMNS IN MULTIPLE TABLES (2)**

```sql
proc sql;
  create table MultiColCnt as
    select name, count(*) as occurrences       /* 1. get a count for each column          */
    from sashelp.vcolumn
    where libname not in ('MAPS', 'SASHELP')   /* 2. exclude MAPS and SASHELP libraries   */
    group by upcase(name)
    having count(*) > 1                        /* 3. only keep names occurring more than once */
    order by name
    ;
  create table MultiColCnt1 as
    select distinct name
    from MultiColCnt
    ;
  create table MultiColTables as
    select v.*
    from sashelp.vcolumn as v, MultiColCnt1 as c
    where c.name = v.name
      and v.libname not in ('MAPS', 'SASHELP')
    order by v.name, v.libname, v.memname
    ;
quit;
```

Now, suppose you want to apply a consistent label and format to a specific column that may be found in multiple tables. Well, the Dictionary Tables can help you locate the tables with the column and check whether the label and format need to be changed. For those that need changing, you can apply your favorite proc to change them. The following example uses a data step to select the tables that need changing, then uses SQL to change them.

**LISTING 5 - CHANGING THE LABEL AND FORMAT OF A SELECTED COLUMN**

```sas
%macro ChangeLabelFormat(colName,  /* column to change */
                         newLabel, /* label to apply   */
                         newFormat /* format to apply  */
                         );
  %local tblsToChange;
  %local i;
  %let colName = %upcase(&colName);

  %* first locate the tables with the column;
  %let tblsToChange = 0;
  data _null_;
    set sashelp.vcolumn (where=(upcase(name) EQ "&colName"
                                AND (label NE "&newLabel" OR format NE "&newFormat"))
                         keep=libname memname name label format) end=done;
    chgLib   = compress('ChgLib'   || put(_n_, 7.));
    chgTable = compress('ChgTable' || put(_n_, 7.));
    call symput(chgLib, trim(libname));
    call symput(chgTable, trim(memname));
    if done then call symput('tblsToChange', put(_n_, 7.));
  run;

  %* if any cols need changing, use SQL to change them;
  %if &tblsToChange NE 0 %then %do;
    %do i = 1 %to &tblsToChange;
      %let tblToChange = %cmpres(&&chgLib&i...&&chgTable&i);
      proc sql;
        alter table &tblToChange
          modify &colName label="&newLabel" format=&newFormat
        ;
      quit;
    %end;
  %end;
%mend ChangeLabelFormat;
```

We saw earlier the use of DESCRIBE TABLE to print the table structure to the SAS log. Although it gives us the information we need, it is not always in a usable form. Similarly, we saw how we can also see the structure by looking at the properties of the SASHELP view in the explorer window. Again, it gives us the information, but it is not always in a usable form. What if we want to capture these metadata in a data step, or create a more user-friendly printed output? SAS Dictionary Tables to the rescue!

**LISTING 8 – CAPTURING THE DICTIONARY METADATA**

```sas
%macro describeTable(table=tables);
  %let table = %upcase(&table);
  PROC SQL;
    CREATE TABLE _&table as
    SELECT *
```

CONCLUSION

Dictionary Tables are an essential part of every SAS developer’s toolbox. In the past, Michael Davis at SUGI 26 presented a paper that had a good overview and review of other papers showing different uses of the Dictionary Tables. Also, Pete Lund at SUGI 26 had an excellent paper on the use of the Dictionary Tables to document a project.
This paper has provided a brief introduction to the Dictionary Tables; I hope it has helped you better understand them so that they become a more readily accessible tool for you.

REFERENCES

Ravi, Prasad. “Renaming All Variables in a SAS Data Set Using the Information from PROC SQL’s Dictionary Tables”

Beakley, Steven & McCoy, Suzanne. “Dynamic SAS Programming Techniques, or How NOT to Create Job Security” http://www2.sas.com/proceedings/sugi29/078-29.pdf

Lund, Pete. “A Quick and Easy Data Dictionary Macro” http://www2.sas.com/proceedings/sugi27/p099-27.pdf

Davis, Michael. “You Could Look it Up: An Introduction to SASHELP Dictionary Views” http://www2.sas.com/proceedings/sugi26/p017-26.pdf

Eberhardt, Peter & Brill, Ilene. “How Do I Look It Up If I Cannot Spell It: An Introduction to SAS Dictionary Tables”

ABOUT THE AUTHOR

Peter Eberhardt is a SAS Certified Professional V8, a SAS Certified Professional V6, and a SAS Certified Professional - Data Management V6. In addition, his company, Fernwood Consulting Group Inc., is a SAS Alliance Partner. Peter is a regular speaker at SAS Global Forum, SESUG and NESUG, as well as at local user groups in Canada, the US and the Caribbean. If you have any questions or comments you can contact Peter at:

Fernwood Consulting Group Inc., 288 Laird Dr., Toronto ON M4G 3X5 Canada
Voice: (416)429-5705
e-mail: peter@fernwood.ca

SAS and SAS Alliance Partner are registered trademarks of SAS Institute Inc. in the USA and other countries. Other brand and product names are registered trademarks or trademarks of their respective companies.
Red Hat OpenStack Platform 16.1

Introduction to the OpenStack Dashboard

An overview of the Red Hat OpenStack Platform Dashboard graphical user interface

Last Updated: 2022-03-24

OpenStack Team rhos-docs@redhat.com

Abstract

This guide provides an outline of the options available in the Red Hat OpenStack Platform Dashboard user interface.

Table of Contents

- MAKING OPEN SOURCE MORE INCLUSIVE
- PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
- CHAPTER 1. THE RED HAT OPENSTACK PLATFORM DASHBOARD SERVICE (HORIZON)
  - 1.1. THE ADMIN TAB
    - 1.1.1. Viewing allocated floating IP addresses
  - 1.2. THE PROJECT TAB
  - 1.3. THE IDENTITY TAB
- CHAPTER 2. CUSTOMIZING THE DASHBOARD
  - 2.1. OBTAINING THE HORIZON CONTAINER IMAGE
  - 2.2. OBTAINING THE RCUE THEME
  - 2.3. CREATING YOUR OWN THEME BASED ON RCUE
  - 2.4. CREATING A FILE TO ENABLE YOUR THEME AND CUSTOMIZE THE DASHBOARD
  - 2.5. GENERATING A MODIFIED HORIZON IMAGE
  - 2.6. USING THE MODIFIED CONTAINER IMAGE IN THE OVERCLOUD
  - 2.7. EDITING PUPPET PARAMETERS
  - 2.8. DEPLOYING AN OVERCLOUD WITH A CUSTOMIZED DASHBOARD

MAKING OPEN SOURCE MORE INCLUSIVE

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION

We appreciate your input on our documentation. Tell us how we can make it better.

**Using the Direct Documentation Feedback (DDF) function**

Use the Add Feedback DDF function for direct comments on specific sentences, paragraphs, or code blocks.

1. View the documentation in the *Multi-page HTML* format.
2. Ensure that you see the Feedback button in the upper right corner of the document.
3. Highlight the part of text that you want to comment on.
4. Click Add Feedback.
5. Complete the Add Feedback field with your comments.
6. Optional: Add your email address so that the documentation team can contact you for clarification on your issue.
7. Click Submit.

CHAPTER 1. THE RED HAT OPENSTACK PLATFORM DASHBOARD SERVICE (HORIZON)

The Red Hat OpenStack Platform (RHOSP) Dashboard (horizon) is a web-based graphical user interface that you can use to manage RHOSP services. To access the browser dashboard, you must install the Dashboard service, and you must know the dashboard host name, or IP, and login password. The dashboard URL is:

```
http://HOSTNAME/dashboard/
```

### 1.1. THE ADMIN TAB

In the Admin tab you can view usage and manage instances, volumes, flavors, images, projects, users, services, and quotas.

**NOTE** The Admin tab displays in the main window when you log in as an admin user.

The following options are available in the Admin tab:

**Table 1.1. System Panel**

<table> <thead> <tr> <th>Parameter name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Overview</td> <td>View basic reports.</td> </tr> <tr> <td>Resource Usage</td> <td>Use the following tabs to view the following usages:</td> </tr> <tr> <td></td> <td>● Usage Report – View the usage report.</td> </tr> <tr> <td></td> <td>● Stats - View the statistics of all resources.</td> </tr> <tr> <td>Hypervisors</td> <td>View the hypervisor summary.</td> </tr> <tr> <td>Host Aggregates</td> <td>View, create, and edit host aggregates. View the list of availability zones.</td> </tr> <tr> <td>Instances</td> <td>View, pause, resume, suspend, migrate, soft or hard reboot, and delete running instances that belong to users of some, but not all, projects. Also, view the log for an instance or access an instance with the console.</td> </tr> <tr> <td>Volumes</td> <td>View, create, edit, and delete volumes, and volume types.</td> </tr> <tr> <td>Flavors</td> <td>View, create, edit, view extra specifications for, and delete flavors. Flavors are the virtual hardware templates in Red Hat OpenStack Platform (RHOSP).</td> </tr> </tbody> </table>

### 1.1.1. Viewing allocated floating IP addresses

You can use the **Floating IPs** panel to view a list of allocated floating IP addresses. You can access the same information from the command line with the `nova list --all-projects` command.

### 1.2. THE PROJECT TAB

In the **Project** tab you can view and manage project resources. Set a project as active in **Identity > Projects** to view and manage resources in that project.
The following options are available in the **Project** tab:

<table> <thead> <tr> <th>Parameter name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Overview</td> <td>View reports for the project.</td> </tr> <tr> <td>Instances</td> <td>View, launch, create a snapshot from, stop, pause, or reboot instances, or connect to them through the console.</td> </tr> <tr> <td><strong>Volumes</strong></td> <td>Use the following tabs to complete these tasks:</td> </tr> <tr> <td></td> <td>- <strong>Volumes</strong> - View, create, edit, and delete volumes.</td> </tr> <tr> <td></td> <td>- <strong>Volume Snapshots</strong> - View, create, edit, and delete volume snapshots.</td> </tr> <tr> <td><strong>Images</strong></td> <td>View images, instance snapshots, and volume snapshots that project users create, and any images that are publicly available. Create, edit, and delete images, and launch instances from images and snapshots.</td> </tr> <tr> <td><strong>Access &amp; Security</strong></td> <td>Use the following tabs to complete these tasks:</td> </tr> <tr> <td></td> <td>- <strong>Security Groups</strong> - View, create, edit, and delete security groups and security group rules.</td> </tr> <tr> <td></td> <td>- <strong>Key Pairs</strong> - View, create, edit, import, and delete key pairs.</td> </tr> <tr> <td></td> <td>- <strong>Floating IPs</strong> - Allocate an IP address to or release it from a project.</td> </tr> <tr> <td></td> <td>- <strong>API Access</strong> - View API endpoints, download the OpenStack RC file, download EC2 credentials, and view credentials for the current project user.</td> </tr> </tbody> </table>

### Table 1.3. The Network tab

<table> <thead> <tr> <th>Parameter name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><strong>Network Topology</strong></td> <td>View the interactive topology of the network.</td> </tr> <tr> <td><strong>Networks</strong></td> <td>Create and manage public and private networks and subnets.</td> </tr> <tr> <td><strong>Routers</strong></td> <td>Create and manage routers.</td> </tr> <tr> <td><strong>Trunks</strong></td> <td>Create and manage trunks. Requires the <strong>trunk</strong> extension enabled in OpenStack Networking (neutron).</td> </tr> </tbody> </table>

### Table 1.4. The Object Store tab

<table> <thead> <tr> <th>Parameter name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><strong>Containers</strong></td> <td>Create and manage storage containers. A container is a storage compartment for data, and provides a way for you to organize your data. It is similar to the concept of a Linux file directory, but it cannot be nested.</td> </tr> </tbody> </table>

### Table 1.5. The Orchestration tab

<table> <thead> <tr> <th>Parameter name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Stacks</td> <td>Orchestrate multiple composite cloud applications with templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.</td> </tr> </tbody> </table>

### 1.3. THE IDENTITY TAB

In the **Identity** tab you can view and manage projects and users.

The following options are available in the **Identity** tab:

- **Projects** - View, create, edit, and delete projects, view project usage, add or remove users as project members, modify quotas, and set an active project.
- **Users** - View, create, edit, disable, and delete users, and change user passwords. The **Users** tab is available when you log in as an admin user.
For more information about managing your cloud with the Red Hat OpenStack Platform dashboard, see the following guides:

- [Creating and Managing Instances](#)
- [Creating and Managing Images](#)
- [Networking guide](#)
- [Users and Identity Management guide](#)

CHAPTER 2. CUSTOMIZING THE DASHBOARD

The Red Hat OpenStack Platform (RHOSP) dashboard (horizon) uses a default theme (RCUE), which is stored inside the horizon container. You can add your own theme to the container image and customize certain parameters to change the look and feel of the following dashboard elements:

- Logo
- Site colors
- Stylesheets
- HTML title
- Site branding link
- Help URL

NOTE To ensure continued support for modified RHOSP container images, the resulting images must comply with the Red Hat Container Support Policy.

### 2.1. OBTAINING THE HORIZON CONTAINER IMAGE

To obtain a copy of the horizon container image, pull the image either into the undercloud or a separate client system that is running podman.

Procedure

- Pull the horizon container image:

```
$ sudo podman pull registry.redhat.io/rhosp-rhel8/openstack-horizon:16.1
```

You can use this image as a basis for a modified image.

### 2.2. OBTAINING THE RCUE THEME

The horizon container image uses the Red Hat branded RCUE theme by default. You can use this theme as a basis for your own theme and extract a copy from the container image.

Procedure

1. Create a directory for your theme:

```
$ mkdir ~/horizon-themes
$ cd ~/horizon-themes
```

2. Start a container that executes a null loop. For example, run the following command:

```
$ sudo podman run --rm -d --name horizon-temp registry.redhat.io/rhosp-rhel8/openstack-horizon /usr/bin/sleep infinity
```

3. Copy the RCUE theme from the container to your local directory:

```
$ sudo podman cp -a horizon-temp:/usr/share/openstack-dashboard/openstack_dashboard/themes/rcue .
```

4. Kill the container:

```
$ sudo podman kill horizon-temp
```

**Result:** You now have a local copy of the RCUE theme.

### 2.3. CREATING YOUR OWN THEME BASED ON RCUE

To use RCUE as a basis, copy the entire RCUE theme directory rcue to a new location. This procedure uses *mytheme* as an example name.

**Procedure**

- Copy the theme:

```
$ cp -r rcue mytheme
```

To change the colors, graphics, fonts, and other elements of a theme, edit the files in *mytheme*. When you edit this theme, check for all instances of rcue including paths, files, and directories to ensure that you change them to the new *mytheme* name.

### 2.4. CREATING A FILE TO ENABLE YOUR THEME AND CUSTOMIZE THE DASHBOARD

To enable your theme in the dashboard container, you must create a file to override the `AVAILABLE_THEMES` parameter.

**Procedure**

1. Create a new file called `_12_mytheme_theme.py` in the `horizon-themes` directory and add the following content:

```
AVAILABLE_THEMES = [('mytheme', 'My Custom Theme', 'themes/mytheme')]
```

The 12 in the file name ensures this file is loaded after the RCUE file, which uses 11, and overrides the `AVAILABLE_THEMES` parameter.

2. Optional: You can also set custom parameters in the `_12_mytheme_theme.py` file. Use the following examples as a guide:

- **SITE_BRANDING** - Set the HTML title that appears at the top of the browser window.

```
SITE_BRANDING = "Example, Inc. Cloud"
```

- **SITE_BRANDING_LINK** - Change the hyperlink of the theme logo, which redirects to `horizon:user_home` by default.

```python
SITE_BRANDING_LINK = "http://example.com"
```

### 2.5. GENERATING A MODIFIED HORIZON IMAGE

When your custom theme is ready, you can create a new container image that uses your theme.

Procedure

1. Use a `dockerfile` to generate a new container image using the original `horizon` image as a basis, as shown in the following example:

```bash
FROM registry.redhat.io/rhosp-rhel8/openstack-horizon
MAINTAINER Acme
LABEL name="rhosp-rhel8/openstack-horizon-mytheme" vendor="Acme" version="0" release="1"
COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme
COPY _12_mytheme_theme.py /var/lib/config-data/horizon/etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py
RUN sudo chown horizon:horizon /var/lib/config-data/horizon/etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py
```

2. Save this file in your `horizon-themes` directory as `dockerfile`.

3. Use the `dockerfile` to generate the new image:

```
$ sudo podman build . -t "172.24.10.8787/rhosp-rhel8/openstack-horizon-mytheme:0-5" --log-level debug
```

The `-t` option names and tags the resulting image. It uses the following syntax:

```
[LOCATION]/[NAME]:[TAG]
```

**LOCATION** - This is usually the location of the container registry that the overcloud eventually uses to pull images. In this instance, you push this image to the container registry of the undercloud, so set this to the undercloud IP and port.

**NAME** - For consistency, this is usually the same name as the original container image followed by the name of your theme. In this instance, it is `rhosp-rhel8/openstack-horizon-mytheme`.

**TAG** - The tag for the image. Red Hat uses the `version` and `release` labels as a basis for this tag. If you generate a new version of this image, increment the `release`, for example, `0-2`.

4. Push the image to the container registry of the undercloud:

```
$ sudo openstack tripleo container image push --local 172.24.10.8787/rhosp-rhel8/openstack-horizon-mytheme:0-5
```

5. Verify that the image has uploaded to the local registry:

```
[stack@director horizon-themes]$ curl http://172.24.10.10:8787/v2/_catalog | jq .repositories[] | grep -i hori
"rhosp-rhel8/openstack-horizon"
[stack@director ~]$ sudo openstack tripleo container image list | grep hor
[stack@director ~]$
```

**IMPORTANT** If you update or upgrade Red Hat OpenStack Platform, you must reapply the theme to the new `horizon` image and push a new version of the modified image to the undercloud.

### 2.6. USING THE MODIFIED CONTAINER IMAGE IN THE OVERCLOUD

To use the container image that you modified with your overcloud deployment, edit the environment file that contains the list of container image locations. This environment file is usually named `overcloud-images.yaml`.

**Procedure**

1. Edit the `DockerHorizonConfigImage` and `DockerHorizonImage` parameters to point to your modified container image:

```yaml
parameter_defaults:
  ...
  ...
```

2. Save this new version of the `overcloud-images.yaml` file.

### 2.7. EDITING PUPPET PARAMETERS

Director provides a set of dashboard parameters that you can modify with environment files.

**Procedure**

- Use the `ExtraConfig` parameter to set Puppet hieradata. For example, the default help URL points to [https://access.redhat.com/documentation/en/red-hat-openstack-platform](https://access.redhat.com/documentation/en/red-hat-openstack-platform). To modify this URL, use the following environment file content and replace the URL:

```yaml
parameter_defaults:
  ExtraConfig:
    horizon::help_url: "http://openstack.example.com"
```

### 2.8. DEPLOYING AN OVERCLOUD WITH A CUSTOMIZED DASHBOARD

Procedure

- To deploy the overcloud with your dashboard customizations, include the following environment files in the `openstack overcloud deploy` command:
  - The environment file with your modified container image locations.
  - The environment file with additional dashboard modifications.
  - Any other environment files that are relevant to your overcloud configuration.

```bash
$ openstack overcloud deploy --templates \
 -e /home/stack/templates/overcloud-images.yaml \
 -e /home/stack/templates/help_url.yaml \
 [OTHER OPTIONS]
```
{"Source-Url": "https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/pdf/introduction_to_the_openstack_dashboard/red_hat_openstack_platform-16.1-introduction_to_the_openstack_dashboard-en-us.pdf", "len_cl100k_base": 4148, "olmocr-version": "0.1.51", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 27650, "total-output-tokens": 4721, "length": "2e12", "weborganizer": {"__label__adult": 0.00022125244140625, "__label__art_design": 0.0005125999450683594, "__label__crime_law": 0.00021517276763916016, "__label__education_jobs": 0.0011644363403320312, "__label__entertainment": 7.992982864379883e-05, "__label__fashion_beauty": 0.00010466575622558594, "__label__finance_business": 0.0015459060668945312, "__label__food_dining": 0.000141143798828125, "__label__games": 0.0005598068237304688, "__label__hardware": 0.001819610595703125, "__label__health": 0.0001609325408935547, "__label__history": 0.00018227100372314453, "__label__home_hobbies": 0.00011080503463745116, "__label__industrial": 0.0003037452697753906, "__label__literature": 0.00011521577835083008, "__label__politics": 0.00016164779663085938, "__label__religion": 0.00022602081298828125, "__label__science_tech": 0.00982666015625, "__label__social_life": 9.101629257202148e-05, "__label__software": 0.2442626953125, "__label__software_dev": 0.73779296875, "__label__sports_fitness": 0.00012791156768798828, "__label__transportation": 0.0002366304397583008, "__label__travel": 0.00016963481903076172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16215, 0.01495]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16215, 0.16725]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16215, 0.71264]], "google_gemma-3-12b-it_contains_pii": [[0, 181, false], [181, 181, null], [181, 372, null], [372, 498, null], [498, 2002, null], [2002, 2002, null], [2002, 
2393, null], [2393, 3102, null], [3102, 4793, null], [4793, 5538, null], [5538, 7397, null], [7397, 8410, null], [8410, 9889, null], [9889, 11518, null], [11518, 13603, null], [13603, 15613, null], [15613, 16215, null]], "google_gemma-3-12b-it_is_public_document": [[0, 181, true], [181, 181, null], [181, 372, null], [372, 498, null], [498, 2002, null], [2002, 2002, null], [2002, 2393, null], [2393, 3102, null], [3102, 4793, null], [4793, 5538, null], [5538, 7397, null], [7397, 8410, null], [8410, 9889, null], [9889, 11518, null], [11518, 13603, null], [13603, 15613, null], [15613, 16215, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16215, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16215, null]], "pdf_page_numbers": [[0, 181, 1], [181, 181, 2], [181, 372, 3], [372, 498, 4], [498, 2002, 5], [2002, 2002, 6], [2002, 2393, 7], [2393, 3102, 8], [3102, 4793, 9], [4793, 5538, 10], [5538, 7397, 11], [7397, 8410, 12], [8410, 9889, 13], [9889, 11518, 14], [11518, 13603, 15], [13603, 15613, 16], [15613, 16215, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16215, 0.15079]]}
olmocr_science_pdfs
2024-12-04
2024-12-04
1c5815544a4fcaabc5127955d57075b3abba2e04
Adaptable and Verifiable BDI Reasoning

Peter Stringer, Rafael C. Cardoso, Xiaowei Huang, Louise A. Dennis

University of Liverpool, Liverpool L69 3BX, United Kingdom
{peter.stringer, rafael.cardoso, xiaowei.huang, l.a.dennis}@liverpool.ac.uk

Long-term autonomy requires autonomous systems to adapt as their capabilities no longer perform as expected. To achieve this, a system must first be capable of detecting such changes. Creating and maintaining a system ontology is a comprehensive solution for this; an agent-maintained formal self-model will take the role of this system ontology. It would act as a repository of information about all the processes and functionality of the autonomous system, forming a systematic approach for detecting action failures. Our work will focus on Belief-Desire-Intention (BDI) [25] programming languages as they are well known for their use in developing intelligent agents [1,6,16,21]. Agents that are capable of controlling an array of cyber-physical autonomous systems such as autonomous vehicles, spacecraft and robot arms have been programmed using BDI agents (e.g., Mars Rover [16], Earth-orbiting satellites [6] and robotic arms for nuclear waste-processing [1]). Coupled with their use of plans and actions, BDI languages offer an appropriate platform to build upon for the development of an adaptable autonomous system. The agent-maintained self-model includes action descriptions, consisting of pre- and post-conditions of all known actions/capabilities. An action’s pre-conditions are the environment conditions that must exist for an action to be executed, whilst post-conditions are defined as the expected changes in the environment made directly by a completed action. These action descriptions are based on the Planning Domain Definition Language (PDDL) [22], commonly used in classical automated planning.
The complete availability of current system information will provide the ability to monitor the status of actions, presenting the opportunity to detect failure. We use action life-cycles based on a theory of durative actions for BDI systems [10] to detect persistent abnormal behaviour from action executions that could denote hardware degradation or other long-term causes of failure such as exposure to radiation or extreme temperature. Once a failure has been detected, we can use machine learning methods to update the action description in the self-model. Then, we can repair or replace the actions in any existing plans by using an automated planner to patch these plans. The resulting plans can then be verified to ensure the system’s safety properties are intact.

1 Introduction

Long-term autonomy requires autonomous systems to adapt as their capabilities no longer perform as expected. To achieve this, a system must first be capable of detecting such changes. Creating and maintaining a system ontology is a comprehensive solution for this; an agent-maintained formal self-model will take the role of this system ontology. It would act as a repository of information about all the processes and functionality of the autonomous system, forming a systematic approach for detecting action failures. Our work will focus on Belief-Desire-Intention (BDI) [25] programming languages as they are well known for their use in developing intelligent agents [1,6,16,21]. Agents that are capable of controlling an array of cyber-physical autonomous systems such as autonomous vehicles, spacecraft and robot arms have been programmed using BDI agents (e.g., Mars Rover [16], Earth-orbiting satellites [6] and robotic arms for nuclear waste-processing [1]). Coupled with their use of plans and actions, BDI languages offer an appropriate platform to build upon for the development of an adaptable autonomous system.
The agent-maintained self-model includes action descriptions, consisting of pre- and post-conditions of all known actions/capabilities. An action’s pre-conditions are the environment conditions that must exist for an action to be executed, whilst post-conditions are defined as the expected changes in the environment made directly by a completed action. These action descriptions are based on the Planning Domain Definition Language (PDDL) [22], commonly used in classical automated planning. The complete availability of current system information will provide the ability to monitor the status of actions, presenting the opportunity to detect failure. We use action life-cycles based on a theory of durative actions for BDI systems [10] to detect persistent abnormal behaviour from action executions that could denote hardware degradation or other long-term causes of failure such as exposure to radiation or extreme temperature. Once a failure has been detected, we can use machine learning methods to update the action description in the self-model. Then, we can repair or replace the actions in any existing plans by using an automated planner to patch these plans. The resulting plans can then be verified to ensure the system’s safety properties are intact.

*This work has been supported by the University of Liverpool’s School of EEE & CS in support of the EPSRC “Robotics and AI for Nuclear” (EP/R026084/1) and “Future AI and Robotics for Space” (EP/R026092/1) Hubs.

This is a position paper that outlines a program of research. The overarching aim of this research is to create a framework for the verification of autonomous systems that are capable of learning new behaviour(s) descriptions and integrating them into existing BDI plans: using the framework as a route to certification. In this paper, we discuss the current ability of BDI systems in adaptable reasoning, largely focusing on actions.
We also consider research in Artificial Intelligence (AI) planning on modelling actions and the methods and implications of introducing machine learning for replacing action descriptions. Our main contribution is the initial design of a system architecture for BDI autonomous agents capable of adapting to changes in a dynamic environment, consolidating the agent-maintained self-model with the theory of durative actions and learning new action descriptions into a cohesive and adaptable BDI system. It should be noted that our work relies upon assumptions that are discussed further in the relevant sections of this paper.

2 The Belief-Desire-Intention Model of Agency

Intelligent Agent systems conforming to the BDI software model largely follow the principles proposed by Bratman’s *Intentions, Plans, and Practical Reason* [4] that was originally intended for modelling practical reasoning in human psychology. Use of BDI agents is particularly suitable for high-level management and control tasks in complex dynamic environments [26], which justifies their implementation in many practical applications. Since Georgeff and Lansky’s Procedural Reasoning System (PRS) emerged in 1987 [14], a wide range of BDI programming languages have been developed [21], each building upon the reach of PRS with a multitude of extensions for different applications.

2.1 Actions in BDI Systems

Typically, BDI languages use *plans* provided by a programmer at compile time and the language selects an appropriate plan to react to a given situation. Some BDI languages model interaction with the external environment either as an *action* (e.g., Jason [3]) or as a *capability* (e.g., GOAL [19]). We view capabilities as actions with explicit pre- and post-conditions. Actions and capabilities can appear in the bodies of *plans*. The body of a plan is generally a sequence of actions/capabilities, belief updates and subgoal manipulations (e.g., adopting or dropping goals).
Plans are selected by means-end reasoning in order to achieve the agent’s goals. Plans may have additional components as well as the plan body. For instance, they generally have a *guard* which must hold before the plan can be applied. Once a plan is selected for execution it is transformed into an *intention* which represents a sequence of steps to be performed as part of executing the plan. We intend to extend the GWENDOLEN agent programming language [7] for our research. We have chosen to use GWENDOLEN as it is a BDI agent programming language capable of producing verifiable agents. It is integrated into the MCAPL (Model-Checking Agent Programming Languages) framework [8]. Using MCAPL, agents can be programmed in GWENDOLEN and then verified using the AJPF (Agent Java Pathfinder) model-checker [11]. Actions in GWENDOLEN are generally implemented in a Java-based environment; at runtime they are requested and executed by agents. Whilst actions in GWENDOLEN can exhibit characteristics of duration, this is implemented using a ‘wait for’ construct which temporarily suspends an intention when encountered. When the predicate that is being waited for is believed, the intention becomes unsuspended. This is the extent to which actions in GWENDOLEN are treated as having significant durations and is largely typical of the treatment of actions in BDI languages. 2.2 Durative Actions BDI languages are increasingly being used for developing agents for physical systems where actions could take considerable time to complete [10]. Currently, most BDI languages suspend an agent entirely until an action completes or implement actions in such a way that an agent may start a process but then must be programmed to explicitly track the progress of the action in some way. 
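The ideas above (capabilities as actions with explicit pre- and post-conditions, and guard-checked plan selection by means-end reasoning) can be sketched in a few lines of Python. This is an illustrative toy model, not GWENDOLEN syntax; the belief atoms, action names and `select_plan` helper are all invented for the example.

```python
# Toy sketch of BDI-style plan selection (illustrative names only).
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    pre: set    # beliefs that must hold before the action executes
    post: set   # beliefs expected to hold after the action completes


@dataclass
class Plan:
    goal: str
    guard: set  # beliefs that must hold for the plan to be applicable
    body: list  # sequence of actions/capabilities


def select_plan(goal, beliefs, plans):
    """Means-end reasoning: return the first applicable plan for the goal."""
    for plan in plans:
        if plan.goal == goal and plan.guard <= beliefs:
            return plan
    return None


beliefs = {"at(base)", "battery(ok)"}
move = Action("move", pre={"battery(ok)"}, post={"at(site)"})
plans = [Plan("reach(site)", guard={"at(base)"}, body=[move])]

chosen = select_plan("reach(site)", beliefs, plans)
assert chosen is not None and chosen.body[0].name == "move"
```

Once selected, such a plan would become an *intention* whose body is executed step by step, which is where the action life-cycle discussed next comes into play.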
Introducing an explicit notion of duration to actions will allow us to create principled mechanisms to let an agent continue operating once an action is started, meaning the agent is available to monitor the status of actions in progress. [16] introduced an abstract theory of goal life-cycles, whereby every goal pursued by the agent moves through a series of states: Pending to Active; Active to either Suspended or Aborted or a Successful end state; and so on. Dennis and Fisher [10] extended the formal semantics provided by Harland et al. to show how the behaviour of durative actions could integrate into these life-cycles. They advocate associating actions not only with pre- and post-conditions containing durations but also with explicit success, failure and abort conditions (an abort is used if the action is ongoing but needs to be stopped), and suggest goals be suspended while an action is executing and then the action’s behaviour be monitored for the occurrence of its success, failure or abort conditions. When one of these occurs, the goal then moves to the Active or Pending (where re-planning may be required) part of its life-cycle as appropriate. Adding these additional states to actions should not increase the cost of model checking, since it introduces no new branches; it only adds more information, which makes no significant difference. Brahms [28], a multi-agent modelling environment, is an example of an agent approach that implements durative actions. The Brahms equivalent of actions, activities, have duration. Brahms has a formal semantics provided by Stocker et al. [29], although these semantics are primarily concerned with the effect of activity duration on simulation, with no mechanism for monitoring the behaviour of an activity during its execution.
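The resumption rule just described can be approximated as a small transition function: the goal is suspended while its durative action runs, and the action's terminating condition decides where the goal resumes. The state names and the mapping of "success" to an achieved end state below are simplifications of the cited life-cycle, chosen purely for illustration.

```python
# Simplified sketch of goal resumption after a durative action terminates.
# States and transitions are an illustrative reading of the life-cycle, not
# a faithful encoding of the formal semantics in the cited work.
GOAL_STATES = {"Pending", "Active", "Suspended", "Aborted", "Achieved"}


def step_goal(goal_state, action_outcome):
    """Transition a Suspended goal when its durative action terminates."""
    assert goal_state == "Suspended"
    if action_outcome == "success":
        return "Achieved"   # the action's post-conditions were met
    if action_outcome == "failure":
        return "Pending"    # re-planning may be required
    if action_outcome == "abort":
        return "Active"     # goal resumes; another plan may be tried
    raise ValueError(f"unknown outcome: {action_outcome}")


assert step_goal("Suspended", "success") == "Achieved"
assert step_goal("Suspended", "failure") == "Pending"
```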
Whilst the concept of durative actions seems to have been adequately explored in these examples, there has not been a formal implementation that focuses on monitoring individual actions for failure.

2.3 Action Failure

The idea of monitoring an action’s life-cycle exists in current literature [10], [16], [17]. A range of states can be attributed to an action that can subsequently be traced for irregularities or consistent errors, providing a basis for determining failure. If we assume that the performance of actions may degrade, then we also need to introduce the concept of an action life-cycle in which an action is introduced into the system as Functional, may move into a Suspect state if it is failing, and finally becomes Deprecated following repeated failures. Cardoso et al. [5] assume a framework along these lines and build upon it to outline a mechanism that allows reconfiguration of the agent’s plans in order to continue functioning as intended if some action has become Deprecated. However, this assumed ability to detect persistent failures does not yet exist. Our proposed framework should allow us to detect persistent abnormal behaviour from action executions for use with Cardoso et al.’s reconfiguration mechanism.

3 AI Planners and Learning New Actions

AI Planning seeks to automate reasoning about plans: using a formal description of the domain, all possible actions available in the domain, an initial state of the problem, and a goal condition to produce a plan consisting of the actions that will achieve the goal condition when executed [18]. The formal description of the domain and the problem can be considered a model of the environment, the accuracy of which is fundamental to producing viable plans of reasonable quality. Significant advances have been made in the modelling of actions [12][15][33][34] in automated planning, supporting actions that can have variable duration, conditions and effects.
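A minimal sketch of the Functional/Suspect/Deprecated action life-cycle is given below. The failure-count thresholds are invented for illustration; the cited work does not specify how many failures should mark an action Suspect or Deprecated.

```python
# Illustrative monitor for persistent action failure. The thresholds
# (suspect_after, deprecate_after) are assumptions, not from the paper.
class ActionMonitor:
    def __init__(self, suspect_after=2, deprecate_after=5):
        self.failures = 0
        self.state = "Functional"
        self.suspect_after = suspect_after
        self.deprecate_after = deprecate_after

    def record(self, outcome):
        """Update the action's life-cycle state from one execution outcome."""
        if outcome == "success":
            self.failures = 0
            self.state = "Functional"
        else:
            self.failures += 1
            if self.failures >= self.deprecate_after:
                self.state = "Deprecated"   # trigger plan reconfiguration
            elif self.failures >= self.suspect_after:
                self.state = "Suspect"


m = ActionMonitor()
for outcome in ["success", "failure", "failure"]:
    m.record(outcome)
assert m.state == "Suspect"
```

A Deprecated verdict is the point at which a reconfiguration mechanism such as Cardoso et al.'s would be invoked.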
Actions in BDI systems are typically designed without a specified duration and are defined before the execution of the program. As previously mentioned, BDI systems do not have a de facto theory of durative actions. Additionally, there is no theory for learning new action descriptions. With an extension of action theory for BDI systems covering these two areas, paired with the self-model concept, actions could adapt to change. However, learning a new action may not always be the best solution for a failing action. Cardoso et al. [5] have developed a method for reasoning about replacing malfunctioning actions with alternate existing actions to achieve the same desired goal, reusing the domain entities and predicates that are already available. In situations where a new action description is required at runtime, there are already suitable learning methods that could be adapted for incorporation into the framework [23][24], enabling the discovery of new entities and predicates in the domain. In [23], the Qualitative Learner of Action and Perception (QLAP) is introduced. When deployed in an unknown, continuous and dynamic environment, QLAP constructs a hierarchical structure of possible actions in an environment based upon the consequences of actions that have happened before. Work in [24] explores the use of machine learning and probabilistic planning in complex environments to cope with unexpected outcomes: a learning algorithm is used to determine the action model with the greatest likelihood of attaining the perceived effects of a different set of actions. We have to acknowledge the risk that system properties could be violated during machine learning. Although this could be remedied by using a Safe Learning approach [13], the introduction of machine learning presents great difficulty for verification, as the algorithms cannot (currently) be directly verified [31].
As a consequence, it should be noted that the proposed system could be unsuitable for scenarios where learning from failure is not safe (e.g., autonomous drones), where it would be safest to execute a controlled stop of the system rather than attempting a recovery.

4 System Architecture

The initial objective of this research is to formally define the concept of a self-model: an agent-maintained ontology for the autonomous system. We intend to use PDDL (Planning Domain Definition Language) as a starting point for creating a self-model. PDDL is a formalism for AI planning which is intended to "express the 'physics' of a domain" [22]. More specifically, we intend to use features introduced in PDDL2.1 [12], an extension to PDDL for expressing temporal planning domains, as our starting point. The self-model concept will build on this by enabling agents to access and maintain a domain description, adding the capability of learning new action descriptions, and allowing action life-cycles to be monitored. As shown in Figure 1, the self-model is centrally linked to the other system components, as they are required to contribute to keeping the self-model accurate. It is important to note that the self-model's domain description is not assumed to be modelled soundly and completely; it is, however, assumed that all reports and updates received by the system are correct.

Our implementation will be developed for GWENDOLEN [9]. The GWENDOLEN agent programming language follows the BDI software model. As part of the MCAPL Framework, GWENDOLEN interfaces with the Java Pathfinder (JPF) model checker [32]. Our intention is to implement self-models and the theory of action life-cycles [10] in GWENDOLEN and to integrate this with the existing work on plan reconfigurability [5]. We will then exploit GWENDOLEN's support for verification to verify the adapted system against requirements.
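For illustration, a PDDL2.1 durative action for a rover movement capability might look like the fragment below; the predicate and function names (`at`, `traversable`, `travel-time`) are our own examples, not taken from the paper.

```pddl
(:durative-action move
  :parameters (?r - rover ?from ?to - waypoint)
  :duration (= ?duration (travel-time ?from ?to))
  :condition (and (at start (at ?r ?from))
                  (over all (traversable ?from ?to)))
  :effect (and (at start (not (at ?r ?from)))
               (at end (at ?r ?to))))
```

A self-model could extend such a description with the explicit success, failure and abort conditions discussed in Section 2.2.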
We propose representing actions in our self-model with explicit pre- and post-conditions and either explicit success, fail and abort conditions or ones that can be inferred from the pre- and post-conditions. We will then adapt the GWENDOLEN goal life-cycle as suggested in [10] to handle durative actions in a principled fashion.

Figure 1: Diagram of Action Failure and Recovery Mechanisms. Arrows represent data flow and dotted lines are for readability when a line goes through a component.

When an action changes, requiring plans to be modified, it is assumed that the agent must be verified again in order to preserve the safety properties of the system as a whole. However, if a new action is learnt in place of a failing action (fully or partially achieving the failing action's post-conditions), the whole system may not require reverification. We aim to study this process further in order to identify the conditions under which such reverification would not be necessary.

In Algorithm 1, we propose a primitive method for action failure monitoring. It is assumed that the action status used in the algorithm is asserted as a belief by the system.
We start monitoring an action once it has been executed, retrieving some preliminary information about the action: an identifier and the current status (lines 2-3). If the action is currently Pending, Suspended or Aborting, this status is returned (lines 4-5). If not, the action's expected post-conditions are retrieved from the tuple (line 6). Whilst an action's state is Active, we continue checking it for failure by comparing the perceived post-conditions with those that are expected of that action (lines 7-11). If at any point during monitoring these conditions do not match, the action's state becomes Failed. If an action is not working as expected, the action can be re-attempted, or suspended and replaced in the self-model. The replacement action may be selected from an existing action in the self-model itself. Alternatively, using machine-learning methods, a new action can be learnt to replace the failing action using current knowledge of the available capabilities. Finally, a method for reconfiguring the BDI plan, such as in [5], is called (lines 12-15).

Algorithm 1: Action Failure Monitoring

 1. Function monitor(action_identifier, action_status, action_post_conditions)
 2.   ActionID ← action_identifier;
 3.   Status ← action_status;
 4.   if Status ∉ {Active, Failed} then
 5.     return Status;
 6.   ExpectedPostCond ← action_post_conditions;
 7.   while Status = Active do
 8.     ActualPostCond ← getPostConditions(ActionID);
 9.     if ActualPostCond = ExpectedPostCond then
10.       Status ← Active;
11.       monitor(ActionID, Status, ActualPostCond);
12.     else
13.       Status ← Failed;
14.       reconfigure(ActionID);
15.       return Status;
16.   return Active;

4.1 Scenario

To illustrate how the self-model would complement Cardoso et al.'s work on reconfigurability [5], we use the same scenario: a Mars rover's faulty movement capability. Figure 1 shows our proposed mechanisms for action failure and recovery embedded into a system architecture comprising a BDI system, an AI planner and Cardoso et al.'s [5] reconfigurability framework. The dotted-line arrows crossing the self-model represent incoming information from a component, such as an action's state. The system architecture in the diagram relies upon a simplification of the successful, fault-free execution of actions that would normally occur in a BDI system. In the case of the rover, these fault-free actions can be represented by the high-level task of movement between waypoints. Whilst mostly successful, these actions are susceptible to failure.

Consider the task of moving from a waypoint A to another waypoint B, in order to collect a rock sample to analyse at another waypoint C. Using the monitoring method for failure detection in Algorithm 1, a failed movement action between any of these waypoints could be found. Given the dynamic environment that the rover operates in, it is plausible that previously clear and usable routes could become blocked at any time. A failure can be flagged when an action exceeds a predetermined time or energy threshold described in the action's post-conditions. Once failure has been detected and confirmed, we can update the self-model to show that the action description has deprecated and no longer affords its post-conditions. The rover then attempts to reconfigure the current plan to resolve the failure, using an AI planner to search for a replacement (e.g., by finding a different route) before attempting to learn a completely new action description. In both cases, the time and energy consumption required to accomplish the original post-conditions is updated in the reconfigured or new action description. If it is found that the reconfigured plan is now too time- or energy-intensive, the latter method of learning a new action description is invoked.
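A minimal Python sketch of Algorithm 1's failure check follows. To keep the sketch terminating, we bound the loop by a supplied stream of perceived post-condition snapshots; the belief-store accessor (here, the `observations` iterable) and the `reconfigure` hook are placeholders for the GWENDOLEN integration, not an existing API.

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    ABORTING = "aborting"
    FAILED = "failed"

def monitor(action_id, status, expected_post, observations, reconfigure):
    """Sketch of Algorithm 1: a non-Active action's status is returned
    unchanged; an Active action is checked against each perceived
    post-condition snapshot, and on the first mismatch the plan
    reconfiguration hook is called and Failed is returned."""
    if status is not Status.ACTIVE:
        return status  # Pending/Suspended/Aborting: report as-is (lines 4-5)
    for actual in observations:
        if actual != expected_post:
            reconfigure(action_id)  # trigger plan repair, as in [5]
            return Status.FAILED
    return Status.ACTIVE  # post-conditions matched throughout monitoring
```

For the rover, a `move(A, B)` action whose perceived post-conditions never report arrival at B would be flagged Failed and handed to the reconfiguration mechanism.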
If at any point the failing action is found to achieve all post-conditions but does not perform within the time or energy threshold (e.g., the rover now navigates around a blockage and arrives at the correct waypoint but takes longer to do so), this can be managed by learning new action descriptions with an updated time and/or energy threshold. The action may not be deprecated if the failure is considered anomalous; for instance, if the action normally succeeds and fails only on one isolated occasion. If the action description is not deprecated, the action will be re-attempted without resorting to reconfiguration or learning methods, and the rover can continue progressing towards the goal. If a new action description is learned, the original plan will be patched with the new action description by the AI planner. This plan could require reverification to preserve previously verified properties, which can be handled using AJPF. Ensuring these properties are maintained is crucial for avoiding failure. The verified patched plan can then be used in the BDI system, where regular action execution continues and the rover can complete its mission.

5 Related Work

The work in [5] describes a reconfigurability framework that is capable of replacing faulty action descriptions based on formal definitions of action descriptions, plans, and plan replacement. The implementation uses an AI planner to search for viable action replacements. We plan to extend their approach by adding the concept of a self-model, durative actions, and failure detection. Furthermore, we also envision adding a learning component to the framework in order to cope with dynamic environment events that require new action descriptions to be formulated at runtime.
Troquard et al.'s work on logic for agency [30] also considers the modelling of actions with durations, although a different approach is taken: actions are given duration using continuations from STIT (Seeing To It That) logic. In BDI systems, the focus of handling plan failure is the effect that failure has on goals [2, 27]. This is a reasonable focus considering the central role that goals have in agent-oriented programming. Consequently, action failure recovery has not been explored as an option for managing plan failure.

6 Conclusions and Future Work

In this position paper we have described a system architecture for BDI autonomous agents capable of adapting to changes in a dynamic environment. We also introduced the idea of an agent-maintained self-model with durative actions and the learning of new action descriptions. Our proposed system aims to: develop the concept of a self-model; produce a method to detect the failure of an action performed by a BDI agent; develop a theory of durative actions for BDI languages; adapt the existing system to allow new actions to be learnt and used in place of failing ones whilst preserving safety properties; and finally, integrate all of this into the existing GWENDOLEN infrastructure. To illustrate the applicability of the discussed mechanisms, a practical example of how a Mars rover could make use of the framework was provided.

Future work includes defining the learning component to handle dynamic environment events that require the creation of new action descriptions at runtime, a formal definition of the self-model with an outline of the concepts it includes, the implementation of the system architecture, and the evaluation of the approach. A number of questions and challenges have been identified whilst outlining this program of research.
Firstly, it has been noted that the term 'persistent failure' is subjective and should be accompanied by a formal and precise specification to avoid ambiguity. Secondly, the steps taken after reconfiguration and the learning process require further work (e.g., what happens to failing actions in the model after reconfiguring?). Finally, the proposed learning strategy has raised many challenges which will be considered once implementation reaches that stage; notably, how the learning method can ensure valid solutions, how planning time could be minimised, and how an action's state could influence the learning strategy. These challenges will serve as guidance for future work.
Configuring Weblogic Server 12c

Oracle FLEXCUBE Universal Banking
Release 12.4.0.0.0
[May] [2017]

Table of Contents

1. INTRODUCTION
   1.1 Purpose of this Document
   1.2 WebLogic Server Overview
   1.3 Pre-Requisites
2. DOMAIN CONFIGURATION
   2.1 Domain Creation
   2.2 Pack and Unpack Domain
   2.3 Start Admin Server
   2.4 Start Node Manager
3. CLUSTER CONFIGURATION
   3.1 Machines Configuration
   3.2 Dynamic Cluster Creation
   3.3 Managed Server Template Configuration
       3.3.1 Logging
       3.3.2 HTTP Logging
       3.3.3 Stuck Thread Max Time
4. TUNING
   4.1 General Parameters
   4.2 JVM Tuning
5. START MANAGED SERVERS
6. DATA SOURCE CREATION AND JDBC CONFIGURATION
   6.1 Setup Required for OCI Driver
   6.2 Data Source Creation: non XA
   6.3 XA DataSource
   6.4 JDBC Parameters Tuning
7. JMS RESOURCE CREATION
8. ORACLE WEBLOGIC LOAD BALANCING
9. FREQUENTLY ASKED QUESTIONS
   9.1 Machine status is Unreachable
   9.2 How to restart Node Manager?
   9.3 Scaling Up Dynamic Cluster
   9.4 Session Timeout

1. Introduction

1.1 Purpose of this Document

The purpose of this document is to explain the steps required for configuration and for applying best practices in cluster mode for:

- FCUBS 12.2
- WebLogic version 12.1.3.0.0
- JDK 1.7.0_71

1.2 WebLogic Server Overview

This section of the document provides a brief explanation of the main components involved in a WebLogic Server installation.

**Domain**

A domain is the basic administration unit for WebLogic Server instances. A domain consists of one or more WebLogic Server instances (and their associated resources) that are managed with a single Administration Server. Multiple domains can be defined based on different system administrators' responsibilities, application boundaries, or geographical locations of servers. Conversely, a single domain can be used to centralize all WebLogic Server administration activities. Each WebLogic Server domain must have one server instance that acts as the Administration Server. The Administration Server can be used, via the Administration Console or the command line, for configuring all other server instances and resources in the domain.

![WebLogic Domain Structure Diagram] (WebLogic 12c domain overview)

**Administration Server**

A domain includes one WebLogic Server instance that is configured as an Administration Server. All changes to configuration and deployment of applications are done through the Administration Server. The Administration Server provides a central point for managing the domain and providing access to the WebLogic Server administration tools. These tools include the following:

- **WebLogic Server Administration Console**: Graphical user interface to the Administration Server.
- **WebLogic Server Node Manager**: A Java program that lets you start and stop server instances (both Administration Servers and Managed Servers) remotely, and monitor and automatically restart them after an unexpected failure.

The Admin Server start mode needs to be configured as Production Mode.
**Managed Server**

In a domain, server instances other than the Administration Server are referred to as Managed Servers. Managed Servers host the components and associated resources that constitute your applications (for example, JSPs and EJBs). When a Managed Server starts up, it connects to the domain's Administration Server to obtain configuration and deployment settings. In a domain with only a single WebLogic Server instance, that single server works as both the Administration Server and a Managed Server.

**Node Manager**

The Managed Servers in a production WebLogic Server environment are often distributed across multiple machines and geographic locations. Node Manager is a Java utility that runs as a separate process from WebLogic Server and allows you to perform common operations tasks for a Managed Server, regardless of its location with respect to its Administration Server. While use of Node Manager is optional, it provides valuable benefits if your WebLogic Server environment hosts applications with high-availability requirements. If you run Node Manager on a machine that hosts Managed Servers, you can start and stop the Managed Servers remotely using the Administration Console or from the command line. Node Manager can also automatically restart a Managed Server after an unexpected failure.

**Machine**

A machine, in the WebLogic Server context, is the logical representation of the computer that hosts one or more WebLogic Server instances (servers). The Admin Server uses the machine definitions to start remote servers through the Node Managers that run on those servers. A machine could be a physical or virtual server that hosts an Admin or Managed Server belonging to a domain.

**Managed Server Cluster**

Two or more Managed Servers can be configured as a WebLogic Server cluster to increase application scalability and availability.
In a WebLogic Server cluster, most resources and services are deployed identically to each Managed Server (as opposed to a single Managed Server), enabling failover and load balancing. The servers within a cluster can either run on the same machine or reside on different machines. To the client, a cluster appears as a single WebLogic Server instance.

**Dynamic Cluster**

A dynamic cluster is any cluster that contains one or more dynamic servers. Each server in the cluster is based upon a single shared server template. The server template allows you to configure each server identically and ensures that servers do not need to be manually configured before being added to the cluster. This allows you to easily scale the number of servers in your cluster up or down without setting up each server manually. Changes made to the server template are rolled out to all servers that use that template.

You cannot configure dynamic servers individually; there are no server instance definitions in the config.xml file when using a dynamic cluster. Therefore, you cannot override the server template with server-specific attributes or target applications to an individual dynamic server instance.

When configuring your cluster, you specify the maximum number of servers you expect to need at peak times. The specified number of server instances is then created, each based upon your server template. You can then start up however many you need and scale up or down over time according to your needs. If you need additional server instances on top of the number you originally specified, you can increase the maximum number of dynamic server instances in the dynamic cluster configuration.

**Server Templates**

A single server template provides the basis for the creation of the dynamic servers. Using this single template ensures that every member is created with exactly the same attributes.
Some of the server-specific attributes, such as the server name, listen ports, and machines, can be calculated based upon tokens. You can pre-create server templates and let WebLogic clone one when a dynamic cluster is created. When none is available, a server template is created with the dynamic cluster; the name and the listen ports are the only server template attributes that you provide during dynamic cluster creation.

1.3 **Pre-Requisites**

In this document, we are going to create a domain with two managed servers. The managed servers are going to be created on two different physical servers (nodes). Note that this document has been prepared based on a test conducted on Linux servers. This requires the same version of WebLogic Server to be installed on both machines.

**Environment**

Two Linux servers: one primary, where the admin console will be running along with managed servers, and a second hosting only managed servers.

**Software**

1) Oracle WebLogic Server 12.1.3 installed on both machines under the same folder structure.
2) The latest available JDK 1.7 version installed on both machines. In this document, version 1.7.0_71 is used.

**Clock Synchronization**

The clocks of both servers participating in the cluster must be synchronized to within one second of each other to enable proper functioning of jobs; otherwise sessions will time out.

**Enable Graphical User Interface (GUI)**

Establish a telnet or SSH connection to the primary server. Start X-manager (or any similar tool) on the Windows desktop. Export the DISPLAY environment variable to the machine IP where X-manager is running:

```
export DISPLAY=<ip-address>:<port>
```

Test using `xclock`.

2. Domain Configuration

2.1 Domain Creation

WebLogic domain creation and configuration will be done from the primary server. From the primary server, launch the Fusion Middleware configuration wizard using the command `config.sh`, available under the `$WLS_HOME/common/bin` directory.
1) In the Welcome screen, select the "Create a new domain" option. Enter the domain name and click **Next**. 2) Select the required templates from **Available Templates** and click **Next**. 3) Specify the Administrator **User Name** and **Password**. - The specified credentials are used to access the Administration console. - You can use this screen to define the default WebLogic Administrator account for the domain. This account is used to boot and connect to the domain's Administration Server. Click **Next**. 4) Select **Production Mode** as the server startup mode and select the available **JDK**. Click **Next**. 5) Select the check boxes adjacent to Administration Server and Node Manager. Click **Next**. 6) Specify the Administration Server **Listen address** and **Listen port**. **Note**: The default Listen port is 7001 and the SSL port is 7101. These can be changed to any other available ports. Make a note of this port, since it is required for launching the Admin console after domain creation. **Note**: Check for port availability using the command `netstat -anp | grep <Port no>`. The next screen navigates to **Node Manager configuration**. 7) Configure the Node Manager. Select the **Per Domain Default Location** option as the Node Manager Type, and under Node Manager Credentials provide the username and password for the node manager. Click **Next**. 8) Verify the details and click **Create**. The domain creation process is initiated and its progress is indicated. 9) Click **Next**. 10) The **Configuration Success** message is displayed. The Admin Server URL is as indicated below: ``` http://<IP address>:<admin console port>/console ``` - `<IP address>`: Host on which the domain was created. - `<admin console port>`: Port specified in the Administration Server configuration page. In this case the Admin Console URL is: https://<server1hostname>:7101/console 2.2 **Pack and Unpack Domain** After domain creation, the domain structure must be copied to the second server.
To copy it, use the pack and unpack utilities provided under $WLS_HOME/common/bin. **Pack** Pack the domain on the primary server:
```
./pack.sh -managed=true -domain=/scratch/app/wl12c/user_projects/domains/FCUBSDomain -template=/tmp/FCUBSDomain.jar -template_name="FCUBSDomain"
```
**Unpack** FTP FCUBSDomain.jar in binary mode to the secondary server under the /tmp area, then unpack the domain using the unpack utility provided under $WLS_HOME/common/bin:
```
./unpack.sh -domain=/scratch/app/wl12c/user_projects/domains/FCUBSDomain -template=/tmp/FCUBSDomain.jar
```
2.3 **Start Admin Server** The Admin Server is started on the primary server. Log in to the primary server, navigate to the folder $DOMAIN_HOME/bin and execute startWebLogic.sh. 2.4 **Start Node Manager** The Node Manager needs to be started on both servers. Before starting the node manager, update ListenAddress to the hostname/IP address of the machine in nodemanager.properties, located in the folder $DOMAIN_HOME/nodemanager. To start the node manager, log in to each server, navigate to the folder $DOMAIN_HOME/bin and execute startNodeManager.sh. 3. Cluster Configuration Dynamic Cluster configuration involves the steps below: 1) Machine configuration. 2) Dynamic Cluster creation: in a normal WebLogic cluster you define Managed Servers and add them to the cluster. In a Dynamic Cluster, you select the number of servers you want in the cluster and the server template you wish to assign to the servers in the WebLogic Dynamic Cluster. 3) Server template modification: managed servers that are part of a WebLogic Dynamic Cluster take their properties from the server template. To apply best-practice parameters to the dynamic servers, you modify the server template that applies to the Dynamic Cluster. These settings are applicable to all the managed servers. 4) Activate Changes, which automatically creates the managed servers (as many as specified in the number-of-servers parameter).
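The node manager preparation described in section 2.4 above can be scripted for each server. This is a sketch under stated assumptions: the domain path is the one used in this guide, `set_listen_address` is an illustrative helper (not from the product), and the in-place edit assumes GNU sed.

```shell
#!/bin/sh
# Sketch for section 2.4: point the node manager at this machine's
# address, then start it. Run once on each server.
DOMAIN_HOME=${DOMAIN_HOME:-/scratch/app/wl12c/user_projects/domains/FCUBSDomain}
LISTEN_ADDR=${LISTEN_ADDR:-$(hostname)}

# set_listen_address rewrites ListenAddress in a nodemanager.properties file.
set_listen_address() {
    # $1 = properties file, $2 = address; assumes GNU sed (-i)
    sed -i "s/^ListenAddress=.*/ListenAddress=$2/" "$1"
}

PROPS="$DOMAIN_HOME/nodemanager/nodemanager.properties"
[ -f "$PROPS" ] && set_listen_address "$PROPS" "$LISTEN_ADDR"

# Start the node manager in the background; output goes to nohup.out.
if [ -x "$DOMAIN_HOME/bin/startNodeManager.sh" ]; then
    cd "$DOMAIN_HOME/bin" && nohup ./startNodeManager.sh &
fi
```

After running it, check nohup.out to confirm the node manager came up listening on the expected address.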
**Calculate Number of Servers Required:** Every 50 logged-in FLEXCUBE users require one managed server with a 4 GB heap; for example, 300 logged-in FLEXCUBE users call for 6 managed servers. Decide on the number of managed servers based on the number of logged-in users that needs to be supported. This value is required later during dynamic cluster creation. 3.1 **Machines Configuration** 1) Log in to the Admin Console, navigate to FCUBSDomain → Environment → Machines and click **New** 2) Enter the **machine name** and click **Next** 3) Enter the **Listen Address** and **Listen Port** (this is the port mentioned in the nodemanager.properties file) and click **Finish** 4) The machine is created 5) Similarly, create a new machine entry for the other server **Verifying machine status** Before starting the managed servers, ensure that the Node Manager status of all the machines is "Reachable". In the console, navigate through Domain structure → Machines → machine1 → Monitoring → Node Manager Status. The status should be **Reachable**. 3.2 **Dynamic Cluster Creation** 1) Log in to the Admin Console, navigate to FCUBSDomain → Environment → Clusters → New → select **Dynamic Cluster** 2) Enter the **Cluster Name** and click **Next** 3) Enter the **number of dynamic servers** you want to configure, enter the **server name prefix** and click **Next** 4) Select the machines that participate in the domain; in this case all machines are part of the domain, so select **Use any machine configured in this domain** and click **Next** 5) Select the **listen port for the first server** in the dynamic cluster and then the **SSL listen port** for the first server in the dynamic cluster. Subsequent servers are assigned incremental port numbers. Click **Next** 6) A summary of the new Dynamic Cluster configuration is presented.
Click **Finish** to create the cluster. 7) The **Summary of Clusters** screen shows the recently created Dynamic Cluster. 8) **Activate Changes**; this automatically creates the 4 managed servers. 9) Navigate to the FCUBSDomain → Environment → Servers tab; the 4 new servers are listed. 3.3 **Managed Server Template Configuration** The server template created above is modified to apply the parameters below. 3.3.1 **Logging** Writing log files can impact WebLogic server performance, so keep logging to a minimum in a production environment. Update the parameters below on the Logging screen: <table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Minimum Severity to log</td> <td>Warning</td> </tr> <tr> <td>Log file Severity level</td> <td>Warning</td> </tr> <tr> <td>Standard Out Severity level</td> <td>Critical</td> </tr> <tr> <td>Domain broadcaster Severity level</td> <td>Critical</td> </tr> </tbody> </table> 1) Navigate to FCUBSDomain → Environment → Clusters 2) Select FCUBSTemplate and navigate to Logging → General 3) Under the Advanced tab, update the parameters above and click **Save** 3.3.2 **HTTP Logging** 1) FCUBSDomain → Environment → Clusters → FCUBSTemplate → Logging → HTTP → uncheck the **Access Logs Flag** 3.3.3 **Stuck Thread Max Time** 1) FCUBSDomain → Environment → Clusters → FCUBSTemplate → Tuning; update the stuck thread max time to **18000** and click **Save** 4.
Tuning 4.1 General Parameters <table> <thead> <tr> <th>PARAMETER</th> <th>VALUE</th> <th>Navigate To</th> </tr> </thead> <tbody> <tr> <td>JTA Time out seconds</td> <td>18000</td> <td>Login to the WebLogic Server console.</td> </tr> <tr> <td></td> <td></td> <td>Click on the domain name (ex: FCUBSDomain) under 'Domain Structure'.</td> </tr> <tr> <td></td> <td></td> <td>Go to Configuration &gt; JTA; the parameter and its value are found on the right side panel of the console.</td> </tr> <tr> <td>Session Timeout</td> <td>900</td> <td>Login to the WebLogic Server console.</td> </tr> <tr> <td></td> <td></td> <td>Click on Deployments under 'Domain Structure'.</td> </tr> <tr> <td></td> <td></td> <td>Click on the deployed FCJ application in the right side panel.</td> </tr> <tr> <td></td> <td></td> <td>Click on FCJNeoWeb under 'Modules and components'.</td> </tr> <tr> <td></td> <td></td> <td>Go to Configuration &gt; General; the parameter values can be found here.</td> </tr> </tbody> </table> 4.2 JVM Tuning This section provides JVM optimization for the Oracle FLEXCUBE Universal Banking Solution. The Java minimum and maximum heap sizes need to be reset: set both to 1.5 GB in 32-bit environments and to 4 GB in 64-bit environments. How to find whether the JVM is 32-bit or 64-bit? Go to the $JAVA_HOME/bin directory and check the Java version using the command `./java -d64 -version`. A 64-bit JVM shows the version details, whereas a 32-bit JVM throws an error. How to modify the JVM heap parameters? To change the JVM heap parameters, modify setDomainEnv.sh under the domain on both servers. This file is located at "$WL_HOME/user_projects/domains/$WLS_DOMAIN/bin" on both servers. Use the USER_MEM_ARGS variable below to override the standard memory arguments passed to Java for the Sun JDK.
**32-bit JDK**
```
USER_MEM_ARGS="-Dorg.apache.xml.dtm.DTMManager=org.apache.xml.dtm.ref.DTMManagerDefault -Dorg.apache.xerces.xni.parser.XMLParserConfiguration=org.apache.xerces.parsers.XML11Configuration -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100 -Xms1536M -Xmx1536M -XX:MaxPermSize=256m -server -XX:+UseParallelOldGC -XX:ParallelGCThreads=4"
export USER_MEM_ARGS
```
**64-bit JDK**
```
USER_MEM_ARGS="-Dorg.apache.xml.dtm.DTMManager=org.apache.xml.dtm.ref.DTMManagerDefault -Dorg.apache.xerces.xni.parser.XMLParserConfiguration=org.apache.xerces.parsers.XML11Configuration -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100 -Xms4g -Xmx4g -XX:MaxPermSize=512m -server -d64 -XX:+UseParallelOldGC -XX:ParallelGCThreads=4"
export USER_MEM_ARGS
```
Note: Take a backup of the files before modifying them. 5. Start Managed Servers **Starting using scripts** Managed servers can be started by executing the startManagedWebLogic.sh script present in the folder $DOMAIN_HOME/bin. Usage: `./startManagedWebLogic.sh SERVER_NAME {ADMIN_URL}` Eg: `./startManagedWebLogic.sh FCUBSMS1 https://<hostname1>:7101` **Starting using console** Alternatively, log in to the Admin console, navigate to FCUBSDomain → Environment → Servers → Control, select the managed servers to be started and click **Start**. Upon successful startup, the status of the managed servers changes to "RUNNING". 6. Data Source Creation and JDBC Configuration Following are the JNDI names of the data sources used by the FLEXCUBE application: - jdbc/fcjdevDS - This data source is used by FLEXCUBE online screens, excluding branch screens. - jdbc/fcjdevDSBranch - This data source is used by branch screens. - jdbc/fcjSchedulerDS - This data source is used by the Quartz scheduler. **Note:** - jdbc/fcjdevDS should be **NonXA** and make use of the **OCI** driver.
- jdbc/fcjdevDSBranch and jdbc/fcjSchedulerDS should be **XA** 6.1 **Setup Required for OCI Driver** Data sources are created with OCI enabled. This requires the Oracle Instant Client; follow the steps below: - Download the Oracle Instant Client corresponding to the Oracle DB and Java architecture (x64 or x32) in use: [http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html](http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html) - Set the {ORACLE_HOME} environment variable. - Update the environment variable LD_LIBRARY_PATH to {ORACLE_HOME}/lib. This is to load all the .so files. - Ensure that the ojdbc*.jar file in {WL_HOME}/server/lib/ojdbc*.jar is the same as the file {ORACLE_HOME}/jdbc/lib/ojdbc*.jar. This is to ensure compatibility. - Update LD_LIBRARY_PATH in startWebLogic.sh or in setDomainEnv.sh with the path of the directory where the Oracle Instant Client is installed. - If you are still not able to load the .so files, update EXTRA_JAVA_PROPERTIES by setting -Djava.library.path to {ORACLE_HOME}/lib in startWebLogic.sh or in setDomainEnv.sh. 6.2 **Data Source Creation: Non-XA** 1) Navigate to FCUBSDomain → Services → Data Sources → New → **Generic data source** 2) Enter the **Name** and **JNDI Name** and click **Next** 3) Select the driver "Oracle's Driver (Thin) for Instance connections; Versions: Any" and click **Next** 4) Uncheck "Supports Global Transactions" and click **Next** 5) Enter the Database Name, Host Name, Port, User Name, Password, Confirm Password and click **Next** 6) Replace the **JDBC URL** in the format below and click **Next**. Default URL: `jdbc:oracle:thin:@<IP_Address>:<Port>:<INSTANCE_NAME>`.
Change the default URL to: `jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxxxx.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=fcubs)))` where Scan IP = xxxxxx.com, Service Name = fcubs, and Port = 1521. Make sure the URL is changed to reflect oci. Then click **Test Configuration**; the connection test should be successful. 7) Select **FCUBSCluster** as the target and click **Finish** 6.3 **XA Data Source** 1) Navigate to FCUBSDomain → Services → Data Sources → New → **Generic data source** 2) Enter the **Name** and **JNDI Name** and click **Next** 3) Select the driver "Oracle's Driver (Thin XA) for Instance connections; Versions: Any" and click **Next** 4) Click **Next** 5) From this step up to the target-selection step, follow the same procedure as for the non-XA data source 6) Activate Changes to create the XA data source 7) Similarly, create all the other data sources required for the FCUBS application and gateway deployments 6.4 **JDBC Parameters Tuning** The JDBC parameters below need to be updated for all the data sources: <table> <thead> <tr> <th>PARAMETER</th> <th>VALUE</th> <th>Navigate To</th> </tr> </thead> <tbody> <tr> <td>Connection Reserve time out</td> <td>30</td> <td>Connection Pool → Advanced</td> </tr> <tr> <td>Test Frequency</td> <td>60</td> <td>Connection Pool → Advanced</td> </tr> <tr> <td>Inactive connection time out</td> <td>30</td> <td>Connection Pool → Advanced</td> </tr> <tr> <td>Initial Capacity</td> <td>1</td> <td>Connection Pool</td> </tr> <tr> <td>Max capacity</td> <td>Based on Site Requirement</td> <td>Connection Pool</td> </tr> <tr> <td>Capacity Increment</td> <td>5</td> <td>Connection Pool</td> </tr> <tr> <td>Shrink Frequency</td> <td>900</td> <td>Connection Pool → Advanced</td> </tr> <tr> <td>Test Connection on Reserve</td> <td>Checked</td> <td>Connection Pool → Advanced</td> </tr> </tbody> </table> 7.
JMS Resource Creation JMS resource creation involves several steps: - Persistence store creation - JMS server creation - JMS module creation - Resource creation: connection factory and queues. Refer to the JMS Cluster Configuration document for further details on the JMS setup. 8. Oracle WebLogic Load Balancing For WebLogic load balancing, use either: 1) Oracle HTTP Server: refer to Configuration for Oracle HTTP Server for the setup. 2) Apache: refer to Configuration for Apache for the setup. 9. Frequently Asked Questions 9.1 Machine status is Unreachable If the machine status is Unreachable, the machine cannot be contacted and you cannot start or stop its managed servers from the console. In the console, navigate through Domain structure → Machines → machine1 → Monitoring; the Node Manager Status will be Unreachable. To change the status, start the node manager on that server. Refer to the Start Node Manager section for the steps. 9.2 How to restart the node manager? 1) Locate the node manager pid using `ps -ef | grep weblogic.nodemanager.javaHome` 2) Change directory to `$DOMAIN_HOME/bin` 3) Kill the unix process using `kill -9 <pid>` 4) Verify that the node manager is killed using `tail -f nohup.out` 5) Start the node manager using `nohup ./startNodeManager.sh &` 6) Verify that the node manager is started using `tail -f nohup.out` 9.3 Scaling Up a Dynamic Cluster When capacity is insufficient and you need to scale up, you can add dynamic servers on demand; it requires only a few clicks. 1) Navigate to FCUBSDomain → Environment → Clusters 2) Click FCUBSCluster → Configuration → Servers tab 3) Change the Maximum Number of Dynamic Servers to 8 and click **Save** 4) Activate the changes in the Change Center of the WebLogic console. After activation, 4 new dynamic servers are added to the Dynamic Cluster 5) Start the 4 new dynamic servers and you have doubled your capacity. 9.4 **Session Timeout** Session timeouts occur intermittently under load conditions. Verify the following: 1.
Clock synchronization: the time across the nodes/machines must be the same. 2. Session stickiness in the load balancer: the Persistence Type in the load balancer should be set to SOURCE IP, not COOKIE.
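The one-second clock rule from item 1 can be verified by comparing epoch seconds from the two machines. This is a sketch: `skew_ok` is an illustrative helper, and fetching the remote timestamp (for example over ssh) is left as a commented placeholder.

```shell
# skew_ok succeeds (exit 0) if two epoch timestamps differ by at most 1s.
skew_ok() {
    # $1 = local epoch seconds, $2 = remote epoch seconds
    d=$(( $1 - $2 ))
    [ "$d" -ge -1 ] && [ "$d" -le 1 ]
}

local_t=$(date +%s)
# remote_t=$(ssh <user>@<node2> date +%s)   # placeholder for the second server
remote_t=$local_t                           # stand-in so the sketch runs standalone

if skew_ok "$local_t" "$remote_t"; then
    echo "clocks within 1s"
else
    echo "clock skew detected: resynchronize NTP on both nodes"
fi
```

In practice, configuring NTP on both nodes is the reliable fix; this check only detects drift.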
E-Commerce Patterns v1.0 Business Process Team 11 May 2001 (This document is the non-normative version formatted for printing, July 2001)

# Table of Contents

1 Status of this Document ..... 5
2 ebXML Participants ..... 6
3 Introduction ..... 7
3.1 Summary ..... 7
3.2 Audience ..... 7
3.3 Related documents ..... 7
3.4 Document conventions ..... 8
4 Design Objectives ..... 9
4.1 Problem description ..... 9
4.2 Terminology ..... 9
4.3 Significant terms defined in ebXML ..... 9
4.4 Terms defined for the purpose of this document ..... 10
4.5 Assumptions and constraints ..... 11
4.6 Constraints from legal and auditing requirements ..... 11
4.7 Constraints from ebXML structure and standards ..... 12
5 Contract Formation in ebXML ..... 14
5.1 ebBPSS contract formation functionality ..... 14
5.2 Simple contract formation pattern ..... 15
5.2.1 Requirements for all business documents and document envelopes ..... 15
5.2.2 Requirements for all offers .....
16
5.2.3 Requirements for all acceptances ..... 16
5.2.4 Requirements for all rejections and counteroffers ..... 16
5.3 Drop ship business process example ..... 18
6 Simple Automated Contract Negotiation in ebXML ..... 22
6.1 ebBPSS contract negotiation functionality ..... 22
6.2 CPA negotiation as an instance ..... 24

1 Status of this Document This document specifies an ebXML Technical Report for the eBusiness community. Distribution of this document is unlimited. The document formatting is based on the Internet Society's Standard RFC format. This version: www.ebXML.org/specs/bpPATT.pdf Latest version: www.ebXML.org/specs/bpPATT.pdf 2 ebXML Participants Business Process Project Team Co-Leads: Paul Levine, Telcordia; Marcia McLure, McLure-Moynihan, Inc. Business Process/Core Components Joint Delivery Analysis Team Lead: Brian Hayes, Commerce One. We would like to recognize the following for their significant contributions to the development of this document. Editor: Jamie Clark, McLure-Moynihan, Inc. Contributors: Bob Haugen, Logistical Software LLC; Nita Sharma, Iona; David Welsh, Nordstrom.com 3 Introduction 3.1 Summary This document is a supporting document to the ebXML Business Process Specification Schema [ebBPSS]; it addresses common pattern implementation issues and provides examples. The 'Simple Contract Formation Pattern' defined here demonstrates a non-normative, rule-defined subset of BPSS use for practical contracting purposes. It is also aligned with the "drop ship vendor" model collaboration used by the Worksheets published by the ebXML BP/CC Analysis Team.
The 'Simple Negotiation Pattern' defined here demonstrates a non-normative, rule-defined subset of BPSS use that allows simple exchanges of 'dry run' transactions and collaborations, which may result in a collective decision by trading partners to use them on an enforceable basis. It may also be suitable for automating the negotiation of ebXML CPA terms from CPPs. 3.2 Audience This document is intended to be read by designers and implementers of ebXML business processes. 3.3 Related documents UN/CEFACT Modelling Methodology, Version 9.1. 2001. UN Economic Commission for Europe. (CEFACT/TMWG/N090R9.1) [UMM] 3.4 Document conventions The keywords MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL, when they appear in this document, are to be interpreted as described in RFC 2119 [Bra97]. 4 Design Objectives 4.1 Problem description The BP Specification Schema [ebBPSS] contemplates exchanges of Business Documents composed into atomic Business Transactions, each between two parties. In order to achieve the desired legal and economic effects of these exchanges, the structure of the Business Transactions must - generate a computable success or failure state for each transaction that can be derived solely from the application of the ebBPSS standard and the data exchanged in the Business Documents and Business Envelopes, - permit the parties to exchange legally binding statements and terms, - permit the parties to exchange nonbinding statements and terms, in order to negotiate, and - permit a logical composition of those exchanges into Collaboration patterns that allow agreements about sequences of transactions to be formed. 4.2 Terminology 4.3 Significant terms defined in ebXML Business Collaboration -- The "Business Collaboration" object as defined in [ebBPSS]. Business Document -- The "Business Document" object as defined in [ebBPSS]. Business Transaction -- The "Business Transaction" object as defined in [ebBPSS].
Contract – Generally, a bounded set of statements and/or commitments between trading partners that are intended to be legally enforceable as between those parties. [ebGLOSS] Legally Binding – An optional character of a statement or commitment exchanged between trading partners (such as an offer or acceptance), set by its sender, which indicates that the sender has expressed its intent to make the statement or commitment legally enforceable. [ebGLOSS] 4.4 Terms defined for the purpose of this document Acceptance -- A responding party's document indicating agreement with a received offer. Binding -- See "Legally Binding" above. Business Signal Parameters -- The following parameters as defined in [ebBPSS]: - isAuthorizationRequired - timeToPerform - isIntelligibleCheckRequired - isAuthenticated - isNonRepudiationRequired - isConfidential - isNonRepudiationOfReceiptRequired - isLegallyBinding - timeToAcknowledgeReceipt - isTamperProof - timeToAcknowledgeAcceptance - isGuaranteedDeliveryRequired Collaboration -- See "Business Collaboration" above. Counteroffer advice -- A message bound to a rejection, indicating that the sender intends to send a new offer regarding the same subject matter. Offer -- A document proposing business terms by a requesting party addressed to a responding recipient. A binding offer entitles the recipient to form a contract with the requesting party by responding with a binding acceptance. Nonbinding -- An optional character of a statement or commitment exchanged between trading partners (such as an offer or acceptance), set by its sender, that indicates that the sender does not intend to be legally bound. See "Legally Binding" above. Rejection -- A responding party's document indicating that it rejects a received offer. Transaction -- See "Business Transaction" above. 4.5 Assumptions and constraints 4.6 Constraints from legal and auditing requirements a.)
**Enforceability requires an expression of intent.** In order for a message to be given legally enforceable effect, whatever its form, the author must indicate his intent to be bound. The message's sender may accomplish this by intentional use of a standard that specifies a mark, attribute or protocol indicating legal assent. In a paper context, this might mean affixing a written signature, plus an absence of elements that qualify its enforceability. (Elements that might tend to do so could include a substantive precondition to enforceability, the omission of essential terms, or a 'draft' stamp on its face that impeaches the document's finality). b.) **Each offer must succeed or fail.** The offer in a binary transaction must be definitively resolved in order to end the transaction. (This is true whether or not the offers are binding.) Offers that are followed by an explicit acceptance must be resolved as accepted. All other responses – including time-outs, rejections and counteroffers – must be resolved as a type of rejection. Either resolution should result in completion of the transaction, together with a suitably provable "success" or "failure" end state that informs further processing of the results of the transaction. c.) **Each acceptance must relate precisely to an offer.** Each acceptance of an offer (whether or not binding) must unambiguously refer to the offer accepted, in a manner that produces artifacts transmitted between the parties and suitable for proving the identity of the terms that were accepted. d.) **Replicable and computable transaction state closure.** In the foregoing context, "suitable proof" of the offer and acceptance events, means that determinable computation of the transaction's "success" or "failure" state must be replicable by both trading partners at run time, as well as third parties (such as a court) after the fact, using only artifacts transmitted within messages associated with the transaction. 
---

**A sidebar: Nonrepudiation and Enforceability**

Users of this document should note that the defined signals isNonRepudiationRequired, isNonRepudiationOfReceiptRequired and isLegallyBinding are significantly distinct from the generalized goals of nonrepudiation and legal enforceability. Invoking the former should assist, but does not assure, the latter. The goal of a well-designed electronic commerce model is to reduce the risk of repudiation and unenforceability to a reasonable minimum. No system will completely eliminate either risk. See [ABA Model Trading Partner Agreement 1992] and [UN/ECE Interchange Agreements for EDI 1995].

Repudiation risk occurs whenever a trading partner has an opportunity to avoid the consequences of its commitments. For example, under the BPSS, if you impose a timeToAcknowledgeAcceptance parameter (time>0) on a trading partner's response to you, he may validly reply with an exception claiming that your requesting document does not conform to the relevant business rules. That claim may or may not be true: in fact, nothing in the standard computationally prevents him from making a false exception at runtime. That opportunity may be the functional equivalent for him of a chance to repudiate.

Say your requesting document offers to buy 1000 units of X. Assume you and he have a pre-existing contract requiring him to sell you 1000 units of X whenever you offer to buy them. He may have received, parsed and understood your requesting document as a purchase order to buy X. But he is still in a position to inaccurately claim that your purchase order failed a business rule check. Perhaps he has a limited supply of X, and a buyer who will pay more than you. At run time, there likely is no way for you to tell. What business signal parameters offer, in that instance, is a set of process rules that require you or him to keep and store significant artifacts from the transactional messaging, that later may be impartially interpreted.
Any "legally binding" obligation should, as a design matter, generate a set of those artifacts that would be useful in proving later in court that (for example) the claim of a failed business rule check was fraudulent. In the electronic commerce context, an evaluative judgment that a set of messages creates an enforceable or nonrepudiatable contract should be understood to mean that the quality and coherence of the evidentiary artifacts available to prove it are acceptably strong. We cannot prevent trading partners from lying. We can design signal structures that make such lies easier to prove later.

4.7 Constraints from ebXML structure and standards

a.) **Business Service Interface.** An ebXML collaboration is conducted by two or more parties, each using a human or an automated business service interface that interprets the documents and document envelopes transmitted and decides how to (or whether to) respond.

b.) **Decomposition of business processes into binary pairs.** All collaborations are composed of one or more atomic transactions, each between two parties. Multi-party or multi-path economic arrangements are possible, and may be arranged in a single collaboration, but must be decomposed into bilateral transactions in order to be modeled and executed under the ebBPSS.

c.) **Definitive use of visible end state machines.** The ebBPSS uses guard expressions that permit the reliable computation of transaction "success" or "failure" end states. For the sake of reliability, these must be the exclusive source of instructions to the trading partner's business service interface, within the scope of that transaction. Any contingency or business logic that is to govern the reaction of the business service interface to a transaction must be expressed within the relevant collaboration in a manner that affects the end state, and that manner must be made visible to both trading partners in the business process specification referenced by the CPA to which the partners agreed.
d.) **Function of digital signatures.** Several ebXML specifications permit electronic signatures (generally conforming to the W3C XML-DSIG standard) to be used for various purposes such as message integrity or sender identification. Therefore, the presence or absence of an electronic signature bound to a document by hashing or the like cannot, by itself, be used to indicate the document's binding character.

e.) **Ability to declare documents nonbinding.** The ebBPSS permits a trading partner to explicitly designate specific documents as binding or nonbinding by setting the Boolean parameter "isLegallyBinding".

5 Contract Formation in ebXML

5.1 ebBPSS contract formation functionality

The constraints listed in Section 4 provide implementers with a specific set of tools for producing reliable artifacts to evidence contracts. The ebBPSS constrains process designers and implementers to two methods of affecting the determination of a transaction's "success" or "failure" end states:

1. The semantic contents of the documents and document envelopes that pass between the trading partners can be referenced and evaluated in a guard expression, and

2. The BPSS business signal parameters that resolve requests for acknowledgement and the like, short of substantive responses to BusinessDocuments.

In the context of simple contract formation, trading partners may explicitly form a contract by exchanging requesting documents constituting binding offers, and responding documents constituting binding acceptances, resulting in a demonstrably successful or failed negotiation of the business terms proposed in the offer.

**A sidebar: Explicit vs. implicit contracts**

There is an important distinction between the legal view of contracts and this document's definition of "contract". The former encompasses a much broader range of phenomena that may be interpreted as an enforceable agreement.
In commerce, some agreements are formed by reciprocal actions and implied promises, without any explicit messages in one or both directions. If one trading partner acts in a manner that reasonably seems to convey an offer to sell an object, and the other partner carts off the object, a court may conclude that the latter's behavior is acceptance by performance. In such a case, the implicit contract is formed by inferring acceptance, as if the latter party had explicitly accepted an explicit sale offer.

In this document we are only concerned with exchanges of explicit messages that, if they logically match, will produce an explicit contract expressed in and evidenced by the messages. However, process designers should bear in mind that the terms of those explicit contracts can suffer interference from subsequent interpretation of events. Courts are not barred from concluding, and trading partners are not barred from arguing, that a course of behavior between electronic trading partners gives rise to an implicit legally enforceable agreement, or an implicit enforceable change to an explicit electronically-formed contract, even in the absence of further exchanges of legally binding messages.

The next section describes a pattern that may be used to explicitly exchange a series of one or more transactions, within a collaboration, to form a legally binding contract.

5.2 Simple contract formation pattern

Contracts MAY be formed by ebXML collaborations by the inclusion of offers and acceptances that conform to the Simple Contract Formation Pattern described here. The Simple Contract Formation Pattern is constrained by rules that define a subset of the alternative methods available for forming a contract under the ebBPSS schema.
The pattern illustrates a subset of functionality that a particular domain or group of trading partners might elect.

5.2.1 Requirements for all business documents and document envelopes

To use this sample pattern, a business process must conform to the following rules, which are elective ("non-normative") under the ebBPSS standard, but required by this pattern:

1. Guard expressions in this pattern MUST refer only to one or more data fields that reside within the Business Document contained in the Document Envelope being evaluated. For example, this rules out a success or failure end state being generated by guard expressions that rely on the Document Envelope name, or the isPositiveResponse attribute of the Document Envelope.

2. Business Documents in this pattern MUST NOT set the isLegallyBinding attribute to "No". This simplifies the evaluation that each business service interface must conduct of a document. Among other things, this rule also bars a number of approaches, such as the negotiating function demonstrated in the Simple Negotiation Pattern described in Section 6 of this document.

3. All Business Transactions and Business Documents in this pattern MUST conform to one of the six "transaction patterns" defined in Chapter [9] of the UMM N90 metamodel. This is an example of re-use. The six recommended N90 patterns dictate or constrain the use of certain ebBPSS business signal parameters such as timeToPerform and timeToAcknowledgeReceipt. By re-using well-defined permutations of the business signal parameter values, the process designer and the process user can choose to rely on the UMM N90 standard designers, who have in the UMM documentation described the logical relationship between the signals, and made suggestions about the suitability of particular permutations to particular business needs.

5.2.2 Requirements for all offers

Under this pattern:

1. A document constituting an offer MUST be the Business Document sent within the Requesting Business Activity.
2. Any Business Document constituting an offer MUST NOT contain any data that is evaluated by a guard expression but is not transmitted with the Document Envelope that contains that Business Document. Another way of putting this is that the offer document may not incorporate data by reference that would not be captured by an archive of the message in which the document is sent and received. (While it certainly may be possible for trading partners to work out an acceptably safe protocol for incorporation by linking reference, that function would make the archiving of contract formation evidence more complex. This simple pattern prohibits the linking so as to keep those archiving requirements very simple.)

5.2.3 Requirements for all acceptances

Under this pattern:

1. Business processes MUST define one and only one responding Business Document that is evaluated by the process's guard expressions as producing a "success" end state (and thus the end of that atomic transaction). That document constitutes the acceptance, and MUST be the Business Document sent within the Responding Business Activity of the same Business Transaction in which the offer was sent as the Requesting Business Activity.

2. Repeating the terms of an offer, in the document constituting an acceptance to that offer, is NOT RECOMMENDED. Repetition of terms previously transmitted creates ambiguity. If the terms sent "as accepted" are identical to those sent "as offered", a comparison by the offering party is redundant: the parties have already made provision for the desired level of message integrity and security by setting the business signal parameters, and may already be reflecting back acknowledgement messages. If the comparison reveals a difference, the comparing party is faced with ambiguity among the artifacts that might be its legally relevant evidence, and no clear rule for whether the document type or the document contents govern.
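One way to satisfy constraint c.) of Section 4.6 (each acceptance must unambiguously refer to the offer accepted) without repeating the offered terms is to reference the offer by identifier plus a digest of its terms. This is a hypothetical Python sketch; the message shapes, field names (`offer_ref`, `terms_digest`) and the use of SHA-256 are illustrative assumptions, not ebXML schema elements.

```python
# Hypothetical sketch: an acceptance references an offer by document id
# plus a digest of the offered terms, rather than echoing the terms.
import hashlib
import json

def terms_digest(terms: dict) -> str:
    # Canonical serialization so both parties compute the same digest
    canonical = json.dumps(terms, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def make_acceptance(offer_id: str, offered_terms: dict) -> dict:
    return {"doc_type": "acceptance",
            "offer_ref": offer_id,
            "terms_digest": terms_digest(offered_terms)}

def acceptance_matches(offer: dict, acceptance: dict) -> bool:
    """The offering party's check: does this acceptance unambiguously
    refer to the offer, and to the terms actually sent?"""
    return (acceptance["offer_ref"] == offer["id"]
            and acceptance["terms_digest"] == terms_digest(offer["terms"]))
```

The digest gives both partners an archivable artifact proving the identity of the accepted terms, while avoiding the ambiguity that rule 2 above warns against when terms are repeated verbatim.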
5.2.4 Requirements for all rejections and counteroffers

5.2.4.1 Handling of explicit substantive rejections

Under this pattern:

1. A document constituting a rejection MUST be the Business Document sent within the Responding Business Activity of the same Business Transaction in which the offer was sent as the Requesting Business Activity.

2. A document constituting a rejection terminates the transaction initiated by the offer being rejected, by transitioning to a "failure" end state.

5.2.4.2 Handling of counteroffers

The request-response paradigm of the BPSS (as well as the UMM N90 "transaction patterns") requires that all counteroffers be expressed in two documents or signals: (a) a rejection, to properly close the request-response pair initiated by the offer, and (b) a counteroffer, expressed as a new offer in which the rejecting party is the initiator of a new transaction. Thus, under this pattern:

In order to propose new or modified terms, the rejecting party MUST send a new offer containing the proposed terms, thereby starting a new request-response pair in a new transaction. A document constituting a rejection MAY be bound to a signal indicating that a counteroffer is coming, which is called a "counteroffer advice" in this document. A counteroffer advice MUST NOT be treated by itself as an offer, nor as a binding document. A counteroffer advice MAY be communicated by a message document bound to the rejection document in a manner compliant with ebXML standards (such as in a common Document Envelope), or by a unique rejection document subtype used only to signify a counteroffer advice as well as a rejection. However, the method of indicating a counteroffer advice MUST be specified in the applicable CPA. Receipt of a counteroffer advice MUST NOT toll or re-set a transaction time-out clock (such as timeToPerform) started by the rejected offer. The business service interface of an ebXML user MAY use the counteroffer advice for its own purposes.
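The counteroffer rules above can be summarized in a small state sketch: a counteroffer is a rejection that closes the old transaction plus a new offer that opens a new one, and the advice never resets the rejected offer's time-out clock. This is a hypothetical Python illustration; the `Transaction` class and its fields are assumptions for exposition, not BPSS objects.

```python
# Hypothetical sketch of the counteroffer rule: (a) a rejection closes the
# old request-response pair, and (b) a new offer opens a new transaction
# initiated by the rejecting party. Names are illustrative only.
class Transaction:
    def __init__(self, offer_terms, time_to_perform):
        self.offer_terms = offer_terms
        self.deadline = time_to_perform   # clock scoped to this transaction
        self.end_state = None             # later "SUCCESS" or "FAILURE"

def counteroffer(current: Transaction, new_terms, time_to_perform):
    """Close the current transaction with a rejection (bound to a
    counteroffer advice) and open a successor transaction."""
    current.end_state = "FAILURE"         # rejection -> "failure" end state
    advice = {"doc_type": "counteroffer_advice"}   # bound to the rejection
    successor = Transaction(new_terms, time_to_perform)
    return advice, successor

def receive_advice(current: Transaction, advice) -> float:
    # Receipt of a counteroffer advice MUST NOT toll or re-set the
    # rejected offer's time-out clock; the deadline is returned unchanged.
    return current.deadline
```

Note that the advice is purely informational: it neither binds the sender nor alters the state machine of the transaction it accompanies.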
It is RECOMMENDED that a collaboration handling system include a separate collaboration-oriented time-out clock, distinct from the ebBPSS timeToPerform rules applicable to an individual transaction. The rules for that clock may include an explicit manner for handling counteroffer advice messages. Under ebBPSS the time-out conclusions of that timer do not directly affect the timer objects in the schema's metamodel. However, it would likely inform the decisions of a business service interface regarding, among other things, when to throw an explicit rejection, and when to rescind an offer (if the conditions of the offer permit it).

A separate document type for offers not capable of a counteroffer -- sometimes called "unalterable" offers -- is NOT RECOMMENDED. Under the ebBPSS schema, every offer must be simply accepted or rejected on a "take it or leave it" basis. Processing of counteroffers generally will be handled in a more robust and informative manner by the recipient's business service interface interpreting the rejection, not by a preemptive failure caused by a *document* type.

---

**A sidebar: The utility of patterns in handling business signal parameters**

As standards that attempt to permit interoperability with a wide range of current practices, ebXML's schemas almost certainly provide more functionality than most users will initially employ. The BPSS schema specifies some mandatory signals and state handling functions, and many more optional ones. Some potential users may wish to permit or support only a select subset. Some user domains may wish to provide a simple upgrade path, by constraining their use of the BPSS schema parameters to a subset that maps easily to the cognate functions of their legacy system. The Simple Contract Formation Pattern is an illustrative example of a set of rules that might be voluntarily adopted to present a simpler set of process design options.
This is a hypothetical pattern, not an actual recommendation of suitability. It merely illustrates how a process designer might further constrain the possible uses of BPSS functionality to make it more "user-friendly" to a particular user base. As a result, a process designer could (1) offer to this user base only business processes that conform to the pattern, and (2) advise users to interrogate new business processes to see if they require functionalities that this pattern excludes.

---

### 5.3 Drop ship business process example

The following table illustrates the composition of a multiparty *collaboration* from multiple binary *collaborations* and *Business Transactions*, each composed of one or two *Business Documents*. This collaboration can be conducted under the Simple Contract Formation Pattern defined in the previous section. The UMM N90 transaction pattern applicable to each *transaction* is noted in brackets in the second column in the following table. The hypothetical *collaboration* is a superset of the same *Business Transactions* used as the illustrative values that populate the sample "Worksheets" in the ebXML Business Process Analysis Worksheet and Guidelines [bpWS].

## DROP SHIP SCENARIO

### SAMPLE USE OF BUSINESS PROCESS PATTERNS

*Version 1* 10 May 2001

*Jamie Clark, Bob Haugen, Nita Sharma, Dave Welsh, Brian Hayes*

<table>
<thead>
<tr> <th>BUSINESS PROCESS</th> <th>BINARY COLLABORATION (protocol)</th> <th>BUSINESS TRANSACTION (activity)</th> <th>INITIATING / REQUESTING SIDE</th> <th>REQUESTING DOCUMENT</th> <th>RESPONDING SIDE</th> <th>RESPONDING DOCUMENT</th> </tr>
</thead>
<tbody>
<tr> <td>BPUC-5.7-Sales-Product-Notification</td> <td>BC-6.9-Sales-Product-Offering</td> <td>BT-8.9-Product-Offering</td> <td>PARTNER TYPE: DSVendor AUTH ROLE: Catalog Publishing</td> <td>Product Catalog Offering (e.g.
X12 832, ver 4010)</td> <td>PARTNER TYPE: Retailer AUTH ROLE: Merchandising</td> <td>Product Catalog Acceptance</td> </tr> </tbody> </table> Actors: Retailer, DSVendor --- 1 Notes on use of roles: Authorized Roles are assigned to each of the two roles in each Business Transaction. Each MUST be unique within a Business Process (or else you can’t definitively point to them for process specification purposes). It is RECOMMENDED that Authorized Roles be named to facilitate resource discovery, by creating unique composite values from a controlled vocabulary. There is no normative rule for generating the names. In this table, we have used a hypothetical controlled vocabulary which includes “Inventory Buyer, Catalog Publisher, Merchandising, Buying Customer, Customer Service, Accounts Receivable, Shipper, Payer, Payee, Credit Authority Service,” to promote resource discovery and re-use, and we have elected to use the Business Transaction names (and, where necessary, Collaboration names) to qualify and distinguish them. 2 This column suggests use of one of the six demonstrative signal patterns offered in the UN/CEFACT TMWG N90 metamodel. Re-using these reduces our need to pay attention to the parameter values. 
<table>
<thead>
<tr> <th>BUSINESS PROCESS</th> <th>BINARY COLLABORATION (protocol)</th> <th>BUSINESS TRANSACTION (activity)</th> <th>INITIATING / REQUESTING SIDE</th> <th>REQUESTING DOCUMENT</th> <th>RESPONDING SIDE</th> <th>RESPONDING DOCUMENT</th> </tr>
</thead>
<tbody>
<tr> <td>BPUC-5.6-Inventory-Management<br>Actors: Retailer, DSVendor</td> <td>BC-6.7-Vendor-Inventory-Reporting</td> <td>BT-8.5-Vendor-Inventory-Report [Notification]</td> <td>PARTNER TYPE: DSVendor<br>AUTH ROLE: Inventory Buyer</td> <td>Inventory Report</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Inventory Buyer</td> <td>On Hand Product Availability</td> </tr>
<tr> <td>BPUC-5.1-Firm-Sales-Order</td> <td></td> <td>[Business Transaction]</td> <td>PARTNER TYPE: Customer<br>AUTH ROLE: Buying Customer</td> <td>Sales Order</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Customer Service</td> <td>Confirmation</td> </tr>
<tr> <td>BPUC-5.2-Customer-Credit-Inquiry<br>Actors: Retailer, Credit Authority</td> <td>BC-6.2-Check-Customer-Credit</td> <td>BT-8.2-Check-Customer-Credit [Request / Response]</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Customer Service</td> <td>Credit Check</td> <td>PARTNER TYPE: Credit Authority<br>AUTH ROLE: Credit Service</td> <td>Credit Check</td> </tr>
</tbody>
</table>

---

3 In designing the business process, Retailer might choose to confirm the order only after successfully completing the Product Fulfillment collaboration. In that case Order Fulfillment would nest inside Firm Order.

4 Provided via web browser.

5 Provided via email.

6 The suggested pattern is "Request/Response", not "Commercial Transaction" in N90 usage, because information was transmitted on demand, but no economic commitment (credit allocation) was made.
<table>
<thead>
<tr> <th>BUSINESS PROCESS</th> <th>BINARY COLLABORATION (protocol)</th> <th>BUSINESS TRANSACTION (activity)</th> <th>INITIATING / REQUESTING SIDE</th> <th>REQUESTING DOCUMENT</th> <th>RESPONDING SIDE</th> <th>RESPONDING DOCUMENT</th> </tr>
</thead>
<tbody>
<tr> <td>BPUC-5.4-Purchase-Order-Management</td> <td>BC-6.4-Create-Vendor-Purchase-Order</td> <td>BT-8.4-Create-Vendor-Purchase-Order</td> <td>PARTNER TYPE: Retailer</td> <td>Purchase Order Request</td> <td>PARTNER TYPE: DSVendor</td> <td></td> </tr>
<tr> <td>BPUC-5.5-Ship-Goods</td> <td>BC-6.5-Ship-Instruction</td> <td>BT-8.7-Ship-Notification</td> <td>PARTNER TYPE: DSVendor</td> <td>Shipment Instruction</td> <td>PARTNER TYPE: Transport Carrier</td> <td></td> </tr>
<tr> <td></td> <td>BC-6.6-Confirm-Shipment</td> <td>BT-8.8-Confirm-Shipment [Notification]</td> <td>PARTNER TYPE: DSVendor<br>AUTH ROLE: Shipper</td> <td>Advance Ship Notice</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Customer Service</td> <td>NONE</td> </tr>
<tr> <td>BPUC-5.3-Customer-Credit-Payment<br>Actors: Retailer, Credit Authority</td> <td>BC-6.3-Process-Customer-Credit</td> <td>BT-8.3-Charge-Customer-Credit [Business Transaction]</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Accounts Receivable</td> <td>Charge Credit</td> <td>PARTNER TYPE: Credit Authority<br>AUTH ROLE: Credit Authority Service</td> <td>Confirm Credit</td> </tr>
<tr> <td>BPUC-5.8-Present-Invoice</td> <td>BC-6.10-Present-Invoice</td> <td>BT-8.11-Present-Invoice [Notification]</td> <td>PARTNER TYPE: DSVendor<br>AUTH ROLE: Payee</td> <td>Invoice</td> <td>PARTNER TYPE: Retailer<br>AUTH ROLE: Payor</td> <td>NONE</td> </tr>
</tbody>
</table>

Table 6-1: Inventory of Key Objects for Drop Ship Hypothetical MultiParty
Collaboration

6 Simple Automated Contract Negotiation in ebXML

6.1 ebBPSS contract negotiation functionality

In the prior section we examined contract formation by exchange of explicit, binding terms. At each step of the message exchange, the trading partners were making commitments that might (if properly met with a valid response) result in a "success" end state associated with an explicit contract formed by matching offer and acceptance.

Trading partners may also wish to exchange proposed terms, without making an assertion of intent to be legally bound. This is analogous to the paper contracting practice of exchanging unsigned drafts or term sheets. Of course, trading parties may interrogate proposed business processes in a CPP or CPA independently, and then communicate in a human-readable fashion about the suitability and desirability of the specified process. Under the ebBPSS, trading partners also have the opportunity to exchange Business Documents in a run-time fashion, with their isLegallyBinding parameter set to "No", and thereby test whether a particular sequence of exchanged BusinessDocuments results in a mutually satisfactory outcome. Having done so, and concluded (independently) that the resulting collaboration is acceptable, the same partners are then in a position to efficiently duplicate the sequence by changing one parameter -- setting the isLegallyBinding parameter to "Yes" throughout -- and thereby communicate the "dry run" contractual sequence as an enforceable transaction.

The generalized flow of events resulting from the foregoing approach is illustrated in the following activity diagram.

Figure 6-1: Hybrid Activity Diagram for Simple Negotiation Pattern

6.2 CPA negotiation as an instance

Some ebXML users may initiate communications by selecting from a sheaf of pre-set CPAs. Others may wish to negotiate a CPA dynamically by negotiating a choice from among a pre-set group of CPAs, or assembling a CPA from two CPPs.
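The "dry run" approach described above, in which a nonbinding sequence is replayed with a single parameter changed, can be sketched in a few lines. This is a hypothetical Python illustration; the dictionary-shaped documents and the `run_sequence` helper are assumptions for exposition, not an ebXML API.

```python
# Hypothetical sketch: replay a sequence of Business Documents, flipping
# only the isLegallyBinding parameter. Document shapes are illustrative.
def run_sequence(documents, legally_binding: bool):
    """Return a copy of the sequence with isLegallyBinding set throughout."""
    flag = "Yes" if legally_binding else "No"
    return [dict(doc, isLegallyBinding=flag) for doc in documents]

draft = [
    {"doc_type": "offer", "terms": {"qty": 1000}},
    {"doc_type": "acceptance", "offer_ref": "offer-1"},
]

# First pass: a nonbinding test of whether the sequence is satisfactory.
dry_run = run_sequence(draft, legally_binding=False)

# Second pass: apart from the flag, an exact duplicate of the dry run,
# now communicated as an enforceable transaction.
binding = run_sequence(draft, legally_binding=True)
```

The point of the pattern is that the binding replay differs from the tested dry run in exactly one parameter, so the partners already know how the sequence will resolve.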
The Simple Negotiation Pattern may be used to perform such a negotiation, by sending a proposed CPA on a nonbinding basis (isLegallyBinding="No") as a BusinessDocument to a proposed trading partner, in a single BusinessTransaction which indicates that the sole guard expression condition for a "success" end state is return of the identical BusinessDocument, followed (consistent with the foregoing pattern) by either:

1. A nonbinding substantive acceptance, indicated by the return of the CPA, which can then be formally agreed by a second similar exchange with the isLegallyBinding parameter="Yes".

2. A rejection by explicit message, timeout or counteroffer advice, and in the latter case, a new exchange based on the CPA contained in the new offer heralded by the counteroffer advice.

The CPA Specification [ebCPP] requires signature of the CPA for substantive reasons. In order to satisfy that requirement, in the design of the foregoing process, the BusinessDocument containing the proposed CPA MUST bear an "isNonRepudiationOfReceiptRequired" parameter="Yes".

In order to initiate an ebXML compliant transaction, trading partners must refer to a CPA. If potential trading partners are attempting to negotiate a CPA in such a transaction, they MUST nevertheless agree to a common CPP under which the CPA negotiation occurs. It is RECOMMENDED that the prospective trading partner who initiates that preliminary negotiation do so by specifying agreement to a CPP already offered by the non-initiating party (e.g., held out in a registry as being available for that party).

Potential trading partners who wish to be assured that their negotiation over competing prospective CPAs will computationally resolve to a CPA, without human intervention, may choose to employ the suggested set of default business rules described in the "Conflict resolution of equally weighted options" section of the [Automatic CPA Negotiation] document.
However, parties are free to accept or reject the adoption of those rules.

---

7. Readers should note that the architects of the ebXML patterns generally seek to leave the selection of such matters up to the individual user. If I want to specify in a registry that I only transact in cuneiform on clay tablets, albeit wrapped in an ebXML data structure, the standards generally leave me free to do so. (As a practical matter, under the BPSS we would be looking at a "Business Document" constituting a conventional XML wrapper around a highly unconventional "Attachment". Also, to remain in compliance with the BPSS one would have to convert the cuneiform to transmittable form -- perhaps by shipping a JPEG file -- and set the "spec" parameter of the "Attachment" object to a resolvable URI that allegedly informs a reader how to interpret the JPEG picture.) How the market may react to this is an entirely separate consideration.

7 Disclaimer

The views and specifications expressed in this document are those of the authors and are not necessarily those of their employers. The authors and their employers specifically disclaim responsibility for any problems arising from correct or incorrect implementation or use of this design.

8 Contact Information

Team Leader (of the CC/BP Analysis group of the Joint Delivery Team):
Brian Hayes
Commerce One
4440 Rosewood Drive
Pleasanton, CA USA
+1 (925) 788-6304
brian.hayes@commerceone.com

Editor:
James Bryce Clark
McLure-Moynihan Inc.
28720 Canwood Street Suite 208
Agoura Hills, CA 91301 USA
+1 (818) 706-3882
Jamie.clark@mmiec.com
Best Practices for Building Splunk Apps and Technology Add-ons

Presented by: Jason Conger

# Table of Contents

- Overview
- Creating a Directory Structure for Your Application or Add-on
- Directory Naming Conventions for Your Application or Add-on
- Getting Data Into Splunk
  - Best Practices for Reading Existing Log Files
  - Best Practices for Logging Data to be Consumed by Splunk
  - Best Practices for Event Breaking
- Scripted Input Best Practices
  - Do not hard code paths in scripts
  - Use Splunk Entity Class or position files as a placeholder
  - Use error trapping and logging
  - Using stderr
  - Test Scripts using Splunk CMD
  - Use configuration files to store user preferences
  - Use Splunk methods to read cascaded settings
  - Use script methods to construct file paths
- Modular Inputs
  - Modular Inputs vs. Scripted Inputs
- Analyzing Data with Splunk
  - Dashboard and Form Best Practices
  - Splunk 6.x Dashboard Examples
  - Search Best Practices
  - Parameterize index names
  - Use scheduled saved searches to build lookup files
  - Use TERM() to Search for IP Addresses
  - Knowledge Object Scope
- Packaging Your App for Distribution
  - Important Information Regarding app.conf
  - Application Naming Conventions
- Definitions
- Revision History
- Appendix A – Do’s and Don’ts
  - Application
  - Data Collection
  - Packaging Applications
- Appendix B – Application/Add-on Checklist
  - Application Setup Instructions
  - Application Packaging
  - Application Performance
  - Application Portability
  - Application Security
  - Technology Add-Ons
  - Testing
  - Legal

# Overview

The purpose of this guide is to provide guidance on building Splunk Add-ons and Applications. The recommendations provided within this document may not be appropriate for every environment.
Therefore, all best practices within this document should be evaluated in an isolated test environment prior to being implemented in production.

# Creating a Directory Structure for Your Application or Add-on

A Splunk Application or Add-on is basically a file system directory containing a set of configurations for collecting and/or analyzing data. To get started on your Splunk application, create a directory in $SPLUNK_HOME/etc/apps, where $SPLUNK_HOME is one of the following by default:

<table>
<thead>
<tr> <th>Platform</th> <th>Directory Structure</th> </tr>
</thead>
<tbody>
<tr> <td>Windows</td> <td>%ProgramFiles%\Splunk\etc\apps</td> </tr>
<tr> <td>*nix</td> <td>/opt/Splunk/etc/apps</td> </tr>
<tr> <td>Mac</td> <td>/Applications/Splunk/etc/apps</td> </tr>
</tbody>
</table>

All configurations will go in this directory to make the application or add-on self-contained and portable.

# Directory Naming Conventions for Your Application or Add-on

For applications (dashboards, forms, saved searches, alerts, etc.):

```
Vendor-app-product
Example: acme-app-widget
```

For add-ons (data collection mechanisms with no user interface):

```
TA_vendor-product
Example: TA_acme-widget
```

TA stands for Technology Add-on.

Note: after uploading an application to Splunk Apps, the directory name cannot be changed. The actual name of the application displayed on the Splunk start screen and on Splunk Apps is controlled by a file named app.conf and is independent of the directory name mentioned above.

# Getting Data Into Splunk

The first thing that needs to happen to create a Splunk Application is to get data into Splunk. There are various methods to get data into Splunk including, but not limited to, the following:

- Reading log files on disk
- Sending data to Splunk over the network via TCP or UDP
- Pulling data from APIs
- Sending scripted output to Splunk such as bash, PowerShell, batch, etc.
- Microsoft Windows perfmon, Event Logs, registry, WMI, etc.
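The scripted-output option above is worth a tiny illustration: a scripted input is just a program whose stdout Splunk indexes. A minimal sketch, assuming invented field names and values:

```python
import time


def format_event(**fields):
    """Render one event: timestamp first, then key=value pairs."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    pairs = " ".join("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    return "%s %s" % (stamp, pairs)


if __name__ == "__main__":
    # Anything this script prints to stdout ends up in the Splunk index.
    print(format_event(component="acme_widget", status="ok", duration_ms=12))
```

Saving such a script under the app's bin directory and referencing it from inputs.conf would make each printed line an indexed event.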
These methods are covered in detail at the Splunk Docs site: http://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor

Data can be local or remote to the Splunk instance.

**Local Data**

A local resource is a fixed resource that your Splunk Enterprise server has direct access to, meaning you can access it, and whatever is contained within it, without having to attach, connect, or perform any other intermediate action (such as authentication or mapping a network drive) in order for that resource to appear available to your system. A Splunk instance can also reach out to other remote systems or receive data from remote systems over the network.

**Remote Data**

A remote resource is any resource for which the above definition is not satisfied. Splunk Universal Forwarders can be installed on remote systems to gather data locally and send the gathered data over the network to a central Splunk instance.

# Best Practices for Reading Existing Log Files

Existing log files are easily read using Splunk.
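Monitoring an existing log file typically amounts to a one-stanza inputs.conf entry. A hedged sketch, where the path, sourcetype, and index below are invented for illustration:

```
[monitor:///var/log/acme/widget.log]
sourcetype = acme:widget
index = acme
disabled = false
```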
Information about how to direct Splunk to monitor existing log files and directories can be found here -> http://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectories

# Best Practices for Logging Data to be Consumed by Splunk

If you have control of how data is logged, the following best practices can help Splunk better recognize events and fields with little effort:

- Start the log line event with a time stamp
- Use clear key-value pairs
- When using key-value pairs, leverage the Common Information Model
- Create events that humans can read
- Use unique identifiers
- Log in text format
- Use developer-friendly formats
- Log more than just debugging events
- Use categories
- Keep multi-line events to a minimum
- Use JSON (JavaScript Object Notation) format

More information about these best practices can be found here -> http://dev.splunk.com/view/logging-best-practices/SP-CAAADP6

# Best Practices for Event Breaking

For each of the inputs, at a minimum, ensure you set the following props.conf attributes.
They help tremendously with event breaking and timestamp recognition performance: TIME_PREFIX, MAX_TIMESTAMP_LOOKAHEAD, TIME_FORMAT, LINE_BREAKER, SHOULD_LINEMERGE, TRUNCATE, KV_MODE.

Sample events:

```
2014-07-15 10:51:06.700 -0400 "GET servicesNS/admin/launcher/search/typeahead?prefix=index%3D_internal&count=50&output_mode=json&max_time= HTTP/1.0" 200 73 - - 1ms
2014-07-15 10:51:06.733 -0400 "GET servicesNS/admin/launcher/search/typeahead?prefix=index%3D_internal&count=50&output_mode=json&max_time= HTTP/1.0" 200 73 - - 2ms
2014-07-15 10:51:06.833 -0400 "GET servicesNS/admin/launcher/search/typeahead?prefix=index%3D_internal&count=50&output_mode=json&max_time= HTTP/1.0" 200 73 - - 1ms
```

Sample props.conf:

```
[source::A]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}
SHOULD_LINEMERGE = False
TRUNCATE = 5000
KV_MODE = None
ANNOTATE_PUNCT = false
```

For each of the above settings, detailed descriptions can be found in the manual for props.conf, but here is a brief explanation:

- **TIME_PREFIX**: Leads Splunk to the exact location to start looking for a timestamp pattern. The more precise this is, the faster the timestamp processing.
- **MAX_TIMESTAMP_LOOKAHEAD**: Tells Splunk how far after TIME_PREFIX the timestamp pattern extends.
- **TIME_FORMAT**: Tells Splunk the exact format of the timestamp (instead of having Splunk guess what it is by iterating over many possible time formats).
- **LINE_BREAKER**: Tells Splunk how to break the stream into events. This setting requires a capture group, and the breaking point is immediately after it. Capture group content is tossed away.
  In this specific example it reads: "break after one or more carriage returns or newlines followed by a pattern that looks like a timestamp".
- **SHOULD_LINEMERGE**: Tells Splunk not to engage line merging (break on newlines and merge on timestamps), which is known to be a huge resource hog, especially for multiline events. Instead, by defining LINE_BREAKER we are telling it to break on a definite pattern.
- **TRUNCATE**: Maximum line length (or event length) in bytes. This defaults to 10K. It's prudent to set something non-default depending on expected event length. Super-long events tend to indicate a logging system problem.
- **KV_MODE**: Specify exactly what you want Splunk to engage at search time. If you do not have events in KV pairs, or any other poly/semi/structured format, disable it.
- **ANNOTATE_PUNCT**: Unless you expect PUNCT to be used in your searches, disable its extraction, as it adds index-time overhead.

# Scripted Input Best Practices

Scripted inputs allow you to gather data from sources where data does not exist on disk. Examples include calling APIs or gathering in-memory data. Any script that the operating system can run, Splunk can use for scripted inputs. Any output from the script to stdout (the screen by default) will end up in the Splunk index.

**Do not hard code paths in scripts**

When referencing file paths in the Splunk directory, use the special $SPLUNK_HOME environment variable. This environment variable will be automatically expanded to the correct Splunk path based on the operating system on which Splunk is running.

Example (Python):

```python
os.path.join(os.environ["SPLUNK_HOME"], 'etc', 'apps', APP_NAME)
```

Example (PowerShell):

```powershell
# $SPLUNK_HOME already ends in the Splunk directory, so join only etc\apps
Join-Path -Path (Get-Item env:SPLUNK_HOME).Value "etc\apps"
```

**Use Splunk Entity Class or position files as a placeholder**

Oftentimes, you may be calling an API with a scripted or modular input.
In order to only query a specific range of values, use either the Splunk Entity Class or a position file to keep track of where the last run left off, so that the next run will pick up at this position. Where your input runs will dictate whether you should use the Splunk Entity class or a position file. For more detailed information, refer to the following blog post -> http://blogs.splunk.com/2014/09/22/pick-up-where-you-left-off-in-scripted-and-modular-inputs/

For position files, avoid using file names that start with a dot, as operating systems usually treat these as special files.

Do use: `acme.pos`
Do not use: `.pos`

**Use error trapping and logging**

The following example demonstrates how to use the Python logging facility. Information logged with logging.error() will end up in splunkd.log as well as a special "_internal" index that can be used for troubleshooting.

Example with the Python logging module:

```python
import logging
import time

try:
    # Some code that may fail, like opening a file
    open('/path/that/may/not/exist')
except IOError as err:
    logging.error('%s - ERROR - File may not exist %s'
                  % (time.strftime("%Y-%m-%d %H:%M:%S"), str(err)))
```

**Using stderr**

Just like anything written to stdout will end up in the Splunk index, anything written to stderr from a scripted input will behave like logging.error() above.
Example (Python):

```python
import sys
import time

try:
    # Some code that may fail, like opening a file
    open('/path/that/may/not/exist')
except IOError as err:
    sys.stderr.write('%s - ERROR - File may not exist %s\n'
                     % (time.strftime("%Y-%m-%d %H:%M:%S"), str(err)))
```

Example (PowerShell):

```powershell
try {
    # Some code that may fail, like opening a file
} catch {
    Write-Error ('{0:MM/dd/yyyy HH:mm:ss} GMT - {1} {2}' -f (Get-Date).ToUniversalTime(), "Could not create position file: ", $_.Exception.Message)
    exit
}
```

**Test Scripts using Splunk CMD**

To see the output of a script as if it was run by the Splunk system, use the following:

Mac:

```
/Applications/Splunk/bin/splunk cmd python /Applications/Splunk/etc/apps/<your app>/bin/<your script>
```

Windows:

```
C:\Program Files\Splunk\bin\splunk.exe cmd C:\Program Files\Splunk\etc\apps\<your app>\bin\<your script>
```

More useful command line tools to use with Splunk can be found here -> http://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/CommandlinetoolsforusewithSupport

**Use configuration files to store user preferences**

Configuration files store specific settings that will vary between environments. Examples include REST endpoints, API levels, or any other specific setting. Configuration files are stored in either of the following locations, and their settings cascade:

```
$SPLUNK_HOME/etc/apps/<your_app>/default
$SPLUNK_HOME/etc/apps/<your_app>/local
```

For example, suppose there is a configuration file called acme.conf in both the default and local directories. Settings from the local directory will override settings in the default directory.

**Use Splunk methods to read cascaded settings**

The Splunk cli_common library contains methods that will read combined settings from configuration files.
Example (Python):

```python
import splunk.clilib.cli_common

class AcmeInput(object):
    def __init__(self, obj):
        self.object = obj
        # Reads the combined default + local settings for acme.conf
        self.settings = splunk.clilib.cli_common.getConfStanza("acme", "default")
```

**Use script methods to construct file paths**

Example (Python):

```python
abs_file_path = os.path.join(script_dir, rel_path)
```

Example (PowerShell):

```powershell
$positionFile = Join-Path $positionFilePath $positionFileName
```

# Modular Inputs

Modular inputs allow you to extend the Splunk Enterprise framework to define a custom input capability. Your custom input definitions are treated as if they were part of Splunk Enterprise native inputs: the inputs appear automatically on the Settings > Data Inputs page, and from a Splunk Web perspective, your users interactively create and update your custom inputs using Settings, just as they do for Splunk Enterprise native inputs.

**Modular Inputs vs. Scripted Inputs**

Modular inputs can be used just about anywhere scripted inputs are used. Scripted inputs are quick and easy to write, but may not be the easiest for an end user; modular inputs require more upfront work, but are easier for end users to interact with. For more information on modular inputs, as well as comparisons between scripted and modular inputs, follow this link -> http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ModInputsIntro

# Analyzing Data with Splunk

Once data is in a Splunk index, the data can be analyzed via searches, reports, and visualizations.

**Dashboard and Form Best Practices**

There are multiple options for building Splunk dashboards: Simple XML, Advanced XML, and the Splunk Web Framework. Most dashboard needs are covered by Simple XML; however, if you require advanced visualizations that are outside the scope of the visualizations included with Simple XML, the Splunk Web Framework allows you to use any HTML/JavaScript libraries alongside Splunk data.
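For orientation, a minimal Simple XML dashboard is a short XML file. This sketch is illustrative only: the label and search are invented, and the query reuses the `acme_index` macro style shown under Parameterize index names below:

```xml
<dashboard>
  <label>Acme Widget Overview</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>`acme_index` sourcetype=widget | timechart count</query>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</dashboard>
```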
More information on Simple XML can be found here -> http://docs.splunk.com/Documentation/Splunk/latest/Viz/Visualizationreference

**Splunk 6.x Dashboard Examples**

The Splunk 6.x Dashboard Examples app provides numerous examples of how to visualize your data in Simple XML format. The app can be downloaded here -> http://apps.splunk.com/app/1603/

**Search Best Practices**

The Splunk Search Processing Language (SPL) is the heart of all Splunk analytics. A firm understanding of the SPL is critical to creating good analytics. Several examples of the more common search commands can be found in this quick start guide -> http://dev.splunk.com/web_assets/developers/pdf/splunk_reference.pdf

**Parameterize index names**

Parameterize index names so that they can be changed later without modifying existing searches. This can be done with a macro or an eventtype.

macros.conf example:

```
[acme_index]
definition = index=acme
```

Example search using the macro:

```
`acme_index` sourcetype=widget | stats count
```

eventtypes.conf example:

```
[acme_eventtype]
search = index=acme sourcetype="widget"
```

Example search using the eventtype:

```
eventtype=acme_eventtype | stats count
```

**Use scheduled saved searches to build lookup files**

By building lookup files with a scheduled saved search, lookup files will be automatically replicated in a distributed environment.
Example (from savedsearches.conf):

```
[Lookup - WinHosts]
action.email.inline = 1
alert.suppress = 0
alert.track = 0
auto_summarize.dispatch.earliest_time = -1d@h
cron_schedule = 0 0 * * *
description = Updates the winHosts.csv lookup file
dispatch.earliest_time = -26h@h
dispatch.latest_time = now
enableSched = 1
run_on_startup = true
search = `acme_index` sourcetype=WinHostMon | stats latest(_time) AS _time latest(OS) AS OS latest(Architecture) AS Architecture latest(Version) AS Version latest(BuildNumber) AS BuildNumber latest(ServicePack) AS ServicePack latest(LastBootUpTime) AS LastBootUpTime latest(TotalPhysicalMemoryKB) AS TotalPhysicalMemoryKB latest(TotalVirtualMemoryKB) AS TotalVirtualMemoryKB latest(NumberOfCores) AS NumberOfCores by host | inputlookup append=T winHosts.csv | sort _time | stats latest(_time) AS _time latest(OS) AS OS latest(Architecture) AS Architecture latest(Version) AS Version latest(BuildNumber) AS BuildNumber latest(ServicePack) AS ServicePack latest(LastBootUpTime) AS LastBootUpTime latest(TotalPhysicalMemoryKB) AS TotalPhysicalMemoryKB latest(TotalVirtualMemoryKB) AS TotalVirtualMemoryKB latest(NumberOfCores) AS NumberOfCores by host | outputlookup winHosts.csv
```

**Use TERM() to Search for IP Addresses**

When you search for a term that contains minor segmenters, Splunk defaults to treating it as a phrase: it searches for the conjunction of the subterms (the terms between minor breaks) and post-filters the results. For example, when you search for the IP address 127.0.0.1, Splunk searches for:

```
127 AND 0 AND 1
```

If you instead search for TERM(127.0.0.1), Splunk treats the IP address as a single term to match in your raw data.

**Knowledge Object Scope**

Knowledge Object definition -> http://docs.splunk.com/Splexicon:Knowledgeobject

Knowledge objects can be scoped to individuals, to an app, or globally. The scope of knowledge objects is controlled via default.meta or local.meta.
Your app should not ship with a local.meta file, so all scoping should be defined in default.meta. It is a best practice to scope all knowledge objects to the application only. However, if you are creating a TA that is not visible and/or will be used by multiple other applications, the scope of the objects should be set to Global.

# Packaging Your App for Distribution

After you build your Splunk application, you can share your extensions to Splunk Enterprise on Splunk Apps and make them available to everyone in the Splunk community, or distribute them directly to your customers. Detailed instructions on the process for packaging your application for redistribution can be found here -> http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/PackageApp

**Important Information Regarding app.conf**

The name of your application, its version, App ID, etc. are stored in a file called app.conf. The value for id in app.conf must match the folder name in which your app lives in $SPLUNK_HOME/etc/apps. Once this id is set and uploaded to Splunk Apps, the id cannot be changed unless you create a separate application to upload.

**Application Naming Conventions**

Certain naming conventions should be followed. What you name your application or add-on is independent of how you named your directory structure. A detailed list of naming convention parameters can be found here -> http://docs.splunk.com/Documentation/Splunkbase/latest/Splunkbase/Namingguidelines

# Definitions

**Splunk Instance** – the server(s) where the Splunk software is installed. This can be a single server consisting of the Splunk Indexer, Search Head, and Deployment Server, or a collection of servers in a distributed deployment. Generally, a Splunk Forwarder is not considered to be part of a Splunk Instance.

**Splunk Forwarder** – a piece of software running on a remote system that collects and forwards data to a Splunk Instance.
**Technology Add-on (TA)** – a set of Splunk configurations, scripts, modular inputs, etc. The purpose of a TA is to collect data and/or add knowledge to the collected data.

# Revision History

<table>
<thead>
<tr> <th>Rev.</th> <th>Change Description</th> <th>Updated By</th> <th>Date</th> </tr>
</thead>
<tbody>
<tr> <td>1.0</td> <td>Initial Document</td> <td>Business Development</td> <td>July 15, 2014</td> </tr>
<tr> <td>1.1</td> <td>Added Do’s and Don’ts table</td> <td>Business Development</td> <td>August 8, 2014</td> </tr>
<tr> <td>1.2</td> <td>Added Knowledge Object Scope</td> <td>Business Development</td> <td>September 6, 2014</td> </tr>
<tr> <td>1.3</td> <td>Added Appendix B – Application/Add-on checklist</td> <td>Business Development</td> <td>October 2, 2014</td> </tr>
</tbody>
</table>

# Appendix A – Do’s and Don’ts

## Application

<table>
<thead>
<tr> <th>Do</th> <th>Don't</th> </tr>
</thead>
<tbody>
<tr> <td>Use setup.xml or a Django form to allow the end user to configure the app</td> <td>Make users manually enter information such as API credentials into configuration files.</td> </tr>
<tr> <td>Parameterize indexes so that they can be easily changed</td> <td>Hard code indexes in your searches</td> </tr>
<tr> <td>Place all .conf files in default <code>$SPLUNK_HOME/etc/apps/&lt;your_app&gt;/default</code></td> <td>Leave any content in <code>$SPLUNK_HOME/etc/apps/&lt;your_app&gt;/local</code></td> </tr>
<tr> <td>Set default permissions in: <code>$SPLUNK_HOME/etc/apps/&lt;your_app&gt;/metadata/default.meta</code></td> <td>Have a local.meta file located in: <code>$SPLUNK_HOME/etc/apps/&lt;your_app&gt;/metadata</code></td> </tr>
</tbody>
</table>

## Data Collection

<table>
<thead>
<tr> <th>Do</th> <th>Don't</th> </tr>
</thead>
<tbody>
<tr> <td>Support multiple platforms.</td> <td>Code for a single OS.</td> </tr>
<tr> <td>Use scripting language utilities such as os.path.join() and the special environment variable <code>$SPLUNK_HOME</code> to construct paths in scripts.</td> <td>Hard code script paths.</td> </tr>
<tr> <td>Use key=value pairs in writing to log files (if you have control of the logging output).</td> <td>Use name abbreviations.</td> </tr>
<tr> <td>Throttle how much data is collected at one time from an API.</td> <td>Overwhelm a system by pulling exorbitant amounts of data at one time from an API.</td> </tr>
<tr> <td>Use logging and error trapping in scripts.</td> <td></td> </tr>
</tbody>
</table>

## Packaging Applications

<table>
<thead>
<tr> <th>Do</th> <th>Don't</th> </tr>
</thead>
<tbody>
<tr> <td>Follow the guidelines found at <a href="http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/PackageApp">http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/PackageApp</a></td> <td>Leave any hidden files in the app such as Mac’s ._ files.</td> </tr>
<tr> <td>Include a screen shot of your application in the correct location.</td> <td></td> </tr>
<tr> <td>Let the user choose which inputs are enabled for their environment</td> <td>Enable all inputs by default if not necessary.</td> </tr>
<tr> <td>Use a build automation tool such as Apache Ant if necessary to ensure a clean build/package.</td> <td>Leave anything in: $SPLUNK_HOME/etc/apps/&lt;app&gt;/local directory $SPLUNK_HOME/etc/apps/&lt;app&gt;/metadata/local.meta</td> </tr>
<tr> <td>Ensure the appropriate settings are set in app.conf</td> <td></td> </tr>
<tr> <td>Document your app with a README.txt file</td> <td></td> </tr>
<tr> <td>Test your application on a clean system</td> <td></td> </tr>
</tbody>
</table>

# Appendix B – Application/Add-on Checklist

## Application Setup Instructions

- README located in the root directory of your application with basic instructions.
- Detailed instructions located on a dashboard within the application.
- Instructions do not direct the user to store clear text passwords anywhere.
- Setup screen (optional) that prompts the user for setup parameters.
- Setup mechanism encrypts passwords.
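Several of the packaging Don'ts from Appendix A lend themselves to an automated pre-flight check before shipping. A minimal sketch, where the function name and messages are invented for illustration:

```python
import os


def packaging_problems(app_dir):
    """Return a list of packaging problems found in a Splunk app directory.

    The checks mirror the packaging Don'ts: nothing left in local/,
    no metadata/local.meta, and no hidden files anywhere in the tree.
    """
    problems = []
    local_dir = os.path.join(app_dir, "local")
    if os.path.isdir(local_dir) and os.listdir(local_dir):
        problems.append("local directory is not empty")
    if os.path.isfile(os.path.join(app_dir, "metadata", "local.meta")):
        problems.append("metadata/local.meta is present")
    for root, _dirs, files in os.walk(app_dir):
        for name in files:
            # Catches dotfiles and Mac resource forks such as ._foo
            if name.startswith("."):
                problems.append("hidden file: " + os.path.join(root, name))
    return problems
```

Running such a check from a build script (for example, as an Ant target) would catch the most common leftovers before the package is uploaded.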
## Application Packaging

- app.conf specifies:
  - id - this cannot be changed once created.
  - version - this field can contain trailing text such as "beta".
  - description
- The "local" directory in the application is either empty or does not exist.
- Remove metadata/local.meta.
- Ensure metadata/default.meta exports and permissions are set correctly.
- Remove any files from the lookups directory that are not static. For instance, some scheduled saved searches generate files in this directory that are specific to the environment.
- No hidden files contained in the application directory structure.
- All XML files are valid. http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/AdvancedSchemas#Validation

## Application Performance

- Intervals on inputs.conf reviewed. For example, inventory data should be polled less frequently than performance data.
- Scripted or modular inputs are verified to not have a negative impact on the host system.
- Amount of data requested from 3rd party APIs is throttled (if applicable). For example, requesting 1 million records via REST may be bad.
- Saved searches are optimized. For example, dispatch.earliest_time and dispatch.latest_time should be set by default in savedsearches.conf.
- Measured resources when running the application:
  - Load average
  - % CPU
  - Memory usage

## Application Portability

- Searches do not contain hard coded index names.
- Application conforms to the Splunk Common Information Model (CIM) (optional).
- Eventgen.conf created (optional).
- Eventgen sample data stored in the samples directory.
- Eventgen data anonymized.

## Application Security

- Application does not open outgoing connections.
- Application does not use IFRAME.
- Application and/or Add-ons do not have pre-compiled Python code.
- Application does not contain executable files.

## Technology Add-Ons

- Technology add-ons are stored in your application folder under appserver/addons.
- Scripted or modular inputs are verified to not have a negative impact on the host system.
- No hard coded paths in scripted or modular inputs.
- Logging mechanism used in data collection.
- Error trapping used in data collection.

## Testing

- Tested Application on a clean install of Splunk to ensure everything is self-contained.
- Tested Application on Splunk running on *nix.
- Tested Application on Splunk running on Windows®.
- Tested Add-ons on multiple platforms (optional).
- Application tested with multiple Splunk account roles.
- Application tested on multiple versions of Splunk (optional).
- Opened Application in a non-Flash browser.
- Opened Application in browsers supported by Splunk.

## Legal

- No use of the Splunk trademark in an infringing way.
- Developer's agreement to the "Developer Distribution License".
- App has a valid EULA.
Fast Practical Multi-Pattern Matching

M. Crochemore\textsuperscript{a}, A. Czumaj\textsuperscript{b}, L. Gasieniec\textsuperscript{c}, S. Jarominek\textsuperscript{d}, T. Lecroq\textsuperscript{e}, W. Plandowski\textsuperscript{d} and W. Rytter\textsuperscript{c}

\textsuperscript{a}Institut Gaspard-Monge, Université de Marne-la-Vallée, Cité Descartes 5, Bd Descartes, Champs-sur-Marne, 77454 Marne-la-Vallée CEDEX 2, France

\textsuperscript{b}Department of Mathematics and Computer Science, University of Paderborn, Fürstenallee 11, 33102 Paderborn, Germany

\textsuperscript{c}Department of Computer Science, The University of Liverpool, Chadwick Building, Peach Street, Liverpool L69 7ZF, United Kingdom

\textsuperscript{d}Institute of Informatics, Warsaw University, ul. Banacha 2, 00-913 Warsaw 59, Poland

\textsuperscript{e}Atelier Biologie Informatique Statistique Socio-linguistique, Laboratoire d'Informatique Fondamentale et Appliquée de Rouen, Facultés des Sciences et des Techniques, Université de Rouen, 76821 Mont-Saint-Aignan Cedex, France

Abstract

The main result of the paper is the construction of a very fast multi-pattern matching algorithm, called DAWG-MATCH. The algorithm is of the Boyer-Moore type. The previous algorithm of this type is the Commentz-Walter algorithm; the DAWG-MATCH algorithm behaves better than the Commentz-Walter algorithm. We combine the ideas of two algorithms: the Aho-Corasick algorithm, and the Reverse Factor algorithm of Crochemore et al. The new algorithm performs at most $2|text|$ inspections of text characters, and is very fast on the average. We give experimental evidence of its good behavior on random words, compared with the Commentz-Walter algorithm. The algorithm is especially simple for a single pattern: in this case the Aho-Corasick algorithm can be replaced by the strategy of the Knuth-Morris-Pratt algorithm. The basic tool in the algorithm DAWG-MATCH is the directed acyclic word graph.
This graph is usually used as a representation of the text to be scanned, but in our case we use it to represent the set of reverse patterns.

1 Introduction

We consider the multiple string matching problem: finding all occurrences of a finite set $P$ of string patterns in a text $t$ of length $n$. Finding all occurrences of elements of $P$ is a problem that appears in bibliographic search and in information retrieval. The first algorithm to solve this problem in $O(n)$ time was the Aho-Corasick algorithm (AC algorithm, for short) [AC 75], which can be viewed as a generalization of the Knuth-Morris-Pratt algorithm (KMP algorithm) [KMP 77], designed for a single pattern. Since, for one pattern, the Boyer-Moore algorithm (BM algorithm) [BM 77] behaves better in practice than the KMP algorithm, Commentz-Walter developed an algorithm combining the ideas of the AC and BM algorithms ([Co 79a,Co 79b]). A complete version can be found in [Ah 90]. Later, Uratani [Ur 88], and Baeza-Yates and Régnier [BR 90] developed similar algorithms. In this paper, we show how to use the power of directed acyclic word graphs (DAWGs) for finding a finite set of patterns. Such graphs are used to represent all factors (subwords) of a given word. We recently presented a family of algorithms for finding one pattern using the DAWG of the reverse pattern [C-R 94]. A direct extension of this algorithm to the multi-pattern matching problem gives an algorithm running in quadratic time in the worst case; however, that algorithm behaves well in practice. Combining this idea with the AC algorithm, we present a new algorithm which performs at most \(2n\) inspections of text characters, and which is simultaneously very fast on the average. In the case of one pattern, the same technique applies. It gives a new algorithm combining the strategies of the KMP algorithm and the Reverse Factor algorithm ([Le 92,C-R 94]).
This new algorithm performs at most \(2n\) inspections of text characters, and it is simpler than the Turbo Reverse Factor algorithm presented in [C-R 94]. Like the Reverse Factor type algorithms, the new algorithm is optimal on the average, with \(O((n \log m)/m)\) inspections of text characters (\(m\) is the length of the pattern).

## 2 The preprocessing phase

The preprocessing phase of the DAWG-MATCH algorithm concerns the set \(P\) of patterns. It consists both in building the Aho-Corasick machine \(A\) for all the patterns of \(P\), and in building the DAWG \(D\) for the reverse patterns of \(P\). We shortly describe these two data structures. An Aho-Corasick machine \(A = (Q, \delta, f, s_0, T)\) is a deterministic finite state automaton where \(Q\) is a finite set of states, \(\delta\) is the transition function, \(f\) is the failure function, \(s_0\) is the initial state, and \(T\) is the set of accepting states (see [Ah 90] for details). In Figure 1 we present the machine for the example set of patterns \(P = \{abaabaab, aabb, baabaa, baaba\}\). Accepting states are in square boxes. The DAWG-MATCH algorithm needs to be able to compute a shift corresponding to each state of the AC machine. This enables the algorithm to perform the optimal shift corresponding to the matched prefix associated with the state. Each state \(s\) is associated with a prefix \(w\) of words in \(P\), composed of the characters spelling the unique path from \(s_0\) to \(s\). The length of the shift associated to \(s\) is \(|p| - |w|\), where \(p\) is the shortest pattern in \(P\) that has prefix \(w\).

Fig. 1. The Aho-Corasick machine for \( P = \{abaabaab, aabb, baabaa, baaba\} \).
The table \( \text{Shift} \) can be defined intuitively as follows: for a state \( s \), \( \text{Shift}[s] \) is the minimal shift which guarantees that there is no occurrence of a pattern in the skipped area, assuming that \( s \) corresponds to the longest prefix of a pattern which is a suffix of the text scanned so far (the whole information from the scanned part relevant to further processing). More formally, \( \text{Shift}[s] \) is the length of the minimal path between state \( s \) and an accepting state different from \( s \) (the path can use either the transition function or the failure links). It is easy to see that, if we are given the pattern matching machine with the failure links in the table \( f \), then the table \( \text{Shift} \) can be constructed in time proportional to the number of states with the following rules:

- during the computation of the trie, when the letter \( p[i] \) of a pattern \( p \) of length \( m \) is processed: if a new state \( s \) is created then, if \( i \neq m \), \( \text{Shift}[s] := m - i - 1 \), else \( \text{Shift}[s] := m \); if the corresponding state \( s \) already exists then \( \text{Shift}[s] := \min(m - i - 1, \text{Shift}[s]) \);
- afterwards, during the breadth-first traversal of the trie for computing the failure links, when the value of \( f(s) \) becomes available, \( \text{Shift}[s] := \min(\text{Shift}[s], \text{Shift}[f(s)]) \).

For the example of Figure 1, the values of the table \( \text{Shift} \) are:

Fig. 2. The DAWG for reverse patterns of $P = \{abaabaab, aabb, baabaa, baaba\}$.
<table>
<thead>
<tr> <th></th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> </tr>
</thead>
<tbody>
<tr> <td>$s$</td> <td>9</td> <td>10</td> <td>11</td> <td>12</td> <td>13</td> <td>14</td> <td>15</td> <td>16</td> <td>17</td> </tr>
<tr> <td>$Shift[s]$</td> <td>2</td> <td>1</td> <td>4</td> <td>4</td> <td>3</td> <td>2</td> <td>1</td> <td>1</td> <td>2</td> </tr>
</tbody>
</table>

The other data structure used in the DAWG-MATCH algorithm is a DAWG. The DAWG is also a deterministic finite state automaton. It recognizes all the factors of the reverse patterns of $P$. For the example set $P$ above, the automaton is presented in Figure 2. To avoid reversing the patterns in the picture, transitions go from right to left. The aim of the preprocessing is to construct the two following functions:

**function** AC($initpos$, $minpos$, $state$);
**begin**
  scan the text $t[initpos, minpos]$ left to right with the Aho-Corasick machine $A$, starting with state $state$;
  continue scanning the text with the Aho-Corasick machine $A$ until $Shift[state] \geq$ length of the shortest pattern divided by 2;
  report all positions, as matches, where $A$ is in an accepting state;
  **return** the last position scanned, and the last state of $A$;
**end** function;

**function** DAWG($pos$, $critpos$);
**begin**
  scan the text $t[critpos, pos]$ from right to left with the DAWG, until there is no transition for the next symbol or the position $critpos$ is reached;
  **return** the last position scanned;
**end** function;

3 The search phase

The search phase combines the techniques used by the AC and RF algorithms. Its strategy consists first in reading from right to left a segment of the text as long as it is a factor of at least one pattern of \( P \). Then, using the fact that this segment is a part of one pattern, the segment is read from left to right using the Aho-Corasick machine, in order both to report matches and to compute lengths of shifts.
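Before moving on, the Shift-table construction of Section 2 can be sketched in code. The paper gives no implementation, so the following Python sketch is ours, under two stated interpretations: letters are 0-indexed (a state reached after \(i+1\) characters of a pattern of length \(m\) starts with Shift \(= m - i - 1\), and the last letter gives \(m\)), and the root's Shift, left implicit in the text, is seeded with the length of the shortest pattern (which matches \(Shift[0] = 4\) in the running example). State numbers are trie-insertion order and will not match Figure 1.

```python
from collections import deque

def build_shift(patterns):
    """Build the pattern trie with failure links and the Shift table.
    Trie states are integers; state 0 is the root."""
    goto, fail, accept = [{}], [0], [False]
    shift = [min(len(p) for p in patterns)]  # root value: shortest pattern length
    for p in patterns:
        s, m = 0, len(p)
        for i, c in enumerate(p):
            rule = m if i == m - 1 else m - i - 1  # the two cases in the text
            if c not in goto[s]:
                goto[s][c] = len(goto)
                goto.append({}); fail.append(0); accept.append(False)
                shift.append(rule)
            else:
                t = goto[s][c]
                shift[t] = min(shift[t], rule)
            s = goto[s][c]
        accept[s] = True
    # Breadth-first traversal: compute failure links, then relax
    # Shift[s] with Shift[f(s)] as soon as f(s) is available.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        shift[s] = min(shift[s], shift[fail[s]])
        for c, t in goto[s].items():
            f = fail[s]
            while f and c not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(c, 0)
            queue.append(t)
    return goto, fail, accept, shift
```

Under these interpretations the sketch reproduces the shifts used in the worked example below: the state for the prefix `abaa` gets Shift 2 (Stage 1 shifts by 2) and the accepting state for `baaba` gets Shift 1 (Stage 2).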
The tool that enables us to match segments of the text from right to left against factors of patterns of \( P \) is the DAWG for all reverse patterns of \( P \) (see [B-M 87]). Assume that both the AC machine \( A \) for the patterns and the DAWG \( D \) for the reverse patterns are constructed. We describe the general situation encountered during the search: we have recognized a prefix \( u \) of length \( l \) of at least one pattern from \( P \). The occurrence of \( u \) in the text ends at position \( critpos \). We suppose that we also know the corresponding state in the AC machine, denoted by \( state \). We denote by \( m \) the length of the shortest pattern \( p \) in \( P \) such that \( u \) is a prefix of \( p \). Let \( pos \) be equal to \( critpos + m - l \). The action at this stage can be described informally as follows:

**Substage I** Scan with \( D \) the characters \( t[critpos + 1, pos] \) from right to left. If we are able to reach the critical point \( critpos \) using \( D \), it means that the segment \( t[critpos + 1, pos] \) is a factor of at least one pattern in \( P \).

**Substage II** If we reach the critical point during Substage I, then we use the AC machine, starting with \( state \), on the segment \( t[critpos + 1, pos] \) from left to right, until a sufficiently large shift is possible. If \( t[critpos + 1, pos] \) is not a factor of any pattern of \( P \), it means that, at a character \( t[pos - k] \) with \( pos - k > critpos \), there is no transition in \( D \). Then we use the AC machine, starting with the initial state, on the segment \( t[pos - k + 1, pos] \) from left to right. This gives a new prefix \( u \), a new value for \( state \), and a new critical position.
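Substage I's stopping rule can be illustrated with a deliberately naive stand-in for the DAWG: instead of the linear-size automaton of Figure 2, the sketch below (ours, for illustration only) precomputes the quadratic-size set of all pattern factors and reads the window right to left while the piece read so far is still a factor. The point is the stopping rule, not the data structure.

```python
def all_factors(patterns):
    """Every factor (subword) of every pattern -- a naive stand-in for
    the DAWG of the reversed patterns, which answers the same
    membership query in O(1) per character."""
    facs = set()
    for p in patterns:
        for i in range(len(p)):
            for j in range(i + 1, len(p) + 1):
                facs.add(p[i:j])
    return facs

def right_to_left_scan(window, facs):
    """Read `window` from right to left, as the DAWG() function does,
    and return how many characters are read before the scanned piece
    stops being a factor of some pattern; len(window) means the
    critical position was reached."""
    k = 0
    while k < len(window) and window[len(window) - k - 1:] in facs:
        k += 1
    return k
```

On the running example, the Stage 1 window `abaa` is read completely (it is a factor of `abaabaab`), so the critical point is reached and Substage II takes over.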
The next stage starts here (see Figure 3). At the first stage, we scan the factor \( t[1..m] \) of the text from right to left using the DAWG \( D \), where \( m \) is the length of the shortest pattern in \( P \). The first value of \( state \) is the initial state \( s_0 \) of the AC machine \( A \). Figure 4 shows the succession of stages of the algorithm DAWG-MATCH.

Fig. 3. General situation during the search phase.

Fig. 4. A succession of stages of the search phase.

The algorithm DAWG-MATCH is presented below.

**Algorithm** DAWG-MATCH;
**begin**
  preprocessing phase:
    build the Aho-Corasick pattern matching machine \( A \) with the table \( Shift \);
    construct the DAWG \( D \) for the reverse patterns;
  search phase:
    \( pos := \) length of the shortest pattern;
    \( critpos := 0 \);
    \( state := \) initial state of \( A \);
    **while** \( pos \leq n \) **do** {
      \( newpos := \mathrm{DAWG}(pos, critpos + 1) \);
      **if** \( newpos > critpos \) **then** \( state := \) initial state of \( A \);
      \( (pos, state) := \mathrm{AC}(newpos, pos, state) \);
      \( critpos := pos \);
      \( pos := pos + Shift[state] \);
    }
**end.**

Example: \( P = \{abaabaab, aabb, baabaa, baaba\} \), \( t = abaabaabac\ldots \)

**Stage 1** \( critpos = 0 \) and \( pos = 4 \): the algorithm DAWG-MATCH first scans \( t[1..4] = abaa \) from right to left with the DAWG, starting with the initial node and stopping at node 15. Then it scans \( t[1..4] \) from left to right with the AC machine, starting with the initial state and ending at state 4. Since \( Shift[4] = 2 \), it shifts to the right by 2.

**Stage 2** \( critpos = 4 \) and \( pos = 6 \): the algorithm scans \( t[5..6] = ba \) from right to left with the DAWG, starting with the initial node and stopping at node 18.
Then it scans \( t[5..6] \) from left to right with the AC machine, starting with state 4 and ending at state 6 (it outputs the pattern \( baaba \)). As \( Shift[6] = 1 < \) length of the shortest pattern divided by 2 (\( = 2 \)), it scans \( t[7] = a \) and reaches state 7 (it outputs the pattern \( baabaa \)); \( Shift[7] = 1 \), so it scans \( t[8] = b \) and reaches state 8 (it outputs the pattern \( abaabaab \)); \( Shift[8] = 1 \), so it scans \( t[9] = a \) (it takes the failure link from state 8 to state 5) and reaches state 6 (it outputs the pattern \( baaba \)). Then it scans \( t[10] = c \) and reaches state 0; \( Shift[0] = 4 \), so it shifts to the right by 4.

### 3.1 Worst-case time complexity analysis

During one stage of the search phase of the DAWG-MATCH algorithm, each text character between positions \( critpos + 1 \) and \( pos \) is scanned once with the DAWG and once with the AC machine. At the end of the stage, some characters to the right of position \( pos \) are scanned only once, with the AC machine. For the next stage, \( critpos \) is set to the rightmost position already scanned, and no character to the left of \( critpos \) is scanned again. So, obviously, the algorithm DAWG-MATCH performs at most \( 2n \) inspections of text characters.

### 3.2 Average-case time complexity analysis

The average complexity of the algorithm DAWG-MATCH is similar to the average complexity of the RF algorithm (see [C-R 94]). Denote the length of the shortest pattern by \( m \), and the total length of all patterns by \( M \). Assume that \( M \) is polynomial with respect to \( m \): \( M \leq m^k \), where \( k \) is a constant. Let \( s \geq 2 \) be the size of the alphabet. The text (in which the patterns are to be found) is random.
If the shortest pattern is short, for example if it is a single-letter pattern, then the average complexity is \( \Omega(n) \). Also, if \( M \) is big, for example when the set of patterns consists of almost all strings of size \( m \), then \( M = \Omega(s^m) \), and the average complexity is \( \Omega(n) \). Hence sublinear average complexity of the algorithm DAWG-MATCH can be expected if \( m \) is reasonably big and \( M \) is reasonably small (polynomial in \( m \)).

**Definition 1** The length of a shift is the number of symbols between the new critical position \( critpos \) and the new position \( pos \). This is the number of new symbols of the text to be read in the next iteration.

**Proposition 2** Each shift in the algorithm has length \( \Omega(m) \).

**PROOF.** Each iteration ends with the AC automaton working until it obtains a prefix of the multipattern \( P \) which is shorter than \( m/2 \). After that, the number of symbols between the new critical position \( critpos \) and the new position \( pos \) is at least \( m/2 \). This completes the proof. □

By Proposition 2 it is obvious that there are no more than \( O(n/m) \) shifts in the algorithm. We will show that the expected number of comparisons with symbols of the text per iteration is \( O(\log_s m) \).

**Proposition 3** There exists a constant \( C \) such that, when reading \( C \log_s m \) new symbols of the text, we obtain a subword of the multipattern \( P \) with probability not greater than \( 1/m^k \).

**PROOF.** There are at most \( m^{2k} \) different subwords in the multipattern \( P \), and there exist \( s^l \) different words of length \( l \) over the alphabet of size \( s \). Each of these words is obtained with equal probability when reading \( l \) new symbols of the text, because the text is random. Since \( s^l \geq m^{3k} \) when \( l \geq \log_s(m^{3k}) = 3k \log_s m \), taking \( C \geq 3k \), the probability that the word read is a subword of the multipattern is less than \( m^{2k}/m^{3k} \leq 1/m^k \). This completes the proof.
□

**Lemma 4** The expected number of inspections of text characters by the AC machine to obtain a prefix shorter than \( m/2 \) is \( O(\log_s m) \).

**PROOF.** By Proposition 3, the probability that the number of inspections of text characters by the AC machine to obtain a prefix shorter than \( m/2 \) lies between \( rC \log_s m \) and \( (r + 1)C \log_s m \) is less than \( (1/m^k)^r \). It follows that the expected number of inspections of text characters is less than \( C \log_s m + \Sigma_r (r + 1)(1/m^k)^r C \log_s m = O(\log_s m) \). □

**Lemma 5** The expected number of inspections of text characters in one stage of the algorithm is \( O(\log_s m) \).

**PROOF.** There are two cases.

**Case A** The DAWG stops before it has read \( C \log_s m \) symbols. The probability of this case is not less than \( 1 - 1/m^k \) by Proposition 3. The number of inspections of text characters in this case is obviously not greater than \( 2C \log_s m \), because the AC machine starts with the empty prefix and has at most \( C \log_s m \) symbols to read.

**Case B** The DAWG does not stop before the \( (C \log_s m) \)-th symbol. The probability of this case is less than \( 1/m^k \) by Proposition 3. In the worst case the DAWG reaches the critical position \( critpos \). Then the expected number of text character inspections in this case is not greater than \( m^k \) for the DAWG, plus \( m^k \) for the AC machine to reach the position \( pos \), plus eventually \( C \log_s m \) for the AC machine to reach a prefix shorter than \( m/2 \) (by Lemma 4). Thus we can bound the expected number of text character inspections by the formula \( (1 - 1/m^k)2C \log_s m + (1/m^k)(2m^k + C \log_s m) = O(\log_s m) \). This completes the proof. □

Proposition 2 and Lemma 5 together directly imply the following result.

**Theorem 6** Under our assumptions on \( P \), the algorithm DAWG-MATCH makes on average \( O(n \log_s m / m) \) inspections of text characters.
## 4 Experimental results

In order to verify the good practical behavior of the DAWG-MATCH algorithm, we tested it against the Commentz-Walter algorithm. The two algorithms were implemented in C. We implemented the simple Commentz-Walter algorithm, which is quadratic in the worst case (see [Hu 90]). Tests on these two algorithms were performed with three kinds of alphabet: a binary alphabet, an alphabet of size 4, and an alphabet of size 8. For each alphabet size, we randomly built a text of 50000 characters. We first made experiments with sets of patterns of the same length: for each pattern length, we randomly built a set of 100 patterns of that length. After that, we built sets of patterns of different lengths (each length random in an interval): one set of 100 patterns of lengths between 10 and 50, and one set of 100 patterns of lengths between 50 and 100. Then, for the two algorithms, we counted the number of inspections per text character. The results are presented in Figures 5, 6 and 7 and in Tables 1, 2 and 3. On the binary alphabet, the results of the DAWG-MATCH algorithm are better for length 10 than for length 20 because of the added scanning with the AC machine until the shift is big enough, which saves a lot of inspections. From these results it appears that, for small alphabets, the DAWG-MATCH algorithm is much better than the simple Commentz-Walter algorithm. This is due to the fact that the Commentz-Walter algorithm computes its shifts from the suffixes it recognizes in the text; when the set of patterns is big, the probability that those suffixes reappear close to the right end of at least one pattern is very large, so the shifts computed by the Commentz-Walter algorithm are small.

Fig. 5. Results for an alphabet of size 2.

Table 1 Results for a binary alphabet.
<table> <thead> <tr> <th>Length of patterns</th> <th>Commentz-Walter</th> <th>DAWG-MATCH</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>4.4497</td> <td>1.1576</td> </tr> <tr> <td>20</td> <td>2.555</td> <td>1.6819</td> </tr> <tr> <td>30</td> <td>2.2186</td> <td>1.1075</td> </tr> <tr> <td>40</td> <td>1.8997</td> <td>0.8458</td> </tr> <tr> <td>50</td> <td>1.7599</td> <td>0.7016</td> </tr> <tr> <td>60</td> <td>1.5846</td> <td>0.5077</td> </tr> <tr> <td>70</td> <td>1.5605</td> <td>0.5222</td> </tr> <tr> <td>80</td> <td>1.4681</td> <td>0.5171</td> </tr> <tr> <td>90</td> <td>1.4185</td> <td>0.4512</td> </tr> <tr> <td>100</td> <td>1.3866</td> <td>0.3</td> </tr> <tr> <td>10–50</td> <td>3.34</td> <td>1.96</td> </tr> <tr> <td>50–100</td> <td>1.58</td> <td>0.63</td> </tr> </tbody> </table> Fig. 6. Results for an alphabet of size 4. Table 2 Results for an alphabet of size 4. <table> <thead> <tr> <th>Length of patterns</th> <th>Commentz-Walter</th> <th>DAWG-MATCH</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>1.9887</td> <td>1.4938</td> </tr> <tr> <td>20</td> <td>1.5612</td> <td>0.6884</td> </tr> <tr> <td>30</td> <td>1.3593</td> <td>0.47</td> </tr> <tr> <td>40</td> <td>1.2843</td> <td>0.3457</td> </tr> <tr> <td>50</td> <td>1.2094</td> <td>0.2785</td> </tr> <tr> <td>60</td> <td>1.1371</td> <td>0.2351</td> </tr> <tr> <td>70</td> <td>1.1205</td> <td>0.205</td> </tr> <tr> <td>80</td> <td>1.0521</td> <td>0.3402</td> </tr> <tr> <td>90</td> <td>1.0376</td> <td>0.2285</td> </tr> <tr> <td>100</td> <td>1.0258</td> <td>0.1462</td> </tr> <tr> <td>10-50</td> <td>1.83</td> <td>1.34</td> </tr> <tr> <td>50-100</td> <td>1.2</td> <td>0.27</td> </tr> </tbody> </table> Fig. 7. Results for an alphabet of size 8. Table 3 Results for an alphabet of size 8. 
<table>
<thead>
<tr> <th>Length of patterns</th> <th>Commentz-Walter</th> <th>DAWG-MATCH</th> </tr>
</thead>
<tbody>
<tr> <td>10</td> <td>1.5611</td> <td>0.8749</td> </tr>
<tr> <td>20</td> <td>1.255</td> <td>0.4313</td> </tr>
<tr> <td>30</td> <td>1.2007</td> <td>0.2923</td> </tr>
<tr> <td>40</td> <td>1.1144</td> <td>0.223</td> </tr>
<tr> <td>50</td> <td>1.0541</td> <td>0.181</td> </tr>
<tr> <td>60</td> <td>1.0335</td> <td>0.1828</td> </tr>
<tr> <td>70</td> <td>1.0138</td> <td>0.1964</td> </tr>
<tr> <td>80</td> <td>0.946</td> <td>0.2053</td> </tr>
<tr> <td>90</td> <td>0.9296</td> <td>0.1065</td> </tr>
<tr> <td>100</td> <td>0.8906</td> <td>0.0968</td> </tr>
<tr> <td>10–50</td> <td>1.5</td> <td>0.87</td> </tr>
<tr> <td>50–100</td> <td>1.04</td> <td>0.18</td> </tr>
</tbody>
</table>

Remark (on single-pattern matching) The previous algorithm is especially simple and efficient for one pattern. In this case we can replace the Aho-Corasick algorithm by the Knuth-Morris-Pratt algorithm. The preprocessing phase consists in building the failure function of Knuth-Morris-Pratt and the DAWG for the reversed pattern. Then, during the search phase, the shifts are computed as the difference between the length of the pattern and the position of the character of the pattern which is compared when the algorithm stops scanning from left to right.

References
(Provisional) Lecture 09: Recursive Expression Evaluation

10:00 AM, Sep 23, 2019

Contents

1 Warning
2 A Few Things to Note
3 Step-by-Step Evaluation (of a Complex Expression)
4 Anyone got a 17?
5 What About a Recursion Example?
6 More Recursion
7 How Fast or Slow is my Program?
8 Operation Counting
9 Summary

1 Warning

These provisional notes are mostly correct, and mostly follow the language used in class, but there are some differences, and the lecture TA will be modifying them over the next day or two. If something seems screwy, look at the revised notes to see if that clears it up. And in general, I hope that the organization of the day's Powerpoint slides will work well for you doing your own recursive evaluations. –Spike

2 A Few Things to Note

At any time in the course, you'll have encountered certain aspects of Racket. Your homework is to use only those aspects of Racket, not others. For instance, if the homework had asked you to produce a procedure that finds the length of a list, you should not write

```
(define mylen length)
```

because at this point, you have not encountered the built-in length.

Note: For these lecture notes, please refer to the lecture slides posted online to see the evaluation step-by-step. For the tables in these lecture notes:

- The first column contains an expression we want to evaluate.
- The second column (labeled Exp type.) will tell us what type of expression we're evaluating.
- The third column (labeled Env.) contains the relevant parts of our current environment.
- A red color in the Value (fourth) column means we haven't figured out the value of our expression yet (green will mean we have evaluated the expression and know the value).

3 Step-by-Step Evaluation (of a Complex Expression)

To run the Rules of Evaluation, we have to look at an expression and decide what kind of expression it is (this is where the second column comes in handy!). Steps in evaluating a "complex" expression:

1.
What kind of expression are we trying to evaluate? 2. Evaluate relevant subparts of the expression. For proc-app-expressions, that means both the first, which should evaluate to a procedure, and all the others. For things like if-expressions and or-expressions, short circuiting may involve only evaluating a few subparts. 3. Once you have values for the relevant “inner” expressions, you can evaluate the “complex” expression which is no longer complex! Note: Look at the lecture slides for why list-length works. 4 Anyone got a 17? We wrote contains17? last class, which takes an int list tests whether a list of integers contains the number 17, and produces a boolean. You should see a pattern, namely, that the input list contains a 17 in one of two situations: - if the rest of the list contains a 17, or - if not, but if the first item in the list happens to be a 17. Thus, if the Recursive Input is Original Output will be true. If the Recursive Output is false, but (first Original Input) is 17, Original Output is true. Otherwise Original Output is false. From this, and the design recipe, you should produce code that looks like this ;; Data Description: ;; An int list is either ;; empty ;; (cons n lst) where n is an int and lst is an int list ;; nothing else is an int list ;; Examples: ;; int: 0, 3, -2 (define lst0 empty) (define lst1 (cons 17 empty)) (define lst2 (cons 17 (cons 4 empty))) (define lst3 (cons 3 (cons 17 empty))) (define lst4 (cons 3 (cons 1 empty))) ;; ;; contains17? 
```
;; contains17? : (int list) -> bool
;;
;; input: aloi, a list of integers
;; output: true if aloi contains 17; false otherwise
;;
;; Recursion Diagram
;; Original Input: (cons 17 empty)
;; Recursive Input: empty
;; Recursive Output: true
;; Original Output: true
;;
;; Original Input: (cons 17 (cons 4 empty))
;; Recursive Input: (cons 4 empty)
;; Recursive Output: false
;; Original Output: true
;;
;; Original Input: (cons 3 (cons 17 empty))
;; Recursive Input: (cons 17 empty)
;; Recursive Output: true
;; Original Output: true
;;
;; Original Input: (cons 3 (cons 1 empty))
;; Recursive Input: (cons 1 empty)
;; Recursive Output: false
;; Original Output: false
;;
(define (contains17? aloi)
  (cond
    [(empty? aloi) false]
    [(cons? aloi) (if (contains17? (rest aloi))
                      true
                      (if (= 17 (first aloi)) true false))]))

(check-expect (contains17? lst0) false)
(check-expect (contains17? lst1) true)
(check-expect (contains17? lst2) true)
(check-expect (contains17? lst3) true)
(check-expect (contains17? lst4) false)
```

This is a correct, but ugly program. A Racket programmer would look at it and wonder why it looked like that. How come? We have the first if-expression returning a bool, and the second if-expression returning one of two boolean literals. Let's look at that last bit:

```
(if (= 17 (first aloi)) true false)
```

Suppose that the first item in aloi is 17. What's the value of the if-expression? It's true, right? Now ask yourself: what's the value of the "condition" part of the if-expression, i.e., of (= 17 (first aloi))? It's also true. Now suppose that the first item is not 17. Then the whole if-expression evaluates to false, but so does just the condition expression. So we can replace the whole if-expression by just the condition! Our definition now looks like this:

```
(define (contains17? aloi)
  (cond
    [(empty? aloi) false]
    [(cons? aloi) (if (contains17? (rest aloi))
                      true
                      (= 17 (first aloi)))]))
```

We're not done yet!
We've now got a situation in which we have two conditions, and if either one of them is true, the value we want is true; otherwise we want false. Well, that's exactly what or provides. We can rewrite:

```
(define (contains17? aloi)
  (cond
    [(empty? aloi) false]
    [(cons? aloi) (or (contains17? (rest aloi))
                      (= 17 (first aloi)))]))
```

Finally, suppose we have a list of 1000 items, and the first one is 17. Do we need to look at the other 999? Heck, no! So because of the way that or short-circuits, we should swap the order:

```
(define (contains17? aloi)
  (cond
    [(empty? aloi) false]
    [(cons? aloi) (or (= 17 (first aloi))
                      (contains17? (rest aloi)))]))
```

Now that is idiomatic Racket code!

5 What About a Recursion Example?

Remember that when we defined list-length (which we're now referring to as len for short), the result of that definition is that the identifier len is bound to a closure: a closure in which the argument is aloi and in which the body is a cond expression. For the remainder of our notes (and in the slides), this closure will be called C1.

```scheme
;; example ints: 1, 0
;; len : (int list) -> int
;; Input: a list of integers, aloi
;; Output: an integer, the length of aloi
(define (len aloi)
  (cond
    [(empty? aloi) 0]
    [(cons? aloi) (+ 1 (len (rest aloi)))]))

;; An example of using the len procedure
(len (cons 1 empty))
;; We'll refer to the first subexpression, len, as A,
;; and to (cons 1 empty) as B.
```

Evaluating (len (cons 1 empty)):

1. First, we evaluate A, namely len, to see that the result is a closure, C1, a kind of procedure value, so overall we're working with a proc-app-expression.
2. Evaluate B, also a procedure application, which evaluates to the list (cons 1 empty).
3. Evaluate the body of C1 in an environment consisting of the TLE, extended by new bindings in which the formal arguments are bound to the actual arguments:

(a) Evaluate the cond expression by looking at each condition one by one (in order).
(b) Evaluate each condition and when you hit one that evaluates to true:
(c) ...evaluate the corresponding result expression in the same context (environment).
(d) Repeat (1-3) (since (3) is recursive) until you get a final value.

Visually, this process of evaluation for (len (cons 1 empty)) will look something like this:

<table>
<thead>
<tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr>
</thead>
<tbody>
<tr> <td>len</td> <td>user-defined proc</td> <td>Top Level Environment</td> <td>C1</td> </tr>
</tbody>
</table>

2. Evaluate B, also a procedure application, which evaluates to the list (cons 1 empty).

<table>
<thead>
<tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr>
</thead>
<tbody>
<tr> <td>len</td> <td>user-defined procedure</td> <td>Top Level Environment (len: C1)</td> <td>C1</td> </tr>
<tr> <td>(cons 1 empty)</td> <td>procedure application</td> <td>...</td> <td>(cons 1 empty)</td> </tr>
</tbody>
</table>

3. Evaluate the body of C1 in the Top Level Environment, extended by new bindings where the formal arguments are bound to the actual arguments.

Recall that our closure, C1, represents a procedure. Visually, it looks something like:

```
args: aloi
body: (cond [(empty? .....)]
            [(cons? .....)])
```

Note that "args" corresponds to any inputs to the procedure, in this case aloi, and "body" corresponds to the unevaluated expression which constitutes the body of our length procedure. In this case, the body is a cond expression which produces one result if the list is a cons, and another if it's empty.

Now it's time to evaluate! To do so, we extend our Top Level Environment by adding a new, Local Environment, where the formal arguments have been bound to the actual arguments.
This local environment is only temporary, and will only exist for as long as it takes for the body to be fully evaluated. So, we now have:

Top Level Environment
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>len</td> <td>C1</td> </tr> </tbody>
</table>

and,

Local Environment
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>aloi</td> <td>(cons 1 empty)</td> </tr> </tbody>
</table>

In this case we have aloi as our formal argument and (cons 1 empty) as our actual argument. The actual arguments are bound to the formal arguments temporarily. So, as we now start to evaluate the body of the closure, we will look up the values of any identifiers we find in these two environments. Racket will first look in the Local Environment for the binding, and, if the identifier is not found, continue searching in the Top Level Environment.

(a) Evaluate the cond expression by looking at each condition one by one (in order).
(b) Evaluate each condition and when you hit one that evaluates to true:
(c) ...evaluate the corresponding result expression in the same context (environment).

Following these three steps, we look up aloi in the environments (in the order outlined above), and find that aloi is indeed bound to (cons 1 empty) in the Local Environment. So, finding that we have a **cons** list, we go on to evaluate the corresponding result expression in the same context. Visually, this looks something like:

<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>(+ 1 (len (rest aloi)))</td> <td>procedure application expression</td> <td>See environments above</td> <td>?</td> </tr>
</tbody>
</table>

Following the rules of evaluation for evaluating a procedure application expression, we evaluate it one expression at a time.
<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>(+ 1 (len (rest aloi)))</td> <td>procedure application expression</td> <td>See environments above</td> <td>?</td> </tr>
<tr> <td>+</td> <td>builtin procedure</td> <td></td> <td>Closure</td> </tr>
<tr> <td>1</td> <td>number</td> <td></td> <td>1</td> </tr>
<tr> <td>(len (rest aloi))</td> <td>procedure application</td> <td></td> <td>?</td> </tr>
</tbody>
</table>

In the last step, Racket recognizes that `(len (rest aloi))` is in fact a procedure application expression. So following the rules of evaluation, going through one piece at a time:

<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>len</td> <td>user-defined procedure</td> <td>See environments above</td> <td>C1</td> </tr>
</tbody>
</table>

Just as above, looking up the identifier `len` in our Top Level Environment (extended by the local environment) gives us the closure `C1`, since that binding remains in the Top Level Environment. Now all that's left is to evaluate the actual argument given to our user-defined procedure, `(rest aloi)`. Remembering that we have to look up the value of `aloi` in our Top Level Environment extended by our Local Environment, we get:

<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>(rest aloi)</td> <td>procedure application</td> <td>Local Environment (aloi: (cons 1 empty))</td> <td>empty</td> </tr>
</tbody>
</table>

Since `(rest (cons 1 empty))` (i.e., the rest of what we get when we look up `aloi`) is `empty`, this will give us `empty`.

(d) Repeat (1-3) (since (3) is recursive) until you get a final value.
Now knowing that we are invoking the `len` procedure on an empty list, we follow the exact same steps as we did above when we invoked `len` on a cons list. Namely, we evaluate the body of the closure `C1` in an environment consisting of the TLE plus a local environment, where the formal arguments are bound to the actual arguments. We now have:

Top Level Environment
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>len</td> <td>C1</td> </tr> </tbody>
</table>

and,

Local Environment 1
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>aloi</td> <td>empty</td> </tr> </tbody>
</table>

(The Local Environment from the first call, where `aloi` is bound to `(cons 1 empty)`, still exists underneath.) When looking up an identifier in these environments, we start with the most recent, and work our way down. So, when looking up any identifiers in this case, we'd start with Local Environment 1, then the older Local Environment, then the Top Level Environment. You can think of local environments like index cards - each time you add a new one, you stack it on top of the old ones, and always look in the top-most index card first when looking up identifiers.

Now, again we

(a) Evaluate the **cond** expression by looking at each condition one by one (in order).
(b) Evaluate each condition and when you hit one that evaluates to **true**:
(c) ...evaluate the corresponding result expression in the same context (environment).

In this case, when we go to look up aloi, we find that it's **empty**! So when we evaluate the corresponding result expression for the appropriate **cond** case, we just get 0. Note that, once our closure has returned a value and the procedure has terminated, the local environment which had the temporary bindings between the formal and actual arguments for that procedure goes away.
So, after 0 is returned, we are back to evaluating the first use of **len** and the environment looks like this:

Top Level Environment
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>len</td> <td>C1</td> </tr> </tbody>
</table>

and,

Local Environment
<table>
<thead> <tr> <th>identifier</th> <th>value</th> </tr> </thead>
<tbody> <tr> <td>aloi</td> <td>(cons 1 empty)</td> </tr> </tbody>
</table>

(Local Environment 1, which belonged to the recursive call, has gone away.) Knowing now what (len (rest aloi)) evaluates to, we can go back and update our table from before!

<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>+</td> <td>builtin procedure</td> <td>""</td> <td>Closure</td> </tr>
<tr> <td>1</td> <td>number</td> <td>""</td> <td>1</td> </tr>
<tr> <td>(len (rest aloi))</td> <td>procedure application</td> <td>""</td> <td>0</td> </tr>
</tbody>
</table>

And, now that we know the value of everything in our procedure-application expression, we can evaluate the procedure-application expression as a whole!

<table>
<thead> <tr> <th>Expression</th> <th>Exp Type</th> <th>Envt</th> <th>Value</th> </tr> </thead>
<tbody>
<tr> <td>(+ 1 0)</td> <td>procedure application</td> <td>See environments above</td> <td>1</td> </tr>
</tbody>
</table>

Once again, now that our procedure is done and has returned a value, the local environment which contained the temporary bindings between its formal and actual arguments will disappear, leaving us only with the Top Level Environment. When in doubt, follow these rules of evaluation to find the result of any recursive procedure!

6 More Recursion

While evaluating these procedures, you may be wondering why recursion even works; specifically, why we can call a procedure inside its own definition, before that definition is complete. The answer lies in how Racket programs are actually evaluated: the Rules of Evaluation! Consider the `len` procedure.
`len` is bound to a closure whose argument is `aloi` and whose body is the body of the procedure. The trick is that this body is not evaluated at the time the binding is added; it is simply put in the Top Level Environment as it is. So every time we call `len`, Racket finds a binding to a closure, and evaluates it normally.

Note: Most of the examples from the slides are applicable to all types of data. The procedures written in class were for numbers, but here, the same procedures will be generic (if possible) so you can distinguish small (but very important!) stylistic and functional differences.

**SUPER Important Note:** All lists in CS17 are homogeneous, i.e., they must contain elements of the same data type. For example, `(cons 3 (cons 0 empty))` is a perfectly acceptable list. As are `empty` and `(cons "CS" (cons "17" (cons "Rocks" empty)))`. However, `(cons "CS" (cons 17 (cons "Rocks" empty)))` is an unacceptable list, despite how true that statement is. Notice the difference between `(cons "CS" (cons "17" (cons "Rocks" empty)))` and `(cons "CS" (cons 17 (cons "Rocks" empty)))`: in the former, `"17"` is a string, as are the other two elements; in the latter, `17` is an integer while the other two elements are strings.

7 How Fast or Slow is my Program?

When we have two functions, for example a linear and an exponential function, we've seen how to characterize one as eventually larger or smaller than the other. We can do this when we have a mathematical relation describing the functions. When we want to use this technique to evaluate how fast or slow a computer program is, how would we go about it? We would need to find a recurrence relation: a mathematical relation describing the program, which we can solve to show how fast the program is without really finding the exact function.
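As a preview, here is a hedged sketch of what such a recurrence might look like for the `len` procedure from earlier. The constants \( c \) and \( c_0 \) are illustrative placeholders for "some fixed amount of work", not values we computed in class:

\( T(0) = c_0 \) (the work for the empty-list case), and \( T(n) = T(n-1) + c \) for \( n > 0 \) (one cons case's work, plus the recursive call on a list one item shorter).

Unrolling the recurrence gives \( T(n) = c \cdot n + c_0 \): the running time grows linearly with the length of the list.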
We will be learning about how to do this in detail in the coming weeks! Faster programs were much more important in the past when computers were expensive, but recently the focus has shifted somewhat to how easy a program is to maintain. However, as big programs have to deal with more and more data, speed becomes extremely important. Now the important question is: for bigger data sets, does our program eventually start getting faster? Because if we talk about any fixed-size data set, we know that eventually our program will get fast, with computers becoming increasingly powerful. But we will always keep getting more data, faster than we can keep up with. This is where programs that are fast in the long run become important.

Some more things to keep in mind as we start talking about this are the other factors, unrelated to how good your code is, that can affect how long your program takes to run. Because people code in different languages, use computers with different processors, and so on, we ignore constants while evaluating the speed of programs. If one program takes twice as long as another, then from our point of view they are equally fast. There are two reasons for this: firstly, practically it doesn't matter, as it does not cause a large effect in the long run; and secondly, it makes the math much easier!

8 Operation Counting

Let's count how many operations are needed to evaluate a couple of expressions. The operations cons, first, rest, +, -, *, /, empty?, cons?, or, and, =, binding a name to a value, looking up a procedure, and evaluating a num, bool, string, or empty each take constant time, i.e., each counts as 1 operation.

1. Let's look at (+ 3 5). How many operations does this take?

(a) 1 for evaluating 3
(b) 1 for evaluating 5
(c) 1 for looking up + in the Top Level Environment.
(d) 1 for actually performing the addition operation.

Therefore, the operation count for (+ 3 5) is 4.

2. Let's now look at (contains17? empty).
How many operations does this take?

(a) 1 for looking up contains17?
(b) 1 for evaluating empty
(c) 1 for binding aloi to empty
(d) 1 for evaluating empty?
(e) 1 for evaluating aloi
(f) 1 for performing the operation empty? on aloi
(g) 1 for evaluating false

Therefore, the operation count for (contains17? empty) is 7. Now that we have evaluated the base case for contains17?, we can approximately count the operations for a one-element list: 18, which includes the base-case count. Thus we can generalize the operation count for a list of length n to be 11n + 7, where 7 was our base-case operation count.

9 Summary

Ideas

- When we are evaluating a procedure application expression, we always start by extending our environment with a new, local environment, which binds the formal arguments of the procedure to the actual arguments that the procedure is being applied to. This local environment will disappear once our procedure application has been successfully evaluated (i.e., we've followed the logic of the body of the procedure and determined the correct value to return).
- We know that, in CS 17, lists are homogeneous (i.e., they can only contain items of the same data type).
- We know how to write a recursive procedure that will check to see if an input list contains the number 17.

Skills

- We've learned how to use tables to break down the evaluation of a recursive procedure. That is, once we know we are dealing with a procedure application expression, we know how to look up the appropriate identifiers in our environments in the correct order (i.e., most recent local environments first, and the top level environment last) and follow the rules of evaluation until we reach a base case. Then, we take that result, and retrace our steps through our recursive calls to produce one final result.
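The linear-growth claim in the operation-counting section can be made concrete with a small Racket sketch in the style of the class code. Note that count-calls is a hypothetical helper invented here for illustration, not something we wrote in class: it counts how many times a contains17?-style procedure would be invoked on a list containing no 17, which is one call per cons cell plus one for the empty base case. Since each call does a constant amount of work, the total operation count comes out linear, like the 11n + 7 above.

```
;; count-calls : (int list) -> int
;; Hypothetical helper (not from class): counts the invocations a
;; contains17?-style walk makes on a list with no 17 in it;
;; one per cons cell, plus one for the empty base case.
(define (count-calls aloi)
  (cond
    [(empty? aloi) 1]
    [(cons? aloi) (+ 1 (count-calls (rest aloi)))]))

(check-expect (count-calls empty) 1)
(check-expect (count-calls (cons 3 (cons 1 empty))) 3)
```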
Please let us know if you find any mistakes, inconsistencies, or confusing language in this or any other CS 17 document by filling out the anonymous feedback form: [http://cs.brown.edu/courses/csci0170/feedback](http://cs.brown.edu/courses/csci0170/feedback).
Reading

Java: Java Network Programming, Harold, 3rd Ed.
- Chapter 2 - Network Basics
- Chapter 4 - Streams
- Chapter 9 - Sockets for Clients

Ruby: Programming Ruby, Thomas, 2nd Ed.
- Chapter 10 - Basic Input & Output
- Class IO documentation (pp 503-515)
- IPSocket & TCPSocket in Appendix A

Wikipedia, various articles, explicit references on individual slides

Network Overview
- Messages divided into packets
- Each packet routed separately
- Routing issues
- Overhead issues

Send Message To Machine A

This is just a sample message that one might send on a network to another machine.

Sending: the message is broken into numbered packets addressed to A:

A:1:This is just a samp
A:2:le message that one
A:3: might send on a ne
A:4:twork to another ma
A:5:chine.

Network Cloud (diagram: machines A and B connected through the network cloud)

Tanenbaum – please forgive the gross oversimplification here.
Issues
- How does the message get to A?
- How does the message get to the correct program on A?
- How do packets get lost?
- How do packets get out of order?

Routers

Some Useful Programs
- netstat: show status of network connections on machine
- lsof: list open files (& pipes & sockets)
- traceroute: show the route to a remote machine

netstat (Windows, Unix/Linux)

```
Al pro 14->netstat
Active Internet connections
Proto Recv-Q Send-Q  Local Address          Foreign Address          (state)
tcp4   17680      0  10.0.1.192.60840       kusc-pc-stream2..irdmi   ESTABLISHED
tcp4       0      0  10.0.1.192.60627       208.43.202.32-st.http    ESTABLISHED
tcp4       0      0  10.0.1.192.60623       adsl-68-20-22-55.28205   ESTABLISHED
tcp4       0      0  localhost.26164        localhost.60431          ESTABLISHED
tcp4       0      0  localhost.60431        localhost.26164          ESTABLISHED
tcp4       0      0  10.0.1.192.afpovertcp  10.0.1.200.53611         ESTABLISHED
tcp4       0      0  localhost.26164        localhost.53896          ESTABLISHED
tcp4       0      0  localhost.53896        localhost.26164          ESTABLISHED
tcp4       0      0  localhost.26164        localhost.51153          ESTABLISHED
tcp4       0      0  localhost.51153        localhost.26164          ESTABLISHED
tcp4       0      0  localhost.26164        localhost.49164          ESTABLISHED
tcp4       0      0  localhost.49164        localhost.26164          ESTABLISHED
tcp4      37      0  10.0.1.192.49163       174.36.30.66-sta.https   CLOSE_WAIT
tcp4      37      0  10.0.1.192.49162       174.36.30.67-sta.https   CLOSE_WAIT
tcp4       0      0  10.0.1.192.60862       WAREHOUSE-THREE-.55510   TIME_WAIT
tcp4       0      0  10.0.1.192.60861       dhcp128036163075.55165   TIME_WAIT
etc
```

```
Al pro 13->netstat -s
tcp:
    1290678 packets sent
        169 data packets (52330 bytes) retransmitted
    686952 ack-only packets (8670 delayed)
    1530460 packets received
        332351 acks (for 114678930 bytes)
        8686 duplicate acks
        760 completely duplicate packets (611003 bytes)
        27337 out-of-order packets (34890076 bytes)
    11654 connection requests
    104 connection accepts
    48 bad connection attempts
    8 listen queue overflows
etc.
```

lsof (Unix/Linux): list open files - disk files, pipes and network sockets

```
Air 18->lsof -i
COMMAND    PID    USER  FD   TYPE    DEVICE SIZE/OFF NODE NAME
SystemUIS  194 whitney   9u  IPv4 0x3da8f40      0t0  UDP *:*
adb       1141 whitney   5u  IPv4 0x9f28270      0t0  TCP localhost:5037 (LISTEN)
Safari    3234 whitney  56u  IPv6 0x3dac19c      0t0  TCP localhost:59088->localhost:59087 (TIME_WAIT)
Safari    3234 whitney  71u  IPv4 0x623366c      0t0  TCP 146.244.205.67:50097->a198-189-255-145.deploy.akamaitechnologies.com:http (CLOSE_WAIT)
Safari    3234 whitney  74u  IPv4 0x61fb270      0t0  TCP 146.244.205.67:50099->a198-189-255-145.deploy.akamaitechnologies.com:http (CLOSE_WAIT)
```

See http://en.wikipedia.org/wiki/Lsof

traceroute (tracepath - Linux, tracert - Windows)

```
Al pro 15->traceroute www.sdsu.edu
traceroute to www.sdsu.edu (130.191.8.198), 64 hops max, 40 byte packets
 1  10.0.1.1 (10.0.1.1)  0.679 ms  0.192 ms  0.174 ms
 2  ip68-8-224-1.sd.sd.cox.net (68.8.224.1)  8.317 ms  6.879 ms  7.574 ms
 3  fed1sysc01-gex0915.sd.sd.cox.net (68.6.10.106)  15.600 ms  8.736 ms  11.449 ms
 4  fed1sysc10-get0005.sd.sd.cox.net (68.6.8.78)  10.456 ms  10.895 ms  8.740 ms
 5  dt1xaggc01-get0701.sd.sd.cox.net (68.6.8.49)  12.298 ms  23.956 ms  7.625 ms
 6  sdscbcsf01-fex0301.cox-sd.net (209.242.135.150)  7.831 ms  8.904 ms  8.057 ms
 7  sdsccbcsf01-fex0301.cox-sd.net (209.242.135.150)  7.831 ms  8.904 ms  8.057 ms
 8  sdsccbcsf01-fex0301.cox-sd.net (209.242.135.150)  7.831 ms  8.904 ms  8.057 ms
 9  dc-sd-csu-egm--sdg-dc1.cenic.net (137.164.41.138)  13.097 ms  13.603 ms  15.632 ms
10  * * *
11  * * *
```

How do packets get out of order?
- different routes
- different wait times in router buffers

Internet Protocol (IP or TCP/IP)
- Application Layer: DHCP, DNS, FTP, HTTP, SSH, Telnet, (more)
- Transport Layer: TCP, UDP, (more)
- Internet Layer: IPv4, IPv6, (more)
- Link Layer: Ethernet, DSL, ISDN, FDDI, (more)

This chart shows the IP address space on a plane using a fractal mapping which preserves grouping -- any consecutive string of IPs will translate to a single compact, contiguous region on the map. Each of the 256 numbered blocks represents one /8 subnet (containing all IPs that start with that number). The upper left section shows the blocks sold directly to corporations and governments in the 1990s before the RIRs took over allocation.
http://xkcd.com/195/

TCP
- Handles lost packets
- Handles packet order

TCP has delays
- Starting of connection
- Closing of connection
- Resending packets

Client & server don't have to deal with
- Packet order
- Packet loss

## TCP Header

<table>
<thead>
<tr> <th>Bit offset</th> <th>0–15</th> <th>16–31</th> </tr>
</thead>
<tbody>
<tr> <td>0</td> <td>Source port</td> <td>Destination port</td> </tr>
<tr> <td>32</td> <td colspan="2">Sequence number</td> </tr>
<tr> <td>64</td> <td colspan="2">Acknowledgment number</td> </tr>
<tr> <td>96</td> <td colspan="2">Data offset, Reserved, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN, Window size</td> </tr>
<tr> <td>128</td> <td>Checksum</td> <td>Urgent pointer</td> </tr>
<tr> <td>160</td> <td colspan="2">Options (if Data Offset &gt; 5)</td> </tr>
</tbody>
</table>

## IP Header

<table>
<thead>
<tr> <th>bit offset</th> <th>0–3</th> <th>4–7</th> <th>8–15</th> <th>16–18</th> <th>19–31</th> </tr>
</thead>
<tbody>
<tr> <td>0</td> <td>Version</td> <td>Header length</td> <td>Differentiated Services</td> <td colspan="2">Total Length</td> </tr>
<tr> <td>32</td> <td colspan="3">Identification</td> <td>Flags</td> <td>Fragment Offset</td> </tr>
<tr> <td>64</td> <td colspan="2">Time to Live</td> <td>Protocol</td> <td colspan="2">Header Checksum</td> </tr>
<tr> <td>96</td> <td colspan="5">Source Address</td> </tr>
<tr> <td>128</td> <td colspan="5">Destination Address</td> </tr>
<tr> <td>160</td> <td colspan="5">Options (if Header Length &gt; 5)</td> </tr>
<tr> <td>160 or 192+</td> <td colspan="5">Data</td> </tr>
</tbody>
</table>

## Ethernet Frame

<table>
<thead>
<tr> <th>Preamble</th> <th>Start-of-Frame Delimiter</th> <th>MAC destination</th> <th>MAC source</th> <th>802.1Q header (optional)</th> <th>Ethertype/Length</th> <th>Payload (Data and padding)</th> <th>CRC32</th> <th>Interframe gap</th> </tr>
</thead>
<tbody>
<tr> <td>7 octets of 10101010</td> <td>1 octet of 10101011</td> <td>6 octets</td> <td>6 octets</td> <td>(4 octets)</td> <td>2 octets</td> <td>46–1500 octets</td> <td>4 octets</td> <td>12 octets</td> </tr>
</tbody>
</table>

The frame proper is 64–1522 octets; with the preamble and start-of-frame delimiter, 72–1530 octets; and including the interframe gap, 84–1542 octets.

http://en.wikipedia.org/wiki/Ethernet

UDP
- Fast
- Packets are treated individually
- Packets may arrive out of order
- Packets may be lost
- Client & server must handle resulting problems

Used by:
- Games
- NFS

## UDP Header

<table>
<thead>
<tr> <th>bits</th> <th>0 - 15</th> <th>16 - 31</th> </tr>
</thead>
<tbody>
<tr> <td>0</td> <td>Source Port</td> <td>Destination Port</td> </tr>
<tr> <td>32</td> <td>Length</td> <td>Checksum</td> </tr>
<tr> <td>64</td> <td colspan="2">Data</td> </tr>
</tbody>
</table>

TCP States

Flow Control
- Receiver: tells sender how much data it will buffer (receive window)
- Sender: sends one buffer full of data, then waits for acknowledgement & window update

Congestion Control
- Slow start
- Congestion avoidance
- Fast retransmit
- Fast recovery

IP Addresses

An IP address is currently a 32-bit number: 130.191.3.100 (four 8-bit numbers).

IPv6 uses 128-bit numbers for addresses:
105.220.136.100.0.0.0.0.0.0.18.128.140.10.255.255
69DC:8864:0:0:0:1280:8C0A:FFFF
69DC:8864::1280:8C0A:FFFF

Machines on a network need a unique IP address.

What is the difference between a MAC address and an IP address?

Domain Name System (DNS)
- Maps machine names to IP addresses
- Internet Corporation for Assigned Names and Numbers (ICANN http://www.icann.org/) oversees assigning TLDs

Graphic is from http://en.wikipedia.org/wiki/Domain_name_system

Unix "host" command
Shows the mapping between machine names and IP addresses:

```
-> host rohan.sdsu.edu
rohan.sdsu.edu has address 130.191.3.100

-> host 130.191.3.100
100.3.191.130.IN-ADDR.ARPA domain name pointer rohan.sdsu.edu
```

## Ports

TCP/IP supports multiple logical communication channels called ports. Ports are numbered from 0 to 65535.

A connection between two machines is uniquely defined by:

- Protocol (TCP or UDP)
- IP address of the local machine
- Port number used on the local machine
- IP address of the remote machine
- Port number used on the remote machine

When a client connects to a server, it has to specify a machine and a port. The OS on the server keeps a table of port numbers and applications (sockets from the program) associated with each port number. When a client request comes in, the OS forwards the request to the socket associated with the port number, if one is connected to that port.

A similar thing happens on the client side. When you open a socket on the client to connect to the server, the client socket is assigned a port on the client machine. When the server responds to the client, it sends the response to that port on the client machine.
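These five values can be inspected from Ruby. A minimal loopback sketch (not part of the original notes): the server binds port 0 so the OS picks a free port, and the client is automatically assigned an ephemeral local port.

```ruby
require 'socket'

# A connection is defined by protocol + local/remote address + local/remote
# port. Open a listener on an OS-assigned port, connect over loopback, and
# read both ends of the resulting connection.
server = TCPServer.new('127.0.0.1', 0)   # port 0: let the OS pick
port = server.addr[1]

client   = TCPSocket.new('127.0.0.1', port)
accepted = server.accept

client_local_port  = client.addr[1]      # ephemeral port assigned to the client
client_remote_port = client.peeraddr[1]  # the server's listening port
server_seen_port   = accepted.peeraddr[1]

puts "client #{client.addr[3]}:#{client_local_port} -> " \
     "server #{client.peeraddr[3]}:#{client_remote_port}"
puts "server sees client port #{server_seen_port}"

client.close; accepted.close; server.close
```

Note that the server sees the client's ephemeral port as the remote half of the connection, which is how responses find their way back.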
## Some Port Numbers

<table>
<tbody>
<tr> <td>Well known Ports</td> <td>1-1023</td> </tr>
<tr> <td>Registered Ports</td> <td>1024-49151</td> </tr>
<tr> <td>Dynamic/Private Ports</td> <td>49152-65535</td> </tr>
</tbody>
</table>

<table> <thead> <tr> <th>Service</th> <th>Port</th> </tr> </thead> <tbody> <tr> <td>echo</td> <td>7</td> </tr> <tr> <td>discard</td> <td>9</td> </tr> <tr> <td>ftp</td> <td>21</td> </tr> <tr> <td>ssh</td> <td>22</td> </tr> <tr> <td>telnet</td> <td>23</td> </tr> <tr> <td>smtp</td> <td>25</td> </tr> <tr> <td>time</td> <td>37</td> </tr> <tr> <td>http</td> <td>80</td> </tr> <tr> <td>pop</td> <td>110</td> </tr> <tr> <td>https</td> <td>443</td> </tr> <tr> <td>doom</td> <td>666</td> </tr> <tr> <td>mysql</td> <td>3306</td> </tr> <tr> <td>postgresql</td> <td>5432</td> </tr> <tr> <td>gnutella</td> <td>6346 6347</td> </tr> </tbody> </table>

For a local list of services: file://rohan.sdsu.edu/etc/services

For a complete list see: http://www.iana.org/assignments/port-numbers

See the IANA numbers page http://www.iana.org/numbers.html for more information about protocol numbers and assignment of services.

## What is Telnet?

Protocol:

- Sends text between client & server

Server:

- Requests login
- Sends text to shell to be executed
- Returns result of commands

Client:

- Transfers text between user and server

## Telnet & Other Text-based Protocols

```
rohan 37 -> telnet www.eli.sdsu.edu 80
GET /courses/spring06/cs580/index.html HTTP/1.0
<CR>
<CR>
```

Note: `<CR>` indicates where you need to hit return.

```
rohan 38 -> telnet cs.sdsu.edu 110
Trying 130.191.226.116...
Connected to cs.sdsu.edu.
Escape character is '^]'.
+OK QPOP (version 3.1.2) at sciences.sdsu.edu starting.
USER whitney
+OK Password required for whitney.
PASS typeYourPasswordHere
+OK whitney has 116 visible messages (0 hidden) in 640516 octets.
```
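The name-to-port mappings in the table come from the system services database. They can be read programmatically in Ruby; this sketch (not from the notes) assumes a Unix-like system where `/etc/services` is present, and uses `pop3` for the entry the table abbreviates as `pop`.

```ruby
require 'socket'

# Socket.getservbyname consults the system services database and returns
# the port number registered for a service name (protocol defaults to tcp).
%w[ftp ssh telnet smtp http pop3 https].each do |name|
  puts "#{name}\t#{Socket.getservbyname(name)}"
end
```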
## Simple Date Example - Protocol

<table> <thead> <tr> <th>Client Commands</th> <th>Server Response</th> </tr> </thead> <tbody> <tr> <td>&quot;date&quot; ended by line feed &quot;date\n&quot;</td> <td>current date ended by line feed &quot;January 30, 2007\n&quot;</td> </tr> <tr> <td>&quot;time&quot; ended by line feed &quot;time\n&quot;</td> <td>Current time ended by line feed &quot;6:58 pm\n&quot;</td> </tr> </tbody> </table>

Server listens for an incoming request. On request:

- reads command
- returns response
- closes connection

On client errors: action not specified.

Beware:

- Can only send bytes across the network
- Client & server may be different hardware platforms
- What is a newline?
- End-of-file indicates the connection is closed

```java
import java.io.*;
import java.net.Socket;

class DateClient {
    String server;
    int port;

    public DateClient(String serverAddress, int port) {
        server = serverAddress;
        this.port = port;
    }

    public String date() { return send("date\n"); }

    public String time() { return send("time\n"); }

    private String send(String text) {
        try {
            Socket connection = new Socket(server, port);
            OutputStream rawOut = connection.getOutputStream();
            PrintStream out = new PrintStream(new BufferedOutputStream(rawOut));
            InputStream rawIn = connection.getInputStream();
            BufferedReader in = new BufferedReader(new InputStreamReader(rawIn));
            out.print(text);
            out.flush();
            String answer = in.readLine();
            out.close();
            in.close();
            return answer;
        } catch (IOException e) {
            return "Error in connecting to server";
        }
    }
}
```

Bad, very bad – using PrintStream for network code.
## Running the Client

```java
System.out.println("hi");
DateClient client = new DateClient("127.0.0.1", 4444);
System.out.println(client.date());
System.out.println(client.time());
```

Issue - Avoid small packets

```java
OutputStream rawOut = connection.getOutputStream();
PrintStream out = new PrintStream(new BufferedOutputStream(rawOut));
```

Issue - Actually send the request

```java
out.flush();
```

Issue - Client will not work on all platforms

```java
String answer = in.readLine();
```

Don't do this:

```java
String answer = in.readLine();
```

I did it to keep the example small. One cannot get much code on a slide using 24 point font. Plus the Ruby example is shorter than this.

Issue - Close the connection when done

```java
out.close();
in.close();
```

Issue - Testing

How does one test the client?

Issue - Background material

Java Streams: read Chapter 4. Sockets: read Chapter 10. Java Network Programming, Harold, 3rd ed.

## Ruby Date Client

```ruby
require 'socket'

class DateClient
  def initialize(serverAddress, port)
    @server = serverAddress
    @port = port
  end

  def date()
    send("date\n")
  end

  def time()
    send("time\n")
  end

  private

  def send(text)
    connection = TCPSocket.new(@server, @port)
    connection.send(text, 0)
    answer = connection.gets("\n")
    connection.close
    answer
  end
end
```

Running the client:

```ruby
client = DateClient.new("127.0.0.1", 4444)
puts client.date
puts client.time
```

A revised `send` using `print` and `flush`:

```ruby
def send(text)
  connection = TCPSocket.new(@server, @port)
  connection.print(text)
  connection.flush
  answer = connection.gets("\n")
  connection.close
  answer
end
```

Ruby Background

Sockets: read IPSocket & TCPSocket in Appendix A. IO: Chapter 10, Basic Input & Output; class IO documentation (pp. 503-515). Programming Ruby, Thomas, 2nd ed.

## Server Basic Algorithm

```
while (true) {
    Wait for an incoming request;
    Perform whatever actions are requested;
}
```

## Basic Server Issues

- How to wait for an incoming request?
- How to know when there is a request?
- What happens when there are multiple requests?
- How do clients know how to contact the server?
- How to parse the client request?
- How do we know when the server has the entire request?
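One answer to the testing question (a sketch, not from the notes): run a fake server with canned answers inside the test process and point the client at it. This reuses the Ruby DateClient from the notes, with the fake server on an OS-assigned port.

```ruby
require 'socket'

# The Ruby DateClient from the notes.
class DateClient
  def initialize(serverAddress, port)
    @server = serverAddress
    @port = port
  end

  def date() send("date\n") end
  def time() send("time\n") end

  private

  def send(text)
    connection = TCPSocket.new(@server, @port)
    connection.print(text)
    connection.flush
    answer = connection.gets("\n")
    connection.close
    answer
  end
end

# Fake server: same one-line protocol, canned answers, OS-assigned port.
CANNED = { "date" => "January 30, 2007", "time" => "6:58 pm" }
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

Thread.new do
  loop do
    session = server.accept
    request = session.gets("\n").strip
    session.print((CANNED[request] || "Invalid request") + "\n")
    session.close
  end
end

# Now the client can be checked against known answers.
client = DateClient.new('127.0.0.1', port)
puts client.date   # returns "January 30, 2007\n"
puts client.time   # returns "6:58 pm\n"
```

Because the fake server's replies are fixed, a test can compare the client's return values byte for byte, including the trailing newline.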
```java
import java.io.*;
import java.net.*;
import java.util.logging.Logger;

public class DateServer {
    private static Logger log = Logger.getLogger("dateLogger");

    public static void main(String args[]) throws IOException {
        // ProgramProperties is a course utility class for command-line flags.
        ProgramProperties flags = new ProgramProperties(args);
        int port = flags.getInt("port", 8765);
        new DateServer().run(port);
    }

    public void run(int port) throws IOException {
        ServerSocket input = new ServerSocket(port);
        log.info("Server running on port " + input.getLocalPort());
        while (true) {
            Socket client = input.accept();
            log.info("Request from " + client.getInetAddress());
            processRequest(client.getInputStream(), client.getOutputStream());
            client.close();
        }
    }
}
```

Java Date Server Continued

```java
void processRequest(InputStream in, OutputStream out) throws IOException {
    BufferedReader parsedInput = new BufferedReader(new InputStreamReader(in));
    boolean autoflushOn = true;
    PrintWriter parsedOutput = new PrintWriter(out, autoflushOn);
    String inputLine = parsedInput.readLine();
    if (inputLine.startsWith("date")) {
        Date now = new Date();
        parsedOutput.println(now.toString());
    }
}
```

This server needs work.

```
rohan 16-> java -jar DateServer.jar
Feb 19, 2004 10:56:59 AM DateServer run
INFO: Server running on port 8765
```

## Ruby Date Server

```ruby
require 'socket'

class DateServer
  def initialize(port)
    @port = port
  end

  def run()
    server = TCPServer.new(@port)
    puts("start " + @port.to_s)
    while (session = server.accept)
      Thread.new(session) do |connection|
        process_request_on(connection)
        connection.close
      end
    end
  end

  private

  def process_request_on(socket)
    request = canonical_form(socket.gets("\n"))
    now = Time.now
    answer = case request
             when 'time' then now.strftime("%X")
             when 'date' then now.strftime("%x")
             else "Invalid request"
             end
    socket.send(answer + "\n", 0)
  end

  def canonical_form(string)
    string.lstrip.rstrip.downcase
  end
end
```

Issue - Date Format

What format does the server use for time and date? Clients need to know so they can parse them.
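The format question is answered by the `strftime` patterns the Ruby server uses: `%x` for the date and `%X` for the time. In Ruby these are fixed representations (month/day/year with two-digit fields, and a 24-hour clock), so a quick sketch pins down exactly what a client must parse:

```ruby
# The Ruby server replies with Time#strftime formats:
#   "date" -> %x  (equivalent to %m/%d/%y)
#   "time" -> %X  (equivalent to %H:%M:%S)
now = Time.local(2007, 1, 30, 18, 58, 0)

puts now.strftime("%x")   # "01/30/07"
puts now.strftime("%X")   # "18:58:00"
```

A client parsing these replies should match these fixed patterns rather than guessing at a locale-dependent format.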
# Specializing domains in DITA

Feature provides for great flexibility in extending and reusing information types

Erik Hennum

September 28, 2005 (First published May 01, 2002)

In current approaches, DTDs are static. As a result, DTD designers try to cover every contingency and, when this effort fails, users have to force their information to fit existing types. The Darwin Information Typing Architecture (DITA) changes this situation by giving information architects and developers the power to extend a base DTD to cover their domains. This article shows you how to leverage the extensible DITA DTD to describe new domains of information.

The Darwin Information Typing Architecture (DITA) is an XML architecture for extensible technical information. A domain extends DITA with a set of elements whose names and content models are unique to an organization or field of knowledge. Architects and authors can combine elements from any number of domains, leading to great flexibility and precision in capturing the semantics and structure of their information. In this overview, you'll learn how to define your own domains.

## Introducing domain specialization

In DITA, the topic is the basic unit of processable content. The topic provides the title, metadata, and structure for the content. Some topic types provide very simple content structures. For example, the concept topic has a single concept body for all of the concept content. By contrast, a task topic articulates a structure that distinguishes pieces of the task content, such as the prerequisites, steps, and results.

In most cases, these topic structures contain content elements that are not specific to the topic type. For example, both the concept body and the task prerequisites permit common block elements such as p paragraphs and ul unordered lists. Domain specialization lets you define new types of content elements independently of topic type.
That is, you can derive new phrase or block elements from the existing phrase and block elements. You can use a specialized content element within any topic structure where its base element is allowed. For instance, because a p paragraph can appear within a concept body or task prerequisite, a specialized paragraph could appear there, too.

Here's an analogy from the kitchen. You might think of topics as types of containers for preparing food in different ways, such as a basic frying pan, blender, and baking dish. The content elements are like the ingredients that go into these containers, such as spices, flour, and eggs. The domain resembles a specialty grocer who provides ingredients for a particular cuisine. Your pot might contain chorizo from the carnicería when you're cooking Tex-Mex or risotto when you're cooking Italian. Similarly, your topics can contain elements from the programming domain when you're writing about a programming language or elements from the UI domain when you're writing about a GUI application.

DITA has broad tastes, so you can mix domains as needed. If you're describing how to program GUI applications, your topics can draw on elements from both the programming and UI domains. You can also create new domains for your content. For instance, a new domain could provide elements for describing hardware devices. You can also reuse new domains created by others, expanding the variety of what you can cook up.

In a more formal definition, topic specialization starts with the containing element and works from the top down. Domain specialization, on the other hand, starts with the contained element and works from the bottom up.

## Understanding the base domains

A DITA domain collects a set of specialized content elements for some purpose. In effect, a domain provides a specialized vocabulary.
With the base DITA package, you receive the following domains: <table> <thead> <tr> <th>Domain</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td>highlight</td> <td>To highlight text with styles such as bold, italic, and monospace</td> </tr> <tr> <td>programming</td> <td>To define the syntax and give examples of programming languages</td> </tr> <tr> <td>software</td> <td>To describe the operation of a software program</td> </tr> <tr> <td>UI</td> <td>To describe the user interface of a software program</td> </tr> </tbody> </table> In most domains, a specialized element adds semantics to the base element. For example, the apiname element of the programming domain extends the basic keyword element with the semantic of a name within an API. The highlight domain is a special case. The elements in this domain provide styled presentation instead of semantic or structural markup. The highlight styles give authors a practical way to mark up phrases for which a semantic has not been defined. Providing such highlight styles through a domain resolves a long-standing dispute for publication DTDs. Purists can omit the highlight domain to enforce documents that should be strictly semantic. Pragmatists can include the highlight domain to provide expressive flexibility for real-world authoring. A semipragmatist could even include the highlight domain in conceptual documents to support expressive authoring, but omit the highlight domain from reference documents to enforce strict semantic tagging. More generally, you can define documents with any combination of domains and topics. As shown in Generalizing a domain, the resulting documents can still be exchanged. ## Combining an existing topic and domain The DITA package provides a DTD for each topic type and an omnibus DTD (`ditabase.dtd`) that defines all of the topic types. Each of these DTDs includes all of the predefined DITA domains. 
Thus, topics written against one of the supplied DTDs can use all of the predefined domain specializations. Behind the scenes, a DITA DTD is just a shell. Elements are actually defined in other modules, which are included in the DTD. Through these modules, DITA provides you with the building blocks to create new combinations of topic types and domains. When you add a domain to your DITA installation, the new domain provides you with additional modules. You can use the additional modules to incorporate the domain into the existing DTDs or to create new DTDs. In particular, each domain is implemented with two files: - A file that declares the *entities* for the domain. This file has the `.ent` extension. - A file that declares the *elements* for the domain. This file has the `.mod` extension. As an example, suppose that you’re authoring the reference topics for a programming language. You’re a purist about presentation, so you want to exclude the highlight domain. You also have no need for the software or UI domains in this reference. You could address this scenario by defining a new shell DTD that combines the reference topic with the programming domain, excluding the other domains. A shell DTD has a consistent design pattern with a few well-defined sections. The instructions in these sections perform the following actions: 1. Declare the entities for the domains. In the scenario, this section would include the programming domain entities: ```xml <!ENTITY % pr-d-dec PUBLIC "-//IBM//ENTITIES DITA Programming Domain//EN" "programming-domain.ent"> %pr-d-dec; ``` 2. Redefine the entities for the base content elements to add the specialized content elements from the domains. This section is crucial for domain specialization. Here, the design pattern makes use of two kinds of entities. Each base content element has an element entity to identify itself and its specializations. 
Each domain provides a separate domain specialization entity to list the specializations that it provides for a base element. By combining the two kinds of entities, the shell DTD allows the specialized content elements to be used in the same contexts as the base element.

   In the scenario, the pre element entity identifies the pre element (which, as in HTML, contains preformatted text) and its specializations. The programming domain provides the pr-d-pre domain specialization entity to list the specializations for the pre base element. The same pattern is used for the other base elements specialized by the programming domain:

   ```xml
   <!ENTITY % pre     "pre | %pr-d-pre;">
   <!ENTITY % keyword "keyword | %pr-d-keyword;">
   <!ENTITY % ph      "ph | %pr-d-ph;">
   <!ENTITY % fig     "fig | %pr-d-fig;">
   <!ENTITY % dl      "dl | %pr-d-dl;">
   ```

   To learn which content elements are specialized by a domain, you can look at the entity declaration file for the domain.

3. Define the domains attribute of the topic elements to declare the domains represented in the document. Like the class attribute, the domains attribute identifies dependencies. While the class attribute identifies base elements, the domains attribute identifies the domains available within a topic. Each domain provides a domain identification entity to identify itself in the domains attribute. In the scenario, the only topic is the reference topic. The only domain is the programming domain, which is identified by the pr-d-att domain identification entity:

   ```xml
   <!ATTLIST reference domains CDATA "&pr-d-att;">
   ```

4. Redefine the info-types entity to specify the topic types that can be nested within a topic. In the scenario, this section declares the reference topic:

   ```xml
   <!ENTITY % info-types "reference">
   ```

5. Define the elements for the topic type, including the base topics.
In the scenario, this section includes the base topic and reference topic modules:

   ```xml
   <!ENTITY % topic-type PUBLIC "-//IBM//ELEMENTS DITA Topic//EN" "topic.mod">
   %topic-type;
   <!ENTITY % reference-typemod PUBLIC "-//IBM//ELEMENTS DITA Reference//EN" "reference.mod">
   %reference-typemod;
   ```

6. Define the elements for the domains. In the scenario, this section includes the programming domain definition module:

   ```xml
   <!ENTITY % pr-d-def PUBLIC "-//IBM//ELEMENTS DITA Programming Domain//EN" "programming-domain.mod">
   %pr-d-def;
   ```

Often, it is easiest to work by copying an existing DTD and adding or removing topics or domains. In the scenario, you can start with reference.dtd and remove the declarations for the highlight, software, and UI domains.

## Creating a domain specialization

For some documents, you might need new types of content elements. In a common scenario, you need to mark up phrases that have special semantics. You can handle such requirements by creating new specializations of existing content elements and providing a domain to reuse the new content elements within topic structures.

As an example, suppose that you're writing the documentation for a class library. You intend to write processes that will index the documentation by class, field, and method. To support this processing, you need to mark up the names of classes, fields, and methods within the topic content, as in the following sample:

```xml
<p>The <classname>String</classname> class provides the
<fieldname>length</fieldname> field and the
<methodname>concatenate()</methodname> method.</p>
```

You must define new content elements for these names. Because the names are special types of names within an API, you can specialize the new elements from the `apiname` element provided by the programming domain. The design pattern for a domain requires an abbreviation to represent the domain. A sensible abbreviation for the class library domain might be `cl`.
The identifier for a domain consists of the abbreviation followed by `-d` (for domain). As noted in [Combining an existing topic and domain](#), the domain requires an entity declaration file and an element definition file.

### Writing the entity declaration file

The entity declaration file has sections that perform the following actions:

1. **Define the domain specialization entities.** A domain specialization entity lists the specialized elements provided by the domain for a base element. For clarity, the entity name is composed of the domain identifier and the base element name. The domain provides domain specialization entities for ancestor elements as well as base elements. In the scenario, the domain defines a domain specialization entity for the `apiname` base element as well as the `keyword` ancestor element (which is the base element for `apiname`):

   ```
   <!ENTITY % cl-d-apiname "classname | fieldname | methodname">
   <!ENTITY % cl-d-keyword "classname | fieldname | methodname">
   ```

2. **Define the domain identification entity.** The domain identification entity lists the topic type as well as the domain and any other domains on which the current domain has dependencies. Each domain is identified by its domain identifier. The list is enclosed in parentheses. For clarity, the entity name is composed of the domain identifier and `-att`. In the scenario, the class library domain has a dependency on the programming domain, which provides the `apiname` element.

### Writing the element definition file

The element definition file has sections that perform the following actions:

1. **Define the content element entities for the elements introduced by the domain.** These entities permit other domains to specialize from the elements of the current domain. In the scenario, the class library domain follows this practice so that additional domains can be added in the future.
The domain defines entities for the three new elements:

   ```xml
   <!ENTITY % classname  "classname">
   <!ENTITY % fieldname  "fieldname">
   <!ENTITY % methodname "methodname">
   ```

2. **Define the elements.** The specialized content model must be consistent with the content model for the base element. That is, any possible contents of the specialized element must be generalizable to valid contents for the base element. Within that limitation, considerable variation is possible. Specialized elements can be substituted for elements in the base content model. Optional elements can be omitted or required. An element with multiple occurrences can be replaced with a list of specializations of that element, and so on.

   The specialized content model should always identify elements through the element entity rather than directly by name. This practice lets other domains merge their specializations into the current domain. In the scenario, the elements have simple character content:

   ```xml
   <!ELEMENT classname  (#PCDATA)>
   <!ELEMENT fieldname  (#PCDATA)>
   <!ELEMENT methodname (#PCDATA)>
   ```

3. **Define the specialization hierarchy for each element with the class attribute.** For a domain element, the value of the attribute must start with a plus sign. Elements provided by domains should be qualified by the domain identifier. In the scenario, the specialization hierarchies include the keyword ancestor element provided by the base topic and the apiname element provided by the programming domain:

   ```xml
   <!ATTLIST classname  class CDATA "+ topic/keyword pr-d/apiname">
   <!ATTLIST fieldname  class CDATA "+ topic/keyword pr-d/apiname">
   <!ATTLIST methodname class CDATA "+ topic/keyword pr-d/apiname">
   ```

The complete element definition file combines the declarations above.

### Writing the shell DTD

After creating the domain files, you can write shell DTDs to combine the domain with topics and other domains. The shell DTD must include all domain dependencies.
In the scenario, the shell DTD combines the class library domain with the concept, reference, and task topics and the programming domain. The portions specific to the class library domain are the `cl-d` declarations in the listing below:

```xml
<!ENTITY % pr-d-dec PUBLIC "-//IBM//ENTITIES DITA Programming Domain//EN" "programming-domain.ent">
%pr-d-dec;
<!ENTITY % cl-d-dec SYSTEM "classlib-domain.ent">
%cl-d-dec;

<!--vocabulary declarations-->
<!ENTITY % pre     "pre | %pr-d-pre;">
<!ENTITY % keyword "keyword | %pr-d-keyword; | %cl-d-apiname;">
<!ENTITY % ph      "ph | %pr-d-ph;">
<!ENTITY % fig     "fig | %pr-d-fig;">
<!ENTITY % dl      "dl | %pr-d-dl;">
<!ENTITY % apiname "apiname | %cl-d-apiname;">

<!--vocabulary substitution-->
<!ATTLIST concept   domains CDATA "&pr-d-att; &cl-d-att;">
<!ATTLIST reference domains CDATA "&pr-d-att; &cl-d-att;">
<!ATTLIST task      domains CDATA "&pr-d-att; &cl-d-att;">

<!--Redefine the infotype entity to exclude other topic types-->
<!ENTITY % info-types "concept | reference | task">

<!--Embed topic to get generic elements -->
<!ENTITY % topic-type PUBLIC "-//IBM//ELEMENTS DITA Topic//EN" "topic.mod">
%topic-type;

<!--Embed topic types to get specific topic structures-->
<!ENTITY % concept-typemod PUBLIC "-//IBM//ELEMENTS DITA Concept//EN" "concept.mod">
%concept-typemod;
<!ENTITY % reference-typemod PUBLIC "-//IBM//ELEMENTS DITA Reference//EN" "reference.mod">
%reference-typemod;
<!ENTITY % task-typemod PUBLIC "-//IBM//ELEMENTS DITA Task//EN" "task.mod">
%task-typemod;
```

Notice that the class library phrases are added to the element entity for keyword as well as for apiname. This addition makes the class library phrases available within topic structures that allow keywords and not just in topic structures that explicitly allow API names. In fact, the structures of the reference topic specify only keywords, but it's good practice to add the domain specialization entities to all ancestor elements.
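To see the combination in action, here is a sketch (not from the article) of an instance document written against the shell DTD; the system identifier `classlib.dtd` is an assumed file name for the shell, and the topic content mixes base, programming-domain, and class-library elements:

```xml
<!-- Hypothetical instance document; "classlib.dtd" is an assumed name
     for the shell DTD above. -->
<!DOCTYPE reference SYSTEM "classlib.dtd">
<reference id="stringclass">
  <title>String utilities</title>
  <refbody>
    <section>
      <p>The <classname>String</classname> class provides the
      <fieldname>length</fieldname> field and the
      <methodname>concatenate()</methodname> method.</p>
    </section>
  </refbody>
</reference>
```

Because `classname`, `fieldname`, and `methodname` were merged into the `keyword` element entity, they are valid anywhere the reference topic allows keywords.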
Considerations for domain specialization

When you define new types of topics or domain elements, remember that the hierarchies for topic specialization and domain specialization must be distinct. A specialized topic cannot use a domain element in a content model. Similarly, a domain element can specialize only from an element in the base topic or in another domain. That is, a topic and a domain cannot have dependencies on one another. To combine topics and domains, use a shell DTD.

When specializing elements with internal structure -- including the ul, ol, and dl lists, as well as table and simpletable -- you should specialize the entire content element. Creating specialized pieces of the internal structure independently of the whole content structure usually doesn't make much sense. For example, you usually want to create a special type of list rather than a special type of li list item for the ordinary ul and ol lists.

You should never specialize from the elements of the highlight domain. These style elements do not have a specific semantic. Although the formatting of the highlight styles might seem convenient, you might find you need to change the formatting later.

As noted previously, you should use element entities instead of literal element names in content models. The element entities are necessary to permit domain specialization, and the content model should allow for the possibility that an element entity might expand to a list of alternatives. When applying an occurrence modifier to an element entity, you should enclose the entity in parentheses; otherwise, the modifier will apply only to the last element if the entity expands to a list. Similar issues affect an element entity in a sequence:

```xml
..., ( %classname; ), ...
..., ( %classname; )? ...
..., ( %classname; )* ...
..., ( %classname; )+ ...
..., | %classname; | ...
```

The parentheses aren't needed if the element entity is already in a list.
Generalizing a domain

As with topics, a specialized content element can be generalized to one of its ancestor elements. In the previous scenario, a `classname` can generalize to `apiname` or even `keyword`. As a result, documents using different domains but the same topics can be exchanged or merged without having to generalize the topics. To return to the highlight style controversy mentioned in Understanding the base domains, a pragmatic document authored with the highlight domain will contain phrases like the following:

```xml
... the <b>important</b> point is ...
```

When the document is generalized to the same topic but without the highlight domain, the pragmatic `b` element becomes a purist `ph` element, indicating that the phrase is special without prescribing presentation:

```xml
... the <ph class="+ topic/ph hi-d/b">important</ph> point is ...
```

In the previous scenario, the class library authors could send their topics to another DITA shop that does not use the class library domain. The recipients would generalize the class library topics, converting the `classname` elements to `apiname` base elements. After generalization, the recipients could edit and process the class, field, and method names in the same way as any other API names. That is, the situation would be the same as if the senders had decided not to distinguish class, field, and method names and had instead marked up these names as generic API names.

As an alternative, the recipients could decide to add the class library domain to their definitions. In this approach, the senders would provide not only their topics but also the entity declaration and element definition files for the domain. The recipients would add the class library domain to their shell DTD and could then work with `classname` elements without having to generalize. The recipients can use additional domains with no impact on interoperability.
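As an illustration (not the DITA toolkit's actual transform), generalization can be sketched as a small script that renames each specialized element to the ancestor named in its class attribute:

```python
# Sketch of generalization: rename domain-specialized elements to an
# ancestor listed in their class attribute (hierarchy tokens illustrative).
import xml.etree.ElementTree as ET

def generalize(root, target_domain="topic"):
    """Generalize every domain element under `root` to its ancestor in
    `target_domain`, keeping the specialization hierarchy explicit."""
    for node in root.iter():
        cls = node.get("class", "")
        if not cls.startswith("+"):
            continue  # domain specializations start with a plus sign
        # tokens look like "topic/keyword", "pr-d/apiname", "cl-d/classname"
        for token in cls.lstrip("+ ").split():
            domain, name = token.split("/")
            if domain == target_domain:
                node.tag = name         # rename to the ancestor element
                node.set("class", cls)  # keep the original class value
                break
    return root

doc = ET.fromstring('<p>the <classname class="+ topic/keyword pr-d/apiname '
                    'cl-d/classname">Partlist</classname> class</p>')
generalize(doc)
print(ET.tostring(doc, encoding="unicode"))
```

Passing `target_domain="pr-d"` would instead rewrite `classname` to `apiname`, which is the exchange scenario described above.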
That is, the shell DTD for the recipients could use more domains than the shell DTD for the senders without creating the need to modify the topics.

**Note:** When defining specializations, you should avoid introducing a dependency on special processing that lacks a graceful fallback to the processing for the base element. In the scenario, special processing for the `classname` element might generate a literal "class" label in the output to save some typing and produce consistent labels. After automated generalization, however, the label would not be supplied by the base processing for the `apiname` element. Thus, the dependency would require a special generalization transform to append the literal "class" label to `classname` elements in the source file.

**Summary**

Through topic specialization and domains, DITA provides the following benefits:

- **Simpler topic design:** The document designer can focus on the structure of the topic without having to foresee every variety of content used within the structure.
- **Simpler topic hierarchies:** The document designer can add new types of content without having to add new types of topics.
- **Extensible content for existing topics:** The document designer can reuse existing types of topics with new types of content.
- **Semantic precision:** Content elements with more specific semantics can be derived from existing elements and used freely within documents.
- **Simpler element lists for authors:** The document designer can select domains to minimize the element set. Authors can learn the elements that are appropriate for the document instead of learning to disregard unneeded elements.

In short, the DITA domain feature provides for great flexibility in extending and reusing information types. The highlight, programming, and UI domains provided with the base DITA release are only the beginning of what can be accomplished.
**Notices** The information provided in this document has not been submitted to any formal IBM test and is distributed "AS IS," without warranty of any kind, either express or implied. The use of this information or the implementation of any of these techniques described in this document is the reader's responsibility and depends on the reader's ability to evaluate and integrate them into their operating environment. Readers attempting to adapt these techniques to their own environments do so at their own risk. © Copyright International Business Machines Corp., 2002. All rights reserved. © Copyright IBM Corporation 2002, 2005 **Trademarks** (www.ibm.com/developerworks/ibm/trademarks/)
Issue-Driven Features for Software Fault Prediction

Amir Elmishali and Meir Kalech
Software and Information Systems Engineering
Ben Gurion University of the Negev
e-mails: amir9979@gmail.com, kalech@bgu.ac.il

Abstract

Nowadays, software systems are an essential component of any modern industry. Unfortunately, the more complex software gets, the more likely it is to fail. A promising strategy is to use fault prediction models to predict which components may be faulty. Features are key factors in the success of the prediction, and thus extracting significant features can improve a model's accuracy. In the literature, software metrics are used as features to construct fault prediction models. A fault occurs when the software behaves differently than it is required to. However, software metrics are designed to measure the developed code rather than the requirements it meets. In this paper we present a novel paradigm for constructing features that combine software metrics with the details of the requirements; we call them issue-driven features. An evaluation, conducted on 86 open source projects, shows that issue-driven features are more accurate than state-of-the-art features.

1 Introduction

Software's significance, as well as its complexity, is growing in almost every field of our lives. The growing complexity of software leads to failures that are more difficult to resolve. Unfortunately, software failures are common, and their impact can be significant and costly. Early detection of faults may lead to timely correction of these faults and delivery of maintainable software. Therefore, many studies in software engineering and artificial intelligence have focused on approaches for finding faulty code in the early phases of the software development life cycle. In particular, one such approach is fault prediction, which implements prediction models to estimate which software components are faulty.
Software fault prediction (SFP) is essential for improving software quality and reducing maintenance effort before a system is deployed. SFP classifies software components as fault-prone or healthy. An SFP model is constructed using various software metrics and fault information from previous releases of the project. In particular, a training set is generated by extracting a set of features from each software component and assigning a target label indicating whether the component is faulty or healthy. This training set is fed to a machine learning algorithm, which generates a classification model. The model is then used to predict whether the components in the next version will be faulty. SFP models have been shown in the literature to produce classifiers with high predictive performance [1] and to improve the bug localization process [2].

Selecting the features that best describe each component has a great influence on the accuracy of the classification model. Previous research in SFP proposed using software metrics as indicators for the fault prediction task. These metrics were originally created to measure various properties of the developed code, such as code size and complexity, object-oriented properties, and process properties [3; 4]. However, measuring only code properties is not sufficient for predicting faults, since a fault occurs when a software component behaves unexpectedly, and the expected behaviour of a component is derived from the requirements of the system. Therefore, the features for SFP should represent the code properties alongside the requirements properties of the component.

A modern software development team uses a version control system (such as Git) to manage modifications to the code and an issue tracking system (such as Jira or Bugzilla) to record and maintain the requested and planned tasks in the system, such as reported bugs and new features.
An issue in the system is a report of a specific task, its status, and other relevant information. By monitoring issues and code changes, we can map each given requirement to the software components that fulfil it. The goal of this paper is to propose a novel paradigm for constructing features that combine software metrics with the issued task details; we call them issue-driven features. These features are designed to close the gap between the developed software's metrics and the tasks it is expected to complete. We empirically study the impact of issue-driven features on fault prediction models and compare our results with known feature sets used in the literature. Given this research goal, we mined 86 software repositories and extracted the issue-driven features for the SFP problem, as well as other feature sets used in the literature. We evaluate the performance of the SFP model generated with each set and show that issue-driven features outperform all other feature sets. In addition, we show that issue-driven features are the most important features in terms of Gini importance.

2 Related Work

Software fault prediction (SFP) is one of the most actively researched areas in software engineering. SFP is the process of developing models that development teams can use to detect faulty components, such as methods or classes, in the early phases of the software development life cycle. Prediction model performance is influenced by the modeling technique and the metrics. The choice of modeling technique has a lesser impact on model accuracy; the choice of software metric set has a greater impact. Several literature reviews have analyzed and categorized the software metrics used for SFP. [3] divided software metrics into three categories based on the goal of the metrics and the time they were proposed. First, traditional metrics aim to measure the size and complexity of code.
Second, object-oriented metrics aim to capture object-oriented properties such as cohesion, coupling, and inheritance. Third, process metrics measure the quality of the development process, such as the number of changes and the number of bugs. Rathore et al. [4] categorized the metrics into two classes according to the way they are extracted: product metrics, computed on the finally developed software product, which include the traditional, object-oriented, and dynamic metrics; and process metrics, which are collected across the software development life cycle. We explain the feature sets in depth in Section 4. Based on the literature reviews of Kumar et al. and Radjenovic [5; 3], the most commonly used software metric suites are: Chidamber and Kemerer, Abreu's MOOD metrics suite, Bieman and Kang, Briand et al., Halstead, Lorenz and Kidd, and McCabe [6; 7; 8; 9].

3 Problem Definition and Methodology

3.1 Problem Definition

Fault prediction is a classification problem: given a component, the goal is to determine its class, healthy or faulty. Software fault prediction is the task of predicting which components in the next version of the software contain a defect. Supervised machine learning algorithms are commonly used to solve classification problems. They receive as input a training set, which is fed into a classification algorithm. This set is composed of a set of instances, in our domain software components, and their correct labeling, i.e., the correct class - healthy or faulty - for each instance. They then output a classification model, which maps a new instance to a class.

3.2 Generating a Fault Prediction Model

To build a fault prediction model we require a training set. In our case, the training set is composed of components from previous versions of the software under analysis.
Version control systems, like Git and Mercurial, track modifications made to the source files and record the state of every released version. Therefore, by analyzing the version control system, we can train a fault prediction model using components from previous versions and evaluate it on the components of the next version. The label of an instance is whether the file is faulty or not. Issue tracking systems such as Jira and Bugzilla record all reported bugs and track changes in their status, including when a bug gets fixed. A key feature of modern issue tracking and version control systems is that they make it possible to track which modifications in the source were performed in order to fix a specific bug. Formally, given a bug X, Φ(X) is a function that returns the set of source files in the version control system that were modified to fix bug X. To implement Φ(X), we start by extracting closed bug reports that refer to the version under analysis from the issue tracking system. Then, we map each bug to the commit that fixed it: we take the id of the bug issue and search for that issue id in all of the commit messages of the version under analysis. After matching each bug issue to the respective bug-fixing commit, we label the files changed in that commit as defective. Consequently, for each version, the files labeled as defective are the ones that were changed at least once in a fixing commit.

4 Feature Extraction

One of the key requirements for achieving good prediction performance is choosing meaningful features. Many possible features have been proposed in the literature for the software fault prediction task. In particular, software metrics are well-known features for this task.
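The mapping Φ described in Section 3.2 can be sketched as follows (illustrative data; a real implementation would query Git and the issue tracker):

```python
# Sketch of the mapping Phi: for each resolved bug, find the commits whose
# message mentions the bug's issue id and collect the files they changed.
import re

def map_bugs_to_files(bug_ids, commits):
    """`commits` is a list of (message, changed_files) pairs (illustrative)."""
    phi = {bug: set() for bug in bug_ids}
    for message, files in commits:
        for bug in bug_ids:
            # match the issue id as a whole token, e.g. "CAMEL-12078"
            if re.search(r"\b%s\b" % re.escape(bug), message):
                phi[bug].update(files)
    return phi

commits = [
    ("CAMEL-12078: fix mixed multipart attachments", ["MimeConverter.java"]),
    ("polish javadoc", ["Docs.java"]),
]
print(map_bugs_to_files(["CAMEL-12078"], commits))
# {'CAMEL-12078': {'MimeConverter.java'}}
```

Files collected this way are then labeled as defective for the version, exactly as described above.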
Software metrics have been introduced to estimate the quality of the software artifacts under development, in support of an effective and efficient software quality assurance process. By using metrics, a software project can be quantitatively analyzed and its quality evaluated. Generally, each software metric relates to some functional property of the software project, such as coupling, cohesion, inheritance, or code change, and is used to indicate an external quality attribute such as reliability, testability, or fault-proneness. Rathore et al. [4] survey the software metrics used by existing software fault prediction models and divide them into product features and process features. Product features aim to measure the final developed product and can be categorized into three feature sets: traditional, object-oriented, and dynamic features. Process metrics aim to measure the quality of the development process and software life cycle.

- **Product features** - Product metrics are calculated from various properties of the final software product. These metrics are generally used to check whether a software product conforms to certain norms or code conventions. Broadly, product metrics can be classified as traditional metrics, object-oriented metrics, and dynamic metrics.
- **Traditional features** - These features, designed in the early days of software engineering, are known as traditional metrics. They mainly include size and complexity metrics, such as the number of lines of code and the number of functions.
- **Object-oriented features** - These are software complexity metrics designed specifically for object-oriented programs. They include metrics like cohesion and coupling levels and depth of inheritance.
- **Dynamic features** - Dynamic metrics are the set of metrics that depend on data gathered from a program execution.
These metrics reveal the behavior of the software components during execution and are used to measure specific run-time properties of programs, components, and systems. In contrast to static metrics, which are calculated from static, non-executing models, dynamic metrics identify the objects that are the most coupled and/or complex at run-time, and so provide a different indication of design quality.

- **Process features** - Process features are the set of metrics that depend on data collected across the software development life cycle: for instance, the number of times a component has been modified, or the time passed since the last modification. In contrast to product metrics, process features were designed to measure the quality of the development process rather than the final product. They help provide a set of process measures that lead to long-term software process improvement.

It is not clear from the literature which combination of features yields the most accurate fault predictions. Therefore, in our experiments we use features from all the sets.

### 5 Issue-Driven Features

Nowadays, software development teams manage their day-to-day tasks using issue tracking systems like Jira and Bugzilla. Issue tracking systems record issues that should be implemented in the system, such as bugs to be fixed and new features to develop. An issue details the status of the task, the type of the task, a textual explanation of the task, and the task priority. To resolve an issue, a developer adds a commit that completes the issue's task. Software bugs occur when the code does not perform the issued task correctly. Most software metrics measure quality based solely on code properties, rather than on the tasks the code seeks to accomplish. We suggest combining issue and code properties, calculating code metrics as a function of the properties of the issues they address.
We call these new features "issue-driven features". An issue can be represented by its: (1) type (bug fix, improvement, or new feature), (2) priority (major, minor, trivial), and (3) severity (blocker, normal, enhancement). We can easily extract these fields from the issue tracking system. Then, we map the issue to the developed component by analyzing the changes in the Git commits that the developer added to resolve the issue. Figure 1 shows an example of issue report CAMEL-12078 (upper) and a commit that resolved it (lower). As a result, for each issue we have the changes that were made to resolve it. For example, one of the features we propose measures the added complexity of a change for the different issue types. As shown in Figure 1, we can analyze the changed file and mark the added "else" statement as an increase in code complexity introduced by the bug fix.

To demonstrate the effectiveness of issue-driven features, we show how to enrich the known process and product features with issue information.

- **Process features**: The process features are calculated as an aggregation of change metrics over the relevant commits, e.g., the number of line insertions for a file. To add the issue information, we extract the issue that was resolved by each commit and record the changes made in the commit to solve the issue. For example, we calculate the number of line insertions that fixed a bug.
- **Product features**: A product feature is extracted by analyzing the source lines of the component, for example, the number of function calls in a component. To add issue information to the product features, we annotate each source line with the latest commit that modified it.

Figure 1: A screenshot of bug report CAMEL-12078 (upper) and the commit that resolved it (lower). A new 'else' statement was added in order to resolve the bug.
Then, we use our mapping to find the issue that was resolved by the commit. Finally, we calculate each product metric separately for the different values of the issue type, priority, and severity. For example, to get the number of function calls for issue type "bug", we sum the number of calls only over source lines that were modified in commits mapped to issues of type "bug".

Next we demonstrate issue-driven feature extraction with the product metric lines of source code (LOC). First we map each source line to the issue that was resolved by the last commit that changed the line. Figure 2 shows a screenshot of the function `createMixedMultipartAttachments` from the Apache Camel project. For each source line we show the issue mapped to the line and the type of each issue (bug/improvement/new feature). The LOC of bug issues is 5, the LOC of improvement issues is 4, and the LOC of new-feature issues is 2.

To demonstrate an issue-driven process feature, we focus on the total number of modifications (MOD) per commit, and derive the MOD per issue type. For simplicity, we focus only on the three commits in the square. A vertical line next to each commit marks the source lines that it modified. The MOD value is 11, and the MOD of the bug issues, improvement issues, and new-feature issues is 5, 4, and 2, respectively. In the experiment section we thoroughly explain which features we extracted.

Figure 2: A screenshot of the function `createMixedMultipartAttachments` from the Apache Camel project. For each source line we show the issue resolved by the last commit that changed the line, together with the issue type, where "Bug", "Iml" and "New" represent a bug fix, an improvement, and a new feature, respectively. The highlighted square shows the lines that were modified by each commit: first, CAMEL-385's commit added 5 lines; then, CAMEL-1645's commit modified 4; and finally, CAMEL-14578's commit modified the last two.

6 Evaluation

Our research goal is to present the effectiveness of issue-driven features for software fault prediction. Therefore, we designed our study to empirically compare the performance of fault prediction models trained with issue-driven features against other feature sets proposed in the literature. We report an experimental study designed to address the following research questions.

**RQ1.** Do issue-driven **product** features perform better than other product features proposed in the literature?

**RQ2.** Do issue-driven **process** features perform better than other process features proposed in the literature?

**RQ3.** Which features influence the accuracy of the fault prediction model the most?

### 6.1 Experimental Setup

We start by collecting data from the repositories, including metrics and defect information. Then we apply feature extraction, whose purpose is to extract the features and organize them into sets. Next, we train classification models to predict defects based on several algorithms and optimize them with hyper-parameterization. Last, we cross-validate the models and evaluate them using different classification metrics. Each step is described in the following subsections.

#### Data Collection

The first step of our approach is to collect the data and generate the datasets required for training and testing the classifiers. We evaluate our approach on 86 Java projects from the open source organizations Apache¹ and Spring² that manage their source code with the Git version control system and an issue tracking system (JIRA or Bugzilla). We filtered the projects as follows. First, we filtered out projects without reported resolved bugs or with fewer than 5 released versions. Then we iterated over the resolved bugs and mapped them to the commits that resolved them. Next, for each version we labeled a file as faulty if it was changed in a commit in that version that resolved a bug.
Finally, for each project we filtered out versions with a faulty-file ratio lower than 5% or higher than 30%, since this range gives a good representation of bugs: it reduces the class imbalance produced by a low number of defects and excludes outliers, for instance, a version that was created just to fix issues. For each project we selected 4 versions as a training set and a later version as a test set.

#### Feature Extraction

It is not clear from the literature which combination of features yields the best fault prediction model. Therefore, we extracted commonly used feature sets from the literature [4; 3], consisting of both process and product features. We implemented 122 features, including product feature sets such as the Halstead complexity metrics [8] (HALSTEAD).

¹https://www.apache.org/
²https://spring.io/

In addition, we implemented 17 process features, including: 1. code delta and change metrics (such as number of changes and type of changes) [10], 2. time difference between commits [11; 12], and 3. developer-based features [11; 13].

**Training Classifiers**

Several learning algorithms were considered for generating the fault prediction model: Random Forest, XGB Classifier, and Gradient Boosting Classifier. A preliminary comparison found that the Random Forest learning algorithm with 1000 estimators performs best on our datasets. The depth of the trees was limited to five, and the function used to measure the quality of a split was set to Gini. We used 10-fold cross-validation with Random Forest for the rest of the experiments.

### 6.2 Data Analysis and Metrics

To evaluate the fault prediction models, we followed the literature review of Rathore et al. [4], which recommends using precision, recall, and the area under the ROC curve (AUC) as evaluation metrics.
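The training setup described above (Random Forest, 1000 estimators, depth 5, Gini criterion, 10-fold cross-validation) can be sketched with scikit-learn; the synthetic data below merely stands in for the real per-file feature matrix and fault labels:

```python
# Sketch of the described setup: Random Forest, 1000 estimators,
# max depth 5, Gini split criterion, 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # 200 files, 10 features
y = (X[:, 0] + rng.normal(size=200) > 1).astype(int)  # imbalanced labels

clf = RandomForestClassifier(
    n_estimators=1000, max_depth=5, criterion="gini", random_state=0
)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(round(scores.mean(), 3))
```

With real data, X would hold the 122 product and 17 process features per file, and y the faulty/healthy labels derived from the fixing commits.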
Moreover, they recommend AUC as the primary indicator. We describe the metrics in the rest of this section. Precision and recall measure relationships between specific entries of the confusion matrix:

\[ P = \frac{TP}{TP + FP} \quad R = \frac{TP}{TP + FN} \quad (1) \]

where:

- TP is the number of faulty classes that were correctly predicted as faulty;
- TN is the number of healthy classes that were correctly predicted as healthy;
- FP is the number of healthy classes that were incorrectly predicted as faulty;
- FN is the number of faulty classes that were incorrectly predicted as healthy.

In addition, we use the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. The ROC visualizes the trade-off between the number of correctly predicted faulty modules and the number of incorrectly predicted non-faulty modules. The closer the AUC is to 1, the better the classifier's ability to distinguish faulty from healthy classes.

<table>
<thead>
<tr>
<th>feature set</th>
<th>AUC</th>
<th>Recall</th>
<th>Precision</th>
<th>Importance</th>
</tr>
</thead>
<tbody>
<tr>
<td>issue-driven process</td>
<td>0.74</td>
<td>0.16</td>
<td>0.31</td>
<td>0.31</td>
</tr>
<tr>
<td>process</td>
<td>0.68</td>
<td>0.14</td>
<td>0.30</td>
<td>0.07</td>
</tr>
<tr>
<td>issue-driven product</td>
<td>0.75</td>
<td>0.16</td>
<td>0.33</td>
<td>0.40</td>
</tr>
<tr>
<td>product</td>
<td>0.62</td>
<td>0.11</td>
<td>0.27</td>
<td>0.22</td>
</tr>
</tbody>
</table>

Table 1: Fault prediction performance for the different feature sets. The first two rows compare issue-driven process features with common process features used in the literature; the next two rows compare issue-driven product features with common product features used in the literature. The highest value of each metric in each comparison is highlighted.
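The quantities in Eq. (1) can be sanity-checked with a tiny helper (the counts below are illustrative, not taken from the experiments):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts, as in Eq. (1)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: 16 faulty files caught, 34 false alarms, 84 missed.
p, r = precision_recall(tp=16, fp=34, fn=84)
print(round(p, 2), round(r, 2))  # 0.32 0.16
```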
The importance of a set is calculated as the sum of the Gini importance of the features in the set.

### 6.3 Results

In this section, we discuss the obtained results, focusing on the research questions we initially defined. We first analyse the results of the fault prediction models trained with different feature sets, and then analyse the importance of each feature set. To address **RQ.1** and **RQ.2**, we evaluated whether models trained with issue-driven features outperform models trained with feature sets from the literature. Table 1 shows the arithmetic mean of all the scores, comparing issue-driven features with the known features. The first two rows compare issue-driven process features with common process features used in the literature, and the next two rows compare issue-driven product features with common product features. The precision and especially the recall results are fairly low. This is understandable, since the imbalanced nature of the dataset hurts the TP count of the models [14]. Regarding **RQ.1**, we observe that the issue-driven features perform better on all metrics. This is most noticeable in the primary indicator, AUC. Furthermore, for 81% (70 out of 86) of the projects the issue-driven features performed better than the other features. The significance level of the results is \( p < 0.01 \). Regarding **RQ.2**, we evaluate the process feature sets listed by Rathore et al. [4], such as code delta, code churn and change metrics. We compare the results of a model trained with those features to a model trained with the issue-driven variants of those features. Results across all metrics show that the issue-driven features outperform the process features used in the literature. Furthermore, for 86% (74 out of 86) of the projects the issue-driven features performed better than the other features. The significance level of the results is \( p < 0.01 \).
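The paper reports per-project win counts together with a significance level but does not name the statistical test used. As a hedged illustration, even a simple one-sided sign test over the 86 project-level comparisons already yields \( p < 0.01 \) for 70 wins:

```python
# Illustrative only: the paper does not state which significance test was
# used.  A one-sided sign test asks how likely >= `successes` wins out of
# n paired comparisons would be if both feature sets were equally good.
from math import comb

def sign_test_p(successes, n):
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

p = sign_test_p(70, 86)  # issue-driven wins in 70 of 86 projects
```

Under the null hypothesis of no difference, 70 wins out of 86 is roughly 5.8 standard deviations above the expected 43, so the resulting p-value is far below 0.01.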
To address **RQ.3**, we evaluated which feature set influences the prediction model the most when training with all the feature sets together. To do so, we relied on the importance score of the random forest classifier. The importance of a feature is computed as the (normalized) total reduction of the split criterion brought by that feature; it is also known as the Gini importance. We computed the importance of a feature set as the sum of the importance of the features in the set. The column "importance" in Table 1 shows the arithmetic mean of the importance score for each feature set. These results show that issue-driven features are the most important features in the model.

## 7 Threats to Validity

For our study, we identified the following threats to validity. The projects we used for evaluation were limited to open-source projects from Apache and Spring written in Java. This is a threat to the generalization of our results. However, several fault prediction studies have used open-source projects as the software archive [15], and projects from Apache have been integrated into the Promise data set. The reliance on an issue tracking system is also a threat to the generalization of the results, since issue-driven features can only be extracted for projects that use such a system.

## 8 Conclusions and Future Work

In this study we presented a novel feature set for the fault prediction problem, named issue-driven features. We demonstrated how issue-driven features overcome the limitation of traditional software metrics, which are agnostic to the requirements of the software. We then evaluated the impact of issue-driven features on 86 open-source projects from two organizations. We evaluated the performance of issue-driven features against traditional features for both the process and the product feature classes. Moreover, we investigated the importance of the issue-driven features among all features.
The results show that issue-driven features are significantly better than traditional features for both classes and achieve an improvement of 6% to 13% in terms of AUC. In future work we propose to improve issue-driven features using the relations between overlapping issues, such as a new feature and its subsequent improvements. Moreover, we plan to apply issue-driven features to the cross-project software fault prediction task.

References
Learning Recursive Program Schemes

Bachelor Seminar
Cognitive Systems Group
University of Bamberg

Thorsten Spieker
Christophe Quignon

April 12, 2010

1 Introduction

In inductive programming, a system tries to find a program that solves a problem using inductive reasoning. Current approaches and systems that learn programs from a set of inputs and outputs try to find a program solely on the basis of these specifications [HKS09]. They work independently from one program to the next, without using already learned programs as knowledge for future programs. Having such background knowledge available in the inductive process could help an inductive programming system both by improving performance in terms of speed and in terms of the quality of the found algorithms. The problem lies in representing such background knowledge so that it can actually be used: just saving already learned programs will not bring the improvement we hope for. One approach is a partially ordered program scheme repository, which consists of already learned programs and of more generic program schemes that are computed by taking two programs or program schemes and computing their anti-instance. This results in a repository partially ordered from the most generic program schemes down to specific programs that result from instantiating program schemes. An inductive programming system could then try to find similarities between a given problem specification and the problem specifications of already learned programs, and use program schemes more generic than the learned program corresponding to the most similar problem specification to find a new instantiation of program schemes that solves the new problem. Our method for finding program schemes from two specific programs uses tree representations of the programs, in a form where tree nodes are functions and the children of the nodes are the functions' arguments. Leaves of the tree are variables, which may also represent constant values.
The benefit of using this representation is that we can use a tree matching algorithm to find similarities between the trees and the operations that transform one tree into the other. With this information we find the program scheme, using transformation operations to create a generic tree and a set of mapping instructions. To recreate the two programs, the program scheme is instantiated with the concrete functions and constants. Our implementation is described in further detail in section 3.

2 Background

A program is defined as a term, in particular a set of equality tests on functions, variables and constants. A program scheme is a generic program with generic function and variable placeholders that have to be instantiated with concrete functions and variables or constants to obtain a working program. A program scheme that generalizes over two programs can be obtained by calculating the anti-instance of the two programs, since both are represented as terms [SSW00]. The anti-instance represents the program scheme that can be instantiated to obtain both programs. An anti-instance is defined as the anti-unification of two terms [Rey70]. There are two main methods to obtain an anti-instance: one obtains only a first order anti-instance, which does not generalize over functions, and one obtains a second order anti-instance, which does generalize over functions but brings some problems with it. First order anti-unification allows only the substitution of variables in terms by other variables or by a function (Figure 1). It then obtains the maximally specific generalization of two programs by finding substitutions that make both programs equal. Calculating the set of possible anti-instances by substitution leads to a finite set, because only a finite number of different substitutions can be generated. This is an important property of the first order anti-unification algorithm. However, there is an important drawback.
Since only variables are substituted, first order anti-unification cannot find the similarity of two terms that use different functions on the same argument: the anti-unification of $G(a)$ and $F(a)$ yields a single variable $x$ as their anti-instance. Therefore this method is not sufficient for finding similarities and computing the program scheme of two recursive programs [Sin00]. See the two examples of first order anti-unification in figures 2 and 3. Second order anti-unification also allows the substitution of a function by another function. This improves the detection of similarities in several respects, such as allowing argument permutation and generalizing over functions of different arity, but it also has one important drawback: the computation of all possible anti-instances now results in an infinite set. This comes from the possibility of endlessly inserting and removing arguments of functions. To still benefit from second order anti-unification, it is important to restrict the operations allowed for getting from the anti-instance to the input terms to the following: substitution of variables, substitution of function names, and deletion of function arguments. What is not allowed is the insertion of arguments [Wag02]. With these restrictions, second order anti-unification is a much better fit for finding a proper program scheme of two programs, as it finds more similarities and still results in a finite set of possible anti-instances.

Figure 1: Anti-instance

After introducing two different methods for computing anti-instances of two terms, the question remains what properties such an anti-instance must have to qualify as a program scheme. An obvious property is correctness, which is supplied by the algorithms themselves. Besides correctness, the anti-instance should be such that the two input terms (programs) can be obtained from it with the minimal amount of instantiations.
This ensures that there are no unnecessary variables or functions in the anti-instance that are not needed to obtain the input terms. In other words, the anti-instance needs to be the maximally specific generalization of the two terms. This is guaranteed by the first order algorithm, but remains a problem with the second order algorithm. Given a tree representation of the programs, a tree matching algorithm computes how similar two programs are. We use an algorithm for ordered trees to find the edit distance between the two tree representations of our programs. The algorithm also returns the operations needed to transform one tree into the other. Since the algorithm finds the minimal number of operations, we can use these operations to find the smallest anti-instance by computing it directly from the trees and the operations, instead of computing the set of all possible anti-instances and then selecting the smallest, as the second order algorithm would do. The algorithm that computes this smallest anti-instance is described in section 3.2. Note that the algorithm only handles ordered trees, which means it does not allow argument permutation within functions; that would only be possible with a tree matching algorithm that compares unordered trees. However, we did not find a suitable algorithm for unordered trees that serves our needs and therefore had to find a work-around to make argument permutation possible with neither an insert operation from the anti-instance to the specific programs nor a tree matching algorithm for unordered trees. The tree matching algorithm and the work-around are presented further in section 3.1.

Figure 2: First order anti-unification

Figure 3: Second order anti-unification

3 Implementation

Our implementation is written in the functional programming language Haskell.
The two programs from which the anti-instance shall be generated need to be written to our input file in a tree representation where a node is represented by a string and its children follow this string, each child encapsulated in brackets. Figure 4 is an example of how to convert a function into its tree representation. First the function name is moved from infix to prefix position and both the whole term and the function arguments are encapsulated in brackets. This is repeated until we reach the tree representation.

Function: \( \text{add}(x, y) = x + y \)

Tree:
\[
(= \ (\text{add}(x, y)) \ (x + y)) \\
(= \ (\text{add}((x)(y))) \ (x + y)) \\
(= \ (\text{add}((x)(y))) \ (+((x)(y))))
\]

Function: \( \text{sumList } x = \text{if } (x == []) \text{ then } 0 \text{ else } (\text{head } x) + (\text{sumList } (\text{tail } x)) \)

Tree:
\[
(= \ (\text{sumList}([]))(0)) \\
(= \ (\text{sumList}(x)) \ (+((\text{head}(x)) (\text{sumList}(\text{tail}(x))))))
\]

Figure 4: Example of function splitting

Afterwards, the program reads the input file and processes the trees. It runs an external program that calculates the distance between the two trees and the transformation operations needed to transform one tree into the other; the number of operations is called the distance. This is described in section 3.1. The program then passes the operations to the anti-instance part, described in section 3.2. Finally, the new program scheme and the mappings needed to regenerate the input programs from the program scheme by instantiation are written to an output file.

3.1 Tree Matching

Within our program we use an internal representation of the input programs. The data structure is an abstract syntax tree with a few modifications. To differentiate between commutative functions (like addition or multiplication) and non-commutative functions (like subtraction and division) we introduce an attribute for commutative functions in our data structure.
Using this attribute, our algorithm is able to find function similarities between two programs that result from argument permutation, without destroying the semantics of a function and its program. One example is \( \text{div}(x, y) \) and \( \text{sub}(y, x) \): while both functions may seem similar under argument permutation, permuting the arguments would destroy the semantics of subtraction and division, which are both non-commutative. With our internal representation it is possible to find argument permutation similarities under the described restriction. However, since we work with a string representation of the trees as input, which is needed for the tree matching algorithm described in the previous section, we cannot tell whether a function is commutative when reading the input file. Therefore we do not allow argument permutation at all in our current implementation. After reading the input file and parsing the string into our data structure, both trees are passed to the external tree matching algorithm. The algorithm computes the edit distance of two trees and returns the distance plus the operations that realize it. The edit distance is defined as the minimal number of operations needed to transform one tree into the other [TIZJS92]. Figure 5 shows the three operations, their descriptions and representations. The positions are given in postorder traversal order.

- **RENAME** p1 n1 p2 n2 - renames node n1 at position p1 in the first tree to node n2 at position p2 in the second tree. Example: ( 2 fib 2 fact )
- **INSERT** _ p2 n2 - inserts a node with label n2 at position p2 in the second tree. Example: ( _ _ 12 y )
- **DELETE** p1 n1 _ - deletes the node with label n1 at position p1 in the first tree. Example: ( 6 fib _ )

Figure 5: Operations used by the algorithm to transform the trees

### 3.1.1 Example output

The example output is given in figure 6.
The first two lines print the string representations of the trees as described in the introduction of section 3. The third line prints the edit distance, the smallest number of operations needed to transform one tree into the other. From the 9th line until the end, all the operations corresponding to the smallest distance are listed. The first operation is a renaming operation, but since both names are equal, it is not counted towards the edit distance; only real renaming operations, such as the second one, are counted. Operation number 9 deletes the node at position 9 with label reverse in the first tree. Operation number 12 inserts a node with label $sumList$ at position 11 in the second tree. Note that the first number in each operation is not the number of the operation but the position of the node that the operation is performed on.

```
Tree1:
(reverse
 (= (reverse([]))(0))
 (= (reverse(x))(++(reverse(tail(x)))(head(x)))))

Tree2:
(sumList
 (= (sumList([]))(0))
 (= (sumList(x))(+(head(x))(sumList(tail(x))))))
```

Distance: 8

(1 [] 1 [])
(2 reverse 2 sumList)
(3 0 3 0)
(4 = 4 =)
(5 x 5 x)
(6 reverse 6 sumList)
(7 x 7 x)
(8 tail 8 head)
(9 reverse)
(10 x 9 x)
(11 head 10 tail)
(11 sumList)
(12 ++ 12 +)
(13 = 13 =)
(14 reverse 14 sumList)

Figure 6: Example output for tree matching of $sumList$ and $reverse$

This example does not show the smallest distance of the two programs if argument permutation is enabled. If the permutation option is enabled, our implementation computes the set of all possible permutations of both trees and sends each pair to the tree matching algorithm¹ to find the smallest distance. Note that a smaller distance means a smaller number of operations to transform a tree.

¹Generously provided by Prof. Dennis Shasha from New York University
This means that a commutative function with two arguments will result in a distance that is smaller by at least 2 (two renaming operations for the arguments), but possibly much more, considering that an argument can be a whole subtree. The minimal distance is therefore found by taking the lowest distance among all combinations of permutations of the two trees, which enables us to find the most specific generalization of both programs given the restrictions described in section 2. In the example above you can see the recursive call of the respective function in the last operation (+ and ++ respectively). The call is the first argument in the $reverse$ program and the second in the $sumList$ program. If we used the knowledge that the + operation is commutative, we could swap the arguments in the second program and obtain a smaller distance from the tree matcher: instead of the delete operation (number 9) and the insert operation (number 12) we would have only one renaming operation, and the renaming operations of tail to head and vice versa (numbers 8 and 11) would not be necessary either. After obtaining the smallest distance (with or without permutation of arguments), the results, including the edit operations, are read and passed to the anti-instance part of our implementation.

3.2 Anti-Instance

The anti-instance algorithm itself needs as input two programs, represented by their abstract syntax trees ("AST"), and a list of "commands" describing the edit distance between the two trees. This is exactly what the tree matching program delivers. Both trees \( AST_a \) and \( AST_b \) obtain their abstract syntax tree form internally by simple parsing. Without any additional information about the semantics of the programs, we assume that the arguments are not commutative.
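To make the bracketed tree strings of section 3 concrete, here is a small illustrative parser in Python (the authors' implementation is in Haskell; the exact grammar is our assumption, based on the examples in figures 4 and 6):

```python
# Illustrative sketch (not the authors' Haskell parser): reads a string
# like "(= (sumList([]))(0))" into nested (label, children) tuples.
# Assumed grammar: "(" label { child } ")", each child itself bracketed.

def parse_tree(s, i=0):
    assert s[i] == '(', "a tree must start with an opening bracket"
    i += 1
    label = ''
    while i < len(s) and s[i] not in '()':   # read the node label
        label += s[i]
        i += 1
    label = label.strip()
    children = []
    while i < len(s) and s[i] != ')':
        if s[i] == '(':                       # recurse into each child
            child, i = parse_tree(s, i)
            children.append(child)
        else:
            i += 1                            # skip whitespace
    return (label, children), i + 1           # +1 consumes the ')'

tree, _ = parse_tree("(= (sumList([]))(0))")
```

Running the sketch on the base case of `sumList` yields the tuple `('=', [('sumList', [('[]', [])]), ('0', [])])`.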
3.2.1 Definitions

As the type definitions in figure 7 show, an \( AST \) consists of an identifier in string form, a boolean indicating whether the function is commutative, and a list of children, which are themselves \( AST \)s.

\[
AST : \{S, B, (AST)^*\} \\
COMM : ((N, S), (N, S)) \\
MAP : ((N, AST_a), (N, AST_b))
\]

Figure 7: Type definitions. \( S \) represents strings, \( B \) booleans and \( N \) integers

The parts of these types can be accessed as stated in figure 8. With the index \( a \) (\( b \)) one obtains the first (second) tuple of a pair. The \( pos \) function returns the integer component of such a tuple and the \( val \) function its second component.

\[
c \in COMM \\
c \leftarrow ((n1, s1), (n2, s2)) \\
c_a = (n1, s1); \quad c_b = (n2, s2); \\
pos(c_a) = n1; \\
val(c_a) = s1
\]

Figure 8: Subtype accessor definitions

The external information about the shortest editing path (figure 6) is parsed into a list of tuples of tuples, called "COMM". Each $COMM$ in this list holds at its first position the element of $AST_a$ and at its second position the element of $AST_b$. Both elements are represented by their postorder position in their AST and the string representing their name. If the tree matcher does not suggest a mapping but an insertion or deletion, giving just one tuple, the missing tuple is instantiated with an empty position and the string "ID", meaning that the function may stay the same without this sub-AST. In the stated algorithm the positions can be reached by the index of their AST, e.g. $COMM_a$; the numeric value of an element is obtained by the function $pos(COMM_i)$ and the function name by $val(COMM_i)$. Besides the $AST$ which is the anti-instance of both given $AST$s, a list of mappings $MAP$ is returned. Each element of $MAP$ has an index $S_i$ and a list of instances represented by that index; these instances are substituted back in when the anti-instance is instantiated. Figures 9 and 10 show examples of the data types.
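The ID-padding rule for insertions and deletions can be sketched as follows (Python for illustration; the explicit `kind` argument is our assumption, since the raw output in figure 6 does not label the operations explicitly):

```python
# Illustrative sketch of building COMM tuples as defined in Figure 7,
# padding the missing side of an insert/delete with (0, "ID") as
# described in the text.  The `kind` argument is an assumption.

def to_comm(kind, pos1=0, name1="ID", pos2=0, name2="ID"):
    if kind == "rename":   # node present in both trees
        return ((pos1, name1), (pos2, name2))
    if kind == "delete":   # node only in the first tree
        return ((pos1, name1), (0, "ID"))
    if kind == "insert":   # node only in the second tree
        return ((0, "ID"), (pos2, name2))
    raise ValueError(kind)

comm = to_comm("insert", pos2=11, name2="sumList")
```

The resulting tuple matches the second example of figure 9: `((0, "ID"), (11, "sumList"))`.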
The algorithm itself is given in figure 11. There are two critical points in this algorithm:

- the definition of the "new" element,
- the check whether an element is already mapped.

As one may point out, the "new" element is an error in type theory, for it should be an AST and not a tuple of strings. This is exactly the critical point: the need to find a representation of the mapping in the returned AST. The "new" element can, as stated here, be obtained by simply mapping one atomic element to another atomic element. This reduces the effort to a bare minimum, but does not account for the fact that such an atomic element may represent a complex function with underlying structure and commutation constraints, which can lead to errors when multiple insertions occur. Merging these multiple insertions into one insertion of a complex AST would solve the problem. The check for already mapped functions is critical if the mappings are to be minimal and, especially, free of circular dependencies, which could break the re-instantiation of the anti-instance. We check and reduce our mapping list against double occurrences.

$$\begin{align*} (11 \text{ head} \ 10 \text{ tail}) & \rightarrow COMM((11, "head"), (10, "tail")) \\ (11 \text{ sumList}) & \rightarrow COMM((0, "ID"), (11, "sumList")) \end{align*}$$

Figure 9: Example for $COMM$

Figure 10: The AST of the "sumList" program, which adds up all values of a list

```
MAP ← []
for all c ∈ COMM do
  new ← (val(C_a), val(C_b))
  dest ← pos(C_a)
  a_dest ← new
  MAP ← MAP + new
end for
i ← 0
for all m ∈ MAP do
  for all p ∈ AST_a do
    if m ⊆ p then
      p ← S_i
    end if
    m ← (S_i, p)
  end for
  i ← i + 1
end for
return (AST_a, MAP)
```

Figure 11: The core algorithm

4 Results

To demonstrate that our algorithm works, we have compiled a list of program pairs and computed their program schemes.
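The core loop of figure 11 can be approximated in Python as follows. This is an illustrative re-telling, not the authors' Haskell: a fresh placeholder \(s_i\) is introduced for every pair of differing names in the command list and substituted into \(AST_a\), while the mappings record how to re-instantiate both programs.

```python
# Illustrative approximation of Figure 11 (not the authors' Haskell).
# Trees are (label, children) tuples; comms is a list of COMM tuples
# ((pos_a, name_a), (pos_b, name_b)).  Substitution is by name rather
# than by postorder position, which is a simplification.

def anti_instance(tree_a, comms):
    mappings = {}
    subst = {}
    count = 0
    for (_pa, na), (_pb, nb) in comms:
        if na != nb and (na, nb) not in subst:   # skip identical names
            s = f"s{count}"                       # fresh placeholder
            count += 1
            subst[(na, nb)] = s
            mappings[s] = (na, nb)                # record both instances

    def walk(node):
        label, children = node
        for (na, _nb), s in subst.items():
            if label == na:                       # generalize this label
                label = s
                break
        return (label, [walk(c) for c in children])

    return walk(tree_a), mappings
```

For example, the rename command `(2 reverse 2 sumList)` turns every `reverse` node into a placeholder `s0` with the mapping `s0 -> (reverse, sumList)`.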
Note that some of the comparisons are similar to generalizations already computed in other works [Sin00] [SSW00] [Wag02].

4.1 List of programs

We picked four recursive programs to perform our comparisons with; they have a number of properties that made the computation of program schemes fast and simple, yet meaningful. All programs were hand-written from memory without searching for a reference implementation. First we picked a program that computes the n-th Fibonacci number. This recursive program has two recursive calls in the same equation and therefore shows what happens to the second call when it is compared with a recursive program that has only a single recursive call. It also contains two base cases instead of the single base case that is more common in recursive programs. Second we picked a program that computes the factorial of a number n. This program has the mentioned single recursive call, and both programs work on natural numbers. Third, we picked two programs that work on lists instead of natural numbers and are very similar to each other: one computes the sum of a list of natural numbers and the other reverses a list with elements of some arbitrary type. Both programs have only a single recursive call on the tail of the list and perform some operation on the head and on the result of the recursive call.

1. Fibonacci:
\[
\text{fib} \\
\quad (= (\text{fib}(0))(1)) \\
\quad (= (\text{fib}(1))(1)) \\
\quad (= (\text{fib}(x)) (+(\text{fib}(-(y)(1)))(\text{fib}(-(z)(2)))))
\]

2. Factorial:
\[
\text{fact} \\
\quad (= (\text{fact}(0))(1)) \\
\quad (= (\text{fact}(a)) (*(b)(\text{fact}(-(c)(1)))))
\]

3. SumList:
\[
\text{sumList} \\
\quad (= (\text{sumList}([]))(0)) \\
\quad (= (\text{sumList}(x)) (+((\text{head}(x)) (\text{sumList}(\text{tail}(x))))))
\]

4.
Reverse:
\[
\text{reverse} \\
\quad (= (\text{reverse}([]))(0)) \\
\quad (= (\text{reverse}(x)) (++(\text{reverse}(\text{tail}(x)))(\text{head}(x))))
\]

4.2 Example Anti-Instances

4.2.1 (1) *Fibonacci* and (2) *Factorial*:

The program scheme resulting from the *Fibonacci* and the *Factorial* program contains both recursive calls of the *Fibonacci* program and shows that one of them is deleted, by instantiating it with the identity function, to obtain the *Factorial* program. The same happens to one of the base cases of the *Fibonacci* program.

Function:
\[
(s_1 \\
\quad (= (s_1(0))(1)) \\
\quad (s_6 (s_7(s_8))(s_8)) \\
\quad (= (s_1(s_9)) (+(s_1(-(s_{10})(1)))(s_7(s_{11}(s_{20})(s_{21}))))))
\]

Mappings:
\[s_1 \rightarrow (\text{fib}; \text{fact}), s_6 \rightarrow (=; ID), s_7 \rightarrow (\text{fib}; ID),\]
\[s_8 \rightarrow (1; ID), s_9 \rightarrow (x; a), s_{10} \rightarrow (y; c),\]
\[s_{11} \rightarrow (-; ID), s_{20} \rightarrow (z; ID), s_{21} \rightarrow (2; ID)\]

4.2.2 (1) *Fibonacci* and (3) *SumList*:

The program scheme resulting from the *Fibonacci* program and the *sumList* program shows a generalization over functions where one is a recursive call and the other a simple head of a list. It also shows that the algorithm does not care about types at all.

Function:
\[
(s_1 \\
\quad (= (s_1(s_4))(s_5)) \\
\quad (s_6 (s_7(s_8))(s_8)) \\
\quad (= (s_1(x)) (s_{13}(s_{14}(s_{15}(s_{16})(s_8)))(s_1(s_{19}(s_{20})(s_{21}))))))
\]

Mappings:
\[s_1 \rightarrow (\text{fib}; \text{sumList}), s_{13} \rightarrow (+; +), s_{14} \rightarrow (\text{fib}; ID),\]
\[s_{15} \rightarrow (-; \text{head}), s_{16} \rightarrow (y; x), s_{19} \rightarrow (-; \text{tail}),\]
\[s_{20} \rightarrow (z; x), s_{21} \rightarrow (2; ID), s_4 \rightarrow (0; []),\]
\[s_5 \rightarrow (1; 0), s_6 \rightarrow (=; ID), s_7 \rightarrow (\text{fib}; ID),\]
\[s_8 \rightarrow (1; ID)\]

4.2.3 (4) Reverse and (2) Factorial:

The program scheme resulting from the reverse program and the factorial program is another example of type generalization.
Function:
\[
(s_1 \\
\quad (= (s_1(s_4))(s_5)) \\
\quad (= (s_1(x)) (s_9(s_{10}(s_{11}(x)))(s_{13}(x)))))
\]

Mappings:
\[
s_1 \rightarrow (\text{reverse}; \text{fact}), s_4 \rightarrow ([]; 0), s_5 \rightarrow (0; 1), \\
s_9 \rightarrow (++; *), s_{10} \rightarrow (\text{reverse}; ID), s_{11} \rightarrow (\text{tail}; ID), \\
s_{13} \rightarrow (\text{head}; -)
\]

4.2.4 (4) Reverse and (3) SumList (without permutation):

The program scheme resulting from the reverse program and the sumList program shows the difference between enabling and disabling argument permutation. When it is disabled, most of the functions have to be generalized...

Function:
\[
(s_1 \\
\quad (= (s_1([]))(0)) \\
\quad (= (s_1(x)) (s_9(s_{10}(s_{11}(x)))(s_{13}(x)))))
\]

Mappings:
\[
s_1 \rightarrow (\text{reverse}; \text{sumList}), s_9 \rightarrow (++; +), s_{10} \rightarrow (\text{reverse}; ID), \\
s_{11} \rightarrow (\text{tail}; \text{head}), s_{13} \rightarrow (\text{head}; \text{tail})
\]

4.2.5 (4) Reverse and (3) SumList (with permutation):

...while some can be kept (namely head and tail) when argument permutation is enabled.

Function:
\[
(s_1 \\
\quad (= (s_1([]))(0)) \\
\quad (= (s_1(x)) (s_9(s_1(\text{tail}(x)))(\text{head}(x)))))
\]

Mappings:
\[
s_1 \rightarrow (\text{reverse}; \text{sumList}), s_9 \rightarrow (++; +)
\]

These results show that our algorithm works with programs of any type, with any number of recursive calls (recursive calls are handled like any other non-recursive function), and with any number of base cases and hence equations in the program. It is also impossible to find a program scheme that yields the input programs with a smaller number of instantiations; therefore the program schemes are the most specific generalizations, given the restriction on argument permutation and the prohibition of insertions, while deletions are kept by instantiating with the identity function to delete a variable or function.
5 Future Work The goal of this seminar was to find an algorithm that computes a program scheme as a generalization over two recursive programs. We have found a way to make use of second-order anti-unification and its benefits, and implemented a prototype that uses an external program to compute the edit distance. We have shown that the algorithm computes correct anti-instances, and therefore program schemes that are also minimal with respect to the number of instantiations needed to retrieve the input programs. We have also shown that the algorithm is type-free and generalizes both over multiple recursive calls and over multiple base cases or equations in a program. We did not integrate a tree matching algorithm into our own implementation, nor did we define an input representation that makes use of the argument permutation option of our algorithm. There are many possibilities to extend this work. First of all, to make use of the permutation option, one would either define an input format that represents commutative functions or make the algorithm work with a background knowledge repository of commutative functions. Implementing this would have been beyond the scope of the seminar, so it remains future work, although the permutation functionality itself has been implemented. It would also be a significant speed improvement to port or interface the tree matching algorithm into Haskell, to integrate it completely into our program. Further possibilities include the integration of this method into an inductive programming learner. This, however, is only feasible once a proper repository representation has been introduced and implemented, including save and lookup methods to work with the repository. Another problem is that there is no direct benefit in keeping all learned programs and their program schemes in a repository, since an inductive programming learner only works on problem specifications. 
This means that such a background repository would need to map problem specifications to their corresponding learned programs, so that the learner could find similarities to the current problem and then try out several program schemes based on the similarity of the problem specifications. Measuring the similarity of problem specifications is, however, another substantial task to define and implement. Given such a similarity measure, it would be straightforward to find a program scheme in the repository that could help in solving a given new problem: one would only need to compute the similarities to all previously learned problem specifications and then retrieve all program schemes that generalize over the most similar problem. An algorithm can then try to instantiate these program schemes, ordered from most specific to most general, to solve the given problem. If a new program is found, the repository is updated with new program schemes derived from it. Over time, such a program learner would become faster and produce more efficient programs.
Precise Modeling of Design Patterns Alain Le Guennec, Gerson Sunyé, and Jean-Marc Jézéquel IRISA/CNRS, Campus de Beaulieu, F-35042 Rennes Cedex, FRANCE email: aleguenn, sunye, jezequel@irisa.fr. Proceedings of UML 2000, York, United Kingdom. HAL Id: hal-00794308, https://inria.hal.science/hal-00794308, submitted on 25 Feb 2013. Abstract. Design Patterns are now widely accepted as a useful concept for guiding and documenting the design of object-oriented software systems. Still, the UML is ill-equipped for precisely representing design patterns. It is true that some graphical annotations related to parameterized collaborations can be drawn on a UML model, but even the most classical GoF patterns, such as Observer, Composite or Visitor, cannot be modeled precisely this way. We thus propose a minimal set of modifications to the UML 1.3 meta-model to make it possible to model design patterns and represent their occurrences in UML, opening the way for some automatic processing of pattern applications within CASE tools. We illustrate our proposal by showing how the Visitor and Observer patterns can be precisely modeled and combined together using our UMLAUT tool. 
We conclude on the generality of our approach, as well as its perspectives in the context of the definition of UML 2.0. 1 Introduction From the designer's point of view, a modeling construct allowing design pattern [8] participant classes to be explicitly pointed out in a UML class diagram can be very useful. Besides the direct advantage of better documentation and the subsequent better understandability of a model, pointing out an occurrence of a design pattern allows designers to abstract away known design details (e.g. associations, methods) and concentrate on more important tasks. We can also foresee tool support for design patterns in UML as a help to designers in overcoming some difficulties [2][7][15]. More precisely, a tool can ensure that pattern constraints are respected, relieve the designer of some implementation burdens, and even recognize pattern occurrences within source code, preventing them from getting lost after they are implemented. In this context, we are not attempting to detect the need for a design pattern application but to help designers to explicitly manifest this need and thereby abstract away intricate details. Neither are we trying to discover which implementation variant is the most adequate for a particular situation, but we would like to relieve programmers of the implementation of recurrent trivial operations introduced by design patterns. According to James Coplien [3], p. 30, "patterns should not, can not and will not replace programmers"; our goal is not to replace programmers or designers but to support them. But in its current incarnation as of version 1.3 from the OMG, the UML is ill-equipped for precisely representing design patterns. It is true that some graphical annotations related to parameterized collaborations can be drawn on a UML model, but even the most classical GoF patterns, such as Observer, Composite or Visitor, cannot be modeled precisely this way (see Sect. 1.1). 
Ideas to overcome the shortcomings of collaborations are sketched in Sect. 1.2, providing some guidelines to model the "essence" of design patterns more accurately. An example showing how the Visitor and Observer patterns can be precisely modeled and combined together using our UMLAUT tool is presented in Sect. 2. Related approaches are then discussed in Sect. 4. We then conclude with a discussion of the generality of our approach, as well as its perspectives in the context of the definition of UML 2.0. To ease the reading of the paper, we have moved to an appendix some complementary support material needed to understand the extensions to UML that we propose. 1.1 Problem Outline The current official proposal for representing design patterns in the Unified Modeling Language is to use the collaboration design construct. Indeed, the two conceptual levels provided by collaborations (i.e. parameterized collaboration and collaboration usage) seem to be appropriate for modeling design patterns. At the general level, a parameterized collaboration is able to represent the structure of the solution proposed by a pattern, which is stated in generic terms. Here patterns are represented in terms of classifier and association roles. The application of this solution, i.e. the specialization of its terminology and structure to a particular context (the so-called instance or occurrence of a pattern), is represented by expansions of template collaborations. This design construct allows designers to explicitly point out the participant classes of a pattern occurrence. Parameterized collaborations are rendered in UML in a way similar to template classes [1], p. 384. Thus, roles represented in these collaborations are actually template parameters to other classifiers. More precisely, each role has an associated base, which serves as the actual template parameter (the template parameter and the argument of a binding must be of the same kind [14], p. 2-46). 
However, there are severe limitations to modeling design patterns as parameterized collaborations. First, the use of generic templates is not fully adapted to represent the associations between pattern roles and participant classes. More precisely, as each classifier role (actually its base class) is used as a template parameter, it can be bound to at most one participant class. Therefore, design patterns having a variable number of participant classes (e.g. Visitor, Composite) cannot be precisely bound. Also, if the use of base classes in a template collaboration is necessary to allow the binding (bindings can only be done between elements having the same meta-type), its utility and its underlying representation are unclear. Second, some constraints inherent to design patterns cannot be represented by collaborations, since they involve concepts that cannot be directly expressed as OCL constraints. For instance, in the Visitor [8] pattern, the number of visit methods defined by the visitor class must be equal to the number of concrete element classes. This constraint cannot be written in OCL unless access to the UML meta-model is provided. Third, collaborations provide no support for feature roles. In design patterns, an operation (or an attribute) is not necessarily a real operation: it defines a behavior that must be accomplished by one or more actual operations. This kind of role cannot be defined in a collaboration, nor is it possible to describe behavioral constraints (e.g. operation A should call operation B). These limitations were extensively discussed in previous work by the authors [16]. In this paper, we propose some solutions to overcome these problems. A misunderstanding of the term role might be a possible source of the present inadequacy of collaborations for modeling design patterns. In a UML collaboration, roles represent placeholders for objects of the running system. 
However, in the design pattern literature, the term role is often associated with participant classes, and not only with objects in a design model. There can also be roles for associations and inheritance relationships. In other words, pattern roles refer to an upper level. This subtle difference can be noted when binding a parameterized collaboration to represent an occurrence of a pattern: it is impossible to assign a single role to more than one class. This difference is also observable when writing OCL constraints to better model a design pattern: frequently, this kind of constraint needs access to meta-level concepts that cannot be directly accessed by OCL. 1.2 Patterns as sets of constraints: Leitmotiv in UML Design patterns are described using a common template, which is organized as a set of sections, each one relating a particular aspect of the pattern. Before describing how to model design patterns in UML, let us dispel some possible misunderstanding concerning the modeling of design patterns. It is not our intention to model every aspect of design patterns, since some aspects are rather informal and cannot be modeled. We are interested in a particular facet of patterns, which is called the Leitmotiv by Amnon Eden [5]: the generic solution indicated by a design pattern, which involves a set of participants and their collaborations. Our intention is to model the leitmotiv of design patterns using structural and behavioral constraints. The goal of this approach is to provide a precise description of how pattern participants should collaborate, instead of specifying a common fixed solution. Design patterns can be expressed as constraints among various entities, such as classifiers, structural and behavioral features, instances of the respective classifiers, generalization relationships between classifiers, generalization relationships between behavioral features, etc. All those entities are modeling constructs of the UML notation. 
That is, they can be thought of as instances of meta-classes from the UML meta-model. This suggests that patterns can be expressed with meta-level constraints. The parameters of the constraints together form the context of the pattern, i.e. the set of participants collaborating in the pattern. Since the UML meta-model is defined using a UML class diagram, we can make the reasonable assumption that it is no different from any other UML model. Therefore, we propose to use meta-level collaborations to specify design patterns. However, to avoid any ambiguity in the sequel, we will explicitly use “M2” if necessary when referring to the UML meta-model and “M1” when referring to an ordinary UML model, following the conventions of the classical 4-layer metamodel architecture. The material presented in the appendix explains in detail how collaborations and OCL constraints can be used together in a complementary way, and we apply this principle to specify the structural constraints of patterns in Sect. 2. A different approach is needed to specify the behavioral properties associated with a pattern, and Sect. 2.4 presents how temporal logic could be used for that purpose. Finally, Sect. 3 shows how an appropriate redefinition of the mapping of dashed ellipses makes it possible to keep that familiar notation to represent occurrences of design patterns. 2 Modeling Design Patterns in Action 2.1 Presentation of the Visitor and Observer Patterns Figure 1 shows the participants in the Visitor design pattern represented as a meta-level collaboration. It consists of a hierarchy of classes representing concrete nodes of the structure to be visited, and a visitor class (or possibly a hierarchy thereof). Each element class should have an accept() routine with the right signature, for which there should be a corresponding visitElement() routine in the visitor class. (Fig. 1. Meta-level collaboration of the Visitor design pattern.) Figure 2 shows a collaboration representing the participants in the Observer design pattern. It consists of a subject class (or hierarchy thereof) whose instances represent observed nodes, and a class (or hierarchy thereof) whose instances represent observers. The subject class should offer routines to attach or detach an observer object, and a routine to notify observer objects whenever the state of the subject changes. The observer class should offer an update() routine for notification purposes. 2.2 Structural constraints When a behavioral (or a structural) feature appears in the specification of a design pattern, this does not mean that there must be an identical feature in a pattern occurrence. Actually, the specification describes roles that can be played by one or more features in the model. Using meta-level collaborations clears this possible confusion. An example of a feature role is the Attach() feature of the Observer pattern. It represents a simple behavioral feature that adds an observer into a list. This does not mean that this feature cannot perform other actions, nor that it should be named “Attach” and have exactly the same parameters. Some feature roles are more complex than the above example, since they represent the use of dynamic dispatch by a family of behavioral features. An example of this is the Accept() feature role of the Visitor design pattern. It represents a feature that should be implemented by concrete elements. Such a family of features is named a Clan by Eden [5] (p.60). Finally, some other feature roles represent a set of clans, i.e. a set of features that should be redefined in a class hierarchy. The feature role Visit() of the Visitor design pattern is an example of this particular role. It designates a set of features (one for each concrete element) that should be implemented by concrete visitors. Such a family of features is named a Tribe by Eden [5] (p.61). 
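The meta-level reading of such roles can be made concrete outside OCL. As a toy illustration (the function name and data layout are ours, not the paper's), the Visitor constraint that every concrete element class offers an accept() and that the visitor offers one matching visit method per element class can be checked mechanically:

```python
# Toy meta-level check for the Visitor leitmotiv (our illustration):
# each concrete element class must offer accept(), and the visitor
# must offer one visitX() method per concrete element class X.
def visitor_structure_ok(element_methods, visitor_methods):
    return (all("accept" in methods for methods in element_methods.values())
            and all(f"visit{name}" in visitor_methods
                    for name in element_methods))

elements = {"Stmt": ["accept"], "Expr": ["accept"]}
ok = visitor_structure_ok(elements, ["visitStmt", "visitExpr"])
missing = visitor_structure_ok(elements, ["visitStmt"])  # visitExpr absent
```

The second call fails precisely because the number of visit methods no longer matches the number of concrete element classes, which is the constraint that plain OCL cannot express without meta-model access.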
2.3 Factoring recurring constraints with stereotypes These kinds of structural constraints among features and classes are recurring in pattern specifications, and factoring them would significantly ease the pattern designer's task. The natural means provided by the UML to group a set of constraints for later reuse is the stereotype construct. Figure 3 recalls how constraints can be attached to a stereotype. These constraints will later transitively apply to any elements to which the stereotype is applied (OCL rule number 3, page 2-71 of the UML 1.3 documentation [14]). Clans The first stereotype presented here is called <<Clan>>. A clan is a set of behavioral features that share a same signature and are defined by different classes of a same hierarchy. In OCL, clans are defined as follows: \[ \begin{align*} isClan & (\text{head} : \text{BehavioralFeature}, \\ & \text{features} : \text{Sequence}(\text{BehavioralFeature})) \ \text{inv} : \\ & \text{features} \rightarrow \text{forAll}(f \mid f.\text{sameSignature}(\text{head})) \end{align*} \] Other examples of clans are the AlgorithmInterface() feature role of the Strategy pattern or the notify() feature role of the Observer pattern (see Fig. 2). Tribes The second stereotype is called <<Tribe>> and is somewhat similar to the first one. A tribe is a set of behavioral features that consists of other sets of behavioral features, each of which is a clan. A tribe is defined in OCL as follows: \[ \begin{align*} isTribe & (\text{heads}, \text{features} : \text{Sequence}(\text{BehavioralFeature})) \ \text{inv} : \\ & \text{features} \rightarrow \text{forAll}(f \mid \text{heads} \rightarrow \text{exists}(\text{head} \mid \text{head}.\text{sameSignature}(f))) \end{align*} \] Elements of a tribe do not necessarily have the same signature, as elements of a clan do. Other examples of tribes are the setState() feature role of the Observer pattern (see Fig. 2) or the Handle() feature role of the State pattern. 
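A hedged Python transliteration of the two invariants may help fix the idea (the feature model is our own toy encoding; the paper works in OCL over the UML meta-model, and the signature check is simplified here to a comparison of parameter type lists):

```python
# Toy versions of the <<Clan>> and <<Tribe>> invariants. A feature is
# a dict whose "param_types" list stands in for its signature.
def same_signature(feat_a, feat_b):
    return feat_a["param_types"] == feat_b["param_types"]

def is_clan(head, features):
    # every feature shares the head's signature
    return all(same_signature(f, head) for f in features)

def is_tribe(heads, features):
    # every feature matches the signature of some head
    return all(any(same_signature(h, f) for h in heads) for f in features)

attach = {"param_types": ["Observer"]}
detach = {"param_types": ["Observer"]}
update = {"param_types": []}
clan_ok = is_clan(attach, [attach, detach])
tribe_ok = is_tribe([attach, update], [detach, update])
```

As in the text, a tribe is the weaker notion: attach and update form valid tribe heads for a mixed feature set, while is_clan rejects any set whose members disagree on signature.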
Auxiliary operations The above OCL constraints both use an operation that compares behavioral feature signatures. Two features share the same signature if they have the same set of parameters (including the “return” parameter): \[ \begin{align*} sameSignature & (\text{featA}, \text{featB} : \text{BehavioralFeature}) : \text{Boolean}; \\ sameSignature &= \\ & \text{featA}.\text{parameter} \rightarrow \text{collect}(par \mid par.\text{type}) = \\ & \text{featB}.\text{parameter} \rightarrow \text{collect}(par \mid par.\text{type}) \end{align*} \] 2.4 Behavioral properties and temporal logic Behavioral properties of the Visitor pattern [1] A given call of anElement.accept(aVisitor) is always followed by a call of aVisitor.visitAnElement(anElement). If we want the call to visit to be synchronous (“nested flow of control”), we need a second constraint: [2] A given call of aVisitor.visitAnElement(anElement) always precedes the return of a call of anElement.accept(aVisitor). Behavioral properties of the Observer pattern [1] After a given call of aSubject.attach(anObserver), and before any subsequent call of aSubject.notify() or of aSubject.detach(anObserver), the set of observers known by aSubject must contain anObserver. [2] A given call of aSubject.detach(anObserver), if any, must follow a corresponding call of aSubject.attach(), and no other call of aSubject.attach(anObserver) should appear in between. [3] After a given call of aSubject.detach(anObserver), and before any subsequent call of aSubject.notify() or of aSubject.attach(anObserver), the set of observers known by aSubject must not contain anObserver. [4] A given call of aSubject.notify() must be followed by calls of observers.update(), and all these calls must precede any subsequent call of aSubject.attach(anObserver), of aSubject.detach(anObserver) or of aSubject.notify(). Note that we do not require the notification to be synchronous, just that all the observers which are known when notification starts will eventually be notified. 
We could allow for the collapsing of pending notification events by allowing another call of aSubject.notify() before all calls of update(): pending calls would then not have to occur twice to satisfy the constraints (this can be very useful in GUI design notably, to improve rendering speed). Using temporal logic Note how general and declarative the constraints are. For instance, it is not written down that notify() shall contain a loop calling update(), because the pattern does not have to be implemented like that. We do not want to proscribe correct alternative solutions. The constraints just ensure that some events shall occur if some others do, and prevent erroneous orderings. A form of temporal logic would provide the right level of abstraction to express the behavioral properties expected of all pattern occurrences. Some recent research efforts [12, 4] have begun to investigate the integration of temporal operators within OCL, reusing the current part of OCL for the atoms of temporal logic formulas. Although this work is very valuable and necessary, we cannot reuse it directly in our context, because the resulting OCL expressions belong to the model level (M1): they lack quantification over modeling elements and therefore cannot capture the “essence” of a pattern's behavior. Behavioral properties do rely on quantification over operations, their target, their parameters, and various other entities which will later be bound to elements of the M1 level. This suggests that they are at the same level as the OCL expressions we used to specify the structural constraints of patterns. However, OCL is not really appropriate to formally specify the behavioral properties of a pattern: such OCL expressions would have to express very complex properties on a model of all possible execution traces (such a “runtime model” is actually under work at the OMG). 
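To make the intended semantics concrete, behavioral property [4] of the Observer pattern can be checked over a concrete execution trace. The following monitor is our own toy sketch, not the paper's proposal, and it takes the strict reading, without the collapsing of pending notifications discussed above:

```python
# Toy runtime monitor for Observer property [4]: after notify(), every
# observer attached at that moment must receive update() before the
# next attach/detach/notify event.
def check_notify_property(trace):
    attached, pending = set(), None   # pending = observers still to update
    for event, obs in trace:
        if event in ("attach", "detach", "notify") and pending:
            return False              # previous notification not delivered
        if event == "attach":
            attached.add(obs)
        elif event == "detach":
            attached.discard(obs)
        elif event == "notify":
            pending = set(attached) or None
        elif event == "update" and pending:
            pending.discard(obs)
            pending = pending or None
    return pending is None

good = [("attach", "o1"), ("attach", "o2"), ("notify", None),
        ("update", "o1"), ("update", "o2"), ("detach", "o1")]
bad = [("attach", "o1"), ("notify", None), ("notify", None)]
```

The second trace violates the property because a second notify() arrives while o1's update() from the first notification is still pending; this is exactly the kind of erroneous ordering the temporal constraints are meant to rule out.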
Making these OCL expressions completely explicit would amount to building a model checker: a simulator would produce execution traces following the rules of a semantics, and the OCL expressions would represent the properties to be checked. However, special OCL operations could be defined to simplify complex OCL expressions. To the extent that the designer could use these predefined operations to completely abstract away from the actual details of the runtime model, they would provide a formal definition for a set of OCL temporal operators. Another interesting topic for future research is how to adapt UML sequence diagrams so that they could describe behavioral constraints at the more general level needed for design patterns. 3 Representing pattern occurrences 3.1 Bridging the gap between the two levels of modeling An occurrence of a pattern can be represented by a collaboration occurrence (see more details in the appendix) at the meta-model level (M2) connecting (meta-level) roles in the collaboration with instances representing modeling elements of the M1 model (instances in M2 represent modeling elements of M1). The bindings would belong to the M2 model, not to the M1 model. They link instances in M2 that are representations of modeling elements of M1. The problem is similar to expressing that a class in a given (M1) model is an “instance” of a <<meta>> class of the same model. Normally the “is an instance of” dependency is between an Instance and a Classifier, and not between a Classifier and another Classifier. So this dependency would appear in the model of the model, where the normal class would appear as an instance while the <<meta>> class would appear as a classifier. The standard <<meta>> stereotype acts as an inter-level “bridge”, making up for the fact that UML is not a fully reflective language and thereby avoiding a very significant extension. Using <<meta>> allows for representing or transposing M2 entities into M1. 
An appropriate M1 dependency can then be used to relate M1 entities to entities “transposed” from M2. Now we can define a pattern using an M2 collaboration and still represent it and access it from an ordinary M1 model by using the <<meta>> stereotype. A pattern occurrence is then represented by a composite dependency between arbitrary model elements and classifier roles of the collaboration transposed from M2 (see Fig. 4), in a way similar to a CollaborationOccurrence (see Fig. 6 in the appendix), except for the fact that a real collaboration occurrence connects instances to classifier roles, while a pattern occurrence connects any model elements to classifier roles of a <<meta>> collaboration. Additional well-formedness rules associated with pattern occurrences [1] The pattern specification of the pattern occurrence must be a <<meta>> collaboration. **context** PatternOccurrence **inv:** \[ \text{self.patternSpecification.stereotype} \rightarrow \text{exists}(s \mid s.\text{name} = \text{"meta"}) \] [2] The number of participants must not violate the multiplicity constraints of the roles in the <<meta>> collaboration. **context** PatternOccurrence **inv:** \[ \text{self.patternSpecification.ownedElement} \rightarrow \text{select}(cr \mid cr.\text{oclIsKindOf(ClassifierRole)}) \rightarrow \text{forAll}(cr : \text{ClassifierRole} \mid \text{let nbOfParticipants} = cr.\text{supplierDependency} \rightarrow \text{select}(p \mid p.\text{oclIsKindOf(Participation)}) \rightarrow \text{size()} \text{ in } cr.\text{multiplicity.ranges} \rightarrow \text{exists}(r \mid r.\text{lower} <= \text{nbOfParticipants} \text{ and } \text{nbOfParticipants} <= r.\text{upper})) \] Additional well-formedness rules associated with participations [1] The supplier must be a classifier role. **context** Participation **inv:** \[ \text{self.supplier.oclIsKindOf(ClassifierRole)} \] The supplier role of the participation must be a role of the collaboration specifying the corresponding pattern occurrence. 
**context** Participation **inv**: \[ \text{self.supplier.namespace} = \text{self.patternOccurrence.patternSpecification} \] The client element of the participation (the participant) must be of a kind whose name matches the name of the base of the role or of any sub-class of the base. **context** Participation **inv**: \[ \text{self.supplier.oclAsType(ClassifierRole).base.allSubtypes()} \rightarrow \text{exists}(c \mid c.\text{name} = \text{self.client.type.name}) \] The last rule is the most significant one in the inter-level bridging context. It ensures that a pattern occurrence could be represented at the M2 level directly by a collaboration occurrence binding roles to *conforming* instances representing modeling elements of M1. Note that the type cast realized with oclAsType is always valid because of rule [1]. 3.2 Graphical representation of pattern occurrences Figure 5 presents a class diagram in which two pattern occurrences are used (we assume that all visit() operations call the markNode() operation, whose effect is in turn notified to the observer, which can count marked nodes). We chose to keep the familiar ellipse notation to represent both pattern occurrences as defined in the previous section and collaboration occurrences as defined in the appendix, in order not to disrupt designers accustomed to the current UML notation for design patterns. Note that Fig. 5 does not show all participation relationships, because this would clutter the diagram for no good reason since there are no ambiguities. For the same reason, neither does it represent the relationships between pattern occurrences and the corresponding <<meta>> collaborations. However, a tool should provide the option of showing all participations, even those involving behavioral features or generalizations, possibly using dialog boxes. 
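The multiplicity rule for pattern occurrences lends itself to a small toy transliteration (the data layout is ours, not the UML meta-model): count the participations bound to each classifier role and check the count against the role's multiplicity ranges.

```python
# Toy check of the pattern-occurrence multiplicity rule: the number of
# participants bound to each role must fall within one of its ranges.
STAR = float("inf")  # stand-in for the UML "*" upper bound

def multiplicity_ok(role, n_participants):
    return any(r["lower"] <= n_participants <= r["upper"]
               for r in role["ranges"])

def occurrence_well_formed(roles, participations):
    return all(
        multiplicity_ok(role, len(participations.get(role["name"], [])))
        for role in roles)

visitor_roles = [
    {"name": "ConcreteElement", "ranges": [{"lower": 1, "upper": STAR}]},
    {"name": "Visitor", "ranges": [{"lower": 1, "upper": 1}]},
]
ok = occurrence_well_formed(
    visitor_roles,
    {"ConcreteElement": ["Stmt", "Expr"], "Visitor": ["PrettyPrinter"]})
bad = occurrence_well_formed(visitor_roles, {"ConcreteElement": ["Stmt"]})
```

The 1..* range on ConcreteElement is what the parameterized-collaboration approach could not express: a single template parameter cannot be bound to a variable number of participant classes, whereas the check above accepts any count within the range.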
Also, as there may be many behavioral features participating in a pattern such as the observer (potentially all those that change the state of the subject), a good tool should also propose a default list of matching participants to ease the designer’s task.

4 Conclusion and related work

4.1 Related work

PatternWizard is one of the most extensive projects on design pattern specification, and has influenced our research work in several respects. PatternWizard proposes LePUS [6], a declarative, higher-order language designed to represent the generic solution, or leitmotif, indicated by design patterns. Our work differs from PatternWizard in two ways. First, we use UML and OCL to specify patterns. We believe that a UML collaboration and OCL rules can be easier to understand than LePUS formulae and the associated visual notation. Second, PatternWizard works at the code level and is not integrated with any design model.

An approach to the validation of design patterns through their precise representation is proposed by Görel Hedin [9]. She uses attribute grammars to precisely model a pattern, and explicit markers in a program to distinguish a pattern occurrence and validate it. Patterns are represented as a set of class, attribute and method roles, related by rules which have the same goal as the OCL constraints in our proposal. Using attribute extension is a way to extend the static semantics with new rules, while leaving the original syntax unchanged.

In [11], a dedicated logic called MMM (for “Model and MetaModel logic”) is used to express constraints and patterns. This logic can express causal obligations and can manipulate entities from both M1 and M2, but it is not based on OCL. The authors give a MMM specification of a Subject/Observer cooperation that includes both structural and behavioral aspects. However, the notion of role does not seem to be supported.
Without roles, the generic form of a pattern cannot be completely represented, limiting this interesting approach to the specification of particular occurrences of design patterns.

Another research effort in the precise representation of design patterns was presented by Tommi Mikkonen [10]. He proposes to formalize the temporal behavior of patterns using a specification method named DisCo. An interesting aspect of his work is that its formalism allows pattern occurrences to be combined through refinements between pattern definitions.

4.2 Conclusion

The use of meta-level collaborations and constraints, instead of the suggested parameterized collaborations, allows a more precise representation of the structural and behavioral constraints of design patterns. However, one may argue that this approach is not appropriate since we make changes to the UML meta-model, which is supposed to be standardized and static. Although this observation is true, it is also true that our approach does not change the UML abstract syntax. This is because the representation of a design pattern as a meta-model collaboration does not add new modeling constructs to UML. It only adds a way to enforce particular constraints among existing constructs.

Our approach also fits quite well with the Profile mechanism. A collection of design patterns, modeled with meta-level collaborations, could be provided as a Profile to a UML CASE tool, which could then reuse the pattern definitions.

In its current incarnation at the OMG, the UML is ill-equipped for precisely representing design patterns: even the most classical GoF patterns, such as Observer, Composite or Visitor, cannot be modeled precisely with parameterized collaborations in UML 1.3. In this paper, we have proposed a minimal set of modifications to the UML 1.3 meta-model to make it possible to model design patterns and represent their occurrences in UML, opening the way for some automatic processing of pattern applications within CASE tools.
We are implementing these ideas in the UMLAUT tool (freely available from http://www.irisa.fr/pampa/UMLAUT/) in order to better document the occurrences of design patterns in UML models, as well as to help the designer abstract away from the gory details of pattern application. Because in our proposal the “essence” of design patterns can be represented at the meta-level with sets of constraints, we can also foresee the availability of design pattern libraries that could be imported into UML tools for application in the designer’s UML models. We are starting to build such a library as an open source initiative.

References

A Appendix: Defining the context of OCL expressions with Collaborations

A.1 Collaborations as context for reusable OCL expressions

Section 7.3 of the UML documentation [14] defines what the context of an OCL expression is. The context specifies how the expression is “connected” to the UML model, that is, it declares the names that can be used within the expression. UML proposes two kinds of context for OCL constraints:

Classifiers, to which an <<invariant>> can be attached. The OCL expression can refer to “self” and to all subexpressions reachable from “self” using the navigation rules defined in Sect. 7.5.

Behavioral Features (Operations or Methods), to which a <<precondition>> and/or a <<postcondition>> can be attached. The OCL expression can refer to “self” and to all formal parameters of the behavioral feature. Note that although OCL postconditions can express that a new object has been created (using oclIsNew()), the context does not provide any way to declare a local variable that will denote the newly created object.

A Collaboration can also be attached to a Classifier or to a Behavioral Feature. The collaboration can describe how the behavioral feature is realized, in terms of roles played by “self” and the various parameters.
One can also use the predefined stereotypes <<self>>, <<parameter>>, <<local>>, or <<global>> to make the roles more explicit within the collaboration. Note that a given role can sometimes be played by several instances in a collaboration occurrence, within the limits imposed by the multiplicity of the role, and is then represented with a “multi-object” box, resembling stacked objects.

This suggests that Collaborations could well be used systematically as a precise graphical representation of the context of an OCL expression. Each parameter maps to a role having the parameter’s type as base. A parameter which would be an OCL sequence of objects of a given type would map to a role with a multiplicity greater than one. Auxiliary variables introduced by “let expressions” (see 7.4.3 of [14]) can also be represented by corresponding roles in the collaboration.

A.2 Binding a parameterized OCL expression to the model

The way particular instances are attached to the respective roles is not very clear in the UML documentation: when a behavioral feature is called, the target of the call action and its effective arguments are supposed to be bound to the corresponding roles, but this is left implicit. UML is apparently missing a construct to bind an instance to a role. The UML notation describes “instance level collaborations” (see Fig. 3-52, p. 314) as object diagrams representing snapshots of the system, where the roles of the objects are indicated in the object-box. But the mapping onto the UML abstract syntax is not described: how are Instances bound to ClassifierRoles? Without further information, the only plausible mapping we found is to add the ClassifierRole (which is a kind of Classifier) to the set of current types of the Instance playing this role. Although this correctly reflects the dynamic nature of roles, this mapping explicitly relies on multiple and dynamic classification, which might be deemed too sophisticated, and is not well-defined in the UML context.
Another disadvantage of this mapping is that it is too fine-grained: there is no way to group individual bindings together and say that they all belong to the same collaboration. This can cause confusion if a snapshot presents a set of instances participating in more than one collaboration at the same time.

Note also that the existing Binding construct of UML relates to generic template instantiations (see Sect. 1.1), which is a completely different matter altogether. We suggest that it be renamed as TemplateExpansion, which would better reflect its semantics, and that the name Binding be reserved for a new construct whose purpose is to bind an instance to a role, with a semantics equivalent to the dynamic classification alternative presented above. Note that it is desirable that individual Bindings be grouped together to form the whole effective context of a collaboration occurrence. UML 1.1 offered the possibility to have composite dependencies, but this valuable capability has apparently been removed during the transition to UML 1.3. We propose that it be brought back in. A very similar approach is proposed in [13]: an InstanceCollaboration construct is used to group a set of Instances, while the relation between role and instance is expressed using classification instead of an explicit dependency.

Additional well-formedness rules associated to roles and bindings

[1] The number of instances playing a given role must not violate the multiplicity constraint of the role.

**context** ClassifierRole **inv**:
let nbOfInstances =
  self.supplierDependency
    ->select(b | b.oclIsKindOf(Binding))->size()
in self.multiplicity.ranges
  ->forAll(r | r.lower <= nbOfInstances
           and nbOfInstances <= r.upper)

A.3 Graphical representation of bindings

There could be several ways of representing bindings between instances and roles.

1.
Using the “instance level collaboration” idea of putting the role in the object-box. This solution is appropriate in some circumstances and is already known, and so is worth keeping.

2. Using a dependency arrow with a <<bind>> stereotype connecting the instance on the object diagram and the role on the collaboration diagram. This solution is not very attractive because it is too fine-grained and also requires both diagrams to be present together.

3. Reusing the dashed-ellipse notation originally proposed to represent instantiations of template collaborations, while changing the mapping of the ellipse onto the abstract syntax: the ellipse would represent the composition of all individual bindings, while each line going out of the ellipse would map to an individual binding dependency between the instance at the end of the line and the role whose name is given by the line label. This notation is actually generalizable to any composite dependency.
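The multiplicity well-formedness rules above (rule [2] for pattern occurrences and rule [1] for bindings) amount to the same operational check: count the participants or instances attached to each role and test the count against the role's multiplicity ranges. As an illustration only, here is a minimal Python sketch; the `Role` and `PatternOccurrence` structures are hypothetical stand-ins, not any real UML tool's API.

```python
# Hypothetical data structures standing in for ClassifierRole and
# PatternOccurrence; names are illustrative, not a real UML tool API.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Role:
    name: str
    # multiplicity ranges as (lower, upper) pairs, e.g. [(1, 1)]
    ranges: List[Tuple[int, int]]


@dataclass
class PatternOccurrence:
    # maps each role name to the model elements participating in it
    participants: Dict[str, List[str]]


def multiplicity_ok(occ: PatternOccurrence, roles: List[Role]) -> bool:
    """Well-formedness check: for every role, the number of participants
    must fall inside at least one of the role's multiplicity ranges."""
    for role in roles:
        n = len(occ.participants.get(role.name, []))
        if not any(lo <= n <= hi for lo, hi in role.ranges):
            return False
    return True
```

For an Observer-like collaboration one would declare, e.g., a `subject` role with range (1, 1) and an `observer` role with an unbounded upper limit; an occurrence lacking a subject then fails the check.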
Abstract

Inductive learning in First-Order Logic (FOL) is a hard task due to both the prohibitive size of the search space and the computational cost of evaluating hypotheses. This paper introduces an evolutionary algorithm for concept learning in (a fragment of) FOL. The algorithm evolves a population of Horn clauses by repeated selection, mutation and optimization of more fit clauses. Its main novelty, with respect to previous approaches, is the use of stochastic search biases for reducing the complexity of the search process and of the clause fitness evaluation. An experimental evaluation of the algorithm indicates its effectiveness in learning short hypotheses of satisfactory accuracy in a short amount of time.

1 Introduction

Learning from examples in FOL, also known as Inductive Logic Programming (ILP) [20], constitutes a central topic in Machine Learning, with relevant applications to problems in complex domains like natural language and molecular computational biology [19]. Learning can be viewed as a search problem in the space of all possible hypotheses [16]. Given a FOL description language used to express possible hypotheses, a background knowledge, a set of positive examples, and a set of negative examples, one has to find a hypothesis which covers all positive examples and none of the negative ones (cf. [13, 17]). This problem is NP-hard even if the language used to represent hypotheses is propositional logic. When FOL hypotheses are used, the complexity of searching is combined with the complexity of evaluating hypotheses [8].

Popular FOL learners like FOIL [22] and Progol [18] adopt a progressive coverage approach. One starts with a training set containing all positive and negative examples, constructs a FOL (if-then) rule which covers some of the positive examples, removes the covered positive examples from the training set, and continues with the search for the next rule.
When the process terminates (after a maximum number of iterations or when all positive examples have been covered), the resulting set of rules is reviewed, e.g., to eliminate redundant rules. These algorithms use different greedy methods as well as heuristics (e.g. information gain) to cope with the complexity of the search.

FOL learners based on genetic algorithms act on more clauses at the same time. Systems like GIL [11], GLPS [14] and STEPS [12] use an encoding where a chromosome represents a set of rules. In other GA-based systems like SIA01 [3], REGAL [7], G-NET [1] and DOGMA [10], a chromosome represents a clause. In the latter case a non-redundant hypothesis is extracted from the final population at the end of the evolutionary process. Both approaches present advantages and drawbacks. Encoding a whole hypothesis in each chromosome allows an easier control of the genetic search, but introduces a large redundancy that can lead to populations which are hard to manage and to individuals of enormous size. Encoding one clause in each chromosome allows for co-operation and competition between different clauses, and hence reduces redundancy, but requires sophisticated strategies, like co-evolution, for coping with the presence in the population of super-individuals.

This paper introduces an evolutionary algorithm which evolves a set of chromosomes representing clauses, where at each iteration fitter chromosomes are selected, mutated, and optimized. The main novelty with respect to previous approaches is the introduction of two stochastic mechanisms for controlling the complexity of the construction, optimization and evaluation of clauses. The first mechanism allows the user to specify the percentage of the background knowledge that the algorithm will use, in this way controlling the computational cost of fitness evaluation.
The second mechanism allows one to control the greediness of the operators used to mutate and optimize a clause, thus controlling the computational cost of the search. Furthermore, we introduce and test a variant of the Universal Suffrage (US) selection operator [7], called the Weighted Universal Suffrage (WUS) selection operator. The US selection operator is based on the idea that individuals are candidates to be elected, and positive examples are the voters. Every positive example has the same voting power. The idea behind the WUS selection operator is to give more voting power to examples that are harder to cover. The voting power of examples is adjusted during the computation. We show experimentally that the algorithm is able to find hypotheses of satisfactory quality, both with respect to accuracy and simplicity, in a short time.

2 The Learning Algorithm

The algorithm considers Horn clauses of the form

\[ p(X, Y) \leftarrow r(X, Z), q(Y, a). \]

consisting of atoms whose arguments are either variables (e.g. \( X, Y, Z \)) or constants (e.g. \( a \)). The atom \( p(X, Y) \) is called the head, and the set of other atoms is called the body. The head describes the target concept, and the predicates of the body are in the background knowledge. The background knowledge contains ground facts (i.e. clauses of the form \( r(a, b) \leftarrow . \) with \( a, b \) constants). The training set contains facts which are true (positive examples) and false (negative examples) for the target predicate. A clause is said to cover an example if the theory formed by the clause and the background knowledge logically entails the example. A clause has a declarative interpretation (a universally quantified FOL implication)

\[ \forall X, Y, Z \; (r(X, Z) \land q(Y, a) \rightarrow p(X, Y)) \]

and a procedural one: in order to solve \( p(X, Y) \), solve \( r(X, Z) \) and \( q(Y, a) \).
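The coverage relation just defined can be made concrete with a small sketch. The following Python code (an illustration, not ECL's actual implementation) checks whether a function-free Horn clause plus ground background facts entails a ground example, by searching for a substitution that grounds the body into the facts. Atoms are `(predicate, args)` tuples; by convention here, uppercase strings are variables and lowercase strings are constants.

```python
# Naive coverage test for function-free Horn clauses over ground facts.
# This is a sketch of the declarative reading above, not ECL's code.
def is_var(t):
    """Uppercase strings are variables, everything else is a constant."""
    return isinstance(t, str) and t[:1].isupper()


def unify(atom, fact, subst):
    """Match an atom against a ground fact under subst; return the
    extended substitution, or None on failure."""
    if atom[0] != fact[0] or len(atom[1]) != len(fact[1]):
        return None
    s = dict(subst)
    for t, c in zip(atom[1], fact[1]):
        if is_var(t):
            if s.setdefault(t, c) != c:   # already bound to something else
                return None
        elif t != c:                      # constant mismatch
            return None
    return s


def covers(head, body, facts, example):
    """True if 'head <- body' together with facts entails the example."""
    s0 = unify(head, example, {})
    if s0 is None:
        return False

    def solve(goals, s):
        if not goals:
            return True
        first, rest = goals[0], goals[1:]
        return any(solve(rest, s2)
                   for f in facts
                   for s2 in [unify(first, f, s)] if s2 is not None)

    return solve(body, s0)
```

With the paper's running clause \( p(X, Y) \leftarrow r(X, Z), q(Y, a) \) and facts \( r(a, b) \) and \( q(c, a) \), the example \( p(a, c) \) is covered while \( p(b, c) \) is not.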
Thus a set of clauses forms a logic program, which can directly (in a slightly different syntax) be executed in the programming language Prolog. So the goal of the learning algorithm can be rephrased as finding a logic program that models a given target concept, given a set of training examples and a background knowledge. The overall algorithm we introduce, called ECL (Evolutionary Concept Learner), is illustrated in pseudo-code in the figure below.

ALGORITHM ECL
Sel = positive examples
repeat
    Select partial Background Knowledge
    Population = Initial_pop
    while (not terminate) do
        Select n chromosomes using Sel
        for each selected chromosome chrm
            Mutate chrm
            Optimize chrm
            Insert chrm in Population
        end for
    end while
    Store Population in Final_Population
    Sel = Sel - { positive examples covered by clauses in Population }
until max_iter is reached
Extract final theory from Final_Population

In the repeat statement the algorithm iteratively constructs a Final_Population as the union of max_iter populations. At each iteration, part of the background knowledge is chosen using a stochastic search bias described below. A Population is evolved by the repeated application of selection, mutation and optimization (the while statement). These operators use only the chosen part of the background knowledge. At each generation of the evolution, \( n \) clauses are selected by means of the Universal Suffrage (US) selection operator [21], a powerful selection mechanism used for achieving species formation in GAs for concept learning. US selection randomly chooses a positive example from the set \( Sel \) of positive examples not yet covered by clauses in the current Final_Population, and performs a roulette-wheel selection on those clauses of the Population which cover that example. If the example is not yet covered by any clause, a new clause is constructed for that example using a seeding operator.
The selected clause is then modified using the mutation and optimization operators, and is inserted in the population. When the construction of the Final_Population is completed, a logic program is extracted using a set covering algorithm. Before presenting the main steps of ECL, we describe the stochastic search biases.

2.1 Stochastic Search Biases

ECL uses two stochastic mechanisms, one for selecting part of the background knowledge, and one for selecting the degree of greediness of the operators used in the evolutionary process. A parameter \( p \) (a real number in \( (0,1] \)) is used in a simple stochastic sampling mechanism which selects an element of the background knowledge with probability \( p \). In this way the user can limit the cost of the search and of fitness evaluation by setting \( p \) to a low value. This is because only a part of the background knowledge will be used when assessing the goodness of an individual. This leads to the implicit selection of a subset of the examples (only those examples that can be covered with the partial background knowledge selected will be considered). Individuals will be evaluated on these examples using only the partial background knowledge. In this way an individual can be wrongly evaluated, both because a subset of the examples is used, and because those examples can be wrongly classified, in case they are covered using the whole background knowledge but are not covered using the partial background knowledge. This is different from other mechanisms used for improving the efficiency of fitness evaluation, like [9], [24], where training-set sampling is employed for speeding up the evaluation of individuals.

The construction, mutation and optimization of a clause use four greedy generalization/specialization operators (described later in a separate section). Each greedy operator involves the selection of a set of constants (or of a set of variables).
The size of this set can be supplied by the user by setting a corresponding parameter \( N_i \) (\( i = 1, \ldots, 4 \)). The elements of the set are then randomly chosen with uniform probability. In this way the user can control the greediness of the operators, where higher values of the parameters imply higher greediness. Finally, ECL also uses a language bias, commonly employed in ILP systems, for explicitly limiting the maximum length of clauses. These search biases allow one to reduce the cost of both the search and fitness evaluation, but the price to pay may be the impossibility of finding the best clauses.

2.2 Fitness and Representation

The quality of a clause \( cl \) is measured by the following fitness function:

\[ fitness(cl) = \frac{pos - pos_{cl}}{pos} + w \cdot \frac{neg_{cl}}{neg} \]

The aim of ECL is to evolve clauses with minimum fitness, that is, clauses which cover many positive examples and few negative ones. In the above formula \( pos \) and \( neg \) are respectively the total number of positive and negative training examples, \( pos_{cl} \) and \( neg_{cl} \) are the number of positive and negative examples covered by the clause \( cl \), and \( w \) is a weight used to favor clauses covering few negative examples. The weight \( w \) is used to deal with skewed distributions of the examples, where a high weight is used when there are many more positive examples than negative ones.

ECL uses a high-level representation similar to the one used by SIA01 [3], where a clause \( p(X,Y) \leftarrow r(X, Z), q(Y, a). \) is described by the sequence

\[ \langle p, X, Y \rangle \; \langle r, X, Z \rangle \; \langle q, Y, a \rangle \]

This representation is preferred to other representations typical of GAs, like bit strings, because it allows the direct use of ILP operators for generalization and specialization of a clause. Moreover, it does not constrain the length of a chromosome, as does, e.g., the bitwise representation used in the REGAL and G-NET systems, which requires the user to specify an initial template for the target predicate.
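The fitness function is a direct computation once the four counts are known. A minimal sketch (lower is better; a clause covering many positives and few negatives scores low; the default \( w = 1.0 \) is an assumption of this sketch):

```python
def fitness(pos_cl, neg_cl, pos, neg, w=1.0):
    """Clause fitness, to be minimized: the fraction of positive examples
    left uncovered, plus w times the fraction of negatives covered.
    Parameter names mirror the paper's symbols; w=1.0 is assumed."""
    return (pos - pos_cl) / pos + w * neg_cl / neg
```

For instance, a clause covering all 10 positives and no negatives has fitness 0 (the optimum), while one covering no positives and all negatives has fitness \( 1 + w \).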
2.3 Clause Construction

A clause \( cl \) is constructed when the US selection operator selects a positive example which is not yet covered by any clause in the current population. This example is used as a seed in the following procedure, where \( BK_p \) denotes the chosen part of the background knowledge.

1. The selected example becomes the head of the emerging clause.

2. Construct two sets \( A_{cl} \) and \( B_{cl} \): \( A_{cl} \) consists of all atoms in \( BK_p \) having at most one argument which does not occur in the head; \( B_{cl} \) contains all elements of \( BK_p \setminus A_{cl} \) having at least one argument occurring in the head.

3. While \( length(cl) < l \) and \( A_{cl} \cup B_{cl} \neq \emptyset \):

   (a) Randomly select an atom from \( A_{cl} \) and remove it from \( A_{cl} \). If \( A_{cl} \) is empty then randomly select an atom from \( B_{cl} \) (and remove it from \( B_{cl} \)). Add the selected atom to the emerging clause \( cl \).

4. Generalize \( cl \) as much as possible by means of the repeated application of the generalization operator “constant into variable” (described in the next section). Apply this operator to the clause until either its fitness increases or a maximum number of iterations is reached. In the former case, retract the last application of the generalization operator.

In step 3, \( l \) is the maximum length of a clause, supplied by the user. If \( l \) is not supplied, the first condition of the while cycle is dropped, and no constraint on the length of the clause is imposed.

2.4 Selection

The US selection operator, first introduced in [7], selects clauses in two steps:

1. randomly select \( n \) examples from the set of positive examples;

2. for each selected example \( e_i \), \( 1 \leq i \leq n \), let \( \text{Cov}(e_i) \) be the set of clauses in the current population that cover \( e_i \).
If \( \text{Cov}(e_i) \neq \emptyset \), choose one clause from \( \text{Cov}(e_i) \) using a roulette-wheel mechanism, where the sector associated with the clause \( c \in \text{Cov}(e_i) \) is proportional to the ratio between the fitness of \( c \) and the sum of the fitness of all the clauses occurring in \( \text{Cov}(e_i) \). If \( \text{Cov}(e_i) = \emptyset \), create a new clause covering \( e_i \), using \( e_i \) as a seed.

When introduced in [7], the US selection operator was used in a distributed system made of various genetic nodes, where each genetic node performs a GA. In that setting, the examples assigned to each node were different, and the training sets changed during the computation. However, at the level of each GA the examples were the same, and had the same probability of being selected. We propose here the following variant of US selection, called Weighted US selection, where examples have different probabilities of selection. A weight is associated with each example, where smaller weights are associated with examples that are harder to cover. Then the random selection used in step 1 of the US selection above is replaced by a selection which takes the weights of the examples into account. In more detail, the weight of an example \( e \) is equal to

\[ \frac{|\text{Cov}(e)|}{|\text{Pop}|} \]

where \( \text{Pop} \) is the current population and \( \text{Cov}(e) \) is the set of clauses of \( \text{Pop} \) that cover \( e \). If the population is empty, then every example has the same weight. The examples are then selected with a roulette-wheel mechanism, where the size of the sector associated with each example is inversely proportional to the weight of the example. So the fewer clauses cover an example, the higher its chances of being selected. The weights of the examples are updated at every iteration. Once the examples have been selected, the selection of the clauses is made as in the standard US selection operator.
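The weighted roulette over examples can be sketched as follows. One detail is left open above: within a non-empty population, an uncovered example has weight 0, so a strictly inverse-proportional sector would be infinite; the `eps` smoothing term below is an assumption of this sketch, chosen small so that uncovered examples get a very large but finite sector.

```python
import random


def select_example(examples, population, covers, rng=None, eps=1e-9):
    """Weighted US example selection (a sketch, not ECL's code).
    The sector of example e is inversely proportional to its weight
    |Cov(e)| / |Pop|; eps is an assumed smoothing term avoiding a
    division by zero for uncovered examples."""
    rng = rng or random.Random()
    if not population:                       # every example equally likely
        return rng.choice(examples)
    n = len(population)
    sectors = [1.0 / (sum(1 for cl in population if covers(cl, e)) / n + eps)
               for e in examples]
    # spin the roulette wheel
    r = rng.random() * sum(sectors)
    acc = 0.0
    for e, s in zip(examples, sectors):
        acc += s
        if acc >= r:
            return e
    return examples[-1]
```

With one example covered by every clause and another covered by none, the uncovered example dominates the wheel and is selected almost every time.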
With this mechanism, not only are uncovered examples favored, but also examples covered by few clauses, since they get a wider sector in the roulette wheel. Examples covered by many clauses are penalized, because they are easier to cover. In this way the system focuses, iteration after iteration, more and more on the examples that are harder to cover.

2.5 Mutation and Optimization

Mutation consists of the application of one of the four generalization/specialization operators, chosen as follows. First, a randomized test decides whether a generalization or a specialization operator will be applied. Next, one of the two operators of the chosen class is applied at random. The first test is based on the completeness and the consistency of the selected individual. If the individual is consistent with the training set, it is likely to be generalized; otherwise it is more likely to be specialized. The test decides to generalize a clause $cl$ with probability

$$p_{gen}(cl) = \frac{1}{2} \left( \frac{\text{pos} - \text{neg}}{\text{pos} + \text{neg}} + \alpha \right)$$

and to specialize it otherwise, where pos and neg denote the numbers of positive and negative examples covered by $cl$. The constant $\alpha$ is used to slightly bias the decision toward generalization. The probability $p_{gen}$ is maximal when $cl$ covers all positive and no negative examples, and minimal in the opposite case.

The optimization phase consists of repeated applications of the greedy operators to the selected individual, until either its fitness does not increase or a maximum number of iterations is reached. The system does not use any crossover operator. Experiments with a simple crossover operator, which uniformly swaps atoms between the bodies of two clauses, were conducted, but the results did not justify its use.
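The generalize-or-specialize test can be sketched as below. Two assumptions are made explicit in the code: $\alpha = 0.1$ is only an illustrative value (the paper does not report the constant), and the result is clamped to $[0, 1]$, since the printed formula can become negative for clauses covering mostly negative examples:

```python
import random

def p_generalize(pos, neg, alpha=0.1):
    """Probability of generalizing a clause covering `pos` positive and
    `neg` negative examples: 0.5 * ((pos - neg)/(pos + neg) + alpha).
    alpha=0.1 is an assumed value; the clamp to [0, 1] is an assumption."""
    if pos + neg == 0:
        return 0.5  # no coverage information: even choice (assumed default)
    raw = 0.5 * ((pos - neg) / (pos + neg) + alpha)
    return min(1.0, max(0.0, raw))

def choose_operator_class(pos, neg, alpha=0.1):
    """First randomized test of the mutation step."""
    return "generalize" if random.random() < p_generalize(pos, neg, alpha) else "specialize"
```

For a fully consistent clause (neg = 0) the probability is $0.5(1 + \alpha)$, slightly above one half, while a clause covering only negatives is (almost) always specialized.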
2.6 Hypothesis Extraction

The termination condition of the main while loop of ECL is met when either all positive examples are covered or a maximum number of iterations is reached. At that point a logic program for the target predicate is extracted from the final population. The theory has to cover as many positive examples as possible, and as few negative ones as possible (notice that at this stage the clauses have been "globally" evaluated, using the complete background knowledge). This problem can be translated into an instance of the weighted set covering problem as follows. Each element of the final population is a column with positive weight

$$\text{weight}_{cl} = \text{neg}_{cl} + 1$$

where $\text{neg}_{cl}$ is the number of negative examples covered by $cl$, and each covered positive example is a row; the columns relative to a positive example are the clauses that cover that example. In this way clauses covering few negative examples are favored. A fast heuristic algorithm [15] is applied to this problem instance to find a "best" theory.

3 Generalization/Specialization Operators

A clause $cl$ is generalized either by deleting an atom from its body or by replacing (all occurrences of) a constant with a variable. Dually, $cl$ is specialized either by adding an atom to its body or by replacing (all occurrences of) a variable with a constant. The four operators use four parameters $N_1, \ldots, N_4$, respectively, and a gain function. Applied to an operator $\tau$ and a clause $cl$, the gain function yields the difference between the clause fitness before and after the application of $\tau$:

$$\text{gain}(cl, \tau) = \text{fitness}(cl) - \text{fitness}(\tau(cl))$$

The four operators are defined below.

3.1 Atom Deletion

Consider a set $Atm$ of $N_1$ atoms of the body of $cl$, randomly chosen. For each $A$ in $Atm$, compute $\text{gain}(cl, -A)$, the gain of $cl$ when $A$ is deleted from its body.
Choose an atom $A$ yielding the highest gain $\text{gain}(cl, -A)$ (ties are broken randomly), and generalize $cl$ by deleting $A$ from its body. Insert the deleted atom $A$ into a list $D_{cl}$, associated with $cl$, containing the atoms that have been deleted from $cl$; atoms from this list may be added back to the clause during the evolutionary process by the atom addition operator.

3.2 Constant into Variable

Consider the set $Var$ of the variables present in $cl$ plus one new variable. Consider also a set $Con$ of $N_2$ constants of $cl$, randomly chosen. For each $a$ in $Con$ and each $X$ in $Var$, compute $\text{gain}(cl, \{a/X\})$, the gain of $cl$ when all occurrences of $a$ are replaced by $X$. Choose a substitution $\{a/X\}$ yielding the highest gain (ties are broken randomly), and generalize $cl$ by applying it.

3.3 Atom Addition

Consider a set $Atm$ consisting of $N_3$ atoms of $B_{cl}$ (the list built at clause construction time) and $N_3$ atoms of $D_{cl}$, all randomly chosen. For each $A$ in $Atm$, compute $\text{gain}(cl, +A)$, the gain of $cl$ when $A$ is added to its body. Choose an atom $A$ yielding the highest gain (ties are broken randomly), specialize $cl$ by adding $A$ to its body, and remove $A$ from its original list ($B_{cl}$ or $D_{cl}$).

3.4 Variable into Constant

Consider a set $Con$ of $N_4$ constants (of the problem language), randomly chosen, and a variable $X$ of $cl$, randomly chosen. For each $a$ in $Con$, compute $\text{gain}(cl, \{X/a\})$, the gain of $cl$ when all occurrences of $X$ are replaced by $a$. Choose a substitution $\{X/a\}$ yielding the highest gain (ties are broken randomly), and specialize $cl$ by replacing all occurrences of $X$ with $a$.

4 Experimental Evaluation

We evaluate ECL on three datasets: vote, credit, and mutagenesis.
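As an illustration of the greedy operators above, the 'constant into variable' step can be sketched in Python (a sketch under two assumptions: Prolog-style naming, where uppercase-initial terms are variables, and a user-supplied `fitness` function that is minimized, so a positive gain means the fitness decreased):

```python
import random

def is_var(t):
    # Prolog convention (assumed representation): variables start uppercase
    return t[:1].isupper()

def apply_subst(clause, old, new):
    """Replace all occurrences of term `old` by `new` in a clause,
    represented as a list of (predicate, args) tuples."""
    return [(p, tuple(new if a == old else a for a in args))
            for p, args in clause]

def constant_into_variable(clause, fitness, n2=2):
    """Greedy 'constant into variable': try replacing each of N2 randomly
    chosen constants by each variable of the clause (plus a fresh one),
    and keep the substitution with the highest gain
    gain = fitness(cl) - fitness(tau(cl))."""
    consts = list({a for _, args in clause for a in args if not is_var(a)})
    chosen = random.sample(consts, min(n2, len(consts)))
    vars_ = list({a for _, args in clause for a in args if is_var(a)})
    vars_.append("V_new")  # one fresh variable
    base = fitness(clause)
    best_gain, best = 0.0, clause
    for c in chosen:
        for v in vars_:
            cand = apply_subst(clause, c, v)
            gain = base - fitness(cand)  # positive gain = fitness decreased
            if gain > best_gain:
                best_gain, best = gain, cand
    return best
```

The three remaining operators follow the same pattern: enumerate a random sample of candidate edits, score each with the gain function, and apply the best one.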
All three are public domain datasets. The vote dataset contains the votes of each U.S. House of Representatives Congressman on sixteen key issues; the problem is learning a concept for distinguishing between democratic and republican congressmen. The dataset consists of 435 instances, of which 267 are examples of democrats and 168 of republicans. The credit dataset concerns credit card applications: the problem is learning when to grant a subject a credit card. There are 690 instances, of which 307 are positive and 383 are negative, each described by fifteen attributes. These first two datasets are taken from [4]. The mutagenesis dataset comes from the field of organic chemistry and concerns the problem of learning the mutagenic activity of nitroaromatic compounds. These compounds occur in automobile exhaust fumes and are also common intermediates in the synthesis of many thousands of industrial compounds [5]. Highly mutagenic nitroaromatics have been found to be carcinogenic [2]. The concept to learn is expressed by the predicate active(C), which states that compound $C$ has mutagenic activity. The dataset originates from [5]. The parameter settings used in the experiments are given in Table 1.
<table> <thead> <tr> <th></th> <th>Vote</th> <th>Credit</th> <th>Mutagenesis</th> </tr> </thead> <tbody> <tr> <td>pop_size</td> <td>80</td> <td>20</td> <td>50</td> </tr> <tr> <td>mut_rate</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>n</td> <td>10</td> <td>2</td> <td>7</td> </tr> <tr> <td>max_gen</td> <td>5</td> <td>30</td> <td>10</td> </tr> <tr> <td>max_iter</td> <td>2</td> <td>10</td> <td>10</td> </tr> <tr> <td>\( N_1, N_2, N_3, N_4 \)</td> <td>(3,7,2,5)</td> <td>(2,5,2,5)</td> <td>(4,8,2,8)</td> </tr> <tr> <td>\( p \)</td> <td>0.1</td> <td>0.2</td> <td>0.1</td> </tr> <tr> <td>\( l \)</td> <td>4</td> <td>4</td> <td>4</td> </tr> </tbody> </table>

Table 1: Parameter settings: pop_size = maximum size of the population, mut_rate = mutation rate, n = number of selected clauses, max_gen = maximum number of GA generations, max_iter = maximum number of iterations, \( N_1, \ldots, N_4 \) = parameters of the four greedy operators, \( p \) = parameter of the background knowledge selection, \( l \) = maximum length of a clause.

These values were obtained after a few experiments on the training sets, with the constraint that a run of ECL takes at most one hour. As expected, the values found depend on the specific dataset. Unfortunately, we were unable to find general rules explaining the choice of these parameters. This is in general a challenging problem [6], and we are currently investigating methods for the on-line adaptation of parameters.

The evaluation method used is ten-fold cross validation. Each dataset is divided into ten disjoint sets of similar size; one of these sets is used as the test set, and the union of the remaining nine forms the training set. ECL is then run on the training set and outputs a logic program, whose performance on new examples is assessed using the test set. This process is repeated ten times, each time with a different set as test set. The average of all the results is taken as the final evaluation measure for ECL.
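The evaluation loop can be sketched as follows (a plain, unstratified split is assumed, since the text only states that the data are divided into ten disjoint sets of similar size; `learn` and `evaluate` are placeholder callables):

```python
import random

def ten_fold_cv(examples, learn, evaluate, k=10, seed=0):
    """k-fold cross validation: shuffle, split into k disjoint folds of
    similar size, train on k-1 folds, test on the held-out fold, and
    average the k accuracies."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]  # disjoint, near-equal folds
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [e for j, f in enumerate(folds) if j != i for e in f]
        model = learn(train)
        accuracies.append(evaluate(model, test))
    return sum(accuracies) / k
```

Each example appears in exactly one test fold, so the average is an estimate of accuracy on unseen data.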
We consider three performance measures:

- efficiency: the running time of the algorithm on the training set for finding the logic program;
- simplicity: the number of clauses of the logic program;
- accuracy: the proportion of examples in the test set which are correctly classified by the resulting logic program.

Table 2: Accuracy results obtained using ten-fold cross validation. Standard deviation is given between brackets.

<table> <thead> <tr> <th>System</th> <th>Vote</th> <th>Credit</th> <th>Mutagenesis</th> </tr> </thead> <tbody> <tr> <td>G-NET</td> <td>0.95 (0.032)</td> <td>0.84 (0.044)</td> <td>0.91 (0.079)</td> </tr> <tr> <td>C4.5</td> <td>0.95 (0.030)</td> <td>0.86 (0.033)</td> <td>n.a.</td> </tr> <tr> <td>Progol</td> <td>-</td> <td>-</td> <td>0.80 (0.030)</td> </tr> <tr> <td>ECL</td> <td>0.94 (0.023)</td> <td>0.79 (0.072)</td> <td>0.87 (0.056)</td> </tr> </tbody> </table>

Table 2 compares the results obtained by ECL with those of three effective concept learners based on different approaches: C4.5 is based on decision trees, Progol employs a progressive coverage method, and G-NET is a distributed co-evolutionary genetic algorithm. The results for G-NET and C4.5 are taken from [1], while the result for Progol is taken from [23]. All the results were obtained using the same ten-fold cross validation. On the vote dataset the results obtained by ECL are comparable to those obtained with the other three systems; on the credit dataset they are worse than those of G-NET and C4.5. Table 3 presents the results obtained on the three datasets with the parameter \( p \) set to one, meaning that the whole background knowledge is used; the other parameters are those of Table 1. It can be seen that the results are not better than those shown in Table 2, especially on the mutagenesis dataset.
This is probably due to overfitting, which can occur when too much information about the problem is available. Moreover, as expected, the running time increases considerably when the whole background knowledge is used.

Table 3: Results obtained by ECL using the same parameters shown in Table 1, but with p set to 1, so that the whole background knowledge is used.

<table> <thead> <tr> <th>Dataset</th> <th>Accuracy</th> <th>Efficiency</th> <th>Simplicity</th> </tr> </thead> <tbody> <tr> <td>Vote</td> <td>0.94 (0.033)</td> <td>66 min</td> <td>5.89</td> </tr> <tr> <td>Credit</td> <td>0.71 (0.074)</td> <td>224 min</td> <td>41.1</td> </tr> <tr> <td>Mutagenesis</td> <td>0.81 (0.089)</td> <td>81 min</td> <td>16</td> </tr> </tbody> </table>

Table 5 reports results in which the US and the WUS selection operators are compared. It can be seen that the WUS selection operator improves the accuracy of the system on the vote and mutagenesis datasets; on the vote dataset in particular the difference is evident. On the credit dataset the WUS selection operator does not lead to any improvement. Even if the experiments do not indicate a dramatic benefit of the WUS selection operator over the US one, it does not affect the efficiency of the system, hence it can be used as an alternative selection mechanism.

5 Conclusion

In this paper we presented a concept learning algorithm based on evolutionary computation, which incorporates novel simple parametric mechanisms for controlling the cost of searching the hypothesis space and the cost of fitness evaluation. We also introduced a variant of the US selection operator, called the Weighted US selection operator.
The algorithm can profitably be used for efficiently exploring a new learning problem, to get a first rough idea of possible simple models of the target concept, or for experimenting with a range of different search strategies at the same time, including random search and hill climbing as the two extremes of the range, which can be obtained from ECL by setting the search bias parameters appropriately. The search biases of ECL assume a uniform distribution of the data used for selection, which does not reflect reality in many learning problems. We are currently investigating alternative stochastic sampling mechanisms for selecting the portion of background knowledge, which would take into account the estimated importance of each element (e.g., each fact of the background knowledge) according to some evaluation measure obtained from the examples in the training set.

References
Response time of BPEL4WS constructors

Serge Haddad, LSV, ENS de Cachan, 61 Avenue du Président Wilson, 94235 Cachan, France, serge.haddad@lsv.ens-cachan.fr
Lynda Mokdad, LACL, Université Paris 12, 61 Avenue du Général de Gaulle, 94010 Créteil, France, lynda.mokdad@univ-paris12.fr
Samir Youcef, Lamsade, Université Paris Dauphine, Place du Mal. de Lattre de Tassigny, 75775 cedex 16, France, samir.youcef@lamsade.dauphine.fr

Abstract—Response time is an important factor for every software system, and it becomes more salient when associated with novel technologies such as Web services. Most performance evaluations of Web services focus on composite Web services and their response time. One important limitation of existing work is that only constant or exponential service time distributions are considered. However, experimental results have shown that Web service response times are typically heavy-tailed, in particular when the services are heterogeneous, so heavy-tailed response times should be taken into account when dimensioning Web service platforms. In this study, we propose analytical formulas for the mean response times of structured BPEL constructors such as the sequence, flow and switch constructors. The difference with previous studies in the literature is that we consider heterogeneous servers, the number of invoked elementary Web services can be variable, and the elementary Web service response times are heavy-tailed.

Keywords: composite Web service, BPEL constructors, response time, heavy-tailed.

I. INTRODUCTION

Service-oriented computing uses services to support low-cost, flexible software. The underlying services are loosely coupled, thus allowing rapid change of such systems. Although a framework for defining the functional interfaces of Web services has been established, non-functional properties remain under development.
The Web services architecture is defined by the W3C (World Wide Web Consortium) in order to determine a common set of concepts and relationships that allow different implementations to work together. It consists of three entities: the service provider, the service registry and the service consumer. The service provider creates or simply offers the Web service. The provider needs to describe the Web service in a standard, XML-based format, WSDL (Web Service Description Language), and publish it in a central service registry, UDDI (Universal Description, Discovery and Integration). The service registry contains additional information about the service provider, such as the address and contact of the providing company, and technical details about the service. The service consumer retrieves this information from the registry and uses the service description obtained to bind to and invoke the Web service, using the SOAP (Simple Object Access Protocol) protocol.

Elementary Web services, as described by WSDL, are conceptually limited to relatively simple functionalities modeled through a collection of simple operations. However, for certain types of applications, it is necessary to combine a set of individual Web services to obtain more complex Web services, called composite or aggregated Web services. This is possible using the BPEL4WS (Business Process Execution Language for Web Services) standard, which results from the merger of previous languages such as WSFL (Web Services Flow Language) and XLANG (XML Business Process Language). One important issue within Web service composition is Quality of Service (QoS), which must be guaranteed to win client adhesion. Web service quality of service is a combination of several properties, which may include availability, security, response time, and reliability.
For this, quantitative methods are needed to understand, analyse and operate such large infrastructures. The goal of our research is to extend a recent study [1], in which we took into account different statistical characteristics for the services and a random number of invoked services, with Web service response times assumed exponential with different parameters, in contrast to the models presented by Menascé [2] and Sharf [3]. However, most existing work only considers constant or exponential service times, whereas, as shown in [15][5], measurements in the WWW and in e-commerce systems have observed heavy-tailed server response time distributions. In this study, we take into account the fact that Web service response times are typically heavy-tailed, e.g. Pareto distributed, which is attributed to the burstiness of arriving requests [15]. More precisely, the objective of this paper is to consider heavy-tailed response times in the dimensioning of Web service platforms.

The rest of the paper is structured as follows. Section II presents the related work. Section III details the different structured BPEL constructors. Section IV presents analytical formulas for the response time of these constructors. In Section V, we give the response time formula for the multi-choice pattern, which is a generalization of the switch constructor. Numerical results are given in Section VI. Finally, Section VII concludes and gives some perspectives on this work.

II. RELATED WORK

Major works in the domain of Web services performance concentrate on composite Web services and their response time. Several studies on the workload characterization of general Web servers found the response-time distribution to be heavy-tailed, which has been attributed to the heavy-tailed nature of request and response file sizes [15][6]. Nevertheless, most existing work only considers constant or exponential service time distributions.
Only a few studies have taken this result into account when computing composite Web service response times. The execution of a composite service has been studied as a fork-join model in [2], where Web service response times are supposed exponential with the same parameters, except one service which is slower than the others. This model considers a single Internet application that invokes many different Web services in parallel and gathers the responses from all these launched services in order to return the results to a client. Sharf [3] studies the response time of a centralized middleware component performing large-scale composition of Web services. This work is similar to the first study [2], which analyzes the effects of exponential response times; it is more oriented towards studying the fork-join model in order to understand the merging of results from various servers. More recently, [26] proposed how service providers can be optimally allocated to support the activities of a business process whose topology can include any combination of BPEL constructors. However, the authors only propose a general formula for a given composite Web service, without giving the exact result when the service times of the elementary Web services are known. The exact response time of a fork-join system, under some hypotheses, can be found in [7]; however, that work assumes that the number of servers is equal to two, that jobs arrive according to a Poisson process, and that tasks have exponential service time distributions. Nelson and Tantawi [8] proposed an approximation for the case of two or more homogeneous exponential servers. A more general case is presented in [9][10], where the arrival and service processes are general; upper and lower bounds are obtained by considering $G/G/1$ and $D/G/1$ parallel queueing systems, respectively. Klingemann et al.
[11] use a continuous-time Markov chain to estimate the execution response time and the cost of a workflow. In [11], the authors propose an algorithm which determines the QoS of a Web service composition by aggregating the QoS dimensions of the individual services, based on a collection of workflow patterns defined by van der Aalst et al. [12], where Web service response times are supposed constant. These QoS dimensions include upper and lower bounds on execution time as well as throughput. In [13], we studied the end-to-end response time of composite Web services, representing Internet overhead as a factor in the execution model, using simulation. In contrast to these previous studies, where the servers are homogeneous, their number is constant and their response times are supposed exponential, the aim of this paper is to overcome these limitations. Thus, we propose analytical formulas for the mean response time of composite Web services assuming that the servers are heterogeneous and that the number of invoked elementary Web services can be variable.

III. BPEL CONSTRUCTORS

The Business Process Execution Language for Web Services (BPEL4WS) has been built on IBM's WSFL (Web Services Flow Language) and Microsoft's XLANG (Web Services for Business Process Design) and accordingly combines the features of a block-structured language inherited from XLANG with those for directed graphs originating from WSFL [14]. BPEL is used to model the behavior of both executable and abstract processes.

- An abstract process is not executable; it is a business protocol that uses process descriptions specifying the mutually visible message exchange behavior of each party involved in the protocol, without revealing their internal behavior.
- An executable process specifies the execution order of the activities constituting the process, the partners involved in the process, the messages exchanged between these partners, and the fault and exception handling specifying the behavior in case of errors and exceptions.

In a BPEL process, each element is called an activity, which can be primitive or structured. The activities in \{ invoke, receive, reply, wait, assign, throw, terminate, empty \} are primitive, and those in \{ sequence, switch, while, pick, flow, scope \} are structured. In this paper, we are interested in the sequence, flow and switch activities, also called constructors. In the following, we give analytical formulas to evaluate the response time of each considered constructor.

IV. RESPONSE TIMES OF STRUCTURED BPEL CONSTRUCTORS

In this section, we give analytical formulas for the mean response times of structured BPEL constructors. We consider that the execution time of each elementary Web service $s_i$ of a composite Web service $S$ is heavy-tailed, and that the number of invoked elementary services can be variable. The Pareto cumulative distribution function is given by:

$$F(t) = \begin{cases} 0 & t \leq k \\ 1 - \left(\frac{k}{t}\right)^\alpha & t > k \end{cases}$$

which has infinite variance for $\alpha < 2$ and is thus heavy-tailed. We consider in the following the control patterns supported by the BPEL standard, namely: sequence, parallel split (flow), exclusive choice (switch), and multi-choice. This last pattern is not directly supported by BPEL, but it can be implemented using the control links inherited from WSFL.

A. Computation for the sequence constructor

The sequence constructor corresponds to the sequential execution of the elementary Web services $s_1$ to $s_n$.
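The Pareto distribution used throughout this section can be captured in a few lines (a sketch; the CDF is $F(t) = 1 - (k/t)^\alpha$ for $t > k$, and the mean $\alpha k/(\alpha - 1)$ is finite only for $\alpha > 1$):

```python
def pareto_cdf(t, alpha, k):
    """CDF of the Pareto distribution with shape alpha and scale k:
    F(t) = 1 - (k/t)**alpha for t > k, and 0 otherwise."""
    return 0.0 if t <= k else 1.0 - (k / t) ** alpha

def pareto_mean(alpha, k):
    """E[T] = alpha * k / (alpha - 1); infinite for alpha <= 1."""
    if alpha <= 1:
        return float('inf')
    return alpha * k / (alpha - 1)
```

For $1 < \alpha < 2$ the mean exists but the variance is infinite, which is the heavy-tailed regime considered in this section.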
The analytical formula for the mean response time $E(T_{sequence})$ is given by the following proposition:

**Proposition 1:** Whatever the distribution of the elementary Web service execution times $T_i, i \in \{1..n\}$, the mean response time of the composite Web service $S$ is given by:

$$E(T_{sequence}) = \sum_{i=1}^{n} E(T_i)$$ (2)

**Proof:** The execution time of the composite Web service $S$ composed of $n$ elementary Web services is given by:

$$T_{sequence} = \sum_{i=1}^{n} T_i$$

and equation (2) follows directly by linearity of expectation.

Case of homogeneous servers. In the case where the $T_i, i \in \{1,...,n\}$, are random variables with a common Pareto distribution of parameters $(\alpha, k)$, the mean response time of the composite Web service $S$ is immediate:

$$E(T_{sequence}) = n \frac{k\alpha}{\alpha - 1}$$

Case of heterogeneous servers. As noted before, we overcome a limitation of earlier studies by considering heterogeneous servers. We consider that the execution times of $k$ elementary services follow a Pareto distribution with parameters $(\alpha_1, k_1)$ and the execution times of the remaining $n - k$ services follow a Pareto distribution with parameters $(\alpha_2, k_2)$. The mean response time of the composite Web service $S$ is then:

$$E(T_{sequence}) = \frac{k_1\alpha_1}{\alpha_1 - 1} k + \frac{k_2\alpha_2}{\alpha_2 - 1} (n - k)$$

B. Computation for the flow constructor

One of the most important benefits of the component approach is interoperability. The inherent interoperability that comes with using vendor-, platform- and language-independent XML technologies and the ubiquitous HTTP as a transport means that any application can communicate with any other application using Web services. The client only requires the WSDL definition to exchange messages with the service. However, in the WSDL language, elementary Web services are conceptually limited to relatively simple operations.
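The sequence formulas above are straightforward to encode. The following sketch (our own, with hypothetical function names) computes the general and two-class heterogeneous sequence means:

```python
def pareto_mean(alpha, k):
    """Mean of a Pareto(alpha, k) service time; finite for alpha > 1."""
    return k * alpha / (alpha - 1.0)

def sequence_mean(params):
    """E(T_sequence) = sum of the individual means (linearity of expectation)."""
    return sum(pareto_mean(a, k) for a, k in params)

def sequence_mean_two_classes(n, k, a1, k1, a2, k2):
    """k services with parameters (a1, k1), the remaining n - k with (a2, k2)."""
    return k * pareto_mean(a1, k1) + (n - k) * pareto_mean(a2, k2)

# Example: two services with mean 2 and one with mean 3 give E(T_sequence) = 7.
print(sequence_mean([(2.0, 1.0), (2.0, 1.0), (3.0, 2.0)]))  # 7.0
```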
In fact, for certain types of applications, it is necessary to combine a set of elementary Web services into a composite Web service. These services are generally invoked in parallel, using the flow constructor. In this section, we therefore focus on the mean response time of a composite Web service $S$ composed of $n$ elementary services invoked in parallel. In [2], the author gives an analytical formula for the response time of the flow constructor, but he supposes that $n$ is fixed and that the elementary Web services have exponentially distributed service times. Our contribution is to consider that $n$ is random and that the Web services are heterogeneous. In the following, we give an analytical expression for the mean response time:

$$E(T_{flow}) = \sum_{i=1}^{n} \int_{0}^{\infty} t f_i(t) \prod_{j \neq i} F_j(t) dt$$ (3)

where:

$$T_{flow} = \text{Max}\{T_i, i = 1, ..., n\}$$

As we assume that the random variables $T_i$ are independent, the cumulative distribution function of the random variable $T_{flow}$ is given by:

$$F_{T_{flow}}(t) = P(T_{flow} \leq t) = \prod_{i=1}^{n} F_i(t)$$

Thus the probability density of $T_{flow}$ is:

$$f_{T_{flow}}(t) = \sum_{i=1}^{n} f_i(t) \prod_{j \neq i} F_j(t)$$ (4)

and $E(T_{flow})$ can then be easily derived.

Case of Pareto distributions. We give in the following the analytical formula for the mean response time when the random variables $T_i, i \in \{1,...,n\}$, are Pareto distributed with parameters $(\alpha_i, k_i)$:

$$E(T_{flow}) = \sum_{i=1}^{n} \alpha_i k_i^{\alpha_i} \sum_{X \in \mathcal{P}(E_n \setminus \{i\})} (-1)^{|X|} \Big(\prod_{j \in X} k_j^{\alpha_j}\Big) \frac{\beta^{\,1 - \alpha_i - \sum_{j \in X} \alpha_j}}{\alpha_i + \sum_{j \in X} \alpha_j - 1}$$ (5)

where:

$$\beta = \text{max}\{k_i, i \in \{1..n\}\}, \quad E_n = \{1,...,n\}$$

and $\mathcal{P}(E_n \setminus \{i\})$ is the set of all subsets of $E_n \setminus \{i\}$.
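The closed form above can be cross-checked numerically. The sketch below (ours, not from the paper) estimates $E(T_{flow}) = E(\max_i T_i)$ by Monte Carlo for arbitrary heterogeneous Pareto parameters:

```python
import random

def sample_pareto(alpha, k, rng):
    # Inverse transform of F(t) = 1 - (k/t)^alpha for t > k.
    return k / (1.0 - rng.random()) ** (1.0 / alpha)

def flow_mean_mc(params, trials=100_000, seed=0):
    """Monte Carlo estimate of E(max T_i) over independent Pareto services.

    params is a list of (alpha_i, k_i) pairs, one per elementary service.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        acc += max(sample_pareto(a, k, rng) for a, k in params)
    return acc / trials
```

For two i.i.d. Pareto(3, 1) services the exact value is $1 + \int_1^\infty (2t^{-3} - t^{-6})\,dt = 1.8$, which the estimator reproduces to within a few per mille at 100,000 trials.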
**Proof:** From equation (4), the probability density of the random variable $T_{flow}$ is given by:

$$f_{T_{flow}}(t) = \begin{cases} 0 & \text{if } t \leq \text{max}\{k_i, i = 1...n\} \\ \sum_{i=1}^{n} \frac{\alpha_i k_i^{\alpha_i}}{t^{\alpha_i + 1}} \prod_{j \neq i} \left(1 - \left(\frac{k_j}{t}\right)^{\alpha_j}\right) & \text{else.} \end{cases}$$

As we have:

$$\prod_{j \neq i} \left(1 - \left(\frac{k_j}{t}\right)^{\alpha_j}\right) = \sum_{X \in \mathcal{P}(E_n \setminus \{i\})} (-1)^{|X|} \prod_{j \in X} \left(\frac{k_j}{t}\right)^{\alpha_j}$$

the mean response time is:

$$E(T_{flow}) = \sum_{i=1}^{n} \alpha_i k_i^{\alpha_i} \sum_{X \in \mathcal{P}(E_n \setminus \{i\})} (-1)^{|X|} \Big(\prod_{j \in X} k_j^{\alpha_j}\Big) \int_{\beta}^{\infty} t^{-\alpha_i - \sum_{j \in X} \alpha_j} dt$$

As we have:

$$\int_{\beta}^{\infty} t^{-\alpha_i - \sum_{j \in X} \alpha_j} dt = \frac{\beta^{\,1 - \alpha_i - \sum_{j \in X} \alpha_j}}{\alpha_i + \sum_{j \in X} \alpha_j - 1}$$

we obtain the formula (5) for the mean response time of the composite Web service $S$.

**Case of homogeneous servers.** In the case where all elementary service times are Pareto distributed with the same parameters, i.e. $(\alpha_i, k_i) = (\alpha, k)$ for all $i \in \{1, ..., n\}$, formula (5) simplifies (with $\beta = k$) to:

\[ E(T_{\text{flow}}) = n\alpha k \sum_{m=0}^{n-1} (-1)^m \frac{C_{n-1}^m}{(m+1)\alpha - 1} \tag{6} \]

where:

\[ C_{n-1}^m = \frac{(n-1)!}{m!(n-1-m)!} \]

**Case of heterogeneous servers.** We now consider the case where $n - k$ elementary service times follow a Pareto distribution with parameters $(\alpha_1, k_1)$ and $k$ elementary service times follow a Pareto distribution with parameters $(\alpha_2, k_2)$.
Let $g$ be a slowdown factor such that $\frac{k_2\alpha_2}{\alpha_2 - 1} = g \frac{k_1\alpha_1}{\alpha_1 - 1}$, i.e. the mean response time of a class-2 service is $g$ times that of a class-1 service. With these assumptions, the mean response time of $S$ is as follows:

\[ E(T_{\text{flow}}) = R_1 + R_2 \tag{7} \]

\[ R_1 = (n-k)\alpha_1 k_1^{\alpha_1} \sum_{m=0}^{n-1} \sum_{j=0}^{m} (-1)^m C_{n-k-1}^{j} C_{k}^{m-j} k_1^{j\alpha_1} k_2^{(m-j)\alpha_2} \frac{\beta^{\,1-(j+1)\alpha_1-(m-j)\alpha_2}}{(j+1)\alpha_1 + (m-j)\alpha_2 - 1} \]

\[ R_2 = k\alpha_2 k_2^{\alpha_2} \sum_{m=0}^{n-1} \sum_{j=0}^{m} (-1)^m C_{k-1}^{j} C_{n-k}^{m-j} k_2^{j\alpha_2} k_1^{(m-j)\alpha_1} \frac{\beta^{\,1-(j+1)\alpha_2-(m-j)\alpha_1}}{(j+1)\alpha_2 + (m-j)\alpha_1 - 1} \]

Equation (7) is derived from equation (5) by considering that $(\alpha_i, k_i) = (\alpha_1, k_1)$ for all $i \in \{1, ..., n - k\}$ and $(\alpha_i, k_i) = (\alpha_2, k_2)$ for all $i \in \{n - k + 1, ..., n\}$.

**C. Computation for the switch constructor**

In this case, exactly one of $n$ elementary Web services is chosen. Let $P(Y = i)$ be the invocation probability of elementary Web service $i$, with $\sum_{i=1}^{n} P(Y = i) = 1$. The mean response time of the switch constructor is then given by the following analytical formula:

\[ E(T_{\text{switch}}) = \sum_{i=1}^{n} P(Y = i) E(T_i) \tag{8} \]

with $E(T_i)$ the mean response time of service $i$.

**Proof:** We first compute the probability density of the random variable $T_{\text{switch}}$. The cumulative distribution function of $T_{\text{switch}}$ is defined as $F_{T_{\text{switch}}}(t) = P(T_{\text{switch}} \leq t)$. According to the total probability theorem, we have:

\[ F_{T_{\text{switch}}}(t) = \sum_{i=1}^{n} P(T_{\text{switch}} \leq t \mid Y = i) P(Y = i) \]

Thus, the probability density function of the random variable $T_{\text{switch}}$ is given by:

\[ f_{T_{\text{switch}}}(t) = \sum_{i=1}^{n} f_{T_i}(t) P(Y = i) \]

The definition of the expectation of $T_{\text{switch}}$ then yields the result given in equation (8).
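The switch mean is a probability-weighted average and is trivial to encode; a small sketch (ours, with hypothetical names):

```python
def switch_mean(probs, means):
    """E(T_switch) = sum_i P(Y = i) * E(T_i); probs must sum to one."""
    assert abs(sum(probs) - 1.0) < 1e-9, "invocation probabilities must sum to 1"
    return sum(p * m for p, m in zip(probs, means))

# A branch taken half the time with mean 2 and half the time with mean 4:
print(switch_mean([0.5, 0.5], [2.0, 4.0]))  # 3.0
```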
**Case of Pareto distributions.** Since in this paper each elementary service time is Pareto distributed, the formula for the mean response time becomes:

\[ E(T_{\text{switch}}) = \sum_{i=1}^{n} \frac{\alpha_i k_i}{\alpha_i - 1} P(Y = i) \]

**Case of heterogeneous servers.** As for the previous pattern, we give the mean response time for the case where the execution times of the elementary services are not identically distributed:

\[ E(T_{\text{switch}}) = \sum_{i=1}^{n-k} P(Y = i) \frac{\alpha_1 k_1}{\alpha_1 - 1} + \sum_{i=n-k+1}^{n} P(Y = i) \frac{\alpha_2 k_2}{\alpha_2 - 1} \]

In the next section, we are interested in the multi-choice pattern, which is not supported directly by BPEL but can be implemented using the control links inherited from WSFL.

V. COMPUTATION FOR THE multi-choice PATTERN

In contrast with the previous pattern, where only one Web service is chosen, the multi-choice pattern allows the invocation of a subset of elementary services among the $n$ possible ones. Take for example a flight-booking service that operates as follows: the Web services invoked depend on two criteria, namely the city of departure and the destination. According to these cities, the agencies providing this trip are invoked in parallel. The number of invoked services is therefore random. Let $N$ be the random variable giving the number of invoked services and $P(N = i)$ the probability that the number of invoked services equals $i$, with $n$ the maximum number of invoked services. In this case, the mean response time of the composite Web service $S$ is given by the following formula:

\[ E(T_{\text{multichoice}}) = \sum_{i=1}^{n} P(N = i) E(T_{S_i}) \tag{11} \]

where $E(T_{S_i})$ is the mean response time of the composite Web service $S$ when $i$ elementary services are invoked.

**Proof:** First, we give the cumulative distribution function $F_{T_{\text{multichoice}}}(t)$ of the random variable $T_{\text{multichoice}}$.
$F_{T_{\text{multichoice}}}(t) = P(T_{\text{multichoice}} \leq t)$. From the total probability theorem, we obtain:

\[ F_{T_{\text{multichoice}}}(t) = P\Big(\bigcup_{i=1}^{n} \{T_{\text{multichoice}} \leq t \land N = i\}\Big) \]

The events $(N = i, i \in \{1, ..., n\})$ are mutually exclusive, so:

\[ F_{T_{\text{multichoice}}}(t) = \sum_{i=1}^{n} P(T_{\text{multichoice}} \leq t \land N = i) \]

thus,

\[ F_{T_{\text{multichoice}}}(t) = \sum_{i=1}^{n} P(T_{\text{multichoice}} \leq t \mid N = i)P(N = i) \]

Since, given $N = i$, the $i$ invoked services run in parallel, the cumulative distribution function of $T_{\text{multichoice}}$ is:

\[ F_{T_{\text{multichoice}}}(t) = \sum_{i=1}^{n} F_{T_{\text{flow}}(i)}(t) P(N = i) \]

We can then derive the probability density $f_{T_{\text{multichoice}}}$ of $T_{\text{multichoice}}$:

\[ f_{T_{\text{multichoice}}}(t) = \sum_{i=1}^{n} f_{T_{\text{flow}}(i)}(t) P(N = i) \]

**Case of homogeneous servers.** We consider the case where the elementary service execution times are Pareto distributed with parameters $(\alpha, k)$ and each elementary service $s_i$ is invoked with probability $p$, so that $N$ is binomially distributed. The mean response time of the composite Web service $S$ can then be easily derived from equation (11):

\[ E(T_{\text{multichoice}}) = \sum_{i=1}^{n} C_n^i p^i (1 - p)^{n-i} \gamma(i) \tag{12} \]

where $\gamma(i)$ is the mean response time of $i$ services invoked in parallel:

\[ \gamma(i) = i \alpha k \sum_{m=0}^{i-1} (-1)^m \frac{C_{i-1}^m}{(m+1)\alpha - 1} \]

**Case of heterogeneous servers.** We also give the analytical formula for the composite Web service response time when we consider two classes of elementary services; the execution time distribution within each class is the same. $N_1$ (resp. $N_2$) is the random variable giving the number of invoked elementary services of class 1 (resp. class 2).
The mean response time formula is also derived from equation (11) and is given by:

\[ E(T_{\text{multichoice}}) = \sum_{i=1}^{n} P(N_1 = i) \sum_{j=0}^{k} E(T_{\text{multichoice}}(i, j))P(N_2 = j \mid N_1 = i) \tag{13} \]

where $E(T_{\text{multichoice}}(i, j))$ is the mean response time when $i$ class-1 services and $j$ class-2 services are invoked in parallel.

![Fig. 1. Response times for composite Web service versus slowdown factor \(g\)](image)

VI. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section, we present some numerical computations and the results we have obtained. When two classes of services are considered, we first define a heterogeneity coefficient, noted $g$, such that $\frac{k_2\alpha_2}{\alpha_2 - 1} = g \frac{k_1\alpha_1}{\alpha_1 - 1}$ (the two sides being the mean response times of elementary Web services of class two and class one, respectively). Clearly, if $g = 1$, then all elementary Web services belong to the same class (i.e. the elementary Web services are homogeneous), while $g > 1$ means that Web services of the second class are slower than those of the first class. For simplicity, we assume that the invocation probability is $p$ for all elementary Web services. When $g = 1$, the synchronization time is the same for any value of the number of second-class elementary Web services, denoted $N^2$. In figure 1, we give the response times obtained by varying the slowdown factor $g$ for different values of the number of second-class elementary services, namely $N^2 = 20, N^2 = 60, N^2 = 80$ and $N^2 = 100$. In figure 2, we give the response times obtained by varying the number of second-class elementary services for $g = 2, g = 3, g = 4$ and $g = 5$. From figure 1, we can conclude two things. First, for any value of $N^2$, the synchronization response time increases linearly with the heterogeneity coefficient $g$. Second, when $g = 1$ the response time of the composite Web service is the same for any number of second-class elementary Web services.
From figure 2, we can notice that the waiting time increases logarithmically with the invocation probability $p$. It is also clear that the response time increases logarithmically with the number of invoked Web services (see figure 3). We can therefore conclude that the choice of elementary Web services must be made on their physical characteristics and not on their number. Figure 4 shows the evolution of $\frac{T_{\text{exp}}}{T_{\text{par}}}$, where $T_{\text{exp}}$ and $T_{\text{par}}$ are the response times of a composite Web service when the response times of the elementary Web services are respectively exponential and heavy-tailed. The results shown in this figure, for different values of the elementary Web service response times, reveal that the selection of elementary Web services must be more restrictive in the exponential case when their number is large.

**VII. CONCLUSION**

Web services are based on a set of standards and protocols that allow us to send processing requests to remote systems by exchanging messages in a common language and using common transport protocols. Once deployed, Web services can be combined (or inter-connected) in order to implement business collaborations, leading to composite Web services. With the proliferation of Web services as a business solution to enterprise application integration, the quality of service offered by Web services is becoming the utmost priority for service providers and their partners. QoS is defined as a combination of the different attributes of a Web service, such as availability, response time and throughput. In this paper, we have focused on the response time of composite Web services. We have proposed analytical formulas for the mean response time of the different control patterns supported by the BPEL standard. We have studied the Pareto distribution, a choice justified by the fact that experimental studies have shown that Web service response times are typically heavy-tailed.
However, the methodology can be applied to other service response time distributions. As future work, we plan to consider the dynamic composition of Web services and to derive the corresponding analytical formulas for the BPEL constructors.

**REFERENCES**
TRANSACTION SERVICE COMPOSITION: A Study of Compatibility Related Issues

Anna-Brith Arntsen and Randi Karlsen, Computer Science Department, University of Tromsoe, 9037 Tromsoe, Norway

Keywords: Flexible transaction processing environment, Dynamic service composition, Compatibility, Integration.

Abstract: Different application domains have varying transactional requirements. Such requirements must be met by applying adaptability and flexibility within transaction processing environments. ReflecTS is such an environment, providing flexible transaction processing by exposing the ability to select and dynamically compose a transaction service suitable for each particular transaction execution. A transaction service (TS) can be seen as a composition of a transaction manager (TM) and a number of involved resource managers (RMs). Dynamic transaction service composition raises a need to examine issues regarding Vertical Compatibility between the components in a TS. In this work, we present a novel approach to service composition by evaluating Vertical Compatibility between a TM and RMs, which includes Property and Communication compatibility.

1 INTRODUCTION

New application domains and execution environments have transactional requirements that may exceed the traditional ACID properties. Such domains, including workflow, cooperative work, medical information systems and e-commerce, are constantly evolving and possess varying and non-ACID requirements. The travel arrangement scenario is a well-known example of a long-running transaction with requirements that go beyond the ACID properties. Such a transaction consists of a number of subtasks (booking flights, hotel rooms, theater tickets, etc.), possibly with adjacent contingent transactions, and the subtasks are of dissimilar importance (vital vs. non-vital). Resources cannot be locked for the entire duration of the transaction.
So, to increase performance and concurrency, this transaction must be structured as a non-ACID transaction with relaxed (i.e. semantic) atomicity, based on the use of compensating activities in case of failure. Varying transactional requirements demand a flexible transaction execution environment. Such requirements are not met by current transaction processing solutions, where merely ACID transactions are supported. Thus, there is a gap between offered and required support for varying transactional requirements. A number of advanced transaction models (Elmagarmid, 1992; Garcia-Molina and Salem, 1987) have been proposed to meet different transactional requirements. Many advanced models were suggested with specific applications in mind, and with fixed transactional semantics and correctness criteria. Consequently, they do not provide sufficient support for wide areas of applications. The characteristics of the proposed transaction models support our conviction that the "one-size fits all" paradigm is not sufficient and that a single approach to extended transaction execution will not suit all applications. To close the gap between offered and required support for varying requirements, we designed the flexible transaction processing platform ReflecTS (Arntsen and Karlsen, 2005). ReflecTS is a highly adaptable platform offering an extensible number of concurrently running transaction services, where each service supports different transactional guarantees. Generally, a transaction service (TS) can be viewed as a composition of a transaction manager (TM) and a number of resource managers (RMs), one for each involved source. Today's systems keep mainly one TM, giving a predefined and static TS composition. ReflecTS, on the other hand, exposes the ability to dynamically select a TM and subsequently compose a TS suiting particular transactional requirements. Dynamic service composition raises a need to evaluate Vertical Compatibility between each TM - RM pair.
This must be done both with respect to Property and Communication compatibility. The goal of this paper is to investigate these issues, with a particular focus on Property compatibility and problems related to the integration of heterogeneous commit and recovery protocols. In the remainder of this paper we first, in section 2, present the architecture of ReflecTS. Section 3 presents the service composition procedure and compatibility related issues. Section 4 follows with related work, and section 5 draws conclusions and presents future work.

2 REFLECTS

ReflecTS (Arntsen and Karlsen, 2005) is a flexible and adaptable transaction processing system suiting varying transactional requirements by providing an extensible number of transaction managers (TMs). The main functionalities of ReflecTS are transaction manager (TM) selection, transaction service (TS) composition and transaction activation. We present the architecture of ReflecTS and the specifications involved in TS composition and compatibility evaluation.

2.1 Architecture

ReflecTS, shown in Figure 1, is a composition of components. TSInstall handles requests for TM configurations and reconfigurations, and TSActivate handles requests for transaction executions. The TM-Framework hosts the TM implementations, and the InfoBase keeps TM and resource manager (RM) descriptors and results from the compatibility evaluation procedures. The IReflecTS interface (Arntsen and Karlsen, 2005) defines the interaction between the application program (AP) and ReflecTS, and is called to demarcate global transactions and to control the direction of their completion. Its design has been influenced by the TX-interface defined in the X/Open Distributed Transaction Processing (DTP) model (Group, 1996), but differs from it to conform to varying transactional requirements. This is, among other things, done by including the transactional requirements and information about requested RMs in the TransBegin() request.
The interaction between a TM and a RM is generally determined by the X/Open standard and the XA-interface (Group, 1996). An application initiating TransBegin() embeds an XML document describing the transactional requirements and a list of requested RM identifications. Based on the transactional requirements and the descriptors of the available TMs, a suitable TM is selected for the transaction execution. The mapping of requirements to a TM need not be one-to-one. A specific set of requirements can be mapped to different TMs, in which case a list is stored in the system for use if incompatibility arises. Subsequently, the TM is composed together with the required RMs into a TS. This TS is responsible for coordinating the execution of the particular transaction while preserving the requested transactional requirements. TSActivate performs TS composition based on the descriptor of the selected TM, TM_Descriptor, and the descriptors associated with the involved RMs, RM_Descriptors. For each pair of TM and RM, compatibility is evaluated. When this compatibility, which includes Property and Communication compatibility, is fulfilled between the involved parties, composition takes place, and eventually the transaction is started. Transaction activation presupposes successful evaluation of Horizontal Compatibility, which is compatibility between concurrently active transaction services. This is part of future work.

2.2 Transaction Service Specifications

ReflecTS introduces two specifications sustaining service composition and compatibility evaluation: the TM_Descriptor and the RM_Descriptor.

2.2.1 TM_Descriptor

The TM_Descriptor describes a TM and includes information about: 1) a TM_ID, 2) transactional properties (ACID or non-ACID), 3) transactional mechanisms (commit/recovery, global concurrency control), and 4) compatibility with a standard (i.e. XA) or not.
The following is an example of a TM with ACID guarantees running a 2PC protocol with presumed abort, supporting the XA-interface:

```xml
<TM_Descriptor>
  <ServiceID>TM_ID</ServiceID>
  <Properties>ACID</Properties>
  <Standard>XA</Standard>
  <TaskList>
    <Task>
      <TaskId>2PC</TaskId>
      <TaskParameters>
        <Parameter>PrA</Parameter>
      </TaskParameters>
    </Task>
  </TaskList>
</TM_Descriptor>
```

2.2.2 RM_Descriptor

A RM_Descriptor holds information about a registered RM and includes the following: 1) a resource identification RM_ID, 2) a ResourceID with the DNS name of the resource, 3) whether the RM is XA-compliant or not, and 4) information about the RM's transactional mechanisms. The specification of ResourceID corresponds to the content of the RMlist following the Start_Trans() request. An example of an XA-compatible RM running a two-phase commit protocol with presumed abort (PrA) follows.

```xml
<RM_Descriptor>
  <RM_ID>Resource_ID</RM_ID>
  <ResourceID>mypc.mydom.edu</ResourceID>
  <Standard>XA</Standard>
  <TaskList>
    <Task>
      <TaskId>commit</TaskId>
      <TaskParameters>
        <Parameter>PrA</Parameter>
      </TaskParameters>
    </Task>
  </TaskList>
</RM_Descriptor>
```

3 SERVICE COMPOSITION AND COMPATIBILITY

3.1 Introduction

A transaction service TS is a composition of a transaction manager TM and a set of $n$ compatible resource managers: $TS = (TM, RMs)$ where $RMs = \{RM_1, ..., RM_n\}$. To compose a TS, Vertical Compatibility must be successfully evaluated with respect to the following:

- Each pair of TM and RM must match with respect to local and global transaction control to assure the requested transactional requirements. This we refer to as Property compatibility.
- Each pair of TM and RM must be able to communicate through some common interface while assuring the requested transactional requirements. This we refer to as Communication compatibility.

Results from these evaluations are kept in the InfoBase. This means that it is unnecessary to repeatedly evaluate the same pair of TM and RM.
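To make the descriptor matching concrete, the following sketch (our own simplification, not part of ReflecTS) parses the two example descriptors above with Python's xml.etree and applies a naive rule: Communication compatibility requires a common Standard element, and Property compatibility is approximated by overlapping commit-protocol parameters. Both the function names and the matching rule are our assumptions.

```python
import xml.etree.ElementTree as ET

TM_XML = """<TM_Descriptor>
  <ServiceID>TM_ID</ServiceID><Properties>ACID</Properties>
  <Standard>XA</Standard>
  <TaskList><Task><TaskId>2PC</TaskId>
    <TaskParameters><Parameter>PrA</Parameter></TaskParameters>
  </Task></TaskList>
</TM_Descriptor>"""

RM_XML = """<RM_Descriptor>
  <RM_ID>Resource_ID</RM_ID><ResourceID>mypc.mydom.edu</ResourceID>
  <Standard>XA</Standard>
  <TaskList><Task><TaskId>commit</TaskId>
    <TaskParameters><Parameter>PrA</Parameter></TaskParameters>
  </Task></TaskList>
</RM_Descriptor>"""

def protocol_info(descriptor_xml):
    """Extract the interface standard and protocol parameters of a descriptor."""
    root = ET.fromstring(descriptor_xml)
    return {
        "standard": root.findtext("Standard"),
        "params": {p.text for p in root.iter("Parameter")},
    }

def vertically_compatible(tm_xml, rm_xml):
    tm, rm = protocol_info(tm_xml), protocol_info(rm_xml)
    # Communication compatibility: both sides speak the same interface standard.
    # Property compatibility (simplified): commit-protocol variants overlap.
    return tm["standard"] == rm["standard"] and bool(tm["params"] & rm["params"])

print(vertically_compatible(TM_XML, RM_XML))  # True for this XA/XA, PrA/PrA pair
```

A real evaluation would also have to apply protocol-integration rules such as the PrA/PrC incompatibilities discussed in section 3.3.1, rather than simple parameter overlap.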
3.2 Composition Procedure

This section presents the service composition procedure, TS_Composition(), which follows TM selection. Code regarding selection, service composition and incompatibility handling is presented below. $R_T$ represents the transactional requirements of a transaction $T$, and RMlist the list of RMs requested by $T$. The procedure SelectTM() takes as input the TM_Descriptors of the deployed TMs and the transactional requirements $R_T$, and returns a list of TMs (TMlist) assuring $R_T$. In the case of an unsuccessful composition, another TM may be selected and TS_Composition() restarted. This sequence is repeated either until the composition completes or there are no TMs left in the list. If the composition does not complete, incompatibility is managed by the ResolveIncomp() procedure (see 3.5). ResolveIncomp() takes as input a list of compatible RMs (Complist), returned from the TS_Composition() procedure. If there is no solution to the incompatibility problem, the procedure stops.

```plaintext
TMlist = SelectTM(TM_Descriptor[], TM, R_T)
while TS not composed {
    if TS_Composition(TM, RMlist, R_T, Complist)
        Compose and return TS
    else {
        if More TMs available
            TM = ChooseNewTM(TMlist)
        else {
            if ResolveIncomp(TM, Complist, R_T)
                Compose and return TS
            else
                STOP
        }
    }
}
```

3.3 Property Compatibility

Formally, a composition of a transaction service $TS$ for the execution of a transaction $T$ over a set of compatible resource managers $RMs$ is denoted $TS = (TM, RMs)$, where $RMs = \{RM_1, \ldots, RM_m\}$ and where the transaction manager $TM$ is the one selected for the specific transaction $T$. $TS$ has the properties $\varphi(TS)$. The principal goal of $TS$ is to coordinate the execution of transaction $T$ while assuring the requested transactional requirements $R_T$.
Evaluating Property compatibility involves examining the transactional mechanisms of the $TM$ against the corresponding mechanisms of each involved $RM_j$, with the aim of assuring the requested transactional requirements.

3.3.1 Transactional Mechanisms

The main transactional mechanisms of TMs and RMs are commit/recovery and concurrency control. Generally, a TM controls global commit/recovery and global concurrency control, and a RM performs local transaction control (logging, concurrency control, persistency, commit/recovery). When heterogeneous commit protocols cooperate for the commitment of a distributed transaction, problems and incompatibility may arise. In this work, we focus on the integration of heterogeneous commit protocols, and leave concurrency control issues for future work. Traditionally, two-phase commit (2PC) is the protocol used to ensure the ACID properties of distributed transactions, and the X/Open Distributed Transaction Processing (DTP) model (Group, 1996) is the most widely used standard implementing the 2PC protocol. Several optimizations of the 2PC protocol have been proposed, for instance presumed abort (PrA), presumed commit (PrC), presumed nothing (PrN), presumed any (PrAny), and three-phase commit (3PC) (Gupta et al., 1997; Tamer and Valduriez, 1999; Al-Houmaily and Chrysanthis, 1999). These 2PC optimizations ensure atomic commit of global transactions. Other optimizations ensure weaker notions of atomicity, e.g., semantic atomicity. These include for instance the Optimistic 2PC (O2PC) protocol (Levy et al., 1991), the OPT protocol (Gupta et al., 1997), and one-phase commit (1PC). Compared with 2PC, 1PC omits the first phase, thereby permitting immediate commit of subtransactions. Consequently, one-phase commitment is feasible in an X/Open environment. A combination of the 1PC and 2PC protocols is realized in the dynamic 1-2PC protocol (Al-Houmaily and Chrysanthis, 2004).
The 1-2PC protocol switches from 1PC to 2PC when necessary. A prerequisite for an \( RM \) participating in a 2PC protocol is to support a visible prepare-to-commit state. \( RMs \) not supporting prepare-to-commit are not able to participate in a 2PC protocol and thus cannot contribute to assuring global atomicity. Integrating the different commit protocols may cause problems, and a visible prepare-to-commit state may not be enough for a practical integration of commit protocols. For instance, Al-Houmaily and Chrysanthis (1999) show that it is impossible to ensure global atomicity of distributed transactions executed at both PrA and PrC participants if PrA, PrC or PrN is running at the transaction manager. Consequently, they presented PrAny, a protocol that successfully integrates PrN, PrA and PrC. 3.3.2 Evaluating Property Compatibility Assume a transaction \( T \) with the set of transactional requirements \( R_T \). If each \( RM_j \) requested by the transaction can participate in the selected \( TM_i \)'s commit and abort protocols so that the requirements \( R_T \) of \( T \) are assured, the \( TS \) with the properties \( \varphi(TS) \) will be composed. The requirements \( R_T \) and the properties \( \varphi(TS) \) need not be equivalent. The following definition must be sustained. **Definition 1:** (Assured transactional requirements).
The set of transactional requirements \( R_T \) of a transaction \( T \) is assured if and only if \( R_T \) is semantically a subset of the set of transactional properties \( \varphi(TS') \) of the transaction service \( TS' \), where \( TS' = (TM, RMs') \) and \( TM \) is the manager selected for this particular transaction \( T \): \[ R_T \subseteq^* \varphi(TS') \] The definition states that the set of transactional requirements \( R_T \) of the transaction \( T \) must be a semantic subset of the set of properties \( \varphi(TS') \) such that every element of the set \( R_T \) is semantically contained in the set \( \varphi(TS') \). This definition is used during evaluation of property compatibility. If the relation does not hold, i.e. \( R_T \not\subseteq^* \varphi(TS') \), the composition will not take place. Definition 1 must hold for each \( TM - RM \) combination, and it must continue to hold when the \( TM \) changes or \( RMs \) are added. Consider \( TS' = (TM, RMs') \) where \( RMs' = \{ RM_1, \ldots, RM_m \} \) and \( R_T \subseteq^* \varphi(TS') \). Assume adding the resource manager \( RM_i \), yielding the service \( TS'' = (TM, RMs'') \) with \( RMs'' = RMs' \cup \{ RM_i \} \). Then, according to Definition 1, the transaction manager \( TM \) and the resource manager \( RM_i \) are property compatible for the execution of \( T \) if and only if \( R_T \subseteq^* \varphi(TS'') \). According to Definition 1, and following our perception, a semantic subset refers to a set of transactional properties belonging to a transaction service that are powerful enough to assure a specific set of transactional requirements. To illustrate the semantic subset relationship, consider a transaction service \( TS1 \) having the following set of properties: \( \varphi(TS1) = (A) \), where \( A \) refers to the atomicity property.
Assume a set of requirements deduced from a particular transaction specification: \( R_T = (SA_{atomic}) \), which refers to semantic atomicity as supported by Sagas (García-Molina and Salem, 1987). \( TS1 \) assures atomicity by implementing a variant of 2PC or a 3PC protocol. Since these protocols are able to commit individual transactions one-phase, as is required to assure semantic atomicity, \( TS1 \) guarantees \( R_T \): \( (SA_{atomic}) \subseteq^* (A) \), and Definition 1 is fulfilled. If the transaction requires full atomicity, \( R_T = (A) \), Definition 1 still holds as \( TS1 \) assures atomicity, and \( (A) \subseteq^* (A) \). Next, consider a service \( TS2 \) with the properties \( \varphi(TS2) = (SA_{atomic}) \) - semantic atomicity. This service implements a Sagas-like commit protocol supporting compensation. The resource managers of the composition may implement either a 2PC variant, 1PC, or simply commit transactions as soon as they are finished (as in, for instance, a web service). If a transaction requires semantic atomicity, \( R_T = (SA_{atomic}) \), the relation \( (SA_{atomic}) \subseteq^* (SA_{atomic}) \) is fulfilled and the requirements are guaranteed. If a transaction requires ACID, the service \( TS2 \) will not be composed for the transaction unless the incompatibility can be solved (see section 3.5). 3.3.3 Exemplifying Property Compatibility Assume an environment with three transaction managers, \( TM_1 \), \( TM_2 \) and \( TM_3 \). \( TM_1 \) assures ACID by implementing PrAny, \( TM_2 \) assures ACID by implementing PrC, and \( TM_3 \) assures relaxed atomicity as required by Sagas. The environment also includes three resource managers: \( RM_1 \) running PrC, \( RM_2 \) running PrA, and a web service, \( RM_3 \), not supporting prepare-to-commit. Figure 2 illustrates this environment with four proposed transaction services composed for four different transactions. These are denoted \( T1 \) to \( T4 \), and are enclosed by drawn lines.
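The \( \subseteq^* \) checks in the \( TS1 \)/\( TS2 \) examples above can be evaluated mechanically once the semantic-containment relation is fixed. A minimal sketch, assuming a hand-coded implication table in which full atomicity (A) also assures semantic atomicity (SA):

```python
# Sketch of Definition 1's semantic-subset test R_T ⊆* φ(TS).
# The implication table is an assumption made for illustration: full
# atomicity "A" is taken to also assure semantic atomicity "SA".
ASSURED_BY = {
    "SA": {"SA", "A"},   # semantic atomicity is assured by SA or full A
    "A":  {"A"},         # full atomicity only by A
}

def semantically_contained(requirements, properties):
    """True iff every requirement is assured by some property of the TS."""
    return all(ASSURED_BY[req] & set(properties) for req in requirements)

# TS1 offers full atomicity; TS2 offers only semantic atomicity.
print(semantically_contained({"SA"}, {"A"}))   # (SA) ⊆* (A)  -> True
print(semantically_contained({"A"}, {"SA"}))   # (A) ⊆* (SA)  -> False
```

With this relation in place, the refusal to compose \( TS2 \) for an ACID transaction falls out directly from the failed containment check.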
First, consider a transaction \( T1 \) that requests ACID and the resources \( RM_1 \) and \( RM_2 \). For \( T1 \), \( TM_1 \) implementing PrAny is selected. \( RM_1 \) implements PrC and \( RM_2 \) PrA. As shown in (Al-Houmaily and Chrysanthis, 1999), this combination implies compatibility, as PrAny successfully integrates both PrC and PrA. Next, a transaction \( T2 \) requests the same properties and resources as \( T1 \), namely ACID and \( RM_1 \) and \( RM_2 \). However, for \( T2 \), \( TM_2 \) implementing PrC is selected. We know from (Al-Houmaily and Chrysanthis, 1999) that atomicity cannot be guaranteed when a PrC protocol controls the execution of transactions over PrA and PrC protocols. In fact, Definition 1 will prevent this service from being composed. Instead, the procedure managing incompatibility will be initiated, and the problem can be solved by, for instance, reconsidering the choice of \( TM \) (see 3.5). Transaction \( T3 \) requests ACID and the resources \( RM_1 \), \( RM_2 \) and \( RM_3 \). \( TM_1 \) is selected for the execution. \( RM_3 \) does not support prepare-to-commit, so PrAny implemented by \( TM_1 \) cannot control the execution of \( T3 \). Consequently, global atomicity cannot be assured, incompatibility exists, and the procedure managing incompatibility will be initiated. The fourth transaction, \( T4 \), is a Saga requesting semantic atomicity and execution over \( RM_1 \), \( RM_2 \), and \( RM_3 \). For \( T4 \), \( TM_3 \) is selected. The Sagas-like commit protocol implemented by \( TM_3 \) requires the underlying resources to respond to immediate commit of individual transactions. This is assured by the involved \( RMs \): compatibility is present and the requirements are assured. 3.4 Communication Compatibility Communication compatibility evaluates the communication capabilities of a specific \( TM - RM \) pair. The ultimate goal is to assure requested transactional properties.
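The four property-compatibility cases \( T1 \)–\( T4 \) discussed above can be encoded as a small rule set. The rules below are a hand-coded assumption distilled from the text (not a general protocol-integration algorithm), and the function name is hypothetical:

```python
# Sketch of the property-compatibility checks behind T1..T4 (section 3.3.3).
TWO_PC_VARIANTS = {"PrA", "PrC", "PrN", "PrAny"}

def property_compatible(tm_protocol, rm_protocols, requirements):
    if "A" in requirements:                       # full (ACID) atomicity
        if "none" in rm_protocols:                # no prepare-to-commit state
            return False                          # cannot join a 2PC variant
        if tm_protocol == "PrAny":                # integrates PrN, PrA and PrC
            return rm_protocols <= TWO_PC_VARIANTS
        # Simplification: a PrA/PrC/PrN TM is assumed to accept only
        # participants running its own protocol (cf. the impossibility
        # result of Al-Houmaily and Chrysanthis, 1999).
        return rm_protocols <= {tm_protocol}
    if "SA" in requirements:                      # semantic atomicity (Sagas)
        return True                               # immediate commit suffices here
    return False

print(property_compatible("PrAny", {"PrC", "PrA"}, {"A"}))           # T1: True
print(property_compatible("PrC",   {"PrC", "PrA"}, {"A"}))           # T2: False
print(property_compatible("PrAny", {"PrC", "PrA", "none"}, {"A"}))   # T3: False
print(property_compatible("Saga",  {"PrC", "PrA", "none"}, {"SA"}))  # T4: True
```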
The interface implemented by the involved parties determines the ability to communicate. At present, the XA-standard (Group, 1996) defines the most widely used interface. XA-compatibility and non-XA compatibility are a natural classification of the communication capabilities of $TM$s and $RM$s. XA-compatibility in our sense means conformance to the XA-interface, not necessarily assuring specific transactional requirements. In our definition, an XA-compatible $TM$ or $RM$ can assure either ACID or non-ACID. Non-XA compatible participants may conform to any other standard (or interface), or none at all. The XA-interface provides the methods necessary for transaction coordination, commitment, and recovery between a $TM$ and one or more $RMs$. The XA-interface supports both atomic commit by the use of a 2PC variant and relaxed atomicity by having the ability to perform 1PC. The $Comm()$ procedure handling communication compatibility (see 3.2) takes as input the particular $TM$ and $RM$, and a set of transactional requirements $R_T$. The $Comm()$ procedure discovers XA-compatibility by investigating the standard tag of the $TM$ and the $RM$ descriptor. Then, the transactional requirements $R_T$ are used in the process of evaluating communication compatibility. Based on $R_T$, the requirements regarding communication can be deduced. We will see that a specific $TM - RM$ combination may satisfy a particular $R_T$ set, but not another one. If both the $TM$ and the $RM$ are XA-compatible, communication compatibility exists irrespective of the content of $R_T$. In this case, the $TM$ most likely implements a 2PC variant, and the XA-compatible $RM$ supports a visible prepare-to-commit state. Consequently, with this combination, communication is most likely satisfied for both ACID and relaxed (i.e. semantic) atomicity. However, independent of the content of $R_T$, the descriptors are consulted ahead of the evaluation.
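The descriptor-driven check performed by $Comm()$ can be sketched as follows. The descriptor layout (`{"standard": ...}`) and the extra capability fields are assumptions for illustration; the text only states that a standard tag is inspected:

```python
# Sketch of the communication-compatibility check (Comm() in the text).
# Descriptor fields other than "standard" are hypothetical.
def comm_compatible(tm_desc, rm_desc, requirements):
    tm_xa = tm_desc.get("standard") == "XA"
    rm_xa = rm_desc.get("standard") == "XA"
    if tm_xa and rm_xa:
        return True                      # XA on both sides: always compatible
    if "A" in requirements:              # ACID: RM must expose prepare-to-commit
        return rm_xa or rm_desc.get("prepare_to_commit", False)
    if "SA" in requirements:             # e.g. a Saga-like TM over an XA RM
        return rm_xa or rm_desc.get("immediate_commit", False)
    return False

print(comm_compatible({"standard": "XA"}, {"standard": "XA"}, {"A"}))     # True
print(comm_compatible({"standard": "Saga"}, {"standard": "XA"}, {"SA"}))  # True
```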
If $R_T$ claims ACID, communication might still be fulfilled even though the $RM$ is non-XA compatible. For instance, a non-XA compatible $RM$ may be able to support prepare-to-commit. Consider a non-XA $TM$ in combination with an XA $RM$. If the $TM$ implements a Saga-like commit protocol and $R_T$ requests semantic atomicity, communication is satisfied. 3.5 Managing Incompatibility Incompatibility may be detected during the evaluation of either property or communication compatibility. In each case, the following actions are considered: - Communication incompatibility: 1) add an adapter (or wrapper) to either the $RM$ or the $TM$ to make them conform to each other’s interfaces, or 2) choose another $TM$ with different communication characteristics, but with the same transactional guarantees. - Property incompatibility: 1) choose another $TM$ with different transactional mechanisms, but with the same transactional guarantees, or 2) negotiate to find an alternative way to execute the transaction by modifying either the transactional requirements or the list of involved resources. 4 RELATED WORK Today’s transaction processing platforms support the execution of distributed transactions, but with limited flexibility. Present platforms, such as Microsoft Transaction Server (MTS) (Corporation, 2000) and Sun’s Java Transaction Service (JTS) (Subramanyam, 1999), provide merely one transaction service with ACID guarantees. Other approaches support more than one transaction service, although not concurrently. One is given by the CORBA Activity Service Framework (Houston et al., 2001), where various extended transaction models are supported. Others are WS-Transactions (Group, 2004), the OASIS BTP specification (Little, 2003), and the Arjuna XML Transaction Service (Lid, 2003), which describe solutions providing two different transaction services, one for atomic transactions and the other for long-running business transactions.
Flexibility within transactional systems can be found in the works of Barga (Barga and Pu, 1996) and Wu (Wu, 1998), implementing flexible transaction services. Related work on dynamic combination and configuration of transactional and middleware systems can, for instance, be found in Zarras (Zarras and Issarny, 1998). References to other works can be found in (Arntsen and Karlsen, 2005). These works recognize the diversity of systems and their different transactional requirements, and describe approaches to how these needs can be supported. Our work on the flexible transaction processing environment ReflecTS contrasts with previous work in several respects: first, by supporting an extensible number of concurrently running services, and second, by providing dynamic transaction service selection and composition according to the needs of applications. 5 CONCLUSION AND FUTURE WORK The transactional requirements of advanced application domains and web services environments are varying and evolving, demanding flexible transaction processing. On the basis of the flexible transaction processing platform ReflecTS, this work presents a novel approach to dynamic transaction service composition and related compatibility issues. Within ReflecTS, a suitable transaction manager can be selected for a particular transaction execution and dynamically composed together with the requested resource managers into a complete transaction service. To complete transaction service composition, this work evaluates Property and Communication compatibility between a transaction manager and resource managers. The main contributions of this work are the procedures and the formalisms related to these compatibility issues. Ongoing and future work includes an in-depth evaluation of local and global transactional mechanisms (including concurrency control) with respect to transaction service composition.
Further, ongoing work includes developing rules for managing incompatibility, and future work includes an examination of compatibility related to service activation, Horizontal Compatibility.
**Online-Questionnaire Design: Establishing Guidelines and Evaluating Existing Support** Joanna Lumsden NRC IIT e-Business, 46 Dineen Dr., Fredericton, NB, Canada, E3B 9W4, jo.lumsden@nrc-cnrc.gc.ca Wendy Morgan University of New Brunswick, PO Box 4400, Fredericton, NB, Canada, E3B 5A3, f995g@unb.ca **INTRODUCTION** As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. 
Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2] the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online-questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated. 
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online-questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online-questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online-questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work. **COMPREHENSIVE DESIGN GUIDELINES** In essence, an online-questionnaire combines questionnaire-based survey functionality with that of a webpage/site. As such, the design of an online-questionnaire should incorporate principles from both contributing fields. 
Hence, in order to derive a comprehensive set of guidelines for the design of online-questionnaires, we performed an environmental scan of existing guidelines for paper-based questionnaire design (e.g. [7-14]) and website design, paying particular attention to issues of accessibility and usability (e.g. [19-30]). Additionally, we reviewed the scarce existing provision of online-questionnaire design guidelines [2, 15, 16]. Principal amongst the latter is the work of Dillman [2]. Expanding on his successful Total Design Method for mail and telephone surveys [31], Dillman introduced, as part of his Tailored Design Method [2], fourteen guidelines for the design of web-based questionnaires. **Guideline Organisation** At the highest level, our guidelines advise on the process that should be followed when designing a questionnaire (the sequence of steps is shown in Figure 1(a)). Providing brief assistance for the remaining steps, the guidelines focus on supporting the design and implementation of questionnaire content (step shown shaded in Figure 1(a)). To this end, we identify the general organisational structure that online-questionnaires should adopt (see Figure 1(b)), provide assistance at this level, and then progressively refine the guidance according to the issues identified in Table 1. Since it is not possible to include the comprehensive set of guidelines, the following excerpt (Table 2) is presented as an example to provide a 'flavour' for the guidelines as a whole; the guidance relates to the formatting of text (see outlined component in Table 1) in online-questionnaires. When reading the example, it is important to note that none of the guidelines are particularly innovative in their own right; each has been drawn from the aforementioned sources covered by the environmental scan. What is novel, however, is the fact that applicable guidelines from these disparate sources have been collated into a unified set which is presented methodically in order to comprehensively support online-questionnaire design.
Table 1. Guideline Organisation and Topics Covered <table> <thead> <tr> <th>GENERAL ORGANIZATION</th> <th>FORMATTING</th> <th>QUESTION TYPE &amp; PHRASING</th> <th>GENERAL TECHNICAL ISSUES</th> </tr> </thead> <tbody> <tr> <td>Welcome Page</td> <td>Text</td> <td>General Guidance</td> <td>Privacy &amp; Protection</td> </tr> <tr> <td>Registration/Login Page</td> <td>Color</td> <td>Sensitive Questions</td> <td>Computer Literacy</td> </tr> <tr> <td>Introduction Page</td> <td>Graphics</td> <td>Attitude Statements</td> <td>Automation</td> </tr> <tr> <td>Screening Text Page</td> <td>Flash</td> <td>Phraseology</td> <td>Platforms &amp; Browsers</td> </tr> <tr> <td>Questionnaire Questions</td> <td>Tables &amp; Frames</td> <td>Types of Person</td> <td>Devices</td> </tr> <tr> <td>Additional Information Links</td> <td>Feedback</td> <td>Open-Ended/Closed-Ended</td> <td>Assistive Technology</td> </tr> <tr> <td>Thank You</td> <td>Miscellaneous</td> <td>Rank Order</td> <td></td> </tr> <tr> <td>Layout</td> <td>Response Formats</td> <td>Categorical or Nominal</td> <td></td> </tr> <tr> <td>Frames &amp; Fields</td> <td>Matrix Questions</td> <td>Magnitude Estimate</td> <td></td> </tr> <tr> <td>Navigation</td> <td>Drop Down Boxes</td> <td>Ordinal Questions</td> <td></td> </tr> <tr> <td>Buttons</td> <td>Radio Buttons</td> <td>Likert Scale</td> <td></td> </tr> <tr> <td>Links</td> <td>Check Boxes</td> <td>Skip</td> <td></td> </tr> <tr> <td>Site Maps/Scalining</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> There are a number of issues of importance when designing the textual content of an online-questionnaire: a) Fonts used should
be readable and familiar, and text should be presented in mixed case or standard sentence formatting; upper case (or all capitals) should only be used for emphasis; b) Sentences should not exceed 20 words, and should be presented with no more than 75 characters per line. If elderly respondents are anticipated, then this limit should be reduced to between 50 and 65 characters per line. Paragraphs should not exceed 5 sentences in length; c) Technical instructions (those being instructions related to the basic technical operation of the website delivering the questionnaire) should be written in such a way that non-technical people can understand them; d) Ensure that questions are easily distinguishable in terms of formatting, from instructions and answers; e) For each question type, be consistent in terms of the visual appearance of all instances of that type and the associated instructions concerning how they are to be answered. In particular, keep the relative prominence of the question and answer consistent throughout the questionnaire. Where different types of questions are to be included in the same questionnaire, each question type should have a unique visual appearance; f) When designing for accessibility by users with disabilities and the elderly, employ a minimum of size 14pt font and ensure that the font color contrasts significantly with the background coloring. Text should be discernible even without the use of color. It is advisable to test font colors and size with a screen magnifier to ensure usability prior to release; g) If targeting an elderly audience, provide a text-sizing option on each page, use bold face but avoid italics, and ‘automatic text’; it is also advisable to increase the spacing between lines of text for ease of reading by this respondent group; h) Make sure that text is read (by screen readers) in a logical order. Specifically, set the tab order on the pages. 
This is especially true for actual questions in the questionnaire – think carefully about the order in which a visually impaired user will hear the elements of a question, including the instructions and response options. Table 2. Excerpt from the Online-Questionnaire Design Guidelines **A FRAMEWORK FOR EVALUATION OF SUPPORT** Choice of online-questionnaire development tool is complex. Developers of online-questionnaires are confronted by an increasing variety of software tools to help compose and deliver online-questionnaires. Many such tools purport to allow ‘anyone’ to quickly and easily develop an online-questionnaire.
We wanted to assess the degree to which such tools encourage ‘anyone’ to develop a good questionnaire (where, for our purposes, ‘good’ is defined as following established principles for website and questionnaire design); that is, we wanted to evaluate the extent to which online-questionnaire development tools incorporate the principles of our guidelines. Developed by Lumsden, SUIT is a means by which user interface development tools (UIDTs) can be systematically evaluated and compared [17, 18]. Centring around a framework and evaluation method, SUIT adopts a reference model-based approach to tool evaluation. Although, as published, SUIT is dedicated to UIDT evaluation, the principles of SUIT are applicable to any artefact evaluation and comparison [17]. Hence, together with the fact that a website – and therefore an online-questionnaire – is essentially a user interface, a version of the SUIT framework (modified to reflect appropriate evaluative parameters) seemed ideal for the evaluation of support for our identified guidelines within current online-questionnaire development tools. Evaluation Framework Figure 2 shows an excerpt from the evaluation framework that was used to assess online-questionnaire development tools. Rows in the framework represent the guidelines, each being summarized for brevity and included under a header representing the applicable online-questionnaire component (e.g. ‘text’ in Figure 2). Tools were evaluated according to their feature provision and specifically the means by which each feature is incorporated into an online-questionnaire. Where a tool did not support a particular feature, this was marked as ‘no support’; where a feature was supported, we recorded whether it was incorporated automatically or manually and whether control over the feature (e.g. style) was manual or automatic. 
In essence, these measures allow us to determine the functionality and focus of control available in online-questionnaire development tools. Tools were also evaluated according to their support for our guidelines, measured according to the manner in which the guidelines manifested; for instance, for any given guideline, if a tool restricted the use/set-up of the associated feature in accordance with the principle of the guideline, support for that guideline was recorded as ‘Imposed Restrictions’. Support mechanisms were not mutually exclusive; it was possible for any given guideline to be supported by more than one means (the available options are shown in Figure 2).

Table 2. Excerpt from the Online-Questionnaire Design Guidelines

There are a number of issues of importance when designing the textual content of an online-questionnaire:

a) Fonts used should be readable and familiar, and text should be presented in mixed case or standard sentence formatting; upper case (or all capitals) should only be used for emphasis;

b) Sentences should not exceed 20 words, and should be presented with no more than 75 characters per line. If elderly respondents are anticipated, then this limit should be reduced to between 50 and 65 characters per line. Paragraphs should not exceed 5 sentences in length;

c) Technical instructions (those being instructions related to the basic technical operation of the website delivering the questionnaire) should be written in such a way that non-technical people can understand them;

d) Ensure that questions are easily distinguishable, in terms of formatting, from instructions and answers;

e) For each question type, be consistent in terms of the visual appearance of all instances of that type and the associated instructions concerning how they are to be answered. In particular, keep the relative prominence of the question and answer consistent throughout the questionnaire. Where different types of questions are to be included in the same questionnaire, each question type should have a unique visual appearance;

f) When designing for accessibility by users with disabilities and the elderly, employ a minimum of size 14pt font and ensure that the font color contrasts significantly with the background coloring. Text should be discernible even without the use of color. It is advisable to test font colors and size with a screen magnifier to ensure usability prior to release;

g) If targeting an elderly audience, provide a text-sizing option on each page, use bold face but avoid italics and ‘automatic text’; it is also advisable to increase the spacing between lines of text for ease of reading by this respondent group;

h) Make sure that text is read (by screen readers) in a logical order; specifically, set the tab order on the pages. This is especially true for actual questions in the questionnaire – think carefully about the order in which a visually impaired user will hear the elements of a question, including the instructions and response options.

RESULTS AND DISCUSSION

Fifteen online-questionnaire development tools were randomly selected for inclusion in this study; seven were web-based software products (online-tools), typically also hosting the finished survey, and eight were offline software products, installed on one’s own computer (offline-tools). A combination of demo software, free online accounts, and vendor tutorials was used to source the information for the study³.

Functional Support for Listed Features

On average, 74% of listed features were supported within the tools studied⁴; this did not differ between online- and offline-tools, although the supported subset did vary slightly across the tool types.
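The quantitative limits in guideline (b) of Table 2 lend themselves to automated checking. The following is a minimal illustrative sketch only; the function name and the way sentences are split are our own assumptions, not part of any tool or guideline source:

```python
import re

def check_text(paragraph, max_words=20, max_line=75, max_sentences=5):
    """Illustrative check of guideline (b): sentences of at most 20 words,
    lines of at most 75 characters, paragraphs of at most 5 sentences."""
    problems = []
    # Naive sentence split on terminal punctuation (an assumption).
    sentences = [s for s in re.split(r'[.!?]+\s*', paragraph) if s]
    if len(sentences) > max_sentences:
        problems.append(
            f"paragraph has {len(sentences)} sentences (max {max_sentences})")
    for s in sentences:
        if len(s.split()) > max_words:
            problems.append(f"sentence exceeds {max_words} words: {s[:40]}...")
    for line in paragraph.splitlines():
        if len(line) > max_line:
            problems.append(f"line exceeds {max_line} characters")
    return problems
```

A tool could run such a check as questionnaire text is entered and surface the returned problems as advice rather than hard restrictions, matching the ‘Imposed Restrictions’ versus guidance distinction drawn above.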
In terms of the General Organization related features (see Table 1), none of the tools explicitly⁵ supported the inclusion of screening-test pages or sitemaps, and offline-tools were not found to support development of registration/login pages. Formatting features (see Table 1) were better supported, with only flash missing from offline-tools and tables and frames missing from online-tools. All features related to Question Type & Phrasing (see Table 1) were supported irrespective of tool type. In terms of General Technical Issues (see Table 1), no tools supported design for assistive technology, and offline-tools, surprisingly, did not support rigorous testing across platforms and browsers.

Across those features that were provided, feature inclusion was achieved manually, on average, 74% of the time; this figure was slightly higher for offline-tools (80%) and slightly lower (68%) for online-tools. A similar pattern was also observed for the control of features once included (on average, 78% of control was manual). In essence, feature insertion style mirrored manipulation style. What is interesting to note here is that, despite providing a distinct lack of guidance (see Section 4.2), the tools supported little automation of online-questionnaire design, which could have been used in lieu of guidance to ‘control’ questionnaire quality to some extent.

Support for Guidelines

Guideline support was only assessed relative to the features or functions that were physically present/provided within the tools. On average, only 13% of the listed guidelines (relating to supported functionality) had any form of support within the tools studied; this was true for both tool types. Of the guideline categories listed in Table 1, 36% had no representation at all across one or both of the tool types.
Support for guidelines across 18% of categories was completely missing (where the functionality was available) from every tool studied; these included guidelines related to the use or design of Additional Information Links, Navigation, Scrolling, Matrix Questions, Attitude Statements, Magnitude Estimate Questions, and Automation of online-questionnaire components. On average, 12% of General Organization, 16% of Formatting, 6% of Question Type & Phrasing, and 26% of General Technical Issues guidelines were supported; where there was a difference (albeit, in most cases, very little) between the tool types in terms of the extent of support for these high-level categories of guidelines, offline-tools generally provided more support, with the exception of General Organization, where online-tools were more supportive.

Consider, now, the means by which the supported guidelines were supported. Figure 3 shows (using the primary y-axis and bar chart) the number of tools in which each support mechanism was used; using the secondary y-axis and line charts, it shows the average, minimum, and maximum extent (as a percentage) to which the various mechanisms were used across supported guidelines. The most popular support mechanism was the use of defaults (used, on average, for 87% of supported guidelines). Thus, when a feature was included in an online-questionnaire, it was set up by default in adherence with the associated guideline(s); designers were, however, typically free to alter these settings without being advised of the potential disadvantages associated with their actions. Second in popularity was the use of non-context-sensitive help (i.e. help which was not context-linked to actions and had to be looked up independently); it was used in 10 of the 15 tools, but its average application across supported guidelines was only 14%. The remaining support mechanisms were typically used by only one or two tools and contributed to the support of very few guidelines overall.
Surprisingly, given the nature of the artefact being designed, neither wizards nor templates were much utilized; where the latter were used, they supported, on average, 34% of guidelines.

Overall, the study has highlighted the predominant absence of sufficient guidance when creating online-questionnaires using current development tools. Typically, most available features can be incorporated into an online-questionnaire with little or no suggestion as to best practice; where guidelines are supported, the mechanism by which they are supported is typically implicit – there is insufficient explicit explanation provided as to how best to design an online-questionnaire.

Copyright © 2005, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.

CONCLUSIONS AND FURTHER WORK

On the basis of our evaluation, we consider there to be a distinct need for improved support for guidelines within online-questionnaire design tools in order to facilitate online-questionnaire development that is based on all relevant accepted principles. Without such support, ad hoc online-questionnaire development will continue as is, with the anticipated result that the public will become disenchanted with such surveys, and their usefulness will therefore diminish without having been granted a fair hearing. We are currently in the process of performing an empirical study by which we are evaluating the guidelines themselves in terms of their ability, as a comprehensive tool, to guide better online-questionnaire design. Ultimately, we plan to develop an online-questionnaire design tool that will guide a developer through the design process, highlighting contravention of advisable practice where applicable. Finally, we plan to incorporate additional new guidelines concerning the use of language in online-questionnaires.
A ‘structurally sound’ questionnaire can be badly disadvantaged by the wording used to express questions and responses; we would like to be able to advise on the use of language, primarily via some form of natural language ‘check’ in our development tool.

REFERENCES

ENDNOTES

1. In the absence of appropriate measures to address this.
2. Note, this research is not concerned with coverage errors, which are orthogonal to good questionnaire design; mixed-mode delivery is suggested as a means to combat such errors.
3. The authors recognize that, as a result of limited access to some tools, some available functionality may have been missed. All results should be considered in light of this caveat.
4. None of the tools supported any of the design process steps (see Figure 1(a)) – other than the ‘design and implement content’ step – in any way; as such, these functions are not included in any of the following discussion.
5. For the purpose of fairness and simplicity, a tool was only assessed as supporting functionality if the support was explicit; it is recognized that some tools will provide ‘hidden’ means by which to achieve functional goals, but these are not included here.
{"Source-Url": "https://www.irma-international.org/viewtitle/32623/?isxn=9781616921293", "len_cl100k_base": 5659, "olmocr-version": "0.1.49", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 15170, "total-output-tokens": 6829, "length": "2e12", "weborganizer": {"__label__adult": 0.0004117488861083984, "__label__art_design": 0.003635406494140625, "__label__crime_law": 0.0008292198181152344, "__label__education_jobs": 0.064208984375, "__label__entertainment": 0.0002343654632568359, "__label__fashion_beauty": 0.00036406517028808594, "__label__finance_business": 0.002941131591796875, "__label__food_dining": 0.0004680156707763672, "__label__games": 0.0008273124694824219, "__label__hardware": 0.0018672943115234375, "__label__health": 0.0012845993041992188, "__label__history": 0.0011720657348632812, "__label__home_hobbies": 0.0005164146423339844, "__label__industrial": 0.0009374618530273438, "__label__literature": 0.001739501953125, "__label__politics": 0.0005464553833007812, "__label__religion": 0.0006575584411621094, "__label__science_tech": 0.32177734375, "__label__social_life": 0.000507354736328125, "__label__software": 0.1676025390625, "__label__software_dev": 0.42626953125, "__label__sports_fitness": 0.0002560615539550781, "__label__transportation": 0.0006079673767089844, "__label__travel": 0.00033092498779296875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31807, 0.01841]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31807, 0.29636]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31807, 0.91515]], "google_gemma-3-12b-it_contains_pii": [[0, 6410, false], [6410, 19794, null], [19794, 25668, null], [25668, 30258, null], [30258, 31807, null]], "google_gemma-3-12b-it_is_public_document": [[0, 6410, true], [6410, 19794, null], [19794, 25668, null], [25668, 30258, null], [30258, 31807, null]], 
"google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31807, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31807, null]], "pdf_page_numbers": [[0, 6410, 1], [6410, 19794, 2], [19794, 25668, 3], [25668, 30258, 4], [30258, 31807, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31807, 0.12821]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
955a8a8972cd2dd8a8c2850c98ad00f538eda19a
Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026 [RFC2026]. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This document describes how to efficiently implement certain mechanisms of unidirectional operation in ROHC-TCP (robust header compression scheme for TCP/IP), RFC XXX, which can significantly improve the compression efficiency compared to using a simple control scheme. All the operational modes and state machines mentioned in this document are the same as the ones described in [ROHC-TCP].

Table of Contents

   Status of this Memo
   Abstract
   1. Introduction
   2. Terminology
   3. Tracking-based TCP congestion window estimation
      3.1. General principle of congestion window tracking
      3.2. Tracking based on Sequence Number
      3.3. Tracking based on Acknowledgment Number
      3.4. Further discussion on congestion window tracking
   4. Optional enhancements in Unidirectional mode
      4.1. Optional operation for upwards transition
           4.1.1. Optional transition for short-lived TCP transfers
           4.1.2. Optional operation in IR state
           4.1.3. Optional operation in CO state
           4.1.4. Determine the value K in congestion window estimation
      4.2. Optional operation for downwards transition
      4.3. Other possible optimizations
   5. Security considerations
   6. IANA Considerations
   7. Acknowledgements
   8. Authors' addresses
   9. Intellectual Property Right Considerations
   10. References

Document History

   00   November, 2002   First release.
   01   May, 2003        Editorial revision.

1. Introduction

This document describes how to implement certain mechanisms in [ROHC-TCP] to significantly improve the compression efficiency in unidirectional operation mode compared with that obtained with a simple control scheme. As discussed in [TCPBEH], window-based LSB encoding [RFC3095], which does not assume a linear changing pattern in the target header fields, is more suitable for encoding some TCP fields, such as the Sequence Number and the Acknowledgment Number, both efficiently and robustly, given the changing pattern of these fields. Using ROHC-TCP, both the compressor and the decompressor maintain context values. To provide robustness, the compressor can maintain more than one context value for each field. These values represent the r most likely candidate values for the context at the decompressor. ROHC-TCP ensures that the compressed header contains enough information so that the uncompressed header can be extracted no matter which one of the compressor context values is actually stored at the decompressor.
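As a non-normative illustration of this windowed LSB idea, the sketch below computes the smallest number of LSBs k that lets a value be recovered against any candidate reference. It assumes unbounded integer fields (no wrap-around) and the simplified interpretation interval [ref - p, ref - p + 2^k - 1]; the normative offset p and interval handling are defined in RFC 3095:

```python
def wlsb_k(value, refs, p=0):
    """Smallest k such that `value` lies in the interpretation interval
    [ref - p, ref - p + 2**k - 1] for EVERY candidate reference in `refs`.
    Simplified sketch of window-based LSB encoding (no wrap-around)."""
    k = 0
    while not all(ref - p <= value <= ref - p + 2**k - 1 for ref in refs):
        k += 1
    return k

def wlsb_decode(lsbs, k, ref, p=0):
    """Recover the unique value in [ref - p, ref - p + 2**k - 1] whose k
    least significant bits equal `lsbs` (the interval holds exactly 2**k
    consecutive integers, so exactly one of them matches)."""
    base = ref - p
    return base + ((lsbs - base) % 2**k)
```

For example, with candidate references {995, 998, 999} and value 1000, three bits suffice, and decoding 1000 mod 8 against any of the candidates yields 1000 again, which is why the compressed header stays decodable whichever reference the decompressor actually holds.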
Storing more context values at the compressor reduces the chance that the decompressor will have a context value different from any of the values stored at the compressor (which could cause the packet to be decompressed incorrectly). As an example, an implementation may choose to store the last r values of each field in the compressor context. In this case, robustness is guaranteed against up to r - 1 consecutive dropped packets between the compressor and the decompressor. Thus, by keeping the value r large enough, the compressor rarely gets out of synchronization with the decompressor. The trade-off, however, is that the larger the number of context values, the larger the compressed header, because it must contain enough information to allow decompression relative to any of the candidate context values. That is to say, the compression efficiency will be reduced. To reduce the number of context values r, the compressor needs some form of feedback to gain sufficient confidence that a certain context value will no longer be used as a reference by the decompressor. The compressor can then remove that value, and all other values older than it, from its context. Obviously, when a feedback channel is available, this type of confidence can be achieved by proactive feedback in the form of ACKs from the decompressor. A feedback channel, however, is unavailable in unidirectional operational mode in ROHC-TCP. To simplify the description, the mechanism previously used in the ROHC-TCP compression process is referred to as the static control scheme. One thing to be emphasized in this document is that an implicit feedback can be obtained from the natural feedback loop of the TCP protocol itself, for TCP/IP header compression, even in unidirectional operational mode. Since TCP is a window-based protocol, a new segment cannot be transmitted without acknowledgment of the segments in the previous window.
Upon receiving the new segment, the compressor can get enough confidence that the decompressor has received the segments in the previous window, and can then shrink the context window by removing all the values older than that segment. That is to say, the context window of ROHC-TCP, i.e. the number of context values r, can be controlled by the TCP congestion window. A tracking-based TCP congestion window estimation algorithm is described in this document to estimate the TCP congestion window at the compressor side. All the operational modes and state machines mentioned in this document are the same as the ones described in [ROHC-TCP]. For a better understanding of this draft, the reader should be familiar with the concepts of [ROHC-TCP].

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

TCP congestion window (cwnd)
   A TCP state variable that limits the amount of data a TCP can send. At any given time, a TCP MUST NOT send data with a sequence number higher than the sum of the highest acknowledged sequence number and the minimum of cwnd and rwnd (receiver window).

Loss propagation
   Loss of headers due to errors in (i.e., loss of or damage to) previous header(s) or feedback.

Robustness
   The performance of a header compression scheme can be described with three parameters: compression efficiency, robustness, and compression transparency. A robust scheme tolerates loss and residual errors on the link over which header compression takes place without losing additional packets or introducing additional errors in decompressed headers.

3. Tracking-based TCP congestion window estimation

As originally outlined in [CAC] and specified in [RFC2581], TCP incorporates four congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
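The implicit-feedback pruning described above can be sketched as follows. This is an illustration only, not the draft's normative procedure; the class name, the per-segment granularity, and the cwnd-to-reference-count conversion are assumptions:

```python
class ContextWindow:
    """Sketch of a compressor-side list of candidate reference values,
    shrunk when the estimated TCP window implies that older values must
    already have been delivered (TCP cannot send a new segment until the
    previous window has been acknowledged)."""

    def __init__(self):
        self.refs = []  # candidate context values, oldest first

    def on_segment(self, seq, est_cwnd, mss):
        # Record the new value as a candidate reference.
        self.refs.append(seq)
        # Keep roughly one reference per segment of the estimated window;
        # anything older is implicitly acknowledged and can be dropped.
        r = max(1, est_cwnd // mss)
        if len(self.refs) > r:
            del self.refs[:len(self.refs) - r]
        return len(self.refs)
```

With an estimated window of four segments, the reference list never grows beyond four entries, which directly bounds the number of LSBs the compressed header has to carry.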
The effective window of TCP is mainly controlled by the congestion window and may change during the entire connection lifetime. The enhancement of ROHC-TCP, ENH-ROHC-TCP, provides a mechanism to track the dynamics of the TCP congestion window and to control the number of context values r of windowed LSB encoding by the estimated congestion window. By combining windowed LSB encoding and TCP congestion window tracking, ENH-ROHC-TCP can achieve better performance over high bit-error-rate links. Note that in one-way TCP traffic, only the information about the sequence number or the acknowledgment number is available for tracking the TCP congestion window. ROHC-TCP does not require that all one-way TCP flows cross the same compressor. The details are described in the following sections.

3.1. General principle of congestion window tracking

The general principle of congestion window tracking is as follows. The compressor imitates the congestion control behavior of TCP upon receiving each segment and, at the same time, estimates the congestion window (cwnd) and the slow start threshold (ssthresh). Besides the requirement of accuracy, there are also some other requirements for the congestion window tracking algorithms:

- Simplex link. The tracking algorithm SHOULD only take the Sequence Number or the Acknowledgment Number of a one-way TCP flow into consideration. It SHOULD NOT use the Sequence Number and the Acknowledgment Number of that flow simultaneously.

- Misordering resilience. The tracking algorithm SHOULD work well while receiving misordered segments.

- Multiple links. The tracking algorithm SHOULD work well when not all the one-way TCP flows cross the same link.

- Slight overestimation. If the tracking algorithm cannot guarantee the accuracy of the estimated cwnd and ssthresh, it is RECOMMENDED that it produce a slightly overestimated one.
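The general principle above can be sketched as a small state machine that imitates TCP congestion control. The sketch loosely follows the sequence-number-based tracker described next; the constants, the backward-jump loss heuristic, and the fast-recovery exit test are illustrative simplifications, not the draft's normative algorithm:

```python
# States of the illustrative tracker, mirroring TCP congestion control.
SLOW_START, CONG_AVOID, FAST_RECOVERY = range(3)

class CwndTracker:
    def __init__(self, mss=1460, iw_segments=4, issthresh=2**30):
        self.mss = mss
        self.cwnd = iw_segments * mss   # IW SHOULD be 4*MSS
        self.ssthresh = issthresh       # ISSTHRESH MAY be arbitrarily high
        self.state = SLOW_START
        self.cmaxsn = None              # highest sequence number seen

    def on_seq(self, nsn):
        if self.cmaxsn is None:         # first observed segment
            self.cmaxsn = nsn
            return
        gap = nsn - self.cmaxsn
        if self.state == FAST_RECOVERY:
            if gap >= self.cwnd:        # traffic advanced a full window
                self.state = CONG_AVOID
            return
        if gap > 0:                     # forward progress: grow cwnd
            if self.state == SLOW_START:
                self.cwnd += gap        # exponential growth
                if self.cwnd > self.ssthresh:
                    self.state = CONG_AVOID
            else:
                self.cwnd += gap * self.mss // self.cwnd  # linear growth
            self.cmaxsn = nsn
        elif -gap > 3 * self.mss:       # large backward jump: assume loss
            self.cwnd //= 2
            self.ssthresh = max(self.cwnd, 2 * self.mss)
            self.state = FAST_RECOVERY
```

Because the tracker only ever sees one direction of the flow, it satisfies the simplex-link requirement above, and ignoring small backward jumps (up to 3*MSS) gives it some resilience to misordered segments.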
The following sections describe two congestion window tracking algorithms, which use the Sequence Number and the Acknowledgment Number of a one-way TCP flow, respectively.

3.2. Tracking based on Sequence Number

This algorithm (Algorithm SEQ) contains 3 states: SLOW-START, CONGESTION-AVOIDANCE, and FAST-RECOVERY, which are equivalent to the states in the TCP congestion control algorithms. It maintains 2 variables: cwnd and ssthresh. Initially, the algorithm starts in SLOW-START state with ssthresh set to ISSTHRESH and cwnd set to IW. Upon receiving a segment, if it is the first segment (which is not necessarily the SYN segment), the algorithm sets the current maximum Sequence Number (CMAXSN) and the current minimum Sequence Number (CMINSN) to this segment's sequence number; otherwise, the algorithm takes a procedure according to the current state.

- SLOW-START
  * If the new Sequence Number (NSN) is larger than CMAXSN, increase cwnd by the distance between NSN and CMAXSN, and update CMAXSN and CMINSN based on the following rules:

       CMAXSN = NSN;
       if (CMAXSN - CMINSN) > cwnd then CMINSN = CMAXSN - cwnd;

  * If cwnd is larger than ssthresh, the algorithm transits to CONGESTION-AVOIDANCE state;
  * If the distance between NSN and CMAXSN is less than or equal to 3*MSS, ignore it;
  * If the distance is larger than 3*MSS, halve cwnd and set ssthresh to MAX(cwnd, 2*MSS). After that, the algorithm transits into FAST-RECOVERY state.

- CONGESTION-AVOIDANCE
  * If NSN is larger than CMAXSN, increase cwnd by ((NSN - CMAXSN)*MSS)/cwnd and then update CMAXSN and CMINSN based on the following rules:

       CMAXSN = NSN;
       if (CMAXSN - CMINSN) > cwnd then CMINSN = CMAXSN - cwnd;

  * If the distance between NSN and CMAXSN is less than or equal to 3*MSS, ignore it;
  * If the distance is larger than 3*MSS, halve cwnd and set ssthresh to MAX(cwnd, 2*MSS). After that, the algorithm transits into FAST-RECOVERY state.
- FAST-RECOVERY
  * If NSN is larger than or equal to CMAXSN + cwnd, the algorithm transits into CONGESTION-AVOIDANCE state;
  * Otherwise, ignore it.

In this algorithm, MSS denotes the estimated maximum segment size. An implementation can use the MTU of the link as an approximation of this value. ISSTHRESH and IW are the initial values of ssthresh and cwnd, respectively. ISSTHRESH MAY be arbitrarily high. IW SHOULD be set to 4*MSS.

3.3. Tracking based on Acknowledgment Number

This algorithm (Algorithm ACK) maintains 3 states: SLOW-START, CONGESTION-AVOIDANCE, and FAST-RECOVERY, which are equivalent to the states in the TCP congestion control algorithms. It also maintains 2 variables: cwnd and ssthresh. Initially, the algorithm starts in SLOW-START state with ssthresh set to ISSTHRESH and cwnd set to IW. Upon receiving a segment, if it is the first segment (which is not necessarily the SYN segment), the algorithm sets the current maximum Acknowledgment Number (CMAXACK) to this segment's acknowledgment number; otherwise, the algorithm takes a procedure according to the current state.

- SLOW-START
  * If the new Acknowledgment Number (NEWACK) is larger than CMAXACK, increase cwnd by the distance between NEWACK and CMAXACK, set the duplicate ACK counter (NDUPACKS) to 0, and update CMAXACK accordingly; if cwnd is larger than ssthresh, the algorithm transits to CONGESTION-AVOIDANCE state;
  * If NEWACK is equal to CMAXACK, increase NDUPACKS by 1. If NDUPACKS is greater than 3, halve cwnd and set ssthresh to MAX(cwnd, 2*MSS). Consequently, the algorithm transits into FAST-RECOVERY state;
  * Otherwise, set NDUPACKS to 0.

- CONGESTION-AVOIDANCE
  * If NEWACK is larger than CMAXACK, increase cwnd by ((NEWACK - CMAXACK)*MSS)/cwnd, set NDUPACKS to 0, and update CMAXACK accordingly;
  * If NEWACK is equal to CMAXACK, increase NDUPACKS by 1. If NDUPACKS is greater than 3, halve cwnd and set ssthresh to MAX(cwnd, 2*MSS).
After that, the algorithm transits into FAST-RECOVERY state;
  * Otherwise, set NDUPACKS to 0.

- FAST-RECOVERY
  * If NEWACK is larger than CMAXACK, set NDUPACKS to 0. Consequently, the algorithm transits into CONGESTION-AVOIDANCE state;
  * Otherwise, ignore it.

In this algorithm, MSS denotes the estimated maximum segment size. An implementation can use the MTU of the link as an approximation of this value. ISSTHRESH and IW are the initial values of ssthresh and cwnd, respectively. ISSTHRESH MAY be arbitrarily high. IW SHOULD be set to 4*MSS.

3.4. Further discussion on congestion window tracking

In some cases, it is inevitable for the tracking algorithms to overestimate the TCP congestion window. It SHOULD also be avoided that the estimated congestion window gets significantly smaller than the actual one. For all of these cases, ROHC-TCP simply applies two boundaries on the estimated congestion window size. One of the two boundaries is the MIN boundary, the minimum congestion window size, whose value is determined according to [INITWIN]; the other is the MAX boundary, the maximum congestion window size. There are two possible approaches to setting the MAX boundary. One is to select a commonly used maximum TCP socket buffer size. The other is to use the simple equation

   W = sqrt(8 / (3 * l)),

where W is the maximum window size and l is the typical packet loss rate.

If the ECN mechanism is deployed, according to [RFC2481] and [ECN], the TCP sender will set the CWR (Congestion Window Reduced) flag in the TCP header of the first new data packet sent after the window reduction, and the TCP receiver will reset the ECN-Echo flag back to 0 after receiving a packet with the CWR flag set. Thus, the CWR flag and the ECN-Echo flag's transition from 1 to 0 can be used as another indication of congestion, combined with the other mechanisms mentioned in the tracking algorithm.

4. Optional enhancements in Unidirectional mode

Several implementation enhancements are introduced in this section to improve the compression efficiency in unidirectional mode by utilizing the TCP congestion window estimated with the above mechanism.

4.1. Optional operations for upwards transition

4.1.1. Optional transition for short-lived TCP transfers

This approach is introduced in ENH-ROHC-TCP to compress short-lived TCP transfers more efficiently. The key message of this approach is that the compressor should try to speed up the initialization process. This approach can be applied if the compressor is able to see the SYN packet. The compressor enters the IR state when it receives the packet with the SYN bit set and sends an IR packet. When it receives the first data packet of the transfer, it should transit to the FO state, because that means the decompressor has received the packet with the SYN bit set and established the context successfully at its side. Using this mechanism can significantly reduce the number of context initiation headers.

4.1.2. Optional operation in IR state

In the IR state, the compressor can send a full header (or partial full header) periodically with an exponentially increasing period, the so-called compression slow-start [RFC2507]. The main idea of this optional operation is controlling the size of the context window by dynamic TCP congestion window estimation. After a packet has been sent out, the compressor invokes Algorithm SEQ or Algorithm ACK, introduced in the section above, to track the congestion windows of the two one-way flows with different directions in a TCP connection. Suppose that the estimated congestion windows are cwnd_seq and cwnd_ack, while the estimated slow start thresholds are ssthresh_seq and ssthresh_ack, respectively.
Let $$W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack}) = K \cdot \max(\max(cwnd_{seq}, 2ssthresh_{seq}), \max(cwnd_{ack}, 2ssthresh_{ack})).$$ If the value of context window, $r$, is larger than $W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack})$, $r$ can be reduced. $K$ is an implementation parameter that will be further discussed in Section 4.1.4. If the number of the compress packets been send gets larger than $W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack})$, the compressor transits to CO state. If the compressor transits to the IR state from the higher states, the compressor will re-initialize the algorithm for tracking TCP congestion window. 4.1.3. Optional operation in CO state After a packet has been sent out, the compressor invokes the Algorithm SEQ or Algorithm ACK to track the congestion windows of the two one-way traffics with different directions in a TCP connection. Suppose that the estimated congestion windows are $cwnd_{seq}$ and $cwnd_{ack}$, while the estimated slow start thresholds are $ssthresh_{seq}$ and $ssthresh_{ack}$, respectively. Let $$W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack}) = K \cdot \max(\max(cwnd_{seq}, 2ssthresh_{seq}), \max(cwnd_{ack}, 2ssthresh_{ack})).$$ If the value of context window, $r$, is larger than $W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack})$, $r$ can be reduced. $K$ is an implementation parameter, which can be set to the same value as in the IR state. If the compressor finds that the payload size of consecutive packets is a constant value and one of such packets has been removed from the context window, which means the decompressor has known the exact value of the constant size, it may use fixed-payload encoding scheme to improve the compression efficiency. 4.1.4. 
Determining the value of $K$ in congestion window estimation As mentioned above, the context window SHOULD be shrunk when its size gets larger than $K \cdot \max(\max(cwnd_{seq}, 2 \cdot ssthresh_{seq}), \max(cwnd_{ack}, 2 \cdot ssthresh_{ack}))$. Since the Fast Recovery algorithm was introduced into TCP, several TCP variants have been proposed that differ only in their Fast Recovery behavior. Some of them need several RTTs to recover from multiple losses in a window; ideally, they may send $L \times W/2$ packets in this stage, where $L$ is the number of lost packets and $W$ is the size of the congestion window in which the error occurs. Some recent work [REQTCP] on improving TCP performance allows transmitting packets even while receiving duplicate acknowledgments. Because of these concerns, $K$ should be kept large enough to avoid shrinking the context window without sufficient confidence that the corresponding packets have been received successfully. Considering bandwidth-limited environments and the limited receiver buffer, a practical range for $K$ is around 1-2. Simulation results show that $K=1$ is good enough for most cases. 4.2. Optional operation for downwards transition The compressor must immediately transition back to the IR state when the header to be compressed falls behind the context window, i.e., it is older than all the packets in the context. If the context window contains only one packet, which means there is a long jump in the packet sequence number or acknowledgment number, the compressor transitions to the IR state. Here, a segment causes a long jump when the distance between its sequence number (or acknowledgment number) and $CMAXSN$ (or $CMAXACK$) is larger than the estimated congestion window size, i.e., $$|\text{sequence number (acknowledgment number)} - CMAXSN\ (CMAXACK)| > \text{estimated congestion window size}.$$ 4.3.
Other possible optimizations There are two distinct deployments: one where the forward and reverse paths share a link, and one where they do not. In the former case, a compressor and a decompressor may be co-located, as illustrated in Figure 4.3. It may then be possible for the compressor and decompressor at each end of the link to exchange information. In such a scenario, ENH-ROHC-TCP can further optimize the context size for windowed LSB encoding. In Figure 4.3, C-SN denotes the compressor for the sequence-number traffic, deployed in Host A, and D-SN denotes the corresponding decompressor, deployed in Host B. Similarly, C-ACK denotes the compressor for the acknowledgment-number traffic, deployed in Host B, and D-ACK denotes the corresponding decompressor, deployed in Host A. [Figure 4.3. Illustration of the possible optimization in U-mode.] It is known that acknowledgment numbers (from C-ACK to D-ACK) are generally taken from the sequence numbers (from C-SN to D-SN) of the opposite direction. Since an acknowledgment cannot be generated for a packet that has not yet crossed the link, this offers an efficient way of estimating the TCP congestion window. Denote the sequence number currently being sent out by C-SN as SN-New, and the sequence number most recently acknowledged through D-ACK as SN-Old; then the TCP congestion window ($cwnd_{bidir}$) can be expressed as $$cwnd_{bidir} = \text{SN-New} - \text{SN-Old}.$$ With this additional congestion window estimate, the control parameter $W$ is re-calculated as $$W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack}) = \min(W(cwnd_{seq}, ssthresh_{seq}, cwnd_{ack}, ssthresh_{ack}), cwnd_{bidir}).$$ 5. Security considerations ENH-ROHC-TCP conforms to the ROHC framework; consequently, the security considerations for this enhancement match those of [ROHC-TCP]. 6.
IANA Considerations This document does not require any IANA involvement. 7. Acknowledgements Thanks to Richard Price, Mark A. West, Mikael Degermark, and Carsten Bormann for valuable input. 8. Authors' addresses Qian Zhang Tel: +86 10 62617711-3135 Email: qianz@microsoft.com HongBin Liao Tel: +86 10 62617711-3156 Email: i-hbliao@microsoft.com Wenwu Zhu Tel: +86 10 62617711-5405 Email: wwzhu@microsoft.com Ya-Qin Zhang Tel: +86 10 62617711 Email: yzhang@microsoft.com Microsoft Research Asia Beijing Sigma Center No.49, Zhichun Road, Haidian District Beijing 100080, P.R.C. 9. Intellectual Property Rights Considerations The IETF has been notified of intellectual property rights claimed in regard to some or all of the specification contained in this document. For more information, consult the online list of claimed rights. 10. References Allman, M., Floyd, S., and C. Partridge, "Increasing TCP's Initial Window", Internet Draft (work in progress), May 2001. <draft-ietf-tsvwg-initwin-00.txt> Jonsson, L-E., "Requirements for ROHC IP/TCP header compression", Internet Draft (work in progress), November 2002. <draft-ietf-rohc-tcp-requirements-05.txt>
Java Strings String • instances of `java.lang.String` • the compiler treats them *almost* like primitive types – String constants = instances of the String class • immutable!!! – for changes – classes `StringBuffer`, `StringBuilder` • operator + – String concatenation – if at least one operand in an expression is a String -> everything is converted to Strings and concatenated • method `toString()` – defined in the class Object – commonly overridden – creates a new String java.lang.String • constructors String(); String(char[] value); String(byte[] bytes); String(byte[] bytes, String charsetName); String(String value); String(StringBuffer value); String(StringBuilder value); java.lang.String • methods - int length(); - char charAt(int index); • throws IndexOutOfBoundsException - boolean equals(Object o); • compares String contents • == compares references String a = new String("hello"); String b = new String("hello"); System.out.println(a==b); // false System.out.println(a.equals(b)); // true java.lang.String • methods - int compareTo(String s); • lexicographical comparison - int compareToIgnoreCase(String s); - int indexOf(char c); - int indexOf(String s); • returns -1 if there is no such char or substring - String substring(int beginIndex); - String substring(int beginIndex, int endIndex); - String replaceFirst(String regexp, String repl); - String replaceAll(String regexp, String repl); Strings • methods (cont.) - String join(CharSequence delimiter, CharSequence... elements); • since Java 8 • methods can also be called on String constants ```java String s; ... if ("ahoj".equals(s)) { ... ``` Java Wrapper types Wrappers - immutable - Integer - constructors – deprecated since Java 9 - `Integer(int value)` - `Integer(String s)` - methods - `int intValue()` - `static Integer valueOf(int i)` - `static int parseInt(String s)` - ...
- other wrapper types similarly More about methods Local variables • definition anywhere in the body • visible in a block – see the first lecture • no automatic initialization • can be defined as final – constants – no other modifier can be used • effectively final – defined without final, but the value is never changed after it is initialized Method overloading • several methods with the same name but different parameters – different number and/or type ```java public void draw(String s) { ... } public void draw(int i) { ... } public void draw(int i, double f) { ... } ``` • cannot overload just by a different return type Recursive calls • recursion – a method calls itself ```java public static long factorial(int n) { if (n == 1) return 1; return n * factorial(n-1); } ``` • be aware of termination • non-terminating recursion -> stack overflow – the stack size can be set Java Exceptions Exceptions - error reporting and handling - an exception represents an error state of a program - exception = an instance of \texttt{java.lang.Throwable} - two subclasses – \texttt{java.lang.Error} and \texttt{java.lang.Exception} - specific exceptions – children of the above two classes - \texttt{java.lang.Error} - "unrecoverable" errors - should not be caught - e.g. \texttt{OutOfMemoryError} - \texttt{java.lang.Exception} - recoverable errors - should (or must) be caught - e.g.
\texttt{ArrayIndexOutOfBoundsException} Exception handling • **statement** `try/catch/finally` ```java try { // a block of code where an exception can happen and we want to handle it } catch (Exception1 e) { // handling of exceptions of the Exception1 type and its subtypes } catch (Exception2 e) { // handling of exceptions of the Exception2 type and its subtypes } finally { // always executes } ``` Exception handling - if the exception is not caught in the block where it occurs, it propagates to the enclosing block - if the exception is not caught in a method, it propagates to the calling method - if the exception reaches `main()` and is not caught, it terminates the virtual machine - information about the exception is printed try/catch/finally - either catch or finally can be omitted - but not both Extended try (since Java 7) • interface AutoCloseable and extended try – example: ```java class Foo implements AutoCloseable { ... public void close() { ... } } try ( Foo f1 = new Foo(); Foo f2 = new Foo() ) { ... } catch (...) { ... } finally { ... } ``` – at the end of try (reached normally or via an exception), close() is always called on all the objects in the try declaration • called in the reverse order of declaration Extended try - both catch and finally can be omitted together ```java try (Resource r = new Resource()) { ... } ``` - since Java 9, (effectively) final variables can be used in extended try ```java final Resource resource1 = new Resource("res1"); Resource resource2 = new Resource("res2"); try (resource1; resource2) { ... } ``` Multi-catch (since Java 7) class Exception1 extends Exception {} class Exception2 extends Exception {} try { boolean test = true; if (test) { throw new Exception1(); } else { throw new Exception2(); } } catch (Exception1 | Exception2 e) { ... } Exception declaration • a method that can throw an exception must either – catch the exception, or – declare the exception via `throws` ```java public void openFile() throws IOException { ...
} ``` • it is not necessary to declare the following exceptions – children of `java.lang.Error` – children of `java.lang.RuntimeException` • RuntimeException extends `java.lang.Exception` • e.g. `NullPointerException`, `ArrayIndexOutOfBoundsException` Throwing exceptions - **statement** `throw` - throws (generates) an exception - "argument" – a reference to `Throwable` ```java throw new MojeVyjimka(); ``` - existing exceptions can be thrown but, commonly, one's own are used - exceptions can be "re-thrown" ```java try { ... } catch (Exception e) { ... throw e; } ``` Re-throwing (in Java 7) class Exception1 extends Exception {} class Exception2 extends Exception {} public static void main(String[] args) throws Exception1, Exception2 { try { boolean test = true; if (test) { throw new Exception1(); } else { throw new Exception2(); } } catch (Exception e) { throw e; } } java.lang.Throwable - has a private field of type String - contains a detailed description of the exception - accessor: method String getMessage() - constructors - Throwable() - Throwable(String mesg) - Throwable(String mesg, Throwable cause) // since 1.4 - Throwable(Throwable cause) // since 1.4 - methods - void printStackTrace() public class MyException extends Exception { public MyException() { super(); } public MyException(String s) { super(s); } public MyException(String s, Throwable t) { super(s, t); } public MyException(Throwable t) { super(t); } } Chains of exceptions ... try { ..... ... } catch (Exception1 e) { ... throw new Exception2(e); } ... • throwing an exception as a reaction to another exception – a common pattern • reacting to a "system" exception with one's "own" Suppressed exceptions - in several cases one exception can suppress another - it is not chaining of exceptions!
- typically it can happen - if an exception occurs in the `finally` block - in the extended `try` block (Java 7) - `Throwable[] getSuppressed()` - method in `Throwable` - returns an array of suppressed exceptions Inner classes Inner classes • defined in the body of another class ```java public class MyClass { class InnerClass { int i = 0; public int value() { return i; } } public void add() { InnerClass a = new InnerClass(); } } ``` Inner classes • a method of the outer class can return a reference to an instance of the inner class ```java public class MyClass { class InnerClass { int i = 0; public int value() { return i; } } public InnerClass add() { return new InnerClass(); } public static void main(String[] args) { MyClass p = new MyClass(); MyClass.InnerClass a = p.add(); } } ``` Hiding the inner class - an inner class can be `private` or `protected` - access to it via an interface ```java public interface MyIface { int value(); } public class MyClass { private class InnerClass implements MyIface { private int i = 0; public int value() {return i;} } public MyIface add() {return new InnerClass();} } ...
public static void main(String[] args) { MyClass p = new MyClass(); MyIface a = p.add(); // error - MyClass.InnerClass a = p.add(); } Inner classes in methods - an inner class can be defined in a method or just a block of code - visible only in the method (block) ```java public class MyClass { public MyIface add() { class InnerClass implements MyIface { private int i = 0; public int value() {return i;} } return new InnerClass(); } public static void main(String[] args) { MyClass p = new MyClass(); MyIface a = p.add(); // error - MyClass.InnerClass a = p.add(); } } ``` Anonymous inner classes ```java public class MyClass { public MyIface add() { return new MyIface() { private int i = 0; public int value() {return i;} }; } public static void main(String[] args) { MyClass p = new MyClass(); MyIface a = p.add(); } } ``` Anonymous inner classes ```java public class Wrap { private int v; public Wrap(int value) { v = value; } public int value() { return v; } } public class MyClass { public Wrap wrap(int v) { return new Wrap(v) { public int value() { return super.value() * 10; } }; } public static void main(String[] args) { MyClass p = new MyClass(); Wrap a = p.wrap(5); } } ``` Anon. inner classes: initialization - elements from outside an anon. inner class used inside it must be final - without final – a compile-time error - since Java 8 - "effectively" final is enough - i.e. declared without the final modifier, but with no changes to the particular element ```java public class MyClass { public MyIface add(final int val) { return new MyIface() { private int i = val; public int value() {return i;} }; } } ``` - till Java 7, final is necessary here - since Java 8, final can be omitted - as there are no changes to val Anon. inner classes: initialization - anon.
inner classes cannot have a constructor - because they are anonymous - an object initializer is used instead ```java public class MyClass { public MyIface add(final int val) { return new MyIface() { private int i; { if (val < 0) i = 0; else i = val; } public int value() {return i;} }; } } ``` Relation of inner and outer class - the instance of an inner class can access all elements of the instance of the outer class ```java interface Iterator { boolean hasNext(); Object next(); } public class Array { private Object[] o; private int next = 0; public Array(int size) { o = new Object[size]; } public void add(Object x) { if (next < o.length) { o[next] = x; next++; } } } // cont.... ``` Relation of inner and outer class ```java // cont.... private class AIterator implements Iterator { int i = 0; public boolean hasNext() { return i < o.length; } public Object next() { if (i < o.length) return o[i++]; else throw new NoNextElement(); } } public Iterator getIterator() { return new AIterator(); } ``` Relation of inner and outer class - a reference to the instance of the outer class - OuterClassName.this - previous example – classes Array and AIterator - the reference to the instance of Array from Array.AIterator – Array.this Relation of inner and outer class • creation of an instance of an inner class outside of its outer class ```java public class MyClass { class InnerClass { } public static void main(String[] args) { MyClass p = new MyClass(); MyClass.InnerClass i = p.new InnerClass(); } } ``` • an instance of an inner class cannot be created without an instance of its outer class - instances of an inner class always hold a (hidden) reference to an instance of the outer class Inner classes in inner classes - from an inner class, an outer class at any level of nesting can be accessed ```java class A { private void f() {} class B { private void g() {} class C { void h() { g(); f(); } } } } public class X { public static void main(String[] args) { A a = new A(); A.B b = a.new B(); A.B.C c = b.new C(); c.h(); }
} ``` Inheriting from inner classes • a reference to an instance of the outer class has to be explicitly passed ```java class WithInner { class Inner {} } class InheritInner extends WithInner.Inner { InheritInner(WithInner wi) { wi.super(); } // InheritInner() {} // compile-time error public static void main(String[] argv) { WithInner wi = new WithInner(); InheritInner ii = new InheritInner(wi); } } ``` Nested classes - defined with the keyword `static` - do not have a reference to an instance of the outer class - can have static elements - inner classes cannot have static elements - do not need an instance of the outer class - they do not hold the reference to it - in fact, they are regular classes just placed in the namespace of the outer class ```java public class MyClass { public static class NestedClass { } public static void main(String[] args) { MyClass.NestedClass nc = new MyClass.NestedClass(); } } ``` Nested classes - can be defined in an interface - inner classes cannot be ```java interface MyInterface { static class Nested { int a, b; public Nested() {} void m() {} } } ``` Inner classes and .class files - inner (or nested) class – own .class file - OuterName$InnerName.class - MyClass$InnerClass.class - anonymous inner classes - OuterName$SequentialNumber.class - MyClass$1.class - a nested class can have the main method - launching: java OuterName$NestedName Reasons for using inner classes - hiding an implementation - access to all elements of the outer class - "callbacks" - ... Source files Unicode • programs ~ Unicode – comments, identifiers, char and string constants – the rest is in ASCII (<128) • or Unicode escape sequences < 128 • Unicode escape sequences – \uxxxx – \u0041 .... A • the expanded sequence is not processed again – \u005cu005a results in six chars • \ u 0 0 5 a 1. translation of unicode escape sequences (over all of the source code) into a sequence of unicode chars 2.
the sequence from (1) is translated into a sequence of chars and line terminators 3. the sequence from (2) is translated into a sequence of input tokens (without white-spaces and comments) - line terminators - CR LF - CR - LF public class Test { public static void main(String[] argv) { int i = 1; i += 1; // is the same as \u000A i = i + 1; System.out.println(i); } } - The program prints out: a) 1 b) 2 c) 3 d) cannot be compiled e) a runtime exception Encoding - argument of javac `-encoding` - encoding of source files - without it – the platform default encoding Literals - integer literals - decimal ... 0 1 23 -3 - hexadecimal ... 0xa 0xA 0x10 - octal ... 03 010 0777 - binary ... 0b101 0B1001 - since Java 7 - by default of the int type - long ... 1L 33l 077L 0x33L 0b10L - floating-point literals - 0.0 2.34 1. .4 1e4 3.2e-4 - by default double - float ... 2.34f 1.F .4f 1e4F 3.2e-4f - boolean literals - true, false • underscores in numerical literals – since Java 7 – for better readability 1234_5678_9012_3456L 999_99_9999L 3.14_15F 0xFF_EC_DE_5E 0xCAFE_BABE 0x7fff_ffff_ffff_ffffL 0b0010_0101 0b11010010_01101001_10010100_10010010 Literals • char literals - 'a' '%' '\\' ' ' '\u0045' '\123' - escape sequences \b \u0008 back space \t \u0009 tab \n \u000A line feed \f \u000C form feed \r \u000D carriage return \" \u0022 double quote \' \u0027 single quote \\ \u005c backslash Literals - String literals - "" "\"" "this is a String" - null literal
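The literal rules above can be exercised directly. The following self-contained example (illustrative, not from the slides) checks underscore separators, binary and hexadecimal literals, and a Unicode char escape:

```java
public class LiteralsDemo {
    public static void main(String[] args) {
        // Underscores (Java 7+) are purely visual separators.
        long card = 1234_5678_9012_3456L;
        // Binary (0b) and hexadecimal (0x) literals denote ints by default.
        int b = 0b101;          // 5
        int h = 0xFF;           // 255
        // The escape '\u0041' is translated to 'A' before tokenization.
        char a = '\u0041';
        System.out.println(card == 1234567890123456L); // true
        System.out.println(b + h);                     // 260
        System.out.println(a);                         // A
    }
}
```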
AUTOMATIC CORRECTION OF SECURITY DOWNGRADERS (71) Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (US) (72) Inventors: Salvatore A. Guarnieri, New York, NY (US); Marco Pistoia, Amawalk, NY (US); Omer Tripp, Har-Adar (IL) (73) Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (US) (56) References Cited U.S. PATENT DOCUMENTS 2012/0023486 A1 1/2012 Haviv et al. 2012/0324582 A1 12/2012 Park * cited by examiner Primary Examiner: Amare F. Tabor Attorney, Agent, or Firm: Tutunjian & Bitetto, P.C.; Daniel P. Morris Field of Classification Search USPC: 726/25, 22; 713/188 See application file for complete search history. ABSTRACT Methods and systems for automatic correction of security downgraders. For one or more flows having one or more candidate downgraders, it is determined whether each candidate downgrader protects against all vulnerabilities associated with the candidate downgrader's respective flow. Candidate downgraders that do not protect against all of the associated vulnerabilities are transformed, such that the transformed downgraders do protect against all of the associated vulnerabilities. 10 Claims, 3 Drawing Sheets FIG. 1: perform security analysis, disregarding downgraders (102); for each witness flow, locate candidate downgraders (104); for each candidate downgrader, check whether the downgrader protects against all attacks (106); protected? (108): if yes, remove the flow from the report (110); if no, transform the downgrader (112); output all flows where no downgrader was found (114); output all downgrader transformations (116). FIG. 2: downgrader correction system 200, comprising processor 202, memory 204, security analysis module 206, enhancer module 208, and report module 210. FIG. 3: user input 302, incomplete downgrader 304, database 306. The variant with an additional downgrader 402 (user input 302, incomplete downgrader 304, additional downgrader 402, database 306) appears in FIG.
4 AUTOMATIC CORRECTION OF SECURITY DOWNGRADERS RELATED APPLICATION INFORMATION This application is a Continuation application of co-pending U.S. patent application Ser. No. 13/768,645 filed on Feb. 15, 2013, incorporated herein by reference in its entirety. BACKGROUND 1. Technical Field The present invention relates to security analysis and, more particularly, to automatic correction and enhancement of user-implemented security downgraders. 2. Description of the Related Art Static security analysis typically takes the form of taint analysis, where the analysis is parameterized by a set of security rules, each rule being a triple <Src, San, Snk>, where Src denotes source statements that read untrusted user inputs, San denotes downgrader statements that endorse untrusted data by validating and/or sanitizing it, and Snk denotes sink statements which perform security-sensitive operations. Given a security rule R, any flow from a source in Src to a sink in Snk that doesn’t pass through a downgrader from San comprises a potential vulnerability. This reduces security analysis to a graph reachability problem. Traditionally, the goal of security analysis has been to detect potential vulnerabilities in software applications (mostly web applications) and to inform the user of these problems. The user would then apply a fix, typically by introducing a downgrader (such as a sanitizer or validator function) into the flow of the computation. For example, if an analysis tool were to discover that an application is able to read user-provided data (e.g., an HTTP parameter) and then use this data in a security-critical operation (e.g., writing it to a database or to a log file), then one of the flows extending between these two endpoints would be reported to the user. Such a flow is a security risk, as it potentially allows users to corrupt or subvert the security-critical operation. 
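The reduction to graph reachability described above can be made concrete with a small sketch. This is an illustrative reading of the rule-triple model, not code from the patent; the rule contents and the flow graph are invented examples.

```python
# Sketch of taint analysis as graph reachability: a security rule is a
# triple (sources, downgraders, sinks), and any source-to-sink path that
# avoids every downgrader is a potential vulnerability.
from collections import namedtuple

SecurityRule = namedtuple("SecurityRule", ["sources", "downgraders", "sinks"])

def vulnerable_paths(graph, rule):
    """Return source-to-sink paths that never pass through a downgrader.

    `graph` maps each statement to the statements its data flows into.
    """
    found = []

    def walk(node, path):
        if node in rule.downgraders:
            return  # flow is endorsed here; this path is safe
        path = path + [node]
        if node in rule.sinks:
            found.append(path)
            return
        for succ in graph.get(node, []):
            if succ not in path:  # avoid cycles
                walk(succ, path)

    for src in rule.sources:
        walk(src, [])
    return found
```

For example, with a rule whose source is `readParam`, downgrader is `sanitize`, and sink is `writeDb`, a flow that bypasses the sanitizer (say via a `log` statement) is reported while the sanitized flow is not.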
To remedy the problem, the user would install one or more security checks covering all flows between the endpoints to ensure that data reaching the security-sensitive operation is benign by, e.g., transforming it through sanitization, or to reject the data through validation. This solution is limited, however, in that the tool assumes, rather than verifies, that the security checks inserted by the user are correct. Implementing and using downgraders correctly is highly nontrivial, and users are prone to making errors. In particular, there are many end-cases to account for, the correctness of a check often depends on the deployment configuration of the software system (e.g., the type of backend database), and the context where the vulnerability occurs also partially determines what needs to be checked. A user may err either in tool configuration, e.g., by defining incorrect downgraders, or in the remediation of reported vulnerabilities. SUMMARY A method for automatic correction of security downgraders includes determining, for one or more flows having one or more candidate downgraders, whether each candidate downgrader protects against all vulnerabilities associated with said candidate downgrader’s respective flow. Downgraders that do not protect against all of the associated vulnerabilities are transformed, such that the transformed downgraders do protect against all of the associated vulnerabilities. A system for automatic correction of security downgraders includes an enhancer module with a processor configured to determine, for one or more flows having one or more candidate downgraders, whether each of the candidate downgraders protects against all vulnerabilities associated with said candidate downgrader’s respective flow, and to transform candidate downgraders that do not protect against all of the associated vulnerabilities such that the transformed downgraders do protect against all of the associated vulnerabilities.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. BRIEF DESCRIPTION OF DRAWINGS The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein: FIG. 1 is a block/flow diagram of a method for enhancing a downgrader in accordance with an embodiment of the present invention; FIG. 2 is a diagram of a downgrader correction system in accordance with an embodiment of the present invention; FIG. 3 is an exemplary vulnerable data flow prior to correction/enhancement in accordance with an embodiment of the present invention; and FIG. 4 is an exemplary data flow having an enhanced downgrader in accordance with an embodiment of the present invention. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Embodiments of the present invention provide for the remediation of security issues in software systems by first detecting an existing downgrader along a path between a source and a sink, and second attempting to fix or enhance that downgrader. Developers often apply checks to user input to verify its validity but, as noted above, often do so incorrectly or incompletely. However, there is often at least some existing check along a path that is available to be “boosted.” Developers further prefer to make organic changes, such that modifying existing checks is preferable to introducing new checks. Furthermore, introducing new downgrader code might cause problems or redundancy errors if overlapping code already exists along the flow. For example, repeating a downgrader that performs an encoding would result in a double-encoding, which could corrupt the input. As such, embodiments of the present invention use instances of existing downgrader code and enhance them.
Referring now to the drawings in which like numerals represent the same or similar elements and initially to FIG. 1, a method for automatic correction of security downgraders is shown. Block 102 performs a security analysis, disregarding any pre-existing, user-provided downgraders in the tool configuration. For each witness flow W of type T, block 104 finds candidate downgraders along the flow W. A witness flow W is a representative vulnerable flow (e.g., a shortest flow). A flow “type” refers to a class of potential security vulnerabilities. For example, some flows will be vulnerable to cross-site scripting (XSS) attacks, others will be vulnerable to structured query language (SQL) injection (SQLI) attacks, etc. A single flow may be vulnerable to multiple types of attack and so may have multiple types. The following shows an example of vulnerable flows: - Vulnerable flow 1 (V1): Src → Snk1 (type XSS) - Vulnerable flow 2 (V2): Src → Snk2 (type SQLI) This shows two vulnerable flows. The first is from the source to the first sink, and is of type XSS, and the second is to the second sink, and is of type SQLI. In both cases, untrusted information coming from the user (the source) flows into a security-sensitive operation (the sink), without first being sanitized/validated. This makes it possible for a user to provide an input to either of the sinks that may disrupt functionality or lead to an elevation of the user’s rights in the system. The method then proceeds as follows: 1. For each vulnerable flow W, the method identifies candidate downgraders that can be applied to W. 2. Each candidate downgrader is evaluated to determine if it protects against all vulnerabilities associated with W. 3. If a candidate downgrader does not protect against all vulnerabilities, it is transformed to ensure that it does. 4. The transformed downgrader is then applied to W. This process is repeated for all vulnerable flows, resulting in a more secure system with fewer vulnerabilities.
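The per-flow loop described above can be sketched in a few lines. This is a minimal, hedged illustration of the FIG. 1 structure; every function and field name below is an assumption for illustration, not an API from the patent.

```python
from collections import namedtuple

# A witness flow carries a name and the set of attack types it is
# vulnerable to (e.g., {"XSS", "SQLI"}). Illustrative structure only.
Flow = namedtuple("Flow", ["name", "types"])

def correct_downgraders(flows, find_candidates, protects, transform):
    """Sketch of the FIG. 1 loop over witness flows.

    `find_candidates(flow)` locates candidate downgraders (block 104),
    `protects(downgrader, types)` checks them (blocks 106/108), and
    `transform(downgrader, types)` boosts incomplete ones (block 112).
    """
    unprotected = []       # flows with no candidate downgrader (block 114)
    transformations = []   # record of every transformation made (block 116)
    for flow in flows:
        candidates = find_candidates(flow)
        if not candidates:
            unprotected.append(flow)
            continue
        for downgrader in candidates:
            if not protects(downgrader, flow.types):
                fixed = transform(downgrader, flow.types)
                transformations.append((flow, downgrader, fixed))
        # block 110: fully protected flows are simply dropped from the report
    return unprotected, transformations
```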
Detecting candidate downgraders in block 104 can be performed in several ways. One way is to apply the analysis of a security tool where syntactic properties of called methods are used to highlight candidate downgraders. Another heuristic is to utilize the ignored parts of the user configuration, which indicate the methods that the user considers to act as downgraders. Additional techniques for finding downgraders may include searching for data-flow bottlenecks and scanning user configuration files. For each candidate downgrader found, block 106 checks whether the downgrader protects all attack types corresponding to the flows that the downgrader participates in. This may be accomplished by providing a set of test inputs to the candidate downgrader. Block 106 generates a list of vulnerabilities that the candidate downgrader fails to protect against. Block 108 then considers whether each of the checked candidate downgraders fully protects its flow. If block 108 determines that a given downgrader fully protects a flow W (i.e., if block 106 determines that the downgrader provides a correctly sanitized/validated output for every test input), the flow W is removed from the list at block 110. Otherwise, block 112 transforms the downgrader to make it sufficient to prevent attacks of the relevant types. One possibility for augmenting the logic of an incomplete downgrader is to equip the analysis tool with a set of security checks that, together, form a correct downgrader. When an incomplete downgrader is detected, the analysis tool attempts to add to it individual missing checks. After each conjunction, the analysis tool can determine whether the result is a correct downgrader. If not, then the process continues and additional checks are added. This process is guaranteed to terminate with a correct downgrader, because the checks are designed such that the conjunction of all the individual checks is a correct downgrader.
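The test-input check and the conjoin-until-correct loop can be pictured together. The following is a toy sketch under stated assumptions: the attack payloads, danger predicates, and check functions are invented stand-ins, and the library of checks is assumed to be complete in conjunction, as the text requires for termination.

```python
def unprotected_types(downgrader, attacks):
    """Probe a candidate downgrader with test inputs (block 106).

    `attacks` maps an attack type to (payloads, is_dangerous), where
    is_dangerous decides whether a downgraded payload is still harmful.
    Returns the attack types whose payloads survive the downgrader.
    """
    failing = set()
    for attack_type, (payloads, is_dangerous) in attacks.items():
        if any(is_dangerous(downgrader(p)) for p in payloads):
            failing.add(attack_type)
    return failing

def boost(downgrader, checks, attacks):
    """Conjoin library checks onto an incomplete downgrader (block 112)
    until no attack type survives. Terminates with a correct downgrader
    when the conjunction of all checks is itself correct, as assumed."""
    current = downgrader
    for check in checks:
        if not unprotected_types(current, attacks):
            break  # already complete; stop adding checks
        current = (lambda d, c: (lambda s: c(d(s))))(current, check)
    return current
```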
Adding checks to a downgrader may be performed directly, if access to the downgrader code is available. In some cases, however, security analysis may be performed on flows that use precompiled libraries or executables, where a downgrader may be opaque to the user. In such a case, a downgrader may be injected into the existing downgrader binary code. Alternatively, a downgrader may be enhanced by adding checks to the downgrader’s flow output, essentially concatenating the enhancing checks with the existing downgrader. Block 112, as described above, “transforms” a downgrader by supplementing it with additional validators and/or sanitizers. A given flow may be vulnerable to a wide variety of attack types, and each such attack type should be accounted for. In the example of a string-processing flow, where user inputs are passed to a security-critical resource, each potential sanitizer/validator may simply be concatenated, as each step will simply produce a sanitized/validated string for the next step. In the case of a validator, where an input that fails is simply rejected, concatenation of individual validators is intuitive regardless of flow type. Block 114 outputs to the user all of the flows where no candidate downgrader was found at all, allowing the user to institute an appropriate downgrader for the flow, while block 116 reports all of the downgrader transformations that were performed in block 112. In this way, the user is made aware of all substantive changes to the program, and is furthermore shown the places where the security of the program could be further improved. In an alternative embodiment, block 112 may introduce new downgraders in vulnerable flows that have no downgrader at all. In such an embodiment, block 116 also provides information regarding new downgraders that were added. As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product.
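The sanitizer/validator concatenation for a string-processing flow can be sketched directly: each sanitizer rewrites the string for the next step, while a validator either passes the string through unchanged or rejects it outright. All names below are illustrative assumptions, not API from the patent.

```python
class Rejected(Exception):
    """Raised by a validator when an input fails its check."""

def chain(*downgraders):
    """Compose sanitizers/validators left to right; each step hands a
    sanitized/validated string to the next, as described in the text."""
    def composed(value):
        for d in downgraders:
            value = d(value)
        return value
    return composed

def require_no_quotes(value):
    # Validator: rejects rather than rewrites. Toy check for illustration.
    if "'" in value:
        raise Rejected("quote character in input")
    return value

def escape_angle_brackets(value):
    # Sanitizer: rewrites the string. Toy encoding for illustration.
    return value.replace("<", "&lt;").replace(">", "&gt;")
```

Here `chain(require_no_quotes, escape_angle_brackets)` behaves like an existing downgrader supplemented by a concatenated check: a quoted input is rejected, and anything else is encoded before reaching the sink.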
Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. 
A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user’s computer, partly on the user’s computer, as a standalone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. 
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. It is to be appreciated that the use of any of the following “and/or”, “or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A only), or the selection of the second listed option (B only), or the selection of both options (A and B). 
As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A only), or the selection of the second listed option (B only), or the selection of the third listed option (C only), or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed. Referring now to FIG. 2, a downgrader correction system 200 is shown. The downgrader correction system 200 includes a processor 202 and memory 204. A security analysis module 206 uses processor 202 to perform a security analysis on a program stored in memory 204. An enhancer module 208 reviews the analysis provided by security analysis module 206 to locate candidate downgraders in the program's flows and determines whether any of those downgraders fail to provide for all of the potential vulnerabilities in the flows. For each downgrader that provides insufficient protection, the enhancer module 208 adds additional checks until the downgrader is able to fully protect the flow. The report module 210 then generates a report to the user that includes, e.g., a description of all vulnerable flows that lack a downgrader and a description of all enhancements made to the existing downgraders. Referring now to FIG. 3, an exemplary data flow is shown. At block 302 a user provides some input. For example, the user desires to perform a search and enters the search terms as a string. Block 304 receives the input and performs some elementary validation. For example, block 304 may check to determine whether the string is a null string and whether it has the correct format for a search query. 
If the input fails these tests, then block 304 may reject the query and provide an error message. If the downgrader concludes that the string meets its requirements, then the string is passed to database 306 and executed. However, in the present example, the downgrader 304 is incomplete and does not protect against potential attacks. As an example, consider an incomplete downgrader 304 that fails to sanitize user inputs to protect against SQL injection attacks. Such an attack allows the malicious user to provide direct commands to database 306, allowing the user to have access to sensitive information, such as credit cards and passwords. If the downgrader 304 does not provide, for example, filtering of escape characters or strong typing of the input 302, then there is nothing to prevent such attacks. Referring now to FIG. 4, the exemplary flow described above is shown again, after having had its downgrader 304 enhanced according to an embodiment of the present invention. Rather than replacing the incomplete downgrader 304, individual sanitization/validation downgraders 402 are placed in the data flow. For example, each additional downgrader 402 may check for a particular control or escape character or sequence which should be removed from the input. The additional downgraders 402 may simply be added into the flow after the incomplete downgrader 304, performing whatever additional processing is needed to fully protect the flow from any detected vulnerabilities. Any number of additional downgraders 402 may be added in this way. Having described preferred embodiments of a system and method for automatic correction of security downgraders (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. 
It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the invention as outlined by the appended claims. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims. What is claimed is: 1. A method for automatic correction of security downgraders, comprising: determining, for one or more flows at a computer having one or more candidate downgrader modules, whether each candidate downgrader module protects against all vulnerabilities associated with said candidate downgrader module’s respective flow, wherein the downgrader modules are sanitizer or validator functions; and transforming with a processor candidate downgrader modules that do not protect against all of the associated vulnerabilities, such that the transformed downgrader modules do protect against all of the associated vulnerabilities. 2. The method of claim 1, wherein transforming candidate downgrader modules comprises adding a validating or sanitizing step to the candidate downgrader modules that checks for a known vulnerability. 3. The method of claim 2, wherein transforming candidate downgrader modules comprises concatenating the added validating or sanitizing step with an existing candidate downgrader module in a respective flow. 4. The method of claim 2, wherein transforming candidate downgrader modules comprises injecting a validating or sanitizing step into an existing precompiled candidate downgrader module in a respective flow. 5. The method of claim 2, further comprising repeating said steps of determining and transforming until each candidate downgrader module has been enhanced to address all known vulnerabilities associated with the candidate downgrader module’s respective flow. 6.
The method of claim 1, further comprising generating a report that includes information regarding flows that have no downgrader module and a list of transformations made to the candidate downgrader modules. 7. The method of claim 1, wherein determining whether each of the candidate downgrader modules protects against all vulnerabilities comprises providing a set of test inputs to each of the candidate downgrader modules to determine whether said candidate downgrader modules correctly downgrade the input. 8. The method of claim 7, further comprising generating a set of test inputs for each vulnerable flow that includes at least one test input that exploits each vulnerability associated with the vulnerable flow. 9. The method of claim 1, further comprising adding a complete downgrader module to each flow that has no downgrader. 10. A non-transitory computer readable storage medium comprising a computer readable program for automatic correction of security downgrader modules, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: determining, for one or more flows at a computer having one or more candidate downgrader modules, whether each candidate downgrader module protects against all vulnerabilities associated with said candidate downgrader module’s respective flow, wherein the downgrader modules are sanitizer or validator functions; and transforming candidate downgrader modules that do not protect against all of the associated vulnerabilities, such that the transformed downgrader modules do protect against all of the associated vulnerabilities. * * * * *
Business process models and entity life cycles Giorgio Bruno Politecnico di Torino Corso Duca degli Abruzzi 24, Torino 10129 Italy giorgio.bruno@polito.it Abstract: Tasks and business entities are the major constituents of business processes but they are not always considered equally important. The activity-centric approach and the artifact-oriented one have radically different visions. The former focuses on the control flow, i.e., on the representation of the precedence constraints between tasks, and considers the dataflow an add-on. The latter emphasizes the states of the business entities and defines the transitions between states in a declarative way that makes it difficult to figure out what the control flow is. This paper presents the ELBA notation whose purpose is to integrate those different visions by leveraging the dataflow. The dataflow defines the input and output entities of the tasks in process models. Entities flowing through tasks change their states and then a process model results from the combination of the life cycles of the entities managed by the process. Process models are complemented by information models that show the attributes and relationships of the entity types handled by the processes. Life cycles are intertwined in process models but they can be separated by means of an extraction technique that is illustrated in this paper with the help of two examples. Keywords: business processes; control flow; dataflow; artifact; entity life cycle. DOI: 10.12821/ijisp070304 Manuscript received: 28 March 2019 Manuscript accepted: 16 May 2019 1. Introduction Business processes are made up of tasks that are usually meant to operate on persistent business entities, but, unfortunately, these two elements, tasks and entities, are not given the same level of importance in the current notations for business process modeling. 
In BPMN (Business Process Model and Notation) [1], business entities are denoted by variables in process instances and can optionally be shown in process models by means of graphical elements called data objects. BPMN is an activity-centric notation: the control flow organizes the execution of tasks in a predefined manner. Tasks are divided into automatic tasks and human ones. Human tasks are associated with roles and are performed by persons playing the roles required; however, they have little influence on the evolution of the process and are mainly involved in tasks that are not easy to automate. BPMN fits repetitive situations, i.e., routines, and provides an efficient way to handle them. While BPMN puts tasks before business entities, the opposite takes place with GSM (Guard-Stage-Milestone) [2]. GSM emphasizes that business entities evolve over time through life cycles made up of states (also called stages); tasks are included in stages. The overall process is a collection of interacting life cycles. Knowledge-intensive processes [3] focus on individual cases and rely on the expertise of participants to find the best way to handle them. Flexibility is the major issue and, then, a declarative approach to business process modeling is preferred to the traditional procedural one. In GSM, stages are activated and closed by means of ECA (Event Condition Action) rules [4] and this confers great flexibility to the approach. CMMN (Case Management Model and Notation) [5] draws on GSM and provides process participants with the flexibility of deciding the activities to be carried out; moreover, participants can assign tasks to each other. BPMN provides some flexibility with the ad-hoc construct: a comparison with CMMN can be found in Zensen and Küster [6]. What is missing in the above-mentioned approaches is the explicit representation of the dataflow, which defines the input and output entities of the tasks in process models. 
Entities flowing through tasks change their states: states indirectly represent the processing performed by tasks on entities. A process then combines the life cycles of the entities involved. This paper presents a notation, called ELBA, which addresses the above issues. Life cycles are intertwined in process models but they can be separated by means of an extraction technique. Since a life cycle emphasizes the evolution over time of an entity type, its model may be important for a comparison with industry standards or with the expectations of stakeholders.

The extraction of life cycles is challenging for two reasons: one is their intertwining in process models, and the other is due to hidden states. Hidden states are states that do not explicitly appear in process models because the presence of entities in these states is not an event for the process. Such states might be ignored; however, if they are considered important, they can be made explicit so that the extraction algorithm will show them in life cycle models.

This paper is organized as follows. Section 2 is about the related work. Section 3 introduces ELBA and shows how life cycles can be extracted from a simple process model. Section 4 concerns the handling of hidden states. Section 5 contains the conclusion.

2. Background

Most of the notations for business process modeling fall into three categories based on the predominant aspect: the control flow, the dataflow and the entity life cycles. The activity-centric notations, such as UML (Unified Modeling Language) [7] activity diagrams and BPMN, focus on the control flow, whose purpose is to define precedence constraints between tasks by means of direct links or through the intermediation of control flow elements (called gateways in BPMN). This approach provides an efficient way to deal with repetitive situations, i.e., routines. The dataflow may be added to the above-mentioned notations but in a way that Sanz [8] judges as an "afterthought".
The control flow and the dataflow are separate and this makes the notation more complicated. Combi et al. [9] present an extension to BPMN whose purpose is to link tasks to data: the extension consists in textual annotations that describe the operations performed by tasks on data stored in a database. Moreover, the attributes and relationships of the data are modeled with a UML class diagram. Another kind of extension is aimed at associating resources (i.e., IoT devices) with BPMN tasks. For example, Martins and Domingos [10] show how to translate a BPMN model into a programming language for IoT devices.

The artifact-oriented approach shifts the focus from tasks to business entities. It does so by introducing the notion of artifact, which, in this context, represents an entity type and the life cycle of its entities. Artifacts designate concrete and self-describing chunks of information used to run a business [11]; moreover, they facilitate communication among stakeholders and help them focus on the primary purposes of the business [12].

The notation provided by GSM focuses on the life cycles of artifacts: they are separate and are made up of stages, which may include subordinate stages and tasks. Stages are provided with guards and milestones, which consist of ECA rules. When a guard becomes true, the stage is activated; the stage is closed when a milestone becomes true. The event that causes the opening of a stage may come from a stage of another artifact. This mechanism confers great flexibility to the approach, but its major drawback is the difficulty of understanding the precedence between the stages. The control flow between stages is not shown; moreover, the dataflow is missing because the life cycles are separate.

In PHILharmonicFlows [13], the entity life cycles are represented with state-transition diagrams (called micro processes), and then the precedence constraints between stages are clearly shown.
Since life cycles are separate, the notation introduces a coordination mechanism (called macro process) to orchestrate their evolution: this entails the repetition of several stages in the macro process. However, not all the stages are repeated in the macro process; therefore, the observation of all the models is needed to fully understand the overall process.

Life cycles may be extracted from notations that provide the representation of the dataflow. Extraction approaches have been proposed for UML activity diagrams ([14], [15], [16]) as well as for BPMN models [17].

The recent case management standard CMMN stresses the abilities of case workers to decide the order of execution of tasks and to assign tasks to each other. It draws on GSM in that processes are made up of stages that are groups of tasks: the opening and closing of stages are based on events, such as the completion of a task, a time event or a human decision. However, the stages are not related to the life cycles of the business entities and, moreover, the dataflow is missing.

If the control flow and the dataflow are integrated, the tasks in the process turn out to be data-driven: they are performed when their inputs contain suitable entities. In the case handling approach [18], tasks are data-driven but the dataflow is not shown because the process data are kept in process variables; however, the process includes the links between the tasks and the variables that provide their inputs.

Two frameworks have been proposed to compare notations on the basis of their data modeling capabilities. The first framework [19] focuses on the representation of the dataflow and of the life cycles of the entities that make it up. As to BPMN and UML activity diagrams, the authors conclude that data modeling is optional and, when the dataflow is shown, it is subordinate to the control flow.
The second framework [20] is used to compare notations for business process modeling on the basis of 24 criteria subdivided into four groups: design, implementation and execution, diagnosis and optimization, tool implementation and practical cases. The authors compared three notations, i.e., case handling, GSM and PHILharmonicFlows. The conclusion is that most practitioners consider data-centric notations more complicated than activity-centric ones and therefore data-centric approaches need further research.

The ELBA notation provides a solution giving equal importance to tasks and business entities. Tasks are data-driven and the dataflow is explicitly shown. The dataflow consists of entity states, and the entity types along with their attributes and relationships are defined in an information model, similar to a UML class diagram. The entity life cycles are intertwined in the dataflow but they can be separated as illustrated in the next sections. Data resides in the entities forming the dataflow: a process is implemented with a single instance and therefore it is easy to decompose and recompose entity flows. As a consequence, situations like the many-to-many mapping between requisition orders and procurement ones in build-to-order processes, which are difficult to handle with BPMN as pointed out in Meyer et al. [21], can be easily addressed with ELBA [22]. As to flexibility, ELBA [23] addresses human decisions that concern the selection of input entities when a task needs more than one, or the selection of the task when more than one are admissible to handle the input entities.

3. Introduction to ELBA

The main feature of ELBA is that a process model results from the combination of the life cycles of the entities managed by the process. The definition of the entities takes place through an information model that shows the entity types along with their attributes and relationships.
In addition, ELBA is a dataflow language in that the activation of tasks is based on the input entities and not on the completion of previous tasks. An ELBA process comes with a single instance: this implies that the entities handled by a process are not internal to the process but external to it. Entities are assumed to reside in an information system that guarantees their persistence. A process model actually consists of two models: one is the information model that shows the domain related to the process and the other is the actual process from which the separate life cycles of the managed entities can be extracted.

This section presents a simple example to explain the basic features of the notation. The example concerns process CheckProposals, which operates in an organization that receives proposals from partners: the process checks the proposals with the help of a number of reviewers. The simplified requirements of the process are as follows. Partners and reviewers are registered in the information system. The relations between the partners and the organization are managed by persons playing the account manager (shortly, accountMgr) role: each partner is associated with one account manager, who can deal with a number of partners. After a proposal has been entered by a partner, the process hands it to the suitable account manager, who selects three reviewers and associates them with the proposal. Each reviewer is required to submit a review and, when all three reviews are available, the account manager decides whether to accept or reject the proposal. Then, the partner is notified of the outcome.

The attributes of the entities are kept to a minimum: a proposal has a description and an outcome (which initially is null and then becomes accepted or rejected), and a review has a comment. The process model and the information one are shown in Fig. 1.
The information model draws on the UML class diagram and adds a number of features such as required properties, rules, and indirect relationships. Entity types may be divided into three groups: role types, managed types and background ones. A role type denotes the process participants playing the role that it represents. For example, role type Reviewer denotes all the persons acting as reviewers and an instance of it denotes a specific reviewer. Instances of role types are referred to as role entities. Managed types, such as Proposal and Review, are the types of the entities forming the dataflow of the process. Background types denote entities that provide background information: the process does not generate background entities but may introduce associations between managed entities and background ones. In the example under consideration, there are no background entities.

Relationships show the constraints on the number of associations between entities and they do so by means of multiplicity indicators. Indicators include integer values as well as symbols "n" and "*", which mean one or more associations, and zero or more associations, respectively. The standard multiplicity is "*".

Required properties encompass required attributes and required associations. The former are recognizable by their underlined names and the latter by their underlined multiplicities. Required properties must be set when new entities are generated. For example, a new proposal is linked to the partner who entered it; moreover, a new review is connected to the reviewer who provided it, and to the proposal it is related to.

Indirect relationships are sequences of direct relationships. An example is given by the indirect relationship between Proposal and AccountMgr: a proposal is indirectly linked to an account manager through the association with a partner, and an account manager is indirectly linked to the proposals of the partners he or she is associated with.
Rules will be explained in the next section.

[Fig. 1: the process model and the information model of process CheckProposals]

A process model is a connected bipartite graph made up of tasks and entity states. The symbols of tasks and entity states are the rectangle and the circle, respectively. Tasks and entity states are connected by means of oriented arcs that establish input and output associations between them. The structure of ELBA processes has been inspired by Petri nets [24]: entity states correspond to places and tasks correspond to transitions. Entity states have labels consisting of two parts: the entity type, which matches a managed type appearing in the information model, and the state name. The states indicate the progress of the entities in their life cycles.

The analysis of the input and output states of tasks leads to the identification of various kinds of tasks, the most frequent of which are entry tasks, exit tasks, transitional tasks and mapping ones. An entry task has no inputs: its purpose is to introduce new entities in a process. An exit task has no outputs in that it removes the input entities from the process. In a transitional task the input and output entities have the same type. When the task is completed the input entities are moved into the output states and, therefore, they undergo a change of state. A mapping task has one input state and one output state whose types are different, say, T1 and T2. The effect of the task is to map an input entity into a number of output entities: the output entities are those associated with the input one on the basis of the relationship between types T1 and T2.

Tasks are divided into automatic tasks and human ones. The former are told apart because their names are preceded by "a:" and the latter are accompanied by the role of the performers (which is shown next to the task symbol). Tasks have a number of features, i.e., pre-conditions, post-conditions, and assignment rules.
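The bipartite structure and the task classification just described can be sketched in code. The following is a minimal illustrative sketch (the class and variable names are our own, not part of the ELBA definition); the `kind` method classifies a task from its input and output states exactly as described above:

```python
# Minimal sketch (illustrative names) of an ELBA-style process model as a
# bipartite graph: entity states (places) and tasks (transitions) connected
# by directed arcs, as in a Petri net.

class EntityState:
    def __init__(self, entity_type, name):
        self.entity_type = entity_type   # must match a managed type
        self.name = name                 # e.g. "i" for the initial state

    @property
    def label(self):
        return f"{self.entity_type}, {self.name}"

class Task:
    def __init__(self, name, role=None, automatic=False):
        self.name = name
        self.role = role          # role of human performers, if any
        self.automatic = automatic
        self.inputs = []          # input entity states
        self.outputs = []         # output entity states

    def kind(self):
        # Classify the task from its input/output states.
        if not self.inputs:
            return "entry"
        if not self.outputs:
            return "exit"
        in_types = {s.entity_type for s in self.inputs}
        out_types = {s.entity_type for s in self.outputs}
        return "transitional" if in_types == out_types else "mapping"

# A fragment of the CheckProposals model of Fig. 1:
i_proposal = EntityState("Proposal", "i")
assigned = EntityState("Proposal", "assigned")

enter = Task("enterProposal", role="Partner")
enter.outputs.append(i_proposal)

assign = Task("assign", role="AccountMgr")
assign.inputs.append(i_proposal)
assign.outputs.append(assigned)

print(enter.kind())   # entry
print(assign.kind())  # transitional
```

A mapping task would be detected in the same way: its input and output state types differ, as with task assign in the CheckProposals2 variant discussed later.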
Pre-conditions and post-conditions are boolean expressions based on the input entities and draw on OCL (Object Constraint Language) [25]. In a declarative manner, the former express the conditions required for the execution of the tasks and the latter define the effects produced on the underlying information system.

A human task is carried out by a process participant playing the role required by the task. The actual performer can be determined by means of rules. ELBA provides two standard rules. The first rule concerns tasks having no inputs: any person playing the role required by the task is entitled to perform it. In the other cases, the second rule is applied. It assumes that the input entities of a task are associated with one or more role entities whose types match the role required by the task. Since role entities denote process participants, the associations between input entities and role entities determine the process participants who can operate on the input entities by performing the task. The standard rules can be replaced by specific assignment rules; in the examples, there are no exceptions to the standard rules.

The process model shown in Fig. 1 is made up of six tasks and five entity states. The post-conditions are shown below the corresponding tasks. The first task, enterProposal, is an entry task in that it has no inputs; any partner can perform it. The purpose of the task is to enter a new Proposal in the process, as shown by the post-condition "isNew Proposal". The effect of the execution of the task is the presence of a new proposal in the information system. From the dataflow point of view, the new proposal is put into the output state of the task because the type of the output state matches the type of the new entity. This state is the initial state of proposals. By convention, the name of any initial state is "i" (the first character of initial).
If the type, say, T1, of a new entity is subject to a required relationship with type, say, T2, then the new entity must be associated with a T2 entity. This association can be carried out automatically if a T2 entity is found in the contextual entities of the task that brought the new entity into existence. The contextual entities of a task consist of the performer (in case of a human task) and the input entities. Since there is a required relationship from type Proposal to type Partner, a new proposal must be associated with a partner. Task enterProposal has one contextual entity, the performer of the task, whose type is Partner; therefore the performer is automatically connected to the new proposal.

The second task, assign, enables an account manager to assign the proposal to three reviewers, as shown by post-condition "proposal.reviewers def". The number of reviewers is taken from the multiplicity of the relationship Proposal – Reviewer. Due to the indirect relationship Proposal – AccountMgr defined in the information model, the account manager who is entitled to operate on the proposal is the one associated with the partner that has entered the proposal. The purpose of task assign is to associate the proposal with three reviewers selected by the performer. The choice can be based on various criteria such as the load balancing of the reviewers, but, from a declarative perspective, the effect is that the proposal turns out to be linked to three reviewers.

The construct "proposal.reviewers" denotes the collection of the reviewers associated with the proposal. It is a navigational expression in which "proposal" represents the input entity: the input entity is referred to by means of its type name with the initial in lower case. The dot between proposal and reviewers shifts the focus from the proposal to the reviewers.
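How such a navigational expression could be evaluated can be sketched as follows. The `Entity` class, the `navigate` helper and the `is_def` check are illustrative assumptions of ours, not ELBA's actual machinery:

```python
# Illustrative sketch (not part of ELBA): evaluating a navigational
# expression such as "proposal.reviewers" over in-memory entities, and
# checking a "def" post-condition (the result must be non-empty).

class Entity:
    def __init__(self, type_name, **assocs):
        self.type_name = type_name
        self.assocs = assocs        # associative attributes by name

def navigate(value, path):
    """Follow a dotted navigational expression, mapping over collections."""
    for step in path.split("."):
        if isinstance(value, list):
            value = [v.assocs[step] for v in value]
        else:
            value = value.assocs[step]
    return value

def is_def(value):
    # "def": the resulting collection (or value) is not empty/null
    return bool(value)

reviewers = [Entity("Reviewer") for _ in range(3)]
proposal = Entity("Proposal", reviewers=reviewers)

print(is_def(navigate(proposal, "reviewers")))  # True
```

The same helper handles multi-step expressions such as "proposal.assignments.reviewer" used later in the paper, since it maps each step over intermediate collections.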
Term "reviewers" is an associative attribute whose name is taken from type Reviewer: since the multiplicity of the associative attribute is greater than one, the plural of the type name with the initial in lower case is used. The effect of the post-condition "proposal.reviewers def", where def is the abbreviation of "defined", implies that the collection of reviewers, which initially is empty, will not be empty at the end of the task. On the basis of multiplicity 3 of the relationship from Proposal to Reviewer, the collection will refer to three reviewers. Task assign is a transitional task in that it has an input state and an output one of the same type: the input proposal is then moved into state assigned.

Task review enables the reviewers of a proposal to produce reviews for it, as shown by the post-condition "isNew Review". A proposal is linked to three reviewers as shown by relationship Proposal – Reviewer; then all the reviewers of a proposal get the same proposal but each of them provides a different review. Due to the required relationships Review – Proposal and Review – Reviewer, each review is automatically connected to the input proposal and to the task performer (a reviewer). At the end of each execution of task review, a new review enters the output state, which is the initial state of reviews.

The process does not handle the reviews individually but deals with the proposal when all the reviews are available. In such cases, ELBA uses an automatic task called aggregator, whose requirements are as follows. The aggregator has one input state and one output state whose types (say, T1 and T2) are different but interrelated: the input type (T1) and the output one (T2) participate in a relationship with multiplicity many to one. When all the entities T1 related to the same entity T2 are present in the input state, the aggregator removes them and outputs the entity T2. Therefore, the aggregator shown in Fig. 1 puts a proposal into the output state when its reviews are available in the input state. Three reviews are needed on the basis of the multiplicity of type Review with respect to type Proposal in the information model.

Task evaluate is a transitional task because the input entities and the output ones have the same type. From the dataflow point of view, the effect is a change of state of the input entities. The effect on the input proposal is a modification of the outcome attribute, as shown by the post-condition "proposal.outcome def". The performer decides the value of the outcome, which initially is null, by selecting one of the two values, accepted or rejected, specified in the information model. The last task, notify, is an automatic task which notifies the outcome of the proposal to the partner. It is an exit task in that it removes the input entity from the process. The post-condition is omitted because it has no impact on the information system.

The process model is a combination of different life cycles which are intertwined, but it can be useful to show them separately. The extraction of the life cycles is carried out with a simple algorithm as follows. The algorithm assumes that the resulting life cycles are sequential state models; therefore concurrent states are not handled. The algorithm is informally explained with reference to the process model shown in Fig. 1. Firstly, it adds output states to exit tasks and aggregators. The output states added to an exit task have the same type as the input ones and their name is "final". The algorithm may add a new output state to an aggregator: the new state has the same type as the input state of the aggregator and its name is "final". The addition is made only if there are no other states of the same type in the outgoing paths of the aggregator. If there are other states, the life cycle of the input entities continues after the aggregator.
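The successor computation at the heart of the extraction can be sketched as follows: for each state of the chosen type, follow the outgoing paths through tasks until states of the same type are reached; those are its successors. The data structures are our own toy ones, the "final" states are assumed to have been added already, and the intermediate Proposal state names ("reviewed", "evaluated") are assumptions, since the figure is not reproduced here:

```python
# Illustrative sketch of the life-cycle extraction. State and Task are toy
# structures; intermediate state names are assumptions.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class State:
    entity_type: str
    name: str

@dataclass
class Task:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def extract_life_cycle(tasks, entity_type):
    """Return {state name: set of successor state names} for one type."""
    states = {s for t in tasks for s in t.inputs + t.outputs
              if s.entity_type == entity_type}
    graph = {}
    for s in states:
        succ, seen = set(), set()
        frontier = [t for t in tasks if s in t.inputs]
        while frontier:
            t = frontier.pop()
            if t.name in seen:
                continue
            seen.add(t.name)
            for out in t.outputs:
                if out.entity_type == entity_type:
                    succ.add(out.name)           # same type: a successor
                else:                            # other type: keep following
                    frontier += [t2 for t2 in tasks if out in t2.inputs]
        graph[s.name] = succ
    return graph

# CheckProposals with the "final" states already added by the first phase:
P = lambda n: State("Proposal", n)
R = lambda n: State("Review", n)
tasks = [
    Task("assign", [P("i")], [P("assigned")]),
    Task("review", [P("assigned")], [R("i")]),
    Task("aggregate", [R("i")], [P("reviewed"), R("final")]),
    Task("evaluate", [P("reviewed")], [P("evaluated")]),
    Task("notify", [P("evaluated")], [P("final")]),
]
for name in ["i", "assigned", "reviewed", "evaluated"]:
    print(name, "->", sorted(extract_life_cycle(tasks, "Proposal")[name]))
```

Note how the successor of state assigned is found by passing through the Review states: the path crosses task review and the aggregator before reaching a Proposal state again, which is exactly why the life cycles are said to be intertwined.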
The changes made to the process model by the algorithm are shown in Fig. 2. For each managed type, say, T, the algorithm generates a life cycle graph whose nodes correspond to the states of type T in the process. The names of the nodes are the names of the states. For each node, say, N, of the graph, it determines the immediate successors and links node N to them: the successors are obtained by following the paths starting from the state corresponding to node N in the process model. For type Proposal, the graph is a linear sequence of nodes as shown in Fig. 2; likewise for type Review.

4. Hidden states

The entity states that appear in a process model show the dataflow of the process; however, it may happen that certain states that are considered important to describe an entity life cycle do not explicitly appear in the process model because the presence of entities in these states does not bring about the activation of tasks. In such cases, these states, which are referred to as hidden states, may be made visible so that the algorithm that extracts the entity life cycles from the process model can include them. The example illustrated in this section explains how to make hidden states visible.

The example is a variant of process CheckProposals illustrated in the previous section; the variant is named CheckProposals2. The difference is that account managers do not directly associate reviewers with proposals but generate three assignments for each proposal; an assignment is connected to exactly one proposal and one reviewer. Reviewers fulfill their assignments and, when all the expected assignments for a proposal have been fulfilled, the account manager can decide the outcome of the proposal. The information model of the process is shown in Fig. 3, while the process model and the entity life cycles are presented in Fig. 4. The annotations of the information model include a rule prescribing that a proposal must be assigned to different reviewers.
The rule is based on the navigational expression "proposal.assignments.reviewer", where the first term, proposal, stands for any proposal. For any proposal, the collection of reviewers related to the assignments associated with the proposal must contain different elements.

Unlike the first process model, task assign is not a transitional task because the types of its input and output entities are different. It is instead a mapping task, as it maps a proposal into three assignments. The effect of task assign is the presence of three new assignments after its execution. The number of assignments is indicated between parentheses in the post-condition. The required relationships of type Assignment imply that a new assignment must be linked to a proposal and a reviewer. The contextual entities of task assign are the input proposal and the performer. The input proposal is automatically associated with the new assignment. Since no reviewer is found in the contextual entities of the task, the reviewer is selected by the performer of the task during its execution. Rule 1 (included in the information model) guarantees that the new assignments are linked to distinct reviewers. The output state of the task is the initial state of the entities of type Assignment.

Fig. 3. The information model of process CheckProposals2

[Fig. 4: the process model of CheckProposals2 and the extracted life cycles]

The name (withReviews) of the output state of the aggregator indicates that the assignments associated with the proposal contain the reviews provided by the reviewers. The Proposal life cycles extracted from processes CheckProposals and CheckProposals2 are different. The second life cycle has one state (assigned) less than the first one. The reason for this is that task assign is a transitional task in the first process and a mapping one in the second process. State assigned could be a significant state in the common vision of the life cycle of a proposal.
In that case, state assigned can be considered a hidden state in the second process: it logically follows the initial state of a proposal as a consequence of task assign and precedes state withReviews. This hidden state can be made visible as follows. State "Proposal, assigned" is added to the process model and task assign is connected to it with a dashed arc so as to emphasize that it is a hidden state made visible. The new state has no outgoing links as it activates no tasks. The modified process along with the updated life cycles are shown in Fig. 5. The algorithm for extracting the entity life cycles includes the new state in the outgoing paths of task assign: as a result, state assigned is the successor of the initial state, and state withReviews is the successor of state assigned. The algorithm adds final states to task notify and to the aggregator task, but these states are not shown in Fig. 5.

Fig. 5. The model of process CheckProposals2 with hidden states and the life cycles extracted

5. Conclusion

This paper has presented the ELBA notation, which is aimed at integrating the notions of dataflow and entity life cycle. The dataflow shows the input and output entities of tasks in terms of types and states. The states of the entities of a given type are the constituents of the life cycle of that type. The main difference between ELBA and GSM, which is the major representative of the artifact-oriented approach, is that process models in ELBA result from the intertwining of entity life cycles, while in GSM they are made up of separate entity life cycles. This paper has described an algorithm that is able to extract the entity life cycles from process models. The changes of state in two or more life cycles may be interrelated: in GSM the interrelationships are orchestrated by means of ECA rules and are not shown graphically.
On the other hand, ELBA considers such interrelationships as the effect of spanning tasks, i.e., tasks having inputs of different types. Such tasks can be used to synchronize input flows of different types. Further work on ELBA is devoted to the introduction of collaborative features so as to enable processes to interact on the basis of agreed protocols. Such extensions are meant to be applied to the realm of Cyber Physical Systems.

References

Biographical notes

Giorgio Bruno

Giorgio Bruno is an associate professor of software engineering with the Department of Control and Computer Engineering, Politecnico di Torino (Polytechnic University of Turin), Italy. His teaching activities are concerned with software engineering and object-oriented programming. In the past he has dealt with languages for robots, real-time systems, production control and scheduling, and plant simulation. His current research interests are in the development of notations for the operational modeling of processes in the domains of business applications and Cyber Physical Systems. He has published two books and over 180 refereed papers on the above-mentioned subjects in journals, edited books and conference proceedings. He has served in several program committees of international conferences.
ABSTRACT

As product life-cycles become shorter and the scale and complexity of systems increase, accelerating the execution of large test suites gains importance. Existing research has primarily focussed on techniques that reduce the size of the test suite. By contrast, we propose a technique that accelerates test execution, allowing test suites to run in a fraction of the original time, by parallel execution with a Graphics Processing Unit (GPU). Program testing, which is in essence execution of the same program with multiple sets of test data, naturally exhibits the kind of data parallelism that can be exploited with GPUs. Our approach simultaneously executes the program with one test case per GPU thread. GPUs have severe limitations, and we discuss these in the context of our approach and define the scope of our applications. We observe speed-ups up to a factor of 27 compared to single-core execution on conventional CPUs with embedded systems benchmark programs.

1. INTRODUCTION

The number of tests needed to effectively validate any non-trivial software is extremely large. For instance, Yoo et al. [24] state that for an IBM middleware product used in their study, it takes a total of seven weeks to execute all the test cases, making overnight execution impossible. Much of the research in software testing over the last few decades has focussed on test suite reduction techniques and criteria (such as coverage) that help in identifying the effective tests to retain. This trend is particularly seen in regression testing and black-box testing, where numerous optimisation techniques (test case selection, test suite reduction, and test case prioritisation) have been proposed to reduce testing time [23, 19]. Even after applying these optimisations, test suites remain large and their execution is typically very time consuming. This puts an enormous strain on software development schedules.
We present an approach with the potential of executing test suites in a fraction of the original time and explore its feasibility on embedded systems benchmark programs. Our approach leverages the speedup offered by Graphics Processing Units (GPUs). GPUs are massively parallel processors featuring multi-threaded performance unmatched by high-end CPUs. The single-chip peak performance of state-of-the-art GPU architectures exceeds 1500 GFLOPS, which compares to around 100 GFLOPS for a traditional processor at its best [12]. A further advantage of GPUs is that they are more energy efficient than their multi-core CPU counterparts [11, 20]. Finally, in terms of cost per performance, GPUs are more affordable than multiple PCs. In particular, the management cost of one computer with a GPU is much smaller than that of a corresponding cluster of PCs. Test Execution using GPUs. General-purpose computing on GPUs (GPGPU) has been successfully applied in a broad range of domains [15, 21, 8]. GPUs use a Single Instruction Multiple Thread (SIMT) architecture to exploit data-level parallelism. We believe software testing will benefit greatly from GPUs, and the most compelling reason for this is the fact that program testing, i.e., running the same program with multiple sets of test data, naturally exhibits data parallelism that can be exploited with GPUs. There has been no work in the past exploring this possibility, and this paper paves the way in leveraging the acceleration provided by GPUs for software testing. Existing literature on GPGPU investigates techniques to transform CPU versions of a program to run on the GPU. The key problem there is the identification of opportunities to parallelise. Our approach is dramatically different: we leave the program and its logic untouched and only parallelise the running of multiple test cases on the program. Our approach simultaneously executes the program with one test case per GPU thread.
We will instrument the program with GPU device management code that acts as a wrapper for the original code. We store the test suite in the GPU device memory and launch one GPU thread with the original program functionality for each test input. The threads require no synchronisation or coordination since the executions of different test cases are completely independent. No program transformation from CPU program to GPU program is required in our proposed technique, and thus, we still test the original program logic and obtain results that are the same as those obtained with a CPU. However, current GPUs have severe limitations and impose restrictions on the class of programs we can test. We discuss these limitations in Section 3.2. Our approach for accelerated test case execution using GPUs provides the following potential benefits: (1) a significant reduction in test suite execution time, and as a result huge savings in testing costs; (2) for the same allotted test time, more test cases can be executed, potentially increasing the likelihood of fault detection [9]; and (3) better energy efficiency than testing using multi-core CPUs or PC clusters [11, 20]. Related Work. GPUs can be exploited for non-graphical tasks using General Purpose computing on Graphical Processing Units (GPGPU) [15]. GPGPU has been successfully applied in a broad range of applications. There is growing interest in the software engineering community to use the massive performance advantage offered by GPUs. Recently, Yu et al. explored the use of GPUs for test case generation [25]. Bardsley et al. have developed a static verification tool, GPUVerify, for analysing GPU kernels. Li et al. and Yoo et al. have adapted multi-objective evolutionary algorithms for test suite optimisation to execute on GPUs. Nevertheless, we are not aware of any existing study that has explored the use of GPUs to accelerate test suite execution, which is the goal of our work. Our contributions.
To validate our hypothesis on test acceleration with GPUs, we make the following contributions in this paper: 1. We describe and implement our approach for executing test cases in parallel on a GPU using the CUDA programming model. 2. We evaluate our approach experimentally using example programs and test suites, and discuss the achieved speedup. We use programs from the embedded systems domain, in particular programs from the EEMBC benchmark suite [6]. 3. We discuss the limitations of our approach, i.e., the testing activities it can be applied to, and the application domain. 2. BACKGROUND The success of GPUs in the past few years has been due to the ease of programming using the CUDA [1] and OpenCL [3] parallel programming models, which abstract away details of the architecture. In these programming models, the developer uses a C-like programming language to implement algorithms. The parallelism in those algorithms has to be exposed explicitly. The GPU SIMT architecture can deliver extreme speedups if the different threads executed have no data dependencies. We now present a brief overview of the core concepts of CUDA. The highest level of the CUDA thread programming model is a parallel kernel. A kernel is a function that is invoked by a program running on the host CPU but is executed on the GPU. Kernel functions are identified by means of the keyword `__global__`. Kernel functions can only access memory on the GPU; any data they require has to be copied to GPU memory before invoking the kernel. The kernel is executed as a grid of blocks of threads. In other words, threads are grouped into blocks and blocks are grouped into a grid. Grid and block dimensions can be specified when launching the kernel. Grids and blocks can be multidimensional, and along each dimension there is a hardware-defined limit on the number of threads and blocks that can be created. The memory system is organized in three levels of hierarchy.
Closest to the core are the registers, which have the lowest latency and are private to each thread. Next is shared memory, which is again small and has low access latency. Shared memory is available to the threads within a block. All threads can access the large global memory, which is comparatively slow (400–800 cycles). Finally, the execution model in CUDA requires threads in blocks to be executed in groups called warps. A warp is a group of 32 threads in a block that are launched and executed together. All threads in a warp execute the same instruction but on different data. This is often referred to as lock-step execution semantics. When a conditional instruction is encountered with control flow divergence among the threads within a warp, GPUs resort to predication: both sides of the branch are executed with the inactive threads masked out, which serialises the divergent paths and hurts performance. Each thread block is mapped to one or more warps. As a result, we choose the thread block dimension as a multiple of the warp size. Warps within a thread block can execute independently. The example in Figure 3 gives the definition of a kernel function, which is invoked as follows: `compute<<<1, NUM_TESTS>>>(device_inputs)`. Here, 1 specifies the grid dimension and NUM_TESTS specifies the block dimension. A pointer to a block of GPU memory is passed as an argument to the function. Note that such memory is allocated and initialized by the host code using the `cudaMalloc` and `cudaMemcpy` functions. A unique thread id within a thread block is given by the system variable `threadIdx.x` (along a single dimension) and the block id is denoted by `blockIdx.x`. Thus, a thread id that is unique in the grid can be obtained by calculating `blockIdx.x * blockDim.x + threadIdx.x`. 3. OUR APPROACH We believe that the execution of a large test suite is a natural match for this architecture, since it requires executing the same program multiple times with different inputs.
Running a program with a test case on a CPU typically involves loading the program and the test case into memory and executing the instructions with the loaded data. This is repeated for every test case in the test suite. Each test case run is completely independent of other test case runs. Also, all the executions are over the same program, albeit possibly not the same instructions, depending on the control logic in the program. On the GPU, our approach will launch as many GPU threads as there are test cases, where each thread executes the same sequential program with a different test case. Since executions of different test cases have no data dependencies, there is no need for any thread synchronisation in our approach. The key points of our approach (Figure 1) are: 1. We vectorise the test inputs so that their dimension is the number of test cases in the test suite. 2. We copy the vectorised inputs from the host memory to the GPU device memory. 3. We then launch the kernel with the program functionality on the requisite number of GPU threads (ideally as many as there are test cases). Each GPU thread will operate on the same program but with different test data, using the unique thread id to identify the test case inputs to execute over. 4. We copy the program output from the GPU back to the CPU. The main code changes required for this example using our approach stem from: (i) transferring the test suite from the host to the device, (ii) adding a kernel function (compute) to be called on all threads, (iii) reading the test case from the test suite using the thread ID in the kernel function. The rest of the code largely remains the same. In particular, note that the quicksort and partition functions remain unchanged. As a result we still test the original functions with our approach. Compilers for GPU programs are highly specialised and do not support all features used in C/C++ programs. 
The limitations of GPUs and their implications are discussed in the next section.

### 3.2 Limitations of GPUs and Implications

GPUs can offer considerable acceleration. However, in turn, there are severe limitations:

**L1** GPU programs have to copy data back and forth from the host memory to perform I/O or when GPU memory is exhausted. Data transfer between the GPU and host memory is slow due to the high latency of the interface. Furthermore, the typical bandwidth of accesses to GPU memory is more than an order of magnitude higher than the bandwidth of transfers to host memory over PCIe.

**L2** The different programming model (CUDA or OpenCL) often requires heavy, non-trivial changes to existing source code to leverage GPU performance.

**L3** Control-flow branching in source code (using control structures like if-then-else statements) penalises GPU performance. GPUs execute groups of threads in lock-step. All threads that belong to the same group execute the same instruction but use different data. Lock-step execution is violated if the branch instruction diverges. This can impact performance negatively [13].

**L4** While the typical memory bandwidth of GPUs is about five times higher than that of CPUs [14], GPUs are restricted by the fact that their bandwidth is shared among thousands of threads [18]. This is not a problem in applications like graphics, where threads share large data sets that can be retrieved from the shared memory in blocks. In applications that do not share data, data transfers from the device global memory to each of the several thousand threads will be a bottleneck.

**L5** The compiler for CUDA source files (NVCC [2]) processes them according to C++ syntax rules. As a result, some valid C (but invalid C++) code fails to compile. Full C++ is supported for the host code. However, only a subset of C++ is supported for the device code, as described in the CUDA programming guide [1].

We will now discuss these limitations in the context of our approach.
Limitations L1 and L4 are believed to be less of an issue in next generation GPUs. A recent keynote from the CEO of NVIDIA predicts the next generation of NVIDIA GPUs (Pascal, to be released in 2016) to bring larger memory size and bandwidth (using stacked memory), faster data transfer between CPU and GPU (5 to 12 times more) using NVLink, and smaller, more energy-efficient chips [10]. In our approach, we would ideally want to do a data transfer once from the CPU to the GPU and once the other way. However, system calls and other program features that CUDA/OpenCL cannot handle require data to be transferred more frequently. Handling system calls effectively on the GPU is an active area of research [22]. Limitation L2 is not an issue for our approach, since we are not transforming the program to run on the GPU. Instead, we only need to write a test wrapper in CUDA or OpenCL that launches for each test case a copy of the program on a thread. Little or no knowledge of the program logic is needed and the transformation is typically straightforward. We plan to automate the insertion of this test wrapper in the future.

```c
#define ARRAY_SIZE 9
#define NUM_TESTS 256

void quickSort(int[], int, int);
int partition(int[], int, int);

int main(void) {
    // sample input array as test case
    int a[ARRAY_SIZE] = {7, 12, 1, -2, 0, 15, 4, 11, 9};
    quickSort(a, 0, ARRAY_SIZE - 1);
    return 0;
}
```

Figure 2: Harness for testing Quicksort with one test input

Limitation L3 restricts the programs we can test with our approach. We hypothesise that for programs with heavy branching our approach will not produce a significant speedup. We have applied our approach to programs with different degrees of control flow divergence to test this hypothesis. It is expected that the impact of this limitation will reduce in future-generation GPUs, which will feature more sophisticated branch prediction logic. Limitation L5 restricts the program features CUDA/OpenCL can support on the GPU device.
Unsupported features can be re-implemented for CUDA, but this requires program transformations, which may affect the correctness of the test execution. CUDA and OpenCL are evolving and future releases will support more features of C/C++. However, this limitation is currently the primary constraint for the scope of our approach. ### 3.3 Scope of our Approach Our approach is best suited to: 1. C++ programs that can be compiled for the GPU. Limitation L5 drives the set of programs that satisfy this constraint. 2. C/C++ programs with limited system calls. 3. C/C++ programs with limited control flow branching. 4. C/C++ programs that do not exceed GPU memory size. Future generation GPUs and CUDA/OpenCL compilers will potentially allow a wider application scope. In this paper, we use C programs from the embedded systems domain that satisfy the constraints mentioned above. ### 4. EVALUATION We check the feasibility of our approach on C programs from the embedded systems domain. We evaluate the hypothesis that test execution on GPUs is faster than on CPUs on the example C programs and test suites. We also show that our approach does not alter the program functionality and that the test case outputs on the GPUs and CPUs remain the same. Finally, we discuss the overhead of data transfer between host and device. In our experiments, the CPU we use is an Intel Xeon processor with 8 cores at 3.07 GHz and 16 GB RAM. The GPU we use features the GTX 670 Kepler architecture, 960 cores at 1.07 GHz, and 2 GB device memory. #### 4.1 Benchmarks We use four benchmarks in our evaluation: 1. Image decompression using inverse discrete cosine transform (idctrn01) 2. Fast Fourier Transform processing in the automotive area (aifftr01) 3. Inverse Fast Fourier Transform processing (aiifft01) 4. 
Brake-By-Wire System (bbw, 2473 LOC) The first three programs are from the Embedded Microprocessor Benchmark Consortium (EEMBC), which provides a diverse suite of processor benchmarks organised into categories that span numerous real-world applications, namely automotive, digital media, networking, office automation and signal processing, among others [16]. The three benchmark programs that we have chosen are from the automotive category. The benchmark programs have test inputs associated with them. All the inputs are stored in a large data structure. The program only reads a small fraction of the test inputs in the data structure for one execution iteration. However, the program is executed iteratively several times, each time reading test inputs from a different location in the data structure. The output values from all executions are captured for both CPU and GPU executions and later compared to determine correctness of test execution using our approach. The fourth benchmark is a brake-by-wire (BBW) system provided by Volvo Technology AB designed in Simulink [7]. C code was generated from it using Simulink Coder. The system consists of five components, four wheel brake controllers for sensing and actuating, and a main controller responsible for computing the braking torque. We generated random test vectors over the input ranges of five inputs, rotations per minute for each wheel and state of the brake pedal. The values of the four brake torque outputs were captured for CPU and GPU executions and compared. Similar to the example illustrated in Section 3.1, we added GPU device management code to run the program with one test case on each GPU thread. We did not make any changes to the code that implements the program functionality. The modifications on all programs were straightforward and easy to implement. ### 4.2 Experimental Results We collect the following data: 1. Execution time on the CPU, 2. 
Execution time on the GPU for different grid and block dimensions, 3. Test outputs on the CPU and GPU, 4. Device from/to host data transfer time for the GPU executions. Table 1 gives the results obtained from test execution on the CPU and GPU. Column Benchmark contains the name of the benchmark used. Column #Tests is the number of tests run on the program. Block Dim and Grid Dim are the number of threads in a block and the number of blocks in a grid, respectively. GPU time and CPU time are the times taken on the GPU and CPU, respectively, to execute all the tests on the program. Device-Host time is the time spent transferring data between the device and the host. The Outputs Match? column indicates whether test outputs from the CPU run and the GPU run match. Table 2 gives the speedup achieved by executing tests on the GPU compared to the CPU for the different benchmarks. Speedup is computed by dividing the CPU time column by the GPU time column in Table 1.

### 4.3 Discussion

**Speedup Achieved.** As seen in Table 2, the speedup achieved with our approach is 2 to 27 times, depending on the benchmark and test suite size. Speedups achieved for the EEMBC benchmarks, idctrn01, aifftr01 and aiifft01, are high (10 to 27 times). A possible explanation for this is that control flow in all of these benchmarks is induced by for statements rather than if-else statements. Recall that in Section 3.2 we mentioned that control-flow divergence reduces GPU performance since groups of threads execute in lock-step. Lock-step execution is violated if branches diverge. In our examples, the for statements cause only very limited divergence in control flow and hence a high speedup is observed. On the other hand, the speedup achieved for the bbw example is only two times, regardless of test suite size.
The bbw code contains heavy control flow branching with if-else statements. Diverging control flow and lock-step semantics cause instructions on different branches to wait and synchronise, which leads to higher GPU execution times and lower speedups. The CUDA version of the bbw benchmark contains a large set of thread-local variables. NVIDIA's Kepler architecture (our evaluations were performed on a card with this architecture) allows 255 32-bit registers per thread to be allocated for thread-local variables. Excess variables are spilled over to global memory. We confirmed that when the bbw benchmark was evaluated on 32 threads, a spill of 264 bytes was observed. Accessing global memory is known to be at least an order of magnitude slower.

Notice that for all four programs, the speedup achieved remains the same beyond 2048 tests (for grid dimensions 16, 32 and 64). The likely reason for this is that the number of blocks, and hence warps, for a very large number of tests exceeds the maximum number of warps that can be scheduled on a streaming multiprocessor. It might also be that for larger numbers of tests (>2048 in our examples), there are not enough resources available to run all the test cases in parallel. As a result, some of the threads have to wait and be scheduled later.

For all the benchmarks, we saved and compared the test outputs from the CPU and GPU for different numbers of tests. We found that in all the cases listed in Table 1, the test outputs from our approach matched the test outputs from the CPU. Although this is not a proof of correctness of our approach, it does serve as initial evidence of the feasibility of test execution with GPUs.

<table>
<thead>
<tr><th>#Tests</th><th>idctrn01</th><th>aifftr01</th><th>aiifft01</th><th>bbw</th></tr>
</thead>
<tbody>
<tr><td>1024</td><td>10</td><td>8</td><td>18</td><td>2</td></tr>
<tr><td>2048</td><td>18</td><td>12</td><td>23</td><td>2</td></tr>
<tr><td>4096</td><td>26</td><td>15</td><td>25</td><td>2</td></tr>
<tr><td>8192</td><td>25</td><td>14</td><td>24</td><td>2</td></tr>
<tr><td>16384</td><td>27</td><td>15</td><td>25</td><td>2</td></tr>
</tbody>
</table>

Table 2: Speedup (CPU time/GPU time, rounded) for the 4 programs with GPU block dimension of 64

Table 1: Results on both the CPU and GPU from running the 4 benchmarks with different test suite sizes

<table>
<thead>
<tr><th>Benchmark</th><th># Tests</th><th>Block dim.</th><th>Grid dim.</th><th>GPU time (ms)</th><th>CPU time (ms)</th><th>Device-Host time (ms)</th><th>Outputs Match?</th></tr>
</thead>
<tbody>
<tr><td>idctrn01</td><td>1024</td><td>64</td><td>16</td><td>1.77</td><td>17.70</td><td>0.21</td><td>Yes</td></tr>
<tr><td>idctrn01</td><td>2048</td><td>64</td><td>32</td><td>1.95</td><td>35.33</td><td>0.35</td><td>Yes</td></tr>
<tr><td>idctrn01</td><td>4096</td><td>64</td><td>64</td><td>2.71</td><td>70.51</td><td>0.67</td><td>Yes</td></tr>
<tr><td>idctrn01</td><td>8192</td><td>64</td><td>128</td><td>5.60</td><td>141.18</td><td>1.19</td><td>Yes</td></tr>
<tr><td>idctrn01</td><td>16384</td><td>64</td><td>256</td><td>10.56</td><td>282.24</td><td>1.90</td><td>Yes</td></tr>
<tr><td>aifftr01</td><td>1024</td><td>64</td><td>16</td><td>25.42</td><td>192.11</td><td>12.33</td><td>Yes</td></tr>
<tr><td>aifftr01</td><td>2048</td><td>64</td><td>32</td><td>31.02</td><td>383.56</td><td>22.98</td><td>Yes</td></tr>
<tr><td>aifftr01</td><td>4096</td><td>64</td><td>64</td><td>50.57</td><td>766.98</td><td>45.81</td><td>Yes</td></tr>
<tr><td>aifftr01</td><td>8192</td><td>64</td><td>128</td><td>108.98</td><td>1533.66</td><td>94.86</td><td>Yes</td></tr>
<tr><td>aifftr01</td><td>16384</td><td>64</td><td>256</td><td>208.94</td><td>3067.93</td><td>178.03</td><td>Yes</td></tr>
<tr><td>aiifft01</td><td>1024</td><td>64</td><td>16</td><td>10.75</td><td>190.76</td><td>12.36</td><td>Yes</td></tr>
<tr><td>aiifft01</td><td>2048</td><td>64</td><td>32</td><td>16.52</td><td>380.60</td><td>23.01</td><td>Yes</td></tr>
<tr><td>aiifft01</td><td>4096</td><td>64</td><td>64</td><td>30.01</td><td>760.89</td><td>45.53</td><td>Yes</td></tr>
<tr><td>aiifft01</td><td>8192</td><td>64</td><td>128</td><td>62.88</td><td>1521.73</td><td>90.78</td><td>Yes</td></tr>
<tr><td>aiifft01</td><td>16384</td><td>64</td><td>256</td><td>124.19</td><td>3044.81</td><td>190.26</td><td>Yes</td></tr>
<tr><td>bbw</td><td>1024</td><td>64</td><td>16</td><td>3.45</td><td>5.50</td><td>4.18</td><td>Yes</td></tr>
<tr><td>bbw</td><td>2048</td><td>64</td><td>32</td><td>4.29</td><td>10.46</td><td>8.12</td><td>Yes</td></tr>
<tr><td>bbw</td><td>4096</td><td>64</td><td>64</td><td>9.06</td><td>20.35</td><td>8.47</td><td>Yes</td></tr>
<tr><td>bbw</td><td>8192</td><td>64</td><td>128</td><td>19.98</td><td>40.41</td><td>15.83</td><td>Yes</td></tr>
<tr><td>bbw</td><td>16384</td><td>64</td><td>256</td><td>39.43</td><td>80.31</td><td>30.88</td><td>Yes</td></tr>
</tbody>
</table>

**Effect of Block and Grid Dimensions.** The user-supplied dimensions for grid and block play a crucial role in the runtime of a kernel. A larger block size will result in frequent context switching of warps in a block. While thread (and warp) context switching in GPUs is relatively lightweight, the effects become tangible when kernel execution is long. Current GPU cards allow a maximum of 1024 threads in a single-dimensional block (a hardware restriction). The log-plot in Figure 4 quantifies the effect of the block size on the benchmarks. For this plot, we fixed the number of test runs to 1024. Notice that as the block size approaches the limit of the hardware, the execution times worsen. All the benchmarks show an optimal execution time when the block size ranges between 16 and 64. A straightforward reason for this is that a warp is executed as a group of 32 threads. Thus, block sizes in the aforementioned range will require minimal or no context switches. When a block contains a single thread, then for most benchmarks the execution time remains close to optimal, except for the BBW benchmark. A block of size one implies that none of the 1024 threads are executing in lock-step. If a benchmark frequently accesses global memory, then due to the absence of lock-step execution the kernel runtime will increase. The BBW benchmark has such a memory access pattern. For each thread iteration in BBW, frequently accessed inputs to the thread are located in global memory. One may refactor the code to move inputs to thread-local memory, but GPU cards only offer a very small amount of thread-local memory (a few KBs). Thus, such code refactorings become nontrivial when the input data structures are large.

Figure 4: Effect of block size on kernel runtime

**Data transfer between host and device.** A key component of our approach is to copy the inputs for a test suite from the CPU memory to the GPU memory. These transfers start to gain importance when the benchmark requires large inputs. Notice in Table 1 that the memory transfers between the CPU and the GPU take more time than kernel execution for large test runs on aifftr01 and aiifft01. While the issue pertaining to GPU-CPU memory transfer is not a show stopper for our benchmarks, it assumes much larger importance for benchmarks where the complete input may not fit in the GPU memory. In such cases, either GPU cards with larger memory have to be procured or the GPU code will involve nontrivial refactoring where the kernel operates on partial inputs at a time and synchronises with the CPU before operating on the rest. Consequently, transfer latency and bandwidth limitations of the PCI Express link will become more apparent. The recent road maps of companies developing GPU cards indicate that next-generation GPUs try to address the problem of CPU-GPU memory transfers. Some of the solutions have already been released, including unified virtual memory and the fabrication of the GPU chip on the same die as the CPU (Kaveri [4]). We believe that with advances in GPU technology, test acceleration via GPUs will only become more attractive.

**System Calls.** Currently there are few abstractions available that allow GPU code to perform system calls (such as brk(), file I/O) and callbacks. With current state-of-the-art GPU technology, it is still non-trivial to run benchmarks that frequently perform system calls on GPUs.
This is an active area of research [22, 17]. Our benchmarks notably did not have system calls.

### 4.4 Threats to Validity

The first threat to validity is the small number of programs used in our experiments. We have only used four programs, although they are all industry-standard programs. The second threat arises from the fact that we only use programs from the automotive domain. Programs from other domains were not used in our experiments. We plan to do a more extensive evaluation using programs from different application domains in the future.

### 5. CONCLUSION

In this paper, we proposed an approach to accelerate test execution using GPUs and explored its feasibility. Our approach inserts GPU device management code in the program interface to launch a GPU thread for each test case. The program functionality is not modified in our approach. We evaluated our approach using programs in the embedded systems domain: 3 benchmark programs from EEMBC and one brake-by-wire system from Volvo. We ran the programs on test suites with sizes ranging from 1024 to 16384 tests. Our approach using GPUs achieved speedups in the range of 10 to 27 times for the EEMBC benchmarks, and a speedup of 2 for the brake-by-wire system. The extent of control flow divergence in a program affects the speedup achieved with GPUs. We verified, for the 4 benchmark programs, that our approach generated the same test case outputs over all the tests as the CPU. Finally, we also discussed limitations in GPUs and the restrictions they impose in the context of our approach.

Acknowledgements This work was supported by the ARTEMIS VeTeSS project and ERC project 280053.

References
On the Semantics of Gringo Amelia Harrison, Vladimir Lifschitz, and Fangkai Yang University of Texas

Abstract Input languages of answer set solvers are based on the mathematically simple concept of a stable model. But many useful constructs available in these languages, including local variables, conditional literals, and aggregates, cannot be easily explained in terms of stable models in the sense of the original definition of this concept and its straightforward generalizations. Manuals written by designers of answer set solvers usually explain such constructs using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. We propose to approach the problem of defining the semantics of GRINGO programs by translating them into the language of infinitary propositional formulas. This semantics allows us to study equivalent transformations of GRINGO programs using natural deduction in infinitary propositional logic.

1 Introduction In this note, Gringo is the name of the input language of the grounder GRINGO, which is used as the front end in many answer set programming systems.\(^1\) Several releases of GRINGO have been made public, and more may be coming in the future; accordingly, we can distinguish between several "dialects" of the language Gringo. We concentrate here on Version 4, released in March of 2013. (It differs from Version 3, described in the User's Guide\(^2\) dated October 4, 2010, in several ways, including the approach to aggregates, which is modified as proposed by the ASP Standardization Working Group.\(^3\)) The basis of Gringo is the language of logic programs with negation as failure, with the syntax and semantics defined in [Gelfond and Lifschitz, 1988].

\(^1\)http://potassco.sourceforge.net/.
\(^2\)The User's Guide can be downloaded from the Potassco website (Footnote 1). It is also posted at http://www.cs.utexas.edu/users/vl/teaching/lbai/clingo_guide.pdf.
\(^3\)https://www.mat.unical.it/aspcomp2013/ASPStandardization.
Our goal here is to extend that semantics to a larger subset of Gringo. Specifically, we would like to cover arithmetical functions and comparisons, conditions, and aggregates.\(^4\) Our proposal is based on the informal and sometimes incomplete description of the language in the *User's Guide*, on the discussion of ASP programming constructs in [Gebser *et al.*, 2012], on experiments with *GRINGO*, and on the clarifications provided in response to our questions by its designers. The proposed semantics uses a translation from Gringo into the language of infinitary propositional formulas—propositional formulas with infinitely long conjunctions and disjunctions. Including infinitary formulas is essential, as we will see, when conditions or aggregates use variables ranging over infinite sets (for instance, over integers). The definition of a stable model for infinitary propositional formulas, given in [Truszczynski, 2012], is a straightforward generalization of the stable model semantics of propositional theories from [Ferraris, 2005]. The process of converting Gringo programs into infinitary propositional formulas defined in this note uses substitutions to eliminate variables. This form of grounding is quite different, of course, from the process of intelligent instantiation implemented in *GRINGO* and other grounders. Mathematically, it is much simpler than intelligent instantiation; as a computational procedure, it is much less efficient, not to mention the fact that sometimes it produces infinite objects. Like grounding in the original definition of a stable model [Gelfond and Lifschitz, 1988], it is modular, in the sense that it applies to the program rule by rule, and it is applicable even if the program is not safe. From this perspective, *GRINGO*'s safety requirement is an implementation restriction.
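The rule-by-rule, substitution-based grounding described above can be sketched in a few lines. This is a purely illustrative Python rendering of our own devising (the representation of a rule as a string template over its variables is an assumption for the example), not GRINGO's intelligent instantiation:

```python
from itertools import product

def ground_rule(rule_vars, constants, rule_template):
    """Naive grounding by substitution: instantiate the rule once for
    every assignment of constants to its variables."""
    return [rule_template(dict(zip(rule_vars, vals)))
            for vals in product(constants, repeat=len(rule_vars))]

# The constraint  :- p(X), p(Y), p(X+Y)  grounded over the constants {1, 2}:
instances = ground_rule(
    ['X', 'Y'], [1, 2],
    lambda s: f":- p({s['X']}), p({s['Y']}), p({s['X'] + s['Y']})")
assert instances[0] == ":- p(1), p(1), p(2)"
assert len(instances) == 4
```

Note that the procedure is modular in exactly the sense above: each rule is instantiated independently of the rest of the program, and safety plays no role.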
Instead of infinitary propositional formulas, we could have used first-order formulas with generalized quantifiers.\(^5\) The advantage of propositional formulas as the target language is that properties of these formulas, and of their stable models, are better understood. We may be able to prove, for instance, that two Gringo programs have the same stable models by observing that the corresponding infinitary formulas are equivalent in one of the natural deduction systems discussed in [Harrison *et al.*, 2013]. We give here several examples of reasoning about Gringo programs based on this idea. Our description of the syntax of Gringo disregards some of the features related to representing programs as strings of ASCII characters, such as using :- to separate the head from the body, using semicolons, rather than parentheses, to indicate the boundaries of a conditional literal, and representing falsity (which we denote here by ⊥) as #false. Since the subset of Gringo discussed in this note does not include assignments, we can also disregard the requirement that equality be represented by the two characters ==.

\(^4\)The subset of Gringo discussed in this note also includes constraints, disjunctive rules, and choice rules, treated along the lines of [Gelfond and Lifschitz, 1991] and [Ferraris and Lifschitz, 2005]. The first of these papers also introduces "classical" (or "strong") negation, a useful feature that we do not include. (Extending our semantics of Gringo to programs with classical negation is straightforward, using the process of eliminating classical negation in favor of additional atoms described in [Gelfond and Lifschitz, 1991, Section 4].)

\(^5\)Stable models of formulas with generalized quantifiers are defined by Lee and Meng [2012a, 2012b, 2012c].
2 Syntax We begin with a signature σ in the sense of first-order logic that includes, among others, (i) numerals—object constants representing all integers, (ii) arithmetical functions—binary function constants +, −, ×, (iii) comparisons—binary predicate constants <, >, ≤, ≥. We will identify numerals with the corresponding elements of the set \( \mathbb{Z} \) of integers. Object, function, and predicate symbols not listed under (i)–(iii) will be called symbolic. A term is arithmetical if it does not contain symbolic object or function constants. A ground term is precomputed if it does not contain arithmetical functions. We assume that in addition to the signature, a set of symbols called aggregate names is specified, and that for each aggregate name \( α \) a function \( \hat{α} \) from sets of tuples of precomputed terms to \( \mathbb{Z} \cup \{∞, −∞\} \) is given—the function denoted by \( α \). **Examples.** The functions denoted by the aggregate names card, max, and sum are defined as follows. For any set \( T \) of tuples of precomputed terms, - \( \hat{\text{card}}(T) \) is the cardinality of \( T \) if \( T \) is finite, and \( ∞ \) otherwise; - \( \hat{\text{max}}(T) \) is the least upper bound of the set of the integers \( t_1 \) over all tuples \( (t_1, \ldots, t_m) \in T \) such that \( t_1 \) is an integer; - \( \hat{\text{sum}}(T) \) is the sum of the integers \( t_1 \) over all tuples \( (t_1, \ldots, t_m) \in T \) such that \( t_1 \) is a positive integer if there are finitely many such tuples, and \( ∞ \) otherwise.\(^6\) \(^6\)To allow negative numbers in this example, we would have to define summation for a set that contains both infinitely many positive numbers and infinitely many negative numbers. For instance, we can define the sum to be 0 in this case. Admittedly, this is somewhat unnatural. 
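Restricted to finite sets of tuples, the three aggregate functions above can be rendered directly in code. This is a purely illustrative Python sketch; the function names and the representation of tuples are ours, and the infinite cases are collapsed to `math.inf` / `-math.inf`:

```python
import math

def card_hat(T):
    """card: the cardinality of a finite set T of tuples."""
    return len(T)

def max_hat(T):
    """max: the least upper bound of the integer first components of the
    tuples in T; -inf when no tuple has an integer first component."""
    firsts = [t[0] for t in T if isinstance(t[0], int)]
    return max(firsts) if firsts else -math.inf

def sum_hat(T):
    """sum: the sum of the positive-integer first components of the
    tuples in T (finitely many, since T is finite here)."""
    return sum(t[0] for t in T if isinstance(t[0], int) and t[0] > 0)

T = {(3, 'cs101'), (3, 'cs102'), ('x', 'cs103')}
assert (card_hat(T), max_hat(T), sum_hat(T)) == (3, 3, 6)
```

The non-integer first component `'x'` is simply ignored by `max_hat` and `sum_hat`, matching the definitions, which range only over tuples whose first member is an integer.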
A literal is an expression of one of the forms \[ p(t_1, \ldots, t_k), \ t_1 = t_2, \ not \ p(t_1, \ldots, t_k), \ not \ (t_1 = t_2) \] where \( p \) is a symbolic predicate constant of arity \( k \), and each \( t_i \) is a term, or \[ t_1 \prec t_2, \ not \ (t_1 \prec t_2) \] where \( \prec \) is a comparison, and \( t_1, t_2 \) are arithmetical terms. A conditional literal is an expression of the form \( H : L \), where \( H \) is a literal or the symbol \( \bot \), and \( L \) is a list of literals, possibly empty. The members of \( L \) will be called conditions. If \( L \) is empty then we will drop the colon after \( H \), so that every literal can be viewed as a conditional literal. **Example.** If \( available \) and \( person \) are unary predicate symbols then \[ \text{available}(X) : \text{person}(X) \] and \[ \bot : (\text{person}(X), \text{not available}(X)) \] are conditional literals. An aggregate expression is an expression of the form \[ \alpha\{t : L\} \prec s \] where \( \alpha \) is an aggregate name, \( t \) is a list of terms, \( L \) is a list of literals, \( \prec \) is a comparison or the symbol \( = \), and \( s \) is an arithmetical term. **Example.** If \( enroll \) is a unary predicate symbol and \( hours \) is a binary predicate symbol then \[ \text{sum}\{H, C : enroll(C), hours(H, C)\} = N \] is an aggregate expression. A rule is an expression of the form \[ H_1 \ | \ \cdots \ | \ H_m \leftarrow B_1, \ldots, B_n \quad (1) \] \((m, n \geq 0)\), where each \( H_i \) is a conditional literal, and each \( B_i \) is a conditional literal or an aggregate expression. A program is a set of rules. If \( p \) is a symbolic predicate constant of arity \( k \), and \( t \) is a \( k \)-tuple of terms, then \[ \{p(t)\} \leftarrow B_1, \ldots, B_n \] is shorthand for \[ p(t) \mid \text{not } p(t) \leftarrow B_1, \ldots, B_n. \] **Example.** For any positive integer \( n \), the rules \[ \{p(i)\} \quad (i = 1, \ldots, n), \] \[ \leftarrow p(X), p(Y), p(X+Y) \quad (2) \] form a program.
# 3 Semantics We will define the semantics of Gringo using a syntactic transformation \( \tau \). It converts Gringo rules into infinitary propositional combinations of atoms of the form \( p(t) \), where \( p \) is a symbolic predicate constant, and \( t \) is a tuple of precomputed terms.\(^7\) ## 3.1 Semantics of Well-Formed Ground Literals A term \( t \) is **well-formed** if it contains neither symbolic object constants nor symbolic function constants in the scope of arithmetical functions. For instance, all arithmetical terms and all precomputed terms are well-formed; \( c+2 \) is not well-formed. The definition of "well-formed" for literals, aggregate expressions, and so forth is the same. For every well-formed ground term \( t \), by \( [t] \) we denote the precomputed term obtained from \( t \) by evaluating all arithmetical functions, and similarly for tuples of terms. For instance, \( [f(2+2)] \) is \( f(4) \). The translation \( \tau L \) of a well-formed ground literal \( L \) is defined as follows: - \( \tau p(t) \) is \( p([t]) \); - \( \tau(t_1 \prec t_2) \), where \( \prec \) is the symbol \( = \) or a comparison, is \( \top \) if the relation \( \prec \) holds between \([t_1]\) and \([t_2]\), and \( \bot \) otherwise; - \( \tau(\text{not } A) \) is \( \neg \tau A \). For instance, \( \tau(\text{not } p(f(2+2))) \) is \( \neg p(f(4)) \), and \( \tau(2+2 = 4) \) is \( \top \). Furthermore, \( \tau \bot \) stands for \( \bot \), and, for any list \( \mathbf{L} \) of ground literals, \( \tau \mathbf{L} \) is the conjunction of the formulas \( \tau L \) for all members \( L \) of \( \mathbf{L} \). \(^7\)As in [Truszczynski, 2012], infinitary formulas are built from atoms and the falsity symbol \( \bot \) by forming (i) implications and (ii) conjunctions and disjunctions of arbitrary sets of formulas. We treat \( \neg F \) as shorthand for \( F \rightarrow \bot \), and \( \top \) stands for \( \bot \rightarrow \bot \).
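The evaluation \([t]\) of a well-formed ground term can be sketched as follows. This is an illustrative Python rendering; the representation of terms as integers, strings, and tuples is our own convention (with `'*'` standing for the symbol ×), not anything prescribed by Gringo:

```python
ARITH = {'+': lambda a, b: a + b,
         '-': lambda a, b: a - b,
         '*': lambda a, b: a * b}   # '*' stands for the symbol ×

def evaluate(t):
    """[t]: evaluate every arithmetical function in a well-formed ground
    term; a term is an int (numeral), a str (symbolic object constant),
    or a tuple (function_symbol, arg_1, ..., arg_k)."""
    if isinstance(t, (int, str)):
        return t
    op, *args = t
    args = [evaluate(a) for a in args]
    if op in ARITH:
        return ARITH[op](*args)  # well-formedness guarantees integer arguments
    return (op, *args)           # symbolic function applied to evaluated arguments

# [f(2+2)] is f(4):
assert evaluate(('f', ('+', 2, 2))) == ('f', 4)
```

Well-formedness matters here: on a term such as \( c+2 \), the recursive call would hand the string `'c'` to an arithmetical function, which is exactly the case the definition excludes.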
3.2 Global Variables About a variable we say that it is global - in a conditional literal \( H : L \), if it occurs in \( H \) but does not occur in \( L \); - in an aggregate expression \( \alpha \{ t : L \} \prec s \), if it occurs in the term \( s \); - in a rule (1), if it is global in at least one of the expressions \( H_i, B_i \). For instance, the head of the rule \[ \text{total\_hours}(N) \leftarrow \text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = N \quad (3) \] is a literal with the global variable \( N \), and its body is an aggregate expression with the global variable \( N \). Consequently \( N \) is global in the rule as well. A conditional literal, an aggregate expression, or a rule is closed if it has no global variables. An instance of a rule \( R \) is any well-formed closed rule that can be obtained from \( R \) by substituting precomputed terms for global variables. For instance, \[ \text{total\_hours}(6) \leftarrow \text{sum}\{H, C : \text{enroll}(C), \text{hours}(H, C)\} = 6 \] is an instance of rule (3). It is clear that if a rule is not well-formed then it has no instances. 3.3 Semantics of Closed Conditional Literals If \( t \) is a term, \( x \) is a tuple of distinct variables, and \( r \) is a tuple of terms of the same length as \( x \), then the term obtained from \( t \) by substituting \( r \) for \( x \) will be denoted by \( t^x_r \). Similar notation will be used for the result of substituting \( r \) for \( x \) in expressions of other kinds, such as literals and lists of literals. The result of applying \( \tau \) to a closed conditional literal \( H : L \) is the conjunction of the formulas \[ \tau(L^x_r) \rightarrow \tau(H^x_r) \] where \( x \) is the list of variables occurring in \( H : L \), over all tuples \( r \) of precomputed terms of the same length as \( x \) such that both \( L^x_r \) and \( H^x_r \) are well-formed.
For instance, \[ \tau(\text{available}(X) : \text{person}(X)) \] is the conjunction of the formulas \( \text{person}(r) \rightarrow \text{available}(r) \) over all precomputed terms \( r \); \[ \tau(\bot : p(2 \times X)) \] is the conjunction of the formulas \( \neg p(2 \times i) \) over all numerals \( i \). When a conditional literal occurs in the head of a rule, we will translate it in a different way. By \( \tau_h(H : L) \) we denote the disjunction of the formulas \[ \tau(L^x_r) \land \tau(H^x_r) \] where \( x \) and \( r \) are as above. For instance, \[ \tau_h(\text{available}(X) : \text{person}(X)) \] is the disjunction of the formulas \( \text{person}(r) \land \text{available}(r) \) over all precomputed terms \( r \). ### 3.4 Semantics of Closed Aggregate Expressions In this section, the semantics of ground aggregates proposed in [Ferraris, 2005, Section 4.1] is adapted to closed aggregate expressions. Let \( E \) be a closed aggregate expression \( \alpha\{t : L\} \prec s \), and let \( x \) be the list of variables occurring in \( E \). A tuple \( r \) of precomputed terms of the same length as \( x \) is admissible (w.r.t. \( E \)) if both \( t^x_r \) and \( L^x_r \) are well-formed. About a set \( \Delta \) of admissible tuples we say that it justifies \( E \) if the relation \( \prec \) holds between \( \hat{\alpha}(\{t^x_r : r \in \Delta\}) \) and \( [s] \). For instance, consider the aggregate expression \[ \text{sum}\{H,C : \text{enroll}(C), \text{hours}(H,C)\} = 6. \] In this case, admissible tuples are arbitrary pairs of precomputed terms. The set \( \{(3,\text{cs101}),(3,\text{cs102})\} \) justifies (4), because \[ \text{sum}(\{(H,C)^{H,C}_{3,\text{cs101}},(H,C)^{H,C}_{3,\text{cs102}}\}) = \text{sum}(\{(3,\text{cs101}),(3,\text{cs102})\}) = 3 + 3 = 6. 
\] More generally, a set \( \Delta \) of pairs of precomputed terms justifies (4) whenever \( \Delta \) contains finitely many pairs \( (h,c) \) in which \( h \) is a positive integer, and the sum of the integers \( h \) over all these pairs is 6. We define \( \tau E \) as the conjunction of the implications \[ \bigwedge_{r \in \Delta} \tau(L^x_r) \rightarrow \bigvee_{r \in A \setminus \Delta} \tau(L^x_r) \quad (5) \] over all sets \( \Delta \) of admissible tuples that do not justify \( E \), where \( A \) is the set of all admissible tuples. For instance, if \( E \) is (4) then the conjunctive terms of \( \tau E \) are the formulas \[ \bigwedge_{(h,c) \in \Delta} (\text{enroll}(c) \land \text{hours}(h,c)) \rightarrow \bigvee_{(h,c) \in A \setminus \Delta} (\text{enroll}(c) \land \text{hours}(h,c)). \] The conjunctive term corresponding to \( \{(3,\text{cs101})\} \) as \( \Delta \) says: if I am enrolled in CS101 for 3 hours then I am enrolled in at least one other course. 3.5 Semantics of Rules and Programs For any rule $R$, $\tau R$ stands for the conjunction of the formulas $$\tau B_1 \land \cdots \land \tau B_n \rightarrow \tau_h H_1 \lor \cdots \lor \tau_h H_m$$ for all instances (1) of $R$. A stable model of a program $\Pi$ is a stable model, in the sense of [Truszczyński, 2012], of the set consisting of the formulas $\tau R$ for all rules $R$ of $\Pi$. Consider, for instance, the rules of program (2). If $R$ is the rule \{p(i)\} then $\tau R$ is $$p(i) \lor \neg p(i) \quad (6)$$ ($i = 1, \ldots, n$). If $R$ is the rule $$\leftarrow p(X), p(Y), p(X+Y)$$ then the instances of $R$ are rules of the form $$\leftarrow p(i), p(j), p(i+j)$$ for all numerals $i$, $j$. (Substituting precomputed ground terms other than numerals would produce a rule that is not well formed.)
Consequently $\tau R$ is in this case the infinite conjunction $$\bigwedge_{i,j \in \mathbb{Z}} \neg (p(i) \land p(j) \land p(i+j)). \quad (7)$$ The stable models of program (2) are the stable models of formulas (6), (7), that is, sets of the form \{p(i) : i \in S\} for all sum-free subsets $S$ of \{1, \ldots, n\}. 4 Reasoning about Gringo Programs In this section we give examples of reasoning about Gringo programs on the basis of the semantics defined above. These examples use the results of [Harrison et al., 2013], and we assume here that the reader is familiar with that paper. 4.1 Simplifying a Rule from Example 3.7 of User's Guide The program in Example 3.7 of User's Guide (see Footnote 2) contains the rule $$\text{weekdays} \leftarrow \text{day}(X) : (\text{day}(X), \text{not weekend}(X)). \quad (8)$$ \footnote{To be precise, the syntax of conditional literals in User's Guide is somewhat different; it corresponds to an earlier version of GRINGO.} Replacing this rule with the fact \textit{weekdays} within any program will not affect the set of stable models. Indeed, the result of applying translation $\tau$ to (8) is the formula \[ \bigwedge_r (\text{day}(r) \land \neg\text{weekend}(r) \to \text{day}(r)) \to \text{weekdays}, \] (9) where the conjunction extends over all precomputed terms $r$. The formula \[ \text{day}(r) \land \neg\text{weekend}(r) \to \text{day}(r) \] is intuitionistically provable. By the replacement property of the basic system of natural deduction from [Harrison et al., 2013], it follows that (9) is equivalent to \textit{weekdays} in the basic system. By the main theorem of [Harrison et al., 2013], it follows that replacing (9) with the atom \textit{weekdays} within any set of formulas does not affect the set of stable models.
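Returning to program (2), its characterization of stable models as the sum-free subsets of \(\{1, \ldots, n\}\) can be checked by brute force for small \(n\). This illustrative Python sketch simply tests the constraint ← p(X), p(Y), p(X+Y) (with X and Y not necessarily distinct) on every subset:

```python
from itertools import combinations

def sum_free_subsets(n):
    """Subsets S of {1, ..., n} containing no i, j (not necessarily
    distinct) with i + j also in S -- exactly the subsets allowed by
    the constraint  <- p(X), p(Y), p(X+Y)  of program (2)."""
    result = []
    for k in range(n + 1):
        for subset in combinations(range(1, n + 1), k):
            s = set(subset)
            if all(i + j not in s for i in s for j in s):
                result.append(s)
    return result

# For n = 3 the stable models correspond to the six sum-free subsets:
# {}, {1}, {2}, {3}, {1, 3}, {2, 3}
assert len(sum_free_subsets(3)) == 6
```

Note that \(\{1, 2\}\) is excluded because \(1 + 1 = 2\), an instance where \(X\) and \(Y\) coincide.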
\subsection*{4.2 Simplifying the Sorting Rule} The rule \[ \text{order}(X,Y) \leftarrow p(X), p(Y), X < Y, \text{not } p(Z) : (p(Z), X < Z, Z < Y) \] (10) can be used for sorting.\footnote{This rule was communicated to us by Roland Kaminski on October 21, 2012.} It can be replaced by either of the following two simpler rules within any program without changing that program's stable models. \[ \text{order}(X,Y) \leftarrow p(X), p(Y), X < Y, \bot : (p(Z), X < Z, Z < Y) \] (11) \[ \text{order}(X,Y) \leftarrow p(X), p(Y), X < Y, \text{not } p(Z) : (X < Z, Z < Y) \] (12) Let's prove this claim for rule (11). By the main theorem of [Harrison et al., 2013] it is sufficient to show that the result of applying $\tau$ to (10) is equivalent in the basic system to the result of applying $\tau$ to (11). The instances of (10) are the rules \[ \text{order}(i,j) \leftarrow p(i), p(j), i < j, \text{not } p(Z) : (p(Z), i < Z, Z < j), \] and the instances of (11) are the rules \[ \text{order}(i,j) \leftarrow p(i), p(j), i < j, \bot : (p(Z), i < Z, Z < j) \] where $i$ and $j$ are arbitrary numerals. The result of applying $\tau$ to (10) is the conjunction of the formulas \[ p(i) \land p(j) \land i < j \land \bigwedge_k (p(k) \land i < k \land k < j \to \neg p(k)) \to \text{order}(i,j) \] (13) for all numerals $i, j$. The result of applying $\tau$ to (11) is the conjunction of the formulas \[ p(i) \land p(j) \land i < j \land \bigwedge_k (p(k) \land i < k \land k < j \to \bot) \to \text{order}(i,j) \] (14) for all numerals $i, j$. By the replacement property of the basic system, it is sufficient to observe that $$p(k) \land i < k \land k < j \rightarrow \neg p(k)$$ is intuitionistically equivalent to $$p(k) \land i < k \land k < j \rightarrow \bot.$$ The proof for rule (12) is similar. Rule (11), like rule (10), is safe; rule (12) is not.
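The relation defined by the sorting rule (10), namely that order(X, Y) holds exactly when X and Y are consecutive elements of p in increasing order, can be computed directly. This is an illustrative Python sketch of that relation (not GRINGO code):

```python
def order_pairs(p):
    """Pairs (x, y) with x, y in p, x < y, and no z in p strictly
    between them, i.e. consecutive elements of p in increasing order."""
    return {(x, y) for x in p for y in p
            if x < y and not any(x < z < y for z in p)}

# Chaining the pairs recovers the sorted order of p:
assert order_pairs({3, 1, 4}) == {(1, 3), (3, 4)}
```

The conditional literal in (10) contributes exactly the "no z strictly between x and y" test, which is why the rule sorts.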
### 4.3 Eliminating Choice in Favor of Conditional Literals Replacing the rule $$\{p(X)\} \leftarrow q(X)$$ (15) with $$p(X) \leftarrow q(X), \bot : \text{not } p(X)$$ (16) within any program will not affect the set of stable models. Indeed, the result of applying translation $\tau$ to (15) is $$\bigwedge_{r}(q(r) \rightarrow p(r) \lor \neg p(r))$$ (17) where the conjunction extends over all precomputed terms $r$, and the result of applying $\tau$ to (16) is $$\bigwedge_{r}(q(r) \land \neg\neg p(r) \rightarrow p(r)).$$ (18) The implication from (17) is equivalent to the implication from (18) in the extension of intuitionistic logic obtained by adding the axiom schema $$\neg F \lor \neg \neg F,$$ and consequently in the extended system presented in [Harrison et al., 2013, Section 7]. By the replacement property of the extended system, it follows that (17) is equivalent to (18) in the extended system as well. 4.4 Eliminating a Trivial Aggregate Expression The rule \[ p(Y) \leftarrow \text{card}\{X,Y : q(X,Y)\} \geq 1 \] \hspace{1cm} (19) says, informally speaking, that we can conclude \(p(Y)\) once we established that there exists at least one \(X\) such that \(q(X,Y)\). Replacing this rule with \[ p(Y) \leftarrow q(X,Y) \] \hspace{1cm} (20) within any program will not affect the set of stable models. To prove this claim, we need to calculate the result of applying \(\tau\) to rule (19). The instances of (19) are the rules \[ p(t) \leftarrow \text{card}\{X,t : q(X,t)\} \geq 1 \] \hspace{1cm} (21) for all precomputed terms \(t\). Consider the aggregate expression \(E\) in the body of (21). Any precomputed term \(r\) is admissible w.r.t. \(E\). A set \(\Delta\) of precomputed terms justifies \(E\) if \[ \widehat{\text{card}}(\{(r,t) : r \in \Delta\}) \geq 1, \] that is to say, if \(\Delta\) is non-empty. Consequently \(\tau E\) consists of only one implication (5), with the empty \(\Delta\).
The antecedent of this implication is the empty conjunction \(\top\), and its consequent is the disjunction \(\bigvee_u q(u,t)\) over all precomputed terms \(u\). Then the result of applying \(\tau\) to (19) is \[ \bigwedge_t \left( \bigvee_u q(u,t) \rightarrow p(t) \right). \] \hspace{1cm} (22) On the other hand, the result of applying \(\tau\) to (20) is \[ \bigwedge_{t,u} (q(u,t) \rightarrow p(t)). \] This formula is equivalent to (22) in the basic system [Harrison et al., 2013, Example 2]. 4.5 Replacing an Aggregate Expression with a Conditional Literal Informally speaking, the rule \[ q \leftarrow \text{card}\{X : p(X)\} = 0 \] \hspace{1cm} (23) says that we can conclude $q$ once we have established that the cardinality of the set \( \{X : p(X)\} \) is 0; the rule \[ q \leftarrow \bot : p(X) \] (24) says that we can conclude $q$ once we have established that $p(X)$ does not hold for any $X$. We'll prove that replacing (23) with (24) within any program will not affect the set of stable models. To this end, we'll show that the results of applying $\tau$ to (23) and (24) are equivalent to each other in the extended system from [Harrison et al., 2013, Section 7]. First, we'll need to calculate the result of applying $\tau$ to rule (23). Consider the aggregate expression $E$ in the body of (23). Any precomputed term $r$ is admissible w.r.t. $E$. A set $\Delta$ of precomputed terms justifies $E$ if \[ \hat{\text{card}}(\{r : r \in \Delta\}) = 0, \] that is to say, if $\Delta$ is empty. Consequently $\tau E$ is the conjunction of the implications \[ \bigwedge_{r \in \Delta} p(r) \rightarrow \bigvee_{r \in A \setminus \Delta} p(r) \] (25) for all non-empty subsets $\Delta$ of the set $A$ of precomputed terms. The result of applying $\tau$ to (23) is \[ \left( \bigwedge_{\substack{\Delta \subseteq A \\ \Delta \neq \emptyset}} \left( \bigwedge_{r \in \Delta} p(r) \rightarrow \bigvee_{r \in A \setminus \Delta} p(r) \right) \right) \rightarrow q.
\] (26) The result of applying $\tau$ to (24), on the other hand, is \[ \left( \bigwedge_{r \in A} \neg p(r) \right) \rightarrow q. \] (27) The fact that the antecedents of (26) and (27) are equivalent to each other in the extended system can be established by essentially the same argument as in [Harrison et al., 2013, Example 7]. By the replacement property of the extended system, it follows that (26) is equivalent to (27) in the extended system as well. 5 Conclusion GRINGO User's Guide and the monograph [Gebser et al., 2012] explain the meaning of many programming constructs using examples and informal comments that appeal to the user's intuition, without references to any precise semantics. In the absence of such a semantics, it is impossible to put the study of some important issues on a firm foundation. This includes the correctness of ASP programs, grounders, solvers, and optimization methods, and also the relationship between input languages of different solvers (for instance, the equivalence of the semantics of aggregate expressions in Gringo to their semantics in the ASP Core language and in the language proposed in [Gelfond, 2002] under the assumption that aggregates are used nonrecursively). In this note we approached the problem of defining the semantics of Gringo by reducing Gringo programs to infinitary propositional formulas. We argued that this approach to semantics may allow us to study equivalent transformations of programs using natural deduction in infinitary propositional logic. **Acknowledgements** Many thanks to Roland Kaminski and Torsten Schaub for helping us understand the input language of Gringo. Roland, Michael Gelfond, Yuliya Lierler, and Joohyung Lee provided valuable comments on drafts of this note. **References**
CHAPTER 2

Security Metrics Background & Preliminaries

2.1 Introduction

With the shift from standalone applications to complex interconnections of insecure components and networks, information systems, and software systems in particular, are becoming more and more vulnerable. A variety of protection mechanisms and security approaches are applied, but the resulting security level remains unknown. Like other software quality attributes, the level of security a system possesses needs to be defined and measured. Such security metrics can benefit an organization in many ways, including increasing accountability, improving security effectiveness, and demonstrating compliance [Alger et al., 2001].

"How secure is a software system?", "How secure does a system need to be?", "Which factors are responsible for the security of the system?", and "At which stages of software development does security need to be evaluated?" are the questions asked of those who evaluate the efficiency of security efforts. Answering them is possible only with security metrics that evaluate a system for security and provide evidence of the security level and performance of the system under investigation.

Several quality attributes of software systems, such as reliability, size, and complexity, have been investigated and evaluated; far less attention has been paid to the evaluation of security. Literature surveys show that security is an emerging concern in current times, ranging from organizations to individuals. The area of security metrics is in high demand, but at the same time the field is very young [Jansen, 2010].
The problem behind the immaturity of security metrics is that the current practice of information security is still a highly diverse field, and holistic, widely accepted approaches are still missing [Savola, A 2007]. The field is still concerned mainly with basic definitional aspects and lacks a well-structured body of literature. According to [Savola A, 2007], in order to advance the field of measuring, assessing, or assuring security, the current state of the art should be investigated thoroughly. In this chapter we present the preliminaries of the field of security metrics and, based on a literature survey, analyze the major relevant efforts made toward measuring the security of information systems in general and of software systems in particular.

The rest of the chapter is organized as follows. Section 2.2 presents some preliminary concepts of security metrics, their properties, and their objectives. Section 2.3 investigates and presents some major taxonomies in the field of security metrics. Section 2.4 contrasts security measurement with software reliability measurement. Section 2.5 presents, in a classified manner, some of the major efforts made toward actual measurement, followed by Section 2.6, which presents the conclusion.

2.2 Security Metrics Concepts

To understand security metrics, we must first differentiate between metrics and measurements. Measurements provide single-point-in-time views of specific, discrete factors, while metrics are derived by comparing two or more measurements taken over time to a predetermined baseline [Jelen, 2000]. Measurements are generated by counting; metrics are generated from analysis [Alger, et al., 2001]. In other words, measurements are objective raw data, and metrics are objective or subjective human interpretations of those data. A well-developed security evaluation framework and the metrics derived from it can act as an effective tool for security managers to discern the effectiveness of the various components of their security programs, a system, a product, or a process [Payne, 2006].
Such security metrics enable the development team to provide the protection mechanisms necessary for secure system development. As mentioned earlier, the term security metrics has many interpretations. Below are some short elaborations of the term.

- According to [SSE-CMM, 2011], metrics are quantifiable measurements of certain aspects of a system or enterprise. Such measurement is based on attributes of the system that are responsible for its security; a security metric is then a quantitative measure of how much of these attributes the system possesses.
- According to [Swanson. M et al., 2003], metrics are tools designed to facilitate decision making and improve performance and accountability through the collection, analysis, and reporting of relevant performance-related data, with the end result of facilitating the required corrective measures.
- According to [Payne, 2006], measurements provide single-point-in-time views of specific, discrete factors, while metrics are derived by comparing two or more measurements taken over time to a predetermined baseline. Measurements are generated by counting; metrics are generated from analysis. In other words, measurements are objective raw data, and metrics are objective or subjective human interpretations of those data.

In their work, [Farooq et al., 2011] outlined the importance of measurements and metrics and their relation to software testing. In the same work they analyzed the software measurement process in general, which comprises the following key stages.

- **Planning**: Defining the procedure and scope of the measurement process.
- **Implementation**: The actual application of the measurement process and procedures defined at the planning stage. The output of this stage should take the form of reports of performance-related data.
- **Improving**: Based on the evaluation of process and progress (through the reports generated in the implementation phase), the necessary decisions are taken in order to improve the system.

The output scale of software measurement and metrics is hierarchical in nature, comprising various levels; each scale level possesses all the properties of the scale levels below it in the hierarchy. In the same work, [Farooq et al., 2011] identified five measurement scales based on a literature survey.

### 2.2.1 Characteristics of Security Metrics

According to [Jelen. G, 2000], security metrics should be SMART, i.e. Specific, Measurable, Attainable, Repeatable, and Time-dependent. Security metrics should be able to identify and measure the degree of security attributes such as the confidentiality, integrity, and availability of a system. The method of measurement employed should be reproducible, that is, capable of producing the same results when performed independently by different evaluators [Jansen, 2010].

The ultimate goal of security metrics should be to mitigate security risk and to act as a tool for decision making, especially for *assessment* or *prediction*, for the development team and other stakeholders. When the target is to predict the security level of a system, mathematical models and algorithms (e.g. regression analysis) are applied to the collected measurement data to predict the security of the system, process, or product. Our security metric framework is based on the *prediction* method: mathematical modeling techniques are adopted, and the security evaluation process is finally transformed into algorithmic form. It is important to clearly identify the entity that is the target of measurement, because otherwise the resulting metrics might not be meaningful [Savola, 2008].
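The measurement-vs-metric distinction above (raw counts compared against a predetermined baseline) can be sketched in code. This is an illustrative example only, not taken from the thesis; the function name, the patch-latency scenario, and all numbers are hypothetical.

```python
# Hedged sketch of Jelen/Payne's measurement-vs-metric distinction:
# the list of observed values is the raw measurement data; the metric
# is derived by comparing those measurements to a baseline.
# All names and numbers here are hypothetical.

def patch_latency_metric(measurements, baseline_days):
    """Fraction of observed patch latencies (in days) that meet or beat
    the predetermined baseline -- a metric derived from raw counts."""
    if not measurements:
        raise ValueError("need at least one measurement")
    within = sum(1 for days in measurements if days <= baseline_days)
    return within / len(measurements)

# Single-point-in-time observations are just raw data; the metric
# compares them to the baseline (here, 30 days).
observed = [12, 45, 28, 31, 7]   # hypothetical latencies
score = patch_latency_metric(observed, baseline_days=30)
print(round(score, 2))           # 3 of 5 latencies meet the baseline -> 0.6
```

The same shape applies to any measurement program: counting produces the inputs, and the metric is the analysis step that turns them into a decision-support value.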
The Federal Information Processing Standards [FIPS, 2004] provide a mechanism for investigating confidentiality, integrity, and availability separately. As the security requirements of each system or organization vary according to its needs, security evaluation should be based on well-defined security attributes such as confidentiality, integrity, availability, privacy, non-repudiation, etc.

### 2.2.2 Security Metrics: Properties

Security metrics properties can be investigated based on the following classification [Savola, A 2007]:

- **Quantitative vs. qualitative metrics**: The end result of a security metric may be either quantitative or qualitative in nature. Quantitative results are preferred over qualitative ones because of their discrete, measurable nature; at the same time, generating quantitative metrics is more challenging than generating qualitative ones.
- **Objective vs. subjective metrics**: As with the quantitative/qualitative distinction, a security metric may be either objective or subjective in nature. Objective security metrics are preferred; they portray the security posture of a system or process at discrete levels on a scale, such as low, medium, and high. Subjective metrics normally take human behavioral aspects of security into consideration.
- **Direct vs. indirect metrics**: Direct metrics measure an atomic attribute of the system, in the sense that the measured attribute responsible for security does not depend on other attributes, whereas indirect metrics involve multiple interdependent attributes.
- **Static vs. dynamic metrics**: The result of a dynamic metric is affected by elapsed time, whereas a static metric does not take time into account.
- **Absolute vs.
Relative metrics**: An absolute metric is atomic in nature, in the sense that it does not depend on the output of any other metric, whereas a relative one does.

### 2.2.3 Security Metrics Objectives

The main objective of security metrics is to gauge the level of security a system possesses, such that the elements of the system most critical to security can be identified. According to [Savola, 2009], security correctness, security effectiveness, and security efficiency are the three main objectives of security measurement, defined as follows:

- **Security correctness**: ensures that security-enforcing mechanisms have been implemented correctly in the system under investigation and that the system meets its security requirements.
- **Security effectiveness**: ensures that the stated security requirements are met in the system under investigation and that the system does not behave in any way other than intended.
- **Security efficiency**: ensures that adequate security quality has been achieved in the system under investigation.

### 2.3 Security Metrics Taxonomies

The area of security metrics is still in its early stages, with varying definitions and terminologies. Various security metrics taxonomies exist in the literature that aim at categorizing security metrics at a higher level of abstraction. Based on the literature survey, in this section we look at some of the most common among them.

In [Swanson, 2001] and [Swanson et al., 2003], NIST presented a security metrics taxonomy that categorizes security metrics into three classes: management, technical, and operational. It further presents 17 subcategories of metrics, each with examples. The focus of this taxonomy is the organizational and stakeholder perspective rather than the technical perspective of a particular system. Figure 2.1 depicts the classification of security metrics proposed by NIST.
In their study, [Henning et al, 2002], the Workshop on Information Security System Scoring and Ranking (WISSR), provide a detailed discussion of issues related to Information Assurance (IA). They used the term IS* for information security, where the (*) relates to security measurement and can denote terms like metric, score, rating, rank, assessment result, etc. The workshop did not propose any new specific security metric taxonomy; instead it was organized into three tracks based on the interests of the participants: technical, operational, and organizational. Technical metrics are used to describe and compare technical objects such as an algorithm, a specification, or a design. Operational metrics are used to manage risk in the operational environment, and organizational metrics are used to describe and track the effectiveness of organizational programs and processes.

In their study, [Vaughn et al., 2003] proposed a taxonomy for Information Assurance (IA) metrics. They divided the taxonomy into two distinct categories of security metrics: (a) *organizational security metrics* and (b) *Technical Target of Assessment* (TTOA) metrics. The former aims at providing feedback to improve the security assurance status of the organization. The second category (TTOA) is intended to measure the security capability of a particular system or product. Both categories are further subdivided to add specificity in terms of measuring security. Figure 2.2 shows the higher-level classification of the taxonomy.

In their study, [Seddigh et al., 2004] proposed a new taxonomy for IA (information assurance) in IT networks that aims to provide the basis and motivation for research into overcoming the challenges in the area. In their taxonomy, the metric space is divided into three categories: security, QoS, and availability.
Each of these categories is further divided into three subcategories of technical, organizational, and operational metrics, which are in turn divided into 27 classes. According to the authors, organizational metrics evaluate an IT organization's emphasis on IA (information assurance) in terms of goals and organizational policies. Technical metrics evaluate the technical components of an IA network; the metrics under this category also provide ratings, incident statistics, and security testing. Operational metrics evaluate the operations of an IT organization in terms of compliance with the goals and policies set by the organization. Figure 2.3 shows the proposed taxonomy at the higher, abstract level of classification.

In his study, [Savola. A, 2007] proposed a security metrics taxonomy based on a literature survey. The main aim of the author is to bridge the gap between information security management and the Information and Communication Technology (ICT) industry. In this taxonomy, the author's intent was to enable the composition of feasible security metrics all the way from business management down to the lowest level of technical detail. The taxonomy categorizes security metrics in a tree-like structure into six levels, from L0 to L5, with business-level security metrics at the root (L0) and implementation-level technical metrics at the leaf nodes (L5). At the highest level, business-level security metrics are divided into five subcategories: (i) security metrics for cost-benefit analysis, containing economic measures such as ROI (return on investment); (ii) trust metrics for business collaboration; (iii) security metrics for information security management (ISM); (iv) security metrics for business-level risk analysis; and (v) security, dependability, and trust (SDT) metrics for ICT products, systems, and services. The author further provided subcategories of categories (iii) and (v) above.
The diagram below shows the classification of this security metrics taxonomy.

All the proposed security metrics taxonomies are conceptual and abstract in nature; very little has been reported on actual scales of measurement. From these taxonomies it is evident that a great deal of effort is needed to devise metrics that are applicable in real practice.

2.4 Software Security vs. Software Reliability Measurement: An Overview

Software reliability has always been an important quality attribute of software systems, and various efforts have been made to evaluate and measure it. The ideas of security and reliability are both technically derived from the requirement to describe correctness, though the two terms have grown up in different domains of thinking. Security can be viewed as a functional, statistical statement of predictability, where the question of being secure or not is whether a given system can be expected to continue to function in some specified manner for some specified period. Reliability can likewise be defined as a functional, statistical, predictive statement, where the question of being in a reliable state or not is whether a given system can be expected to continue to function in some specified manner for some specified period [Roy D. Follendore, 2002]. Reliability and security are not isolated from one another; rather, reliability has a great impact on the security of a system. The "reliability of security" is often considered, but the "security of reliability" is not [Roy D. Follendore, 2002]. As far as software measurement is concerned, many efforts have been made to devise new and updated metrics, models, and measurement techniques for evaluating the reliability of software systems, and very little effort has been made to evaluate their security.
The main reason for this may be the multifaceted nature of security and its dependency on various other quality attributes, such as the testing and reliability of the software system. In their work, [Farooq et al., 2012] presented an in-depth analysis of the key concepts, metrics, models, and measurements used in software reliability. Reliability is the probability of a system or component performing its required service without failure, under stated conditions, for a specified period of time [Farooq et al., 2012]. Various probabilistic models and methods have been proposed to predict the reliability of a system. Among them, software reliability growth models (SRGMs) have been used to predict the reliability of systems. An SRGM shows how the reliability of a system improves over a period of time as faults are detected and repaired; it is used in practice to determine when to stop testing in order to attain a given reliability level [Quadri S. M. K et al, 2011]. Over the last three decades many SRGMs have been developed. Among them, [Musa et al., 1987], [Xie, 1991], and [Lyu, 1996] are the most common, and a variety of metrics such as the number of remaining faults, mean time between failures (MTBF), and mean time to failure (MTTF) have been derived. Later, [Bokhari and Ahmad, 2006] and [Quadri S.M. K et al, 2006] proposed probabilistic software reliability growth models based on the non-homogeneous Poisson process (NHPP) that incorporate testing effort. Further, in their work [Quadri S.M.K et al, 2011], a scheme for constructing software reliability growth models based on the NHPP was proposed.

Table 2.1 Software Reliability Prediction Models

<table>
<thead>
<tr> <th>Model</th> <th>Year</th> <th>Author(s)</th> </tr>
</thead>
<tbody>
<tr> <td>Ohba exponential model</td> <td>1984</td> <td>[Ohba, M.
1984]</td> </tr>
<tr> <td>Yamada Rayleigh model with Weibull curve</td> <td>1993</td> <td>[Yamada, S et al, 1993]</td> </tr>
<tr> <td>Quadri’s NHPP SRGM with generalized exponential testing efforts</td> <td>2006</td> <td>[Quadri S.M.K et al, 2006]</td> </tr>
<tr> <td>Quadri’s SRGM based on NHPP</td> <td>2011</td> <td>[Quadri S.M.K et al, 2011]</td> </tr>
</tbody>
</table>

Table 2.1 above summarizes the major efforts made toward modeling the reliability prediction of software systems based on software reliability growth models (SRGMs). By contrast, no such probabilistic model exists to predict the security level of a system over a specified time period. Systematic efforts are needed to develop models and methods for predicting the security of software systems; the methods and models of reliability prediction can be used to understand the state of the art of prediction and measurement.

### 2.5 Related Work

Security metrics can be obtained at different levels within an organization or a technical system; detailed metrics can be aggregated and rolled up to progressively higher levels. The various efforts made toward actual metrics development are sporadic in nature: some take into consideration knowledge of previous and current vulnerabilities, some measure code quality, and some target the design of a system. From the literature survey, at a high level these efforts can be categorized as follows, based on the major factors taken into consideration when measuring the security of a system:

- **Analyzing the capabilities of the attacker**: The security of a system is measured by taking into account the required efforts, capabilities, and resources of an attacker.
- **Knowledge of vulnerabilities**: Security evaluation is carried out by taking into account knowledge of both the vulnerabilities reported in the past and present vulnerabilities.
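To make the kind of prediction the SRGMs in Table 2.1 perform concrete, the following sketch implements one classical NHPP model, the Goel-Okumoto exponential model. It is an illustrative example, not the thesis's framework; the parameter values are assumed rather than fitted to failure data.

```python
import math

# Hedged sketch of the Goel-Okumoto NHPP software reliability growth
# model. Parameters: a = total expected number of faults, b = fault
# detection rate. Both values below are hypothetical; in practice they
# are estimated from observed failure data (e.g. by maximum likelihood).

def mean_failures(t, a, b):
    """Expected cumulative number of failures by time t: m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """Probability of no failure in (t, t+x] after testing up to time t:
    R(x|t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))

a, b = 100.0, 0.05                          # assumed, not fitted
print(round(mean_failures(10, a, b), 1))    # faults expected by t=10 (~39.3)
print(round(reliability(1, 10, a, b), 3))   # reliability over the next time unit
```

As the text notes, no analogous probabilistic growth model is in common use for security; the point of the sketch is to show the shape of prediction that a security counterpart would need to provide.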
- **Conceptual**: Security metrics in this category are based on personnel's knowledge of security, are conceptual in nature, and have very limited use in real practice.
- **Independent**: Such security metrics are based on analysis of the attributes of the system itself (its inner attributes) and are predictive in nature. Security metrics in this category are highly desirable for shipping more secure and less vulnerable systems.

Our proposed security metrics framework in chapter (4) falls under the fourth category: it is independent of external security factors and is based on the internal attributes of the system.

In their study, [Manadhata et.al, 2007] proposed the system attack surface as an indicator of the security of a system and formalized it using an I/O automata model. The attack surface of a system comprises three kinds of resources that an attacker can use to carry out an attack on the system: the methods, channels, and data of the system. Based on the direction of data flow, they further technically defined the entry points and exit points of the system, which are the methods of the system through which the flow of data takes place. The authors defined the attack surface as the subset of system resources that an attacker can utilize in carrying out an attack, and they quantify the resulting security indicators accordingly: the larger the attack surface of a system, the more vulnerable the system is to attack. The authors further analyzed the feasibility of the proposed approach by measuring the attack surfaces of open-source FTP daemons and two IMAP servers.

Major efforts toward security measurement that take the capabilities and resources of an attacker into consideration, also known as attacker-centric security metrics, are [Leversage et al., 2008], [Miles et al., 2005], and [Ortalo et al., 1999].
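The attack-surface notion described above can be sketched as a toy computation. This is an assumption-laden illustration, not Manadhata et al.'s actual measurement method: the damage-potential and attacker-effort ratings, and the decision to total their ratios per resource kind, are hypothetical simplifications.

```python
# Illustrative sketch only: a toy attack-surface measurement that totals
# damage-potential/effort ratios over the three resource kinds named in
# the text -- methods (entry/exit points), channels, and data items.
# All ratings below are hypothetical.

def surface_contribution(resources):
    """Sum of damage_potential/effort ratios over one resource kind."""
    return sum(dp / effort for dp, effort in resources)

def attack_surface(methods, channels, data_items):
    """Per-kind totals; a larger total indicates a larger attack surface."""
    return (surface_contribution(methods),
            surface_contribution(channels),
            surface_contribution(data_items))

methods  = [(3, 1), (2, 2)]   # e.g. an unauthenticated and an authenticated API
channels = [(2, 1)]           # e.g. an open TCP socket
data     = [(1, 1)]           # e.g. a world-readable config file
print(attack_surface(methods, channels, data))  # (4.0, 2.0, 1.0)
```

The comparison the metric supports is relative: two versions of the same system can be ranked by these totals, with the smaller surface preferred.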
In these studies, security measurement is carried out by analyzing the ways an attacker can carry out a successful attack on the system with respect to the knowledge and resources at hand. In contrast, the security metric framework proposed in chapter (4) of this thesis is based on the system architecture and design. Being independent of the attacker's capabilities, our proposed framework acts as a tool for the software development team to measure the inherent security of the system and enables them to take the necessary decisions regarding security.

[Littlewood et al., 1993] proposed a conceptual model, based on probabilistic methods initially used for reliability analysis, to measure the security of a system. In this model they proposed using the effort an attacker must expend to carry out a successful attack on the system as a measure of the system's security.

In the second category, which uses knowledge of vulnerabilities reported in the past and of present vulnerabilities, [Alhazmi et al., 2006] proposed a vulnerability density (VD) metric, defined as the number of vulnerabilities in a unit size of code. From VD the authors further derived a set of metrics, such as the vulnerability discovery rate (VDR), which is the number of vulnerabilities identified per unit time, and the known vulnerability density (VKD). In contrast, the security metric framework proposed in chapter (4) of this thesis is independent of knowledge of past or present vulnerabilities.

[Alves-Foss et al., 1995] proposed measuring a system using a System Vulnerability Index (SVI) as a measure of the system's vulnerability to common intrusion methods. The SVI is calculated by evaluating various factors such as "system characteristics", "potentially neglectful acts", and "potentially malevolent acts". [Voas et al., 1996] proposed a security metric based on the technique of deliberate fault injection.
Fault injection is carried out by simulating the threat classes of the system, mutating variables during execution, and then observing the impact of each threat class on the behavior of the system. Finally, they proposed a minimum-time-to-intrusion (MTTI) metric based on the time before any simulated intrusion can take place.

[Schneier, 1999] proposed the attack tree to analyze the security of a system. An attack tree is constructed by setting the attacker's goal as the root, with the branches of the tree representing the different ways an attacker can carry out the attack on the system, together with the cost involved along each possible attack path; the cost estimate becomes the measure of the system's security. The prerequisite for generating an attack tree is knowledge of the attacker's goals, the system's vulnerabilities, and the attacker's behavior.

Many other conceptual security measurement efforts have been made, by [Littlewood et al, 1993], [Madan et al., 2002], and [Stuart, 2004]. These metrics are conceptual in nature and have not been applied in real security evaluations of systems.

2.6 Conclusion and Future Scope

In this chapter we have analyzed the preliminaries and various concepts of the security metrics field. From the taxonomies of security metrics presented, it is evident that security evaluation is a multifaceted and wide-open challenge for organizations and the research community. In practice, very little has been delivered on an actual scale of measurement, and a systematic approach is needed to make progress in this area. Based on the literature survey, we have classified the efforts made in the field of security metrics and presented the major efforts made toward security metrics for software systems.
{"Source-Url": "https://shodhganga.inflibnet.ac.in/bitstream/10603/91567/10/10_chapter2.pdf", "len_cl100k_base": 5211, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 31305, "total-output-tokens": 5835, "length": "2e12", "weborganizer": {"__label__adult": 0.0004317760467529297, "__label__art_design": 0.00034880638122558594, "__label__crime_law": 0.0014247894287109375, "__label__education_jobs": 0.0016803741455078125, "__label__entertainment": 8.612871170043945e-05, "__label__fashion_beauty": 0.0001621246337890625, "__label__finance_business": 0.0009303092956542968, "__label__food_dining": 0.0004122257232666016, "__label__games": 0.0008869171142578125, "__label__hardware": 0.0009303092956542968, "__label__health": 0.000911235809326172, "__label__history": 0.0002562999725341797, "__label__home_hobbies": 0.00011724233627319336, "__label__industrial": 0.000457763671875, "__label__literature": 0.0005216598510742188, "__label__politics": 0.0003154277801513672, "__label__religion": 0.00036025047302246094, "__label__science_tech": 0.08599853515625, "__label__social_life": 0.00013887882232666016, "__label__software": 0.024017333984375, "__label__software_dev": 0.87890625, "__label__sports_fitness": 0.00024819374084472656, "__label__transportation": 0.0003781318664550781, "__label__travel": 0.00017702579498291016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26744, 0.02438]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26744, 0.43925]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26744, 0.93067]], "google_gemma-3-12b-it_contains_pii": [[0, 55, false], [55, 2850, null], [2850, 5126, null], [5126, 7709, null], [7709, 9983, null], [9983, 12254, null], [12254, 12907, null], [12907, 13845, null], [13845, 15013, null], [15013, 16840, null], [16840, 19507, null], [19507, 22011, null], 
[22011, 24668, null], [24668, 26744, null]], "google_gemma-3-12b-it_is_public_document": [[0, 55, true], [55, 2850, null], [2850, 5126, null], [5126, 7709, null], [7709, 9983, null], [9983, 12254, null], [12254, 12907, null], [12907, 13845, null], [13845, 15013, null], [15013, 16840, null], [16840, 19507, null], [19507, 22011, null], [22011, 24668, null], [24668, 26744, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26744, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26744, null]], "pdf_page_numbers": [[0, 55, 1], [55, 2850, 2], [2850, 5126, 3], [5126, 7709, 4], [7709, 9983, 5], [9983, 12254, 6], [12254, 12907, 7], [12907, 13845, 8], [13845, 15013, 9], [15013, 16840, 10], [16840, 19507, 11], [19507, 22011, 12], [22011, 24668, 13], [24668, 26744, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26744, 0.11538]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
fb8fe0d97637ad9f808c9e38b98e1354558be461
Package ‘reb’

January 31, 2017

Title Regional Expression Biases
Version 1.52.0
Author Kyle A. Furge <kyle.furge@vai.org> and Karl Dykema <karl.dykema@vai.org>
Depends R (>= 2.0), Biobase, idiogram (>= 1.5.3)
Description A set of functions to identify regional expression biases
Maintainer Karl J. Dykema <karl.dykema@vai.org>
License GPL-2
ZipData no
biocViews Microarray, CopyNumberVariation, Visualization
NeedsCompilation yes

R topics documented: absMax, buildChromCytoband, buildChromMap, cset2band, fromRevIsh, Hs.arms, isAbnormal, mcr.eset, movbin, movt, naMean, regmap, revish, rmAmbigMappings, smoothByRegion, summarizeByRegion, tBinomTest, writeGFF3

---

**absMax** *Absolute Maxima*

**Description**

Returns the absolute maximum of the input values.

**Usage**

```
absMax(x)
```

**Arguments**

- `x` numeric argument

**Value**

`absMax` returns the absolute maximum of all the values present in the arguments as a double, preserving the sign. Essentially `max(abs(x), na.rm=T)`.

**Author(s)**

Karl J. Dykema and Kyle A. Furge

**Examples**

```
absMax(c(1,2,3,4))
absMax(c(-1,-2,-3,-4))
```

---

**buildChromCytoband** *Construct a chromLocation object from a cytoband environment*

**Description**

Construct a chromLocation object from a cytoband environment.
Human, Rat, and Mouse are currently possible.

**Usage**

```
buildChromCytoband(organism = "h")
```

**Arguments**

- `organism` character, "h" for human, "m" for mouse, and "r" for rat.

**Value**

a chromLocation object

**Author(s)**

Karl J. Dykema, <karl.dykema@vai.org>, Kyle A. Furge, <kyle.furge@vai.org>

**See Also**

buildChromLocation

**Examples**

```
humanBands <- buildChromCytoband("h")
humanBands@chromLocs[["1"]]
```

---

**buildChromMap** *Build a chromLocation object representing regions of a genome*

**Description**

This function will take the name of a data package and build a chromLocation object representing regions of the genome.

**Usage**

```
buildChromMap(dataPkg, regions)
```

**Arguments**

- `dataPkg` the name of the data package to be used (e.g., generated by AnnBuilder or downloaded from the Bioconductor web site)
- `regions` a character vector of genome regions

**Details**

This function is related to the buildChromLocation function found in the 'annotate' library. However, this function can be used to build specialized chromLocation objects based on gene mapping information. For example, a chromLocation object can be built specifically for human chromosome 1 by supplying chromosomal band information, such as c("1p1", "1p2", "1p3", "1q1", "1q2", "1q3", "1q4"). Genes that map to these regions are isolated and a chromLocation object is returned.

Note that genes are isolated by ‘grep’ing genome mapping information. Therefore the number of genes that are able to be placed into a defined genetic region (i.e. 1q4) is dependent on the quality of the mapping information in the annotation data source. Unfortunately, not too many pre-built annotation packages are available for spotted arrays off the Bioconductor Metadata web site. Use AnnBuilder to make one or get one from your core.

**Value**

A 'chromLocation' object representing the specified genomic regions and annotation data source

**Author(s)**

Kyle A. Furge <kyle.furge@vai.org>

**See Also**

buildChromLocation

**Examples**

```r
##
## NOTE: This requires an annotation package to work, it is provided for info only.
##
#if (require(hu6800)) {
#   library(Biobase)
#   library(annotate)
#   ## Build a specific chrom arm
#   chr1q <- buildChromMap("hu6800",c("1q1","1q2","1q3","1q4"))
#   ## Build human data based on chrom arms
#   data(Hs.arms)
#   map <- buildChromMap("hu6800",Hs.arms)
#}
```

---

**cset2band** *Summarize gene expression data by cytogenetic band*

**Description**

This function will summarize gene expression data by cytogenetic band.

**Usage**

```
cset2band(exprs, genome, chr = "ALL", organism = NULL, FUN = isAbnormal, ...)
```

**Arguments**

- `exprs` matrix of gene expression data or similar. The rownames must contain the gene identifiers
- `genome` an associated chromLoc annotation object
- `chr` a character vector specifying the chromosomes to analyze
- `organism` character, "h" for human, "m" for mouse, and "r" for rat; defaults to NULL - loads from the chromLocation object
- `FUN` function by which to aggregate/summarize each cytogenetic band
- `...` extra arguments passed on to the aggregate/summary function

**Details**

This function loops through each band for a given organism and summarizes the data for genes that lie within each cytogenetic band based upon the input function. For example, a matrix of gene expression values could be used and the mean expression of each band determined by passing the mean function. Alternatively, DNA copy number gains or losses could be predicted using the reb function and regions of likely gain or loss summarized by cytogenetic band using the isAbnormal function.

**Value**

a matrix with rows representing cytogenetic bands, and columns representing individual samples.
**Author(s)**

Karl Dykema

**Examples**

```r
data(mcr.eset)
data(idiogramExample)

## Create a vector with the index of normal samples
norms <- grep("MNC", colnames(mcr.eset@exprs))

## Smooth the data using the default 'movbin' method,
## with the normal samples as reference and median centering
ct <- reb(mcr.eset@exprs, vai.chr, ref=norms, center=TRUE)

## Mask the result to remove noise
exprs <- ct[, -norms]
exprs[abs(exprs) < 1.96] <- NA

## Starting data
midiogram(exprs, vai.chr, method="i", col=.rwb, dlim=c(-4,4))

## Summarize each cytogenetic band
banded <- cset2band(exprs, vai.chr, FUN=mean, na.rm=TRUE)

## Create chromLocation object based on human cytobands
h.cyto <- buildChromCytoband(organism = "h")

## Plot all data using midiogram
midiogram(banded, h.cyto, method="i", col=.rwb, dlim=c(-4,4))
```

---

**fromRevIsh** *Convert from revish strings to a matrix*

**Description**

This function will convert two lists of revish style strings to a matrix format.

**Usage**

```
fromRevIsh(enhList, dimList, chr, organism = "h")
```

**Arguments**

- `enhList` list of enhanced bands on each individual sample
- `dimList` list of diminished bands on each individual sample
- `chr` chromosome to examine
- `organism` character, "h" for human, "m" for mouse, and "r" for rat.

**Value**

A matrix is returned. The rownames of this matrix correspond to the major bands located on that chromosome, and the columns correspond to the sample names.

**Author(s)**

Karl J. Dykema, <karl.dykema@vai.org>, Kyle A. Furge, <kyle.furge@vai.org>

**References**

MCR eset data was obtained with permission.
See PMID: 15377468

**See Also**

reb, revish

**Examples**

```r
mb.chr <- buildChromCytoband("h")
data(mcr.eset)
data(idiogramExample)

## Create a vector with the index of normal samples
norms <- grep("MNC", colnames(mcr.eset@exprs))

## Smooth the data using the default 'movbin' method,
## with the normal samples as reference and median centering
cset <- reb(mcr.eset@exprs, vai.chr, ref=norms, center=TRUE)

## Mask the cset to remove noise
exprs <- cset[, -norms]
exprs[abs(exprs) < 1.96] <- NA

## Extract the aberrations on the 5th chromosome
revish <- revish(exprs, vai.chr, "5")

## Convert back to matrix
reconverted <- fromRevIsh(revish[[1]], revish[[2]], "5")

layout(cbind(1, 2))
idiogram(cset[, -norms], vai.chr, "5", method="i", dlim=c(-2, 2), col=.rwb,
         main="chr 5 reb results")
idiogram(reconverted, mb.chr, "5", method="i", dlim=c(-1, 1), col=.rwb,
         main="chr 5 converted \n and re-converted")
```

---

**Hs.arms** *Human chromosomal arms*

**Description**

The data set gives the human chromosomal arms.

**Usage**

```
data(Hs.arms)
```

**Format**

The format is: chr [1:48] "1p" "1q" "2p" "2q" "3p" "3q" "4p" "4q" "5p" "5q" "6p" "6q" "7p" "7q" "8p" "8q" "9p" ...

**Source**

International System of Human Cytogenetic Nomenclature (ISCN)

---

**isAbnormal** *Is a band 'abnormal'?*

**Description**

Returns 1 or -1 indicating a chromosomal change based upon an input percentage.

**Usage**

```r
isAbnormal(x, percent = 0.5)
```

**Arguments**

- `x` genomic data, can contain NA's
- `percent` numeric argument - a fraction or percentage

**Details**

This simple function is used by cset2band.

**Author(s)**

Karl Dykema

**See Also**

cset2band

**Examples**

```r
# Not abnormal
isAbnormal(c(1,NA))
# Abnormal; +
isAbnormal(c(1,NA,1))
# Abnormal; -
isAbnormal(c(1,NA,-1,-1,-1))
```

---

**mcr.eset** *Example leukemia expression data*

**Description**

An example exprSet and a chromLocation object generated from a gene expression profiling experiment of leukemic and normal blood cells.
Profiling was done on custom pin-printed cDNA arrays.

**Usage**

```
data(mcr.eset)
```

**Source**

**Examples**

```
data(mcr.eset)
str(mcr.eset)
```

---

**movbin** *Moving binomial test*

**Description**

This function analyzes ordered data series to identify regional biases using a moving (running) approximated binomial test.

**Usage**

```
movbin(v, span=NULL, summarize=mean)
```

**Arguments**

- `v` data vector
- `span` numeric vector. Each element is used to define the number of points to include when the approximated binomial test is applied to `v`. The span can be specified as fractions of the number of observations or as actual window sizes, but not a mixture of the two. Defaults to: `seq(25, length(v)*.3, by=5)`
- `summarize` function that is used to summarize the results from multiple spans. If NULL, a matrix with length(span) rows and length(v) columns is returned.

**Details**

movbin applies a moving binomial test to sequential windows of elements of v. Within each span a z-score from an approximated binomial is computed such that

\[ z = \frac{2r - n}{\sqrt{n}} \]

where \( r \) is the number of positive relative gene expression values and \( n \) is the number of non-zero values within each window.

For convenience, this function allows for the specification of multiple window sizes using the span argument. The result of a movbin call is a matrix with \( \text{length}(\text{span}) \) rows and \( \text{length}(v) \) columns. Each row of the matrix represents the data generated from each span. This matrix can be returned, or it can be condensed to a single vector of length \( v \) by applying the summary function `summarize` to the matrix columns.

**Value**

Either a matrix or a vector containing the summarized z-scores from the applied binomial test.

**Author(s)**

Kyle A. Furge, Ph.D., <kyle.furge@vai.org> and Karl J.
Dykema, <karl.dykema@vai.org> Examples ```r x <- c(rnorm(50,mean=1),rnorm(50,mean=-1),rnorm(100)) layout(1:2) plot(x,type="h",ylim=c(-5,5)) ## apply the approximated binomial with a single span mb <- movbin(x,span=25,summarize=NULL) lines(mb[1,]) ## try a few different span ranges mb <- movbin(x,span=c(10,25,50),summarize=NULL) lines(mb[1,]) ## span of 10 lines(mb[2,]) ## span of 25 lines(mb[3,]) ## span of 50 ## average the results from the different spans plot(x,type="h",ylim=c(-5,5)) mb <- movbin(x,span=c(10,25,50),summarize=mean) lines(mb,col="blue") mb <- movbin(x,span=c(10,25,50),summarize=median) lines(mb,col="red") ``` Description This function analyzes ordered data series to identify regional biases using an moving (running) approximated t-test. Usage `movt(v, span=NULL, summarize=mean)` Arguments - `v` data vector - `span` numeric vector. Each element is used to define the number of points to include when the approximated binomial test is applied to `v`. While mixed for the defaults, the span can be specified as fraction of the observation or actual sizes, but not a mixture - defaults to: `seq(25, length(v) * 0.3, by=5)` - `summarize` function that is used to summarize the results from multiple spans. If NULL, a matrix with `length(span)` rows and `length(v)` columns is returned. Details `movt` acts very similar to `movbin` Value Either a matrix or a vector containing the summarized z-scores from the applied t-test. Author(s) Kyle A. Furge, Ph.D., `<kyle.furge@vai.org>` and Karl J. 
Dykema, `<karl.dykema@vai.org>` See Also `movbin` Examples ```r x <- c(rnorm(50, mean=1), rnorm(50, mean=-1), rnorm(100)) layout(1:2) plot(x, type="h", ylim=c(-5,5)) ## apply the approximated binomial with a single span mb <- movt(x, span=25, summarize=NULL) lines(mb[1,]) ## try a few different span ranges mb <- movt(x, span=10:50, summarize=NULL) lines(mb[1,]) ## span of 10 lines(mb[2,]) ## span of 25 lines(mb[3,]) ## span of 50 ## average the results from the different spans plot(x, type="h", ylim=c(-5,5)) mb <- movt(x, span=10:50, summarize=mean) lines(mb, col="blue") mb <- movt(x, span=10:50, summarize=median) lines(mb, col="red") mb <- movt(x, span=10:50, summarize=max) ``` naMean lines(mb,col="green") --- **naMean** *Wrapper function for the arithmetic mean* **Description** Simple call to mean with the *na.rm* option set to TRUE. **Usage** `naMean(x)` **Arguments** - `x`: An R object **Value** The arithmetic mean of the values in `x`. **Examples** ```r mean(c(1,2,3,NA),na.rm=TRUE) naMean(c(1,2,3,NA)) ``` --- **regmap** *Image function wrapper* **Description** A simple wrapper around the `image` function **Usage** `regmap(m,scale=c(-6,6),na.color=par("bg"),...)` **Arguments** - `m`: a matrix - `scale`: Include a graph scale showing this range of values ‘image’ function - `na.color`: the color to draw over NA values - `...`: additional parameters to ‘image’ Details A small wrapper around the ‘image’ function to display genome region summary statistics. Additional parameters will be passed along to the image function. The scale argument is a two-element vector that provides a floor and ceiling for the matrix and allows a crude scale bar to be included on the lower portion of the graph. For other colors consider using the geneplotter (dChip.colors) or marrayPlots (maPalette) library functions (i.e. regmap(m, col=dChipColors(50))) Author(s) Kyle A. 
Furge

**See Also**

image, summarizeByRegion

**Examples**

```r
m <- matrix(rnorm(6*4),ncol=6)
colnames(m) <- c(1:6)
rownames(m) <- c("1p","1q","2p","2q")
regmap(m,scale=c(-1,1))
```

---

**revish** *Creation of CGH (reverse in situ hybridization) style character strings*

**Description**

This function returns two lists of character strings. These two lists correspond to the enhanced and diminished chromosomal bands.

**Usage**

```r
revish(cset, genome, chr, organism = NULL)
```

**Arguments**

- `cset` expression set containing cytogenetic predictions, see `reb`
- `genome` chromLocation object containing annotation information
- `chr` chromosome to examine
- `organism` if NULL, determination of the host organism will be retrieved from the `organism` slot of the chromLocation object. Otherwise "h", "r", or "m" can be used to specify human, rat, or mouse chromosome information

**Value**

- `enh` list of enhanced bands on each individual sample
- `dim` list of diminished bands on each individual sample

**Author(s)**

Karl J. Dykema, <karl.dykema@vai.org>, Kyle A. Furge, <kyle.furge@vai.org>

**References**

MCR eset data was obtained with permission. See PMID: 15377468

**See Also**

reb

**Examples**

```r
data(idiogramExample)
ix <- abs(colo.eset) > .225
colo.eset[ix] <- NA
idiogram(colo.eset,ucsf.chr,"14",method="i",dlim=c(-1,1),col=.rwb)
revlist <- revish(colo.eset,ucsf.chr,"14")
str(revlist)
```

---

**rmAmbigMappings** *Remove genes that map to multiple chromosomes from a chromLocation object*

**Description**

Due to the automated probe annotation, a subset of probes can be “confidently” mapped to multiple chromosomes on the genome. This can cause some confusion if you are trying to perform certain types of data analysis. This function examines a chromLocation object and removes probes that map to multiple chromosomes.

**Usage**

```
rmAmbigMappings(cl)
```

**Arguments**

- `cl` an existing chromLocation object

**Value**

A chromLocation object

**Author(s)**

Kyle A.
Furge

**See Also**

buildChromLocation

**Examples**

```r
##
## NOTE: This requires an annotation package to work, it is provided for info only.
##
#if (require(hu6800)) {
#   library(Biobase)
#   library(annotate)
#   ## Build a specific chrom arm
#   cl <- buildChromLocation("hu6800")
#   cleanCL <- rmAmbigMappings(cl)
#}
```

---

**smoothByRegion (reb)** *Smooth gene expression data to identify regional expression biases*

**Description**

This function "smooths" gene expression data to assist in the identification of regional expression biases.

**Usage**

```
reb(eset, genome, chrom = "ALL", ref = NULL, center = FALSE, aggrfun=absMax,
    method = c("movbin", "supsmu", "lowess", "movt"), ...)
```

**Arguments**

- `eset` the expression set to analyze
- `genome` an associated chromLoc annotation object
- `chrom` a character vector specifying the chromosomes to analyze
- `ref` a vector containing the index of reference samples from which to make comparisons. Defaults to NULL (internally referenced samples)
- `center` boolean - re-center gene expression matrix columns. Helpful if `ref` is used
- `aggrfun` a function to summarize/aggregate gene expression values that map to the same locations. Defaults to the maximum absolute value, `absMax`. If NULL, all values are included.
- `method` smoothing function to use - either "supsmu", "lowess", "movbin" or "movt"
- `...` additional parameters to pass along to the smoothing function

**Details**

reb returns an eset that contains predictions of regional expression bias using data smoothing approaches. The exprSet is separated into subsets based on the genome chromLocation object, and the gene expression data within the subsets is organized by genomic location and smoothed. In addition, the approx function is used to estimate data between any missing values. This was implemented so the function follows the "principle of least astonishment".

Smoothing approaches are most straightforwardly applied by comparing a set of test samples to a set of control samples.
For single-color experiments, the control samples can be specified using the ref argument and the comparisons are generated internal to the reb function. This argument can also be used for two-color experiments provided both the test and control samples were run against a common reference.

If multiple clones map to the same genomic locus, the aggrfun argument can be used to summarize the overlapping expression values to a single summarized value. This can be helpful in two situations. First, the supsmu and lowess smoothing functions do not allow for duplicate values. Currently, if duplicate values are found and these smoothing functions are used, the duplicate values are simply discarded. Second, if 50 copies of the actin gene are present on the array and actin changes expression under a given condition, it may appear as though a regional expression bias exists as 50 values within a region change expression. Summarizing the 50 expression values to a single value can partially correct for this effect.

The idiogram package can be used to plot the regional expression bias.

**Value**

An exprSet

**Author(s)**

Kyle A. Furge, <kyle.furge@vai.org>, Karl J. Dykema, <karl.dykema@vai.org>

**References**

MCR eset data was obtained with permission. See PMID: 15377468

**See Also**

movbin, idiogram

**Examples**

# The mcr.eset is a two-color gene expression exprSet
# with cytogenetically complex (MCR) and normal
# control (MNC) samples which are a pooled-cell line reference.
data("mcr.eset")
data(idiogramExample)

## Create a vector with the index of normal samples
norms <- grep("MNC",colnames(mcr.eset@exprs))

## Smooth the data using the default 'movbin' method,
## with the normal samples as reference
cset <- reb(mcr.eset@exprs,vai.chr,ref=norms,center=TRUE)

## Display the results with midiogram
midiogram(cset@exprs[,-norms],vai.chr,method="i",dlim=c(-5,5),col=.rwb)

---

**summarizeByRegion (cgma)** *Compute Summary Statistics of Genome Regions*

**Description**

Splits the data into subsets based on genome mapping information, computes summary statistics for each region, and returns the results in a convenient form. (cgma stands for Comparative Genomic Microarray Analysis.) This function applies a t.test function at the empirically derived significance threshold (p.value = 0.005).

**Usage**

```
cgma(eset, genome, chrom="ALL", ref=NULL, center=TRUE, aggrfun=NULL,
     p.value=0.005, FUN=t.test, verbose=TRUE, explode=FALSE, ...)
```

**Arguments**

- `eset` an exprSet object
- `genome` a chromLocation object, such as one produced by buildChromLocation or buildChromMap
- `chrom` a character vector specifying the chromosomes to analyze
- `ref` a vector containing the index of reference samples from which to make comparisons. Defaults to NULL (internally referenced samples)
- `center` boolean - re-center gene expression matrix columns. Helpful if `ref` is used
- `aggrfun` a function to summarize/aggregate gene expression values that map to the same locations. If NULL, all values are included. Also see absMax
- `p.value` p.value cutoff, NA for all results, or TRUE for all t.stats and p.values
- `FUN` function by which to summarize the data
- `verbose` boolean - print verbose output during execution?
- `explode` boolean - explode summary matrix into a full expression set?
- `...` further arguments passed to or used by the function

**Details**

Gene expression values are separated into subsets based on the 'chromLocation' object argument.
For example, buildChromMap can be used to produce a 'chromLocation' object composed of the genes that populate human chromosome 1p and chromosome 1q. The gene expression values from each of these regions are extracted from the 'exprSet' and a summary statistic is computed for each region.

cgma is most straightforwardly used to identify regional gene expression biases when comparing a test sample to a reference sample. For example, a number of simple tests can be used to determine if a genomic region contains a disproportionate number of positive or negative log transformed gene expression ratios. The presence of such a regional expression bias can indicate an underlying genomic abnormality.

If multiple clones map to the same genomic locus, the aggrfun argument can be used to include a summary value for the overlapping expression values rather than include all of the individual gene expression values. For example, if 50 copies of the actin gene are on a particular array and actin changes expression under a given condition, it may appear as though a regional expression bias exists as 50 values in a small region change expression.

regmap is usually the best way to plot results of this function. idiogram can also be used if you set the "explode" argument to TRUE.

buildChromLocation.2 can be used to create a chromLocation object in which the genes can be divided in a number of different ways. Separating the data by chromosome arm was the original intent. If you use buildChromLocation.2 with the "arms" argument to build your chromLocation object, set the "chrom" argument to "arms" in this function.

**Value**

- `m` a matrix of summary statistics

**Author(s)**

Kyle A. Furge

**References**

**See Also**

buildChromMap, tBinomTest, regmap, buildChromLocation.2

**Examples**

## Not run:
##
## NOTE: This requires an annotation package to work.
## In this example packages "hu6800" and "golubEsets" are used.
## They can be downloaded from http://www.bioconductor.org
## "hu6800" is under MetaData, "golubEsets" is under Experimental Data.

if(require(hu6800) && require(golubEsets)) {
  data(Golub_Train)
  cloc <- buildChromMap("hu6800",c("1p","1q","2p","2q","3p","3q"))

  ## For one-color expression data
  ## compare the ALL samples to the AML samples
  ## not particularly informative in this example
  aml.ix <- which(Golub_Train$"ALL.AML" == "AML")
  bias <- cgma(eset=Golub_Train,ref=aml.ix,genome=cloc)
  regmap(bias,col=.rwb)

  ## A more interesting example:
  ## The mcr.eset is a two-color gene expression exprSet where
  ## cytogenetically complex (MCR), cytogenetically simple (CN)
  ## leukemia samples and normal control (MNC) samples were profiled
  ## against a pooled-cell line reference.
  ## The MCR eset data was obtained with permission. See PMID: 15377468
  ## Notice the diminished expression on chromosome 5 in the MCR samples
  ## and the enhanced expression on chromosome 11.
  ## This reflects chromosome gains and losses as validated by CGH.
  data("mcr.eset")
  data(idiogramExample)
  norms <- grep("MNC", colnames(mcr.eset@exprs))
  bias <- cgma(mcr.eset@exprs, vai.chr, ref=norms)
  regmap(bias, col=topo.colors(50))
}
## End(Not run)

---

**tBinomTest** *Binomial t-test*

**Description**

Binomial t-test.

**Usage**

```r
tBinomTest(x, trim=.1)
```

**Arguments**

- `x` numeric argument
- `trim` trim at?

**Value**

bla bla bla

**Author(s)**

Karl A. Dykema and Kyle A. Furge

**Examples**

```r
cat("this is an example")
```

---

**writeGFF3** *Output of a GFF compliant table describing the enhanced and diminished chromosomal bands*

**Description**

This function writes out a GFF compliant tab delimited file for integration with genome browsers.
**Usage**

```
writeGFF3(cset, genome, chr, file.prefix = "temp.gff", organism = NULL)
```

**Arguments**

- `cset` expression set containing cytogenetic predictions, see reb
- `genome` chromLocation object containing annotation information
- `chr` chromosome to examine
- `file.prefix` character string - name of the output file, defaults to "temp.gff"
- `organism` if NULL, determination of the host organism will be retrieved from the organism slot of the chromLocation object. Otherwise "h", "r", or "m" can be used to specify human, rat, or mouse chromosome information

**Value**

writeGFF3 returns an invisible list of character vectors.

**Author(s)**

Karl J. Dykema, <karl.dykema@vai.org>, Kyle A. Furge, <kyle.furge@vai.org>

**References**

MCR eset data was obtained with permission. See PMID: 15377468

**See Also**

reb

**Examples**

```r
data(idiogramExample)
ix <- abs(colo.eset) > .225
colo.eset[ix] <- NA
idiogram(colo.eset, ucsf.chr, "14", method="i", dlim=c(-1,1), col=.rwb)
gffmat <- writeGFF3(colo.eset, ucsf.chr, "14", NULL)
gffmat[1:4,]
```

---

**Index**

- Topic arith: isAbnormal
- Topic datasets: Hs.arms, mcr.eset
- Topic manip: absMax, buildChromCytoband, buildChromMap, cset2band, fromRevIsh, movbin, movt, naMean, regmap, revish, rmAmbigMappings, smoothByRegion, summarizeByRegion, tBinomTest, writeGFF3

absMax, buildChromCytoband, buildChromLocation, buildChromLocation.2, buildChromMap, cgma (summarizeByRegion), cset2band, fromRevIsh, Hs.arms, image, isAbnormal, mcr.eset, movbin, movt, naMean, reb (smoothByRegion), regmap, revish, rmAmbigMappings, smoothByRegion, summarizeByRegion, tBinomTest, writeGFF3
Automated eContract Negotiation in Web Service Environment: Trust Management Aspects

Marius Šaučiūnas
Vilnius University, Institute of Mathematics and Informatics, PhD student
Akademijos Str. 4, LT-08412 Vilnius
E-mail: m.sauciuunas@gmail.com

Albertas Čaplinskas
Vilnius University, Institute of Mathematics and Informatics, Principal Researcher
Akademijos Str. 4, LT-08412 Vilnius
E-mail: albertas.caplinskas@mii.vu.lt

The paper addresses trust management problems in automated eContract negotiation among software agents in the web service environment. From the point of view of trust management, the aim of the negotiation process is to choose the most trustworthy providers from those who provide services that satisfy certain functional and other requirements. In order to negotiate about trust, the negotiation process should provide mechanisms to reason about requesters' policies, which specify who may access private information and under what conditions (Kagal et al., 2004), and to guarantee that no legal norms would be violated in the contract. The paper presents the details of the trust negotiation problem and the approaches that have been proposed to solve this problem. It also gives a critical analysis of the proposed approaches and summarizes their challenges and drawbacks. The authors also analyse one of the more advanced conceptual frameworks of the negotiation process from the trust modelling perspective, highlight its drawbacks, and propose how to improve this framework.

Introduction

The object of this paper is a critical analysis of the automated trust negotiation process among software agents in the web service environment. At present, the whole contract lifecycle in eBusiness, including negotiation, the preparation of the eContract, and its acceptance, is handled mainly manually.
In order to develop an electronic contract, people should not only write and agree upon it, but also translate it manually into a certain computer-readable internal representation (Hasselmeyer et al., 2006). Automated negotiation and the usage of eContracts is still a challenge. Automated negotiation is especially important in dynamic environments in which short-term contracts prevail. Such contracts have to be set up dynamically to meet the short-term needs of end-users and service providers. In such circumstances, it is impossible to rely on the partners' trustworthiness characteristics, because they have not been tested by long-term experience or are even completely unknown. According to Comuzzi et al. (2005), in dynamic environments negotiation seems to be the most suitable mechanism to agree on the features of a dynamic contract.

The goal of the paper is to discuss the state of the art of the automated trust negotiation problem, to highlight the challenges and drawbacks of the proposed solutions, and to contrast the conceptual modelling problems of trust management aspects with the current negotiation process modelling concepts. The main contribution of the paper is a proposal on how to improve one of the more advanced object-oriented modelling frameworks for the negotiation process (Lin, 2008).

The rest of the paper is organized as follows. Section 2 presents the details of the trust negotiation problem, Section 3 surveys the current state of the art of the proposed solutions, Section 4 analyses the concepts used to model the trust negotiation process and proposes how to improve Lin's conceptual modelling framework, and, finally, Section 5 concludes the work.

Trust negotiation problem

The trust negotiation problem arises in the context of eContracting. One of the most important requirements is that in Semantic Web environments eContracts should be prepared automatically, without human intervention.
Contracts should be prepared by negotiation among software agents. Trust is one of the negotiated issues. Traditional security mechanisms, which assume that parties are known to each other and that trust can be granted only on the basis of partner identity, are insufficient in the Semantic Web environment. To identify the requester in this environment, the provider requires additional information sufficient for it to make the access permission decision. On the other hand, the requester wants to restrict the conditions under which his personal information will be automatically disclosed. In some cases, even the requirements to be fulfilled between parties cannot be publicly disclosed. In such cases, parties can disclose confidential information (e.g., credentials or sensitive business rules) to each other only iteratively, by negotiation, increasing the level of trust at each step. To specify the information associated with parties and the requirements to be fulfilled, an agent-understandable language with well-defined semantics is required. In addition, the negotiation mechanisms should be semantically enriched so that the required authorisation process is supported, the illegal disclosure of information is not possible, access to sensitive resources is controlled, and trust between contracting parties is ensured (Bonatti, Olmedilla, 2007). From the latter requirement it follows that the trust management problem is essential in the context of eContracting, because many different aspects, including the implementation of trust relationships among the parties and the choice of a relevant trust model, should be considered in order to solve this problem. These issues will be discussed in more detail in the next section.

The critical analysis of the proposed solutions

Authorization and privacy for Semantic Web Services

A significant amount of research has been done in trust negotiation for Semantic Web Services.
Such services should be discovered and invoked automatically. The interaction with services is also performed automatically, and the decision which information has to be exchanged needs to be autonomous. To meet these requirements, Semantic Web services should handle users' private information that has to be protected and autonomously decide who can access it and under what conditions. Policies, as part of Web Service representations, could be used for this purpose (Kagal et al., 2004). In addition, Web Services should know how to reason about their users' policies. The main role of policies is to specify who can use a service and under what conditions, and to define the information handling rules. However, the notion of policy is not unambiguous. Policies are also used for other purposes. For example, security policies constrain access to some resources, and trust management policies are used to collect agent properties in open environments. Policies can also define business rules, and formalize and automate business decisions. Besides, policies can be reactive, include actions to collect information about events (e.g., event logging), and have some side effects. On the other hand, all kinds of policies share common information and tightly interact with each other (Bonatti et al., 2005). Privacy and authorization policies have been proposed by Kagal et al. (2004). They determine under what conditions information can be exchanged and what usage of this information is legitimate. They also constrain the provider to accept requests for service only from certain requesters. The use of these policies is symmetric; they constrain both the provider and the requester. It is assumed that the requester and the provider discover each other's policies during the negotiation on the contract. Afterwards, they must decide whether they can satisfy each other's requirements. The privacy policies can be interpreted as a contractual obligation.
If some partner provides details of another partner to a third party, the person represented by the injured partner could take legal action against the guilty partner on the basis of the policy. This approach was developed for the client–server architecture, but it can easily be extended to service–service interactions. Some authors of the work of Kagal et al. (2004) were also involved in the development of the OWL-S specification (Martin et al., 2004) and for this reason are quite familiar with the details of OWL-S. Thus, Kagal et al. (2004) proposed the so-called semantic markup that specifies the security characteristics of Web Services' I/O parameters in OWL-S, "keeping information about the data's structure but without revealing its value" (Kagal et al., 2004). This provides the basis for determining whether a service parameter fits a requester's requirements and whether two services' I/O parameters match. The Rei language (Kagal, 2002) is used to describe such policies. The Rei language is based on first-order logic and includes an RDF interface based on a given ontology (McBride et al., 2004). In this language, the deontic concepts of rights, permissions, obligations, dispensations, and policy rules are represented as Prolog predicates. The Rei framework provides a policy engine that reasons about the policy specifications. The OWL-S Matchmaker acts as a service discovery agency: it takes the OWL-S description of a service that matches the requester's functional requirements, retrieves the requester's policies, extracts the policies from the provider's profile, and sends the OWL-S description and the policies to the Rei reasoning engine, which reasons about the compatibility of the partners. If the policies are not compatible, the reasoning engine returns false, and the Matchmaker continues to check the next service for compatibility.
Otherwise, the reasoning engine returns true, and the Matchmaker returns this service to the requester. Although the Rei framework has many advantages, it also has some serious drawbacks. Firstly, it assumes that all policies are public and that a policy engine or a matchmaker decides in a single evaluation step whether two policies are compatible or not (Bonatti, Olmedilla, 2007). However, in some scenarios, sensitive policies should be protected and disclosed iteratively, by negotiation. Secondly, it assumes that both the requester and the provider trust the Matchmaker and will disclose to it all policies, including sensitive ones. However, in decentralised or multi-centre environments, such an assumption cannot be accepted. Since requesters interact with services unknown to them, they are not sure whether they can trust them. Consequently, some means should be provided to achieve trust. According to De Coi, Olmedilla (2008), in such environments even identity-based access control mechanisms may be ineffective and should be replaced by role-based ones (Herzberg et al., 2000).

**Role-based mechanisms**

Access control that uses role-based mechanisms (Herzberg et al., 2000) splits the authorization process into two steps: assignment of roles, and checking whether a member of the assigned role(s) is allowed to perform the requested action. For this purpose, some access control policy is usually used. Such a policy consists of rules that specify which roles must be satisfied before sensitive information can be shared. To prove its role, a party must present signed credentials that represent a statement by some authority that the party performs a particular role. The credentials can be exchanged with different granularity (Nejdl et al., 2005). This means that some attributes unessential for the policy can be hidden. The role memberships required to satisfy a policy are called provisions (Puchalski, Swarup, 2008).
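As a concrete illustration, the two-step scheme might look like the following sketch. The credential-to-role and role-to-permission tables are invented for the example, and the cryptographic verification of the signed credentials is elided; real systems verify each credential against the issuing authority's signature first.

```python
# Illustrative sketch of two-step role-based access control. The role
# memberships a policy requires are its "provisions". Credential names,
# roles, and permissions below are hypothetical.

CREDENTIAL_ROLES = {            # credential type -> role it attests
    "hospital_badge": "physician",
    "student_card": "student",
}

ROLE_PERMISSIONS = {            # role -> actions the policy allows it
    "physician": {"read_record", "update_record"},
    "student": {"read_record"},
}

def assign_roles(credentials):
    """Step 1: map the presented credentials to roles."""
    return {CREDENTIAL_ROLES[c] for c in credentials if c in CREDENTIAL_ROLES}

def authorize(credentials, action):
    """Step 2: allow the action iff some assigned role permits it."""
    roles = assign_roles(credentials)
    return any(action in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(authorize({"student_card"}, "read_record"))    # True
print(authorize({"student_card"}, "update_record"))  # False
```

Note that the decision never inspects the requester's attributes directly; everything flows through the role assignment, which is exactly what the advanced policy languages discussed next avoid.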
The drawback of provisions is that they must be known to both parties. However, each party may represent the same provisions in different ways. In decentralised or multicentre environments, this fact may cause the negotiation process to break down because of the schema matching problem. One more problem with provisions is that they do not provide any control over the usage of information once it has been shared. Puchalski, Swarup (2008) propose to solve this problem by using obligations together with provisions, thereby specifying the actions that must occur after the disclosure of sensitive information. The authors also propose a trust negotiation model that includes support for both provisions and obligations. They model obligations as sets of actions in a bounded time range and extend the parsimonious automated trust negotiation strategy proposed by Winsborough et al. (2000) in such a way that obligations are used to replace provisions in cases when the necessary credentials are not available. The parsimonious strategy aims to achieve a successful negotiation with a minimal exchange of credentials. A drawback of this strategy is that the requirement to present signed credentials is insecure, because sensitive information about the party's credentials, and maybe about other sensitive attributes, can be disclosed. The main challenge and open question of the strategy is how to minimize the disclosure of sensitive information, if this can be done at all.

**Advanced policy languages**

Advanced policy languages, for example, EPAL (Ashley et al., 2003), WSPL, and XACML, provide means to specify mechanisms that make authorization decisions based directly on the properties of the requester and do not split the authorization process into two parts (De Coi, Olmedilla, 2008).
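By way of contrast with the role-based scheme, a direct property-based decision can be sketched as follows; the attribute names and the single rule are assumptions made for illustration, not constructs of EPAL, WSPL, or XACML.

```python
# Sketch of property-based authorization in the spirit of advanced policy
# languages: the decision is computed directly from requester attributes,
# without assigning roles first. Attribute names and the rule are invented.

def decide(requester_attributes, rules):
    """Return 'permit' if any rule's condition holds over the attributes."""
    for rule in rules:
        if rule(requester_attributes):
            return "permit"
    return "deny"

rules = [
    # A condition expressed directly over requester properties,
    # e.g. age and country of residence.
    lambda a: a.get("age", 0) >= 18 and a.get("country") == "LT",
]

print(decide({"age": 25, "country": "LT"}, rules))  # permit
print(decide({"age": 16, "country": "LT"}, rules))  # deny
```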
Finally, the languages that have been developed to support trust negotiation (Winsborough et al., 2000) ensure that "trust between peers is established by exchanging sets of credentials between them in a negotiation which may consist of several steps" (De Coi, Olmedilla, 2008). The trust negotiation aspect for Semantic Web Services in advanced policy languages has also been discussed in many other papers written by a research group headed by Olmedilla (L3S Research Center and University of Hannover) and by other researchers cooperating with this group. Olmedilla et al. (2004) suggest that the problem of trust negotiation can be solved by including trust policies into the WSMO standard (De Bruijn et al., 2005), together with the information disclosure policies of the requester, using the PeerTrust language (Gavriloaie et al., 2004) developed by the authors. This language provides the means to specify trust negotiation and the delegation of authority. In this language, a policy is defined as "a rule that specifies in which conditions a resource (or another policy) might be disclosed to a requester" (Olmedilla et al., 2004). A service requester should include his/her policy in the request. Using this policy, the service discovery agency, or matchmaker in terms of Olmedilla et al. (2004), can compare it with service providers' policies and take the policies' compatibility into account. As a result, trust is established iteratively through the negotiation process. Such an approach requires that the service discovery agency have access to both the requester and the provider policies. This requirement impacts the architecture of the registry and the service discovery agency. The authors propose a distributed architecture allowing the service providers to keep their policies private, and an algorithm that matches the requester and provider policies in order to determine whether trust between them can be established.
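A policy-matching algorithm of the kind just mentioned might proceed roughly as in this sketch, in which each resource is guarded by a rule stating which peer resources must already have been disclosed, so that matching unfolds iteratively. The rule structure is our simplification of the cited definition ("a rule that specifies in which conditions a resource might be disclosed"), not the actual PeerTrust algorithm.

```python
# Illustrative sketch of iterative policy matching: disclosure of each
# resource is guarded by a rule, and a rule's condition may itself demand
# resources guarded by the other party's rules. All names are invented.

def can_disclose(party, resource, peer_disclosed):
    """A resource is disclosable when every resource its guarding rule
    requires has already been disclosed by the peer."""
    rule = party["rules"].get(resource)
    if rule is None:
        return False
    return all(r in peer_disclosed for r in rule["requires"])

def match_policies(requester, provider, target):
    """Iteratively exchange resources until the provider can disclose the
    target resource (trust established) or no further progress is possible."""
    disclosed = {"requester": set(), "provider": set()}
    while True:
        progressed = False
        for name, party, peer in (("requester", requester, "provider"),
                                  ("provider", provider, "requester")):
            for resource in party["rules"]:
                if (resource not in disclosed[name]
                        and can_disclose(party, resource, disclosed[peer])):
                    disclosed[name].add(resource)
                    progressed = True
        if target in disclosed["provider"]:
            return True
        if not progressed:
            return False

requester = {"rules": {"passport": {"requires": ["privacy_policy"]}}}
provider = {"rules": {"privacy_policy": {"requires": []},
                      "discount_service": {"requires": ["passport"]}}}
print(match_policies(requester, provider, "discount_service"))  # True
```

In the example, the provider first discloses its unguarded privacy policy, which unlocks the requester's passport, which in turn unlocks the target service: trust is built up in exactly the stepwise fashion the approach describes.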
A drawback of this approach is that it does not provide any explicit reputation-based trust information, such as feedback from other trusted parties, service supply history, the quantities of delivered services, etc. This is important because the matchmaker could take such information into account in the service discovery process and compare the service providers by trustworthiness parameters when the other requirements are satisfied. The same could be applicable to the service requester.

**Reactive behaviour control**

A number of trust-related policy languages, including PAPL (Bonatti, Samarati, 2000), PeerTrust (Gavriloaie et al., 2004), Ponder, and Protune (Bonatti et al., 2005), have been proposed. Some of them were created in line with the basic requirements for the Semantic Web, such as simplicity, expressiveness, scalability, enforceability, and analyzability (Blaze et al., 1998). However, early policy languages did not provide any means to specify policies controlling reactive behaviour, for example, when "decisions have to be made by taking events into account and consequences of decisions have to be turned into real actions" (Alferes et al., 2008). In addition, reactive behaviour should also take into account the peculiarities of the distributed and heterogeneous environment, where heterogeneity exists in both languages and data. This means that the semantics of the policy language should define actions with respect to events, their sequences, and flow control. An attempt to remove this drawback has been made by Alferes et al. (2008), who proposed a framework for the specification and enforcement of reactive Semantic Web policies. Similar frameworks have also been proposed by Bailey et al. (2005), May et al. (2005), and Paschke et al. (2007).
Typically, such frameworks implement reactive behaviour control using Event-Condition-Action (ECA) rules, which are able to model neither agent control in the process of negotiation nor certain other particular interactions, such as delegation of authority or information disclosure, which are obligatory in electronic contracting for Semantic Web Services. Thus, the ECA rules do not provide a final solution to the problem of agent control in electronic contracting, either. Some time ago, an attempt to solve this problem was made by Bonatti et al. (2010), who extended the concept of reactive Semantic Web policies in such a way that applying such policies ensures trusted communication among the agents. The policies ensure that changes of knowledge stored in some Semantic Web database cause appropriate actions in the real world. Since the policies have the form of ECA rules in which, inter alia, the guarding condition provides for security checks that may be carried out only by trust negotiation, policy-compliant communication should be trusted. SLD (Selective Linear Definite) derivation is used to evaluate the conditions. To this end, explicit trust information exposed on the Semantic Web by other trusted parties is used. In order to access external semantic data, the policy definition language offers a special kind of predicate, the in-predicate, "that allows calls to external methods to be integrated into the policy evaluation process" (Bonatti et al., 2010). In addition to a language to describe reactive Semantic Web policies, Bonatti et al. also propose a policy-compliant negotiation protocol. This protocol provides for "obeying as well as enforcing Semantic Web policies, automated agreement with other systems and trusted interactions with Semantic Web agents" (Bonatti et al., 2010). The details of the negotiation model are not described.
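The shape of such a policy, an ECA rule whose guarding condition embeds a security check, can be sketched as follows. The trust-threshold lookup stands in for a full trust negotiation, and all names (event types, peers, thresholds) are illustrative assumptions.

```python
# Minimal sketch of an ECA (Event-Condition-Action) rule whose condition
# includes a trust check: the action fires only for events from peers
# whose trust level, established earlier by negotiation, is high enough.

class EcaRule:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type
        self.condition = condition   # guarding condition, incl. trust check
        self.action = action         # effect in the "real world"

def dispatch(rules, event, context):
    """Fire every rule matching the event whose condition holds."""
    results = []
    for rule in rules:
        if event["type"] == rule.event_type and rule.condition(event, context):
            results.append(rule.action(event, context))
    return results

rule = EcaRule(
    "kb_update",
    # Guard: apply knowledge-base updates only from sufficiently trusted peers.
    condition=lambda e, ctx: ctx["trust"].get(e["peer"], 0) >= 2,
    action=lambda e, ctx: f"apply {e['fact']} from {e['peer']}",
)
context = {"trust": {"agent-x": 3}}  # trust levels set by prior negotiation

print(dispatch([rule], {"type": "kb_update", "peer": "agent-x",
                        "fact": "price=10"}, context))
print(dispatch([rule], {"type": "kb_update", "peer": "agent-y",
                        "fact": "price=9"}, context))  # untrusted peer: no action
```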
Reactive Semantic Web policies define in a declarative way the behaviour control of agents and combine reactive behaviour control with trust and security features. The main features of the proposed framework are as follows:
- well-defined semantics of the proposed language;
- seamless integration of Semantic Web sources into the reasoning process;
- support for trust negotiation;
- the possibility to use in the negotiation process both strong (e.g., digital signature) and lightweight (e.g., user name and password) evidences.

A reactive language may be based on different programming paradigms (Berstel et al., 2007). For example, production rules can be used instead of ECA rules. While the execution of an ECA rule system is event-driven, the execution of a production rule system is state-driven. Thus, ECA rules ensure a more detailed behaviour control than production rules do, but designing this kind of system is often more expensive than designing a state-based one. Besides, in some systems, state handling could be more important than event handling. Therefore, the choice of the paradigm depends on the system under design. To make the right choice, it is important to know the kind of application the rules are suited for. ECA rules are recommended for distributed applications in which there is a demand to operate with events. Meanwhile, production rules should be used in logically rich applications in which the demand to manage the state of the system in each web node is more important than the demand to manage the distributed aspects of the whole system. A drawback of the ECA rules is that it is unclear how the natural bottom-up evaluation schema of ECA rules should be integrated with the top-down evaluation adopted by the policy languages (Bonatti, Olmedilla, 2007).
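The state-driven alternative can be illustrated by a minimal production-rule evaluator: a rule fires whenever its condition over the current state holds, independently of which event produced that state. This is an assumed toy example, not a fragment of any cited system.

```python
# Sketch of a state-driven production rule system: rules are repeatedly
# evaluated against the current state until a fixpoint is reached. Compare
# with the event-driven ECA model, where rules react to individual events.

def run_production_rules(state, rules):
    """Fire any rule whose condition holds over the state, until stable."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(state):
                if action(state):   # the action reports whether it changed state
                    changed = True
    return state

rules = [
    # If a pending offer is within budget and not yet accepted, accept it.
    (lambda s: s.get("offer") is not None
               and s["offer"] <= s["budget"]
               and not s.get("accepted"),
     lambda s: s.update(accepted=True) or True),
]

state = {"offer": 80, "budget": 100}
print(run_production_rules(state, rules))  # {'offer': 80, 'budget': 100, 'accepted': True}
```

Note that the rule does not care how the offer arrived in the state; only the state itself matters, which is the essence of the contrast drawn above.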
The proposed policy languages also differ in other properties (De Coi, Olmedilla, 2008), for example, the underlying formalism, how well the semantics of the language is defined, the monotonicity of the language, the expressiveness of conditions, what kinds of actions can be specified within a policy, the means to describe the delegation of rights, the supported evidences, the degree to which negotiation is supported, the kinds of answers sent by the policy engine to requesters, and extensibility. An exhaustive comparison of the current policy languages according to these criteria has been done by De Coi and Olmedilla (2008). According to them, only a few current policy languages, namely PAPL (Bonatti, Samarati, 2000), Cassandra (Becker, Sewell, 2004), PeerTrust (Gavriloaie et al., 2004), and Protune (Bonatti et al., 2005), directly support negotiation.

Main challenges of the trust negotiation problem

A number of challenges still exist in the trust negotiation problem. The main ones are as follows (Bonatti, Olmedilla, 2007; Bonatti et al., 2010):
- negotiation success: how to guarantee a successful result of negotiations in cases when serious difficulties arise (e.g., rules are not disclosed because of the lack of trust; credentials cannot be found because their repository is unknown, etc.)?
- optimal negotiations: what strategies should be used to optimize information disclosure in the negotiation process? Is it possible to prevent unnecessary information disclosure by reasonable preconditions?
- choice of service: how should the requester choose a particular service when the request can be fulfilled in several different ways? Both a language for expressing preferences and efficient optimization algorithms are required to solve this problem. Although the problem is more or less explicitly assumed by most approaches to trust negotiation, so far no concrete solution has been proposed.
Additionally, as pointed out in the previous section, the integration of ECA rules is an open issue.

Conceptual modelling of negotiation process and trust management aspects

Lin's conceptual framework

Lin's conceptual framework (Lin, 2008) is one of the widely accepted conceptual models of the negotiation process for web services contracting. He sees this process as a collaboration of three conceptual entities: the service requester, the service provider, and the service discovery agency (Figure 1).

Figure 1. Lin's architecture for negotiating in a service-oriented environment (Lin, 2008)

Each entity is defined by a UML package and further modelled by use case, class, sequence, and package diagrams that define the internal architecture of the entity. The use case diagrams define the goals of each entity and, consequently, all the use case diagrams together model the functional requirements of the negotiation system. Class diagrams specify the classes that implement the entities. Sequence diagrams model the interactions of the objects participating in the negotiation. Lin's conceptual framework provides neither state machine nor activity diagrams. This means that neither the states of entities nor the methods of the classes are modelled. The main scheme of negotiation is as follows:
- the service requester asks the service discovery agency to find the required service and begins the negotiation process with it. If the request requires that a composition of services be delivered, the requester "needs to maintain any relationships among these constituent requests for negotiating and issuing them in an adequate sequence and to deal also with the consequences from negotiating or issuing these requests" (Lin, 2008). The discovery agency can also maintain a list of preferred service providers and may predict the future needs of requesters using the patterns of previous requests. Thus, it can prepare and sign contracts in advance.
For this purpose, it maintains a contract template;
- in order to find the requested service, the service discovery agency maintains a registry of services. It negotiates with the service requester and, as a result, returns to it the descriptions of suitable accessible services together with information about their providers;
- once a suitable service provider is discovered, the service requester negotiates with the service provider on the contract and signs it. The service requester also evaluates the quality of the delivered services and updates the trust values in the list of preferred providers. The trust values are the service requesters' evaluations of the extent to which the promises stated in the corresponding contracts have been fulfilled by the provider;
- any service provider should register its service with the service discovery agency and must negotiate with it for this purpose.

The model provides a service discovery protocol, a service publishing protocol, and a service contracting protocol for the negotiations between the service requester and the service discovery agency, between the service provider and the service discovery agency, and between the service requester and the service provider, respectively.

Analysis of Lin's conceptual framework from the trust management perspective

In order to negotiate about trust, the conceptual model should provide mechanisms to reason about requesters' and providers' policies that determine who can access sensitive information and under what conditions (Kagal et al., 2004). The model should also guarantee that no legal norms will be violated in the negotiated contracts. Lin's conceptual framework (Lin, 2008) does not provide any details on how to do this. The most problematic issues are the way in which the framework models the service discovery agency, and the proposed negotiation protocol.
The model assumes that the service discovery agency should be trusted by any party and that all parties will disclose to it all their policies, including sensitive ones. This is an obvious drawback from the trust management perspective. Another drawback is that the agency collects only the evaluations of service providers presented by the requesters (trust values, in the author's terms). However, such trust values are insufficient to ensure trust between previously unknown parties. The proposed negotiation protocol does not address any trust negotiation issues, except trust values, and does not provide any mechanisms to extend it with trust negotiation strategies. Besides, it does not provide for any authorization decision process. Serious drawbacks from the trust management perspective are also the service discovery and publishing protocols, which do not offer any rules on how to take into account the peculiarities of reactive behaviour (events, event sequences, event flow control, etc.). Also, Lin (2008) does not discuss how the proposed model can be extended for Semantic Web Services, for which the trust requirements are even stronger, because in this case the service discovery agency can determine at run time which actual, previously unknown services should be employed to satisfy the requirements of a requester. Besides, the assumption that the service requester must maintain and negotiate all relationships among the parts of composite services as well as monitor and evaluate the quality of delivered services is not realistic.

Proposals on how to improve Lin's conceptual framework

To adapt Lin's conceptual framework to the needs of trust management, first of all it is necessary to remove the above drawbacks.

Modelling of service discovery agency.
In order to change the assumption that the service discovery agency should be trusted by any party, the framework should provide for trust negotiation between service requesters and the agency and between service providers and the agency. This means that any sensitive information should be disclosed to the agency only step by step, in the process of negotiation. In addition, some trustworthy authority, for example, VeriSign, should issue signed credentials to the agency. Access control, provision, and obligation policies should also be provided to manage the credential exchange process. This means that, in addition to reputation-based trust, the conceptual framework should also provide for policy-based trust mechanisms. To specify such mechanisms, some trust-related policy language should be used. Once signed credentials have been exchanged, the further usage of the disclosed sensitive information should also be controlled. For this purpose, an eAgreement between any service requester and the agency and between any service provider and the agency should be signed, and these agreements should ensure legal sanctions for the disclosure of the protected information without permission.

Negotiation protocol. The negotiation protocol should be supplemented with negotiation strategies and should incorporate an appropriate authorization decision process.

Service discovery and service publishing protocols. The service discovery and service publishing protocols should be extended with reactive Semantic Web policies that combine reactive behaviour control with trust and security management mechanisms.

Maintenance of relationships among the parts of composite services, monitoring and evaluation of the quality of delivered services. The removal of this drawback requires that Lin's framework be reworked fundamentally. The volume of this paper does not allow us to discuss the required reworking in detail.
The main idea is that mechanisms similar to those provided by the JBoss (Jamae, Johnson, 2009) or Spring (Laddad, 2010) frameworks should be used for this purpose.

Extension of the framework for the Semantic Web. Advanced trust-related and reactive policies adapt Lin's conceptual framework to the requirements of the Semantic Web. OWL-based ontologies for describing services should also be provided in the model. To ensure the processing of semantic annotations in service discovery, the matchmaking algorithms should be changed.

Summary and Conclusions

In this study, a critical analysis of the automated eContract trust negotiation process among software agents in the web service environment has been performed. In such an environment, the negotiation mechanism should support the authorisation process, control access to sensitive information, prevent its illegal disclosure, and ensure trust between the contracting parties. From the trust management perspective, several significant aspects, such as trust relationships among the parties or a relevant trust model, have to be taken into account when dealing with the eContract negotiation problem. From this perspective, five major groups of approaches and mechanisms addressing the trust negotiation problem can be identified: policy-based approaches, role-based mechanisms, trust negotiation models, reactive behaviour control, and trust-related policy languages. These groups do not represent independent alternatives but rather complement one another in a hierarchical way. The drawbacks and challenges of each group have been discussed in the paper. Further, Lin's object-oriented negotiation model (Lin, 2008), accepted by many researchers working in the automated negotiation field, has been evaluated from the trust negotiation perspective. Its shortcomings have been highlighted, and some significant improvements of the model have been proposed.
The critical analysis of the automated eContract negotiation problem demonstrates that a lot of different approaches and useful ideas have been proposed to date. However, there is a lack of work synthesizing all these approaches and ideas and recommending how to use the results of this research in practice. A lot of experimental research should be done to this end. This is intended to be a major focus of our further studies.

LITERATURE

METHODS OF TRUST ASSURANCE IN THE AUTOMATED FORMATION OF ELECTRONIC CONTRACTS FOR WORLD WIDE WEB SERVICES

Marius Šaučiūnas, Albertas Čaplinskas

Summary
RTSJ Extensions: Event Manager and Feasibility Analyzer Damien Masson, Serge Midonnet To cite this version: Damien Masson, Serge Midonnet. RTSJ Extensions: Event Manager and Feasibility Analyzer. JTRES 2008, Sep 2008, Santa Clara, California, United States. pp.10–18. hal-00620350 HAL Id: hal-00620350 https://hal-upec-upem.archives-ouvertes.fr/hal-00620350 Submitted on 30 Sep 2011 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. RTSJ Extensions: Event Manager and Feasibility Analyzer Damien Masson and Serge Midonnet Université Paris-Est Laboratoire d'informatique Gaspard-Monge UMR 8049 IGM-LabInfo 77454 Marne-la-Vallée Cedex 2, France {damien.masson, serge.midonnet}@univ-paris-est.fr ABSTRACT We present in this paper our experience implementing, with the RTSJ, advanced algorithms to handle aperiodic traffic. We adapted existing algorithms to take into account constraints brought about by the use of the Java language and by our aim of proposing a portable mechanism. To circumvent some difficulties, we had to use programming workarounds that could be better integrated into the specification. From these experiences resulted a set of modifications to the specification, which we propose to submit to the community in this paper, in addition to a unified event manager framework. 1. INTRODUCTION The aim of the Real-time Specification for Java is to design APIs and virtual machine specifications for the writing of real-time applications in the Java language.
An important aspect of real-time system programming is the feasibility analysis, which ensures that temporal constraints are respected. The case of hard real-time systems composed of periodic tasks has been extensively studied for each of the many scheduling policies. The RTSJ is well designed to write such systems, with feasibility analysis methods integrated both in the Scheduler abstract class and the Schedulable interface. In more realistic systems composed of hard real-time periodic tasks and soft real-time aperiodic events, three approaches are possible to ensure that the interference of aperiodic tasks on periodic ones is bounded: 1) scheduling the aperiodic tasks at a lower priority; 2) bounding the minimal inter-arrival time of aperiodic events, and studying the worst case scenario where they arrive at this worst rate; 3) delegating the service of non periodic events to a mechanism which can be integrated into the analysis. There are classes in the RTSJ to model handlers associated to asynchronous events. These handlers are schedulable objects which can be set up either with AperiodicParameters or with SporadicParameters. The latter enables the event to be integrated into the feasibility analysis process as a stand-alone task, as if it were arriving at its maximal rate. For the third approach, the ProcessingGroupParameters class is proposed. It enables several schedulable objects to share release parameters, such as a mutual periodic CPU time budget. Unfortunately, it does not support any aperiodic task server policy nor any other advanced mechanism to handle aperiodic events. Moreover, as pointed out in [1], it is far too permissive and does not provide appropriate schedulability analysis techniques. Therefore, if the sporadic approach is not possible or too pessimistic, the only remaining solution with the RTSJ is to schedule the task in the background. In [7] we began to address the problem of event managing with an API proposition to write task servers.
We continued this work and developed an approximate slack stealer compatible with the task servers model in [8]. In this paper, we propose to generalize our task server model into an event manager model. We want to set up a homogeneous event manager framework and propose modifications to the specification in order to better include this framework. What we propose exactly is this: to modify the RTSJ feasibility analysis approach and to add methods to the Scheduler class for monitoring task execution and inserting treatments before and after each schedulable object execution. The remainder of the paper is structured as follows: we discuss in Section 2 which programming level is suitable to write event handling mechanisms. We present a task server framework and a slack stealer for RTSJ respectively in Sections 3 and 4. We also discuss modifications to the specification in order to better integrate them. Some results are commented on in Section 5. We expose the limitations of the feasibility analysis model of the RTSJ and propose to extend it in Section 6. Finally we conclude in Section 7, where we recapitulate the extensions which we propose for the RTSJ. 2. PROGRAMMING EVENT HANDLING The aim of a scheduler is to schedule tasks. It is responsible for handling the execution of pending tasks by following... public class MyRealtimeThread extends RealtimeThread{ public static boolean waitForNextPeriod() { computeAfterPeriodic(); boolean b = RealtimeThread.waitForNextPeriod(); computeBeforePeriodic(); return b; }... Figure 1: WaitForNextPeriod() modifications This method call is blocking and the thread is activated automatically by the virtual machine at its next activation. Thus we just have to extend RealtimeThread or another Schedulable object and redefine the waitForNextPeriod() method in the way shown in Figure 1. Then it only remains to write the two methods computeBeforePeriodic() and computeAfterPeriodic().
Although this design may seem to be just a patch, it is extremely useful. For example, in [6] we used it to set up temporal fault detectors, and as we will see in Section 4 it enables computations to be carried out to estimate the available slack time in the system. In a system composed only of periodic tasks, it can be used to find out the amount of time a task has really consumed. A task stops its execution only when it has completed its periodic activation or when a higher priority task begins its own execution. Thus, by measuring the time elapsed since the last entry into one of the two proposed methods, the consumed CPU time of each task can be kept up to date. This supposes that the execution stack is kept, in order to know which task is executing between the two calls. This behaviour is missing from the RTSJ and has been proposed by the JSR-282: "Add a method to Schedulable and processing group parameters, that will return the elapsed CPU time for that schedulable object (if the implementation supports CPU time)". With our patch we can obtain the elapsed CPU time even if the implementation does not support such a method. A drawback is that the cost must be paid both in memory and CPU usage. The proposed MyRealtimeThread class (Figure 1) requires that all the tasks in the system either extend or use it. If one uses both MyRealtimeThread and regular Schedulable objects, one can no longer deduce anything from the elapsed time since the last beginning or end of a task implemented with MyRealtimeThread. Moreover, the patch becomes inefficient if AsyncEventHandlers are used to write periodic tasks, or if the tasks share resources, inducing priority inversions. So it is, in our opinion, a good idea to integrate this mechanism into the RTSJ.
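The elapsed-time bookkeeping just described can be sketched in plain Java. Everything below is illustrative, not RTSJ code: `CpuAccountant`, its method names and the explicit timestamp parameters are assumptions, and the sketch supposes a uniprocessor where only instrumented tasks run. As noted above, a full version would also keep a stack of preempted tasks.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the CPU-time bookkeeping enabled by the two hooks: on every
// computeBeforePeriodic()/computeAfterPeriodic() call we charge the time
// elapsed since the previous hook to the task that was running in between.
// Valid on a uniprocessor where only instrumented tasks execute; a real
// version would keep a stack of preempted tasks.
class CpuAccountant {
    private final Map<String, Long> consumed = new HashMap<>();
    private String running;   // task executing since lastStamp, or null
    private long lastStamp;

    private void charge(long now) {
        if (running != null) {
            consumed.merge(running, now - lastStamp, Long::sum);
        }
        lastStamp = now;
    }

    // called from computeBeforePeriodic() of task 'name'
    synchronized void beforePeriodic(String name, long now) {
        charge(now);
        running = name;
    }

    // called from computeAfterPeriodic() of task 'name'
    synchronized void afterPeriodic(String name, long now) {
        charge(now);
        running = null;       // the task completed its instance
    }

    synchronized long consumedTime(String name) {
        return consumed.getOrDefault(name, 0L);
    }
}
```

Timestamps are passed explicitly here only to keep the sketch deterministic; in practice the hooks would read a monotonic clock themselves.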
In order to do so, we propose to modify the Scheduler abstract class by adding abstract methods automatically called before task instances start and after their completion, for all schedulable objects. Appropriate methods can also be called for other context switches due to monitor control algorithms or tasks which suspend themselves. Then we also propose to add a CPU time monitor to the Scheduler which uses this mechanism. This monitor can be turned off if it is not needed or if cost enforcement is available on the targeted platform. If it is turned on, a method RelativeTime getConsumedCPUTime(Schedulable s) should return the total amount of time the Schedulable s has consumed before its last activation. Another method getTotalCPUTime(Schedulable s) can return the total CPU time consumed by the schedulable. If priority inversion due to resource sharing has occurred, the blocking time values can be incorporated or returned separately. As this mechanism has a non-negligible cost, it has to be integrated into the feasibility analysis. In order to do so in a transparent way, a method `getContextSwitchCost()` should be added in the `Scheduler` class. All the extension propositions we make are summed up in Section 7. 2.2 Ensuring Timing Constraints Let us suppose here that an event management algorithm allows us to start an `AsyncEventHandler h` at the priority `p`, assuming that the worst case execution time of `h` is `C_h`. We need to ensure that `h` does not exceed this worst case execution time, since the algorithm relies on this parameter to allow us to start `h` at the priority `p`. Cost enforcement is an optional behavior in the specification, so we do not want to rely on it. The only remaining solution is to set up a timer which suspends the task if it has not completed its execution when the timer expires. But the consequence is that we need to cancel the timer each time the task is preempted, and resume it each time the task is resumed.
We can do that with the patch proposed in Section 2.1, but the overhead of managing these timers would be linear in the number of pending tasks. Such a cost can neither be bounded nor integrated in the feasibility analysis, since the number of aperiodic tasks cannot be controlled. This induces a strong limitation in the user-land approach to managing non periodic events: we have to schedule them at the highest available priority. When doing that, we ensure that we need only one control timer at a time, and this timer is set up once and for all. The other limitation is that if a thread can be suspended through the `AsynchronouslyInterruptedException` mechanism, it is not possible to resume it. This means that we need to adapt existing algorithms from the real-time scheduling literature. Indeed, available resources (server remaining capacity or slack) can exist but may be lower than the worst case execution cost of any pending aperiodic event. This leads to situations where some resources available in the general context of real-time systems cannot be exploited because of the use of the RTSJ. To limit this side effect, we can act on the queue policy of the event manager. Moreover, for the same reason, a task with a very high cost can never be carried out. To avoid this drawback, we imagined a BS-duplication policy which consists of simultaneously scheduling each aperiodic task in the background and enqueuing it in the event manager. The first replica which completes its execution interrupts the other. 3. TASK SERVERS IMPLEMENTATION Task servers, first introduced in 1987 [5], are special tasks in the system which are in charge of servicing aperiodic traffic. They are characterized by a capacity and a policy to recover it. The execution of the aperiodic traffic cannot interfere with the system more than a server using its full capacity.
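Returning briefly to the BS-duplication idea of Section 2.2: with standard java.util.concurrent utilities (standing in for the RTSJ schedulable objects of the real mechanism), the first-replica-to-complete-interrupts-the-other behaviour can be sketched with `invokeAny`, which returns the first successful result and cancels the remaining tasks. The class and method names are illustrative assumptions, not part of the authors' framework.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the "BS duplication" policy: the same aperiodic work is
// submitted twice -- one submission standing for the background replica,
// the other for the event-manager replica -- and invokeAny() returns the
// first result obtained while cancelling (interrupting) the losing replica.
class BsDuplication {
    static String runDuplicated(Callable<String> backgroundReplica,
                                Callable<String> managedReplica) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // first successful completion wins; the other task is cancelled
            return pool.invokeAny(List.of(backgroundReplica, managedReplica));
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

In the real mechanism both replicas would be RTSJ schedulable objects at different priorities; a thread pool is used here only to make the first-completion-cancels-the-other semantics concrete.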
The main advantage is that these special tasks can be integrated into the feasibility analysis (with or without modifications to it). Many task server algorithms have been proposed over the past two decades [10, 9]. However, the RTSJ does not support any particular task server policy. It proposes two classes `AsyncEvent` and `AsyncEventHandler` to model respectively an asynchronous event and its handler. The only way to include the handler in the feasibility process is to consider it as an independent task, and that implies knowing its worst-case occurring frequency. Even if the maximum value of the frequency is known, it often represents a very pessimistic approximation. The RTSJ also provides the class `ProcessingGroupParameters` (PGP), which allows programmers to assign resources to a group of tasks. A PGP object is like a `ReleaseParameters` object which is shared between several tasks. More specifically, PGP has a `cost` field which defines a time budget for its associated task set. This budget is replenished periodically, since PGP also has a `period` field. This mechanism provides a way to set up a task server at a logical level. Unfortunately it does not take into account any server policy. Moreover, as pointed out in [1], it is far too permissive and it does not provide appropriate schedulability analysis techniques. Finally, as cost enforcement is an optional feature for an RTSJ-compliant virtual Java machine, PGP may not be available. This is the case with the Timesys Reference Implementation of the specification (RI). We proposed a set of classes to write task servers in [7]. This contribution is summed up by Figure 2. It is composed of five new classes: ServableAsyncEvent (SAE), ServableAsyncEventHandler (SAEH), TaskServer, PollingServer and DeferrableServer. The notion of a "Servable" object is similar to the "Schedulable" one, except that a Schedulable is executed by a scheduler, whereas a Servable is handled by a server.
Thus, the Schedulable object is the server. A ServableAsyncEvent extends the regular AsyncEvent. It models an asynchronous event whose handlers can be either asynchronous event handlers, i.e. regular Schedulable objects, or ServableAsyncEventHandlers: shells containing logic that task servers can execute. Each ServableAsyncEventHandler has to be registered in one TaskServer. Then, when a ServableAsyncEvent is fired, each of its regular AsyncEventHandlers is set ready for execution. For each of its servable handlers, the servableEventReleased(SAEH h) method of its associated server is invoked. After this the server is notified and can, for example, enqueue the handler. This allows developers to write different behaviours for different task server policies: the handlers can be scheduled in a FIFO order, or any other desired order, depending on the implemented policy. Finally TaskServer inherits from Scheduler in order to enable the use of the feasibility methods for aperiodic tasks inside the server. 3.1 Polling and Deferrable Server Modifications As a demonstration of the efficiency of this design, we implemented two well-known service policies, the Polling Server and the Deferrable Server, taking into account the limitations described in Section 2 to modify these policies. The servers have to run at the highest priority, which prohibits the use of several servers. As we cannot resume the threads, we start a handler only if the remaining capacity in the server is equal to or greater than its worst case execution time. This leads to situations where the server still has capacity, and has tasks to execute, but remains inactive. In the case of the deferrable server, the loss in performance is limited, as the server has bandwidth conservation, but the polling server loses its remaining capacity when it becomes inactive. To limit the loss of performance, we investigated several queue policies.
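A minimal skeleton of this Servable framework might look as follows. The method bodies, the use of Runnable for the handler logic, and the FifoServer example policy are simplifying assumptions, not the authors' code; in particular, the real TaskServer extends Scheduler and manages capacity and release parameters.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Skeleton of the Servable framework described in the text (simplified).
abstract class TaskServer {
    // invoked when a ServableAsyncEvent owning handler h is fired
    abstract void servableEventReleased(ServableAsyncEventHandler h);
}

class ServableAsyncEventHandler {
    final Runnable logic;         // the shell's logic, run by a server
    final TaskServer server;      // every SAEH is registered in one server
    ServableAsyncEventHandler(Runnable logic, TaskServer server) {
        this.logic = logic;
        this.server = server;
    }
}

class ServableAsyncEvent {
    private final List<ServableAsyncEventHandler> handlers = new ArrayList<>();
    void addHandler(ServableAsyncEventHandler h) { handlers.add(h); }
    // firing notifies each handler's server instead of releasing it directly
    void fire() {
        for (ServableAsyncEventHandler h : handlers) {
            h.server.servableEventReleased(h);
        }
    }
}

// A trivial FIFO policy: enqueue on release, execute when activated.
class FifoServer extends TaskServer {
    private final Queue<ServableAsyncEventHandler> pending = new ArrayDeque<>();
    @Override
    void servableEventReleased(ServableAsyncEventHandler h) {
        pending.add(h);
    }
    void poll() {                 // stands in for the server's activation
        ServableAsyncEventHandler h = pending.poll();
        if (h != null) h.logic.run();
    }
}
```

The point of the indirection is visible in the skeleton: firing the event only notifies the server, and the queue discipline inside the server (here FIFO) decides when the shell's logic actually runs.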
We tried a simple FIFO, a LIFO, scheduling first the task with the highest cost (HCF) and finally the one with the lowest cost (LCF). The policy which performs best in all cases is the LCF policy. Moreover, to ensure that all tasks can eventually be scheduled, we set up a policy we called "BS duplication", as explained in Section 2.2. Figure 3 shows results of simulations we conducted to estimate the loss of performance. Surprisingly, with the impact of the BS duplication, the policies' performances are quite similar. The modified DS policy even performs better than the preemptive one in some cases. From an implementation point of view, there is no specific problem with the Polling Server. We use a delegation to a RealtimeThread set up with PeriodicParameters. Writing the code for the Deferrable Server was a little more tricky, since it can be activated at any moment. So we use a delegation to an AsyncEventHandler associated to a special AsyncEvent "wakeUp". This event is fired each time a ServableAsyncEventHandler is added to the waiting queue of aperiodic requests (i.e. each time a ServableAsyncEvent is fired). This handler is also associated to a PeriodicTimer in order to manage the capacity. Despite the limitations and adaptations to the policies, the performances are much better than just executing the aperiodic service as a background task. However, an extensive evaluation of performances is not the purpose of this paper. 3.2 Integrating Servers and Feasibility Analysis Integrating the Polling Server in the feasibility analysis process is straightforward. Indeed, it is just a regular periodic task in the worst case (when it uses its full capacity). The general overhead of the mechanism can be deducted from the server capacity. We use a PeriodicParameters object in which the field cost is used for the capacity. The Deferrable Server induces more difficulties.
In fact, the feasibility analysis has to be modified because in the worst case, this server does not interfere like a regular periodic task on the lower priority tasks. This is due to its ability to conserve its capacity when there is no traffic to the server. The counter example of a system which is feasible with a Polling Server but not feasible with a Deferrable Server is well known as the double hit effect. It is represented in Figure 4. A sufficient and necessary condition for the feasibility is available and described in [3]. However, with the current specification, the only way to change the feasibility analysis is to replace the scheduler itself, even when the scheduling policy is not modified, which is the case here. We propose the creation of a new interface FeasibilityAnalyzer, which can be integrated into the scheduler as a field, and can be changed by a setter method call. This question will be further discussed in Section 6. 4. A SLACK STEALER FOR RTSJ For the general problem of jointly scheduling hard real-time periodic tasks and soft real-time non periodic events, the best known solution for minimizing the soft tasks' response times whilst guaranteeing hard tasks' deadlines is the use of a slack stealer, proposed in [4] and [2]. An approximate algorithm, DASS, is also presented in [2]. We proposed in [8] MASS, an algorithm to estimate the slack using data updated only when periodic tasks end or begin. This algorithm is more pessimistic than DASS but needs fewer computations, and only when tasks end. The slack at the instant $t$ is the total amount of time during which all the tasks can be suspended without inducing temporal faults (i.e. deadline misses). We perform $O(n)$ complexity operations each time a task ends, and this allows us to compute a lower bound on the available slack in constant time. Implementing this algorithm with RTSJ is simple as it is designed to take into account the RTSJ user-land restrictions evoked in Section 2.
Indeed, the evaluation relies on two pieces of data for each task, kept up to date whenever a task begins and ends. 4.1 Estimating the Slack We consider a process model of a mono processor system, $\Phi$, made up of $n$ periodic tasks, $\Pi = \{\tau_1, ..., \tau_n\}$, scheduled with fixed priorities. Each $\tau_i \in \Pi$ is a sequence of requests for execution characterized by the tuple $\tau_i = (C_i, D_i, T_i, P_i)$, where $C_i$ is the worst case execution time of the request; $D_i$ is its relative deadline; $T_i$ its period and $P_i$ its priority. The system also has to serve an unbounded number $p$ of aperiodic requests, $\Gamma = \{\sigma_1, ..., \sigma_p\}$. A request $\sigma_i \in \Gamma$ is characterized by a worst case execution time $C_i$. Let $S_{i,t}^{max}$ denote the slack at priority level $i$ available at the instant $t$, i.e. the amount of time the processor will be idle at priority levels higher than or equal to $i$ between $t$ and the next $\tau_i$ deadline. Then $S_t^{max}$, the available time at the highest priority at time $t$, is the minimum over all the $S_{i,t}^{max}$. This quantity can increase only when a periodic task ends its periodic execution. So we propose to keep up to date, for each task, a value $S_{i,t}$, a lower bound on $S_{i,t}^{max}$. $S_{i,t}$ is recomputed each time a periodic task ends, and is assumed to have decreased by the elapsed time otherwise. So the $S_{i,t}$ values only have to be correct (i.e. lesser than or equal to $S_{i,t}^{max}$) in such situations. $S_{i,t}$ is described using two elements: firstly a lower bound on the maximum possible work at priority $i$ regardless of lower priority processes, $w_{i,t}$; secondly the effective hard real-time work we have to process at the instant $t$, $c_i(t)$. Equation 1 recapitulates the operations needed to obtain a lower bound on the available slack at time $t$. These operations have an $O(n)$ time complexity.
\[ S_{i,t} = w_{i,t} - c_i(t), \qquad S_t = \min_{1 \leq i \leq n} S_{i,t} \quad (1) \] Equation (2) indicates the operations which have to be performed at the beginning of the task $\tau_i$ and Equation (3) indicates the operations which have to be performed at the end of the task $\tau_i$. Equation 3 still holds even if we allow resource sharing and priority inversions (we just have to replace $C_i$ by $C_i + B_i$), but the operation described by Equation 2 also has to be performed when a critical section is entered. At the end of a $\tau_i$ periodic execution, the work available: - is increased by $T_i$ minus the interference from higher priority tasks activated during its next instance, for $\tau_i$ itself; - is decreased by $dt_w$ but increased by $C_i$ for tasks with a lower priority; - is decreased by $dt_w$ for tasks with a higher priority. The interference, which we shall call $I_i$ in Equation (3), can take two different values depending on the instance. The most accurate value can be found by a constant time complexity operation. The operations needed when a task completes all have a time complexity linear in the number of tasks. Their cost can be bounded by the worst case execution cost $C_{ep}$. Respectively, the operation needed when a periodic task begins has a constant time complexity and can be bounded by the worst case execution cost $C_{bp}$. These values can be integrated into the feasibility analysis process. This integration is not harmonious with the current RTSJ. If we use the MyRealtimeThread class proposed in Figure 1, we have to increase the cost of each task by $C_{ep} + C_{bp}$. A more elegant way to proceed is to use additional methods in the Scheduler class to add the computation before and after all the Schedulable executions, and to add a new parameter to the Scheduler class which represents the context switch cost. 4.2 Using the Slack A slack stealer can be viewed as a task server with a capacity which is always equal to the available slack.
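Numerically, the slack lower bound of Equation (1) amounts to a minimum of per-level differences between $w_{i,t}$ and $c_i(t)$. The sketch below is illustrative only (array-backed state, hypothetical class and field names following the text's notation), not the MASS implementation.

```java
// Numerical sketch of the slack lower bound of Equation (1): per priority
// level i we keep w[i] (bound on the work that can be done at level i) and
// c[i] (pending hard real-time work), and the slack usable at the highest
// priority is min over i of (w[i] - c[i]), clamped at zero.
class SlackEstimator {
    private final long[] w;   // w_{i,t}: available work bound per level
    private final long[] c;   // c_i(t): pending hard real-time work per level

    SlackEstimator(long[] w, long[] c) {
        if (w.length != c.length) throw new IllegalArgumentException();
        this.w = w;
        this.c = c;
    }

    // S_t = min_i (w_i - c_i), never negative
    long slackLowerBound() {
        long s = Long.MAX_VALUE;
        for (int i = 0; i < w.length; i++) {
            s = Math.min(s, w[i] - c[i]);
        }
        return Math.max(0, s);
    }
}
```

Because only the per-level values are maintained incrementally (at task boundaries, as described in Section 4.1), this minimum can indeed be read off in time linear in the number of levels, and the bound usable by a slack stealer is available in constant time once cached.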
Thus when an aperiodic task arrives, the slack can be estimated in a constant time operation. After that, as with the task servers, multiple policies are possible. We can add a new class SlackStealer to Figure 2. This class extends TaskServer. The slack stealer has to be activated aperiodically: when an aperiodic task is released while the queue is empty, and when the slack time is increased while the queue is not empty. So we can use the same mechanism we used for writing the code of the DeferrableServer: the logic of the SlackStealer class can be delegated to an AsyncEventHandler associated to a special AsyncEvent. However, the slack stealer approach is not really similar to that of the server. The server is similar to a reservation approach whilst the slack stealer uses the unused available resources. So we prefer to rename our class TaskServer to EventManager. 5. SOME RESULTS Figure 5 presents some simulation results with the modified server and RTSJ compliant slack stealer algorithms. MASS designates the algorithm implementable with RTSJ using the slack approximation, while ESS and DASS designate the same algorithm with respectively an exact knowledge of the available slack and a slack approximation obtained by the DASS algorithm. We compared several queuing policies, and the best one consisted of first scheduling the task with the lowest cost. We named this policy LCF (Lowest Cost First). The use of the BS-duplication is noted "&BS". Figure 5: RTSJ compliant Slack Stealer and Task servers algorithms. We measured the mean response time of aperiodic tasks with different aperiodic and periodic loads. First, we generated groups of ten periodic task sets with utilization levels of 30, 50, 70 and 90%. The results presented are averages over a group of ten task sets. We conducted the experiments with a variable number of periodic tasks, from 2 to 100. The periods are randomly generated with an exponential distribution in the range [40-2560] time units.
Then the costs are randomly generated with an exponential distribution in the range [1-period] and deadlines with an exponential distribution in the range [cost-period]. Priorities are assigned assuming a Deadline Monotonic Policy. Non feasible systems are rejected, the utilization is computed and systems with an utilization level differing by less than 1% from that required are kept. Then, we generate groups of ten aperiodic task sets with a range of utilization levels (plotted on the X-axis in the following graphs). Costs are randomly generated with an exponential distribution in the range [1-16] and arrival times are generated with a uniform distribution in the range [1-10000]. Our simulations end when all soft tasks have been served. The figure presents the best policy for each algorithm (BS, MPS, MDS, MASS, DASS and ESS) on systems with $D_i < T_i$ for all periodic tasks. Equivalent results are obtained on systems with $D_i = T_i$. For all load conditions, servers bring real improvement compared to BS. The DS offers better performances than the PS. MASS performs better than the DS, DASS better than MASS and ESS better than DASS. For systems with periodic loads of 30% and 50%, results obtained with MASS, DASS and ESS are quite similar.
Considering the differences between these algorithms' time complexities (linear with a very low constant, linear with a high constant, and pseudo-polynomial), this is a very satisfying result. However MASS performances degrade more quickly than DASS ones when the periodic load increases. Nevertheless MASS remains a really good implementable algorithm, even for systems with a periodic load of 90%. 6. FEASIBILITY ANALYSIS DESIGN The feasibility process in the RTSJ is not suitable for mixed task systems. Indeed, the problem is that the methods are part of the scheduler class, and this has several disadvantages. The choice of feasibility analysis (FA) algorithms depends on the type of application. A simple statistical condition can be suitable for a multimedia application. Here, a deadline miss can be acceptable if the analysis ensures that out of every $m$ consecutive periods, at least $k$ instances of the task are executed. In other cases, a necessary but non sufficient condition on the feasibility can be acceptable, for example if a detection mechanism and a temporal fault handler are set up at runtime, or if the worst case execution times are known to have been over-evaluated. For other applications, a sufficient test is needed, a test which may or may not also be necessary. However, the FA is highly dependent on the scheduling algorithm. So in our opinion, the feasibility analyzer has to be a separate object by itself, but must be integrated into the scheduler object as a parameter, which developers can change according to the target application they are writing. Of course, this is already possible with the current specification by overriding the Scheduler and changing the FA methods, but this is not really a coherent approach. Indeed, the FA does not have to affect the scheduling behavior, and changing the analysis algorithm should not mean changing the scheduler object, since the tasks scheduled will remain the same. We propose the addition of an interface FeasibilityAnalyzer in the RTSJ.
A field of Scheduler can be typed with this interface, and all the methods relative to the FA delegated to it. Then it is possible to change the default FA on demand. This interface should have the methods addToFeasibility(Schedulable s) and boolean isFeasible(). We propose two other interfaces extending FeasibilityAnalyzer: SufficientTest and NecessaryTest. Then we can set up a fourth interface, SufficientAndNecessaryTest, which extends the other two. In this way, if a class can use any FA policy, it can just type its field FeasibilityAnalyzer, but all the other degrees of precision can be enforced just by the correct choice of the type. We can easily write the class LoadCondition as an implementation of NecessaryTest. This class must compute the system load (for fixed priority scheduled systems: $U = \sum \frac{C_i}{T_i}$). For the fixed priority preemptive case, which is the only scheduling policy imposed for an RTSJ compliant JVM, we can add the abstract class ResponseTimeAnalysis as an implementation of the interface SufficientAndNecessaryTest, and finally the class FixedPriorityRTAnalysis as its subclass. Possible methods for this abstract class are RelativeTime computeLevelIBusyPeriod(int i), which computes the i-level busy period, and RelativeTime computeResponseTime(int i, int q), which computes the response time of the instance $q$ of the task $\tau_i$. 7. CONCLUSIONS In this paper, we have shown how aperiodic event traffic can be handled with the RTSJ in a portable way. We have presented the necessary adaptation of existing algorithms in order to take into account the constraints brought about by the use of the Java language. We have presented a set of modifications to the RTSJ specification and a unified event management framework. This framework enables RTSJ programmers to write simple and portable application code. We now recapitulate the modifications and additions which we consider to be relevant in order to improve the specification.
Then, the RTSJ extension proposed is the following:

- add `void computeBefore(Runnable logic, RelativeTime cost)` and `void computeAfter(Runnable logic, RelativeTime cost)` methods to the `Scheduler` abstract class. With these methods the scheduler automatically runs the logic parameter before or after each instance of the periodic or non-periodic tasks it handles. The cost parameter is the worst-case execution time of the logic;
- add the boolean field `monitorCPUTime` and the two methods `RelativeTime getConsumedCPUTime(Schedulable s)` and `RelativeTime getTotalCPUTime(Schedulable s)` to the class `Scheduler`, in order to allow CPU time consumption monitoring when the underlying operating system does not directly provide this feature;
- add a `contextSwitchCost` field typed `RelativeTime`, and its getter method, to the `Scheduler` abstract class, in order to integrate into the feasibility analysis the context switch cost, the logic added by the previously proposed methods, and the optional CPU monitoring mechanism;
- add new classes `ServableAsyncEvent` and `ServableAsyncEventHandler`. The first extends the class `AsyncEvent` and models an event which can be associated both with regular schedulable handlers and with special handlers attached to an event manager;
- add the new abstract class `EventManager`, and its implementations `PollingServer`, `DeferrableServer`, and `SlackStealer`. We can provide the Java code for these three classes;
- add the interfaces `FeasibilityAnalyzer`, `SufficientTest`, `NecessaryTest` and `SufficientAndNecessaryTest`;
- add the abstract class `ResponseTimeAnalysis` and its subclass `FixedPriorityRTAnalysis`;
- integrate these feasibility-related classes and interfaces into the `Scheduler` abstract class as a field with setter/getter methods, and delegate the behavior of the existing feasibility analysis methods to this field.

Figure 6 recapitulates these propositions in a UML diagram.
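A rough sketch of the first three items, under simplifying assumptions (long nanosecond costs standing in for `RelativeTime`, no real dispatching, and our own bookkeeping for everything the list above does not fix):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of the proposed Scheduler additions: computeBefore /
 * computeAfter hooks whose declared worst-case costs, together with the
 * proposed contextSwitchCost, are charged to the feasibility analysis.
 */
class SchedulerSketch {
    private final List<Runnable> before = new ArrayList<>();
    private final List<Runnable> after = new ArrayList<>();
    private long extraCostNanos = 0;          // declared cost of the added logic
    private long contextSwitchCostNanos = 0;  // proposed contextSwitchCost field

    void computeBefore(Runnable logic, long costNanos) {
        before.add(logic);
        extraCostNanos += costNanos;
    }

    void computeAfter(Runnable logic, long costNanos) {
        after.add(logic);
        extraCostNanos += costNanos;
    }

    void setContextSwitchCost(long nanos) { contextSwitchCostNanos = nanos; }

    /** Overhead each task instance must be charged in the feasibility analysis. */
    long instanceOverheadNanos() { return extraCostNanos + contextSwitchCostNanos; }

    /** Run one task instance wrapped by the registered before/after logic. */
    void runInstance(Runnable task) {
        before.forEach(Runnable::run);
        task.run();
        after.forEach(Runnable::run);
    }
}
```

The point of the design is visible even in this toy form: the added logic is both executed around every instance and accounted for as extra cost, so the feasibility analyzer sees a per-instance overhead rather than an invisible perturbation.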
In future work, we have to investigate the interactions between the memory parameters and the feasibility analyzer object. We also have to clarify the behavior of the proposed user-land CPU time module when resource sharing is allowed.

**Acknowledgments**

We want to thank Sian Cronin and Aurore Sibois for their valuable advice and English corrections.

**8. REFERENCES**
### Software Life Cycle Models

The goal of Software Engineering is to provide models and processes that lead to the production of well-documented, maintainable software in a manner that is predictable.

"The period of time that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes a retirement phase."

### Build & Fix Model

- Product is constructed without specifications or any attempt at design
- Ad hoc approach, not well defined
- Simple two-phase model
- Suitable for small programming exercises of 100 or 200 lines
- Unsatisfactory for software of any reasonable size
- Code soon becomes unfixable and unenhanceable
- No room for structured design
- Maintenance is practically not possible

### Waterfall Model

This model is named the "waterfall model" because its diagrammatic representation resembles a cascade of waterfalls. It is easy to understand and reinforces the notion of "define before design" and "design before code". The model expects complete and accurate requirements early in the process, which is unrealistic.

Problems of the waterfall model:
i. It is difficult to define all requirements at the beginning of a project.
ii. The model is not suitable for accommodating any change.
iii. A working version of the system is not seen until late in the project's life.
iv. It does not scale up well to large projects.
v. Real projects are rarely sequential.

### Incremental Process Models

They are effective in situations where requirements are defined precisely and there is no confusion about the functionality of the final product. After every cycle a usable product is given to the customer.
Popular particularly when we have to quickly deliver a limited-functionality system.

### Iterative Enhancement Model

This model has the same phases as the waterfall model, but with fewer restrictions. Generally the phases occur in the same order as in the waterfall model, but they may be conducted in several cycles. A usable product is released at the end of each cycle, with each release providing additional functionality.

- Customers and developers specify as many requirements as possible and prepare an SRS document.
- Developers and customers then prioritize these requirements.
- Developers implement the specified requirements in one or more cycles of design, implementation, and test, based on the defined priorities.

Phases: 1. Requirements specification; 2. Architectural design; 3. Detailed design; 4. Implementation and unit testing; 5. Integration and testing; 6. Operation and maintenance.

### The Rapid Application Development (RAD) Model

- Developed by IBM in the 1980s
- User participation is essential

[Cartoon: the same system as defined in the requirements specification, as understood by the developers, as the problem was solved before, as described by the marketing department, and what the customer, in fact, wanted.]

- Build a rapid prototype
- Give it to the user for evaluation and obtain feedback
- The prototype is refined with the active participation of users

Phases: Requirements Planning, User Description, Construction, Cutover.

RAD is not an appropriate model in the absence of user participation. Reusable components are required to reduce development time. Highly specialized and skilled developers are required, and such developers are not easily available.

### Evolutionary Process Models

The evolutionary process model resembles the iterative enhancement model. The same phases as defined for the waterfall model occur here in a cyclical fashion.
This model differs from the iterative enhancement model in that it does not require a usable product at the end of each cycle. In evolutionary development, requirements are implemented by category rather than by priority. This model is useful for projects using new technology that is not well understood. It is also used for complex projects where all functionality must be delivered at one time, but the requirements are unstable or not well understood at the beginning.

[Figure: evolutionary process model — concurrent specification, development, and validation activities transform an outline description into initial, intermediate, and final versions.]

### Prototyping Model

- The prototype may be a usable program but is not suitable as the final software product.
- The code for the prototype is thrown away; however, the experience gathered helps in developing the actual system.
- The development of a prototype might involve extra cost, but the overall cost might turn out to be lower than that of an equivalent system developed using the waterfall model.
- Linear mode
- "Rapid"

### Spiral Model

The models discussed so far do not deal with the uncertainty that is inherent in software projects. Important software projects have failed because project risks were neglected and nobody was prepared when something unforeseen happened. Barry Boehm recognized this and tried to incorporate the "project risk" factor into a life cycle model. The result is the spiral model, which was presented in 1986.

[Figure: Boehm's spiral diagram — successive loops pass through determining objectives, alternatives and constraints (with review and commitment); evaluating alternatives and identifying/resolving risks through risk analysis and prototypes; development and verification (concept, requirements, product design, detailed design, unit test, integration and test); and planning the next cycle (life cycle plan, development plan, integration and test plan).]

The radial dimension of the model represents the cumulative costs.
Each path around the spiral is indicative of increased costs. The angular dimension represents the progress made in completing each cycle. Each loop of the spiral from the X-axis clockwise through 360° represents one phase. A phase is split roughly into four sectors of major activities:

- **Planning:** Determination of objectives, alternatives and constraints.
- **Risk Analysis:** Analyze alternatives and attempt to identify and resolve the risks involved.
- **Development:** Product development and testing of the product.
- **Assessment:** Customer evaluation.

An important feature of the spiral model is that each phase is completed with a review by the people concerned with the project (designers and programmers). The advantage of this model is its wide range of options to accommodate the good features of other life cycle models: it becomes equivalent to another life cycle model in appropriate situations. The spiral model also has some difficulties that need to be resolved before it can be a universally applied life cycle model: it lacks explicit process guidance in determining objectives, constraints and alternatives; it relies on risk-assessment expertise; and it provides more flexibility than required for many applications.

### The Unified Process

- Developed by I. Jacobson, G. Booch and J. Rumbaugh.
- A software engineering process with the goal of producing good-quality maintainable software within specified time and budget.
- Developed through a series of fixed-length mini-projects called iterations.
- Maintained and enhanced by Rational Software Corporation, and thus referred to as the Rational Unified Process (RUP).

Phases of the Unified Process:

- Inception: definition of the objectives of the project.
- Elaboration: planning and architecture for the project.
- Construction: initial operational capability.
- Transition: release of the software product.

In more detail:

- **Inception:** defines the scope of the project.
- **Elaboration:** How do we plan and design the project? What resources are required? What type of architecture may be suitable?
- **Construction:** the objectives are translated into design and architecture documents.
- **Transition:** involves many activities like delivering, training, supporting, and maintaining the product.

[Figure: the initial development cycle (inception, elaboration, construction, transition) is followed by evolution cycles repeating the same four phases, producing versions V1, V2, V3, … until the product is retired.]

[Figure: iterations and workflows of the Unified Process — the modeling, implementation, test, deployment, configuration management, project management, and environment workflows run across the iterations (I1, E1, C1, C2, …, Cn, T1, T2) of the four phases.]

### Inception Phase

The inception phase has the following objectives:

- Gathering and analyzing the requirements.
- Planning and preparing a business case, and evaluating alternatives for risk management, scheduling, resources, etc.
- Estimating the overall cost and schedule for the project.
- Studying the feasibility and calculating the profitability of the project.

Outcomes of the inception phase:

- Prototypes
- Business model
- Vision document
- Initial use case model
- Initial project case
- Initial risk assessment
- Initial business case
- Project plan
- Glossary

### Elaboration Phase

The elaboration phase has the following objectives:

- Establishing architectural foundations.
- Design of the use case model.
- Elaborating the process, infrastructure and development environment.
- Selecting components.
- Demonstrating that the architecture supports the vision at reasonable cost and within specified time.
Outcomes of the elaboration phase:

- Revised risk document
- An executable architectural prototype
- Supplementary requirements, including non-functional requirements
- Architecture description document
- Use case model
- Preliminary user manual
- Development plan

### Construction Phase

The construction phase has the following objectives:

- Implementing the project.
- Minimizing development cost.
- Managing and optimizing resources.
- Testing the product.
- Assessing the product releases against acceptance criteria.

Outcomes of the construction phase:

- Test outline
- Test suite
- Operational manuals
- Documentation manuals
- A description of the current release
- Software product
- User manuals

### Transition Phase

The transition phase has the following objectives:

- Starting beta testing.
- Analysis of users' views.
- Training of users.
- Tuning activities, including bug fixing and enhancements for performance and usability.
- Assessing customer satisfaction.

Outcomes of the transition phase:

- Product release
- Beta test reports
- User feedback

### Selection of a Life Cycle Model

Selection of a model is based on:
a) Requirements
b) Development team
c) Users
d) Project type and associated risk

### Based On Characteristics Of Requirements

<table>
<thead>
<tr> <th>Requirements</th> <th>Waterfall</th> <th>Prototype</th> <th>Iterative enhancement</th> <th>Evolutionary development</th> <th>Spiral</th> <th>RAD</th> </tr>
</thead>
<tbody>
<tr> <td>Are requirements easily understandable and defined?</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> </tr>
<tr> <td>Do we change requirements quite often?</td> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr>
<tr> <td>Can we define requirements early in the cycle?</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr>
<tr> <td>Requirements indicate a complex system to be built</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td>
<td>Yes</td> <td>No</td> </tr> </tbody> </table> ### Based On Status Of Development Team <table> <thead> <tr> <th>Development team</th> <th>Waterfall</th> <th>Prototype</th> <th>Iterative enhancement</th> <th>Evolutionary development</th> <th>Spiral</th> <th>RAD</th> </tr> </thead> <tbody> <tr> <td>Less experience on similar projects?</td> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Less domain knowledge (new to the technology)</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Less experience on tools to be used</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Availability of training if required</td> <td>No</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> </tbody> </table> Based On User’s Participation <table> <thead> <tr> <th>Involvement of Users</th> <th>Waterfall</th> <th>Prototype</th> <th>Iterative enhancement</th> <th>Evolutionary development</th> <th>Spiral</th> <th>RAD</th> </tr> </thead> <tbody> <tr> <td>User involvement in all phases</td> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>Limited user participation</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>User have no previous experience of participation in similar projects</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Users are experts of problem domain</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> </tbody> </table> ### Based On Type Of Project With Associated Risk <table> <thead> <tr> <th>Project type and risk</th> <th>Waterfall</th> <th>Prototype</th> <th>Iterative enhancement</th> <th>Evolutionary development</th> <th>Spiral</th> <th>RAD</th> </tr> </thead> <tbody> <tr> <td>Project is the enhancement of the existing system</td> <td>No</td> <td>No</td> 
<td>Yes</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr>
<tr> <td>Funding is stable for the project</td> <td>Yes</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> </tr>
<tr> <td>High reliability requirements</td> <td>No</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr>
<tr> <td>Tight project schedule</td> <td>No</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr>
<tr> <td>Use of reusable components</td> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr>
<tr> <td>Are resources (time, money, people etc.) scarce?</td> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr>
</tbody>
</table>

### Multiple Choice Questions

Note: Select the most appropriate answer to the following questions:

2.1 Spiral Model was developed by (a) Bev Littlewood (b) Barry Boehm (c) Roger Pressman (d) Victor Basili
2.2 Which model is most popular for students' small projects? (a) Waterfall model (b) Spiral model (c) Quick and fix model (d) Prototyping model
2.3 Which is not a software life cycle model?
(a) Waterfall model (b) Spiral model (c) Prototyping model (d) Capability maturity model
2.4 Project risk factor is considered in (a) Waterfall model (b) Prototyping model (c) Spiral model (d) Iterative enhancement model
2.5 SDLC stands for (a) Software design life cycle (b) Software development life cycle (c) System development life cycle (d) System design life cycle
2.6 Build and fix model has (a) 3 phases (b) 1 phase (c) 2 phases (d) 4 phases
2.7 SRS stands for (a) Software requirements specification (b) Software requirements solution (c) System requirements specification (d) none of the above
2.8 Waterfall model is not suitable for (a) small projects (b) accommodating change (c) complex projects (d) none of the above
2.9 RAD stands for (a) Rapid application development (b) Relative application development (c) Ready application development (d) Repeated application development
2.10 RAD model was proposed by (a) Lucent Technologies (b) Motorola (c) IBM (d) Microsoft
2.11 If requirements are easily understandable and defined, which model is best suited? (a) Waterfall model (b) Prototyping model (c) Spiral model (d) None of the above
2.12 If requirements are frequently changing, which model is to be selected? (a) Waterfall model (b) Prototyping model (c) RAD model (d) Iterative enhancement model
2.13 If user participation is available, which model is to be chosen? (a) Waterfall model (b) Iterative enhancement model (c) Spiral model (d) RAD model
2.14 If limited user participation is available, which model is to be selected? (a) Waterfall model (b) Spiral model (c) Iterative enhancement model (d) any of the above
2.15 If the project is the enhancement of an existing system, which model is best suited?
(a) Waterfall model (b) Prototyping model (c) Iterative enhancement model (d) Spiral model
2.16 Which one is the most important feature of the spiral model? (a) Quality management (b) Risk management (c) Performance management (d) Efficiency management
2.17 The most suitable model for new technology that is not well understood is: (a) Waterfall model (b) RAD model (c) Iterative enhancement model (d) Evolutionary development model
2.18 Statistically, the maximum percentage of errors belongs to the following phase of the SDLC: (a) Coding (b) Design (c) Specifications (d) Installation and maintenance
2.19 Which phase is not available in the software life cycle? (a) Coding (b) Testing (c) Maintenance (d) Abstraction
2.20 The development is supposed to proceed linearly through the phases in (a) Spiral model (b) Waterfall model (c) Prototyping model (d) None of the above
2.21 The Unified process is maintained by (a) Infosys (b) Rational software corporation (c) SUN Microsystems (d) None of the above
2.22 The Unified process is (a) Iterative (b) Incremental (c) Evolutionary (d) All of the above
2.23 Who is not in the team of Unified process developers? (a) I. Jacobson (b) G. Booch (c) B. Boehm (d) J. Rumbaugh
2.24 How many phases are in the unified process? (a) 4 (b) 5 (c) 2 (d) None of the above
2.25 The outcome of the construction phase can be treated as: (a) Product release (b) Beta release (c) Alpha release (d) All of the above

### Exercises

2.1 What do you understand by the term Software Development Life Cycle (SDLC)? Why is it important to adhere to a life cycle model while developing a large software product?
2.2 What is the software life cycle? Discuss the generic waterfall model.
2.3 List the advantages of using the waterfall model instead of the adhoc build and fix model.
2.4 Discuss the prototyping model.
What is the effect of designing a prototype on the overall cost of the project?
2.5 What are the advantages of developing the prototype of a system?
2.6 Describe the type of situations where the iterative enhancement model might lead to difficulties.
2.7 Compare the iterative enhancement model and the evolutionary process model.
2.8 Sketch a neat diagram of the spiral model of the software life cycle.
2.9 Compare the waterfall model and the spiral model of software development.
2.10 As we move outward along the process flow path of the spiral model, what can we say about the software that is being developed or maintained?
2.11 How does the "project risk" factor affect the spiral model of software development?
2.12 List the advantages and disadvantages of involving a software engineer throughout the software development planning process.
2.13 Explain the spiral model of software development. What are the limitations of such a model?
2.14 Describe the rapid application development (RAD) model. Discuss each phase in detail.
2.15 What are the characteristics to be considered for the selection of a life cycle model?
2.16 What is the role of user participation in the selection of a life cycle model?
2.17 Why do we feel that the characteristics of requirements play a very significant role in the selection of a life cycle model?
2.18 Write a short note on the "status of the development team" for the selection of a life cycle model.
2.19 Discuss the selection process parameters for a life cycle model.
2.20 What is the unified process? Explain the various phases along with the outcome of each phase.
2.21 Describe the unified process work products after each phase of the unified process.
2.22 What are the advantages of the iterative approach over the sequential approach? Why is the unified process called iterative or incremental?
Revision Programming with Preferences

by Inna Pivkina, Enrico Pontelli, Tran Cao Son

Outline

1. Basic concepts of revision programming (RP).
2. Two approaches to express preferences:
(a) Control program: example, syntax and semantics, properties.
(b) Soft revision rules with weights: examples, definitions, implementation.

Revision programming is a formalism for describing and enforcing constraints on databases. A database is a collection of atomic facts from some universe. Revision rules specify constraints on a database, and specify a preferred way to satisfy those constraints. Starting from an arbitrary initial database, justified revisions satisfy all constraints, and all changes are justified by revision rules.

Basic concepts

- Revision literals: $in(a), out(a)$ ($a \in U$).
- Revision rules:
$$in(a) \leftarrow in(a_1), \ldots, in(a_m), out(b_1), \ldots, out(b_n) \quad \text{(in-rule)}$$
$$out(a) \leftarrow in(a_1), \ldots, in(a_m), out(b_1), \ldots, out(b_n) \quad \text{(out-rule)}$$
where $a, a_i, b_j \in U$ ($1 \leq i \leq m$, $1 \leq j \leq n$).
- A revision program is a collection of revision rules.

Necessary change

- $\alpha^D$ denotes the dual of a literal $\alpha$: $\text{in}(a)^D = \text{out}(a)$ and $\text{out}(a)^D = \text{in}(a)$.
- A set of literals is coherent if it does not contain a pair of dual literals.
- Let $P$ be a revision program. The necessary change of $P$, $NC(P)$, is the least model of $P$, treated as a Horn program built of independent propositional atoms of the form $\text{in}(a)$ and $\text{out}(b)$.
- A coherent $NC(P)$ specifies a revision.

**Example.** For
$P$: $\text{in}(Ann) \leftarrow$; $\quad \text{out}(Bob) \leftarrow \text{in}(Ann)$; $\quad \text{out}(Tom) \leftarrow \text{out}(Ann)$,
we get $NC(P) = \{\text{in}(Ann), \text{out}(Bob)\}$.

Justified revisions

- Given a database $I$ and a coherent set of literals $L$, define \[ I \oplus L = (I \cup \{a: \text{in}(a) \in L\}) \setminus \{a: \text{out}(a) \in L\}.
\] - Inertia set for databases $I$, $R$: \[ I(I, R) = \{\text{in}(a) : a \in I \cap R\} \cup \{\text{out}(a) : a \notin I \cup R\}. \] - Reduct of $P$ with respect to $(I, R)$ (denoted $P_{I,R}$) – the revision program obtained from $P$ by eliminating from the body of each rule in $P$ all literals in $I(I, R)$. - $P$ - a revision program, $I$ and $R$ - databases. $R$ is called a $P$-justified revision of $I$ if $NC(P_{I,R})$ is coherent and $R = I \oplus NC(P_{I,R})$. Example \[ P : \] \begin{align*} & \text{in}(Ann) \leftarrow \text{out}(Bob) \\ & \text{in}(Bob) \leftarrow \text{out}(Ann) \\ & \text{in}(David) \leftarrow \text{in}(Tom) \\ & \text{out}(Tom) \leftarrow \text{out}(David) \\ & \text{out}(Ann) \leftarrow \text{in}(David) \\ & \text{out}(David) \leftarrow \text{in}(Bob) \end{align*} \[ P_{I,R} : \] \begin{align*} & \text{in}(Ann) \leftarrow \text{out}(Bob) \\ & \text{in}(Bob) \leftarrow \\ & \text{in}(David) \leftarrow \text{in}(Tom) \\ & \text{out}(Tom) \leftarrow \text{out}(David) \\ & \text{out}(Ann) \leftarrow \text{in}(David) \\ & \text{out}(David) \leftarrow \text{in}(Bob) \end{align*} Initial database: \( I = \{David, Tom\}. \) Revision: \( R = \{Bob\}. \) Inertia (no justification is needed): \( \text{out}(Ann). \) Necessary change: \( \text{in}(Bob), \text{out}(David), \text{out}(Tom). \) Updating \( I: \) \( (I \cup \{Bob\}) \setminus \{David, Tom\}. \) Basic properties 1. If a database $R$ is a $P$-justified revision of $I$, then $R$ is a model of $P$. 2. If a database $B$ satisfies a revision program $P$ then $B$ is a unique $P$-justified revision of itself. 3. If $R$ is a $P$-justified revision of $I$, then $R \div I$ is minimal in the family $\{B \div I : B$ is a model of $P\}$. 
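The definitions above can be checked mechanically. Below is a minimal Python sketch (the tuple encoding of literals and the function names are illustrative choices, not from the slides) that computes the necessary change as a Horn-program fixpoint, builds the reduct, and verifies the worked example:

```python
# Sketch of P-justified revision checking. A literal is encoded as
# ('in', atom) or ('out', atom); a rule is (head, [body literals]).
# This encoding is an illustrative choice, not part of the slides.

def necessary_change(rules):
    """Least model of the rules read as a Horn program over
    independent propositions in(a) / out(a)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(l in model for l in body):
                model.add(head)
                changed = True
    return model

def in_inertia(lit, initial, revised):
    """Is lit in the inertia set I(I, R)?"""
    kind, a = lit
    if kind == 'in':
        return a in initial and a in revised
    return a not in initial and a not in revised

def reduct(rules, initial, revised):
    """Drop inertia literals from every rule body."""
    return [(h, [l for l in b if not in_inertia(l, initial, revised)])
            for h, b in rules]

def is_justified_revision(rules, initial, revised):
    nc = necessary_change(reduct(rules, initial, revised))
    if any(('out', a) in nc for k, a in nc if k == 'in'):
        return False                      # incoherent necessary change
    updated = (initial | {a for k, a in nc if k == 'in'}) \
              - {a for k, a in nc if k == 'out'}
    return updated == revised             # R = I (+) NC(P_{I,R})

# The worked example from the slides:
P = [(('in', 'Ann'),    [('out', 'Bob')]),
     (('in', 'Bob'),    [('out', 'Ann')]),
     (('in', 'David'),  [('in', 'Tom')]),
     (('out', 'Tom'),   [('out', 'David')]),
     (('out', 'Ann'),   [('in', 'David')]),
     (('out', 'David'), [('in', 'Bob')])]
```

As in the slides, `{Bob}` is a P-justified revision of `{David, Tom}`, while e.g. `{Ann}` is not: its necessary change `{in(Ann)}` is coherent, but inertia keeps David and Tom in the updated database, so `I ⊕ NC` differs from `{Ann}`.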
Relation to Logic Programming
$P$-justified revisions of $\emptyset$ coincide with stable models of the logic program with constraints, $lp(P)$, obtained from $P$ by replacing revision rules of the form
\[
in(a) \leftarrow in(a_1), \ldots, in(a_m), out(b_1), \ldots, out(b_n)
\]
by
\[
a \leftarrow a_1, \ldots, a_m, \neg b_1, \ldots, \neg b_n
\]
and replacing revision rules of the form
\[
out(a) \leftarrow in(a_1), \ldots, in(a_m), out(b_1), \ldots, out(b_n)
\]
by constraints
\[
\leftarrow a, a_1, \ldots, a_m, \neg b_1, \ldots, \neg b_n.
\]

Shifting
Let $I$, $J$ be databases and let $W = I \div J = (I \setminus J) \cup (J \setminus I)$ be the set of atoms that change status. Define a $W$-transformation (shift) as follows. For a literal $\alpha$ ($\alpha = in(a)$ or $\alpha = out(a)$),
$T_W(\alpha) = \begin{cases} \alpha^D, & \text{when } a \in W \\ \alpha, & \text{when } a \notin W. \end{cases}$
For a set of literals $L$, $T_W(L) = \{ T_W(\alpha) : \alpha \in L \}$.
For a set of atoms $X$, $T_W(X) = \{ a : in(a) \in T_W(\{ in(b) : b \in X \} \cup \{ out(b) : b \notin X \}) \}$.
For a revision program $P$, $T_W(P)$ is obtained from $P$ by applying $T_W$ to each literal in $P$.

Shifting theorem
For any databases $I_1$ and $I_2$, a database $R$ is a $P$-justified revision of $I_1$ if and only if $T_{I_1 \div I_2}(R)$ is a $T_{I_1 \div I_2}(P)$-justified revision of $I_2$.
**Corollary.** For each $I$ and $R$, $R$ is a $P$-justified revision of $I$ if and only if $T_I(R)$ is a $T_I(P)$-justified revision of $\emptyset$.

Computing justified revisions by means of LP
1. Given $P$ and $I$, apply $T_I$ to obtain $T_I(P)$ and $\emptyset$.
2. Convert $T_I(P)$ into the logic program $lp(T_I(P))$.
3. Compute its answer sets.
4. Apply $T_I$ to the answer sets to obtain the $P$-justified revisions of $I$.
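The shift $T_W$ is straightforward to implement. A small Python sketch (literals encoded as `('in', a)` / `('out', a)` tuples, an illustrative choice not taken from the slides):

```python
# Sketch of the W-transformation (shift) used by the LP-based
# computation of justified revisions.

DUAL = {'in': 'out', 'out': 'in'}

def shift_literal(lit, w):
    """T_W on a literal: dualize exactly the atoms in W."""
    kind, a = lit
    return (DUAL[kind], a) if a in w else lit

def shift_program(rules, w):
    """T_W applied to every literal of every rule."""
    return [(shift_literal(h, w), [shift_literal(l, w) for l in body])
            for h, body in rules]

def shift_database(db, w):
    """T_W on a set of atoms reduces to symmetric difference."""
    return db ^ w
```

Shifting by $W = I$ itself maps the initial database $I$ to $\emptyset$ (`shift_database(I, I) == set()`), which is exactly what step 1 of the LP-based procedure exploits; applying the same shift twice returns the original program or database, since $T_W$ is an involution.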
A robot is equipped with sensors which provide observations:
\[ \text{observation}(Par, Value, Sensor) \]
The view of the world has exactly one value for each parameter:
\[ \text{world}(Par, Value, Sensor) \]
An RP updates the view of the world; it consists of rules of the following types:
\[ \text{in}(\text{observation}(Par, Value, Sensor)) \leftarrow \]
\[ \text{in}(\text{world}(Par, Value, Sensor)) \leftarrow \text{in}(\text{observation}(Par, Value, Sensor)). \]
\[ \text{out}(\text{world}(Par, Value, Sensor)) \leftarrow \text{in}(\text{world}(Par, Value1, Sensor1)). \]
(where $Sensor \neq Sensor1$ and/or $Value \neq Value1$)

Ordered revision programs
An ordered revision program is a pair \((P, \mathcal{L})\) where \(\mathcal{L}\) is a function which assigns unique labels to the revision rules in \(P\). \(\mathcal{L}(P)\) denotes the set of labels in \(P\):
\[ l : \alpha_0 \leftarrow \alpha_1, \ldots, \alpha_n \]
A preference on rules in \((P, \mathcal{L})\) is an expression of the form
\[ \text{prefer}(l_1, l_2) \leftarrow \text{initially}(\alpha_1, \ldots, \alpha_k), \alpha_{k+1}, \ldots, \alpha_n, \]
where the \(l_i\) are labels and the \(\alpha_j\) are revision literals.
A revision program with preferences is a triple \((P, \mathcal{L}, S)\), where \((P, \mathcal{L})\) is an ordered revision program and \(S\) is a set of preferences on rules in \((P, \mathcal{L})\). \(S\) is called the control program.
\((P, \mathcal{L}, S)\) is translated into an ordinary RP over the extended universe
\[ U^{\mathcal{L}(P)} = U \cup \{\text{ok}(l), \text{defeated}(l), \text{prefer}(l, l') : l, l' \in \mathcal{L}(P)\} \]
Define \(P^{S,I}\) over \(U^{\mathcal{L}(P)}\) to be a revision program consisting of the rules:
- for each \(l \in \mathcal{L}(P)\)
\begin{align*}
\text{head}(l) & \leftarrow \text{body}(l), \text{in}(\text{ok}(l)) \\
\text{in}(\text{ok}(l)) & \leftarrow \text{out}(\text{defeated}(l))
\end{align*}
- for each preference
\[ \text{prefer}(l_1, l_2) \leftarrow \text{initially}(\alpha_1, \ldots, \alpha_k), \alpha_{k+1}, \ldots, \alpha_n, \]
in \(S\) such that \(\alpha_1, \ldots, \alpha_k\) are satisfied by \(I\)
\begin{align*}
\text{in}(\text{prefer}(l_1, l_2)) & \leftarrow \alpha_{k+1}, \ldots, \alpha_n \\
\text{in}(\text{defeated}(l_2)) & \leftarrow \text{body}(l_1), \text{in}(\text{prefer}(l_1, l_2))
\end{align*}

\((P, \mathcal{L}, S)\)-justified revisions
A database \(R\) is a \((P, \mathcal{L}, S)\)-justified revision of \(I\) if there exists \(R' \subseteq U^{\mathcal{L}(P)}\) such that \(R'\) is a \(P^{S,I}\)-justified revision of \(I\), and \(R = R' \cap U\).

Properties
- The justified-revision semantics for revision programs with preferences extends the justified-revision semantics for ordinary revision programs.
- The shifting property holds.
- Not every \((P, \mathcal{L}, S)\)-justified revision is a model of \(P\).

When are \((P, \mathcal{L}, S)\)-justified revisions models of \(P\)?
Two rules \(r, r'\) of \(P\) are in conflict if one of the following conditions is satisfied:
1. \((\text{head}(r))^D \in \text{body}(r')\) and \((\text{head}(r'))^D \in \text{body}(r)\); or
2. \(\text{body}(r) \cup \text{body}(r')\) is incoherent.
A set of preferences is conflict-resolving if it contains only preferences between conflicting rules.
**Theorem.** Let \((P, \mathcal{L}, S)\) be a revision program with preferences where \(S\) is a set of conflict-resolving preferences and is cycle-free.
Then, for every \((P, \mathcal{L}, S)\)-justified revision \(R\) of \(I\), \(R\) is a model of \(P\).

**Soft revision rules with weights**
The revision program is divided into hard and soft rules: \( P = HR \cup SR \). All hard rules must be satisfied; only a subset of the soft rules may be satisfied. The subset of soft rules that is satisfied is optimal with respect to some criterion.

Maximal number of rules
**Definition 1.** \( R \) is a \((HR, SR)\)-preferred justified revision of \( I \) if \( R \) is a \((HR \cup S)\)-justified revision of \( I \) for some \( S \subseteq SR \), and for all \( S' \) with \( S \subsetneq S' \subseteq SR \) there are no \((HR \cup S')\)-justified revisions of \( I \).

Implementation
For each $I$, translate $P = HR \cup SR$ into an smodels program $lp(T_I(HR)) \cup lp'(T_I(SR))$.
$lp'$ translates a rule
$$in(a) \leftarrow in(p_1), \ldots, in(p_m), out(s_1), \ldots, out(s_n)$$
into the rules
$$\{rule_i\} \;{:}{-}\; p_1, \ldots, p_m,\ not\ s_1, \ldots, not\ s_n.$$
$$a \;{:}{-}\; rule_i.$$
$lp'$ translates a rule
$$out(a) \leftarrow in(p_1), \ldots, in(p_m), out(s_1), \ldots, out(s_n)$$
into the rules
$$\{rule_i\} \;{:}{-}\; p_1, \ldots, p_m,\ not\ s_1, \ldots, not\ s_n.$$
$${:}{-}\; rule_i, a.$$

Implementation, cont'd
The smodels statement
\texttt{maximize\{rule\_1, \ldots, rule\_k\}.}
(where $k$ is the number of rules in $SR$) allows computing one (not all) $(HR, SR)$-preferred justified revision, one of maximum size.

Weighted rules
Each \( r \in SR \) is assigned a weight, \( w(r) \) (its importance).
**Definition 2.** \( R \) is called a rule-weighted \( (HR, SR) \)-justified revision of \( I \) if the following two conditions are satisfied:
1. there exists a set of rules \( S \subseteq SR \) such that \( R \) is a \( (HR \cup S) \)-justified revision of \( I \), and
2. for any set of rules \( S' \subseteq SR \), if \( R' \) is a \( (HR \cup S') \)-justified revision of \( I \), then the sum of the weights of the rules in \( S' \) is less than or equal to the sum of the weights of the rules in \( S \).
Implementation
Same translation of soft rules into an smodels program, but a different maximize statement:
\[ \text{maximize}[\text{rule}_1 = w(r_1), \text{rule}_2 = w(r_2), \ldots, \text{rule}_k = w(r_k)]. \]

Weighted atoms
Each \( a \in U \) is assigned a weight \( w(a) \) (the greater the weight, the less we want to change the atom's status).
**Definition 3.** \( R \) is called an atom-weighted \((HR, SR)\)-justified revision of \( I \) if the following two conditions are satisfied:
1. there exists a set of rules \( S \subseteq SR \) such that \( R \) is a \((HR \cup S)\)-justified revision of \( I \), and
2. for any set of rules \( S' \subseteq SR \), if \( Q \) is a \((HR \cup S')\)-justified revision of \( I \), then the sum of the weights of the atoms in \( I \div Q \) is greater than or equal to the sum of the weights of the atoms in \( I \div R \).

Implementation
Same translation of soft rules into an `smodels` program, but a `minimize` statement instead:
\[ \text{minimize}[a_1 = w(a_1), a_2 = w(a_2), \ldots, a_n = w(a_n)] \]
where \(a_1, \ldots, a_n\) are all the atoms in \(U\).

Minimal size difference
**Definition 4.** \( R \) is called a minimal size difference \( P \)-justified revision of \( I \) if the following two conditions are satisfied:
1. \( R \) is a \( P \)-justified revision of \( I \), and
2. for any \( P \)-justified revision \( R' \), the number of atoms in \( R \div I \) is less than or equal to the number of atoms in \( R' \div I \).

Implementation
Use a `minimize` statement:
\[ \text{minimize}\{a_1, a_2, \ldots, a_n\} \]
where \(a_1, \ldots, a_n\) are all the atoms in \(U\).
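The $lp'$ translation is mechanical. The following Python sketch emits smodels-style text for a list of soft rules; the `('in', atom)` / `('out', atom)` literal encoding and the emitted concrete syntax are illustrative (a real implementation would target the exact input language of the solver used):

```python
# Sketch of the lp' translation of soft revision rules into
# smodels-style text, following the rule_i naming scheme above.

def lp_prime(soft_rules):
    """Return smodels-style lines: a choice atom rule_i per soft rule,
    plus the rule's effect (in-rule) or a constraint (out-rule)."""
    lines = []
    for i, (head, body) in enumerate(soft_rules, start=1):
        cond = ', '.join(a if k == 'in' else 'not ' + a for k, a in body)
        lines.append('{rule_%d}%s.' % (i, ' :- ' + cond if cond else ''))
        kind, atom = head
        if kind == 'in':
            lines.append('%s :- rule_%d.' % (atom, i))   # a :- rule_i.
        else:
            lines.append(':- rule_%d, %s.' % (i, atom))  # :- rule_i, a.
    # Maximize the number of satisfied soft rules (Definition 1):
    names = ', '.join('rule_%d' % j for j in range(1, len(soft_rules) + 1))
    lines.append('maximize{%s}.' % names)
    return lines
```

Swapping the final statement for `maximize[rule_i = w(r_i), ...]` or `minimize[a = w(a), ...]` yields the rule-weighted and atom-weighted variants of Definitions 2 and 3.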
{"Source-Url": "https://www.cs.nmsu.edu/~tson/TAG03/TAG%20Meeting/inna.pdf", "len_cl100k_base": 4322, "olmocr-version": "0.1.49", "pdf-total-pages": 28, "total-fallback-pages": 0, "total-input-tokens": 51581, "total-output-tokens": 5630, "length": "2e12", "weborganizer": {"__label__adult": 0.0002827644348144531, "__label__art_design": 0.00031304359436035156, "__label__crime_law": 0.0003993511199951172, "__label__education_jobs": 0.0013208389282226562, "__label__entertainment": 4.935264587402344e-05, "__label__fashion_beauty": 0.00012803077697753906, "__label__finance_business": 0.0003533363342285156, "__label__food_dining": 0.00034236907958984375, "__label__games": 0.0005707740783691406, "__label__hardware": 0.0008254051208496094, "__label__health": 0.0005550384521484375, "__label__history": 0.00021183490753173828, "__label__home_hobbies": 0.00015079975128173828, "__label__industrial": 0.0005536079406738281, "__label__literature": 0.0002715587615966797, "__label__politics": 0.00021922588348388672, "__label__religion": 0.0004012584686279297, "__label__science_tech": 0.0330810546875, "__label__social_life": 8.863210678100586e-05, "__label__software": 0.007671356201171875, "__label__software_dev": 0.951171875, "__label__sports_fitness": 0.00023937225341796875, "__label__transportation": 0.0004374980926513672, "__label__travel": 0.00015497207641601562}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 12602, 0.00501]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 12602, 0.61209]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 12602, 0.67355]], "google_gemma-3-12b-it_contains_pii": [[0, 87, false], [87, 373, null], [373, 725, null], [725, 1193, null], [1193, 1869, null], [1869, 2526, null], [2526, 3460, null], [3460, 3799, null], [3799, 4352, null], [4352, 4992, null], [4992, 5335, null], [5335, 5618, null], [5618, 6265, null], [6265, 
7008, null], [7008, 7953, null], [7953, 8206, null], [8206, 8457, null], [8457, 9103, null], [9103, 9384, null], [9384, 9714, null], [9714, 10215, null], [10215, 10435, null], [10435, 11022, null], [11022, 11223, null], [11223, 11868, null], [11868, 12104, null], [12104, 12459, null], [12459, 12602, null]], "google_gemma-3-12b-it_is_public_document": [[0, 87, true], [87, 373, null], [373, 725, null], [725, 1193, null], [1193, 1869, null], [1869, 2526, null], [2526, 3460, null], [3460, 3799, null], [3799, 4352, null], [4352, 4992, null], [4992, 5335, null], [5335, 5618, null], [5618, 6265, null], [6265, 7008, null], [7008, 7953, null], [7953, 8206, null], [8206, 8457, null], [8457, 9103, null], [9103, 9384, null], [9384, 9714, null], [9714, 10215, null], [10215, 10435, null], [10435, 11022, null], [11022, 11223, null], [11223, 11868, null], [11868, 12104, null], [12104, 12459, null], [12459, 12602, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 12602, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 12602, null]], "pdf_page_numbers": [[0, 87, 1], [87, 373, 2], [373, 725, 3], [725, 1193, 4], [1193, 1869, 5], [1869, 2526, 6], [2526, 3460, 7], [3460, 3799, 8], [3799, 4352, 9], [4352, 4992, 10], [4992, 
5335, 11], [5335, 5618, 12], [5618, 6265, 13], [6265, 7008, 14], [7008, 7953, 15], [7953, 8206, 16], [8206, 8457, 17], [8457, 9103, 18], [9103, 9384, 19], [9384, 9714, 20], [9714, 10215, 21], [10215, 10435, 22], [10435, 11022, 23], [11022, 11223, 24], [11223, 11868, 25], [11868, 12104, 26], [12104, 12459, 27], [12459, 12602, 28]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 12602, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
168a9d1d78021fbe9832b0f77bed5950e6fa2414
CAPPORT Architecture
draft-ietf-capport-architecture-08

Abstract

This document describes a CAPPORT architecture. DHCP or Router Advertisements, an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 12 November 2020.

Copyright Notice

Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
   1.1. Requirements Language
   1.2. Terminology
2. Components
   2.1. User Equipment
   2.2. Provisioning Service
        2.2.1. DHCP or Router Advertisements
        2.2.2. Provisioning Domains
   2.3. Captive Portal API Server
   2.4. Captive Portal Enforcement Device
   2.5. Captive Portal Signal
   2.6. Component Diagram
3. User Equipment Identity
   3.1. Identifiers
   3.2. Recommended Properties
        3.2.1. Uniquely Identify User Equipment
        3.2.2. Hard to Spoof
        3.2.3. Visible to the API Server
        3.2.4. Visible to the Enforcement Device
   3.3. Evaluating Types of Identifiers
   3.4. Example Identifier Types
        3.4.1. Physical Interface
        3.4.2. IP Address
   3.5. Context-free URI
4. Solution Workflow
   4.1. Initial Connection
   4.2. Conditions About to Expire
   4.3. Handling of Changes in Portal URI
5. Acknowledgments
6. IANA Considerations
7. Security Considerations
   7.1. Trusting the Network

1. Introduction

In this document, "Captive Portal" is used to describe a network to which a device may be voluntarily attached, such that network access is limited until some requirements have been fulfilled.
Typically a user is required to use a web browser to fulfill requirements imposed by the network operator, such as reading advertisements, accepting an acceptable-use policy, or providing some form of credentials. Implementations generally require a web server, some method to allow/block traffic, and some method to alert the user. Common methods of alerting the user involve modifying HTTP or DNS traffic.

This document standardizes an architecture for implementing captive portals while addressing most of the problems arising with current captive portal mechanisms. The architecture is guided by these principles:

* Solutions SHOULD NOT require the forging of responses from DNS or HTTP servers, or any other protocol. In particular, solutions SHOULD NOT require man-in-the-middle proxy of TLS traffic.

* Solutions MUST operate at the layer of Internet Protocol (IP) or above, not being specific to any particular access technology such as Cable, WiFi or mobile telecom.

* Solutions MAY allow a device to be alerted that it is in a captive network when attempting to use any application on the network.

* Solutions SHOULD allow a device to learn that it is in a captive network before any application attempts to use the network.

* The state of captivity SHOULD be explicitly available to devices (in contrast to modification of DNS or HTTP, which is only indirectly machine-detectable by the client when it compares responses to well-known queries with expected responses).

* The architecture MUST provide a path of incremental migration, acknowledging a huge variety of portals and end-user device implementations and software versions.

A side-benefit of the architecture described in this document is that devices without user interfaces are able to identify parameters of captivity. However, this document does not yet describe a mechanism for such devices to escape captivity.
The architecture uses the following mechanisms:

* Network provisioning protocols provide end-user devices with a Uniform Resource Identifier [RFC3986] (URI) for the API that end-user devices query for information about what is required to escape captivity. DHCP, DHCPv6, and Router-Advertisement options for this purpose are available in [RFC7710bis]. Other protocols (such as RADIUS), Provisioning Domains [I-D.pfister-capport-pvd], or static configuration may also be used. A device MAY query this API at any time to determine whether the network is holding the device in a captive state.

* End-user devices can be notified of captivity with Captive Portal Signals in response to traffic. This notification works in response to any Internet protocol, and is not done by modifying protocols in-band. This notification does not carry the portal URI; rather it provides a notification to the User Equipment that it is in a captive state.

* Receipt of a Captive Portal Signal informs an end-user device that it could be captive. In response, the device MAY query the provisioned API to obtain information about the network state. The device MAY take immediate action to satisfy the portal (according to its configuration/policy).

The architecture attempts to provide confidentiality, authentication, and safety mechanisms to the extent possible.

1.1. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

1.2. Terminology

Captive Network: A network which limits communication of attached devices to restricted hosts until the user has satisfied Captive Portal Conditions, after which access is permitted to a wider set of hosts (typically the Internet).
Captive Portal Conditions: Site-specific requirements that a user or device must satisfy in order to gain access to the wider network.

Captive Portal Enforcement Device: The network equipment which enforces the traffic restriction. Also known as Enforcement Device.

Captive Portal User Equipment: Also known as User Equipment. A device which has voluntarily joined a network for purposes of communicating beyond the constraints of the captive network.

Captive Portal Server: The web server providing a user interface for assisting the user in satisfying the conditions to escape captivity.

Captive Portal API: Also known as API. An HTTP API allowing User Equipment to query its state of captivity within the Captive Portal.

Captive Portal API Server: Also known as API Server. A server hosting the Captive Portal API.

Captive Portal Signal: A notification from the network used to inform the User Equipment that the state of its captivity could have changed.

Captive Portal Signaling Protocol: Also known as Signaling Protocol. The protocol for communicating Captive Portal Signals.

2. Components

2.1. User Equipment

The User Equipment is the device that a user desires to be attached to a network with full access to all hosts on the network (e.g., to have Internet access). The User Equipment's communication is typically restricted by the Enforcement Device, described in Section 2.4, until site-specific requirements have been met. At this time we consider only devices with web browsers, with web applications being the means of satisfying Captive Portal Conditions. An example interactive User Equipment is a smart phone.

The User Equipment:

* SHOULD support provisioning of the URI for the Captive Portal API (e.g., by DHCP)

* SHOULD distinguish Captive Portal API access per network interface, in the manner of Provisioning Domain Architecture [RFC7556].
* SHOULD have a mechanism for notifying the user of the Captive Portal

* SHOULD have a web browser so that the user may navigate the Captive Portal user interface.

* MAY prevent applications from using networks that do not grant full network access. E.g., a device connected to a mobile network may be connecting to a captive WiFi network; the operating system MAY avoid updating the default route until network access restrictions have been lifted (excepting access to the Captive Portal server) in the new network. This has been termed "make before break".

None of the above requirements are mandatory because (a) we do not wish to say users or devices must seek full access to the captive network, (b) the requirements may be fulfilled by manually visiting the captive portal web application, and (c) legacy devices must continue to be supported.

If User Equipment supports the Captive Portal API, it MUST validate the API server's TLS certificate (see [RFC2818]). An Enforcement Device SHOULD allow access to any services that User Equipment could need to contact to perform certificate validation, such as OCSP responders, CRLs, and NTP servers; see Section 4.1 of [I-D.ietf-capport-api] for more information. If certificate validation fails, User Equipment MUST NOT proceed with any of the behavior described above.

2.2. Provisioning Service

Here we discuss candidate mechanisms for provisioning the User Equipment with the URI of the API to query captive portal state and navigate the portal.

2.2.1. DHCP or Router Advertisements

A standard for providing a portal URI using DHCP or Router Advertisements is described in [RFC7710bis]. The CAPPORT architecture expects this URI to indicate the API described in Section 2.3.

2.2.2. Provisioning Domains

Although still a work in progress, [I-D.pfister-capport-pvd] proposes a mechanism for User Equipment to be provided with PvD Bootstrap Information containing the URI for the JSON-based API described in Section 2.3.

2.3. Captive Portal API Server

The purpose of a Captive Portal API is to permit a query of Captive Portal state without interrupting the user. This API thereby removes the need for User Equipment to perform clear-text "canary" HTTP queries to check for response tampering.

The URI of this API will have been provisioned to the User Equipment. (Refer to Section 2.2.)

This architecture expects the User Equipment to query the API when the User Equipment attaches to the network and multiple times thereafter. Therefore the API MUST support multiple repeated queries from the same User Equipment and return the state of captivity for the equipment.

At minimum, the API MUST provide: (1) the state of captivity and (2) a URI for the Captive Portal Server.

A caller to the API needs to be presented with evidence that the content it is receiving is for a version of the API that it supports. For an HTTP-based interaction, such as in [I-D.ietf-capport-api], this might be achieved by using a content type that is unique to the protocol.

When User Equipment receives Captive Portal Signals, the User Equipment MAY query the API to check the state. The User Equipment SHOULD rate-limit these API queries in the event of the signal being flooded. (See Section 7.)

The API MUST be extensible to support future use-cases by allowing extensible information elements.

The API MUST use TLS to ensure server authentication. The implementation of the API MUST ensure both confidentiality and integrity of any information provided by or required by it.

This document does not specify the details of the API.

2.4. Captive Portal Enforcement Device

The Enforcement Device component restricts the network access of User Equipment according to site-specific policy. Typically User Equipment is permitted access to a small number of services and is denied general network access until it satisfies the Captive Portal Conditions.
The Enforcement Device component:

* Allows traffic to pass for User Equipment that is permitted to use the network and has satisfied the Captive Portal Conditions.

* Blocks (discards) traffic according to the site-specific policy for User Equipment that has not yet satisfied the Captive Portal Conditions.

* May signal User Equipment using the Captive Portal Signaling protocol if certain traffic is blocked.

* Permits User Equipment that has not satisfied the Captive Portal Conditions to access necessary APIs and web pages to fulfill requirements for escaping captivity.

* Updates allow/block rules per User Equipment in response to operations from the Captive Portal Server.

2.5. Captive Portal Signal

When User Equipment first connects to a network, or when there are changes in status, the Enforcement Device could generate a signal toward the User Equipment. This signal indicates that the User Equipment might need to contact the API Server to receive updated information. For instance, this signal might be generated when the end of a session is imminent, or when network access was denied. An Enforcement Device MUST rate-limit any signal generated in response to these conditions.

See Section 7.4 for a discussion of risks related to a Captive Portal Signal.

2.6. Component Diagram

The following diagram shows the communication between each component. In the diagram:

* During provisioning (e.g., DHCP), the User Equipment acquires the URI for the Captive Portal API.

* The User Equipment queries the API to learn of its state of captivity. If captive, the User Equipment presents the portal user interface from the Web Portal Server to the user.

* Based on user interaction, the Web Portal Server directs the Enforcement Device to either allow or deny external network access for the User Equipment.

The User Equipment attempts to communicate to the external network through the Enforcement Device.
The Enforcement Device either allows the User Equipment's packets to the external network, or blocks the packets. If blocking traffic and a signal has been implemented, it may respond with a Captive Portal Signal.

Although the Provisioning Service, API Server, and Web Portal Server functions are shown as discrete blocks, they could of course be combined into a single element.

3. User Equipment Identity

Multiple components in the architecture interact with both the User Equipment and each other. Since the User Equipment is the focus of these interactions, the components must be able both to identify the User Equipment from their interactions with it and to agree on the identity of the User Equipment when interacting with each other.

The methods by which the components interact restrict the type of information that may be used as an identifying characteristic. This section discusses identifying characteristics.

3.1. Identifiers

An Identifier is a characteristic of the User Equipment used by the components of a Captive Portal to uniquely determine which specific User Equipment is interacting with them. An Identifier MAY be a field contained in packets sent by the User Equipment to the External Network. Or, an Identifier MAY be an ephemeral property not contained in packets destined for the External Network, but instead correlated with such information through knowledge available to the different components.

3.2. Recommended Properties

The set of possible identifiers is quite large. However, in order to be considered a good identifier, an identifier SHOULD meet the following criteria. Note that the optimal identifier will likely change depending on the position of the components in the network as well as the information available to them.

An identifier SHOULD:

* Uniquely Identify the User Equipment

* Be Hard to Spoof

* Be Visible to the API Server

* Be Visible to the Enforcement Device

An identifier might only apply to the current point of network attachment.
If the device moves to a different network location, its identity could change.

3.2.1. Uniquely Identify User Equipment

Each instance of User Equipment interacting with the Captive Network MUST be given an identifier that is unique among User Equipment interacting at that time. Over time, the User Equipment assigned to an identifier value MAY change. Allowing the identified device to change over time ensures that the space of possible identifying values need not be overly large. Independent Captive Portals MAY use the same identifying value to identify different User Equipment. Allowing independent captive portals to reuse identifying values allows the identifier to be a property of the local network, expanding the space of possible identifiers.

3.2.2. Hard to Spoof

A good identifier does not lend itself to being easily spoofed. At no time should it be simple or straightforward for one User Equipment to pretend to be another, regardless of whether both are active at the same time. This property is particularly important when the identity of the User Equipment is extended externally, e.g., to billing systems, or where the identity of the User Equipment could imply liability.

3.2.3. Visible to the API Server

Since the API Server will need to perform operations which rely on the identity of the User Equipment, such as answering a query about whether the User Equipment is captive, the API Server needs to be able to relate a request to the User Equipment making it.

3.2.4. Visible to the Enforcement Device

The Enforcement Device decides on a per-packet basis whether the packet should be forwarded to the external network. Since this decision depends on which User Equipment sent the packet, the Enforcement Device must be able to map the packet to its concept of the User Equipment.

3.3.
Evaluating Types of Identifiers

To evaluate whether a type of identifier is appropriate, one should consider every recommended property from the perspective of interactions among the components in the architecture. When comparing identifier types, choose the one which best satisfies all of the recommended properties. The architecture does not provide an exact measure of how well an identifier type satisfies a given property; care should be taken in performing the evaluation.

3.4. Example Identifier Types

This section provides some example identifier types, along with some evaluation of whether they are suitable. The list of identifier types is not exhaustive; other types may be used. An important point to note is that whether a given identifier type is suitable depends heavily on the capabilities of the components and on where in the network the components exist.

3.4.1. Physical Interface

The physical interface by which the User Equipment is attached to the network can be used to identify the User Equipment. This identifier type has the property of being extremely difficult to spoof: the User Equipment is unaware of the property, so one User Equipment cannot manipulate its interactions to appear as though it is another. Further, if only a single User Equipment is attached to a given physical interface, then the identifier will be unique. If multiple User Equipment are attached to the network on the same physical interface, then this type is not appropriate. Another consideration related to uniqueness is that if the attached User Equipment changes, both the API Server and the Enforcement Device MUST invalidate their state related to the User Equipment.
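The invalidation requirement above can be sketched as follows. This is an illustrative model only; the names (`InterfaceStateTable`, `attach`, `grant`) are hypothetical and not part of the architecture:

```python
# Hypothetical sketch: per-physical-interface captive-portal state that
# is invalidated when the attached User Equipment changes.
class InterfaceStateTable:
    def __init__(self):
        self._state = {}  # interface id -> (ue_id, allowed)

    def attach(self, interface: str, ue_id: str) -> None:
        current = self._state.get(interface)
        # A different UE appearing on the interface means the API Server
        # and Enforcement Device must drop their stale state for it.
        if current is not None and current[0] != ue_id:
            del self._state[interface]
        self._state.setdefault(interface, (ue_id, False))

    def grant(self, interface: str) -> None:
        ue_id, _ = self._state[interface]
        self._state[interface] = (ue_id, True)

    def is_allowed(self, interface: str) -> bool:
        entry = self._state.get(interface)
        return entry is not None and entry[1]
```

Note that access granted to one device does not survive a change of the attached device, which is the point of the invalidation rule.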
The Enforcement Device needs to be aware of the physical interface, which constrains the environment: it must either be part of the device providing physical access (e.g., implemented in firmware), or packets traversing the network must be extended to include information about the source physical interface (e.g., a tunnel). The API Server faces a similar problem, implying that it should co-exist with the Enforcement Device, or that the Enforcement Device should extend requests to it with the identifying information.

3.4.2. IP Address

A natural identifier type to consider is the IP address of the User Equipment. At any given time, no two devices on the network can have the same IP address without causing the network to malfunction, so it is appropriate from the perspective of uniqueness. However, it may be possible to spoof the IP address, particularly for malicious reasons, where proper functioning of the network is not necessary for the malicious actor. Consequently, any solution using the IP address SHOULD proactively try to prevent spoofing of the IP address. Similarly, if the mapping of IP address to User Equipment changes, the components of the architecture MUST remove or update their mapping to prevent spoofing. Demonstrations of return routeability, such as that required for TCP connection establishment, might be a sufficient defense against spoofing, though this might not be sufficient in networks that use broadcast media (such as some wireless networks). Since the IP address may traverse multiple segments of the network, more flexibility is afforded to the Enforcement Device and the API Server: they simply must exist on a segment of the network where the IP address is still unique. However, consider that a NAT may be deployed between the User Equipment and the Enforcement Device. In such cases, it is possible for the components to still uniquely identify the device if they are aware of the port mapping.
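An IP-keyed allow list, optionally refined to (address, port) where a NAT sits between the UE and the Enforcement Device, can be sketched as below. The class and addresses are made up for illustration; a real Enforcement Device also needs anti-spoofing measures as discussed above:

```python
# Hypothetical sketch: Enforcement Device allow list keyed by source IP,
# with optional per-port entries for the NAT case.
class IpAllowList:
    def __init__(self):
        self._allowed = set()

    def allow(self, ip, port=None):
        self._allowed.add((ip, port))

    def revoke(self, ip, port=None):
        # MUST be invoked when the IP-to-UE mapping changes, so a new
        # holder of the address does not inherit stale access.
        self._allowed.discard((ip, port))

    def permits(self, ip, port=None):
        # A port-specific grant or an address-wide grant both suffice.
        return (ip, port) in self._allowed or (ip, None) in self._allowed
```

The (ip, port) refinement only works if the component is aware of the NAT's port mapping, per the text above.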
In some situations, the User Equipment may have multiple IP addresses while still satisfying all of the recommended properties. This raises some challenges for the components of the network. For example, if the User Equipment tries to access the network with multiple IP addresses, should the Enforcement Device and API Server treat each IP address as a unique User Equipment, or should they tie the multiple addresses together into one view of the subscriber? An implementation MAY do either. Attention should be paid to IPv6 and the fact that a device is expected to have multiple IPv6 addresses on a single link. In such cases, identification could be performed by subnet, such as the /64 to which the IP address belongs.

3.5. Context-free URI

A Captive Portal API needs to present information to clients that is unique to each client. To do this, some systems use information from the context of a request, such as the source address, to identify the UE. Using information from context rather than from the URI allows the same URI to be used for different clients. However, it also means that the resource is unable to provide relevant information if the UE makes a request using a different network path. This might happen when the UE has multiple network interfaces. It might also happen if the address of the API provided by DNS depends on where the query originates (as in split DNS [RFC8499]). Accessing the API MAY depend on contextual information. However, the URIs provided in the API SHOULD be unique to the UE and not dependent on contextual information to function correctly. Though a URI might still correctly resolve when the UE makes the request from a different network, it is possible that some functions could be limited to when the UE makes requests using the captive network. For example, payment options could be absent, or a warning could be displayed to indicate that the payment is not for the current connection.
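One way to mint per-UE URIs that do not depend on request context is to embed a random token rather than any UE identifier. A minimal sketch, assuming a made-up base URL; this is not a mechanism defined by the architecture:

```python
import secrets

# Hypothetical sketch: per-UE URIs for the Captive Portal API. The token
# is random, not derived from a UE identifier, so the URI itself does
# not expose identifying information; the server maps token -> UE state.
def mint_session_uri(base="https://portal.example.net/api/sessions"):
    token = secrets.token_urlsafe(16)  # ~128 bits of entropy
    return f"{base}/{token}"
```

Because the token is unguessable, such a URI remains usable from a different network path, at the cost of the server keeping token-to-UE state.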
URIs could include some means of identifying the User Equipment. However, including unauthenticated User Equipment identifiers in the URI may expose the service to spoofing or replay attacks.

4. Solution Workflow

This section aims to improve understanding by describing a possible workflow of solutions adhering to the architecture.

4.1. Initial Connection

This section describes a possible workflow when User Equipment initially joins a Captive Network.

1. The User Equipment joins the Captive Network by acquiring a DHCP lease, receiving an RA, or similar, thereby acquiring provisioning information.
2. The User Equipment learns the URI for the Captive Portal API from the provisioning information (e.g., [RFC7710bis]).
3. The User Equipment accesses the Captive Portal API to receive the parameters of the Captive Network, including the web-portal URI. (This step replaces the clear-text query to a canary URI.)
4. If necessary, the User navigates the web portal to gain access to the external network.
5. The Captive Portal API server indicates to the Enforcement Device that the User Equipment is allowed to access the external network.
6. The User Equipment attempts a connection outside the captive network.
7. If the requirements have been satisfied, the access is permitted; otherwise the "Expired" behavior occurs.
8. The User Equipment accesses the network until the conditions expire.

4.2. Conditions About to Expire

This section describes a possible workflow when access is about to expire.

1. Precondition: the API has provided the User Equipment with a duration over which its access is valid.
2. The User Equipment is communicating with the outside network.
3. The User Equipment's UI indicates that the length of time left for its access has fallen below a threshold.
4. The User Equipment visits the API again to validate the expiry time.
5. If expiry is still imminent, the User Equipment prompts the user to access the web-portal URI again.
6. The User extends their access through the web-portal.
7.
The User Equipment's access to the outside network continues uninterrupted.

4.3. Handling of Changes in Portal URI

A different Captive Portal API URI could be returned in the following cases:

* If DHCP is used, a lease renewal/rebind may return a different Captive Portal API URI.
* If RA is used, a new Captive Portal API URI may be specified in a new RA message received by the end User Equipment.

Whenever a new Portal URI is received by the end User Equipment, it SHOULD discard the old URI and use the new one for future requests to the API.

5. Acknowledgments

The authors thank Lorenzo Colitti for providing the majority of the content for the Captive Portal Signal requirements. The authors thank various individuals for their feedback on the mailing list and during the IETF98 hackathon: David Bird, Erik Kline, Alexis La Goulette, Alex Roscoe, Darshak Thakore, and Vincent van Dam.

6. IANA Considerations

This memo includes no request to IANA.

7. Security Considerations

7.1. Trusting the Network

When joining a network, some trust is placed in the network operator. This is usually considered to be a decision made by a user on the basis of the reputation of an organization. However, once a user makes such a decision, protocols can support authenticating that a network is operated by the party that claims to operate it. The Provisioning Domain Architecture [RFC7556] provides some discussion of authenticating an operator. Given that a user chooses to visit a Captive Portal URI, the URI location SHOULD be securely provided to the user's device; e.g., the DHCPv6 AUTH option can sign this information. If a user decides to incorrectly trust an attacking network, they might be convinced to visit an attacking web page and unwittingly provide credentials to an attacker. Browsers can authenticate servers but cannot detect cleverly misspelled domains, for example.

7.2.
Authenticated APIs

The solution described here assumes that when the User Equipment needs to trust the API server, server authentication will be performed using TLS mechanisms.

7.3. Secure APIs

The solution described here requires that the API be secured using TLS. This is required to allow the User Equipment and API Server to exchange secrets which can be used to validate future interactions. The API MUST ensure the integrity of this information, as well as its confidentiality.

7.4. Risks Associated with the Signaling Protocol

If a Signaling Protocol is implemented, it may be possible for any user on the Internet to send signals in an attempt to cause the receiving equipment to communicate with the Captive Portal API. This has been considered, and implementations may address it in the following ways:

* The signal only informs the User Equipment to query the API. It does not carry any information which may mislead or misdirect the User Equipment.
* Even when responding to the signal, the User Equipment securely authenticates with API Servers.
* Accesses to the API Server are rate limited, limiting the impact of a repeated attack.

7.5. User Options

The Captive Portal Signal could inform the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices MAY permit users to disable automatic reaction to Captive Portal Signal indications for privacy reasons. However, there would be the trade-off that the user does not get notified when network access is restricted. Hence, end-user devices MAY allow users to manually control captive portal interactions, possibly at the granularity of Provisioning Domains.

8. References

8.1. Normative References

8.2. Informative References

[I-D.ietf-capport-api]
[I-D.pfister-capport-pvd]

Appendix A.
Existing Captive Portal Detection Implementations

Operating systems and user applications may perform various tests when network connectivity is established to determine whether the device is attached to a network with a captive portal present. A common method is to attempt an HTTP request to a known, vendor-hosted endpoint with a fixed response; any other response is interpreted as a signal that a captive portal is present. This check is typically not secured with TLS, as a network with a captive portal may intercept the connection, leading to a host name mismatch. This has been referred to as a "canary" request because, like the canary in the coal mine, it can be the first sign that something is wrong.

Another test that can be performed is a DNS lookup to a known address with an expected answer. If the answer differs from the expected one, the equipment detects that a captive portal is present. DNS queries over TCP or HTTPS are less likely to be modified than DNS queries over UDP, due to the complexity of implementation.

The different tests may produce different conclusions, varying by whether or not the implementation treats both TCP and UDP traffic, and by which types of DNS queries are intercepted. A malicious or misconfigured network with a captive portal present may choose not to intercept these requests and pass them through, or may impersonate the expected responses, leading to a false negative on the device.

Authors' Addresses

Kyle Larose
Agilicus
Email: kyle@agilicus.com

David Dolson
Email: ddolson@acm.org

Heng Liu
Google
Email: liucougar@google.com
Translation of Restricted OCL Constraints into Graph Constraints for Generating Meta Model Instances by Graph Grammars

Jessica Winkelmann 1, Gabriele Taentzer 2
Department of Computer Science, Technical University of Berlin, Germany

Karsten Ehrig 3
Department of Computer Science, University of Leicester, UK

Jochen M. Küster 4
IBM Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland

Abstract

The meta modeling approach to syntax definition of visual modeling techniques has gained wide acceptance, especially by using it for the definition of UML. Since meta-modeling is non-constructive, it does not provide a systematic way to generate all possible meta model instances. In our approach, an instance-generating graph grammar is automatically created from a given meta model. This graph grammar ensures correct typing and cardinality constraints, but OCL constraints for the meta model are not supported yet. To satisfy also the given OCL constraints, well-formedness checks have to be done in addition. We present a restricted form of OCL constraints that can be translated to graph constraints which can be checked during the instance generation process.

Key words: OCL, meta model, graph grammar, UML

1 Email: danye@cs.tu-berlin.de
2 Email: gabi@cs.tu-berlin.de
3 Email: karsten@mcs.le.ac.uk
4 Email: jku@zurich.ibm.com

This paper is electronically published in Electronic Notes in Theoretical Computer Science, URL: www.elsevier.nl/locate/entcs

1 Introduction

Meta modeling is a wide-spread technique to define visual languages, with the UML [UML03] being the most prominent one. Despite several advantages of meta modeling, such as ease of use, the meta modeling approach has one disadvantage: it is not constructive, i.e., it does not offer a direct means of generating instances of the language. This disadvantage poses a severe limitation for certain applications.
For example, when developing model transformations, it is desirable to have a large set of valid instance models available for large-scale testing. Producing such a large set by hand is tedious. In the related problem of compiler testing [BS97], a string grammar together with a simple generation algorithm is typically used to produce words of the language automatically. Instance-generating graph grammars that create instances of meta models automatically can overcome this main deficit of the meta modeling approach to defining languages. The graph grammar introduced in [EKTW05] ensures cardinality constraints, but OCL constraints for the meta model have not been considered until now.

In this paper we present the main concepts of automatic instance generation based on graph grammars by an example. In addition, we show how restricted OCL constraints can be translated to equivalent graph constraints. The restricted OCL constraints that can be translated can express local constraints like the existence or non-existence of certain structures (like nodes and edges or subgraphs) in an instance graph. Positive constraints have to be checked after the generation of a meta model instance, while negative graph constraints can be checked during the generation: they can be transformed into application conditions for the corresponding rules, as defined in [EEHP04].

We first introduce meta models with OCL constraints in Section 2. Section 3 presents the main concepts for automatic generation of instances from meta models using the graph grammar approach. The generation process is illustrated on a simplified statechart meta model. We use graph transformation with node type inheritance [BEdLT04] as the underlying approach. In Section 4 we explain how restricted OCL constraints can be translated into graph constraints. We conclude with a discussion of related and future work.
2 Meta Models with OCL-Constraints

Visual languages such as the UML [UML03] are commonly defined using a meta modeling approach. In this approach, a visual language is defined using a meta model to describe the abstract syntax of the language. A meta model can be considered as a class diagram on the meta level, i.e. it contains meta classes, meta associations and cardinality constraints. Further features include special kinds of associations such as aggregation, composition and inheritance, as well as abstract meta classes which cannot be instantiated. Each instance of a meta model must conform to the cardinality constraints. In addition, instances of meta models may be further restricted by the use of additional constraints specified in the Object Constraint Language (OCL) [Obj05].

Figure 1 shows a slightly simplified statechart meta model (based on [UML03]) which will be used as running example. A state machine has one top CompositeState. A CompositeState contains a set of StateVertices, where such a StateVertex can be either an InitialState or a State. Note that StateVertex and State are modeled as abstract classes. A State can be a SimpleState, a CompositeState or a FinalState. A Transition connects a source and a target state. Furthermore, an Event and an Action may be associated with a transition. Aggregations and compositions have been simplified to associations in our approach, but they could be treated separately as well. For clarity, we hide association names and show only role names in Figure 1. The associations between classes StateVertex and Transition are named source and target, like their corresponding role names. The names of all other associations are equal to their corresponding role names. Since we want to concentrate on the main concepts of meta models here, we do not consider attributes in our example.

The set of instances of the meta model can be restricted by additional OCL constraints.
For the simplified statecharts example, at least the following OCL constraints are needed:

(i) A final state cannot have any outgoing transitions:
    context FinalState inv: self.outgoing->size()=0

(ii) A final state has at least one incoming transition:
    context FinalState inv: self.incoming->size()>=1

(iii) An initial state cannot have any incoming transitions:
    context InitialState inv: self.incoming->size()=0

(iv) Transitions outgoing from InitialStates must always target a State:
    context Transition inv: self.source.oclIsTypeOf(InitialState) implies self.target.oclIsKindOf(State)

3 Generating Statechart Instances

In this section, we introduce the idea of an instance-generating graph grammar that allows one to derive instances of a meta model in a systematic way. Given an arbitrary meta model, the corresponding instance-generating graph grammar can be derived by creating specific graph grammar rules, each one depending on the occurrence of a certain meta model pattern. The idea is to associate to a specific meta model pattern a graph grammar rule that creates an instance of the meta model pattern under certain conditions. An instance-generating graph grammar also requires a start graph and a type graph. The start graph will be the empty graph, and the type graph is obtained by converting the meta model class diagram to a type graph. Given a concrete meta model, assembling the derived rules, the created type graph and the empty start graph leads to an instance-generating graph grammar for this meta model. For a detailed description see [EKTW05]. Overall, we use the concept of layered graph grammars [EEdL+05] to order rule applications.

In the following, we describe the rules that we derive for the meta model of state machines (see Figure 1). First, we get a create rule for each non-abstract class within the meta model, allowing us to create an arbitrary number of instances of all non-abstract classes.
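Before turning to the rules in detail, note that the four OCL invariants above can be checked directly on a candidate instance. The following sketch uses our own ad-hoc encoding of statechart instances as Python dictionaries; it is illustrative only and not part of the paper's formalism:

```python
def check_constraints(states, transitions):
    """Return True iff invariants (i)-(iv) hold for a toy statechart
    instance: 'states' maps a name to {"kind": ...}; 'transitions' is a
    list of {"source": name, "target": name} dicts."""
    out = {s: [t for t in transitions if t["source"] == s] for s in states}
    inc = {s: [t for t in transitions if t["target"] == s] for s in states}
    for name, st in states.items():
        if st["kind"] == "FinalState":
            if out[name]:        # (i) no outgoing transitions
                return False
            if not inc[name]:    # (ii) at least one incoming transition
                return False
        if st["kind"] == "InitialState" and inc[name]:
            return False         # (iii) no incoming transitions
    for t in transitions:
        # (iv) a transition leaving an InitialState must target a State;
        # in this meta model the only non-State vertex is InitialState.
        if (states[t["source"]]["kind"] == "InitialState"
                and states[t["target"]]["kind"] == "InitialState"):
            return False
    return True
```

Such a check corresponds to validating the positive constraints after generation, as described in the introduction.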
The rules of layer 1 are applied arbitrarily often, meaning that layer 1 does not terminate and has to be interrupted by user interaction or after a random time period. For the sample meta model we get the rules createStateMachine, createCompositeState, createSimpleState, createFinalState, createInitialState, createTransition, createEvent, and createAction in layer 1.

Layer 2 consists of rules for link creation for associations with multiplicity [1,1] at one association end. These rules have to be applied as long as possible. We have rules that create links between existing instances, and rules that create an instance (of a concrete type) together with a link to this instance, starting at an instance that is not yet connected to another instance. New instances can only be created if there are not enough instances in the graph, which is ensured by negative application conditions. For the association source between StateVertex and Transition, we derive four rules: one rule creates a link source between an existing StateVertex and an existing Transition; further, for each concrete class that inherits from class StateVertex, a rule is derived that creates an instance of that class (an InitialState, a CompositeState, a SimpleState or a FinalState) together with the link source. Note that the abstract class StateVertex can be matched to any of its concrete subclasses InitialState, CompositeState, FinalState, and SimpleState. For the association target between StateVertex and Transition, similar rules are derived. For the association top between StateMachine and CompositeState, we derive two rules. One of them is shown in Figure 3; it links a CompositeState as top of a StateMachine, provided each other CompositeState is already bound and the StateMachine is not yet connected to a top CompositeState.

Layer 3 consists of rules creating links for associations with multiplicity [0,1] or [0,*] at the association ends.
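The layered application strategy can be sketched as a control loop. This is a deliberate simplification: rules are modeled as plain callables that mutate an instance and report applicability, whereas real graph grammar rules involve matching and negative application conditions; all names here are hypothetical:

```python
import random

def derive(instance, layer1, layer2, layer3, rng=None):
    """Apply the three rule layers in order (illustrative only).

    Each rule is a callable that mutates 'instance' and returns a truthy
    value when it was applicable.
    """
    rng = rng or random.Random()
    # Layer 1: creation rules, applied arbitrarily often (a random count
    # stands in for interruption by the user or a timeout).
    for _ in range(rng.randint(1, 10)):
        rng.choice(layer1)(instance)
    # Layer 2: applied as long as possible, i.e. until no rule applies.
    applied = True
    while applied:
        applied = any(rule(instance) for rule in layer2)
    # Layer 3: terminating rules, optionally interrupted early.
    for _ in range(rng.randint(0, 10)):
        rng.choice(layer3)(instance)
    return instance
```

The key structural point is the middle loop: layer-2 rules run to a fixpoint, which is what guarantees the [1,1] multiplicities before layer 3 starts.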
The graph grammar derivation rules in layer 3 are terminating. But in order to generate all possible instances, the rule application can be interrupted by user interaction or after a random time period. The rules create links between existing instances, so they have negative application conditions prohibiting the insertion of more links than allowed by the meta model cardinalities. For the running example, the rules of layer 3 are shown in Figure 4. They insert links for the association effect between Transition and Action and association trigger between Transition and Event, as well as association subVertex between CompositeState and StateVertex.

The example rules shown in Figures 2-4 construct a simple instance graph consisting of a StateMachine with its top CompositeState containing three state vertices and two transitions between them. In the application conditions shown in Figures 2-4 the node types are abbreviated (CS for CompositeState etc.).

[Figure omitted in this extraction: layer-2 grammar rules such as InsertFinalState_target_Transition with its negative application conditions NAC2-NAC4, the NewObj variants for InitialState, CompositeState and SimpleState, and the rule connecting a StateMachine to its top CompositeState.]

Fig. 3.
Example Grammar Rules 2

Up to now there is no general way to transform OCL constraints into equivalent graph constraints, which were introduced in [EEHP04]. As a first approach, we show how restricted OCL constraints can be translated to equivalent graph constraints. In contrast to the translation of meta models to graph grammars, which was described formally in [EKTW06], we discuss first ideas for the translation of OCL constraints only and sketch how they can be ensured. Besides having one common formalism, the motivation for translating OCL constraints into graph constraints is their later consideration within the derivation process (sketched below). We restrict OCL constraints to equality, size, and attribute operations for navigation expressions, called restricted OCL constraints. In future work, OCL constraints and graph constraints have to be further compared concerning their expressiveness. In this section we first introduce graph constraints and present some example graph constraints. Then we define restricted OCL constraints and describe their translation.

Graph constraints

Graph constraints are properties on graphs which have to be fulfilled. They are used to express contextual conditions like the existence or non-existence of certain nodes and edges or certain subgraphs in a given graph. Application conditions for rules were first introduced in [EEHP04]. They restrict the applicability of rules, e.g., a rule can be applied only if certain nodes and edges or certain subgraphs exist or do not exist in the given graph.

Definition 4.1 [graph constraint] Graph constraints over an object P are defined inductively as follows: for a graph morphism x : P → C, ∃x is a graph constraint; Boolean formulae over graph constraints are again graph constraints.

The restricted OCL constraints that can be translated are divided into atomic navigation expressions and complex navigation expressions.

Atomic navigation expressions: Atomic navigation expressions are OCL expressions that:
• express equivalent navigations,
• end with operation `size()` (if the result is compared with constants),
• end with operations `isEmpty()`, `notEmpty()` or `isUnique()`, or
• end with attribute operations (not considered explicitly in the paper).

The navigation expressions contain navigation along association ends or association classes only. Atomic navigation expressions can be transformed into basic graph constraints of the form $\exists x$ or boolean formulae over basic graph constraints. A navigation expression stating that two navigations have the same result, like `self.ass1=self.ass2.ass3`, can be transformed into a graph constraint, see Figure 7 a). Here the conclusion of the constraint ensures that `ass1` and `ass3` are connected to the same instance of `Class2`. Operation `size()` can be translated into a Boolean graph constraint that is composed of two basic graph constraints, see Figure 7 b). The first constraint ensures that there exist the minimum number (= value of the constant) of association ends, the second prohibits the existence of more than the constant value of association ends. If the comparison operation is $\leq$ or $\geq$ the OCL constraint can be translated into just one graph constraint. Operations `isEmpty()` and `notEmpty()` can be reduced to a `size()` operation: `self.ass1->isEmpty()` is rewritten as `self.ass1->size()=0`, and `self.ass1->notEmpty()` as `self.ass1->size()>=1`. Collection operation `isUnique()` can be translated into a `size()` operation if the body of the collection operation is a navigation expression ending at an instance set: `self.ass1->isUnique(navexp)` is rewritten as `self.ass1.navexp->size()<=1`.

Fig. 7. Examples for Translation of OCL Constraints

Complex navigation expressions:

**Definition 4.2** [complex navigation expressions] Atomic navigation expressions are complex navigation expressions.
Given complex navigation expressions \( a \), \( b \) and \( c \), the expressions \( \text{not}(a) \), \( a \) and \( b \), \( a \) or \( b \), \( a \) implies \( b \), and if \( a \) then \( b \) else \( c \) are complex navigation expressions.

Complex navigation expressions can be transformed into conditional graph constraints as described in the following. An OCL expression of the form \( a \) implies \( b \) is equivalent to the expression \( \text{not}(a) \) or \( b \). So we have to translate \( \text{not}(a) \) or \( b \) into an equivalent conditional graph constraint. First the expressions \( \text{not}(a) \) and \( b \) are transformed into graph constraints \( c_a \) and \( c_b \) as described above. \( a \) and \( b \) have a common part that has to be identified. In general, they have the same navigation start, which is at least the variable \( \text{self} \) (in the example constraint for Transition in Figure 8, the node of type \( \text{Transition} \) is the common part). Having this common part \( cp \) we can combine \( c_a \) and \( c_b \) by the operator \( \lor \). Now we can build a conditional graph constraint that is equivalent to the OCL constraint as follows: Build the graph morphism \( b : cp \rightarrow cp \). Build the conditional graph constraint \( \neg \exists (b, \neg(c_a \lor c_b)) \), i.e. \( \forall (b, c_a \lor c_b) \), where the elements of the common part are mapped to the corresponding elements in \( c_a \) and \( c_b \). See the description of Figure 8 for an example. OCL constraints of the form if \( a \) then \( b \) else \( c \) can be reduced to two implies operations: \((a \) implies \( b)\) and \(((\text{not } a) \) implies \( c)\). The implies expressions are translated as described before into two graph constraints which are then combined by the logical operator \( \land \) into a new one that is equivalent to the OCL constraint.
**Ensuring of graph constraints:** Graph constraints can be ensured in two ways. One is to check constraints once the overall derivation of an instance model has terminated, which would also be the approach followed when checking OCL constraints directly. However, this leads to the generation of a large number of non-valid instances in between. A more promising approach is to take the constraints into consideration during the derivation process: For each class in the meta model the corresponding graph constraints can be identified. For rules of layer 1, constraints are ignored. For rules of layers 2 and 3, negative constraints of the form \( \neg \exists x, \neg \exists (x, c), \neg \forall (x, c) \) for the participating classes are evaluated before a possible application of a rule. If the resulting instance would violate a constraint, the rule application is not executed. Here we use the translation of graph constraints to application conditions as presented in [EEHP04]. Positive constraints of the form \( \exists x, \exists (x, c), \forall (x, c) \) are checked after termination of layer 3. If a positive constraint is violated, the model can be fixed by adding additional elements required by the positive constraint. It remains to future work to determine those negative constraints that can be completely translated into application conditions of rules.

**Translation of the OCL constraints for the statechart meta model:** Figure 8 shows the translation of the OCL constraints for the simple statechart meta model example in Figure 1. The first one translates the OCL constraint context FinalState inv: self.incoming ->size() >=1 (an atomic navigation expression) into an equivalent basic graph constraint. This constraint corresponds to the size()-operation constraint shown in Figure 7 c). The second one translates the OCL constraint context FinalState inv: self.outgoing ->size() =0 into an equivalent basic graph constraint, corresponding to the graph constraints shown in Figure 7 b).
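The derivation-time handling of negative constraints described above can be mimicked in a few lines of Python: before a link-inserting rule of layer 2 or 3 fires, the negative constraints of the participating classes are evaluated on the would-be result, and the application is skipped on violation. The rule and constraint shapes below are invented for illustration.

```python
# Edges are (src, label, tgt) triples; a negative constraint is a predicate
# that must stay True. Example: a FinalState "f" must have no outgoing
# transitions, i.e. no edge (transition, "source", "f").
def apply_link_rule(edges, new_edge, negative_constraints):
    candidate = edges + [new_edge]
    if any(not ok(candidate) for ok in negative_constraints):
        return edges                    # rule application is not executed
    return candidate

no_outgoing_final = lambda es: all(
    not (tgt == "f" and label == "source") for (src, label, tgt) in es)

edges = []
edges = apply_link_rule(edges, ("t1", "source", "f"), [no_outgoing_final])
print(edges)          # [] -- the violating insertion was rejected
edges = apply_link_rule(edges, ("t1", "source", "s1"), [no_outgoing_final])
print(edges)          # [('t1', 'source', 's1')]
```

Positive constraints, by contrast, would only be checked after layer 3 terminates, as stated above.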
Note that the positive graph constraint is not needed if size() is compared to 0. The third one is similar. The OCL constraint context Transition inv: self.source.oclIsTypeOf(InitialState) implies self.target.oclIsKindOf(State) is a complex navigation expression. It is equivalent to the expression (not(self.source.oclIsTypeOf(InitialState))) or (self.target.oclIsKindOf(State)), stating that each source instance of a Transition instance is not an InitialState or the target instance is a State. The two OCL expressions can be translated into the two basic graph constraints shown in Figure 8 (the lower part of the last graph constraint). We have to combine the two basic graph constraints by the operator \( \lor \) and we have to express that the Transition instance in both expressions is the same. The common part of the premise and the conclusion contains the Transition only. The complete conditional graph constraint states: all nodes of type Transition either do not have a source node of type InitialState or have a target node of type State. This is equivalent to: transitions outgoing from InitialStates must always target a State.

5 Related Work

One of the related problems is that of automated snapshot generation from class diagrams for validation and testing purposes, tackled by Gogolla et al. [GBR05]. In their approach, properties that the snapshot has to fulfill are specified in OCL. For each class and association, object and link generation procedures are specified using the language ASSL. In order to fulfill constraints and invariants, ASSL offers try and select commands which allow the search for an appropriate object and backtracking if constraints are not fulfilled. The overall approach allows snapshot generation taking invariants into account, but also requires the explicit encoding of constraints in generation commands. As such, the problem tackled by automatic snapshot generation is different from the meta model to graph grammar translation.
Formal methods such as Alloy [All00] can also be used for instance generation: After translating a class diagram to Alloy, one can use the instance generation within Alloy to generate an instance or to show that no instances exist. This instance generation relies on the use of SAT solvers and can also enumerate all possible instances. In contrast to such an approach, our approach aims at the construction of a grammar for the meta model and thus establishes a bridge between the metamodel-based and the grammar-based definition of visual languages. In [RT05] it is shown how structural properties of models like multiplicity constraints and edge inheritance can be expressed by graph constraints. Our approach covers a larger set of OCL constraints.

6 Conclusion and Future Work

In this paper we have presented the main concepts for translating a meta model to an instance-generating graph grammar by an example. We discussed the translation of restricted OCL constraints into equivalent graph constraints. To handle the OCL constraints during the instance generation process that was formally described in [EKTW05], they are first translated to graph constraints and then partly to application conditions of rules. In future work, OCL constraints and graph constraints have to be further compared concerning their expressiveness. Moreover, we have started to give OCL a new kind of semantics, which has to be set into relation with other OCL semantics.

References
An Object-Oriented Modeling Scheme for Distributed Applications

D. Anagnostopoulos
Department of Geography, Harokopion University of Athens
El. Venizelou Str, 17671 Athens, Greece
Tel.: (+) 301 – 9549171, Fax: (+) 301 – 7275214
Email: dimosthe@hua.gr

M. Nikolaidou
Department of Informatics, University of Athens
Panepistimiopolis, 15784 Athens, Greece
Tel.: (+) 301 – 7275614, Fax: (+) 301 – 7275214
Email: mara@di.uoa.gr

Abstract

A modeling approach for distributed applications is introduced here. During the last years, computer networks have dominated the world, forcing the development of applications operating in a network environment. As new technologies such as the WWW, middleware and co-operative software emerged, distributed application functionality became rather complex and the requirements from the underlying network increased considerably. Distributed applications usually consist of interacting services provided in a multi-level hierarchy. In order to effectively evaluate their performance through simulation, we introduce a multi-layer object-oriented modeling scheme that facilitates the in-depth and detailed description of distributed applications and supports the most popular architectural models, such as the client/server model and its variations. Application functionality is described using predefined operations, which can be further decomposed into simpler ones through a multi-layer hierarchy, resulting in elementary actions that indicate primitive network operations, such as transfer or processing. Important features of the modeling scheme are extendibility and wide applicability. The simulation environment built according to this modeling scheme is also presented, along with an example indicating key features and sample results.

1. Introduction

Simulation modeling is widely adopted in the computer network domain for performance evaluation purposes.
During the last decade, numerous simulation tools were constructed, aiming at analyzing the behavior of complex, user-defined network environments ([1], [2]). Application performance is thus closely dependent on the network infrastructure. In most cases, applications running on a network environment are viewed as generators of network traffic, while application operation mechanisms are often overlooked. The outburst in network technology forced the development of new types of applications, such as distributed information and control systems, e-mail and WWW applications, distant learning environments and workflow management systems. Most are based on the client/server model and its variations, such as the two-tier and three-tier models [3], and are generally called distributed applications. Distributed applications extend to multiple sites and operate on multi-platform networks. As distributed applications become more complex and new services are emerging, the detailed description of operation mechanisms becomes more significant, considering the fact that network delays are often negligible. Thus, even though distributed applications depend on the supporting network, detailed modeling of application operation mechanisms is a prerequisite for their in-depth performance evaluation. In current research, a number of approaches with a different orientation can be referenced. Simulation modeling of customized applications is usually performed analytically, using mathematical models (i.e. the corresponding functions/distributions) to represent network load generation ([5], [6]). In other cases, the QoS provided by the network to support specific application requirements is examined. When performance evaluation is oriented towards issues such as the above, it is performed using modeling solutions that are restrained to these specific objectives, without emphasizing the application operation mechanisms.
Application operations are examined in approaches such as the ones presented in [2], [7] and [8], where object-oriented modeling is widely adopted. Application operation is expressed at the primitive action layer, using a series of discrete requests for processing, network transfer, etc., in terms of predefined, primitive actions. This, however, cannot be effective when application decomposition is not supported through a mechanism that transforms operations into primitive actions through intermediate ones, conforming to the various architectural models (e.g. the client/server model) and standards. Decomposition is thus accomplished in an "empirical" manner. When determining the effect of applications without analyzing the operation mechanisms, an accurate estimation of application load is not feasible. Extendibility and wide applicability, i.e. support for variations of the architectural models as well as customized implementations, are also lacking. Establishing a generic modeling scheme is thus required to facilitate the representation of different types of applications, i.e. primitive (e.g. FTP) and complex (e.g. distributed databases), according to common modeling principles, as well as the interaction between applications and the underlying network. The modeling scheme introduced in this paper facilitates accuracy in distributed application description using a multi-layer action hierarchy. Actions indicate autonomous operations describing a specific service and can be decomposed into simpler ones, resulting in elementary actions similar to those described in [2] and [7]. The modeling framework supports the client/server model and its variations and can be further extended to support other architectural models. Main features of the modeling scheme are modularity, extendibility and wide applicability. To evaluate distributed application performance, a simulation environment, namely the Distributed System Simulator (DSS), was constructed.
DSS enables the exploration of various types of distributed applications, including user-defined ones, as well as of the network infrastructure, through its graphical components. Object-oriented modeling and component preconstruction are employed. In the following sections we present the Distributed System Simulator components, emphasize the modeling scheme introduced for distributed application description, and discuss model extension and validation issues, crucial for our approach. An example using DSS to evaluate the performance of a distributed banking system is also presented.

2. Simulation Environment

The Distributed System Simulator was initially developed as part of a distributed architecture design environment, called IDIS ([9]). Since requirements for network and application modeling, experimentation and model management increased considerably, DSS evolved into a standalone environment. DSS is based on object-oriented and process-oriented simulation; its current version is implemented using MODSIM III ([10]) for model construction and Java for all other modules. DSS is modular, as presented in figure 1, and consists of a graphical user interface (GUI), which co-operates with individual modules for simulation program construction and model manipulation, and a model base. Model and experimentation specifications are provided through the GUI. Model Generator constructs the simulation program, using component models that reside in Model Base. Completeness and validity of specifications must be pre-ensured, and this is accomplished through the Compatibility Rule Base, which includes a representation of all models residing in Model Base and compatibility rules. Model Manager is invoked during the model extension process.

Figure 1. Distributed System Simulator components

Line connections in figure 1 indicate module invocation and data access.
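The module interplay described above can be sketched as a tiny pipeline. The component models, compatibility rules and function names below are invented for illustration and are not DSS code: a specification is validated against the Compatibility Rule Base before Model Generator assembles a program from Model Base components.

```python
# Illustrative sketch of the DSS pipeline: specifications are validated
# against compatibility rules before the simulation program is assembled
# from preconstructed component models. All names are invented.
MODEL_BASE = {"EthernetLAN", "FileServer", "TellerClient"}   # available models

# compatibility rules: component -> components it may be coupled with
COMPATIBILITY = {"TellerClient": {"EthernetLAN"}, "FileServer": {"EthernetLAN"}}

def validate(spec):
    """Check completeness and validity of a specification (pairs of coupled models)."""
    for a, b in spec:
        if a not in MODEL_BASE or b not in MODEL_BASE:
            return False                       # unknown component model
        if b not in COMPATIBILITY.get(a, set()):
            return False                       # coupling not allowed
    return True

def generate_program(spec):
    if not validate(spec):
        raise ValueError("specification rejected by Compatibility Rule Base")
    return [f"instantiate {a} on {b}" for a, b in spec]

print(generate_program([("TellerClient", "EthernetLAN")]))
# ['instantiate TellerClient on EthernetLAN']
```

The point of the sketch is the ordering: validation against the rule base precedes program generation, so only consistent specifications reach Model Generator.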
When experiments are completed, results are subjected to output analysis in order to a) determine whether distributed applications operate efficiently and b) determine the ability of the network infrastructure to support the requirements imposed by distributed applications.

3. Model Definition

Object-oriented modeling provides an almost natural representation of multi-entity systems, such as distributed systems, since modularity enables the in-depth description of all their components. In simulation modeling, modularity often results in a hierarchical structure, according to which components are coupled together to form larger models ([11]). Distributed systems are modeled as a combination of two types of entities: distributed application and network infrastructure entities. Both are described in terms of their elementary components ([12]). Network model composition is a complex task due to the increased number of network technologies and standards. Since modeling solutions for communication network architectures are already employed by commercial simulation environments, such as Comnet and OpNet ([2], [1]), this topic is not further discussed in this paper. In most contemporary systems, distributed application operation is based on the client/server model. When designing distributed applications, as indicated in [3], there are many architectural solutions that may be employed regarding the functionality provided by clients and servers and the replication scheme. There are two variations of the client/server model that are widely adopted: the two-tier and the three-tier models. According to the two-tier model, application functionality is merely embedded in the clients, while servers deal with data manipulation and consistency issues ([3]). After the explosion of the Internet and the WWW, this model was no longer viable, since functionality was embedded in Web servers to minimize communication delay.
Furthermore, the aggregate functionality was dispatched into more than one layer, with the use of intermediate ones (middleware) between clients and servers, thus offering common services to clients. This is the three-tier model. Within the DSS framework, a basic scheme was introduced to facilitate the description of applications, regardless of their complexity and architecture, supporting both of the above architectures. Two types of processes can be defined: clients, which are invoked by users, and servers, which are invoked by other processes. The specific interfaces, acting as process activation mechanisms, must be defined for each process, along with the operation scenario that corresponds to the invocation of each interface. Each operation scenario comprises the actions that occur upon process activation. Actions are described by qualitative and quantitative parameters, e.g. the processes being involved and the amount of data sent and received. In most cases, the operation scenario is executed sequentially (each action is performed when the previous one has completed). However, there are cases where actions must be performed concurrently. This is supported through specifying groups of actions that have the same sequence number. The basic actions used for application description are the following:

- **Processing**: indicating data processing
- **Request**: indicating invocation of a server process
- **Write**: indicating data storage
- **Read**: indicating data retrieval
- **Transfer**: indicating data transfer between processes
- **Synchronize**: indicating replica synchronization

Each process is executed on a processing node and, thus, the **Processing** action indicates invocation of the processing unit of the corresponding node. According to the client/server model, communication between processes is performed through exchanging messages. Server processes can be invoked by other processes, clients or servers.
The **Request** action indicates invocation of a server process and is characterized by the name of the server process, the invoked interface and the amount of data sent and received. It also implies activation of the network, since the request and the reply must be transferred between the invoking and the invoked process. DSS currently supports the RPC, RMI and HTTP protocols. Storing data is performed through **File Servers**. Two actions are available for data storing, **read** and **write**, which are characterized by the amount of data retrieved and stored, respectively, and the file server invoked. Temporary data can also be stored on the local disk, resulting in the invocation of the corresponding node storage element. The File Server process supports two interfaces, namely **read** and **write**, corresponding to the aforementioned actions. The **Transfer** action is used to indicate data exchange between processes. Replication of processes and data is a common practice in distributed applications in order to enhance performance. While process replication is easy to implement, replication of data is accomplished through defining process replicas, for handling data, and a synchronization policy. In the latter case, there are many issues to be resolved, such as determining the process responsible for the synchronization (the invoking process or a process replica), when synchronization is performed (i.e. whenever a change is made or periodically at pre-specified time points) and the synchronization algorithm. Definition of process replicas operating on different nodes and data replicas stored at different file servers is supported. DSS does not support specific synchronization policies. It allows the description of the logical connection between replicated processes and data during process definition and provides the **synchronize** action to facilitate the specification of a synchronization policy.
This action corresponds to the invocation of the synchronize interface, which must be supported by all process replicas. The corresponding operation scenario has to be defined by the human operator. Synchronize action parameters include the process replicas that must be synchronized and the amount of data transferred. User behavior is modeled through User Profiles. Each profile includes user requests to the client interfaces that may be invoked by the user. For each profile, execution parameters, such as the execution probability, are also specified. User profiles are associated only with processing nodes. In figure 2, an example of the processes involved in a distributed banking system is presented. Tellers are represented through Teller Profile, which activates Teller Client by invoking the Deposit Interface. The teller manager, represented by Manager Profile, can also activate Teller Client by invoking Closing Interface. Deposit interface corresponds to a deposit in a client account and is invoked with two parameters, account and amount, which indicate the size of the corresponding data. The Deposit operation scenario includes actions, such as read (indicating program download) and request (indicating the actual deposit), that activate the corresponding operation scenarios of Local Database and File Server. The first parameter of each action indicates the execution sequence. Local Database is a replica of Central Database, thus the synchronize action is used to indicate the need for data synchronization between the local branch and the main system. After data is stored in Local Database, Central Database is also updated. Since the synchronization algorithm is application-specific, the corresponding operation scenario is defined by the user. Server process activation is performed through read, write, request and synchronize actions and is indicated by dotted lines. Processes are composite objects acquiring static (e.g.
process type) and dynamic properties as lists of objects (e.g. interfaces and operation scenarios). Each operation scenario is also a composite object, including a list of actions. The DSS operator can store specific instances of processes, such as the DB Server in the previous example, for future reuse in other experiments. This is accomplished by properly extending object hierarchies, as discussed in section 4. The actions used to define operation scenarios are either elementary or of a higher layer. In the latter case, they can be decomposed into elementary ones. While processing is an elementary action, write is expressed through simpler ones, i.e. a processing action and a request sent to a File Server. All actions can be ultimately expressed through the three elementary ones, processing, network and diskIO, each indicating invocation of the corresponding infrastructure component ([13]). Action decomposition is not performed in a single step. Intermediate stages are introduced to simplify the overall process and maintain relative data. The action decomposition scheme is presented in figure 3. Dotted rectangles represent intermediate actions, while gray rectangles represent elementary ones. Finally, rectangles with black borders represent application actions used when defining operation scenarios. Note that even though processing is an elementary action, it is used in the definition of operation scenarios. This diagram can be further extended to include user-defined, domain-oriented actions, as discussed in section 4, which conform to specific architectural models. However, alteration or creation of elementary actions is not allowed. The supported actions are categorized into 4 layers. The lowest layer includes only elementary actions, while the highest one includes only actions built upon existing ones. User-defined actions are also placed at this layer. Each action can be decomposed into others of the same or a lower layer.
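The multi-layer decomposition into elementary actions can be sketched as a recursive expansion. The concrete mapping below is an illustrative fragment only; the actual scheme in figure 3 contains intermediate actions and parameters not modeled here.

```python
# Illustrative fragment of the action hierarchy: higher-layer actions are
# expanded until only the elementary actions processing, network and diskIO
# remain. The mapping itself is invented for the sketch.
ELEMENTARY = {"processing", "network", "diskIO"}
DECOMPOSITION = {
    "request":  ["network", "processing", "network"],  # send, serve, reply
    "transfer": ["network"],
    "write":    ["processing", "request"],             # prepare data, call File Server
    "read":     ["processing", "request"],
}

def decompose(action):
    """Expand an action into the elementary actions it ultimately invokes."""
    if action in ELEMENTARY:
        return [action]
    return [e for sub in DECOMPOSITION[action] for e in decompose(sub)]

print(decompose("write"))   # ['processing', 'network', 'processing', 'network']
```

The recursion mirrors the statement above that decomposition proceeds through intermediate stages rather than in a single step.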
Actions support specific parameters and are derived as descendants of the action class. During action decomposition, all parameters of the invoked action must be defined.

4. Model Extension and Validation

The proposed object-oriented modeling scheme facilitates the extension of application component (e.g. action) functionality, in order to describe custom applications, and also the storing of specific application component (e.g. DB server) instances for future reuse. The same capabilities are provided for network components, such as network protocols. Model extension is performed by the human operator through the invocation of Model Manager and Compatibility Rule Base. Extending distributed application modeling constructs is a strong requirement for the modeling scheme, and its absence is a noted pitfall of current simulation tools. Processes, actions and communication protocols are the most common entities for which new models (i.e. components) need to be provided. Models are created as descendants of existing, abstract entity models. A concise modeling framework for extending object structures has been described in ([13]). Considering the example of section 3.1, a new insert action model would be constructed as a direct descendant of the abstract application action model, while a DB process model would be constructed as a descendant of the BE process model (figure 4). Extending object hierarchies is performed according to specific restrictions that ensure the validity of the modeling scheme. User-defined action models are either of intermediate or application type. Existing actions cannot be altered, while new actions must always be described in terms of existing ones. When creating a new process model, the interfaces and operation scenarios are usually fully defined. Although a new operation scenario (e.g. insert) can be stored within a new process model (e.g.
DB process), the operation scenario model cannot be extended, since the addition of new operation scenarios not belonging to a specific process is not supported. While describing an application, the user can copy a specific operation scenario, since the description of specific instances is temporarily stored within the Compatibility Rule Base. At the implementation level, the Model Base is extended using object inheritance. When extending composite entities (e.g. processes), hierarchical layering enables the construction of complex models through extending the behavior of existing objects, and ensures that the models of a single entity, organized in a single class hierarchy, are accessed through a common interface, using polymorphism ([14]). When defining a new action, the user declares its parameters and the actions used to describe it, while the GUI ensures that all actions are properly invoked (i.e. their parameters are properly filled). The code fragment generated when constructing the write action is presented in figure 5. As indicated in this figure, write is constructed as a descendant of application action and results in the activation of a request action. Its parameters are stored as object properties, and only the init method needs to be modified. User-defined actions are added to the Model Base in a similar manner. The init method is explicitly created for all user-defined actions, since they support different input parameters and correspond to different descriptions stored in the Consist_of property. Other methods, such as activate, maintain the same syntax to facilitate polymorphism and remain the same for all actions.
```
{Object Definitions}

ApplicationActionObj = Object(ActionObj)
    CalledProcess : ProcessObj;
    CallingProcess : ProcessObj;
    Number_consist_of : Integer;
    Consist_of : ARRAY[1..Number_consist_of] of ActionObj;
    Override ASK METHOD Init();
    Override WAIT FOR METHOD Activate;
END OBJECT;

RequestObj = Object(ApplicationActionObj)
    Seq : Integer;
    Interface : InterfaceObj;
    Int_Par_List : List of STRING;
    ReqSize : INTEGER;
    ReplySize : INTEGER;
    Override ASK METHOD Init(IN Seq : INTEGER;
        IN Calling_Procedure : ProcessObj;
        IN Called_Process : BE_ProcessObj;
        IN Interface : InterfaceObj;
        IN Int_Par_List : List of STRING;
        IN ReqSize : INTEGER;
        IN ReplySize : INTEGER);
END OBJECT;

WriteObj = Object(ApplicationActionObj)
    File : FileObj;
    Data_size : INTEGER;
    Override Called_Process : FSObj;
    Override ASK METHOD Init(IN Seq : INTEGER;
        IN Calling_Procedure : ProcessObj;
        IN Called_Process : FSObj;
        IN File : FileObj;
        IN Data_size : INTEGER);
END OBJECT;

{Objects Implementation}

OBJECT ApplicationActionObj;
    ...
    WAIT FOR METHOD Activate;
    BEGIN
        FOR ALL a IN consist_of
            WAIT FOR a TO Activate;
    END METHOD;
END OBJECT;

OBJECT WriteObj;
    ASK METHOD Init(IN seq : INTEGER;
        IN calling_procedure : ProcessObj;
        IN called_Process : FSObj;
        IN file : FileObj;
        IN data_size : INTEGER);
    BEGIN
        Seq := seq;
        Calling_Procedure := calling_procedure;
        Called_Process := called_process;
        File := file;
        Data_size := data_size;
        /* initiate inherited properties */
        Interface := new(InterfaceObj);
        Int_Par_List := {File, Data_size};
        ReqSize := Data_size + 100;
        ReplySize := 100;
        /* fill consist_of list */
        Number_Consist_of := 1;
        Consist_of := new(requestObj);
        ASK Consist_of To Init(Seq, Calling_Procedure, Called_Process,
            Interface, Int_Par_List, ReqSize, ReplySize);
    END METHOD;
END OBJECT;
```

Figure 5. Code generation when constructing the Write action

Extension of process models is accomplished based on the same guidelines. Code generation is performed by the Model Manager, which establishes a coupling relation between these components. The extension process comprises the following steps: 1.
Ensuring model validity and compatibility with the existing models. 2. Inserting the component model into the Model Base. 3. Updating the Compatibility Rule Base with the new component structure. The overall process is depicted in figure 6. Model extension and simulation program generation capabilities can only be supported when input specifications are thoroughly examined to ensure model validity. Validation is not trivial, even though models are preconstructed, since models are coupled to form larger ones and are extended to conform to customized implementations. Validation is carried out through rule-based mechanisms during system specification. Graphical visualization of the existing model hierarchies supports the addition of customized models. The Compatibility Rule Base is invoked to ensure that consistency is maintained. When the Model Library is extended, the Compatibility Rule Base is updated with the structure of the additional model, its relations to the existing models, and rules concerning its initialization. ### 5. Practical Example of DSS Usage The Distributed System Simulator was used to evaluate the performance of a distributed banking system. Apart from its headquarters, the bank maintains 64 branches. The banking system supports 24 distinct transactions, grouped into four categories, which are mainly initiated by tellers. The average number of transactions per day in a branch is 500, while the maximum number in central branches is over 1000. The required response time is 15-20 sec for all transactions. The network infrastructure could have been modeled and evaluated using various commercial simulation tools. However, application description was not feasible on the basis of primitive modeling constructs alone; it required an intermediate-layer analysis that gradually extends down to the primitive action layer, so that a credible application model would emerge. The system architecture is based on the three-tier model.
A central database is installed at headquarters, where all transactions are executed, while transaction logs are maintained in local databases at the branches. The central database supports 33 stored procedures corresponding to the different execution steps of the 24 transactions. Transactions are coordinated by a transaction monitoring system, also installed at headquarters. Digital's RDB database management system and the ACMS transaction monitoring system are used. The overall network runs TCP/IP. Lightweight client applications run on user workstations. Client data are stored locally in the branch file server. When a transaction is executed, the corresponding forms are invoked, each having an average size of 3K. ACMS is invoked up to four times for the execution of the corresponding stored procedure. Before each transaction finishes, a log is stored in the local database. The server processes modeled are the following: *File Server* at headquarters and local branches, *CentralDB*, *LocalDB* and *ACMS*. Since *LocalDB* represents logging, only a simple *insert* interface had to be implemented for recording the log. *CentralDB* is accessed through the 33 stored procedures, which are implemented and stored in the database. For each stored procedure, a single interface had to be implemented. Since system performance was mainly determined by the interaction of the different system modules and not by the internal database mechanisms, we decided to establish a common representation for all stored procedures. A new action called *call_stored_procedure_step* was created and inserted in the action hierarchy. Its parameters are *preprocessing*, *data_accessed* and *postprocessing*. The *data_accessed* parameter indicates the amount of data accessed at each step, while the *preprocessing* and *postprocessing* parameters indicate the amount of data to be processed before and after access, as a fraction of the accessed data.
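As a rough illustration of how these three parameters combine (the function name, units and example values are assumptions for illustration, not the DSS interface):

```python
# Hypothetical illustration of the call_stored_procedure_step parameters:
# preprocessing and postprocessing are given as fractions of the data
# accessed at the step, so the per-step data volumes can be derived directly.

def stored_procedure_step(preprocessing, data_accessed, postprocessing):
    """Return (pre, accessed, post) data volumes for one procedure step."""
    return (preprocessing * data_accessed,
            data_accessed,
            postprocessing * data_accessed)

# a step accessing 4000 bytes, processing 25% of it before and 50% after
pre, accessed, post = stored_procedure_step(0.25, 4000, 0.5)
print(pre, accessed, post)  # 1000.0 4000 2000.0
```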
Using this action, the description of stored procedures was significantly simplified. Each stored procedure consists of one to five steps. The *call_stored_procedure_step* action is implemented as an interface of the *CentralDB* process, in a way similar to read/write, and includes the activation of *processing*, *read* and *write* actions. *ACMS* is modeled as a server process providing the interface *call_ACMS* (*procedure*, *inputdata*, *outputdata*, *processing*), which initiates the activation of the corresponding stored procedure. Client applications involve the invocation and processing of forms, the activation of stored procedures through ACMS, and log recording. Log recording is modeled by invoking the *insert* interface of *LocalDB*, while stored procedure activation is accomplished through the invocation of the *call_ACMS* interface of *ACMS*. *Form_access* (*FS*, *form_name*, *processing*) was added to the action hierarchy to depict the accessing, activation and processing of a form. Using combinations of these three actions, it was possible to describe all applications in a simplified, common way. Applications were categorized into four groups, each controlled by a different type of user. Applications of the same group are not executed simultaneously by the same user. This led us to depict each group as a client process supporting one interface for each specific application. Users are depicted as profiles initiating the corresponding client application. Apart from building a credible distributed application model, DSS also enabled the estimation of the exact amount of data processed and transferred within and between branches. Further modeling advantages were the simplified client application description and the extendibility and flexibility offered during process description. The capability to extend the action hierarchy was important to ensure a detailed application description.
If only predefined actions could be used, the same description would have to be given repeatedly for all transactions, e.g. form activation. Furthermore, this scheme facilitated application description at the level of abstraction required by different groups of users. While the system was under deployment, DSS contributed to determining potential weak points and to ensuring the response time of client transactions. Since the main activity of all transactions relates to the invocation of the central database through ACMS, special attention was given to the system performance at headquarters. DSS indicated two drawbacks. First, the processing power of the hardware supporting the central database was not adequate to execute client transactions within the predefined response time. This proved to be accurate, forcing the bank to upgrade the hardware platform. Second, for the interconnection of branches with headquarters, load estimation indicated that the throughput of specific leased lines should be increased. On the other hand, Ethernet (10BaseT) proved to be sufficient within branches, since the average throughput was very low (less than 0.05 Mbps). ### 6. Conclusions The objective of the modeling scheme introduced here was to explore the behavior of distributed systems while emphasizing the description of distributed applications. Application modeling extends to the operation and interaction mechanisms and conforms to the various forms of the client/server model. Since distributed system architectures are configurable, considerable effort was put into constructing and organizing the preconstructed component models to ensure their efficient manipulation. The modeling scheme provides guidelines for modeling the essential, both primitive and composite, distributed system components. The capability to reuse models when implementing customized component models was crucial for the description of different architectures, despite the complicated nature of this process.
An important feature of this research is that the modeling guidelines can also be used in other modeling and simulation studies. References
OSI formal specification case study: the Inres protocol and service, revised

Dieter Hogrefe
Institut für Informatik, Universität Bern
Länggassstrasse 51, CH-3012 Bern, Switzerland

May 1991, updated May 1992

Abstract

This paper contains an OSI specification case study. An informal specification of an OSI-like protocol and service is followed by an SDL [ZI80], Estelle [ISO 9074] and LOTOS [ISO 8807] specification of the same protocol and service. The protocol is called Inres, for Initiator-Responder protocol. It is connection-oriented and asymmetric, i.e. one side can only establish connections and send data, while the other side can accept connections, release them and receive data.

1. Introduction

The system under study, Inres, is not a real system, although it contains many basic OSI concepts and is therefore very suitable for illustrative purposes, because it is easy to understand and not too big. It is an abridged version of the Abracadabra system described in [TR 10167]. The Inres system was originally published in [HOG89] in German and has already been used as a reference in many publications. This paper contains only a short evaluation and experience section at the end. Its main purpose is to offer the community a well worked out protocol example, which has been checked in parts with tools, to serve:

- as a reference for other work using the Inres protocol,
- as an illustration of the use of FDTs (formal description techniques),
- to stimulate and provoke the discussion on protocol versus service verification and the automatic generation of conformance tests,
- to stimulate and provoke experts of other formal description techniques, such as Z [SP88], stream functions [BRO87] and temporal logic [GOT91], to specify the same protocol with their approach.

In the following sections the services and the protocol are first described verbally and semi-formally with TS diagrams. These informal descriptions form the basis for the formal specifications with SDL.
There are some conventions in the descriptions for the naming of SPs, SAPs and SDUs. Those SPs, SAPs and SDUs that are related to the Medium service have the prefix M. For example, MSDU is the name of a service data unit of the Medium service. SPs, SAPs and SDUs that are related to the Inres service and protocol have the prefix I. The order of description in the next chapters is a recommended one: first, one should think about the service that has to be rendered, then the service that can be used is taken into account, and thereafter the protocol is designed which can render the desired service.

1.1 Informal specification of the Inres service

This is an abridged version of the Abracadabra service [TR 10167]. The service is connection-oriented. A user who wants to communicate with another user via the service must first initiate a connection before exchanging data. Fig. 1.2 shows the basic schema of the service with its SPs and SAPs.

[Figure 1.2: The Inres service — the Initiator user at ISAPini (with ICONconf and IDISind) and the Responder user (with ICONind, IDATind and IDISreq)]

For simplification purposes the service is not symmetrical. The service can be accessed at two SAPs. At one SAP (the left one in Fig. 1.2) the Initiator-user can initiate a connection and afterwards send data. At the other SAP another user, the Responder-user, can accept the connection or reject it. After acceptance it can receive data from the initiating user.
The following SPs are used for the communication between user and provider:

- ICONreq: request of a connection by the Initiator-user
- ICONind: indication of a connection by the provider
- ICONresp: response to a connection attempt by the Responder-user
- ICONconf: confirmation of a connection by the provider
- IDATreq(SDU): data from the Initiator-user to the provider; this SP has a parameter of type SDU
- IDATind(SDU): data from the provider to the Responder-user; this SP has a parameter of type SDU
- IDISreq: request of a disconnection by the Responder-user
- IDISind: indication of a disconnection by the provider

The order of SPs at the different SAPs is specified in Fig. 1.3a-1.3h with generalized TS-diagrams (see [TR 8509]).

[Figure 1.3a: Successful connection establishment — ICONreq at the Initiator SAP, followed by ICONind and ICONresp at the Responder SAP]

1.2 Informal specification of the Medium service

The Medium service has two SAPs: MSAP1 and MSAP2. The service is symmetrical and operates connectionless. It can be accessed at the two SAPs by the SPs MDATreq and MDATind, both of which have a parameter of type MSDU. With these SPs, data (MSDUs) can be transmitted from one SAP to the other. The data transmission is unreliable, and data can be lost, but data cannot be corrupted or duplicated. Fig. 1.4 shows the overall schema of the service, and Fig. 1.5a-1.5b show the respective TS diagrams.

1.3 Informal specification of the Inres protocol

This section describes a protocol which, by use of the unreliable Medium service, renders the Inres service to users in the imaginary next higher layer. Fig. 1.6 shows the overall architecture of the protocol.

### General properties of the protocol

The Inres protocol is a connection-oriented protocol that operates between the two protocol entities Initiator and Responder. The protocol entities communicate by exchange of the protocol data units CR, CC, DT, AK and DR.
The meaning of the PDUs is specified below.

<table>
<thead>
<tr> <th>PDU</th> <th>Meaning</th> <th>parameters</th> <th>respective SPs</th> </tr>
</thead>
<tbody>
<tr> <td>CR</td> <td>connection establishment</td> <td>none</td> <td>ICONreq, ICONind</td> </tr>
<tr> <td>CC</td> <td>connection confirmation</td> <td>none</td> <td>ICONresp, ICONconf</td> </tr>
<tr> <td>DT</td> <td>data transfer</td> <td>sequence number, ISDU</td> <td>IDATreq, IDATind</td> </tr>
<tr> <td>AK</td> <td>acknowledgement</td> <td>sequence number</td> <td>none</td> </tr>
<tr> <td>DR</td> <td>disconnection</td> <td>none</td> <td>IDISreq, IDISind</td> </tr>
</tbody>
</table>

The communication between the two protocol entities takes place in three distinct phases: the connection establishment phase, the data transmission phase and the disconnection phase. In each phase only certain PDUs and SPs are meaningful; unexpected PDUs and SPs are ignored by the entities Initiator and Responder.

[Figure 1.6: The Inres protocol]

#### Connection establishment phase

A connection establishment is initiated by the Initiator-user at the entity Initiator with an ICONreq. The entity Initiator then sends a CR to the entity Responder. Responder answers with CC or DR. In the case of CC, Initiator issues an ICONconf to its user, and the data phase can be entered. If Initiator receives a DR from Responder, the disconnection phase is entered. If Initiator receives nothing at all within 5 seconds, the CR is transmitted again. If, after 4 attempts, still nothing has been received by Initiator, it enters the disconnection phase. If Responder receives a CR from Initiator, the Responder-user gets an ICONind. The user can respond with ICONresp or IDISreq. ICONresp indicates the willingness to accept the connection; Responder thereafter sends a CC to Initiator, and the data transmission phase is entered. Upon receipt of an IDISreq, Responder enters the disconnection phase.
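The Initiator's establishment rule can be sketched as follows. The medium abstraction and all names below are illustrative, not taken from the Inres specifications; the 5-second timeout is symbolic and no real clock is used:

```python
# Sketch of the Initiator's connection-establishment behaviour: send CR,
# wait up to 5 seconds for CC or DR, retransmit on timeout, and enter the
# disconnection phase after 4 unsuccessful attempts. The medium is
# abstracted as a function returning the peer's answer or None on loss.

MAX_ATTEMPTS = 4
TIMEOUT = 5  # seconds (symbolic in this sketch)

def establish_connection(medium):
    """Return the phase the Initiator ends up in: 'data' or 'disconnected'."""
    for attempt in range(MAX_ATTEMPTS):
        answer = medium("CR", timeout=TIMEOUT)  # 'CC', 'DR' or None (lost)
        if answer == "CC":
            return "data"            # ICONconf issued, data phase entered
        if answer == "DR":
            return "disconnected"    # Responder rejected the connection
        # None: nothing received within the timeout -> retransmit CR
    return "disconnected"            # 4 attempts exhausted

# a medium losing the first two CRs, then delivering a CC
answers = iter([None, None, "CC"])
print(establish_connection(lambda pdu, timeout: next(answers)))  # data
```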
#### Data transmission phase

If the Initiator-user issues an IDATreq to the entity Initiator, Initiator sends a DT to the Responder and is then ready to receive another IDATreq from the user. IDATreq has one parameter, a service data unit ISDU, which is used by the user to transmit information to the peer user. This user data is transmitted transparently by the protocol entity Initiator as a parameter of the protocol data unit DT. After having sent a DT to Responder, Initiator waits for 5 seconds for the respective acknowledgement AK; if none arrives, the DT is sent again. After 4 unsuccessful transmissions, Initiator enters the disconnection phase. DT and AK carry a one-bit sequence number (0 or 1) as a parameter. Initiator starts, after having entered the data transmission phase, with the transmission of a DT with sequence number 1. A correct acknowledgement of a DT has the same sequence number. After receipt of a correct acknowledgement, the next DT with the next (i.e. other) sequence number can be sent. If Initiator receives an AK with an incorrect sequence number, it sends the last DT once again. It is also sent again if the respective AK does not arrive within 5 seconds. A DT can only be sent 4 times; afterwards Initiator enters the disconnection phase. The same happens upon receipt of a DR. After a successful connection establishment, Responder expects the first DT with the sequence number 1. After receipt of a DT with the expected number, Responder gives the ISDU as a parameter of an IDATind to its user and sends an AK with the same sequence number to the Initiator. A DT with an unexpected sequence number is acknowledged with an AK carrying the sequence number of the last correctly received DT. The user data ISDU of an incorrect DT is ignored. If Responder receives a CR, it enters the connection establishment phase. Upon receipt of an IDISreq, it enters the disconnection phase.
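The Responder's sequence-number handling can be sketched as follows; this is a hedged illustration and the class and method names are assumed, not part of the Inres specification:

```python
# Sketch of the Responder's data-transfer rule: a DT with the expected
# one-bit sequence number is delivered to the user (IDATind) and
# acknowledged with that number; an unexpected DT is acknowledged with the
# number of the last correctly received DT and its ISDU is discarded.

class Responder:
    def __init__(self):
        self.expected = 1   # the first DT after connection setup carries 1
        self.delivered = [] # ISDUs handed to the Responder-user

    def on_dt(self, seq, isdu):
        """Handle a DT PDU; return the sequence number of the AK to send."""
        if seq == self.expected:
            self.delivered.append(isdu)        # IDATind to the user
            ack = seq
            self.expected = 1 - self.expected  # flip the one-bit number
            return ack
        # duplicate / out-of-order DT: re-acknowledge the last correct one
        return 1 - self.expected

r = Responder()
print(r.on_dt(1, "a"))  # 1   (expected, delivered)
print(r.on_dt(1, "a"))  # 1   (duplicate: data ignored, AK repeated)
print(r.on_dt(0, "b"))  # 0   (next expected number)
print(r.delivered)      # ['a', 'b']
```

This is the alternating-bit pattern: a lost AK makes the Initiator retransmit, and the repeated DT is re-acknowledged without delivering the data twice.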
#### Disconnection phase

An IDISreq from the Responder-user results in the sending of a DR by the Responder. Afterwards, Responder can receive another connection establishment attempt CR from Initiator. At the Initiator, the DR results in an IDISind sent by the Initiator to its user. An IDISind is also sent to the user after a DT or CR has been sent unsuccessfully to the Responder. Then a new connection can be established.

2. Formal specification of Inres in SDL

In some places the formal specification has to add information to that found in the informal one. This is because informal specifications tend to be incomplete: they sometimes leave things up to the intuition of the reader. Therefore, informal service and protocol specifications can be interpreted correctly only if the reader has some universal knowledge about services and protocols. Examples are given in the following sections. The basic approach to the specification of the services and protocol is as follows. We consider a system called Inres (shown in Example 2.1). The system contains exactly one block, the Inres_service. The processes of this block specify the behaviour of the service provider, one process for each service access point. In addition, the block has a substructure, which is the Inres_protocol (specified in Example 2.4). This protocol specification again contains a block for the specification of a service, the Medium service. This block can in turn have a substructure if a protocol has to be specified which should render the Medium service. More on this approach can be found in [BHHT88]. The substructure specification is used in SDL to specify the behaviour of a block in more detail, as an alternative to a more abstract block specification in terms of interacting processes. This approach to service and protocol specification takes two very basic aspects of OSI into account:

- First, that of the recursive nature of the OSI-IRM.
  A service can be defined by a protocol using the underlying service, which again can be defined by a protocol using the next lower underlying service, and so on. The recursion stops with the Physical Medium (see [ISO 7498]). This recursive definition is mapped onto a repeated use of the substructure construct.
- Second, the very important aspect that the service can be seen as an abstraction of the protocol and the next lower service. This is expressed in SDL by an abstract "overview" block specification in terms of interacting processes.

2.1 The Inres service in SDL/GR

In Example 2.1 the service provider block Inres_service consists of two processes interconnected by a signal route. Each process models the behaviour of one service access point.

Example 2.1:

```plaintext
[SDL/GR system diagram Inres_Service: declarations of the signals
ICONreq, ICONconf, ICONind, ICONresp, IDATreq(ISDUType),
IDATind(ISDUType), ICON, ICONF, IDIS and IDAT(ISDUType), the type
ISDUType, and the two blocks ISAP_Ini and ISAP_Resp; only fragments
of the graphical diagram are recoverable.]
```

In principle, it would have been possible to model the whole behaviour of the service by just one process. But the multi-process solution usually results in a less complex specification. Especially in situations in which difficult collision situations may occur (this is not the case here, but it is, for example, in the Abracadabra protocol in [ISO 10167]), it is very useful to model each service access point separately. Example 2.2 shows the behaviour of the Initiator-SAP, called ISAP_Manager_Ini, and Example 2.3 shows the behaviour of the Responder-SAP, called ISAP_Manager_Res. ISAP_Manager_Ini and ISAP_Manager_Res communicate through a channel to establish the global behaviour of the service.

Example 2.2:

```plaintext
[SDL/GR process diagram for ISAP_Manager_Ini; the graphical content
is not recoverable.]
```

The SDL specification of the service relies on the TS diagrams of Section 1.1. Since the TS diagrams do not have a formal semantics, whereas SDL does, no one-to-one mapping between the diagrams and the SDL specification is possible. Some information has to be added for a formal specification of the service. The Inres service is connection-oriented; therefore, we distinguish between the three phases connection establishment, data transfer, and disconnection. In the following, not all features of the SDL specification are discussed; rather, only those are commented on which may not be obvious to the reader.

**Connection establishment**

Fig. 1.3a-1.3e illustrate the basic behaviour of the service provider during the connection establishment phase. Fig. 1.3a and Fig. 1.3b show the "normal" course of events: first a successful connection establishment, and second a user-rejected connection attempt. Fig. 1.3c-1.3e show unpredictable non-deterministic behaviour of the service provider. In Fig. 1.3c the service provider does not indicate the connection attempt to the Responder-user, and in Fig. 1.3d the response of the Responder-user is not transmitted to the Initiator-user. In Fig. 1.3e the Responder-user does not respond "in time."

The modelling of the "normal" course of events in SDL is quite obvious. The difficulties arise from the various "abnormal" situations. After the provider has received an ICONreq from the Initiator-user, basically two things can happen: either the provider rejects the connection attempt with an IDISind to the Initiator-user (Fig. 1.3c), or the provider indicates an ICONind to the Responder-user (Fig. 1.3a). The latter is modelled by the sending of an ICON from ISAP_Manager_Ini to ISAP_Manager_Res. The Responder-user may answer with an ICONresp or an IDISreq. According to Fig. 1.3d, even if an ICONresp is issued to the provider, it may not be able to transmit it to the Initiator-user.
The Initiator-user then receives an IDISind instead. The TS diagram in Fig. 1.3e specifies the situation in which the Responder-user does not react "in time" upon receipt of the ICONind - or does not react at all. This is modelled in SDL by the use of the timer construct. After a certain unspecified time, ISAP_Manager_Ini aborts the connection attempt on its own. If the Responder-user issues the ICONresp after the time-out, this results in a "half-open connection": Initiator-user "thinks" the connection has been aborted, whereas Responder-user "thinks" the connection exists. ISAP_Manager_Ini is in state disconnected and ISAP_Manager_Res is in state connected. If Initiator-user now tries to open a connection by issuing an ICONreq, ISAP_Manager_Res receives an ICON, issues an ICONind to the user, and proceeds to state wait. This specific behaviour is not clearly specified by the TS diagrams, but it follows directly if one makes a model of the provider.

**Data transfer**

If a connection has been established successfully, the Initiator-user may issue an IDATreq with a parameter d of type ISDU to the ISAP_Manager_Ini. According to Fig. 1.3f and 1.3g, two things may happen: either the data are issued to the Responder-user as an IDATind, or the Initiator-user receives an IDISind. In Example 2.2 this is modelled by the use of the Daemon after receipt of the signal IDATreq in state connected. It is important to note that, in case of a disconnection during data transfer, the process ISAP_Manager_Ini may be in state disconnected, whereas the process ISAP_Manager_Res is still in state connected. This situation is terminated when the Initiator-user tries to open up another connection; ISAP_Manager_Res then goes from state connected to state wait.

**Disconnection**

An IDISreq may be issued by the Responder-user at any time. According to Fig. 1.3h and 1.3i, an IDISreq may or may not result in an IDISind at the Initiator-user. This is modelled by the Daemon in Example 2.3.
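The unpredictable provider behaviour of Fig. 1.3c-1.3e is expressed in the SDL listings by a nondeterministic "daemon" decision (`decision ANY` with `(EITHER)`/`(OR)` branches) that either forwards a primitive to the peer SAP or silently drops it. A minimal sketch of that idea, with invented names and a coin flip standing in for SDL's nondeterminism:

```python
import random

# Illustrative model (names invented, not part of the specification)
# of the SDL "daemon": nondeterministically forward a service
# primitive to the peer SAP or silently drop it, as in the
# `decision ANY; (EITHER)/(OR)` construct of Examples 2.2 and 2.3.

def daemon(forward, drop, rng=random):
    """Nondeterministically run `forward()` or `drop()`."""
    if rng.random() < 0.5:   # SDL: decision ANY; (EITHER)
        return forward()
    return drop()            # SDL: (OR) branch
```

A real SDL semantics makes no probabilistic commitment; the 0.5 here is only a stand-in so the sketch is executable.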
Should the IDIS not be transmitted, the system runs into a half-open connection: ISAP_Manager_Res is in state disconnected while ISAP_Manager_Ini is in state connected and still trying to send data. But upon the first receipt of an IDAT, ISAP_Manager_Res aborts the connection with an IDIS. This situation is also captured by the TS diagram 1.3g.

2.2 The Inres Service in SDL/PR

```
system Inres_Service;
  signal ICONreq, ICONconf, ICONind, ICONresp,
         IDATreq( ISDUType), IDATind( ISDUType), IDISreq, IDISind,
         ICON, ICONF, IDIS, IDAT( ISDUType);
  newtype ISDUType
    literals 0, 1 /* insert type of service data unit here */
  endnewtype;
  channel ISAPresp
    from ISAP_Resp to env with ICONind, IDATind;
    from env to ISAP_Resp with ICONresp, IDISreq;
  endchannel ISAPresp;
  channel Internal
    from ISAP_Ini to ISAP_Resp with ICON, IDAT;
    from ISAP_Resp to ISAP_Ini with ICONF, IDIS;
  endchannel Internal;
  channel ISAPini
    from ISAP_Ini to env with ICONconf, IDISind;
    from env to ISAP_Ini with ICONreq, IDATreq;
  endchannel ISAPini;
  block ISAP_Resp referenced;
  block ISAP_Ini referenced;

process ISAP_Manager_Resp;
  dcl d ISDUType;
  start; nextstate Disconnected;
  state Wait;
    input ICONresp;
      decision ANY;
        ( EITHER ): output ICONF; nextstate Connected;
        ( OR ):     nextstate Connected;
      enddecision;
  state Disconnected;
    input ICON;
      output ICONind; nextstate Wait;
    input IDAT( d);
      output IDIS; nextstate -;
  state Connected;
    input IDAT( d);
```

2.3 The Inres protocol and Medium service in SDL/GR

Example 2.4 shows the overall structure of the Inres protocol together with the underlying Medium service as a substructure diagram (referenced in the block diagram Inres_service in Example 2.1).

Example 2.4:

MACRODEFINITION Datatype definitions

```
NEWTYPE Sequencenumber
  LITERALS 0, 1;
  OPERATORS
    succ : Sequencenumber -> Sequencenumber;
  AXIOMS
    succ(0) == 1;
    succ(1) == 0;
ENDNEWTYPE Sequencenumber;

NEWTYPE MSDUType
  STRUCT
    id   IPDUType;
    Num  Sequencenumber;
    Data ISDUType;
ENDNEWTYPE MSDUType;

NEWTYPE IPDUType
  LITERALS CR, CC, DR, DT, AK;
ENDNEWTYPE IPDUType;
```

The specification consists of three basic parts, all three of which are modelled by blocks: the two protocol entities Station_Ini and Station_Res, and the service provider Medium. Each Station consists of two processes. The Coder processes model the interface to the next lower layer by transforming the PDUs produced by the other processes (Initiator and Responder) into the SDUs of the next lower layer, which are then passed down as parameters of SPs. This chosen architecture of a protocol entity is a useful one for all sorts of different protocols. Many protocol specifications nowadays describe the behaviour of processes similar to Initiator and Responder, and they assume that there is an (abstract) channel between them which can be used to transmit the PDUs directly. Of course, according to the OSI-BRM, this is not the case: the service of the next lower layer has to be used for this communication. Therefore, the PDUs have to be transformed by processes like Coder_Init and Coder_Resp. In the most general case the Coder processes may have additional duties. According to [ISO 7498] (more precisely Section 5.7.4 in [ISO 7498]) these processes may handle the connection setup and maintenance of the next lower layer. More on this topic is given in [BHS91]. The SDL specification of the Inres protocol is rather obvious and needs no further comments. It follows rather naturally from the informal description, although, similar to the service, some additional information had to be provided. The verification of the SDL specification with respect to the informal description is left to the reader.
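The role of the Coder processes described above can be sketched outside SDL as a simple encode/decode pair: wrap a PDU into an SDU of the next lower layer, and unwrap incoming SDUs back into PDUs. The dictionary representation and function names below are invented for illustration and are not part of the specification.

```python
# Sketch (invented representation) of a Coder's role: wrap a PDU
# produced by Initiator/Responder into an MSDU carried as the
# parameter of an MDATreq, and unwrap an incoming MDATind parameter
# back into a PDU.

def encode(pdu_id, num=None, data=None):
    """PDU -> MSDU carried by MDATreq."""
    return {"id": pdu_id, "Num": num, "Data": data}

def decode(msdu):
    """MSDU from an MDATind -> (pdu_id, num, data).

    Unknown ids are ignored, mirroring the `else` branch of the
    Coder's decision on Sdu!id."""
    if msdu.get("id") not in ("CR", "CC", "DT", "AK", "DR"):
        return None
    return msdu["id"], msdu.get("Num"), msdu.get("Data")
```

This mirrors the MSDUType struct (id, Num, Data) used by Coder_Init in the listing of Section 2.4.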
2.4 The Inres protocol and Medium service in SDL/PR

Examples 2.7, 2.8 and 2.9:

```
system INRES;
  /* ... */
  block Res_Station;
    signalroute
      from Responder to Coder_Resp with CC, AK, DR;
      from Coder_Resp to Responder with CR, DT;
    signalroute ISAP
      from Responder to env with ICONind, IDATind;
      from env to Responder with ICONresp, IDISreq;
    process Coder_Resp [1, 1] referenced;
    process Responder [1, 1] referenced;
  endblock Res_Station;

process Coder_Init;
  dcl d ISDUType, Num Sequencenumber, Sdu MSDUType;
  start; nextstate Idle;
  state Idle;
    input CR;
      task Sdu!id := CR;
grs0: output MDATreq( Sdu); nextstate Idle;
    input DT( Num, d);
      task Sdu!id := DT, Sdu!Num := Num, Sdu!Data := d;
      join grs0;
    input MDATind( Sdu);
      decision Sdu!id;
        ( CC ): output CC;
grs1:           nextstate Idle;
        ( AK ): output AK( Sdu!Num); join grs1;
        ( DR ): output DR; join grs1;
        else:   nextstate Idle;
      enddecision;
endprocess Coder_Init;

process Initiator;
  dcl Counter Integer, d ISDUType, Num Sequencenumber;
  timer T;
  synonym P Duration = 5;
  start; nextstate Disconnected;
  state Disconnected;
    input ICONreq;
      task Counter := 1; output CR; set( now + P, T); nextstate Wait;
  state Wait;
    input CC;
      reset( T); task Num := 1; output ICONconf; nextstate Connected;
    input T;
      decision Counter < 4;
        ( TRUE ):  output CR; task Counter := Counter + 1;
                   set( now + P, T); nextstate Wait;
        ( FALSE ): output IDISind; nextstate Disconnected;
      enddecision;
    input DR;
      reset( T); output IDISind; nextstate Disconnected;
  state Connected;
    input IDATreq( d);
      task Counter := 1; output DT( Num, d);
      set( now + P, T); nextstate Sending;
    input DR;
      output IDISind; nextstate Disconnected;
  state Sending;
    input AK( Num);
      reset( T); task Num := succ( Num); nextstate Connected;
    input T;
      decision Counter < 4;
        ( TRUE ):  output DT( Num, d); task Counter := Counter + 1;
                   set( now + P, T); nextstate Sending;
        ( FALSE ): output IDISind; nextstate Disconnected;
      enddecision;
    input DR;
      output IDISind; nextstate Disconnected;
    save IDATreq;
endprocess Initiator;

process MSAP_Manager2;
  dcl d MSDUType;
  start; nextstate Idle;
  state Idle;
    input MDATreq( d);
      decision ANY;
        ( EITHER ): nextstate Idle;
        ( OR ):     output IDAT( d); nextstate Idle;
      enddecision;
    input IDAT( d);
      output MDATind( d); nextstate Idle;
endprocess MSAP_Manager2;
```
```
process MSAP_Manager1;
  dcl d MSDUType;
  start; nextstate Idle;
  state Idle;
    input MDATreq( d);
      decision ANY;
        ( EITHER ): nextstate Idle;
        ( OR ):     output IDAT( d); nextstate Idle;
      enddecision;
    input IDAT( d);
      output MDATind( d); nextstate Idle;
endprocess MSAP_Manager1;
```

3. Formal specification of Inres in Estelle

3.1 The Inres service in Estelle

This section describes the Inres service in Estelle. Figure 3.1 gives an overview of the specification. It consists of two modules User plus the module Service_provider. The Service_provider itself consists of two modules, Initiator and Responder, which define the behaviour at the two service access points. They communicate via the channel INTERNchn. The specification is very similar to the SDL specification; therefore, any comments made there also apply here.

3.2 The Inres protocol and Medium service in Estelle

This section describes the Inres protocol in Estelle. The basic structure of the specification is depicted in Figure 3.2. The specification is very similar to the SDL specification; therefore, any comments made there also apply here.

4. Formal specification of Inres in LOTOS

4.1 The Inres service in LOTOS

This section describes the Inres service in LOTOS. The specification style is constraint oriented [VSS88]. Constraints specify parts of the total behaviour of a system, which are combined via the parallel operator. In the following example there are three constraints, which define the:

- behaviour at the service access point ISAPini (ICEPini)
- behaviour at the service access point ISAPres (ICEPres)
- end-to-end behaviour related to the events at the service access points (EndtoEnd)

The sequences of events ICEPini, ICEPres and EndtoEnd are first defined independently from each other. Then they are coordinated by the parallel operator to define the overall behaviour of the system.

```
where
  process ICEPini[g] : noexit :=
      ( ConnectionphaseIni[g] >> DataphaseIni[g] )
      [> DisconnectionIni[g]
    where
      process ConnectionphaseIni[g] : exit :=
          g !ICONreq; g !ICONconf; exit
      endproc (* ConnectionphaseIni *)
      process DataphaseIni[g] : noexit :=
          g !IDATreq ?d:ISDU; DataphaseIni[g]
      endproc (* DataphaseIni *)
      process DisconnectionIni[g] : noexit :=
          g !IDISind; ICEPini[g]
      endproc (* DisconnectionIni *)
  endproc (* ICEPini *)

  process ICEPres[g] : noexit :=
      ConnectionphaseRes[g] >> DataphaseRes[g]
  endproc (* ICEPres *)

  process EndtoEnd[ini,res] : noexit :=
      ConnectionphaseEte[ini,res] >> DataphaseEte[ini,res]
  endproc (* EndtoEnd *)
```

4.2 The Inres protocol and Medium service in LOTOS

This section describes the Inres protocol and Medium service. While the Inres service specification was constraint oriented, this specification is state oriented according to [VSS88]. Fig. 4.1 depicts the basic architecture of the example.

```
  x <= y = (x < y) or (x = y);
  x >= y = not (x < y);
  x > y  = not (x <= y);

ofsort DecNumb
  1 = s(0);
  2 = s(s(0));
  3 = s(s(s(0)));
  4 = s(s(s(s(0))));
  5 = s(s(s(s(s(0)))));
  6 = s(s(s(s(s(s(0))))));
  7 = s(s(s(s(s(s(s(0)))))));
  8 = s(s(s(s(s(s(s(s(0))))))));
  9 = s(s(s(s(s(s(s(s(s(0)))))))));

eqns forall f : Sequencenumber, d : ISDU, ipdu : IPDU
  ofsort DecNumb
    map(CR)      = 0;
    map(CC)      = 1;
    map(DT(f,d)) = 2;
    map(AK(f))   = 3;
    map(DR)      = 4;
  ofsort ISDU
    data(DT(f,d)) = d;
  ofsort DecNumb
    map(IDATreq) = 3;
    map(IDATind) = 4;
  ofsort Boolean
    isCR(ipdu) = map(ipdu) == 0;
    isCC(ipdu) = map(ipdu) == 1;
    isIDISreq(ipdu) = map(ipdu) == 3;
endtype (* MediumSpType *)
```

```
behaviour
  hide MSAP1, MSAP2 in
    Station_Ini[ISAPini,MSAP1]
    |[MSAP1]|
    Medium[MSAP1,MSAP2]
    |[MSAP2]|
    Station_Res[MSAP2,ISAPres]
where
  process Medium[MSAP1,MSAP2] : noexit :=
      Channel[MSAP1,MSAP2] ||| Channel[MSAP2,MSAP1]
    where
      process Channel[a,b] : noexit :=
          a ?d:MSP [isMDATreq(d)];
          ( b !MDATind(d); Channel[a,b] [] i; Channel[a,b] )
      endproc (* Channel *)
  endproc (* Medium *)

  process Station_Ini[ISAPini,MSAP1] : noexit :=
      hide IPdu_ini in
        Initiator[ISAPini,IPdu_ini]
        |[IPdu_ini]|
        Coder[IPdu_ini,MSAP1]
    where
      process Initiator[ISAP,IPdu] : noexit :=
          ( Connectionphase[ISAP,IPdu] >> Dataphase[ISAP,IPdu]( succ(0)) )
          [> Disconnection[ISAP,IPdu]
        where
          process Connectionphase[ISAP,IPdu] : exit :=
              Connectrequest[ISAP,IPdu]
              >> accept z:DecNumb in Wait[ISAP,IPdu]( z)
            where
              process Connectrequest[ISAP,IPdu] : exit( DecNumb) :=
                  (  ISAP ?sp:SP [isICONreq( sp)]; IPdu !CR; exit( 1)
                  [] ISAP ?sp:SP [not( isICONreq( sp))];
                       Connectrequest[ISAP,IPdu]
                       (* user errors are ignored *)
                  [] IPdu ?ipdu:IPDU [not( isDR( ipdu))];
                       Connectrequest[ISAP,IPdu]
                       (* system errors are ignored;
                          DR is only accepted by process Disconnection *)
                  )
              endproc (* Connectrequest *)
              process Wait[ISAP,IPdu]( z:DecNumb) : exit :=
                  (  IPdu ?ipdu:IPDU [isCC( ipdu)]; ISAP !ICONconf; exit
                  [] IPdu ?ipdu:IPDU [not( isCC( ipdu))];
                       Wait[ISAP,IPdu]( z)
                  )
              endproc (* Wait *)
```

**5. Experiences and evaluation**

The specifications have been checked by tools and by thorough review. This of course does not exclude the possibility of errors. The specifications appear to be fairly "correct" as far as syntax and the specified behaviour are concerned. But since the term "correct" has many meanings in the context of semantics, the author is aware of the fact that there may still be problems with the specifications and would be happy to receive any comments. In particular, it was not possible to formally verify the protocol specifications against the service specifications, partly because it is not really clear what verification means in this context: what kind of equivalence relation should hold between service and protocol? The LOTOS specifications have been syntactically checked with the Hippo tool [vE88]. The semantics have been checked by performing a limited number of simulation experiments on the specifications with the same tool.
The syntax of the Estelle specifications has been checked with the Estelle-C compiler [CHA87], and some experiments have also been performed on the specifications by simulation. The SDL specifications have been checked by thorough review. Many comments were received from readers of [HOG89] after the first publication of the specification of Inres, and some of them led to corrections in the specification. It has been experienced during the specification process that the differences between the three languages are not very big. The SDL and Estelle specifications could almost be translated one to one into each other. Differences are mainly due to the different input port semantics of the two languages: SDL has only one input port per process and discards unexpected signals, while in Estelle any number of input ports per process is possible and unexpected messages may lead to deadlock. The LOTOS specification of the Inres protocol has been produced according to the state oriented approach [VSS88]. This makes it very similar to the SDL and Estelle specifications of the Inres protocol: many of the state names in SDL and Estelle appear as process names in the LOTOS specification. The Inres service specification, on the other hand, is constraint oriented, which makes it fundamentally different from the SDL and Estelle specifications of the Inres service.

6. References
Using the GOST R 34.10-94, GOST R 34.10-2001 and GOST R 34.11-94 algorithms with the Internet X.509 Public Key Infrastructure Certificate and CRL Profile

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on June 21, 2006.

Copyright Notice

Copyright (C) The Internet Society (2005).

Abstract

This document supplements RFC 3279. It describes encoding formats, identifiers and parameter formats for the algorithms GOST R 34.10-94, GOST R 34.10-2001 and GOST R 34.11-94 for use in the Internet X.509 Public Key Infrastructure (PKI).

1 Introduction

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. This document supplements RFC 3279 [PKALGS]. It describes the conventions for using the GOST R 34.10-94 and GOST R 34.10-2001 signature algorithms, the VKO GOST R 34.10-94 and VKO GOST R 34.10-2001 key derivation algorithms, and the GOST R 34.11-94 one-way hash function in the Internet X.509 Public Key Infrastructure (PKI) [PROFILE].
This document is a proposal put forward by the CRYPT-PRO Company to provide supplemental information and specifications needed by the "Russian Cryptographic Software Compatibility Agreement" community. The algorithm identifiers and associated parameters for subject public keys that employ the GOST R 34.10-94 [GOSTR341094] / VKO GOST R 34.10-94 [CPALGS] or the GOST R 34.10-2001 [GOSTR341001] / VKO GOST R 34.10-2001 [CPALGS] algorithms are specified, together with the encoding format for the signatures produced by these algorithms. Also specified are the algorithm identifiers for using the GOST R 34.11-94 one-way hash function with the GOST R 34.10-94 and GOST R 34.10-2001 signature algorithms. This specification defines the contents of the signatureAlgorithm, signatureValue, signature, and subjectPublicKeyInfo fields within... Internet X.509 Certificates and CRLs. For each algorithm, the appropriate alternatives for the keyUsage certificate extension are provided. ASN.1 modules, including all the definitions used in this document, can be found in [CPALGS].

2 Algorithm Support

This section is an overview of the cryptographic algorithms that may be used within the Internet X.509 certificate and CRL profile [PROFILE]. It describes one-way hash functions and digital signature algorithms that may be used to sign certificates and CRLs, and identifies OIDs and ASN.1 encodings for public keys contained in a certificate. Conforming CAs and/or applications MUST fully support digital signatures and public keys for at least one of the specified algorithms.

2.1 One-way Hash Function

This section identifies the use of the one-way, collision-free hash function GOST R 34.11-94, the only one that can be used in the digital signature algorithms GOST R 34.10-94/2001. The data that is hashed for certificate and CRL signing is fully described in RFC 3280 [PROFILE].
2.1.1 One-way Hash Function GOST R 34.11-94

GOST R 34.11-94 has been developed by "GUBS of Federal Agency Government Communication and Information" and "All-Russian Scientific and Research Institute of Standardization". The algorithm GOST R 34.11-94 produces a 256-bit hash value of an input of arbitrary finite bit length. This document does not contain the full GOST R 34.11-94 specification, which can be found in [GOSTR3411] in Russian. [Schneier95], ch. 18.11, p. 454, contains a brief technical description in English. This function MUST always be used with the parameter set identified by id-GostR3411-94-CryptoProParamSet (see Section 8.2 of [CPALGS]).

2.2 Signature Algorithms

Conforming CAs may use the GOST R 34.10-94 or GOST R 34.10-2001 signature algorithms to sign certificates and CRLs. These signature algorithms MUST always be used with the one-way hash function GOST R 34.11-94, as indicated in [GOSTR341094] and [GOSTR341001]. This section defines algorithm identifiers and parameters to be used in the signatureAlgorithm field in a Certificate or CertificateList.

2.2.1 Signature Algorithm GOST R 34.10-94

GOST R 34.10-94 has been developed by "GUBS of Federal Agency Government Communication and Information" and "All-Russian Scientific and Research Institute of Standardization". This document does not contain the full GOST R 34.10-94 specification, which can be found in [GOSTR341094] in Russian. [Schneier95], ch. 20.3, p. 495, contains a brief technical description in English. The ASN.1 object identifier used to identify this signature algorithm is:

```
id-GostR3411-94-with-GostR3410-94 OBJECT IDENTIFIER ::=
    { iso(1) member-body(2) ru(643) rans(2) cryptopro(2)
      gostR3411-94-with-gostR3410-94(4) }
```

When the id-GostR3411-94-with-GostR3410-94 algorithm identifier appears as the algorithm field in an AlgorithmIdentifier, the encoding SHALL omit the parameters field.
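The rule that the AlgorithmIdentifier is a SEQUENCE holding only the OBJECT IDENTIFIER can be illustrated with a small DER-encoding sketch. This is not normative and uses short-form lengths only (sufficient here, since everything is under 128 octets); the helper names are invented.

```python
# Illustrative (non-normative) DER encoding of the AlgorithmIdentifier
# for id-GostR3411-94-with-GostR3410-94 (OID 1.2.643.2.2.4) with the
# parameters field omitted, as the text requires.

def der_oid(*arcs):
    """DER-encode an OBJECT IDENTIFIER (tag 0x06, short-form length)."""
    body = bytearray([40 * arcs[0] + arcs[1]])  # first two arcs combined
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                    # base-128, high bit set
        arc >>= 7                               # on all but the last octet
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes([0x06, len(body)]) + bytes(body)

def algorithm_identifier(oid):
    """SEQUENCE (tag 0x30) containing only the algorithm OID."""
    return bytes([0x30, len(oid)]) + oid

alg_id = algorithm_identifier(der_oid(1, 2, 643, 2, 2, 4))
```

The resulting encoding is `30 08 06 06 2a 85 03 02 02 04`: a SEQUENCE of one component, with no parameters field present.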
That is, the AlgorithmIdentifier SHALL be a SEQUENCE of one component: the OBJECT IDENTIFIER id-GostR3411-94-with-GostR3410-94. The parameters in the subjectPublicKeyInfo field of the certificate of the issuer SHALL apply to the verification of the signature.

The signature algorithm GOST R 34.10-94 generates a digital signature in the form of two 256-bit numbers, r' and s. Its octet string representation consists of 64 octets, where the first 32 octets contain the big-endian representation of s and the second 32 octets contain the big-endian representation of r'.

Signature values in CMS [CMS] are represented as octet strings, and the output is used directly. However, signature values in certificates and CRLs [PROFILE] are represented as bit strings, and a conversion is needed. To convert a signature value to a bit string, the most significant bit of the first octet of the signature value SHALL become the first bit of the bit string, and so on through the least significant bit of the last octet of the signature value, which SHALL become the last bit of the bit string.

2.2.2 Signature Algorithm GOST R 34.10-2001

GOST R 34.10-2001 was developed by "GUBS of Federal Agency Government Communication and Information" and "All-Russian Scientific and Research Institute of Standardization". This document does not contain the full GOST R 34.10-2001 specification, which can be found in [GOSTR341001] in Russian. The ASN.1 object identifier used to identify this signature algorithm is:

```
id-GostR3411-94-with-GostR3410-2001 OBJECT IDENTIFIER ::=
    { iso(1) member-body(2) ru(643) rans(2) cryptopro(2)
      gostR3411-94-with-gostR3410-2001(3) }
```

When the id-GostR3411-94-with-GostR3410-2001 algorithm identifier appears as the algorithm field in an AlgorithmIdentifier, the encoding SHALL omit the parameters field. That is, the AlgorithmIdentifier SHALL be a SEQUENCE of one component: the OBJECT IDENTIFIER id-GostR3411-94-with-GostR3410-2001.
The parameters in the subjectPublicKeyInfo field of the certificate of the issuer SHALL apply to the verification of the signature.

The signature algorithm GOST R 34.10-2001 generates a digital signature in the form of two 256-bit numbers, r' and s. Its octet string representation consists of 64 octets, where the first 32 octets contain the big-endian representation of s and the second 32 octets contain the big-endian representation of r'.

Signature values in CMS [CMS] are represented as octet strings, and the output is used directly. However, signature values in certificates and CRLs [PROFILE] are represented as bit strings, and a conversion is needed. To convert a signature value to a bit string, the most significant bit of the first octet of the signature value SHALL become the first bit of the bit string, and so on through the least significant bit of the last octet of the signature value, which SHALL become the last bit of the bit string.

2.3 Subject Public Key Algorithms

This section defines OIDs and public key parameters for public keys that employ the GOST R 34.10-94 [GOSTR341094] / VKO GOST R 34.10-94 [CPALGS] or the GOST R 34.10-2001 [GOSTR341001] / VKO GOST R 34.10-2001 [CPALGS] algorithms. Use of the same key for both signature and key derivation is NOT RECOMMENDED. The intended application for the key MAY be indicated in the keyUsage certificate extension (see [PROFILE], Section 4.2.1.3).

2.3.1 GOST R 34.10-94 Keys

GOST R 34.10-94 public keys can be used for the signature algorithm GOST R 34.10-94 [GOSTR341094] and for the key derivation algorithm VKO GOST R 34.10-94 [CPALGS]. GOST R 34.10-94 public keys are identified by the following OID:

```
id-GostR3410-94 OBJECT IDENTIFIER ::=
    { iso(1) member-body(2) ru(643) rans(2) cryptopro(2)
      gostR3410-94(20) }
```

The SubjectPublicKeyInfo.algorithm.algorithm field (see RFC 3280 [PROFILE]) for GOST R 34.10-94 keys MUST be id-GostR3410-94.
When the id-GostR3410-94 algorithm identifier appears as the algorithm field in an AlgorithmIdentifier, the encoding MAY completely omit the parameters field or set it to NULL. Otherwise this field MUST have the following structure: GostR3410-94-PublicKeyParameters ::= SEQUENCE { publicKeyParamSet OBJECT IDENTIFIER, digestParamSet OBJECT IDENTIFIER, encryptionParamSet OBJECT IDENTIFIER DEFAULT id-Gost28147-89-CryptoPro-A-ParamSet } where: * publicKeyParamSet - public key parameters identifier for GOST R 34.10-94 (see section 8.3 of [CPALGS]) * digestParamSet - parameters identifier for GOST R 34.11-94 (see section 8.2 of [CPALGS]) * encryptionParamSet - parameters identifier for GOST 28147-89 (see section 8.1 of [CPALGS]) Absence of parameters SHALL be processed as described in RFC 3280 [PROFILE], section 6.1, that is, parameters are inherited from the issuer certificate if possible. The GOST R 34.10-94 public key MUST be ASN.1 DER encoded as an OCTET STRING; this encoding SHALL be used as the contents (i.e., the value) of the subjectPublicKey component (a BIT STRING) of the SubjectPublicKeyInfo data element. GostR3410-94-PublicKey ::= OCTET STRING -- public key, Y GostR3410-94-PublicKey MUST contain 128 octets of the little-endian representation of the public key Y = a^x (mod p), where a and p are public key parameters. If the keyUsage extension is present in an end-entity certificate which contains a GOST R 34.10-94 public key, the following values MAY be present: digitalSignature; nonRepudiation; keyEncipherment; keyAgreement. If keyAgreement or keyEncipherment is asserted in a certificate which contains a GOST R 34.10-94 public key, the following values MAY be present as well: encipherOnly; decipherOnly. The keyUsage extension MUST NOT assert both encipherOnly and decipherOnly.
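As an illustrative sketch (not normative, function name hypothetical), the 128-octet little-endian key value described above can be decoded as follows, assuming the DER OCTET STRING wrapper inside the subjectPublicKey BIT STRING has already been removed:

```python
def decode_gost94_public_key(octets: bytes) -> int:
    """Decode the 128-octet little-endian representation of the
    GOST R 34.10-94 public key Y = a^x (mod p).

    Sketch only: the caller is assumed to have already unwrapped the
    DER OCTET STRING from the subjectPublicKey BIT STRING.
    """
    if len(octets) != 128:
        raise ValueError("GOST R 34.10-94 public key must be 128 octets")
    return int.from_bytes(octets, "little")
```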
If the keyUsage extension is present in a CA or CRL signer certificate which contains a GOST R 34.10-94 public key, the following values MAY be present: digitalSignature; nonRepudiation; keyCertSign; cRLSign.

2.3.2 GOST R 34.10-2001 Keys

GOST R 34.10-2001 public keys can be used for signature algorithm GOST R 34.10-2001 [GOSTR341001] and for key derivation algorithm VKO GOST R 34.10-2001 [CPALGS]. GOST R 34.10-2001 public keys are identified by the following OID: id-GostR3410-2001 OBJECT IDENTIFIER ::= { iso(1) member-body(2) ru(643) rans(2) cryptopro(2) gostR3410-2001(19) } The SubjectPublicKeyInfo.algorithm.algorithm field (see RFC 3280 [PROFILE]) for GOST R 34.10-2001 keys MUST be id-GostR3410-2001. When the id-GostR3410-2001 algorithm identifier appears as the algorithm field in an AlgorithmIdentifier, the encoding MAY completely omit the parameters field or set it to NULL. Otherwise this field MUST have the following structure: GostR3410-2001-PublicKeyParameters ::= SEQUENCE { publicKeyParamSet OBJECT IDENTIFIER, digestParamSet OBJECT IDENTIFIER, encryptionParamSet OBJECT IDENTIFIER DEFAULT id-Gost28147-89-CryptoPro-A-ParamSet } where: - publicKeyParamSet - public key parameters identifier for GOST R 34.10-2001 (see section 8.4 of [CPALGS]) - digestParamSet - parameters identifier for GOST R 34.11-94 (see section 8.2 of [CPALGS]) - encryptionParamSet - parameters identifier for GOST 28147-89 (see section 8.1 of [CPALGS]) Absence of parameters SHALL be processed as described in RFC 3280 [PROFILE], section 6.1, that is, parameters are inherited from the issuer certificate if possible. The GOST R 34.10-2001 public key MUST be ASN.1 DER encoded as an OCTET STRING; this encoding SHALL be used as the contents (i.e., the value) of the subjectPublicKey component (a BIT STRING) of the SubjectPublicKeyInfo data element. GostR3410-2001-PublicKey ::= OCTET STRING -- public key vector, Q According to [GOSTR341001], the public key is a point on the elliptic curve \( Q = (x,y) \).
GostR3410-2001-PublicKey MUST contain 64 octets, where the first 32 octets contain the little-endian representation of \( x \) and the second 32 octets contain the little-endian representation of \( y \). This corresponds to the binary representation of \( \langle y \rangle_{256} | \langle x \rangle_{256} \) from [GOSTR341001], ch. 5.3. If the keyUsage extension is present in an end-entity certificate which contains a GOST R 34.10-2001 public key, the following values MAY be present: - digitalSignature, - nonRepudiation, - keyEncipherment, - keyAgreement. If keyAgreement or keyEncipherment is asserted in such a certificate, the following values MAY be present as well: - encipherOnly, - decipherOnly. The keyUsage extension MUST NOT assert both encipherOnly and decipherOnly. If the keyUsage extension is present in a CA or CRL signer certificate which contains a GOST R 34.10-2001 public key, the following values MAY be present: - digitalSignature, - nonRepudiation, - keyCertSign, - cRLSign.

3 Security Considerations

It is RECOMMENDED that applications verify signature values and subject public keys to conform to the [GOSTR341001] and [GOSTR341094] standards prior to their use. When a certificate is used as an analogue of a manual signature, in the context of the Russian Federal Digital Signature Law [RFDSL], the certificate MUST contain the keyUsage extension, the extension MUST be critical, and keyUsage MUST NOT include keyEncipherment or keyAgreement. When the certificate validity period (typically 5 years for end entities and 7 years for CAs in Russia) is not equal to the private key validity period (typically 15 months in Russia), it is RECOMMENDED to use the private key usage period extension. For a security discussion concerning the use of algorithm parameters, see the Security Considerations section of [CPALGS].
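For illustration (hypothetical helper, not part of the draft), the 64-octet point encoding from section 2.3.2 can be decoded as follows, again assuming the DER OCTET STRING wrapper has been removed:

```python
def decode_gost2001_public_key(octets: bytes) -> tuple:
    """Decode the 64-octet GOST R 34.10-2001 public key point Q = (x, y):
    the first 32 octets are the little-endian x coordinate, the second
    32 octets the little-endian y coordinate."""
    if len(octets) != 64:
        raise ValueError("GOST R 34.10-2001 public key must be 64 octets")
    x = int.from_bytes(octets[:32], "little")
    y = int.from_bytes(octets[32:], "little")
    return x, y
```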
4 Appendix Examples 4.1 GOST R 34.10-94 Certificate -----BEGIN CERTIFICATE----- MIICCzCCAb0CECMOC42BG1xST0xwvk1BgfuswCAYGKoUDAgIEMGkxHTAbBgNVBAMM FEdv3RSxMqXMC05NCqleGFtGc6xM1RiWdEAYDVQQKDA1DcnlwG9Qcm8xCzAJBgNV BAYTAkJVMScwJQYjKoZIhvcNAQkBFhhHb3NUOjM0MTAtOTA2NTwhXbEBsZ25j20w HhcNMUDwODE2MTizMjUwMhccNCMTUwODE2MTizMjUwMjBpMR0wGwYDVQQDDBRHB3N0 UjM0MTAtOUTgZxhhbXBeZTESMBAAGA1UgwjQ3J5cHRvUHJvMQswCQQyDVQQGEwJS VTEnMCUGCSqGSIb3QEJARYYRY29zdFizNDEwLTk0QGV4Y1wbGUuY29tMIG1MBwG BiFAwIFQDASBgcIcqhpQMCAIAgBwcgqMBQGCCsGCCsGAQUFBwIcHjckggYDAgEB -----END CERTIFICATE----- 0 30 523: SEQUENCE { 4 30 442: SEQUENCE { 8 02 16: INTEGER : 23 0E E3 60 46 95 24 CE C7 0E E4 94 18 2E 7E EB 26 30 8: SEQUENCE { 28 06 6: OBJECT IDENTIFIER : id-GostR3411-94-with-GostR3410-94 (1 2 643 2 2 4) : } 36 30 105: SEQUENCE { 38 31 29: SET { 40 30 27: SEQUENCE { 42 06 3: OBJECT IDENTIFIER commonName (2 5 4 3) 47 0C 20: UTF8String 'GostR3410-94 example' : } : } 69 31 18: SET { 71 30 16: SEQUENCE { 73 06 3: OBJECT IDENTIFIER organizationName (2 5 4 10) 78 0C 9: UTF8String 'CryptoPro' : } 89 31 11: SET { 91 30 9: SEQUENCE { 93 06 3: OBJECT IDENTIFIER countryName (2 5 4 6) 98 13 2: PrintableString 'RU' : } : } 102 31 39: SET { 104 30 37: SEQUENCE { 106 06 9: OBJECT IDENTIFIER emailAddress (1 2 840 113549 1 9 1) 117 16 24: IA5String 'GostR3410-94@example.com' : } Leontiev & Shefanovski Standards Track [Page 10] :: :: 143 30 30: SEQUENCE { 145 17 13: UTCTime '050816123250Z' 160 17 13: UTCTime '150816123250Z' :: 175 30 105: SEQUENCE { 177 31 29: SET { 179 30 27: SEQUENCE { 181 06 3: OBJECT IDENTIFIER commonName (2 5 4 3) 186 0C 20: UTF8String 'GostR3410-94 example' :: 208 31 18: SET { 210 30 16: SEQUENCE { 212 06 3: OBJECT IDENTIFIER organizationName (2 5 4 10) 217 0C 9: UTF8String 'CryptoPro' :: 228 31 11: SET { 230 30 9: SET { 232 06 3: OBJECT IDENTIFIER countryName (2 5 4 6) 237 13 2: PrintableString 'RU' :: 241 31 39: SET { 243 30 37: SET { 245 06 9: OBJECT IDENTIFIER emailAddress (1 2 840 113549 1 9 1) 256 
16 24: IA5String 'GostR3410-94@example.com' :: 282 30 165: SEQUENCE { 285 30 28: SEQUENCE { 287 06 6: OBJECT IDENTIFIER id-GostR3410-94 (1 2 643 2 2 20) 295 30 18: SEQUENCE { 297 06 7: OBJECT IDENTIFIER :: id-GostR3410-94-CryptoPro-A-ParamSet :: (1 2 643 2 2 32 2) 306 06 7: OBJECT IDENTIFIER :: id-GostR3411-94-CryptoProParamSet :: (1 2 643 2 2 30 1) :: 315 03 132: BIT STRING 0 unused bits, encapsulates { 319 04 128: OCTET STRING :: BB 84 66 E1 79 9E 5B 34 D8 2C 80 7F 13 A8 19 66 :: 71 57 FE 8C 54 25 21 47 6F 30 0B 27 77 46 98 C6 In the signature of the above certificate, r' equals 0x22F785F355BD94EC663F7D73803FBCD43 and s equals 0x11C7087E12DC02F102232947768F472A818350E307CCF2E431238942C873E1DE

4.2 GOST R 34.10-2001 Certificate

-----BEGIN CERTIFICATE----- MIIB0DCCAX8CECv1xh7CEboXx9zUYma0LiEwCAYGKoUDAgIDMG0xHzAdBgNVBAMM Fkdvc3RSMzQxMC0yMDAxIGV4YW1wbGUxEjAQBgNVBAoMCUNyeXB0b1BybzEJMKAkG A1UEBhMCUk1xKuXk7b0Xx9zUYma0LiEwCAYGKoUDAgIDMG0xHzAdBgNVBAMM -----END CERTIFICATE----- 36 30 109: SEQUENCE { 38 31 31: SET { 40 30 29: SEQUENCE { 42 06 3: OBJECT IDENTIFIER commonName (2 5 4 3) 47 0C 22: UTF8String 'GostR3410-2001 example' : : 71 31 18: SET { 73 30 16: SEQUENCE { 75 06 3: OBJECT IDENTIFIER organizationName (2 5 4 10) 80 0C 9: UTF8String 'CryptoPro' : : 91 31 11: SET { 93 30 9: SEQUENCE { 95 06 3: OBJECT IDENTIFIER countryName (2 5 4 6) 100 13 2: PrintableString 'RU' : : 104 31 41: SET { 106 30 39: SEQUENCE { 108 06 9: OBJECT IDENTIFIER emailAddress (1 2 840 113549 1 9 1) 119 16 26: IA5String 'GostR3410-2001@example.com' : : 147 30 30: SEQUENCE { 149 17 13: UTCTime '050816141820Z' 164 17 13: UTCTime '150816141820Z' : : 179 30 109: SEQUENCE { 181 31 31: SET { 183 30 29: SEQUENCE { 185 06 3: OBJECT IDENTIFIER commonName (2 5 4 3) 190 0C 22: UTF8String 'GostR3410-2001 example' : : 214 31 18: SET { 216 30 16: SEQUENCE { 218 06 3: OBJECT IDENTIFIER organizationName (2 5 4 10) 223 0C 9: UTF8String 'CryptoPro' : : 234 31
11: SET { 236 30 9: SEQUENCE { 238 06 3: OBJECT IDENTIFIER countryName (2 5 4 6) 243 13 2: PrintableString 'RU' In the public key of the above certificate, x equals 0x577E324FE70F26DF45C437A0305E5FD2C89318C13CD0875401A026075689584 and y equals 0x601AEACBC660FDFB0C87567EBBA6EA8DE40FAE857C9AD0038895B916CCEB8F The corresponding private key d equals 0x0B293BE050D0082BDAE785631A6BAB68F35B42786D6DDA56AFAF169891040F77 In the signature of the above certificate, \( r' \) equals 0xC1DE176E8D1BEC71B593F3DD36935577688989176220F4DAB131D5B51C33DEE2 and \( s \) equals 0x3C2FC90944B727A9ECA7D5E9FB536DD2C3AA647C442EDEED3116454FBC543FDD

5 References

Normative references: Informative references: [RFDSL] Russian Federal Digital Signature Law, 10 Jan 2002 N1-FZ

Acknowledgments

This document was created in accordance with the "Russian Cryptographic Software Compatibility Agreement", signed by FGUE STC "Atlas", CRYPTO-PRO, Factor-TS, MD PREI, Infotecs GmbH, SPRCIS (SPbRCZI), Cryptocom, and R-Alpha. The goal of this agreement is to achieve mutual compatibility of the products and solutions. The authors wish to thank: Microsoft Corporation Russia for providing information about company products and solutions, and also for technical consulting in PKI. RSA Security Russia and Demos Co Ltd for active collaboration and critical help in the creation of this document. RSA Security Inc for compatibility testing of the proposed data formats while incorporating them into the RSA Keon product. Baltimore Technology plc for compatibility testing of the proposed data formats while incorporating them into the UniCERT product. Russ Housley (Vigil Security, LLC, housley@vigilsec.com) and Vasilij Sakharov (DEMOS Co Ltd, svp@dol.ru) for the initiative of creating this document. Grigorij Chudov for navigating the IETF process for this document.
Authors' Addresses

Sergei Leontiev CRYPTO-PRO 38, Obraztsova, Moscow, 127018, Russian Federation EMail: lse@cryptopro.ru Dennis Shefanovski DEMOS Co Ltd 6/1, Ovchinnikovskaja naberezhnaya, Moscow, 113035, Russian Federation EMail: sdb@dol.ru Grigorij Chudov CRYPTO-PRO 38, Obraztsova, Moscow, 127018, Russian Federation EMail: chudov@cryptopro.ru Alexandr Afanasiev Factor-TS office 711, 14, Presnenskij val, Moscow, 123557, Russian Federation EMail: afa1@factor-ts.ru Nikolaj Nikishin Infotecs GmbH p/b 35, 80-5, Leningradskij prospekt, Moscow, 125315, Russian Federation EMail: nikishin@infotecs.ru Boleslav Izotov FGUE STC "Atlas" 38, Obraztsova, Moscow, 127018, Russian Federation EMail: izotov@niivoskhod.ru Elena Minaeva MD PREI build 3, 6A, Vtoroj Troitskij per., Moscow, Russian Federation EMail: evminaeva@mail.ru Serguei Murugov R-Alpha 4/1, Raspletina, Moscow, 123060, Russian Federation EMail: msm@top-cross.ru Igor Ustinov Cryptocom office 239, 51, Leninskij prospekt, Moscow, 119991, Russian Federation EMail: igus@cryptocom.ru Anatolij Erkin SPRCIS (SPbRCZI) 1, Obrucheva, St.Petersburg, 195220, Russian Federation EMail: erkin@nevsky.net

Disclaimer of Validity

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Full Copyright Statement

Copyright (C) The Internet Society (2005). This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

Acknowledgment

Funding for the RFC Editor function is currently provided by the Internet Society.
Performance Evaluation: Workload Characterization Techniques Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. Raj Jain. Outline - Terminology - Selection of Workload Components and Parameters - Workload Characterization Techniques Outline - Terminology - Selection of Workload Components and Parameters - Workload Characterization Techniques Terminology - **Workload** - service requests to the system - **User** - entity that makes the service request **Workload component** - User is usually called “workload component” in workload characterization - E.g., applications, sites, and user Sessions (each one of them can be regarded as a component) - **Workload parameter** - measured quantities that are used to model/characterize the workload - For example: packet sizes, source-destinations of a packet Outline - Terminology - Selection of Workload Components and Parameters - Workload Characterization Techniques Selection of workload components - The workload component should be at the SUT (i.e., system under test) interface - Each component should represent as homogeneous a group as possible - E.g., combining very different users into a site workload may not be meaningful - Purpose of study and domain of control also affect the choice - E.g., a mail system designer is more interested in a typical mail session than a typical user session involving many applications Selection of workload parameters - Do not use parameters that depend upon the system - E.g., the elapsed time, CPU time Instead, only use parameters that depend on workload itself - E.g., characteristics of service requests: - Arrival Time - Type of request or the resource demanded - Duration of the request - Quantity of the resource demanded, for example, buffer space - Exclude those parameters that have little impact Outline - Terminology - Selection of Workload Components and Parameters - Workload Characterization Techniques Workload 
characterization techniques? - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Averaging - (arithmetic) Mean \[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \] - Alternatives (to be discussed in detail later) - Mode (for categorical variables): Most frequent value - Median: 50-percentile - Geometric mean - Harmonic mean Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Specifying dispersion - Standard deviation (s): square root of variance $s^2$ \[ s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 \] - Coefficient of variation (C.O.V.): \( \frac{s}{\bar{x}} \) - Alternatives (to be discussed in detail later) - Range - 10- and 90-percentile - Semi-interquartile range - Mean absolute deviation What if C.O.V. is high? - Then “mean” is not sufficient to characterize the workload - Alternatives - Complete histogram - Divide users (i.e., workload components) into classes, and specify average for each class # Case Study: Program Usage in Educational Environments (6 universities) <table> <thead> <tr> <th>Data</th> <th>Average</th> <th>Coef. 
of Variation</th> </tr> </thead> <tbody> <tr> <td>CPU time ((VAX-11/780^{TM}))</td> <td>2.19</td> <td>40.23</td> </tr> <tr> <td>Elapsed time</td> <td>73.90</td> <td>8.59</td> </tr> <tr> <td>Number of direct writes</td> <td>8.20</td> <td>53.59</td> </tr> <tr> <td>Direct write bytes</td> <td>10.21</td> <td>82.41</td> </tr> <tr> <td>Size of direct writes</td> <td>1.25</td> <td></td> </tr> <tr> <td>Number of direct reads</td> <td>22.64</td> <td>25.65</td> </tr> <tr> <td>Direct read bytes</td> <td>49.70</td> <td>21.01</td> </tr> </tbody> </table> - High Coefficient of Variation Case study (contd.): only focus on editing sessions <table> <thead> <tr> <th>Data</th> <th>Average</th> <th>Coef. of Variation</th> </tr> </thead> <tbody> <tr> <td>CPU time (VAX-11/780)</td> <td>2.57 seconds</td> <td>3.54</td> </tr> <tr> <td>Elapsed time</td> <td>265.45 seconds</td> <td>2.34</td> </tr> <tr> <td>Number of direct writes</td> <td>19.74</td> <td>4.33</td> </tr> <tr> <td>Direct write bytes</td> <td>13.46 Kilo-bytes</td> <td>3.87</td> </tr> <tr> <td>Size of direct writes</td> <td>0.68 kilo-bytes</td> <td></td> </tr> <tr> <td>Number of direct reads</td> <td>37.77</td> <td>3.73</td> </tr> <tr> <td>Direct read bytes</td> <td>36.93 Kilo-bytes</td> <td>3.16</td> </tr> </tbody> </table> - Much more reasonable C.O.V. Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Single-parameter histograms - A histogram shows the relative frequency of various values of a parameter - For continuous-value parameters, need to divide parameter range into subranges called *buckets/cells* - E.g. Single-parameter histograms (contd.) 
- Could also use tabular/vector representation - In analytical modeling, histograms can be used to fit a probability distribution, or to verify that the distribution used in the model is similar to what is observed in the histogram - (-) # of numerical values: \( n \) buckets \(* m \) parameters \(* k \) components - May be too much details to be useful - => should be used only if variance is high & averages cannot be used - (-) Key problem: Ignores correlation among parameters - => multi-parameter histogram Multi-parameter histograms - Difficult to plot joint histograms for more than two parameters Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Principal component analysis (PCA) - **Key Idea**: Use a weighted sum of parameters *to classify workload components* - Need to identify what contributes variance - For j-th component, Let $x_{ij}$ denote the i-th parameter for j-th component: $y_j = \sum_{i=1}^{n} w_i x_{ij}$ - PCA assigns weights $w_i$'s such that $y_j$'s provide the maximum discrimination among the components - The quantity $y_j$ is called the principal factor (more precisely, "principal component" in statistics) - The factors are ordered s.t., the first factor explains the highest percentage of the variance, the second factor explains a lower percentage ... PCA (contd.) 
- Statistically: given $X$, - The $y$'s are linear combinations of $x$'s: \[ y_i = \sum_{j=1}^{n} a_{ij} x_j \] Here, $a_{ij}$ is called the *loading* of variable $x_j$ on factor $y_i$ - The $y$'s form an orthogonal set (i.e., their pairwise inner product is zero); This is “equivalent” to stating that $y_i$'s are “uncorrelated” to each other (note: not in the precise sense of “uncorrelation”) - Two r.v., $X$ and $Y$, $X$ and $Y$ are said to be *orthogonal* if $E[XY] = 0$, and *uncorrelated* if $E[XY] = E[X] \times E[Y]$ - The $y$'s form an ordered set such that $y_1$ explains the highest percentage of the variance in resource demands Find principal factors - Find the correlation matrix of normalized variables - Find the eigenvalues of the matrix and sort them in the order of decreasing magnitude - Find corresponding eigenvectors - These give the required loadings Example of PCA: packets tx & rx for all the workstations of a network <table> <thead> <tr> <th>Obs. No.</th> <th>Variables</th> <th>Normalized Variables</th> <th>Principal Factors</th> </tr> </thead> <tbody> <tr> <td></td> <td>$x_s$</td> <td>$x_r$</td> <td>$x'_s$</td> </tr> <tr> <td>1</td> <td>7718</td> <td>7258</td> <td>1.359</td> </tr> <tr> <td>2</td> <td>6958</td> <td>7232</td> <td>0.922</td> </tr> <tr> <td>3</td> <td>8551</td> <td>7062</td> <td>1.837</td> </tr> <tr> <td>4</td> <td>6924</td> <td>6526</td> <td>0.903</td> </tr> <tr> <td>5</td> <td>6298</td> <td>5251</td> <td>0.543</td> </tr> <tr> <td>6</td> <td>6120</td> <td>5158</td> <td>0.441</td> </tr> <tr> <td>7</td> <td>6184</td> <td>5051</td> <td>0.478</td> </tr> <tr> <td>8</td> <td>6527</td> <td>4850</td> <td>0.675</td> </tr> <tr> <td>9</td> <td>5081</td> <td>4825</td> <td>-0.156</td> </tr> <tr> <td>10</td> <td>4216</td> <td>4762</td> <td>-0.652</td> </tr> <tr> <td>17</td> <td>3644</td> <td>3120</td> <td>-0.981</td> </tr> <tr> <td>18</td> <td>2020</td> <td>2946</td> <td>-1.914</td> </tr> <tr> <td>$\sum x$</td> <td>96336</td> <td>88009</td> 
<td>0.000</td> </tr> <tr> <td>$\sum x^2$</td> <td>567119488</td> <td>462661024</td> <td>17.000</td> </tr> <tr> <td>Mean</td> <td>5352.0</td> <td>4889.4</td> <td>0.000</td> </tr> <tr> <td>Std. Dev.</td> <td>1741.0</td> <td>1379.5</td> <td>1.000</td> </tr> </tbody> </table> PCA example (contd.) - Compute the mean and standard deviations of the variables: \[ \bar{x}_s = \frac{1}{n} \sum_{i=1}^{n} x_{si} = \frac{96336}{18} = 5352.0 \] \[ \bar{x}_r = \frac{1}{n} \sum_{i=1}^{n} x_{ri} = \frac{88009}{18} = 4889.4 \] \[ s^2_{x_s} = \frac{1}{n-1} \sum_{i=1}^{n} (x_{si} - \bar{x}_s)^2 \] \[ = \frac{1}{n-1} \left[ \left( \sum_{i=1}^{n} x_{si}^2 \right) - n \times \bar{x}_s^2 \right] \] \[ = \frac{567119488 - 18 \times 5352^2}{17} = 1741.0^2 \] \[ s^2_{x_r} = \frac{462661024 - 18 \times 4889.4^2}{17} = 1379.5^2 \] PCA example (contd.) - Normalize the variables to zero mean and unit standard deviation. The normalized values $x_s'$ and $x_r'$ are given by $$x_s' = \frac{x_s - \bar{x}_s}{s_{x_s}} = \frac{x_s - 5352}{1741}$$ $$x_r' = \frac{x_r - \bar{x}_r}{s_{x_r}} = \frac{x_r - 4889}{1380}$$ PCA example (contd.) - Compute the correlation among the variables: \[ R_{x_s,x_r} = \frac{1}{n-1} \sum_{i=1}^{n} \frac{(x_{si} - \bar{x}_s)(x_{ri} - \bar{x}_r)}{s_{x_s}s_{x_r}} = 0.916 \] - Prepare the correlation matrix: \[ C = \begin{bmatrix} 1.000 & 0.916 \\ 0.916 & 1.000 \end{bmatrix} \] PCA example (contd.) - Compute the eigenvalues of the correlation matrix: By solving the *characteristic equation*: \[ |\lambda I - C| = \begin{vmatrix} \lambda - 1 & -0.916 \\ -0.916 & \lambda - 1 \\ \end{vmatrix} = 0 \] \[ (\lambda - 1)^2 - 0.916^2 = 0 \] - The eigenvalues are 1.916 and 0.084. PCA example (contd.) - Compute the eigenvectors of the correlation matrix.
The eigenvectors $q_1$ corresponding to $\lambda_1 = 1.916$ are defined by the following relationship: \[ \{ C \} \{ q \}_1 = \lambda_1 \{ q \}_1 \] or: \[ \begin{bmatrix} 1.000 & 0.916 \\ 0.916 & 1.000 \end{bmatrix} \times \begin{bmatrix} q_{11} \\ q_{21} \end{bmatrix} = 1.916 \begin{bmatrix} q_{11} \\ q_{21} \end{bmatrix} \] or: $q_{11} = q_{21}$ PCA example (contd.) - Restricting the length of the eigenvectors to one: \[ q_1 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \quad q_2 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} \] - Note: if the eigenvector for an eigenvalue is not unique, any of them can be used in PCA (without much impact on the results of PCA) - Obtain principal factors by multiplying the eigenvectors by the normalized vectors: \( Y = [q_1; q_2]' \times X^* \) \[ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} x_s - 5352 \\ 1741 \\ x_r - 4889 \\ 1380 \end{bmatrix} \] PCA example (contd.) - Compute the values of the principal factors - Compute the sum and sum of squares of the principal factors - The sum must be zero - The sum of squares give the percentage of variation explained - The first factor explains $32.565/(32.565+1.435)$ or 95.7% of the variation - The second factor explains only 4.3% of the variation and can, thus, be ignored PCA example (contd.) - Plot the values of the principal factors Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Markov models - Why? 
- In addition to the *number* of service requests of each type, we may well want to know the *order* of service requests - Markov model: the next request depends only on the last request - Described by a transition matrix: <table> <thead> <tr> <th>From/To</th> <th>CPU</th> <th>Disk</th> <th>Terminal</th> </tr> </thead> <tbody> <tr> <td>CPU</td> <td>0.6</td> <td>0.3</td> <td>0.1</td> </tr> <tr> <td>Disk</td> <td>0.9</td> <td>0</td> <td>0.1</td> </tr> <tr> <td>Terminal</td> <td>1</td> <td>0</td> <td>0</td> </tr> </tbody> </table> Transition matrix/probability - Transition matrices can also be used - for application transitions: e.g., $P(\text{Link} | \text{Compile})$ - to specify page-reference locality: $P(\text{Reference module } i | \text{Referenced module } j)$ - Given the same relative frequency of requests of different types, it is possible to realize the frequency with several different transition matrices - => If order is important, measure the transition probabilities directly on the real system - E.g., (next slide) Example: Two packet sizes: Small (80%), Large (20%) - Case #1: An average of four small packets are followed by an average of one large packet, e.g., sssslsssslssss. <table> <thead> <tr> <th>Current Packet</th> <th>Next packet: Small</th> <th>Next packet: Large</th> </tr> </thead> <tbody> <tr> <td>Small</td> <td>0.75</td> <td>0.25</td> </tr> <tr> <td>Large</td> <td>1</td> <td>0</td> </tr> </tbody> </table> Example (contd.)
- Case #2: Eight small packets followed by two large packets <table> <thead> <tr> <th>Current Packet</th> <th>Next packet: Small</th> <th>Next packet: Large</th> </tr> </thead> <tbody> <tr> <td>Small</td> <td>0.875</td> <td>0.125</td> </tr> <tr> <td>Large</td> <td>0.5</td> <td>0.5</td> </tr> </tbody> </table> - Case #3: Generate a random number $x$ uniformly distributed between 0 and 1 - If $x < 0.8$: generate a small packet; Otherwise, generate a large packet <table> <thead> <tr> <th>Current Packet</th> <th>Next packet: Small</th> <th>Next packet: Large</th> </tr> </thead> <tbody> <tr> <td>Small</td> <td>0.8</td> <td>0.2</td> </tr> <tr> <td>Large</td> <td>0.8</td> <td>0.2</td> </tr> </tbody> </table> Workload characterization techniques - Averaging - Specifying dispersion - Histograms: single-parameter, multi-parameter - Principal Component Analysis - Markov Models - Clustering Clustering - To classify the (potentially) large number of components into a small number of \textit{classes} or \textit{clusters} s.t. - The components within a cluster are very similar to each other - One member from each class can be selected to represent the class and to study the effect of system design decisions on the entire class Clustering steps - Take a sample, that is, a subset of workload components - Select workload parameters - *Transform parameters, if necessary* - *Remove outliers* - *Scale all observations* - *Select a distance measure* - *Perform clustering* - *Interpret results* - Change parameters, or number of clusters, and repeat italicized steps - Select representative components from each cluster Sampling - E.g., In one study, 2% of the population was chosen for analysis; later 99% of the population could be assigned to the clusters obtained. Methods - Random selection - Scenario-driven: e.g., - Select top consumers of a resource, when wanting to study the impact of the resource.
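The italicized clustering steps above can be sketched as a minimal pipeline, assuming z-score scaling, Euclidean distance, and a naive k-means with fixed initial centroids (all illustrative choices, not prescribed by the slides):

```python
import math

def zscore(column):
    """Scale one parameter column to zero mean and unit variance."""
    n = len(column)
    mean = sum(column) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in column) / (n - 1))
    return [(x - mean) / s for x in column]

def euclidean(p, q):
    """Euclidean distance between two parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, centroids, iterations=10):
    """Naive nonhierarchical clustering: assign each component to the
    nearest centroid, then recompute centroids as cluster means."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda c: euclidean(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [[sum(v) / len(cl) for v in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return clusters, centroids
```

For example, components measured on two parameters would first have each parameter column scaled with zscore, then be partitioned with kmeans; one representative component can then be drawn from each resulting cluster.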
Parameter selection - **Criteria:** - Impact on performance - Variance of parameters across clusters/components - **Two candidate methods** - Redo clustering with one less parameter, and count the number of components that change cluster membership (note: domain knowledge helps in this case) - If changes are small, the parameter can be removed - Principal component analysis: identify factors (and hence parameters, via weights) with the highest variance Transformation - If the distribution is highly skewed, consider a function of the parameter - E.g., log of CPU time - Two programs taking 1 and 2 seconds are almost as different as those taking 1 and 2 milliseconds => the ratio of CPU times, rather than their difference, is what matters Outliers - Outliers = *data points (of components)* with extreme parameter values - They affect max/min/mean/variance values, thus affecting normalization (discussed next) - Outlying components can be excluded only if they do not consume a significant portion of system resources - For example, a disk backup program can be excluded from a workload characterizing sites where backups are done a few times per month, but may not be excluded where backups are done a few times per day Data scaling - Objective: to scale parameter values so that their relative values and ranges are approximately equal - Method #1: Normalize to Zero Mean and Unit Variance: \( \{x_{ik}\} \) for the k-th parameter \[ x'_{ik} = \frac{x_{ik} - \bar{x}_k}{s_k} \] - Method #2: Weights: \[ x'_{ik} = w_k \cdot x_{ik} \] where \( w_k \) is proportional to relative importance, or \[ w_k = 1/s_k \] Data scaling (contd.)
- Method #3: Range Normalization: \[ x'_{ik} = \frac{x_{ik} - x_{\text{min},k}}{x_{\text{max},k} - x_{\text{min},k}} \] (+) no need to do square/square-root calculation (-) easily affected by outliers - Method #4: Percentile Normalization: \[ x'_{ik} = \frac{x_{ik} - x_{2.5,k}}{x_{97.5,k} - x_{2.5,k}} \] Distance metric - Euclidean Distance: Given \( \{x_{i1}, x_{i2}, \ldots, x_{in}\} \) and \( \{x_{j1}, x_{j2}, \ldots, x_{jn}\} \) \[ d = \left\{ \sum_{k=1}^{n} (x_{ik} - x_{jk})^2 \right\}^{0.5} \] - Most commonly used distance metric - Weighted-Euclidean Distance: \[ d = \left\{ \sum_{k=1}^{n} a_k (x_{ik} - x_{jk})^2 \right\}^{0.5} \] Here \( a_k, k=1,\ldots,n \) are suitably chosen weights for the \( n \) parameters - Used if the parameters - have not been scaled, or - have significantly different levels of importance Distance metric (contd.) - **Chi-Square Distance:** \[ d = \sum_{k=1}^{n} \left\{ \frac{(x_{ik} - x_{jk})^2}{x_{ik}} \right\} \] - Usually used in distribution fitting (e.g., distance between two multinomial distributions) - Used only if \(x_{.k}\)'s are close to each other (e.g., have been normalized); otherwise, parameters with low values of \(x_{.k}\) get higher weights Clustering techniques - Goal: Partition into groups so that the members of a group are as similar as possible, and different groups are as dissimilar as possible. - Statistically, the intragroup variance should be as small as possible, and inter-group variance should be as large as possible. - Total Variance = Intra-group Variance “+” Inter-group Variance - Only need to worry about either inter- or intra-group variance. Clustering tech. (contd.) - **Nonhierarchical techniques**: Start with an arbitrary set of $k$ clusters, move members until the *intra-group* variance is minimum. - **Hierarchical Techniques**: - Agglomerative: Start with $m$ clusters and merge. - Divisive: Start with one cluster and divide.
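The scaling and distance formulas above can be combined in a few lines. This is a Python sketch with invented two-parameter sample data; it shows why Method #1 matters: without scaling, the disk I/O axis (hundreds) would dominate the Euclidean distance over the CPU axis (single digits).

```python
import math

def zscore(col):
    """Method #1: normalize one parameter column to zero mean, unit variance."""
    mean = sum(col) / len(col)
    sd = math.sqrt(sum((x - mean) ** 2 for x in col) / (len(col) - 1))
    return [(x - mean) / sd for x in col]

def euclidean(xi, xj):
    """Euclidean distance between two components given as parameter tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

# Hypothetical workload: CPU seconds and disk I/O counts per program.
cpu = [2, 3, 1, 4, 5]
io = [400, 500, 100, 300, 200]

# After scaling, both parameters contribute comparably to the distance.
components = list(zip(zscore(cpu), zscore(io)))
d_ab = euclidean(components[0], components[1])  # distance: program 0 vs 1
```

A weighted-Euclidean variant would simply multiply each squared difference by a chosen weight \(a_k\) before summing.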
- Two popular techniques: - Minimum spanning tree method (agglomerative) - Centroid method (divisive) Minimum spanning tree (MST) method - Start with $k = n$ clusters - Find the centroid of the $i$-th cluster, $i = 1, 2, \ldots, k$ - Compute the inter-cluster distance matrix (based on centroids) - Merge the nearest clusters - Repeat italicized steps until all components are part of one cluster Example of MST method <table> <thead> <tr> <th>Program</th> <th>CPU Time</th> <th>Disk I/O</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>2</td> <td>4</td> </tr> <tr> <td>B</td> <td>3</td> <td>5</td> </tr> <tr> <td>C</td> <td>1</td> <td>6</td> </tr> <tr> <td>D</td> <td>4</td> <td>3</td> </tr> <tr> <td>E</td> <td>5</td> <td>2</td> </tr> </tbody> </table> - Step 1: Consider five clusters with the i-th cluster consisting solely of the i-th program - Step 2: The centroids are \{2, 4\}, \{3, 5\}, \{1, 6\}, \{4, 3\}, and \{5, 2\} Example (contd.) - Step 3: The Euclidean distance matrix is: <table> <thead> <tr> <th>Program</th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>0</td> <td>$\sqrt{2}$</td> <td>$\sqrt{5}$</td> <td>$\sqrt{5}$</td> <td>$\sqrt{13}$</td> </tr> <tr> <td>B</td> <td></td> <td>0</td> <td>$\sqrt{5}$</td> <td>$\sqrt{5}$</td> <td>$\sqrt{13}$</td> </tr> <tr> <td>C</td> <td></td> <td></td> <td>0</td> <td>$\sqrt{18}$</td> <td>$\sqrt{32}$</td> </tr> <tr> <td>D</td> <td></td> <td></td> <td></td> <td>0</td> <td>$\sqrt{2}$</td> </tr> <tr> <td>E</td> <td></td> <td></td> <td></td> <td></td> <td>0</td> </tr> </tbody> </table> - Step 4: Minimum inter-cluster distance = $\sqrt{2}$. Merge A+B, D+E Example (contd.) - Step 2: The centroid of cluster pair AB is \(\{(2+3) \div 2, (4+5) \div 2\}\), that is, \(\{2.5, 4.5\}\).
Similarly, the centroid of pair DE is \(\{4.5, 2.5\}\) - Step 3: The distance matrix is: <table> <thead> <tr> <th>Program</th> <th>AB</th> <th>C</th> <th>DE</th> </tr> </thead> <tbody> <tr> <td>AB</td> <td>0</td> <td>\(\sqrt{4.5}\)</td> <td>\(\sqrt{8}\)</td> </tr> <tr> <td>C</td> <td></td> <td>0</td> <td>\(\sqrt{24.5}\)</td> </tr> <tr> <td>DE</td> <td></td> <td></td> <td>0</td> </tr> </tbody> </table> - Step 4: Minimum distance is \(\sqrt{4.5}\). Merge AB and C. - Step 2: The centroid of cluster ABC is \(\{(2+3+1) \div 3, (4+5+6) \div 3\}\), that is, \(\{2, 5\}\) Example (contd.) - Step 3: The distance matrix is: <table> <thead> <tr> <th>Program</th> <th>ABC</th> <th>DE</th> </tr> </thead> <tbody> <tr> <td>ABC</td> <td>0</td> <td>\(\sqrt{12.5}\)</td> </tr> <tr> <td>DE</td> <td></td> <td>0</td> </tr> </tbody> </table> - Step 4: Minimum distance is \((12.5)^{0.5}\). Merge ABC and DE => Single Cluster ABCDE Dendrogram - Dendrogram = Spanning Tree - Purpose: Obtain clusters for any given maximum allowable intra-cluster distance Centroid method - Start with \( k = 1 \) - Find the centroid and intra-cluster variance for the \( i \)-th cluster, \( i = 1, 2, \ldots, k \). - Find the cluster with the highest variance and arbitrarily divide it into two clusters - Find the two components that are farthest apart, assign other components according to their distance from these points; OR - Place all components below a hyperplane crossing the centroid in one cluster and all components above the hyperplane in the other - Adjust the points in the two new clusters until the inter-cluster distance between the two clusters is maximum - Set \( k = k+1 \).
Repeat italicized steps until \( k = m \) Cluster interpretation - Assign all measured components to the clusters - Interpret clusters in functional terms (e.g., a business application), or label clusters by their resource demands (for example, CPU-bound, I/O-bound, and so forth) - Clusters with very small populations and small total resource demands can be discarded - (Don't discard a cluster just because its population is small; its total resource demand must be small too) - Select one or more representative components from each cluster for use as the test workload - # of representatives can be proportional to the cluster size, the total resource demands of the cluster, or any combination of the two Problems with clustering - The goal of minimizing intracluster variance (or maximizing intercluster variance) may lead to final clusters that are quite different from those visible to the eye. (How to set the right clustering objective?) Problems with clustering (contd.) - The results of clustering are highly variable. No general rules for: - Selection of parameters - Scaling - Distance measure - Labeling each cluster by functionality may be difficult - E.g., in one study, editing programs appeared in 23 different clusters - May well require many repetitions of the analysis Summary - **Workload Characterization** = Models of workloads - **Methods** - Averaging, specifying dispersion - Deal with high variance - Single-parameter histograms, multi-parameter histograms - Classification of components based on “principal component analysis” (i.e., finding parameter combinations that explain the most variation) - Markov model - Clustering - Divide workloads into groups where each group can be represented by a single benchmark Homework #1 1. [50 points] The CPU time and disk I/Os of seven programs are shown in the Table below. Determine the equation for the principal factors.
<table> <thead> <tr> <th>Program Name</th> <th>Function</th> <th>CPU Time</th> <th>Number of I/Os</th> </tr> </thead> <tbody> <tr> <td>TKB</td> <td>Linker</td> <td>14</td> <td>2735</td> </tr> <tr> <td>MAC</td> <td>Assembler</td> <td>13</td> <td>253</td> </tr> <tr> <td>COBOL</td> <td>Compiler</td> <td>8</td> <td>27</td> </tr> <tr> <td>BASIC</td> <td>Compiler</td> <td>6</td> <td>27</td> </tr> <tr> <td>PASCAL</td> <td>Compiler</td> <td>6</td> <td>12</td> </tr> <tr> <td>EDT</td> <td>Text Editor</td> <td>4</td> <td>91</td> </tr> <tr> <td>SOS</td> <td>Text Editor</td> <td>1</td> <td>33</td> </tr> </tbody> </table> 2. [50 points] Using a spanning-tree algorithm for cluster analysis, prepare a Dendrogram for the data shown in the Table below. Interpret your analysis results. (note: no unique solution.) <table> <thead> <tr> <th>Program Name</th> <th>Function</th> <th>CPU Time</th> <th>Number of I/Os</th> </tr> </thead> <tbody> <tr> <td>TKB</td> <td>Linker</td> <td>14</td> <td>2735</td> </tr> <tr> <td>MAC</td> <td>Assembler</td> <td>13</td> <td>253</td> </tr> <tr> <td>COBOL</td> <td>Compiler</td> <td>8</td> <td>27</td> </tr> <tr> <td>BASIC</td> <td>Compiler</td> <td>6</td> <td>27</td> </tr> <tr> <td>PASCAL</td> <td>Compiler</td> <td>6</td> <td>12</td> </tr> <tr> <td>EDT</td> <td>Text Editor</td> <td>4</td> <td>91</td> </tr> <tr> <td>SOS</td> <td>Text Editor</td> <td>1</td> <td>33</td> </tr> </tbody> </table> Suggestions on solving homework - Use tools such as MATLAB whenever appropriate
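For homework problem 2 (and the worked five-program MST example earlier), the centroid-merging loop is small enough to write directly. This Python sketch (function and variable names are my own) reproduces the merge order of the worked example: A+B and D+E at distance \(\sqrt{2}\), then C joins AB at \(\sqrt{4.5}\), and finally everything merges at \(\sqrt{12.5}\).

```python
import math

def centroid(cluster, points):
    """Mean of the (CPU, I/O) coordinates of the cluster's members."""
    xs = [points[m][0] for m in cluster]
    ys = [points[m][1] for m in cluster]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def agglomerate(points):
    """Agglomerative clustering: repeatedly merge the pair of clusters
    with the nearest centroids; return the merge history (a dendrogram)."""
    clusters = [frozenset([name]) for name in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = math.dist(centroid(clusters[i], points),
                              centroid(clusters[j], points))
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = clusters[i] | clusters[j]
        merges.append((d, merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

# (CPU time, disk I/O) for the five programs of the worked example
points = {"A": (2, 4), "B": (3, 5), "C": (1, 6), "D": (4, 3), "E": (5, 2)}
merges = agglomerate(points)
```

Cutting the returned merge list at a maximum allowable distance yields the clusters for that threshold, exactly as reading the dendrogram does.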
Enterprise Architecture Management for the Internet of Things Alfred Zimmermann¹, Rainer Schmidt², Kurt Sandkuhl³, Dierk Jugel¹,³, Michael Möhring⁴ and Matthias Wißotzki³ Abstract: The Internet of Things (IoT) fundamentally influences today’s digital strategies with disruptive business operating models and fast-changing markets. New business information systems are integrating emerging Internet of Things infrastructures and components. With the huge diversity of Internet of Things technologies and products, organizations have to leverage and extend previous enterprise architecture efforts to enable business value by integrating the Internet of Things into their evolving Enterprise Architecture Management environments. Both architecture engineering and management of current enterprise architectures are complex and have to integrate, besides the Internet of Things, disciplines synergistic with EAM – Enterprise Architecture and Management: services & cloud computing, semantic-based decision support through ontologies and knowledge-based systems, big data management, as well as mobility and collaboration networks. To provide adequate decision support for complex business/IT environments, it is necessary to identify the changes affecting Internet of Things environments and their related, fast-adapting architecture. We have to make the impact of these changes transparent across the integral landscape of affected EAM capabilities: directly and transitively impacted IoT objects, business categories, processes, applications, services, platforms and infrastructures. The paper describes a new metamodel-based approach for integrating partial Internet of Things objects, which are semi-automatically federated into a holistic Enterprise Architecture Management environment.
Keywords: Internet of Things, Enterprise Reference Architecture, Architecture Integration Method, Architecture Metamodel and Ontology 1 Introduction One of the most challenging topics in the current discussion about the digital transformation of our society is the Internet of Things (IoT) [Wa14] and [Pa15]. The Internet of Things enables a large number of physical devices to connect to each other and to perform wireless data communication and interaction using the Internet as a global communication environment. Information and data are central components of our everyday activities. Social networks, smart portable devices, and intelligent cars represent a few instances of a pervasive, information-driven vision of current enterprise systems with IoT and service-oriented enterprise architectures. Social graph analysis and management, big data and cloud data management, ontological modeling, smart devices, personal information systems, and hard non-functional requirements, such as location-independent response times and privacy, are challenging aspects of such software architectures [BCK13]. Service-oriented systems close the business–IT gap by delivering appropriate business functionality efficiently and by integrating new service types coming from the Internet of Things [Gu13], [Sp09] and from cloud services environments [Be11], [OG11] and [CSA09]. As the architecture of Internet of Things systems becomes more complex, and as we move rapidly into cloud computing settings, we need a new and improved set of methodologically well-supported instruments for Enterprise Architecture Management, associated with tools for managing, decision support, diagnostics and optimization of impacted business models and information systems.
The current state-of-the-art research on Internet of Things architecture [Pa15] lacks an integral understanding of Enterprise Architecture and Management [Ia15], [Jo14], [To11] and [Ar12], and shows an abundant set of physical-level standards, methods and tools, and a fast-growing magnitude of heterogeneous IoT devices. The aim of our research is to close this gap and enhance analytical instruments for cyclic evaluations of business and system architectures of integrated Internet of Things environments. Novel technologies demand an increased permeability between the “inside” and “outside” of the borders of the classic enterprise system with traditional Enterprise Architecture Management. In this paper we concentrate on the following concrete research questions: RQ1: What are new architectural elements and constraints for the Internet of Things? RQ2: What is the blueprint for an extended Enterprise Reference Architecture, which is able to host even new and small types of architectural descriptions for the Internet of Things? RQ3: What does a mapping scheme for the Internet of Things Reference Architecture onto a holistic and adaptable Enterprise Reference Architecture look like? In this paper we make the following contributions: we extend our service-oriented enterprise architecture reference model in the context of a new tailored architecture metamodel integration approach and an ontology as fundamental solution elements for an integral Enterprise Architecture of the Internet of Things. We revisit and evolve the first version of ESARC – Enterprise Services Architecture Reference Cube [Zi11], [Zi13b]. Our research aims to investigate a metamodel-based model extraction and integration approach for enterprise architecture viewpoints, models, standards, frameworks and tools for the highly fragmented Internet of Things.
The integration of many dynamically growing, distributed Internet of Things objects into an effective and consistent Enterprise Architecture Management is challenging. Currently we are working on the idea of integrating small EA descriptions for each relevant IoT object. These EA-IoT-Mini-Descriptions consist of partial EA IoT data and partial EA IoT models and metamodels. Our goal is to be able to support an integral architecture development, assessments, architecture diagnostics, monitoring with decision support, and optimization of the business, information systems, and technologies. We report on our work-in-progress research to provide a unified ontology-based methodology for adaptable digital enterprise architecture models from relevant information resources, especially for the Internet of Things. Section 2 describes our research platform with fundamental concepts of the Internet of Things. Section 3 presents our holistic reference architecture for Enterprise Architecture Management. Section 4 revisits and extends previous research about architectural integration and federation methods. Finally, Section 5 concludes with our main results and provides an outlook on our work in progress. 2 The Internet of Things The Internet of Things (IoT) fundamentally revolutionizes today’s digital strategies with disruptive business operating models [WR04] and holistic governance models for business and IT [Ro06], in the context of current fast-changing markets [Wa14]. With the huge diversity of Internet of Things technologies and products, organizations have to leverage and extend previous enterprise architecture efforts to enable business value by integrating the Internet of Things into their classic business and computational environments.
Reasons for strategic changes driven by the Internet of Things [Wa14] are: - Information of everything – enables information about what customers really demand, - Shift from the thing to the composition – the power of the IoT results from the unique composition of things in an always-on, always-connected environment, - Convergence – integrates people, things, places, and information, - Next-level business – the Internet of Things is changing existing business capabilities by providing a way to interact, measure, operate, and analyze business. The Internet of Things is the result of a convergence of visions [At10]: a Things-oriented vision, an Internet-oriented vision, and a Semantic-oriented vision. The Internet of Things supports many connected physical devices over the Internet as a global communication platform. A cloud-centric vision for architectural thinking about a ubiquitous sensing environment is provided by [Gu13]. The typical configuration of the Internet of Things includes, besides many communicating devices, a cloud-based server architecture, which is required to interact and perform remote data management and calculations. Sensors, actuators, and devices, as well as humans and software agents, interact and communicate data to implement specific tasks or more sophisticated business or technical processes. The Internet of Things maps and integrates real-world objects into the virtual world, and extends the interaction with mobility systems, collaboration support systems, and systems and services for big data and cloud environments. Furthermore, the Internet of Things is a very important enabling factor for the potential use of Industry 4.0 [Sc15b]. Therefore, smart products as well as their production are supported by the Internet of Things, which can help enterprises to create more customer-oriented products. A main question of current and further research is how the Internet of Things architecture fits into the context of a services-based enterprise-computing environment.
A service-oriented integration approach for the Internet of Things was elaborated in [Sp09]. The core idea is how millions of cooperating devices can be flexibly connected to form useful advanced collaborations within the business processes of an enterprise. The research in [Sp09] proposes the SOCRADES architecture for an effective integration of the Internet of Things into enterprise services. The architecture from [Sp09] abstracts from the heterogeneity of embedded systems, their hardware devices, software, data formats and communication protocols. A layered architecture structures the following bottom-up functionalities and prepares these layers for integration within an Internet of Things focused enterprise architecture: Devices Layer, Platform Abstraction Layer, Security Layer, Device Management Layer with Monitoring and Inventory Services and Service Lifecycle Management, Service Management Layer, and the Application Interface Layer. Today, the Internet of Things includes a multitude of technologies and specific application scenarios of ubiquitous computing [At10], like wireless and Bluetooth sensors, Internet-connected wearable systems, low-power embedded systems, RFID tracking, and smartphones, which are connected with real-world interaction devices, smart homes and cars, and other SmartLife scenarios. Integrating all aspects and requirements of the Internet of Things is difficult, because today no single architecture can support the dynamics of adding and extracting these capabilities. A first Reference Architecture (RA) for the Internet of Things is proposed by [WS15] and can be mapped to a set of open source products. This Reference Architecture covers aspects like: a cloud server-side architecture, monitoring and management of Internet of Things devices and services, a specific lightweight RESTful communication system, and agents and code on often small, low-power devices that probably have only intermittent connections.
The Internet of Things architecture has to support a set of generic as well as some specific requirements [WS15] and [Pa15]. Generic requirements result from the inherent connection of a magnitude of devices via the Internet, often having to cross firewalls and other obstacles. Since we have to consider so many devices, and a dynamically growing number of them, we need an architecture for scalability. Because these devices should be active in a 24x7 timeframe, we need a high-availability approach [Ga12], with deployment and auto-switching across cooperating datacenters in case of disasters and highly scalable processing demands. Additionally, an Internet of Things architecture has to support automatically managed updates and remotely managed devices. Often connected devices collect and analyze personal or security-relevant data. Therefore, it is mandatory to support identity management, access control and security management on different levels: from the connected devices through the holistically controlled environment. Specific architectural requirements [WS15] and [At10] result from key categories such as connectivity and communications, device management, data collection and analysis, computational scalability, and security. Connectivity and communications groups existing protocols like HTTP, which could be an issue on small devices due to limited memory sizes and power requirements. A simple, small and binary protocol can be combined with HTTP APIs, and has the ability to cross firewalls. Typical devices of the Internet of Things are currently not, or not well, managed by the device management functions of current Enterprise Architecture Management. Desirable requirements of device management [WS15] include the ability to locate or disconnect a stolen device, update the software on a device, update security credentials, or wipe security data from a stolen device. Internet of Things systems can collect data streams from many devices, store data, analyze data, and act.
These actions may happen in near real time, which leads to real-time data analytics approaches. Server infrastructures and platforms should be highly scalable to support elastic scaling up to millions of connected devices, while alternatively supporting smaller deployments as well. Security is a challenging aspect of the highly distributed environment of typically small Internet of Things devices. Sensors are able to collect personalized data and can bring these data to the Internet. A layered Reference Architecture for the Internet of Things is proposed in [WS15] (Fig. 1). Layers can be instantiated by suitable technologies for the Internet of Things. ![Fig. 1. Internet of Things Reference Architecture [WS15]](image) A current holistic approach for the development of Internet of Things environments is presented in [Pa15]. This research has a close link to our work about leveraging the integration of the Internet of Things into a framework of digital enterprise architectures. The main contribution from [Pa15] considers a role-specific development methodology and a development framework for the Internet of Things. The development framework contains a set of modeling languages: a vocabulary language to describe domain-specific features of an IoT application, an architecture language for describing application-specific functionality, and a deployment language for deployment features. Associated with this language set are suitable automation techniques for code generation and linking, to reduce the effort for developing and operating device-specific code. The metamodel for Internet of Things applications from [Pa15] defines elements of an Internet of Things architectural reference model, like IoT resources of type: sensor, actuator, storage, and user interface. Internet of Things resources and their associated physical devices are differentiated in the context of locations and regions. A device provides the capability to interact with users or with other devices.
The base functionality of Internet of Things resources is provided by software components, which are handled in a service-oriented way by using computational services.

### 3 Enterprise Reference Architecture

Our principal contribution is an extended approach to the systematic composition and integration of architectural metamodels, ontologies, views, and viewpoints within adaptable service-oriented enterprise architecture frameworks for services and cloud computing architectures, by means of different integrated service types and architecture capabilities. ESARC - the Enterprise Services Architecture Reference Cube [Zi11], [Zi13b], [Zi14] - is an integral service-oriented enterprise architecture categorization framework. It sets a classification scheme for the main enterprise architecture models and serves as a guiding instrument for concrete decisions from architectural engineering viewpoints. We are currently integrating metamodels for EAM and the Internet of Things. ESARC [Zi11], [Zi13b] (see Fig. 1) complements existing architectural standards and frameworks in the context of EAM - Enterprise Architecture Management [To11], [Be12], [La13], [Ar12] - and extends these architecture standards for services and cloud computing in a more specific, practical way. ESARC is an original architecture reference model which provides a holistic classification model with eight integral architectural domains. ESARC abstracts from concrete business scenarios or technologies, but is applicable to concrete architectural instantiations. Metamodels and their architectural data are the core part of the Enterprise Architecture. Enterprise architecture metamodels [Ar12], [Sa10] should support decision support [JE07] as well as strategic [SM13] and IT/business [La13] alignment.
Three quality perspectives are important for an adequate IT/business alignment and are differentiated as: (i) IT system qualities: performance, interoperability, availability, usability, accuracy, maintainability, and suitability; (ii) business qualities: flexibility, efficiency, effectiveness, integration and coordination, decision support, control and follow-up, and organizational culture; and (iii) governance qualities: plan and organize, acquire and implement, deliver and support, monitor and evaluate. Architecture governance, as in [WR04], sets the governance frame for well-aligned management practices within the enterprise by specifying management activities: plan, define, enable, measure, and control. The second aim of governance is to set rules for architectural compliance with internal and external standards. Architecture governance also has to set rules for the empowerment of people, define the structures and procedures of an Architecture Governance Board, and set rules for communication. The Business and Information Reference Architecture - BIRA [Zi11], [Zi13b] - provides a single, comprehensive repository of knowledge from which concrete corporate initiatives can evolve and to which they can link. The BIRA provides the basis for business-IT alignment and therefore models the business and information strategy, the organization, and the main business demands, as well as requirements for information systems, such as key business processes, business rules, business products, services, and related business control information. Cloud computing technologies and standards are developing very fast and provide a growing standardized base for cloud products and service offerings. Fig. 3 shows our integration scenario for an extended cloud computing architecture model from [Li11], [Be11], [CSA09], and [OG11b]. ![Fig. 3. Cloud Computing Integration [Li11], [Be11], [CSA09], [OG11b]](image)

### 4 Architecture Integration Method

Current work revisits and extends our basic enterprise architecture reference model from ESARC (Section 3) and [Zi13a] by federating Internet of Things architectural models (Section 2) from related scientific work as well as specifications from industrial partners. Our originally developed integration model ESAMI - Enterprise Services Architecture Metamodel Integration [Zi13b] - serves as a method for integrating base models from enterprise architecture standards like [To11], [Ar12], architectural frameworks [EH09], [EAP15], [DoD09], [MOD05], and [NAF07], and metamodels from practice and from tools. ESAMI is based on correlation analysis and follows a systematic integration process. A process of pairwise mappings is typically of quadratic complexity; we have linearized the complexity of these architectural mappings by introducing a neutral and dynamically extendable architectural reference model, which is supplied and extended by previous mapping iterations. The architectural model integration [Zi13a], [Zi13c] proceeds in the following steps:

1. analyze the concepts of each resource by using concept maps;
2. extract viewpoints for each resource: Viewpoint, Model, Element, Example;
3. initialize the architectural reference model from the base viewpoints;
4. analyze correlations between the base viewpoints and the architectural reference model;
5. determine integration options for the resulting viewpoint integration model;
6. develop the synthesis metamodel from the base metamodels;
7. consolidate the architectural reference model according to the synthesis metamodel, and readjust correlations and integration options;
8. develop the ontology of the architectural reference model;
9. develop correspondence rules between model elements;
10. develop patterns for architecture diagnostics and optimization.
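The complexity claim behind the reference-model approach can be made concrete: correlating every pair of n source models directly needs n(n-1)/2 mappings, while correlating each model only against the neutral reference model needs n. A small illustrative sketch:

```python
def pairwise_mappings(n):
    """Mappings needed when every model is correlated with every other
    model directly: n*(n-1)/2, i.e. quadratic growth."""
    return n * (n - 1) // 2

def reference_model_mappings(n):
    """Mappings needed when each model is correlated only with one neutral,
    dynamically extendable reference model: linear growth."""
    return n

# For 10 source models: 45 direct correlations vs. 10 against the reference.
```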
First we analyze and transform the given architecture resources with concept maps and extract their coarse-grained aspects in a standard way [Zi13a], [Zi13c] by delimiting architecture viewpoints, architecture models, and their elements, and by illustrating these models with a typical example. Architecture viewpoints represent and group conceptual business and technology functions regardless of their implementation resources like people, processes, information, systems, or technologies. They extend this information with additional aspects like quality criteria, service levels, KPIs, costs, risks, and compliance criteria. We use modeling concepts from ISO/IEC 42010 [EH09] like Architecture Description, Viewpoint, View, and Model. Architecture models are composed of their elements and relationships and are represented using architectural diagrams. The integration of a huge number of dynamically growing Internet of Things objects is a considerable challenge for the extension and dynamic evolution of EA models. Currently we are working on the idea of integrating small EA descriptions for each relevant IoT object. These EA-IoT-Mini-Descriptions consist of partial EA-IoT-Data, partial EA-IoT-Models, and partial EA-IoT-Metamodels associated with the main IoT objects, like IoT-Resource, IoT-Device, and IoT-Software-Component [Pa15], [WS14]. The main question of our research in progress is how we can federate these EA-IoT-Mini-Descriptions into a global EA model and information base by promoting a mixed automatic and collaborative decision process [Ju15]. For the automatic part we currently extend model federation and transformation approaches [Br10], [Tr15] by introducing semantically supported architectural representations, e.g. by using partial and federated ontologies [Kh11] with associated mapping rules as a universal enterprise-architectural knowledge representation, combined with special inference mechanisms.
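One possible shape of such an EA-IoT-Mini-Description and its federation into a global information base can be sketched as follows; the structure is our assumption based on the partial data, model, and metamodel parts named above, and a real federation would add ontology mappings and conflict resolution:

```python
from dataclasses import dataclass, field

@dataclass
class MiniDescription:
    """Partial EA description attached to one IoT object (hypothetical layout)."""
    iot_object: str                                    # e.g. "IoT-Device: gateway-1"
    data: dict = field(default_factory=dict)           # partial EA-IoT-Data
    model_elements: set = field(default_factory=set)   # partial EA-IoT-Model
    metamodel_types: set = field(default_factory=set)  # partial EA-IoT-Metamodel

def federate(descriptions):
    """Union many mini-descriptions into one global EA information base."""
    global_model = {"data": {}, "elements": set(), "types": set()}
    for d in descriptions:
        global_model["data"][d.iot_object] = d.data
        global_model["elements"] |= d.model_elements
        global_model["types"] |= d.metamodel_types
    return global_model
```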
### 5 Conclusion and Future Work

We have developed a metamodel-based model extraction and integration approach for enterprise architecture viewpoints, models, standards, frameworks, and tools for EAM towards an integrated Internet of Things. Our goal is to support a holistic Enterprise Architecture Management with architecture development, assessments, architecture diagnostics, monitoring with decision support, and optimization of the business, information systems, and technologies. We intend to provide a unified and consistent ontology-based EAM methodology for the architecture management models of relevant Internet of Things resources, especially integrating service-oriented and cloud computing systems for digitally transforming enterprises as well. Referring to our research questions, we looked at:

RQ1: What are new architectural elements and constraints for the Internet of Things? First we delimited the main architectural elements for the Internet of Things and adopted and included an IoT reference model from the state of the art. In that way we defined the conceptual architectural elements of the "outside" world, which have to be considered for integration with the "inside" model of an integrated Enterprise Architecture.

RQ2: What is the blueprint for an extended Enterprise Reference Architecture which is able to host even new and small types of architectural descriptions for the Internet of Things? With the Enterprise Reference Architecture we defined our holistic and adaptable framework of architectural domains, viewpoints, and views.

RQ3: What does a mapping scheme look like that maps the Internet of Things Reference Architecture into a holistic and adaptable Enterprise Reference Architecture? With the correlation-based integration method we defined a flexible methodology for the integration of architectural elements by using an extendable architecture reference model. Our approach introduces the new concept of EA-IoT-Mini-Descriptions.
Many of the typical high-level, model-based integration decisions cannot be made at run time on the deployment level. Given the high number of IoT elements in dynamically complex environments, such a strong EAM claim requires strong evidence of feasibility. Our research in progress on integrating Internet of Things architectures into Enterprise Architecture Management yields some interesting theoretical and practical implications. By considering the context of service-oriented enterprise architecture, we have set the foundation for integrating metamodels and related ontologies for orthogonal architecture domains within our Enterprise Architecture Management approach for the Internet of Things. Architectural decisions for Internet of Things objects, like IoT-Resource, Device, and Software Component, are closely linked with the code implementation. Therefore, researchers can use our approach for integrating and evaluating the Internet of Things in the field of enterprise architecture management. Our results can help practitioners to understand the integration of EAM and the Internet of Things and can support architectural decision making in this area. Limitations can be found, e.g., in the practical multi-level evaluation of our approach as well as in domain-specific adoptions. Future work will include conceptual work to federate EA-IoT-Mini-Descriptions into a global EA model and enterprise architecture repository by promoting a semi-automatic and collaborative [Sc15] decision process [JS14], [Sc14], [JSZ15]. We are currently extending our model federation and transformation approaches with elements from related work, like [Br10], [Tr15]. We are investigating semantically supported architectural representations as enterprise-architectural knowledge representations, combined with special inference and visualization mechanisms.

### References

Spiess, P. et al.: SOA-Based Integration of the Internet of Things in Enterprise Services. ICWS 2009, pp. 968-975
**Abstract**

Many of today's companies have already integrated workflow management systems (WFMS) into their IT infrastructure, which are mainly used for the core processes of the company. Furthermore, these predefined processes are designed and implemented by specialists, whilst the process knowledge of the involved employees remains mostly unused. Daily business life, especially in office environments, often additionally requires flexible, rather short-lived processes (ad hoc workflows) that can only be determined in advance to some degree. These two types of workflows have many interdependencies which are inadequately supported, or not supported at all, by currently available WFMS. For example, some ad hoc processes that are used more than once and therefore become well-tried through practical experience could turn into established and important processes for the company. These should then be transitioned into predefined workflows in a traditional WFMS. The GroupProcess project examines the broad possibilities of supporting ad hoc processes in companies and creates connections to existing systems like WFMS, knowledge and office management systems.

**1 Introduction/Overview**

Many of today's companies have integrated Workflow Management Systems (WFMS) into their IT infrastructure, which are mainly used for the core processes of the company. Furthermore, the predefined processes are usually designed and implemented exclusively by specialists, whilst the process knowledge of the involved employees remains mostly unused. On the other hand, daily business life, especially in office or administrative environments, often additionally requires flexible, short-lived and sometimes urgent processes (ad hoc workflows) that can only be determined in advance to some degree and that are not, or only inadequately, supported by currently available WFMS. Until now, this type of process has been viewed as not worthwhile to automate.
The GroupProcess project offers concepts and prototypes to address this situation. In section 2 we introduce the goals of the GroupProcess project on the application level. Given the nature of ad hoc processes, we are convinced that a very suitable, or even predestined, target platform for their support is a groupware-based environment. Therefore, a definition and delimitation of the terms in the field of computer supported cooperative work (CSCW) and groupware is given in section 3. In sections 4 and 5 we present the concepts and the architecture of the GroupProcess project, respectively. As a focus in this context, we describe our solution for supporting ad hoc processes in the proposed environment and the integration of ad hoc processes with a traditional workflow management system. Section 6 shortly discusses related research. Section 7 outlines concluding remarks, the current state of the project, and future opportunities.

**2 Objectives of the GroupProcess Project**

The basic objective of the GroupProcess project is to provide a system for the management of ad hoc workflows in office and administrative environments, in order to enhance the efficiency of ad hoc processes. This basic objective implies several sub-goals that result from the nature of ad hoc processes and the given environment. Because ad hoc processes are only partially predetermined, a system to support ad hoc workflows should be able to support the continuous design of a workflow during its execution. The knowledge about the whole process is often divided across several people. Thus, every involved person should be able to dynamically add their specific knowledge to the workflow by participating in the design. As a result of this participatory design, an entire workflow evolves from the expertise of the persons concerned. One of the greatest challenges of the project is to design an intuitive user interface that enables even non-specialists to design and execute ad hoc workflows.
As outlined, an enhanced system for ad hoc workflows can help to partially automate a process. Additionally, a model of the process can be created while executing it. Hence, for a possible next execution a workflow model will already be available. This reduces the time needed to execute the process and therefore increases efficiency. By supporting ad hoc workflows using such a specialized system, the formerly unused implicit knowledge of the participating people can be transformed into explicit knowledge and become available to the whole organization. Besides, this knowledge can be stored, firstly for reporting reasons and secondly to be retrieved if required, to graphically display the workflows that have already been executed. This can be considered part of a knowledge management strategy for the organization. Another goal of the GroupProcess project is the integration of the system with currently available WFMS. We do not want to reinvent or modify the features of conventional WFMS because we are convinced that they are very suitable for their intended environment. A substantial part of work in an office environment involves a combination of highly structured processes and tasks where the process is fuzzy and the rules, routes, and roles are dynamically defined as the work is being done. This is why workflow systems alone are not as successful as expected and are deemed "too rigid". Together with components for less structured processes that provide "soft" interaction, a system arises that covers a greater portion of the existing processes in a business environment. Communication systems like e-mail on the one hand and collaboration systems like WFMS on the other hand should grow together. Thus, there should be a linkage like the GroupProcess system to gain the synergetic advantages of both system types.
Some ad hoc processes that are used more than once and therefore become well-tried through practical experience could turn into established and important workflows for the company. These should then be transitioned into predefined workflows in a WFMS. By using the GroupProcess system, the stored protocols of ad hoc workflows can be used as a basis for designing the structured workflow, so the whole workflow does not need to be designed from scratch; only the modifications necessary to transform the ad hoc process into a structured one are required. This way, the design of workflows becomes a team-based process that involves workflow specialists as well as the users of the workflow system. This helps to increase the efficiency and acceptance of the implemented workflows within a company or organization. Besides, there are other emerging effects in combination with existing systems like workflow, knowledge, and office management systems, which belong to the objectives of the GroupProcess project as well. The ad hoc processes in an office environment, and therefore the GroupProcess project itself, can be seen as a connecting module to arbitrate between these systems. The aspect of reuse of ad hoc processes can be considered a linkage to knowledge management systems, while the integration of ad hoc workflow management and the management of structured, predefined processes can be viewed as a linkage to conventional WFMS. Additionally, the GroupProcess system is integrated into an existing office management environment.

**3 Delimitation of GroupProcess within the field of CSCW**

To locate the topic of GroupProcess within the field of CSCW, this section discusses the terms communication, collaboration, and coordination. Later on, these terms will be used as modules for further concepts. Three types of WFMS can be differentiated: messaging-based workflow systems, document-oriented WFMS, and production WFMS (compare with [5]).
As we focus on the office environments of companies and organizations, we concentrate on document-oriented WFMS. Peripherally, messaging-oriented WFMS are included in our approach as well. As a result of these considerations, we think that computer supported cooperative work (CSCW) or groupware technologies are the best basis for our concepts, theories, and implementations. Fig. 1 shows the different CSCW concepts of communication, collaboration, and coordination (compare with [2]). The delimitation of these terms is not homogeneous in the literature; an overview of the controversially discussed views can be found in [2]. Communication in this context means the submission and exchange of information between people. In the context of groupware this specifically means the store-and-forward or push model: information is transmitted ("pushed") from the sender to the recipient. However, communication technologies are not appropriate for all kinds of teamwork. In some of these cases collaboration is the better way to work together. Collaboration relies on a shared space. Activities such as problem solving, brainstorming, or discussing a topic are all forms of collaboration. Especially for many-to-many interaction, the communication concept reaches its limits. In addition, messaging systems are primarily concerned with tracking files as messages in relation to senders and recipients, which makes it difficult for users to track information by topic. Thus, providing a virtual common workspace with a group-centered interface that allows participants to share information and ideas is the logical next extension. In contrast to messaging systems, which use the push model, shared database technology can be described as a pull model. Other technologies, like video conferencing, are also means of collaboration. And there is yet a third category of activities that are supported by groupware: coordination. Many business activities are of a more structured nature.
Companies do not expect people to "collaborate" on processing an expense report; rather, the company defines specific policies about how an expense report has to be routed through the organization to be properly approved. Many people are involved, but the company's policies specify the coordination required between these people to meet a defined objective. The successful completion of a predefined business process depends upon the coordination of people in completing a set of structured tasks in a particular sequence. Coordination uses the concepts of communication and collaboration and adds control. To a great extent this is the domain of workflow automation systems. The GroupProcess project resides mostly in the field of coordination, but in some cases collaboration is used as a module in structured processes, as described in the next section.

**4 Conceptual approaches of the GroupProcess project**

**4.1 The GroupProcess Continuum**

As outlined above, we are convinced that workflow management must encompass and support a combination of both predefined process structures in the sense of process control (full automation), and open and flexible processes whose structures are determined or refined due to evolving circumstances during task processing (a combination of automation and autonomous workgroup coordination and collaboration). The first type relates to today's existing corporate information systems infrastructures of highly structured, large-volume transaction systems. The latter closely refers to the context of office systems based on more flexible paradigms like workgroup computing, CSCW, or groupware. Thus, we do not regard them as two distinct or even opposite concepts - the more or less rigid structures of workflow automation on the one hand, and the flexible team-driven concepts of workgroup computing on the other hand.
Rather, we synthesize both approaches using overlapping workflow (sub-)structures to form a basis for flexible and yet productive information systems design. We started on the basis of the workflow continuum by Nastansky and Hilpert presented in [1]. This has been further developed and extended to meet the requirements of the concepts we want to discuss in more detail in this paper. By presenting this scale we want to point out the categories that need to be differentiated from our point of view. We identify three different workflow categories, whose details and substructures are outlined below. The combination of these three categories provides a scalable degree of automation for workflow management. We utilize well-known concepts of information dissemination and messaging (as described in section 2), modifying and integrating them into a framework from which elements can be derived for maximum synergy, depending on the actual requirements. The basic patterns and some annotations describing the different types of workflows from a business process design point of view are summarized in fig. 2. The various annotations are intended to point out the continuous-scale property of organizational workflows and the overlaps in the underlying information and communication technologies.

**(1a) Ad hoc Workflows**

As outlined above, ad hoc workflows (Fig. 2, col. 1) usually deal with unique and rather short-lived processes. These processes with very low frequency vary largely in their degree of complexity. In general, single tasks of this type of workflow can only be partially predetermined in advance and are difficult to structure. Until now, processes of this type have been viewed as not worthwhile to automate. In many cases these workflows are spontaneous and also urgent or confidential. Both initiation and execution of ad hoc workflows usually involve different actors. Ad hoc workflows may be recurrent in parts, or they may reappear in a similar way again.
Typical sample ad hoc workflows can be found for general purposes in office communication environments, in the project management of individual tasks, or in customer requests that cannot be matched with any known standard service pattern within the organization.

**(1b) Open team task within an ad hoc workflow**

Collaboration in its specific meaning in this context has been defined above. Using this definition, a team task can be described as a task that has to be accomplished in a collaborative way, so the main notion is that this task has to be executed and completed by a team. A workflow of the type "open team task within an ad hoc workflow" is a workflow containing at least one step that has to be accomplished as a team task. Before the team job is started, or after finishing it, there may be other parts of a routing path (in this case ad hoc defined parts) that have to be accomplished. This may, for example, be an offer to a customer which has to be discussed with colleagues and afterwards follows a structured yet flexible path to completion, e.g. putting it into a standard form, printing it, and having it approved by a person who resides in a higher hierarchical position and is able to make the decision. The embedding of this open team task concept in the GroupProcess approach is shown in fig. 2, column 2.

**(1c) Ad hoc workflow with a sub-process or cluster**

This type of workflow may occur for two reasons. The first is reducing the complexity of workflows by building clusters as a tool for hierarchical decomposition. This way, complex tasks can be created as just one item and later on be defined more precisely. The second reason is that an ad hoc workflow may contain at least one task that belongs to the responsibility of another person, team, or department. In this case, the initiator may want to know how the other organizational entity achieves the objective of the task.
Another reason for the subordinate organizational entity to design the process could be that it helps to structure it. Both described extensions of regular ad hoc workflows may occur in one workflow, and more than one occurrence is possible as well.

**(2) Semi-structured Workflows**

Subsequently, we describe three major types of semi-structured workflows: the intersection of predetermined and open team-oriented tasks, the employment of an ad hoc workflow as a part within a business process framework, and the ad hoc modification of predetermined, generally well-structured workflows. Any of these semi-structured workflow types may be combined with each other.

**(2a) Open Team Task within Standard Workflows**

This type is similar to (1b). The difference is that a well-structured workflow exists and the team task is one part of the predefined workflow in this case. The team task is used analogously to (1b). An example could be a team meeting on a regular schedule which needs some preparation before and some assessment or evaluation afterwards. The tasks besides the team task could be well structured and therefore be designed a priori for this type of workflow.

**(2b) Ad hoc Sub-workflow within a Predefined Workflow**

This semi-structured workflow is characterized by integrating into predefined workflows tasks that are completely open, but for which the initiator expects that a structure will be established by the task's editor. In the context of GroupProcess, we call this sub-workflow a cluster. Again, it is rather similar to the same module in (1c). The main difference is that in this case we have an existing well-structured workflow in which one step is always different, but it might help that this sub-process is recorded, e.g. to reuse it as an idea for solving a similar problem, to let others learn how problems have been solved, or simply for reporting reasons.
(2c) Ad hoc Modification and Exception Handling of Predetermined Workflows

This is another type of predefined workflow, with modifications at run-time through ad hoc changes and dynamic re-routing of the workflow for special cases and exceptions (Fig. 2, col. 2c). In comparison to (2b) we have a generally well structured predefined workflow without uncertain steps. In some use cases, however, an exception from the specified way of executing the workflow may become necessary. Ad hoc reactions may be required by specific circumstances that come up during everyday work. A workflow system that does not provide the flexibility for the user to respond to this highly probable type of real life necessity forces him or her to leave the context of work within the workflow system, thus possibly causing fatal disruption. A workflow breakdown could be worked around by physically meeting or calling an appropriate person whose interaction is necessary to continue the job. Another way to handle the disruption could be to write a paper based memo or an e-mail describing the nature of the problem and asking for a solution. The disadvantages of the synchronous communication required by the first alternative are obvious. The second, memo based approach avoids them, but the effort for explaining the information context of the disrupted job may still be immense. Exception handling and ad hoc modifications can be distinguished in this type of workflow. Ad hoc modifications can take two forms: 1) Questions to someone else: an ad hoc workflow is started from the current node and re-enters the predefined workflow model at the same node after the exception flow has ended; afterwards the predefined workflow continues normally. 2) Detour: disregard the transition to the next node in the workflow model, build an ad hoc workflow as an alternate route, and re-enter the workflow model at the next step.
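As an illustration, the two modification modes can be sketched as operations on the remaining routing path, modelled as a plain task list whose first entry is the current (interrupted) node. This is an assumed minimal model, not the GroupProcess implementation:

```python
def question(remaining, adhoc):
    """'Question' mode: the ad hoc flow runs first, then the predefined
    model is re-entered at the SAME node, so the current task is
    revisited after the exception flow has ended."""
    return list(adhoc) + list(remaining)

def detour(remaining, adhoc):
    """'Detour' mode: the transition out of the current node is
    disregarded; the ad hoc alternate route re-enters the model at
    the next step."""
    return list(adhoc) + list(remaining[1:])

remaining = ["approve", "sign", "archive"]
print(question(remaining, ["ask expert"]))  # ['ask expert', 'approve', 'sign', 'archive']
print(detour(remaining, ["ask expert"]))    # ['ask expert', 'sign', 'archive']
```

Both functions return a new ad hoc routing path started at the current position of the predefined workflow, which is exactly how the next paragraph characterizes the two cases.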
Both cases can be looked upon as ad hoc workflows that are started at the current position of a predefined workflow. In some cases it could be useful to change the predefined workflow itself so that it incorporates the ad hoc change. This could be indicated, for instance, if the change happens again and again, or if the responsible organizational entity wants to change the process in that particular step (delegation). As with all other activities within a workflow system, exception handling must be thoroughly recorded. The audit trails can be found as entries in the workflow protocol and can then be graphically displayed by the GroupProcess system.

(3) Predetermined Workflows

These are the well known standard workflows which are used in production WFMS as well as in administrative WFMS. Standard processes usually consist of highly recurrent structures (Fig. 2, col. 3). These workflows pass through the same predetermined order of steps over and over again, and very often consist of routine activities. A one-time investment in task analysis and the development of automated applications seems to be profitable for high volume processes. It should also be considered for typical sequential processing patterns that are followed on several occasions within an organization and are thus re-usable within many workflows. Pre-designed process models determine the complete procedures with their activities, agents, and routing paths, including possible alternatives, in advance. The involved editors of the tasks within the processes, with their positions inside the organizational structure and their roles, have to be included. We will now briefly discuss which of these workflow types are supported by the GroupProcess concept directly, which are supported by existing workflow management systems, and which are supported by a combination of both: The types (1a) to (1c) are directly supported by the GroupProcess system.
The functions for the cooperative parts (group tasks) are provided by the underlying groupware platform. The sub-type (2b) is already supported by the document-oriented WFMS which we chose to combine with the GroupProcess approach ("Workflow enabled Enterprise Office", a product of Pavone AG, Paderborn, Germany). The sub-types (2a) and (2c) will be supported by a combination of GroupProcess and this document-oriented WFMS. The workflow type (3) (predetermined workflows) is supported by current workflow management systems. Although predetermined workflows are not directly supported by the GroupProcess system, there is a linkage between predefined workflows and the processes supported by the GroupProcess system, as the development of predetermined workflows out of ad hoc workflows is one goal of the GroupProcess system.

4.2 A Paradigm shift: Comparison of ad hoc and traditional WFMS

In the previous sections it has been argued that a much more flexible workflow-management solution is necessary. To reach this goal, some of the paradigms of current WFMS have to be rethought and perhaps modified. One paradigm of current WFMS is the separation of build time and run time of a workflow (compare with [8]). A workflow model is entirely designed during build time and afterwards executed during run time. This approach is not suitable for the management of ad hoc processes: because of their nature, ad hoc processes cannot be completely predefined. Consequently, they should be partially predefined and then started. Thus our conclusion is that build time and run time have to be merged to create a system that is capable of managing ad hoc workflows. This ensures that the design of processes can be continued while the process is already running. Another aspect of the current workflow paradigms is the separation between workflow model and workflow instance.
This is a very suitable solution for highly recurrent predefined workflows. The situation for ad hoc workflows is different: because they are not executed in the same way a second time and are subject to highly dynamic changes, they do not need to be stored as a model. Rather, the model and the instance form an integrated whole. If a similar ad hoc workflow ought to be used a second time, an ad hoc workflow that has been executed before can be chosen as a template for the new process. This template can then again be modified while the process is already running. Furthermore, it should be possible for ad hoc processes to be built by the participants of a workflow execution in the organization. We want to accomplish that goal by enabling the system so that a process can be further developed by a different person while the process is running, e.g. by the editor of the current task or by the initiator of the workflow. Participatory design of organizational structures for workflow management systems has already been suggested by Ott and Huth, because it is looked upon as a more efficient way to design organizational structures. Based upon these thoughts, the participatory design of the process structure can be viewed as a continuation of that approach which also offers some options for higher efficiency, in this case for process modeling. The organizational structure proposed in the GroupOrga project by Ott and Huth in [4] is also appropriate as an organizational structure from which to choose the organizational entities for the design of ad hoc workflows. The modeling of organizational structures offers the necessary flexibility to always have the correct organizational objects available. Because of the dynamic and spontaneous nature of ad hoc processes, they are often directly bound to people rather than to more abstract organizational entities like departments or roles. Ad hoc workflows often take place in the core team of the initiator of the process.
If a short-lived process is partially planned, most designers have other persons within their team in mind whom they want to choose directly to get the job done. In this case it is unlikely that an abstract organizational entity is chosen. That is more often the case if the workflow crosses the boundary of the team or even organizational boundaries: then, in most cases, the designer does not know which person of the other team or other organizational unit is able or willing to work on the task. The most important aspect of an ad hoc workflow management system is that it is easy to use in the most common cases. But if structured workflows are to be created later on from previously stored ad hoc workflows, an abstraction from the particular persons to their organizational entities has to be realized. At this point a relation from the person to his or her roles and department needs to be used. This relation is of course not unique. Thus, there has to be some kind of selection of the current positions, roles and department of a particular person which might be suitable as the organizational entity to fulfill the task. An example for the type (2b) "Ad hoc Sub-workflow within predefined Workflow" could be a workflow that is meant to reflect the process of fixing a software problem. The structured steps might include the initial registration of the software problem, submission to a project manager, assignment to a programmer or specialist, routing to quality assurance, delivery to a configuration management specialist, and, after figuring out a workaround, posting it to a public reference library (e.g. the World Wide Web or a bulletin board) ready to be downloaded by the customers. Throughout this process, however, there is likely to be at least one unstructured step that cannot be anticipated or automated: the solving of the problem itself.
Coordination, then, is more than the automation of a sequence of structured tasks, bringing people into and out of a process as required. Rather, when we look at how work is really done, we see that knowledge that is essential to the completion of a process is acquired as a result of the relationships among the various participants, outside of the context of the process itself. Complete coordination includes support for informal conversations, e.g. via e-mail, discussion databases and reference publishing systems, that allow people to gather the information they need to get their jobs done, especially when these conversations happen in the context of a more structured process.

5 Architectural considerations

The architecture of the GroupProcess system, which is displayed in fig. 3, has been derived from the workflow-continuum and the given environment. Process-, organization- and application-database, organization-modeler and -interface, and workflow-modeler and -engine are components of the traditional WFMS (denoted by the dotted line in fig. 3). The core modules of the GroupProcess system are the ad hoc workflow engine, the ad hoc workflow-modeler and -viewer, and the organization-modeler, -interface and organization- and application-database (displayed in dark gray in fig. 3). The model of an ad hoc workflow is directly stored in the document of the corresponding workflow case. Thus there is no connection to the process database which stores the predefined workflow models. The tool for the transformation of ad hoc processes into structured workflows and the module for e-mail tracking and routing are optional. For the integrated WFMS for ad hoc and predefined workflows all modules are used. The primary target environment of the GroupProcess project is an integrated groupware-based office architecture.
Nevertheless, to consider the various occasions that involve routing via e-mail, the prototype pursues the concept of encapsulated message objects that can be sent to and exchanged with external systems as well. As a consequence, all routing information of an ad hoc workflow in the GroupProcess system is stored in the document itself, regardless of which of these two different types of routing occurs (compare with fig. 4). Besides, this also ensures that each document can have its own workflow instance and that the model can be retrieved from the document for further use, as discussed above. The routing information consists of two major blocks. The "workflow protocol" contains detailed information about task definitions that have already been worked off, including the editor, the list of tasks, information about joined documents, and the routing path. The protocol is write protected and cannot be changed anymore. The "workflow definition" contains all routing and task information that has already been specified for the further flow of the document. In many cases, this information can be changed at any time by the current editor of the document. The workflow protocol and workflow definition can be displayed as an integrated workflow model. A workflow-document can also contain the workflow modeler itself, which is necessary to enable the user to enter routing information even if the initiation of the workflow takes place in the mail database of the user instead of the integrated office environment. Therefore the whole modeling system has been created using platform independent internet technologies. This also enables a user to design and use workflows through the web with a browser interface. Moreover, the storage of workflow model, content and modeling tool in the document allows workflows to cross organizational boundaries.
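The split into a write-protected protocol and an editable definition can be sketched as a simple document structure. This is an assumed illustration of the concept, not the actual GroupProcess storage format; all names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    editor: str

@dataclass
class WorkflowDocument:
    """A document carrying its own routing information: an immutable
    protocol of finished tasks plus an editable definition of the
    tasks still to come."""
    content: str
    protocol: List[Task] = field(default_factory=list)    # write-protected
    definition: List[Task] = field(default_factory=list)  # editable

    def complete_current(self) -> Task:
        # move the first pending task from the definition into the
        # protocol; protocol entries are never changed afterwards
        done = self.definition.pop(0)
        self.protocol.append(done)
        return done

doc = WorkflowDocument("offer for customer X",
                       definition=[Task("draft offer", "Alice"),
                                   Task("approve offer", "Bob")])
doc.complete_current()
print([t.name for t in doc.protocol])    # ['draft offer']
print([t.name for t in doc.definition])  # ['approve offer']
```

Because protocol and definition travel inside the document, the combined list can be rendered as the integrated workflow model described above.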
As the degree of abstraction in ad hoc processes is low, in most cases the editor of a task is a person who can be chosen from the user's favorites list, the address book of the organization or, if the process occurs in an integrated office environment, from the organization database. In the latter case, the modeler offers advanced options for choosing an editor: all types of editors the organization database offers, e.g. departments, workgroups or roles. The initiator of a workflow is the person who first defines tasks, and forthcoming editors of these tasks, to be fulfilled in the context of the given document. If tasks in a workflow can be worked on by different editors at the same time, a copy of the document has to be created for each parallel editor. These documents are called split documents. The initiator has to decide whether the workflow can result in several documents or has to end up in a single document. In the latter case, the workflow definition is called a "cluster". By definition, a cluster has only one defined ending task. Thus all split documents that have been created within the cluster have to be joined before an editor can finally finish a task. In general, the editor of a task can decide about the further routing path of the document. If the workflow definition already contains tasks other than the current one, the editor can change the definition or accept it as is. Only in a cluster are the design capabilities of the editors restricted. The definition of the workflow can be locked by the initiator, so that the editor of a task cannot change the further document flow. In a locked workflow model, the initiator can design the flow as if the routing would occur in a WFMS for predefined workflows, so definitions such as alternative routing paths are allowed. If the design is not locked but the workflow has been defined as a cluster by the initiator, the restriction is that the design has to lead to one document.
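The split-document mechanism and the cluster's join requirement can be illustrated with a minimal sketch. The dict-based document and the merge strategy are assumptions for illustration only; GroupProcess may join documents differently:

```python
import copy

def split(document: dict, editors) -> dict:
    """Create one independent split document per parallel editor."""
    return {editor: copy.deepcopy(document) for editor in editors}

def join(splits: dict) -> dict:
    """Join all split documents back into a single document, as a
    cluster requires before its one defined ending task can finish.
    Here the parallel results are simply collected into one list."""
    return {"content": [doc["content"] for doc in splits.values()]}

doc = {"content": "review section"}
splits = split(doc, ["Alice", "Bob"])
splits["Alice"]["content"] = "review section (Alice's notes)"
merged = join(splits)
print(len(merged["content"]))  # 2 -> both parallel results in one document
```

The deep copy matters: each parallel editor works on an independent copy, so Alice's change above does not leak into Bob's split document.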
In case the current editor of a task needs to further differentiate that task into a sub process to get the result that is required to work off the task, the current editor can initiate a sub process. If the current task is located in a cluster, the sub process has to be a cluster as well, because the product of the process has to be a single document. Otherwise the editor, who is the initiator of the sub process, can decide whether the sub process should be a cluster or a process that can have more than one resulting document. Special care has to be taken in designing the user interface. It has to be easy to use, especially for the most often demanded features; otherwise the system will fall into disuse and fail. Fig. 5 shows a screenshot of a prototype of the modeling tool and an idea for integrating an organizational interface. With this interface, process and organizational objects can be created using drag-and-drop techniques. If pictures of the persons are available in the organizational database, they can be used to automatically generate a user interface that contains the favorites list of people of core team members to choose from.

6 Related work

The approach of the GroupProcess project differs from other approaches in the literature. Other current ad hoc WFMS (e.g. [9], [10] or [11]) are often enhancements of WFMS supplying mechanisms to handle "unexpected exceptions" (compare [11]). In contrast, the starting point for the GroupProcess project is an ad hoc workflow management system which is then, in a second step, integrated with a traditional workflow management system. Another difference from other approaches is the extensive participatory aspect. Participation has already been mentioned in the literature, but not to the extent of complete participatory design (compare e.g. [12]).
Moreover, the integration into a groupware platform also differentiates GroupProcess from other approaches, which are primarily integrated into transactional information systems for production WFMS.

7 Conclusions

A vision of an ad hoc management system that integrates workflow-, knowledge- and office-management-systems from a process driven perspective has been presented. From our point of view, this system is the missing linkage between the above mentioned systems and is thus beneficial for all these types of systems, helping to leverage their inherent advantages. Not least because of this, some companies have already shown interest in such a solution. Prototypes of the core module are being implemented and parts can already be used for trial purposes. The tool for the transformation of ad hoc processes into structured workflows and the module for e-mail tracking and routing are not yet finalized.

References
CHAPTER 4

A GRAPHICAL PASSWORD BASED AUTHENTICATION SCHEME

This chapter presents a proposed graphical password based Two-Factor Authentication scheme for web-based services. The scheme does not require a verifier table and therefore provides security against the stolen verifier attack. The graphical password used in the proposed scheme is similar to other recognition-based methods, wherein the user need not remember a textual password; instead, he recognizes and selects previously chosen secret images to get authenticated. The proposed scheme is novel in that it provides strong authentication by addressing usability and security issues related to client, server and transmission channel security. It addresses the usability issues by employing graphical passwords. The security issues at the client are addressed by employing smart cards, and at the server by the proposed scheme which resists the stolen verifier attack. Moreover, the transmission channel security is ensured by sending the encrypted message digest of the message, encrypted using the server's public key. The security analysis of the proposed scheme against the attacks discussed in section 2.6 is presented. Lastly, the formal analysis of the proposed scheme using the Scyther Tool is presented along with the verification code.

4.1 PROPOSED SCHEME

4.1.1 Background

This section discusses how the proposed Two-Factor Authentication scheme works. The idea of the proposed scheme is to provide three 3x3 image grids and allow the user to choose one image as a secret from each grid. At login, the user has to correctly identify the pre-selected graphical password images and enter the corresponding number written with each image to get authenticated. It is also proposed that users be allowed to upload their own set of distinct personal photographs (say 27 in a set) at registration time. This will allow them to easily recognize their own personal photographs at login.
Here the user should carefully select pictures for upload that cannot be guessed by his close aides. It is also suggested that only low resolution images be used to ensure fast access. The advantage of using the user's personal photographs for his account is that it provides a very large password space. Figure 4.1 below shows a sample image grid consisting of a user's personal photographs. The proposed scheme works as follows: if a user wants to register with an online service, he sends a registration request by clicking on the register link and entering an ID. Upon validating the received request, the server sends 'n' images to the client which are displayed on three 3x3 grids one after the other (Figure 4.1 - Example Login Interface). The idea behind displaying images on 3 grids one after the other is to enlarge the password space and thus reduce the possibility of a guessing attack.

Figure 4.1 Example Image Grid for Login

To register, the user has to choose at least one image from each grid by entering the corresponding image number. In this repetitive process the user must choose at least three images in order to complete the password selection. At the end of password selection, the user will have selected 3 images as his graphical password for that account. The client now computes the user's password as $PW_i = h(h(I_1) \oplus h(I_2) \oplus h(I_3))$ and then sends $PW_i$ to the server for registration. During login, after the user's login request is validated by the server, the user is presented with a challenge-response mechanism using images, wherein the user is challenged with at least three 3x3 grids of images. The user then recognizes his image categories and enters the corresponding numbers. The proposed method maintains an image database and a user profile database at the server which contains the registered users' profiles.
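The password computation can be sketched as follows. SHA-256 as the instantiation of $h$, byte-string image identifiers, and the XOR-then-hash reading of the (partly garbled) formula are all assumptions for illustration:

```python
import hashlib

def h(data: bytes) -> bytes:
    """The hash function h; SHA-256 is an assumed instantiation."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length digests."""
    return bytes(x ^ y for x, y in zip(a, b))

def graphical_password(i1: bytes, i2: bytes, i3: bytes) -> bytes:
    """PW_i = h(h(I1) XOR h(I2) XOR h(I3)) over the three secret images."""
    return h(xor(xor(h(i1), h(i2)), h(i3)))

# hypothetical image identifiers standing in for the raw image bytes
pw = graphical_password(b"image-07", b"image-13", b"image-21")
```

Note that XOR is commutative, so `graphical_password` yields the same digest regardless of the order in which the three images are selected; this matches the later remark that the sequence of entering the password is not mandatory.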
The user profile table stores the unique user identity $h(\text{ID}_i)$ which is used to check the validity of a registered user and to display the user's portfolio images accordingly. The user's portfolio images consist of miscellaneous images besides those which the user chose as his password. When a registered user tries to login, the $h(\text{ID}_i)$ entry will be verified and the portfolio images will be presented to him as a challenge.

### 4.1.2 The Proposed Scheme

This section presents the proposed scheme, which has three phases, namely the Registration phase, the Login phase and the Mutual Authentication & Session Key Agreement phase. The scheme uses simple hash functions for efficient computation, and random nonces to resist the replay attack.

**Registration Phase**

It is recommended that the registration phase be executed in a secure environment such as HTTPS. Whenever a new user wants to register with the system, he chooses an $ID_i$ and proceeds as follows:

**Step R1:** $U_i$ submits a registration request to $S$ by entering an ID.

**Step R2:** After checking the availability of the received ID, $S$ sends images to the client. The client displays three 3x3 image grids one after the other.

**Step R3:** $U_i$ chooses a password by selecting one image from each grid, totaling 3 images as his graphical password from the 27 displayed images. The client submits $PW_i$ to $S$ along with the user's portfolio images. The portfolio image set contains 27 images, which include the user's secret images plus other images randomly chosen by the client from the displayed grids.

**Step R4:** Upon receipt of $PW_i$, $S$ computes the following: \[ Q_i = h(ID_i | y_i); \] \[ V_i = h(PW_i | ID_i) \oplus Q_i; \] \[ G_i = h(x \oplus h(ID_i)); \] \[ H_i = h(Q_i) \] Here \( y_i \) is the concatenation of the IP-address and the current time.

**Step R5:** $S$ first creates a user profile which stores the user's $h(ID_i)$, $h(y_i)$ and the portfolio images.
The server then personalizes the smart card with \( \{G_i, V_i, H_i, y_i\} \) and delivers it to the user through a secure channel.

**Login Phase**

Whenever a user wants to login to access his account, he inserts the smart card into the card reader and proceeds as follows:

**Step L1:** $U_i$ requests login.

**Step L2:** Upon receipt of the login request, the server sends the login page along with the server's digital certificate containing its public key.

**Step L3:** The user enters his $ID_i$; the client computes \( H_i^* = h(h(ID_i | y_i)) \) and checks whether \( H_i^* \) equals \( H_i \) (stored in the smart card). If valid, it generates a random secret $P_i$.

**Step L4:** It then computes \( R_i = h(ID_i) \oplus h(P_i) \), encrypts $R_i$, $P_i$ and $h(y_i)$ using the server's public key as $S_i = E_{KUs}(R_i, P_i, h(y_i))$, and sends $S_i$ to the server.

**Step L5:** Upon receipt of $S_i$, the server decrypts it using its private key as $D_{KRs}(R_i, P_i, h(y_i))$.

**Step L6:** The server then computes $h(ID_i) = R_i \oplus h(P_i)$ and checks the validity of $h(ID_i)$ and $h(y_i)$; if valid, it computes $h(P_i+1)$.

**Step L7:** $S$ generates a random secret $P_j$ and retrieves the user's portfolio image set.

**Step L9:** It computes $J_i = E_{h(P_i+1)}(\text{Images}, P_j)$ and sends $J_i$ to the user.

**Step L10:** Upon receipt of $J_i$, the client decrypts the images as $D_{h(P_i+1)}(\text{Images}, P_j)$ and displays the received images for password entry. This step is crucial in resisting the phishing attack, as the attacker cannot reproduce $h(P_i+1)$, which is computed from $P_i$, a value never transmitted in plain text across the channel.

**Step L11:** $U_i$ enters his password $PW_i$.
**Step L12:** The client computes $V_i^* = V_i \oplus h(PW_i | ID_i)$ and checks whether $h(V_i^*)$ equals $H_i$; if valid, it computes $C_i = h(G_i \oplus h(P_j+1))$.

**Step L15:** $U_i$ sends $C_i$ to the server.

**Mutual Authentication & Key Agreement Phase**

Upon receipt of $C_i$, the server computes the following:

**Step V1:** $G_i' = h(x \oplus h(ID_i))$ and $C_i' = h(G_i' \oplus h(P_j+1))$, and checks whether $C_i$ equals $C_i'$. If it does not hold, $S$ rejects the login request; otherwise, both client and server proceed to compute the session key as follows: $$Sk = h(h(P_i+1) \oplus h(P_j+1) \oplus G_i)$$ From here on, the communications between client and server are encrypted using this session key.

**Password Change Phase**

A registered user can change his password by sending a password change request to the server; but for this, he has to first login to the server. If the login is successful, then the password change phase runs in a secure environment, meaning that all communication between client and server is encrypted with the newly generated authenticated session key. Therefore, when the user submits a request to change the password, the client performs the following steps:

**Step C1:** The client generates a random number \( P_k \) and encrypts it with the session key as \( E_{Sk}(P_k) \).

**Step C2:** Upon receipt of \( E_{Sk}(P_k) \), the server decrypts it as \( D_{Sk}(P_k) \) and checks the freshness of the nonce. If valid, it creates \( P_k+1 \) and picks a random set of images for the password change. It then encrypts these with \( Sk \).

**Step C3:** The server sends \( E_{Sk}(\text{Image Set}, P_k+1) \).

**Step C4:** Upon receipt of \( E_{Sk}(\text{Image Set}, P_k+1) \), the client decrypts it as \( D_{Sk}(\text{Image Set}, P_k+1) \) and checks the freshness of the nonce.
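The session key agreement can be sketched numerically. SHA-256 for $h$, an 8-byte big-endian encoding for the nonces, and the sample master secret and identity are all assumptions; the point is only that both ends reach the same $Sk$ from $h(P_i+1)$, $h(P_j+1)$ and the shared card value $G_i$:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()   # assumed instantiation of h

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def h_int(n: int) -> bytes:
    return h(n.to_bytes(8, "big"))         # assumed nonce encoding

# Server master secret x and the smart card value G_i = h(x XOR h(ID_i));
# "server-master-secret" and "alice" are placeholder values.
x = h(b"server-master-secret")
G_i = h(xor(x, h(b"alice")))

# Nonces chosen during login: P_i by the client, P_j by the server.
P_i, P_j = 1234, 5678

# Both ends derive Sk = h(h(P_i+1) XOR h(P_j+1) XOR G_i) independently.
Sk_client = h(xor(xor(h_int(P_i + 1), h_int(P_j + 1)), G_i))
Sk_server = h(xor(xor(h_int(P_i + 1), h_int(P_j + 1)), G_i))
assert Sk_client == Sk_server
```

Since the nonces themselves never cross the channel in plain text, an eavesdropper who misses $P_i$ or $P_j$ cannot recompute $Sk$ even knowing the formula.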
If valid, it displays the images.

**Step C5:** The user chooses a new password \( PW_i^* \) as discussed in 4.2.1 and submits it to the client.

**Step C6:** The client computes \( V_i' = h(PW_i^* \mid ID_i) \oplus Q_i \) and replaces \( V_i \) (stored in the smart card) with \( V_i' \).

**Step C7:** The client updates the contents of the smart card as \( \{G_i, V_i', H_i, y_i\} \).

**Step C8:** It sends the encrypted portfolio images and \( P_k+2 \) to the server.

**Step C9:** Upon receipt, the server checks the freshness of the random number and then updates the user profile as discussed in 4.2.1.

The important feature of the proposed password change phase is that the client does all the required computations and updates the smart card. Thus, the computation and communication cost at the server is drastically reduced. Moreover, the user secret parameter \( V_i' \), which is important in password verification at the client, is never transmitted over the channel, thus providing security against interception. The Registration, Mutual Authentication & Key Agreement and Password Change phases are shown in figures 4.2, 4.3 and 4.4 respectively.

**Figure 4.2** Registration Phase

**Figure 4.3** Mutual Authentication & Key Agreement Phase

**Figure 4.4** Password Change Phase

4.2 SECURITY ANALYSIS

This section presents the security analysis of the proposed scheme against common attacks performed on smart card based schemes and graphical password based schemes.

**Security against Reconnaissance Attack** In the registration phase of the proposed scheme, when the user's registration request is sent to the server, the server in response sends images to the client for choosing the user's password. Here, an adversary may try to gain information about the set of all images stored in the image database, which are randomly picked and displayed at each registration request. To do so, he sends repeated requests with different IDs to the server. But such an act will not help the adversary in gaining knowledge about the images, because the server checks whether the requests repeatedly come from the same IP address and, if so, sends the same set of images at every request. In the worst case, the adversary may send repeated registration requests with different IDs from different systems to get the complete set of images in the database.
Later on, he may try to use this knowledge to perform a guessing attack on a valid user account. This is highly difficult, because the local verification at the smart-card level blocks such an attack even when the adversary is in possession of a registered user’s smart card. **Security against Replay Attack** Suppose an adversary intercepts the messages $S_i$, $J_i$ and $C_i$ during transmission and tries to replay them later; the attack will fail, because the intercepted parameters are computed using random nonce values whose freshness is checked by the receiver (client / server). Moreover, the nonce values are never transmitted in plaintext in steps L4, L9 and L15; hence, without knowledge of $P_i$ and $P_j$, he cannot mount a replay attack. **Security against Insider Attack** As all user-related sensitive information, such as the user ID, secret questions and other registration-related information, is stored as a message digest, the scheme resists the insider attack. **Security against Stolen Verifier Attack** As the scheme does not maintain a verifier table, it is secure against the stolen verifier attack. **Security against Shoulder Surfing Attack** The proposed scheme requires the password image number to be entered in a password field (which shows an xxxxxx pattern for the entered text). This method of password entry will not reveal the user secret even if captured on camera. Hence, the proposed scheme is secure against the shoulder surfing attack. Moreover, the images are displayed randomly at each login; therefore the user might be entering different values for the same image every time. In the proposed scheme, the sequence of entering the password string is not mandatory. **Provision of User Anonymity** During each login, the messages $S_i$, $J_i$ and $C_i$ exchanged in steps L4, L9 and L15 respectively are message digests.
If the attacker intercepts these messages to learn the user’s ID$_i$, he will not succeed, as the message digests are irreversible. Moreover, if the attacker tries to perform offline guessing on $S_i$ and $C_i$ to derive ID$_i$, he will not succeed either, as the computation of $S_i$ and $J_i$ requires the server’s private key and the nonce $P_j$ generated at the server, respectively. Hence, the scheme provides user anonymity. **Security against Server Spoofing Attack** In step L9, the server communicates with the user by sending $J_i$. To spoof the server successfully, the attacker has to create $J_i$ using $h(P_i+1)$. But since $P_i$ / $P_i+1$ are never transmitted, the attacker has no knowledge of $h(P_i+1)$. If the attacker creates some $h(P_i+1)'$ and computes and sends $J_i'$ to the client, this is easily detected at the client system during the computations of step L10. Hence, the scheme protects the user from a masqueraded server. **Security against Guessing Attack** There are two ways to perform a guessing attack. One is to create a dictionary of all images available on the server and use it for offline guessing. Creating such a dictionary requires access to all images stored in the image database, which is a highly difficult task. Alternatively, if users register and log in with their personal photographs, it is infeasible for the attacker to collect all users’ photographs to create a dictionary. The other way is to intercept all images and messages transmitted during a user’s registration and login sessions and then perform a guessing attack to get the password. This is highly difficult: even if the attacker intercepts the registration images and the login messages $S_i$, $J_i$ and $C_i$ transmitted in steps L4, L9 and L15, he now has to guess the correct image and compare it with the intercepted message.
This attack will fail because $S_i$ and $J_i$ are encrypted using the server’s public key and the randomly generated key $h(P_i+1)$, respectively; so the attacker, without decrypting the intercepted parameters, cannot compare the guessed results with them. Moreover, decrypting them requires the server’s private key and the random number, which are highly difficult and impractical to obtain. **Security against Phishing Attack** Before displaying the images received from the server, the client and server ensure, between steps L1 and L4, that both are communicating with valid participants by checking the freshness of random numbers that were never transmitted in plaintext over the channel. Hence, a phishing attack does not succeed. *Explicit Key Authentication* An authenticated key agreement scheme is said to provide explicit key authentication if it provides both (implicit) key authentication and key confirmation. *Implicit Key Authentication* In the proposed scheme, neither the key nor the secret values used to compute it are transmitted in plaintext over the channel. Therefore, there is no way a third party could learn the key value being agreed on. *Key Confirmation* In steps L4 to L15 of the verification phase, the client and server mutually authenticate each other and share the secret values $P_i$ and $P_j$. This shows that only the client and server are in possession of the secret values later used to compute the session key; hence, implicit key authentication is achieved. Moreover, in step V1 both client and server assure each other that they possess the secret session key, which shows that key confirmation is also achieved. **Security of Session Key** *Known-Key Security* The proposed scheme provides known-key security: firstly, the key is never transmitted over the channel during the agreement phase, thus providing resistance against interception.
Secondly, the session key SK is a message digest computed, as discussed in step V1, from $G_i$, $h(P_i+1)$ and $h(P_j+1)$; these parameters are also never transmitted over the channel. Even in the worst-case scenario, if a past session key is disclosed to the attacker, he still cannot derive a new key from it, because the key is a message digest that cannot be reversed. This implies that a new key cannot be generated from a past intercepted key. *Forward Secrecy* Suppose that the attacker has knowledge of ‘x’ and has intercepted an earlier transmitted session as well. To compromise that session, the attacker must decrypt it using the session key used in that session. Since the session key in every session is a message digest computed from $G_i$, $h(P_i+1)$ (a nonce) and $h(P_j+1)$ (another nonce) and is never transmitted over the channel, the attacker cannot create the session key with the knowledge of ‘x’ alone. Hence, the scheme achieves forward secrecy.

4.3 **EFFICIENCY ANALYSIS**

This section presents the efficiency analysis of the scheme in terms of computational cost, communication cost and the memory required for each user. The scheme uses hash functions, XOR operations and encryption for the computations at the client and the server.

**Table 4.1 Efficiency Analysis of Proposed Scheme**

<table> <thead> <tr> <th></th> <th>E1 (T_h)</th> <th>E2 (T_r)</th> <th>E3 (T_E)</th> <th>E4 (T_x)</th> <th>E5 (Bits)</th> <th>E6 (Bits)</th> </tr> </thead> <tbody> <tr> <td><strong>Client</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Registration Phase</td> <td>4</td> <td>0</td> <td>-NA-</td> <td>2</td> <td>512</td> <td>256</td> </tr> <tr> <td>Login, Auth.
&amp; Key Agreement phase</td> <td>14*</td> <td>1</td> <td>2</td> <td>9*</td> <td>-NA-</td> <td>512</td> </tr> <tr> <td>Password Change Phase</td> <td>5*</td> <td>1</td> <td>2</td> <td>2</td> <td>Nil, as the contents will be updated</td> <td>540 Kb for images and 256 bits for message</td> </tr> <tr> <td><strong>Server</strong></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Registration Phase</td> <td>5</td> <td>0</td> <td>-NA-</td> <td>4</td> <td>256</td> <td>540 Kb</td> </tr> <tr> <td>Login, Auth. &amp; Key Agreement phase</td> <td>6</td> <td>1</td> <td>2</td> <td>5</td> <td>-NA-</td> <td>540 Kb for images and 128 bits for message</td> </tr> <tr> <td>Password Change Phase</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>Nil, as the contents will be updated</td> <td>540 Kb for images and 128 bits for message</td> </tr> </tbody> </table>

\* Includes the hash and XOR operations performed to obtain PW_i; refer to 4.2.1

E1: Number of hash computations (T_h); E2: Number of random number generations (T_r); E3: Number of encryptions / decryptions (T_E); E4: Number of XOR and concatenation operations (T_x); E5: Memory needed to store user credentials; E6: Communication cost

It is assumed that Pw_i, the nonces, the outputs of hash functions and the other computed values are all 128 bits long and that the images are 20 Kb each. In the registration phase, the client and server compute 4 and 5 hash functions with 2 and 4 XOR/concatenation operations, respectively. In the login, authentication & key agreement phase, the client performs fourteen hash computations (including the PW_i hash computations), two encryption operations (one asymmetric encryption and one symmetric decryption), one random number generation and nine XOR operations, whereas the server performs six hash computations, two encryption operations (one asymmetric decryption and one symmetric encryption), one random number generation and five XOR operations.
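The hash-and-XOR computations counted above culminate in the session key $Sk = h(h(P_i+1) \oplus h(P_j+1) \oplus G_i)$ from step V1. The sketch below is a minimal illustration, not the thesis implementation: SHA-256 truncated to the 128-bit digests assumed above stands in for the unspecified hash $h$, and all secret values and nonces are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    # The scheme assumes 128-bit digests; SHA-256 truncated to 16 bytes
    # stands in for the unspecified hash function h(.) (an assumption).
    return hashlib.sha256(data).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical long-term values (illustration only, not from the thesis).
x = h(b"server-master-secret")   # server's secret x, reduced to 16 bytes
id_i = b"alice"                  # user identity ID_i
p_i, p_j = 41, 97                # nonces chosen by client and server

def nonce_digest(p: int) -> bytes:
    # h(P+1): digest of the incremented nonce, encoded on 16 big-endian bytes.
    return h((p + 1).to_bytes(16, "big"))

# G_i = h(x XOR h(ID_i)) is stored on the smart card at registration;
# the server recomputes the same value as G_i' during authentication.
g_i = h(xor(x, h(id_i)))

# Sk = h( h(P_i+1) XOR h(P_j+1) XOR G_i ), derived independently by each side.
sk_client = h(xor(xor(nonce_digest(p_i), nonce_digest(p_j)), g_i))
sk_server = h(xor(xor(nonce_digest(p_i), nonce_digest(p_j)), g_i))

assert sk_client == sk_server and len(sk_client) == 16
```

Because both sides feed identical inputs into $h$, they derive the same 128-bit key without ever sending it, which is exactly why only hash and XOR operations appear in the cost table for this step.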
In the password change phase, the client computes five hash functions, two XOR/concatenation operations, one random number generation and two encryption operations, whereas the server computes only one encryption. The memory needed to store G_i, V_i, H_i, y_i in the smart card is 512 bits (4 x 128), whereas the memory needed at the server to store h(ID_i), h(y_i) is 256 bits (2 x 128). These values exclude the size of the images stored in the image database. After the password change phase, no additional space is needed, as the updated V_i' computed during the password change replaces the current V_i stored in the smart card.

The communication cost of authentication is the volume of message transmission involved in the authentication scheme. From client to server, the transmission cost is 256 bits (2 x 128) for ID_i and Pw_i in the registration phase; during the login, authentication & key agreement phase, it is 512 bits. From server to client, it is 540 Kb (27 x 20 Kb images) in the registration phase, and in the login phase it is 540 Kb (27 x 20 Kb) for the images plus 128 bits (1 x 128) for P_j. In the password change phase, the cost of transmitting all messages, including the images and the encrypted P_k, from client to server is 540 Kb for images and 256 bits for the message, whereas the cost from server to client is 540 Kb for images and 128 bits for the message.

Thus, the above analysis shows that the scheme is efficient, as most of the computations are based on hash functions, which are computationally inexpensive; hence, the scheme can be used with Java cards or any other second-factor device for web-based applications.

4.4 **FORMAL VERIFICATION**

This section presents the formal analysis, including the source code of the verification in .spdl and a screen shot of the result. As per the discussion of section 2.3.3, the agent model here consists of two agents, ‘C’ (Client) and ‘S’ (Server). Each agent performs a role; therefore two roles, i.e.
‘role C’ and ‘role S’ are defined. An additional role, ‘role H’, is defined to handle the intermediate computations. The analysis also models the adversary, assuming the adversary has complete control of the network. Since Scyther checks freshness and synchronization by default, these attributes are also claimed in the analysis. Note also that this protocol requires public-key encryption, which is declared here as the function Ekus, as per the specification of the language.

```plaintext
usertype SessionKey;
secret k: Function;
const succ: Function;
const Fresh: Function;
const Compromised: Function;
const hash: Function;
const XOR: Function;
const plus: Function;
const compare: Function;
const Ekus: Function;
const Dkrs: Function;
const EncryptbyhashofPiplus1: Function;

protocol StarCompromise(H)
{
    // Read the names of 3 agents and disclose a session between them,
    // including the corresponding session key, to simulate key compromise
    role H
    {
        const ID,Pi,1,Yi,Pj,Images,Skey: Nonce;
        var S,C: Agent;

        read_!H1(H,H, S,C);
        send_!H2(H,H, (S,
            hash(XOR(hash(XOR(Skey,hash(ID))),hash(plus(Pj,1)))),
            Ekus(Pi,hash(Yi),hash(ID)),
            EncryptbyhashofPiplus1(Images,Pj)));
        # claim_H3(C,Empty, (Compromised));
    }
}

protocol sam08(S,C)
{
    role S
    {
        const ID,Pi,1,Yi,Pj,Images,Skey: Nonce;

        read_1(C,S, Ekus(Pi,hash(Yi),hash(ID)));
        send_2(S,C, EncryptbyhashofPiplus1(Images,Pj));
        read_3(C,S, hash(XOR(hash(XOR(Skey,hash(ID))),hash(plus(Pj,1)))));

        claim_S1(S,Nisynch);
        claim_S2(S,Niagree);
        claim_S3(S,Secret,Ekus(Pi,hash(Yi),hash(ID)));
        claim_S4(S,Secret,EncryptbyhashofPiplus1(Images,Pj));
        claim_S5(S,Secret,hash(XOR(hash(XOR(Skey,hash(ID))),hash(plus(Pj,1)))));
    }

    role C
    {
        const ID,Pi,1,Yi,Pj,Images,Skey: Nonce;

        send_1(C,S, Ekus(Pi,hash(Yi),hash(ID)));
        read_2(S,C, EncryptbyhashofPiplus1(Images,Pj));
        send_3(C,S, hash(XOR(hash(XOR(Skey,hash(ID))),hash(plus(Pj,1)))));

        claim_C1(C,Nisynch);
        claim_C2(C,Niagree);
        claim_C3(C,Secret,Ekus(Pi,hash(Yi),hash(ID)));
        claim_C4(C,Secret,EncryptbyhashofPiplus1(Images,Pj));
        claim_C5(C,Secret,hash(XOR(hash(XOR(Skey,hash(ID))),hash(plus(Pj,1)))));
    }
}

const Alice,Bob,Eve: Agent;
untrusted Eve;
const ne: Nonce;
const kee: SessionKey;
compromised k(Eve,Eve);
compromised k(Eve,Alice);
compromised k(Eve,Bob);
compromised k(Alice,Eve);
compromised k(Bob,Eve);
```

**Figure 4.5** Screen Shot of Verification Result
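The Scyther model above treats the client's local computations abstractly. As a complement, the local password verification from the login phase, where the client recomputes $V_i^* = V_i \oplus h(PW_i \mid ID_i)$ and compares $h(V_i^*)$ with the stored $H_i$, can be sketched as follows. This is a minimal sketch under stated assumptions: SHA-256 truncated to 128 bits stands in for $h$, the card secret $Q_i$'s derivation is abstracted away, and the identity and password values are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    # 128-bit digest stand-in for the scheme's hash h(.) (an assumption).
    return hashlib.sha256(data).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

# --- Registration (hypothetical values; Q_i's derivation is abstracted) ---
id_i, pw_i = b"alice", b"7-19-3"      # identity and graphical password string
q_i = h(b"card-secret-Q_i")           # card secret whose digest is H_i
card = {
    "V_i": xor(h(pw_i + b"|" + id_i), q_i),  # V_i = h(PW_i | ID_i) XOR Q_i
    "H_i": h(q_i),                           # verifier stored on the card
}

def local_verify(pw: bytes) -> bool:
    # Login step: V_i* = V_i XOR h(PW_i | ID_i); accept iff h(V_i*) == H_i.
    v_star = xor(card["V_i"], h(pw + b"|" + id_i))
    return h(v_star) == card["H_i"]

assert local_verify(pw_i)           # correct password passes
assert not local_verify(b"1-2-3")   # wrong password is rejected locally
```

With the correct password, the XOR cancels and $V_i^*$ collapses to $Q_i$, so the digest check succeeds; a wrong password yields garbage that fails the check without any server interaction, which is the basis of the reconnaissance-attack resistance discussed above.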
CSC 261/461 – Database Systems Lecture 11 Fall 2017

• Read the textbook!
– Chapter 8: will cover later, but self-study the chapter; everything except Section 8.4
– Chapter 14: Sections 14.1 – 14.5
– Chapter 15: Sections 15.1 – 15.4

Superkeys and Keys

A **superkey** is a set of attributes $A_1, ..., A_n$ s.t. for any other attribute $B$ in $R$, we have $\{A_1, ..., A_n\} \rightarrow B$; i.e., all attributes are *functionally determined* by a superkey. A **key** is a *minimal* superkey, meaning that no subset of a key is also a superkey.

Finding Keys and Superkeys

• For each set of attributes $X$:
1. Compute $X^+$
2. If $X^+$ = set of all attributes, then $X$ is a superkey
3. If $X$ is minimal, then it is a key

Do we need to check all sets of attributes? Which sets?

Example of Finding Keys

Product(name, price, category, color)
{name, category} $\rightarrow$ price
{category} $\rightarrow$ color
What is a key?

Example of Keys

Product(name, price, category, color)
{name, category} → price
{category} → color
{name, category}⁺ = {name, price, category, color} = the set of all attributes ⇒ this is a superkey ⇒ this is a key, since neither name nor category alone is a superkey

Today’s Lecture
1. 2NF, 3NF and Boyce-Codd Normal Form
2. Decompositions

Prime and Non-prime attributes
- A **prime attribute** must be a member of *some* candidate key.
- A **nonprime attribute** is not a prime attribute; that is, it is not a member of any candidate key.

Now that we know how to find FDs, it’s a straightforward process:
1. Search for “bad” FDs
2. If there are any, then keep decomposing the table into sub-tables until no more bad FDs
3. When done, the database schema is normalized

The main idea is that we define “good” and “bad” FDs as follows:
- $X \rightarrow A$ is a “good FD” if $X$ is a (super)key; in other words, if $X^+$ is the set of all attributes
- $X \rightarrow A$ is a “bad FD” otherwise

We will try to eliminate the “bad” FDs!
- Via normalization

Second Normal Form
• Uses the concepts of FDs and the primary key
• Definitions
– **Full functional dependency:** a FD $Y \rightarrow Z$ where removal of any attribute from $Y$ means the FD does not hold any more

Second Normal Form (cont.)
- Examples:
- \( \{\text{Ssn, Pnumber}\} \rightarrow \text{Hours} \) is a full FD since neither \( \text{Ssn} \rightarrow \text{Hours} \) nor \( \text{Pnumber} \rightarrow \text{Hours} \) holds
- \( \{\text{Ssn, Pnumber}\} \rightarrow \text{Ename} \) is not a full FD (it is called a partial dependency) since \( \text{Ssn} \rightarrow \text{Ename} \) also holds

Second Normal Form (2)
• A relation schema R is in **second normal form (2NF)** if every non-prime attribute A in R is fully functionally dependent on the primary key
• R can be decomposed into 2NF relations via the process of 2NF normalization or “second normalization”

Third Normal Form (1)
• Definition:
– **Transitive functional dependency**: a FD $X \rightarrow Z$ that can be derived from two FDs $X \rightarrow Y$ and $Y \rightarrow Z$
• Examples:
– $Ssn \rightarrow Dmgr\_ssn$ is a transitive FD, since $Ssn \rightarrow Dnumber$ and $Dnumber \rightarrow Dmgr\_ssn$ hold
– $Ssn \rightarrow Ename$ is non-transitive, since there is no set of attributes $X$ where $Ssn \rightarrow X$ and $X \rightarrow Ename$

Third Normal Form (2)
- A relation schema R is in **third normal form (3NF)** if it is in 2NF and no non-prime attribute A in R is transitively dependent on the primary key.
- R can be decomposed into 3NF relations via the process of 3NF normalization.

Normalizing into 2NF and 3NF: (a) EMP_PROJ 2NF normalization; (b) EMP_DEPT 3NF normalization.

Figure 14.12 Normalization into 2NF and 3NF. (a) The LOTS relation with its functional dependencies FD1 through FD4. (b) Decomposing into the 2NF relations LOTS1 and LOTS2. (c) Decomposing LOTS1 into the 3NF relations LOTS1A and LOTS1B. (d) Progressive normalization of LOTS into a 3NF design.
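The attribute-closure computation $X^+$ that drives all of these tests (key finding, checking full FDs, and later the BCNF check) can be sketched as a standard fixpoint loop. The FDs below reproduce the Product example from the earlier slides; the function itself is the usual textbook algorithm, not code from this lecture.

```python
def closure(attrs, fds):
    """Compute X+ : repeatedly apply FDs (lhs -> rhs) until nothing changes."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # An FD fires when its whole left-hand side is already in X+.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Product(name, price, category, color) with
# {name, category} -> price and {category} -> color
fds = [({"name", "category"}, {"price"}),
       ({"category"}, {"color"})]
all_attrs = {"name", "price", "category", "color"}

# {name, category}+ covers every attribute, so it is a superkey (and a key,
# since neither attribute alone is a superkey); {category}+ does not.
assert closure({"name", "category"}, fds) == all_attrs
assert closure({"category"}, fds) == {"category", "color"}
```

The superkey test from the "Finding Keys and Superkeys" slide is then just `closure(X, fds) == all_attrs`.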
Normal Forms Defined Informally
• **1st normal form** – All attributes depend on **the key**
• **2nd normal form** – All attributes depend on **the whole key**
• **3rd normal form** – All attributes depend on **nothing but the key**

• A relation schema \( R \) is in **second normal form (2NF)** if every non-prime attribute \( A \) in \( R \) is fully functionally dependent on *every* key of \( R \).
• A relation schema \( R \) is in **third normal form (3NF)** if it is in 2NF *and* no non-prime attribute \( A \) in \( R \) is transitively dependent on *any* key of \( R \).

1. BOYCE-CODD NORMAL FORM

What you will learn about in this section
1. Conceptual Design
2. Boyce-Codd Normal Form
3. The BCNF Decomposition Algorithm

5. BCNF (Boyce-Codd Normal Form)
• A relation schema R is in Boyce-Codd Normal Form (BCNF) if whenever an FD $X \rightarrow A$ holds in R, then $X$ is a superkey of R.
• Each normal form is strictly stronger than the previous one
– Every 2NF relation is in 1NF
– Every 3NF relation is in 2NF
– Every BCNF relation is in 3NF

Figure 14.13 Boyce-Codd normal form
(a) LOTS1A
<table> <thead> <tr> <th>Property_id#</th> <th>County_name</th> <th>Lot#</th> <th>Area</th> </tr> </thead> </table>
FD1 FD2 FD5
BCNF Normalization
LOTS1AX
<table> <thead> <tr> <th>Property_id#</th> <th>Area</th> <th>Lot#</th> </tr> </thead> </table>
LOTS1AY
<table> <thead> <tr> <th>Area</th> <th>County_name</th> </tr> </thead> </table>
(b) a schematic relation \( R(A, B, C) \) with FDs FD1 and FD2

Boyce-Codd normal form. (a) BCNF normalization of LOTS1A with the functional dependency FD2 being lost in the decomposition. (b) A schematic relation with FDs; it is in 3NF, but not in BCNF due to the f.d. \( C \to B \).
4.3 Interpreting the General Definition of Third Normal Form (2)
- ALTERNATIVE DEFINITION of 3NF: We can restate the definition as: A relation schema $R$ is in third normal form (3NF) if, whenever a nontrivial FD $X \rightarrow A$ holds in $R$, either
a) $X$ is a superkey of $R$, or
b) $A$ is a prime attribute of $R$
Condition (b) takes care of the dependencies that “slip through” (are allowed by) 3NF but are “caught by” BCNF, which we discuss next.

5. BCNF (Boyce-Codd Normal Form)
- Definition of 3NF: A relation schema R is in 3NF if, whenever a nontrivial FD \( X \rightarrow A \) holds in R, either
- a) \( X \) is a superkey of R, or
- b) \( A \) is a prime attribute of R
- A relation schema R is in Boyce-Codd Normal Form (BCNF) if whenever an FD \( X \rightarrow A \) holds in R, then
- a) \( X \) is a superkey of R
- b) there is no condition (b): BCNF drops the prime-attribute exception of 3NF
- Each normal form is strictly stronger than the previous one
- Every 2NF relation is in 1NF
- Every 3NF relation is in 2NF
- Every BCNF relation is in 3NF

A relation TEACH that is in 3NF but not in BCNF
- Two FDs exist in the relation TEACH:
- \{\text{student, course}\} \rightarrow \text{instructor}
- \text{instructor} \rightarrow \text{course}
- \{\text{student, course}\} is a candidate key for this relation, so this relation is in 3NF but not in BCNF
- A relation NOT in BCNF should be decomposed, while possibly forgoing the preservation of all functional dependencies in the decomposed relations.

Achieving BCNF by Decomposition
- Three possible decompositions for relation TEACH:
- D1: \{student, instructor\} and \{student, course\}
- D2: \{course, instructor\} and \{course, student\}
- D3: \{instructor, course\} and \{instructor, student\}

Boyce-Codd Normal Form
BCNF is a simple condition for removing anomalies from relations. A relation $R$ is in BCNF if: whenever $\{X_1, \ldots, X_n\} \rightarrow A$ is a non-trivial FD in $R$, then $\{X_1, \ldots, X_n\}$ is a superkey for $R$. In other words: there are no “bad” FDs.

Example
<table> <thead> <tr> <th>Name</th> <th>SSN</th> <th>PhoneNumber</th> <th>City</th> </tr> </thead> <tbody> <tr> <td>Fred</td> <td>123-45-6789</td> <td>206-555-1234</td> <td>Seattle</td> </tr> <tr> <td>Fred</td> <td>123-45-6789</td> <td>206-555-6543</td> <td>Seattle</td> </tr> <tr> <td>Joe</td> <td>987-65-4321</td> <td>908-555-2121</td> <td>Westfield</td> </tr> <tr> <td>Joe</td> <td>987-65-4321</td> <td>908-555-1234</td> <td>Westfield</td> </tr> </tbody> </table>
{SSN} → {Name, City}
This FD is \textit{bad} because it is \textit{not} a superkey ⇒ \textbf{Not} in BCNF
\textit{What is the key?} {SSN, PhoneNumber}

Example
<table> <thead> <tr> <th>Name</th> <th>SSN</th> <th>City</th> </tr> </thead> <tbody> <tr> <td>Fred</td> <td>123-45-6789</td> <td>Seattle</td> </tr> <tr> <td>Joe</td> <td>987-65-4321</td> <td>Madison</td> </tr> </tbody> </table>
{SSN} \rightarrow \{\text{Name}, \text{City}\}
This FD is
now good because it is the key. Let’s check anomalies:
- Redundancy?
- Update?
- Delete?
Now in BCNF!

BCNF Decomposition Algorithm

BCNFDecomp(R):
Find a set of attributes X s.t.: $X^+ \neq X$ and $X^+ \neq \{\text{all attributes}\}$
if (not found) then Return R
let $Y = X^+ - X$, $Z = (X^+)^C$
decompose R into $R_1(X \cup Y)$ and $R_2(X \cup Z)$
Return BCNFDecomp($R_1$), BCNFDecomp($R_2$)

Reading the algorithm line by line:
- Find a set of attributes X which has non-trivial “bad” FDs, i.e. is not a superkey, using closures
- If no “bad” FDs are found, R is in BCNF!
- Let Y be the attributes that X functionally determines (and that are not in X), and let Z be the other attributes that it doesn’t
- Split into one relation (table) with X plus the attributes that X determines (Y)…
- …and one relation with X plus the attributes it does *not* determine (Z)
- Proceed recursively until no more “bad” FDs!

Equivalently: if $X \rightarrow A$ causes a BCNF violation, decompose $R$ into $R_1 = XA$ and $R_2 = R - A$. (Note: $X$ is present in both $R_1$ and $R_2$.)

Example
$R(A,B,C,D,E)$ with $\{A\} \rightarrow \{B,C\}$ and $\{C\} \rightarrow \{D\}$:
- $\{A\}^+ = \{A,B,C,D\} \neq \{A,B,C,D,E\}$, so decompose into $R_1(A,B,C,D)$ and $R_2(A,E)$
- Within $R_1$: $\{C\}^+ = \{C,D\} \neq \{A,B,C,D\}$, so decompose into $R_{11}(C,D)$ and $R_{12}(A,B,C)$
- Result: $R_{11}(C,D)$, $R_{12}(A,B,C)$, $R_2(A,E)$

Acknowledgement
• Some of the slides in this presentation are taken from the slides provided by the authors.
• Many of these slides are taken from the cs145 course offered by Stanford University.
• Thanks to YouTube, especially to Dr. Daniel Soper, for his useful videos.
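The BCNFDecomp pseudocode above can be turned into a short executable sketch. Two assumptions not in the slides: the `closure` helper is the standard fixpoint computation, and the candidate sets X are searched naively over all attribute subsets, which is fine at this example’s scale.

```python
from itertools import combinations

def closure(attrs, fds):
    # Standard X+ fixpoint: apply FDs (lhs -> rhs) until nothing changes.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def bcnf_decompose(rel, fds):
    """BCNFDecomp from the slides: split on any X with X != X+ != rel."""
    rel = set(rel)
    for size in range(1, len(rel)):
        for x in combinations(sorted(rel), size):
            x = set(x)
            x_plus = closure(x, fds) & rel      # X+ projected onto this relation
            if x_plus != x and x_plus != rel:   # "bad" FD: X is not a superkey
                y = x_plus - x                  # Y = X+ - X
                z = rel - x_plus                # Z = (X+)^C
                return (bcnf_decompose(x | y, fds)    # R1(X u Y)
                        + bcnf_decompose(x | z, fds)) # R2(X u Z)
    return [rel]                                # no bad FDs: already in BCNF

# The slides' example: R(A,B,C,D,E) with {A} -> {B,C} and {C} -> {D}
fds = [({"A"}, {"B", "C"}), ({"C"}, {"D"})]
parts = bcnf_decompose({"A", "B", "C", "D", "E"}, fds)
assert parts == [{"C", "D"}, {"A", "B", "C"}, {"A", "E"}]
```

Running it on the worked example reproduces the decomposition traced on the slides: R11(C,D), R12(A,B,C) and R2(A,E). Note that, as the TEACH discussion warns, the result is lossless-join but need not preserve all FDs.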
Concurrent Programming with Threads: Part II

Topics
- Progress graphs
- Semaphores
- Mutex and condition variables
- Barrier synchronization
- Timeout waiting

A version of `badcnt.c` with a simple counter loop

```c
int ctr = 0; /* shared */

/* main routine creates two count threads */

/* count thread */
void *count(void *arg)
{
    int i;
    for (i = 0; i < NITERS; i++)
        ctr++;
    return NULL;
}
```

note: counter should be equal to 200,000,000

```
linux> badcnt
BOOM! ctr=198841183
linux> badcnt
BOOM! ctr=198261801
linux> badcnt
BOOM! ctr=198269672
```

What went wrong?

Assembly code for counter loop

C code for counter loop:

```c
for (i=0; i<NITERS; i++)
    ctr++;
```

Corresponding asm code (gcc -O0 -fforce-mem):

```asm
.L9:
        movl -4(%ebp),%eax
        cmpl $99999999,%eax
        jle .L12
        jmp .L10
.L12:
        movl ctr,%eax      # Load
        leal 1(%eax),%edx  # Update
        movl %edx,ctr      # Store
.L11:
        movl -4(%ebp),%eax
        leal 1(%eax),%edx
        movl %edx,-4(%ebp)
        jmp .L9
.L10:
```

The loop body divides into five parts: Head (H_i), Load ctr (L_i), Update ctr (U_i), Store ctr (S_i), and Tail (T_i).

Concurrent execution

Key thread idea: in general, any sequentially consistent interleaving is possible, but some are incorrect!
Notation:
- l_i denotes that thread i executes instruction l
- %eax_i is the contents of %eax in thread i's context

A correct ordering:

| i (thread) | instr_i | %eax_1 | %eax_2 | ctr |
|---|---|---|---|---|
| 1 | H_1 | - | - | 0 |
| 1 | L_1 | 0 | - | 0 |
| 1 | U_1 | 1 | - | 0 |
| 1 | S_1 | 1 | - | 1 |
| 2 | H_2 | - | - | 1 |
| 2 | L_2 | - | 1 | 1 |
| 2 | U_2 | - | 2 | 1 |
| 2 | S_2 | - | 2 | 2 |
| 2 | T_2 | - | 2 | 2 |
| 1 | T_1 | 1 | - | 2 |

OK

Concurrent execution (cont)

Incorrect ordering: two threads increment the counter, but the result is 1 instead of 2.

| i (thread) | instr_i | %eax_1 | %eax_2 | ctr |
|---|---|---|---|---|
| 1 | H_1 | - | - | 0 |
| 1 | L_1 | 0 | - | 0 |
| 1 | U_1 | 1 | - | 0 |
| 2 | H_2 | - | - | 0 |
| 2 | L_2 | - | 0 | 0 |
| 1 | S_1 | 1 | - | 1 |
| 1 | T_1 | 1 | - | 1 |
| 2 | U_2 | - | 1 | 1 |
| 2 | S_2 | - | 1 | 1 |
| 2 | T_2 | - | 1 | 1 |

Oops! One increment is lost.

Concurrent execution (cont)

How about this ordering?
As an exercise, trace this ordering yourself: H_1, L_1, H_2, L_2, U_2, S_2, U_1, S_1, T_1, T_2 — fill in %eax_1, %eax_2, and ctr at each step.

Progress graphs

We can clarify our understanding of concurrent execution with the help of the *progress graph*.

A progress graph depicts the discrete execution state space of concurrent threads. Each axis corresponds to the sequential order of instructions in a thread. Each point corresponds to a possible execution state (Inst_1, Inst_2). E.g., (L_1, S_2) denotes the state where thread 1 has completed L_1 and thread 2 has completed S_2.

Legal state transitions:
- Interleaved concurrent execution (one processor): horizontal or vertical moves.
- Parallel concurrent execution (multiple processors): horizontal, vertical, or diagonal moves.

Key point: always reason about concurrent threads as if each thread had its own CPU.

A trajectory is a sequence of legal state transitions that describes one possible concurrent execution of the threads. Example: H1, L1, U1, H2, L2, S1, T1, U2, S2, T2.

Critical sections and unsafe regions

L, U, and S form a *critical section* with respect to the shared variable ctr. Instructions in critical sections (wrt some shared variable) should not be interleaved. Sets of states where such interleaving occurs form *unsafe regions*.

Def: A safe trajectory is a sequence of legal transitions that does not touch any states in an unsafe region.
Claim: any safe trajectory results in a correct value for the shared variable ctr.

Unsafe trajectories

[Figure: progress graph with the unsafe region shaded; the critical sections of thread 1 (H1 → L1 → U1 → S1 → T1) and thread 2 lie along the axes.]

Touching a state of type x is always incorrect. Touching a state of type y may or may not be OK:
- correct because the store completes before the load;
- incorrect because the order of the load and the store is indeterminate.

Moral: be conservative and disallow all unsafe trajectories.

Semaphore operations

**Question:** How can we guarantee a safe trajectory?
- We must *synchronize* the threads so that they never enter an unsafe state.

**Classic solution:** Dijkstra's P and V operations on semaphores.
- *semaphore*: non-negative integer synchronization variable.
- P(s): `[ while (s == 0) wait(); s--; ]`
  - Dutch for "Proberen" (test)
- V(s): `[ s++; ]`
  - Dutch for "Verhogen" (increment)
- The OS guarantees that operations between brackets `[ ]` are executed indivisibly.
  - Only one P or V operation at a time can modify s.
  - When the while loop in P terminates, only that P can decrement s.
- Semaphore invariant: `(s >= 0)`

Sharing with semaphores

Provide mutually exclusive access to a shared variable by surrounding the critical section with P and V operations on a semaphore s (initially set to 1). The semaphore invariant creates a forbidden region that encloses the unsafe region and is never touched by any trajectory. A semaphore used in this way is often called a mutex (mutual exclusion).
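The lost-update effect and its mutex cure can both be seen in a small standalone Python sketch (an illustration, not code from the slides). The first part replays the Load/Update/Store interleavings from the tables above as a deterministic simulation; the second part runs two real threads whose critical section is protected by a lock, so the final count is exact.

```python
import threading

def run(schedule):
    """Simulate a Load/Update/Store schedule; each thread has its own 'eax'."""
    ctr = 0
    eax = {1: None, 2: None}
    for tid, op in schedule:
        if op == "L":            # Load:   eax_i = ctr
            eax[tid] = ctr
        elif op == "U":          # Update: eax_i += 1
            eax[tid] += 1
        elif op == "S":          # Store:  ctr = eax_i
            ctr = eax[tid]
    return ctr

ok   = [(1, "L"), (1, "U"), (1, "S"), (2, "L"), (2, "U"), (2, "S")]
oops = [(1, "L"), (2, "L"), (1, "U"), (1, "S"), (2, "U"), (2, "S")]
print(run(ok))    # → 2
print(run(oops))  # → 1  (one increment is lost)

# With a mutex, the critical section L-U-S is never interleaved:
mutex = threading.Lock()
ctr = 0

def count():
    global ctr
    for _ in range(100_000):
        with mutex:              # like P(&mutex) ... V(&mutex)
            ctr += 1

threads = [threading.Thread(target=count) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(ctr)  # → 200000
```

Without the `with mutex:` block the second part would exhibit the same non-deterministic undercount as badcnt.c.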
Initially, s = 1.

Posix semaphores

```c
/* initialize semaphore sem to value */
/* pshared=0 if thread, pshared=1 if process */
void Sem_init(sem_t *sem, int pshared, unsigned int value)
{
    if (sem_init(sem, pshared, value) < 0)
        unix_error("Sem_init");
}

/* P operation on semaphore sem */
void P(sem_t *sem)
{
    if (sem_wait(sem))
        unix_error("P");
}

/* V operation on semaphore sem */
void V(sem_t *sem)
{
    if (sem_post(sem))
        unix_error("V");
}
```

```c
/* goodcnt.c - properly synch'd version of badcnt.c */
#include <ics.h>
#define NITERS 10000000

void *count(void *arg);

struct {
    int ctr;      /* shared ctr */
    sem_t mutex;  /* semaphore */
} shared;

int main()
{
    pthread_t tid1, tid2;

    /* init mutex semaphore to 1 */
    Sem_init(&shared.mutex, 0, 1);

    /* create 2 ctr threads and wait */
    ...
}

/* counter thread */
void *count(void *arg)
{
    int i;
    for (i = 0; i < NITERS; i++) {
        P(&shared.mutex);
        shared.ctr++;
        V(&shared.mutex);
    }
    return NULL;
}
```

Progress graph for goodcnt.c

Initially, mutex = 1.

Semaphores introduce the potential for deadlock: waiting for a condition that will never be true. Any trajectory that enters the deadlock region will eventually reach the deadlock state, waiting for either s or t to become nonzero. Other trajectories luck out and skirt the deadlock region. *Unfortunate fact:* deadlock is often non-deterministic.

A deterministic deadlock

Initially, s = 1, t = 0. Sometimes, though, we get "lucky" and the deadlock is deterministic. Here is an example of a deterministic deadlock caused by improperly initializing semaphore t.

**Problem:** correct this program and draw the resulting forbidden regions.

Signaling With Semaphores

Common synchronization pattern:
- Producer waits for slot, inserts item in buffer, and signals consumer.
- Consumer waits for item, removes it from buffer, and signals producer.
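The producer-consumer pattern just described can be sketched with Python's `threading.Semaphore` (a standalone analogue of the C version, not part of the slides). With a 1-element buffer, `empty` starting at 1 and `full` at 0 force the two threads to strictly alternate.

```python
import threading

NITERS = 5
buf = [None]                    # 1-element buffer
empty = threading.Semaphore(1)  # free slots, initially 1
full = threading.Semaphore(0)   # available items, initially 0
consumed = []

def producer():
    for i in range(NITERS):
        empty.acquire()         # P(&empty): wait for a free slot
        buf[0] = i
        full.release()          # V(&full): signal that an item is ready

def consumer():
    for _ in range(NITERS):
        full.acquire()          # P(&full): wait for an item
        consumed.append(buf[0])
        empty.release()         # V(&empty): signal that the slot is free

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)  # → [0, 1, 2, 3, 4]
```

Because the semaphore counts never exceed 1, no item can be overwritten before it is consumed, so the output order is deterministic.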
Examples:
- Multimedia processing:
  - Producer creates MPEG video frames, consumer renders the frames.
- Graphical user interfaces:
  - Producer detects mouse clicks, mouse movements, and keyboard hits and inserts corresponding events in buffer.
  - Consumer retrieves events from buffer and paints the display.

Producer-consumer (1-buffer)

```c
/* buf1.c - producer-consumer on 1-element buffer */
#include <ics.h>
#define NITERS 5

void *producer(void *arg);
void *consumer(void *arg);

struct {
    int buf;      /* shared var */
    sem_t full;   /* sems */
    sem_t empty;
} shared;

int main()
{
    pthread_t tid_producer;
    pthread_t tid_consumer;

    /* initialize the semaphores */
    Sem_init(&shared.empty, 0, 1);
    Sem_init(&shared.full, 0, 0);

    /* create threads and wait */
    Pthread_create(&tid_producer, NULL, producer, NULL);
    Pthread_create(&tid_consumer, NULL, consumer, NULL);
    Pthread_join(tid_producer, NULL);
    Pthread_join(tid_consumer, NULL);

    exit(0);
}
```

Initially: empty = 1, full = 0.
```c
/* producer thread */
void *producer(void *arg)
{
    int i, item;

    for (i = 0; i < NITERS; i++) {
        /* produce item */
        item = i;
        printf("produced %d\n", item);

        /* write item to buf */
        P(&shared.empty);
        shared.buf = item;
        V(&shared.full);
    }
    return NULL;
}

/* consumer thread */
void *consumer(void *arg)
{
    int i, item;

    for (i = 0; i < NITERS; i++) {
        /* read item from buf */
        P(&shared.full);
        item = shared.buf;
        V(&shared.empty);

        /* consume item */
        printf("consumed %d\n", item);
    }
    return NULL;
}
```

Producer-consumer progress graph

Initially, empty = 1, full = 0. The forbidden regions prevent the producer from writing into a full buffer. They also prevent the consumer from reading an empty buffer.

**Problem:** Write a version for an n-element buffer with multiple producers and consumers.

Limitations of semaphores

Semaphores are sound and fundamental, but they have limitations.
- **Difficult to broadcast a signal to a group of threads.**
  - e.g., *barrier synchronization*: no thread returns from the barrier function until every other thread has called the barrier function.
- **Impossible to do timeout waiting.**
  - e.g., wait for at most 1 second for a condition to become true.

For these we must use Pthreads mutex and condition variables.

Basic operations on mutex variables

```c
int pthread_mutex_init(pthread_mutex_t *mutex, pthread_mutexattr_t *attr)
```

Initializes a mutex variable (`mutex`) with some attributes (`attr`).
- attributes are usually NULL.
- like initializing a mutex semaphore to 1.

```c
int pthread_mutex_lock(pthread_mutex_t *mutex)
```

Indivisibly waits for `mutex` to be unlocked and then locks it.
- like P(`mutex`)

```c
int pthread_mutex_unlock(pthread_mutex_t *mutex)
```

Unlocks `mutex`.
- like V(`mutex`)

Basic operations on condition variables

```c
int pthread_cond_init(pthread_cond_t *cond, pthread_condattr_t *attr)
```

Initializes a condition variable (`cond`) with some attributes (`attr`).
- attributes are usually NULL.
```c
int pthread_cond_signal(pthread_cond_t *cond)
```

Awakens one thread waiting on condition `cond`.
- if no threads are waiting on the condition, then it does nothing.
- key point: signals are not queued!

```c
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
```

Indivisibly unlocks `mutex` and waits for a signal on condition `cond`.
- When awakened, indivisibly locks `mutex`.

Advanced operations on condition variables

```c
int pthread_cond_broadcast(pthread_cond_t *cond)
```

Awakens *all* threads waiting on condition `cond`.
- if no threads are waiting on the condition, then it does nothing.

```c
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
                           struct timespec *abstime)
```

Waits for condition `cond` until the absolute wall clock time is `abstime`.
- Unlocks `mutex` on entry, locks `mutex` on awakening.
- Use of absolute time rather than relative time is strange.

Signaling and waiting on conditions

Basic pattern for signaling:

```c
Pthread_mutex_lock(&mutex);
Pthread_cond_signal(&cond);
Pthread_mutex_unlock(&mutex);
```

A mutex is always associated with a condition variable. This guarantees that the condition cannot be signaled (and thus ignored) in the interval between when the waiter locks the mutex and when it waits on the condition.

Basic pattern for waiting:

```c
Pthread_mutex_lock(&mutex);
Pthread_cond_wait(&cond, &mutex);
Pthread_mutex_unlock(&mutex);
```

Barrier synchronization

```c
#include <ics.h>

static pthread_mutex_t mutex;
static pthread_cond_t cond;
static int nthreads;
static int barriercnt = 0;

void barrier_init(int n)
{
    nthreads = n;
    Pthread_mutex_init(&mutex, NULL);
    Pthread_cond_init(&cond, NULL);
}

void barrier()
{
    Pthread_mutex_lock(&mutex);
    if (++barriercnt == nthreads) {
        barriercnt = 0;
        Pthread_cond_broadcast(&cond);
    } else
        Pthread_cond_wait(&cond, &mutex);
    Pthread_mutex_unlock(&mutex);
}
```

A call to barrier will not return until every other thread has also called barrier. Needed for tightly-coupled parallel applications that proceed in phases, e.g., physical simulations.
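The same barrier behavior can be sketched with Python's built-in `threading.Barrier` (a standalone illustration, not the slides' code): no worker proceeds to phase 2 until all of them have finished phase 1.

```python
import threading

N = 4
phase_done = []                 # records which threads finished phase 1
barrier = threading.Barrier(N)  # analogous to barrier_init(N) above

def worker(i):
    # Phase 1: do some work and record it (list.append is atomic in CPython).
    phase_done.append(i)
    barrier.wait()              # blocks until all N threads have arrived
    # Phase 2: every thread is now guaranteed to see all phase-1 results.
    assert len(phase_done) == N

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(phase_done))  # → [0, 1, 2, 3]
```

Internally `threading.Barrier` uses exactly the mutex + condition-variable + broadcast pattern shown in the C code above.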
timebomb.c: timeout waiting example

A program that explodes unless the user hits a key within 5 seconds.

```c
#include <ics.h>

#define TIMEOUT 5

/* function prototypes */
void *thread(void *vargp);
struct timespec *maketimeout(int secs);

/* condition variable and its associated mutex */
pthread_cond_t cond;
pthread_mutex_t mutex;

/* thread id */
pthread_t tid;
```

A routine for building a timeout structure for `pthread_cond_timedwait`:

```c
/*
 * maketimeout - builds a timeout object that times out
 * in secs seconds
 */
struct timespec *maketimeout(int secs)
{
    struct timeval now;
    struct timespec *tp =
        (struct timespec *)malloc(sizeof(struct timespec));

    gettimeofday(&now, NULL);
    tp->tv_sec = now.tv_sec + secs;
    tp->tv_nsec = now.tv_usec * 1000;
    return tp;
}
```

```c
int main()
{
    int i, rc;

    /* initialize the mutex and condition variable */
    Pthread_cond_init(&cond, NULL);
    Pthread_mutex_init(&mutex, NULL);

    /* start getchar thread and wait for it to timeout */
    Pthread_mutex_lock(&mutex);
    Pthread_create(&tid, NULL, thread, NULL);
    for (i = 0; i < TIMEOUT; i++) {
        printf("BEEP\n");
        rc = pthread_cond_timedwait(&cond, &mutex, maketimeout(1));
        if (rc != ETIMEDOUT) {
            printf("WHEW!\n");
            exit(0);
        }
    }
    printf("BOOM!\n");
    exit(0);
}
```

Thread routine for timebomb.c:

```c
/*
 * thread - executes getchar in a separate thread
 */
void *thread(void *vargp)
{
    (void)getchar();
    Pthread_mutex_lock(&mutex);
    Pthread_cond_signal(&cond);
    Pthread_mutex_unlock(&mutex);
    return NULL;
}
```

Threads summary

Threads provide another mechanism for writing concurrent programs.

Threads are growing in popularity:
- Somewhat cheaper than processes.
- Easy to share data between threads.

However, the ease of sharing has a cost:
- Easy to introduce subtle synchronization errors.

For more info:
- man pages (man -k pthreads)
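The timeout-waiting pattern from timebomb.c can be sketched in Python with `threading.Condition.wait(timeout=...)` (a standalone illustration, not part of the slides). `wait` returns False when the timeout expires without a signal ("BOOM!") and True when another thread signals in time ("WHEW!").

```python
import threading
import time

cond = threading.Condition()       # a condition variable with its own mutex

# Case 1: nobody signals -> wait() times out and returns False ("BOOM!").
with cond:
    timed_out = not cond.wait(timeout=0.1)

# Case 2: another thread signals before the timeout ("WHEW!").
def signaler():
    time.sleep(0.05)
    with cond:
        cond.notify()              # like Pthread_cond_signal(&cond)

t = threading.Thread(target=signaler)
with cond:                         # hold the lock so the signal can't be missed
    t.start()
    got_signal = cond.wait(timeout=5.0)
t.join()

print(timed_out, got_signal)  # → True True
```

Holding the lock while starting the signaler mirrors the C pattern: the signal cannot fire in the window between locking the mutex and calling wait.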
Using the MySQL Yum Repository

Abstract

This document provides some basic instructions for using the MySQL Yum Repository to install and upgrade MySQL. It is excerpted from the MySQL 8.0 Reference Manual.

For legal information, see the Legal Notices.

For help with using MySQL, please visit the MySQL Forums, where you can discuss your issues with other MySQL users.

Document generated on: 2022-09-02 (revision: 74063)

# Table of Contents

- Preface and Legal Notices
- 1 Installing MySQL on Linux Using the MySQL Yum Repository
- 2 Upgrading MySQL with the MySQL Yum Repository

Preface and Legal Notices

This document provides some basic instructions for using the MySQL Yum Repository to install and upgrade MySQL. It is excerpted from the MySQL 8.0 Reference Manual.

Licensing information—MySQL 8.0. This product may include third-party software, used under license. If you are using a Commercial release of MySQL 8.0, see the MySQL 8.0 Commercial Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Commercial release. If you are using a Community release of MySQL 8.0, see the MySQL 8.0 Community Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Community release.

Legal Notices

Copyright © 1997, 2022, Oracle and/or its affiliates.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws.
Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract. The terms governing the U.S. Government's use of Oracle cloud services are defined by the applicable contract for such services. No other rights are granted to the U.S. Government. 
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc, and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle. This documentation is NOT distributed under a GPL license. Use of this documentation is subject to the following terms: You may create a printed copy of this documentation solely for your own personal use. 
Conversion to other formats is allowed as long as the actual content is not altered or edited in any way. You shall not publish or distribute this documentation in any form or on any media, except if you distribute the documentation in a manner similar to how Oracle disseminates it (that is, electronically for download on a Web site with the software) or on a CD-ROM or similar medium, provided however that the documentation is disseminated together with the software on the same medium. Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Oracle. Oracle and/or its affiliates reserve any and all rights to this documentation not expressly granted above. **Documentation Accessibility** For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at https://www.oracle.com/corporate/accessibility/. **Access to Oracle Support for Accessibility** Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit https://www.oracle.com/corporate/accessibility/learning-support.html#support-tab. Chapter 1 Installing MySQL on Linux Using the MySQL Yum Repository The MySQL Yum repository for Oracle Linux, Red Hat Enterprise Linux, CentOS, and Fedora provides RPM packages for installing the MySQL server, client, MySQL Workbench, MySQL Utilities, MySQL Router, MySQL Shell, Connector/ODBC, Connector/Python and so on (not all packages are available for all the distributions; see Installing Additional MySQL Products and Components with Yum for details). Before You Start As a popular, open-source software, MySQL, in its original or re-packaged form, is widely installed on many systems from various sources, including different software download sites, software repositories, and so on. 
The following instructions assume that MySQL is not already installed on your system using a third-party-distributed RPM package; if that is not the case, follow the instructions given in Chapter 2, Upgrading MySQL with the MySQL Yum Repository or Replacing a Third-Party Distribution of MySQL Using the MySQL Yum Repository.

Steps for a Fresh Installation of MySQL

Follow the steps below to install the latest GA version of MySQL with the MySQL Yum repository:

Adding the MySQL Yum Repository

First, add the MySQL Yum repository to your system's repository list. This is a one-time operation, which can be performed by installing an RPM provided by MySQL. Follow these steps:

a. Go to the Download MySQL Yum Repository page (https://dev.mysql.com/downloads/repo/yum/) in the MySQL Developer Zone.

b. Select and download the release package for your platform.

c. Install the downloaded release package with the following command, replacing platform-and-version-specific-package-name with the name of the downloaded RPM package:

```
$> sudo yum install platform-and-version-specific-package-name.rpm
```

For an EL6-based system, the command is in the form of:

`$> sudo yum install mysql80-community-release-el6-{version-number}.noarch.rpm`

For an EL7-based system:

`$> sudo yum install mysql80-community-release-el7-{version-number}.noarch.rpm`

For an EL8-based system:

`$> sudo yum install mysql80-community-release-el8-{version-number}.noarch.rpm`

For an EL9-based system:

`$> sudo yum install mysql80-community-release-el9-{version-number}.noarch.rpm`

For Fedora 35:

`$> sudo dnf install mysql80-community-release-fc35-{version-number}.noarch.rpm`

For Fedora 34:

`$> sudo dnf install mysql80-community-release-fc34-{version-number}.noarch.rpm`

Selecting a Release Series

When using the MySQL Yum repository, the latest GA series (currently MySQL 8.0) is selected for installation by default. If this is what you want, you can skip to the next step, Installing MySQL.

Within the MySQL Yum repository, different release series of the MySQL Community Server are hosted in different subrepositories.
The subrepository for the latest GA series (currently MySQL 8.0) is enabled by default, and the subrepositories for all other series (for example, the MySQL 5.7 series) are disabled by default. Use this command to see all the subrepositories in the MySQL Yum repository, and see which of them are enabled or disabled (for dnf-enabled systems, replace yum in the command with dnf):

```bash
$> yum repolist all | grep mysql
```

To install the latest release from the latest GA series, no configuration is needed. To install the latest release from a specific series other than the latest GA series, disable the subrepository for the latest GA series and enable the subrepository for the specific series before running the installation command. If your platform supports `yum-config-manager`, you can do that by issuing these commands, which disable the subrepository for the 5.7 series and enable the one for the 8.0 series:

```bash
$> sudo yum-config-manager --disable mysql57-community
$> sudo yum-config-manager --enable mysql80-community
```

For dnf-enabled platforms:

```bash
$> sudo dnf config-manager --disable mysql57-community
$> sudo dnf config-manager --enable mysql80-community
```

Besides using `yum-config-manager` or the `dnf config-manager` command, you can also select a release series by manually editing the `/etc/yum.repos.d/mysql-community.repo` file. This is a typical entry for a release series' subrepository in the file:

```ini
[mysql57-community]
name=MySQL 5.7 Community Server
baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/6/$basearch/
enabled=1
gpgcheck=1
```

Find the entry for the subrepository you want to configure, and edit the `enabled` option. Specify `enabled=0` to disable a subrepository, or `enabled=1` to enable a subrepository.
For example, to install MySQL 8.0, make sure you have `enabled=0` for the above subrepository entry for MySQL 5.7, and have `enabled=1` for the entry for the 8.0 series:

```
# Enable to use MySQL 8.0
[mysql80-community]
name=MySQL 8.0 Community Server
baseurl=http://repo.mysql.com/yum/mysql-8.0-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql-2022 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
```

You should enable the subrepository for only one release series at any time. When subrepositories for more than one release series are enabled, Yum uses the latest series.

Verify that the correct subrepositories have been enabled and disabled by running the following command and checking its output (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```
$> yum repolist enabled | grep mysql
```

### Disabling the Default MySQL Module (EL8 systems only)

EL8-based systems such as RHEL8 and Oracle Linux 8 include a MySQL module that is enabled by default. Unless this module is disabled, it masks packages provided by MySQL repositories. To disable the included module and make the MySQL repository packages visible, use the following command (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```
$> sudo yum module disable mysql
```

### Installing MySQL

Install MySQL with the following command (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```
$> sudo yum install mysql-community-server
```

This installs the package for MySQL server (`mysql-community-server`) and also packages for the components required to run the server, including packages for the client (`mysql-community-client`), the common error messages and character sets for client and server (`mysql-community-common`), and the shared client libraries (`mysql-community-libs`).
### Starting the MySQL Server

Start the MySQL server with the following command:

```
$> systemctl start mysqld
```

You can check the status of the MySQL server with the following command:

```
$> systemctl status mysqld
```

If the operating system is systemd enabled, standard `systemctl` (or alternatively, `service` with the arguments reversed) commands such as `stop`, `start`, `status`, and `restart` should be used to manage the MySQL server service. The `mysqld` service is enabled by default, and it starts at system reboot. See [Managing MySQL Server with systemd](#) for additional information.

At the initial start up of the server, the following happens, given that the data directory of the server is empty:

- The server is initialized.
- SSL certificate and key files are generated in the data directory.
- `validate_password` is installed and enabled.
- A superuser account `'root'@'localhost'` is created. A password for the superuser is set and stored in the error log file. To reveal it, use the following command:

```bash
$> sudo grep 'temporary password' /var/log/mysqld.log
```

Change the root password as soon as possible by logging in with the generated, temporary password and setting a custom password for the superuser account:

```bash
$> mysql -uroot -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass4!';
```

**Note**

`validate_password` is installed by default. The default password policy implemented by `validate_password` requires that passwords contain at least one uppercase letter, one lowercase letter, one digit, and one special character, and that the total password length is at least 8 characters.

For more information on the postinstallation procedures, see [Postinstallation Setup and Testing](#).

**Note**

*Compatibility Information for EL7-based platforms:* The following RPM packages from the native software repositories of the platforms are incompatible with the package from the MySQL Yum repository that installs the MySQL server.
Once you have installed MySQL using the MySQL Yum repository, you cannot install these packages (and vice versa).

- akonadi-mysql

### Installing Additional MySQL Products and Components with Yum

You can use Yum to install and manage individual components of MySQL. Some of these components are hosted in sub-repositories of the MySQL Yum repository: for example, the MySQL Connectors are to be found in the MySQL Connectors Community sub-repository, and MySQL Workbench in the MySQL Tools Community sub-repository. You can use the following command to list the packages for all the MySQL components available for your platform from the MySQL Yum repository (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```bash
$> sudo yum --disablerepo=* --enablerepo=mysql*-community* list available
```

Install any packages of your choice with the following command, replacing `package-name` with the name of the package (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```bash
$> sudo yum install package-name
```

For example, to install MySQL Workbench on Fedora:

```
$> sudo dnf install mysql-workbench-community
```

To install the shared client libraries (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```
$> sudo yum install mysql-community-libs
```

### ARM Support

ARM 64-bit (aarch64) is supported on Oracle Linux 7 and requires the Oracle Linux 7 Software Collections Repository (`ol7_software_collections`). For example, to install the server:

```
$> yum-config-manager --enable ol7_software_collections
$> yum install mysql-community-server
```

#### Note

ARM 64-bit (aarch64) is supported on Oracle Linux 7 as of MySQL 8.0.12.

#### Known Limitation

The 8.0.12 release requires you to adjust the `libstdc++7` path by executing `ln -s /opt/oracle/oracle-armtoolset-1/root/usr/lib64 /usr/lib64/gcc7` after executing the `yum install` step.
### Updating MySQL with Yum

Besides installation, you can also perform updates for MySQL products and components using the MySQL Yum repository. See Chapter 2, *Upgrading MySQL with the MySQL Yum Repository* for details.

Chapter 2 Upgrading MySQL with the MySQL Yum Repository

For supported Yum-based platforms (see Chapter 1, Installing MySQL on Linux Using the MySQL Yum Repository, for a list), you can perform an in-place upgrade for MySQL (that is, replacing the old version and then running the new version using the old data files) with the MySQL Yum repository.

Notes

• Before performing any update to MySQL, follow carefully the instructions in Upgrading MySQL. Among other instructions discussed there, it is especially important to back up your database before the update.

• The following instructions assume you have installed MySQL with the MySQL Yum repository or with an RPM package directly downloaded from MySQL Developer Zone's MySQL Download page; if that is not the case, follow the instructions in Replacing a Third-Party Distribution of MySQL Using the MySQL Yum Repository.

Selecting a Target Series

By default, the MySQL Yum repository updates MySQL to the latest version in the release series you have chosen during installation (see Selecting a Release Series for details), which means, for example, a 5.7.x installation is not updated to a 8.0.x release automatically. To update to another release series, you must first disable the subrepository for the series that has been selected (by default, or by yourself) and enable the subrepository for your target series. To do that, see the general instructions given in Selecting a Release Series. For upgrading from MySQL 5.7 to 8.0, perform the reverse of the steps illustrated in Selecting a Release Series, disabling the subrepository for the MySQL 5.7 series and enabling that for the MySQL 8.0 series.
As a general rule, to upgrade from one release series to another, go to the next series rather than skipping a series. For example, if you are currently running MySQL 5.6 and wish to upgrade to 8.0, upgrade to MySQL 5.7 first before upgrading to 8.0.

Important

For important information about upgrading from MySQL 5.7 to 8.0, see Upgrading from MySQL 5.7 to 8.0.

Upgrading MySQL

Upgrade MySQL and its components with the following command, for platforms that are not dnf-enabled:

```bash
sudo yum update mysql-server
```

For platforms that are dnf-enabled:

```bash
sudo dnf upgrade mysql-server
```

Alternatively, you can update MySQL by telling Yum to update everything on your system, which might take considerably more time. For platforms that are not dnf-enabled:

```bash
sudo yum update
```

For platforms that are dnf-enabled:

```bash
sudo dnf upgrade
```

**Restarting MySQL**

The MySQL server always restarts after an update by Yum. Prior to MySQL 8.0.16, run `mysql_upgrade` after the server restarts to check and possibly resolve any incompatibilities between the old data and the upgraded software. `mysql_upgrade` also performs other functions; for details, see `mysql_upgrade — Check and Upgrade MySQL Tables`. As of MySQL 8.0.16, this step is not required, as the server performs all tasks previously handled by `mysql_upgrade`.

You can also update only a specific component. Use the following command to list all the installed packages for the MySQL components (for dnf-enabled systems, replace `yum` in the command with `dnf`):

```
sudo yum list installed | grep "^mysql"
```

After identifying the package name of the component of your choice, update the package with the following command, replacing `package-name` with the name of the package.
For platforms that are not dnf-enabled: ``` sudo yum update package-name ``` For dnf-enabled platforms: ``` sudo dnf upgrade package-name ``` **Upgrading the Shared Client Libraries** After updating MySQL using the Yum repository, applications compiled with older versions of the shared client libraries should continue to work. *If you recompile applications and dynamically link them with the updated libraries:* As typical with new versions of shared libraries where there are differences or additions in symbol versioning between the newer and older libraries (for example, between the newer, standard 8.0 shared client libraries and some older—prior or variant—versions of the shared libraries shipped natively by the Linux distributions’ software repositories, or from some other sources), any applications compiled using the updated, newer shared libraries require those updated libraries on systems where the applications are deployed. As expected, if those libraries are not in place, the applications requiring the shared libraries fail. For this reason, be sure to deploy the packages for the shared libraries from MySQL on those systems. To do this, add the MySQL Yum repository to the systems (see Adding the MySQL Yum Repository) and install the latest shared libraries using the instructions given in Installing Additional MySQL Products and Components with Yum.
Abstract: The primary motivation for this project is to formulate an NLP problem as a generalized machine learning problem and solve it using various machine learning techniques. There are various challenges involved in using language elements such as text as the features for a machine learning problem. Various language models have emerged, such as word2vec, BERT, etc. We attempt to solve a common problem in which the primary features are the raw text elements involved, use various language models to featurize them for a machine learning formulation, and solve the problem using various generalized ML techniques. We would also like to establish benchmarks in terms of data size and algorithmic choice for identifying insincere/sarcastic language in this specific context. Data availability is an issue in these problem spaces. Language-based phenomena (insincerity/hatefulness/tone) are difficult to detect, and we believe there is value in understanding how much data is needed to train models that can effectively capture them. In this project we have attempted to solve the problem of classifying insincere questions on the online Q&A platform Quora by using word embeddings to featurize the text elements. We observed that these embeddings provide a good set of features that capture trends in the data with respect to the binary classes we want to classify, and help in building predictive models with good ROC-AUC performance even with an unbalanced class distribution.

Method: We have a classification problem on our hands with an unbalanced dataset: the positive examples account for little more than 6% of the total examples. We intend to featurize the text elements using word embeddings, possibly augment them with other features such as named entities, and build a classification model. The output labels are manually tagged (by experts), which had to be kept in mind.
Data Preparation: The data available for this task is the text element of the question and a flag indicating if it is an insincere question. We need to featurize the text using pre-trained word embeddings. We used 2 different approaches to vectorize the text using embeddings.

The first one is based on embeddings obtained using the word2vec method. We tried 4 different word embeddings for this class, namely GloVe, GoogleNews word vectors, Wikinews word vectors, and Paragram word vectors. All these word embeddings are word2vec based and encode word tokens using a vector of 300 elements; they vary in the text corpus that they are trained on. For every question (row) we generated a feature mapping that averages the embedding of each token in the text. If a token is not available in the corpus we simply ignore it in computing the average vector. After the mapping process we get a feature space of 300 floating-point features.

A second approach to vectorizing the text is based on the BERT method. The primary difference of BERT from word2vec is that it provides embeddings for a word that are much more dynamically informed by the words around it. This lets us compare the performance of the BERT class of word embeddings against the word2vec class and see if BERT embeddings provide any improvement over word2vec-based embeddings. There are many possible combinations one could use to vectorize based on BERT; for this case we chose to use the average of the values in the last 4 layers of the network. [It has to be mentioned that extracting BERT embeddings is a computationally expensive process, and hence we were not able to extract BERT embeddings for the entire dataset we had. The models based on BERT were relatively data deficient and are built and tested from 100k rows of translated data.] BERT gives us a vector with 768 features.
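The averaging scheme described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the toy embedding table stands in for a pre-trained model such as GloVe (which maps millions of tokens to 300-dimensional vectors), and the names are assumptions.

```python
import numpy as np

# Toy embedding table standing in for a pre-trained word2vec-style model.
EMBED_DIM = 4
embeddings = {
    "why": np.array([0.1, 0.2, 0.3, 0.4]),
    "do": np.array([0.0, 0.1, 0.0, 0.1]),
    "people": np.array([0.5, 0.5, 0.5, 0.5]),
}

def featurize(question: str) -> np.ndarray:
    """Average the embeddings of known tokens; out-of-vocabulary tokens
    are skipped, mirroring the averaging scheme described above."""
    vectors = [embeddings[t] for t in question.lower().split() if t in embeddings]
    if not vectors:               # no known token: fall back to a zero vector
        return np.zeros(EMBED_DIM)
    return np.mean(vectors, axis=0)

features = featurize("Why do people ask")   # "ask" is out-of-vocabulary
```

Applied row by row, this turns each question into a fixed-length numeric vector regardless of its token count.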
These embedded features allow us to run various modeling approaches to perform the classification task at hand.

Preliminary analysis: Since we have a feature space with a relatively large number of columns (300 or 768), we look at the cross correlation between the features to ensure that there aren't many features that are highly correlated with each other. The cross correlation heat map for the features obtained using the GloVe word embeddings shows that features mostly have a small degree of positive correlation with each other. Negatively correlated feature combinations are very minimal, and the positive correlation observed between feature pairs is very small. The cross correlation between features is slightly lower on the other word2vec embeddings. For features obtained using BERT, a similar behavior was observed: the cross correlation between features was mostly near zero in that case.

K-means clustering: Given that we have translated our text elements into numerical features, we can cluster them using k-means clustering and see if we can get some clusters that isolate the positive examples, which in our case are the insincere questions. Clustering was performed using the k-means approach on the word2vec based features and the BERT based features. We can see that the data that results from vectorization using the word embeddings does form clusters that, to an extent, isolate the binary classes in the problem. In the tables below we have the results of the k-means clustering performed using features obtained from GloVe word embeddings [left] and BERT embeddings [right]. We show the number of clusters, the positive rate [precision] (fraction of positive examples in the cluster), and the total positive percent [recall] (fraction of all positive examples in the problem) for the best cluster in terms of positive rate.
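The per-cluster metrics just defined can be computed with a sketch like the following. The synthetic blobs and all names here are illustrative assumptions; the real data has 300 (word2vec) or 768 (BERT) features and roughly 6% positives.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the embedded question vectors: a large "sincere"
# blob and a smaller, offset "insincere" blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)),   # sincere questions
               rng.normal(3.0, 1.0, (20, 5))])   # insincere questions
y = np.array([0] * 200 + [1] * 20)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_

# pos_rate [precision]: fraction of a cluster's members that are positive.
# total_pos_percent [recall]: fraction of all positives captured by the cluster.
stats = {}
for c in range(4):
    in_c = labels == c
    stats[c] = (y[in_c].mean(), y[in_c].sum() / y.sum())

best = max(stats.values(), key=lambda s: s[0])   # best cluster by pos_rate
```

The `best` pair corresponds to one row of the tables below; assigning each example the `pos_rate` of its nearest cluster center gives the simple prediction model mentioned in the text.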
We can see that as the number of clusters increases, a cluster develops which includes a significant fraction of the positive examples. For example, for GloVe embeddings with 15-cluster k-means, there is a cluster in which almost 40% of the instances correspond to positive examples, incorporating 38% of all the positive examples. Thus we can see that there is some trend in the features obtained using the word embeddings. A similar trend was observed on the features developed using the other word2vec word embeddings as well. For features obtained using the BERT embeddings we see a pretty similar trend; the results for the clustering performed on BERT embedded features are shown in the second table. The clusters obtained can themselves be used as a simple prediction model, where the probability of belonging to the positive class is assigned as the positive_rate corresponding to the cluster with the nearest cluster center for each example.

<table>
<thead>
<tr> <th>Num clusters</th> <th>pos_rate (precision)</th> <th>total_pos_percent (recall)</th> </tr>
</thead>
<tbody>
<tr> <td>4</td> <td>0.142857</td> <td>0.000054</td> </tr>
<tr> <td>6</td> <td>0.205569</td> <td>0.529099</td> </tr>
<tr> <td>8</td> <td>0.218020</td> <td>0.483786</td> </tr>
<tr> <td>10</td> <td>0.249464</td> <td>0.461881</td> </tr>
<tr> <td>12</td> <td>0.294310</td> <td>0.448513</td> </tr>
<tr> <td>15</td> <td>0.396516</td> <td>0.382530</td> </tr>
</tbody>
</table>

Clustering results for GloVe embedded features

<table>
<thead>
<tr> <th>Num clusters</th> <th>pos_rate (precision)</th> <th>total_pos_percent (recall)</th> </tr>
</thead>
<tbody>
<tr> <td>4</td> <td>0.163753</td> <td>0.734255</td> </tr>
<tr> <td>8</td> <td>0.227096</td> <td>0.571554</td> </tr>
<tr> <td>12</td> <td>0.272639</td> <td>0.541987</td> </tr>
<tr> <td>15</td> <td>0.319149</td> <td>0.459237</td> </tr>
<tr> <td>20</td> <td>0.375439</td> <td>0.374388</td> </tr>
<tr> <td>25</td> <td>0.399398</td> <td>0.371239</td> </tr>
</tbody>
</table>

Clustering results for BERT embedded features

Principal Component Analysis: Principal component analysis was applied to understand the variance explained by the data and its spatial orientation. On the GloVe embedded dataset, the first 100 principal components (out of 300) explained around 81.8% of the total variance, and for the BERT embedded features the first 100 principal components explained around 71.8% of the total variance. In our modeling approaches, the effect of applying feature reduction with PCA, by transforming the original feature space into the space spanned by the top 'k' principal components, was studied as well. [Plot omitted: total variance explained by the top 'k' principal components for the GloVe and BERT embedded features.]

Modeling results: We began by building a naive Bayes classifier to get a baseline estimate and followed it up with other predictive algorithms. The naive Bayes method gave us a baseline performance of ROC-AUC = 0.8646. Predictive modeling using logistic regression, SVMs, neural networks, and random forests was run, and the results obtained are shown below. Since the dataset is heavily unbalanced (positive examples account for 6% of the total examples), accuracy might not be a good metric; the AUC value of the ROC curve and the precision-recall values should give us a better sense of the model fit. Below we have tabulated results showing the ROC-AUC and precision-recall values for various prediction cutoffs (in the case of logistic regression and neural network models) and for various regularization parameter values for SVMs. A 0.80 to 0.20 ratio of train to test data split was used.

Logistic Regression: The logistic regression algorithm was applied to the datasets obtained using the word2vec and BERT embeddings and the PCA-reduced versions of the same. The ROC-AUC performance of these data variants is shown below.
We can see that the features obtained using BERT embeddings have a better AUC score, but this comes at the cost of a slightly increased variance. We can also observe that the PCA transformation of the data to a lower-dimensional space did not help in improving the model performance (we show the results for PCA with feature transformation using the top 100 principal components). We can further look at the precision-recall scores for a varying prediction threshold on the probability of the positive class to get a sense of the model performance. Below is the precision/recall performance for the logistic regression model with L2 regularization. We can also observe that the logistic regression model on the BERT dataset gives a better precision-recall performance in comparison with the model built using the GloVe embeddings. (Values are precision/recall for various prediction probability cutoffs.)

<table>
<thead>
<tr> <th>Embedding</th> <th>0.1</th> <th>0.2</th> <th>0.3</th> <th>0.4</th> <th>0.5</th> <th>0.6</th> <th>0.7</th> <th>0.8</th> <th>0.9</th> </tr>
</thead>
<tbody>
<tr> <td>GloVe</td> <td>0.33/0.80</td> <td>0.45/0.60</td> <td>0.52/0.36</td> <td>0.61/0.28</td> <td>0.62/0.20</td> <td>0.65/0.15</td> <td>0.67/0.09</td> <td>0.72/0.05</td> <td></td> </tr>
<tr> <td>BERT</td> <td>0.37/0.81</td> <td>0.49/0.70</td> <td>0.55/0.60</td> <td>0.59/0.50</td> <td>0.63/0.43</td> <td>0.67/0.35</td> <td>0.70/0.25</td> <td>0.74/0.16</td> <td>0.77/0.08</td> </tr>
</tbody>
</table>

Below we have the ROC plots for the logistic regression models built using the GloVe (left) and BERT (right) feature sets on both the train and test datasets. We can see the slight variance associated with the BERT feature set in the small gap between the curves for the train and test datasets. NOTE: In the subsequent discussions we just mention the ROC-AUC score and do not show the ROC plot for brevity.
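The evaluation pipeline used above (ROC-AUC plus precision/recall at a sweep of probability cutoffs) can be sketched as follows. Synthetic data generated with scikit-learn stands in for the embedded features; all names and parameter values are illustrative assumptions, not the project's actual code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic unbalanced data (~6% positives), standing in for the
# 300-/768-dimensional embedded features.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.94], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# L2-regularized logistic regression, matching the model above.
clf = LogisticRegression(penalty="l2", max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, proba)

# Precision/recall for a sweep of prediction-probability cutoffs,
# mirroring the tabulated results.
rows = {}
for cutoff in (0.1, 0.3, 0.5, 0.7):
    pred = (proba >= cutoff).astype(int)
    rows[cutoff] = (precision_score(y_te, pred, zero_division=0),
                    recall_score(y_te, pred))
```

Raising the cutoff trades recall for precision, which is exactly the trend visible across the columns of the table above.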
Support Vector Machines: The SVM algorithm had very comparable performance when applied to the dataset variants. Given that we do not get prediction probabilities directly from an SVM model, we can't directly compute an ROC-AUC score to compare with the logistic regression model. However, we can compute prediction probabilities using the Platt scaling approach; the ROC-AUC computed from those values was lower than the one obtained with the logistic regression model. Below we have the results from the SVM with RBF kernel on the GloVe embedded dataset for varying regularization parameter values. As we increase the regularization parameter, the recall rate increases. Computationally, the SVMs took much longer to complete than the other algorithms.

<table>
<thead>
<tr> <th>Embedding</th> <th>C = 10</th> <th>C = 100</th> <th>C = 200</th> <th>C = 500</th> <th>C = 1000</th> </tr>
</thead>
<tbody>
<tr> <td>GloVe</td> <td>0.69/0.09</td> <td>0.70/0.22</td> <td>0.71/0.25</td> <td>0.72/0.29</td> <td>0.72/0.32</td> </tr>
</tbody>
</table>

Precision and recall scores for SVM (RBF kernel) by regularization parameter

Neural Networks: Neural network models were tried on the generated datasets, and their performance was compared between the word2vec embedded feature sets and the BERT embedded feature sets. It can be observed that the BERT based models achieve a better bias performance (lower bias) but the variance of the models increases; we need significant regularization to reduce the variance of the resulting model. The models built using the word2vec embedded features (GloVe, GoogleNews, etc.) show very low variance (the ROC-AUC values on the train and test datasets are close to each other) even without heavy regularization. Below is the precision/recall performance for neural network models with one hidden layer, using the ReLU activation function for the hidden layer and a sigmoid activation for the output layer.
For the GloVe based model a regularization value of 0.0001 is used, and for the BERT model a regularization value of 0.001. The training is run for 100 epochs, as the models appeared to converge before that. Below we can see the precision-recall results for the models. For the GloVe feature set we show a model with 64 hidden nodes, and for the BERT feature set we show one model with 64 hidden nodes and another with 300 hidden nodes. Different activation functions were tried for the hidden layers, namely Tanh, ReLU and leaky ReLU; their performance wasn't much different. Model variants with an extra hidden layer were tried, and the performance was much the same (based on ROC-AUC). Dropout helped with the 2-layer networks, and by trial and error it was found that a dropout factor of 0.2 works best by a small degree. For leaky ReLU, a leakage factor of 0.01 works best. Regularization didn't affect the model performance on the GloVe feature sets to a significant extent. But for the BERT feature set, regularization is very critical: it was possible to get an ROC-AUC score close to 1.0 on the train set but with a large model variance, so stricter regularization was needed to bring the variance down.
The table above shows the variation of the train and test ROC-AUC scores for varying regularization parameter values for a single-layer, 300-node network on the BERT feature set. We can see that with a regularization value of 0.00001 the ROC-AUC on the train dataset is almost 1.0, but the ROC-AUC score on the test set drops to 0.879.

Random Forests (Ensembles): The random forests algorithm was applied to the task at hand and its performance was studied. Compared to the logistic regression and neural network models, the performance of the random forests model was rather poor. We used Gini impurity as the split criterion. In terms of ROC-AUC performance, we were able to get to a value close to 0.9 with 50 trees in the model. But interestingly enough, we observed that the ROC-AUC score on the test dataset is higher than the ROC-AUC on the training data, which points to the model underfitting the training data. The gap between the training and test ROC-AUC somewhat reduces as we increase the number of trees, but the trend described above still points to this class of modeling algorithm underfitting the data at hand. Below we have the ROC-AUC scores for the training and test datasets by the number of trees in the ensemble.

The random forests algorithm typically addresses the issue of variance by building multiple deep decision trees that are uncorrelated with each other (one of the random elements of the algorithm), but it is bound by the bias of the individual trees. It appears that the individual trees are limited in the bias performance they can achieve, and hence this class of modeling algorithm falls short in comparison with the others discussed before. We have 300 numerical features in this problem, and it appears that such a feature space didn't suit the random forests algorithm well enough. It could be that multiple features jointly contribute towards the successful prediction of an example.
With the randomness element of random forests, it could well be that each of the independent trees excludes some of the significant features contributing to the successful prediction of an example, hurting the bias performance of the model. Or it could simply be that a multi-dimensional linear hyperplane is a good separator of the classes, and an algorithm like random forests, which recursively partitions the feature space, is not a good candidate here. The random forests were tried only on the word2vec based features (GloVe, GoogleNews, WikiNews); since they struggled on a 300-feature numerical feature set, we didn't try them on the 768-feature numerical feature set from BERT.

Modeling results conclusion: Looking at the results above, we can see that, between the neural network and logistic regression models, the neural network model is better than logistic regression by a small degree. We can also see the possibility of improving upon the neural network models on BERT with more data, since we observe a good bias performance but a poor return on the variance. With neural network models we also have the flexibility to tune the model towards higher precision or higher recall, or settle somewhere in a middle ground. The SVMs, which do not give a probabilistic output (though we can get one using Platt scaling), give good precision, but the recall metric is not encouraging. Another downside of SVMs is the run time: as we increase the regularization parameter, the run time increases a lot. The run time is fastest for logistic regression and relatively low for the neural networks (100 epochs in our case) in comparison with the SVMs. We tried the RBF kernel for the SVM as the clustering results showed trends that made it a good choice.
A linear kernel would be very similar to the logistic regression model, and the computational time involved made us hold off on using any sort of polynomial kernel. The ensemble model based on random forests didn't perform well enough in comparison with logistic regression and the neural networks. We could also observe that applying PCA-based feature reduction didn't help improve the model performance; in fact, the reduced models underperformed the equivalent models trained on the original dataset.

Analysis of Data Size on model performance: Another experiment we were interested in running was to study the effect of training data size on model fit. Sometimes in problem spaces like this, data availability might be an issue, or it might take time to collect more data, so it would be interesting to see if model performance increases with more data. We ran over 300 models, varying the amount of training data, the algorithm, and the embedding. For algorithms we examined deep learning, gradient boosted trees, logistic regression, and a naïve Bayes classifier. We used incrementing blocks of 5600 records, for 30 models per embedding per algorithm. These were evaluated on 13118 held-out records (1% of the total available data). We found neither embedding, data size, nor algorithm choice to be a statistically significant predictor of model accuracy compared to a baseline naïve Bayes, GloVe-embedded model; the same held for the estimated area under the receiver operating characteristic curve. The Chi-square analysis of variance results are shown here. The results were based only on the word2vec based features; we didn't run a similar study on the BERT based features, as we didn't translate the entire raw dataset into BERT embedded features due to computational constraints.
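The incrementing-block experiment can be sketched as follows. Everything here is a synthetic stand-in: the report used 5600-record blocks, 13118 held-out records, and several algorithms, while this sketch uses smaller blocks and a single logistic regression model to keep it self-contained.

```python
# Train on incrementing blocks of records and score each model on one
# fixed held-out set, tracing ROC-AUC as a function of training size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=20000, n_features=300, n_informative=40,
                           weights=[0.94, 0.06], random_state=0)
X_hold, y_hold = X[-2000:], y[-2000:]   # fixed held-out evaluation set
X_pool, y_pool = X[:-2000], y[:-2000]   # pool drawn from in incrementing blocks

block = 2000                            # illustrative stand-in for 5600-record blocks
for n in range(block, len(X_pool) + 1, block):
    clf = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
    auc = roc_auc_score(y_hold, clf.predict_proba(X_hold)[:, 1])
    print(f"train size={n:6d}  held-out ROC-AUC={auc:.4f}")
```

Plotting held-out AUC against training size gives a learning curve; a flat curve is consistent with the report's finding that data size was not a significant predictor of model performance.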
Other Attempts:
- We attempted an LSTM approach where, instead of an aggregated embedding for every example, the embeddings of each token would be used as a sequence and run through an LSTM, so we could study the performance of the resulting model. More data processing and engineering work was needed to do that from where we were at that point. It would be another interesting thing to pursue, to see how memory and related dependencies affect model performance.

Future Work:
- Get BERT embeddings on the entire dataset and study how that affects model performance. We saw the BERT models achieving a good bias performance but falling behind on variance; maybe with more training data the variance issue could be addressed a bit. We can also explore different ways of extracting features from a BERT network. In the work we have done, the average of the last 4 layers of the BERT network was used to compute the embedding for an example; a different extraction method may be better suited to this use case. It would be interesting to study the performance of the modeling approaches under various BERT feature extraction procedures.
- Pursue LSTM and other recurrent neural network approaches. Studying how sequence dependencies influence the predictive ability of the models for this use case could be interesting.
- Combine other features outside of the word embeddings and see if they improve the predictive ability in aggregate. For example, entity recognition and features extracted from it could augment the features from the word embeddings.

Work Division:
- Michael Lanier – Baseline Naïve Bayes, logistic regression, random forests, GBMs, analysis of data size and embeddings on model fit, LSTM.
- Sibi Shanmugaraj – Data preparation using embeddings, exploratory analysis, clustering, logistic regression, SVM, neural networks and PCA.
- Nagarjuna Rao Chakka – Attempted entity recognition for additional features to augment the word embeddings.
Program Verification: Lecture 11 José Meseguer Computer Science Department University of Illinois at Urbana-Champaign

Construction of the Initial Algebra $\mathcal{T}_{\Sigma/E}$

$\mathcal{T}_\Sigma$ is initial in the class $\text{Alg}_\Sigma$ of all $\Sigma$-algebras. To give an initial algebra semantics to Maude functional modules of the form $\text{fmod}(\Sigma, E)$ we need an initial algebra in the class $\text{Alg}_{(\Sigma, E)}$ of all $(\Sigma, E)$-algebras, with $\Sigma$ sensible, kind complete, and with nonempty sorts. We shall construct such an algebra, denoted $\mathcal{T}_{\Sigma/E}$, and show that it is indeed initial in $\text{Alg}_{(\Sigma, E)}$, i.e., for any $(\Sigma, E)$-algebra $\mathcal{A}$ there is a unique $\Sigma$-homomorphism $\mathcal{T}_{\Sigma/E} \rightarrow \mathcal{A}$. If the equations $E$ are sort-decreasing, ground confluent and operationally terminating, we will show that there is an isomorphism $\mathcal{T}_{\Sigma/E} \cong \mathcal{C}_{\Sigma/E}$, a very intuitive semantics.

We construct $\mathcal{T}_{\Sigma/E}$ out of the provability relation $(\Sigma, E) \vdash t = t'$; that is, out of the relation $t =_E t'$. But, by definition, $t =_E t' \iff (\Sigma, \overrightarrow{E} \cup \overleftarrow{E}) \vdash t \rightarrow^* t'$. Therefore, $=_E$, besides being reflexive and transitive, is symmetric, and therefore is an equivalence relation on terms. But since, if $t =_E t'$, there is a connected component $[s]$ such that $t, t' \in T_{\Sigma,[s]}$, in particular $=_{E}$ is also an equivalence relation on $T_{\Sigma,[s]}$. Therefore, we have a quotient set $T_{\Sigma/E,[s]} = T_{\Sigma,[s]}/=_{E}$. We can then define the $S$-indexed family of sets $T_{\Sigma/E} = \{T_{\Sigma/E,s}\}_{s \in S}$, where, by definition,
$$T_{\Sigma/E,s} = \{[t] \in T_{\Sigma/E,[s]} \mid (\exists t')\; t' \in [t] \land t' \in T_{\Sigma,s}\},$$
where $[t]$, or $[t]_E$, abbreviates $[t]_{=_E}$.
Construction of $\mathcal{T}_{\Sigma/E}$ (III)

To make $T_{\Sigma/E}$ into a $\Sigma$-algebra $\mathcal{T}_{\Sigma/E} = (T_{\Sigma/E}, \_{}_{\mathcal{T}_{\Sigma/E}})$, interpret a constant $a : nil \rightarrow s$ in $\Sigma$ by its equivalence class $[a]$. Similarly, given $f : s_1 \ldots s_n \rightarrow s$ in $\Sigma$, and given $[t_i] \in T_{\Sigma/E,s_i}$, $1 \leq i \leq n$, define
$$f^{s_1 \ldots s_n,s}_{\mathcal{T}_{\Sigma/E}}([t_1], \ldots, [t_n]) = [f(t'_1, \ldots, t'_n)],$$
where $t'_i \in [t_i] \land t'_i \in T_{\Sigma,s_i}$, $1 \leq i \leq n$. Checking that the above definition depends neither on: (1) the choice of the $t'_i \in [t_i]$, nor (2) the choice of the subsort-overloaded operator $f : s_1 \ldots s_n \rightarrow s$ in $\Sigma$, so that it is well-defined and indeed defines an order-sorted $\Sigma$-algebra, is left as an easy exercise.

**Initiality Theorem for** \( \mathcal{T}_{\Sigma/E} \)

**Theorem:** For \((\Sigma, E)\) with \(\Sigma\) sensible, kind complete, and with nonempty sorts, \(\mathcal{T}_{\Sigma/E} \models E\). Furthermore, \(\mathcal{T}_{\Sigma/E}\) is initial in the class \(\text{Alg}_{(\Sigma,E)}\). That is, for any \(\mathcal{A} \in \text{Alg}_{(\Sigma,E)}\) there is a unique \(\Sigma\)-homomorphism \(\_^E_{\mathcal{A}} : \mathcal{T}_{\Sigma/E} \rightarrow \mathcal{A}\).

**Proof:** We first need to show that \(\mathcal{T}_{\Sigma/E} \models E\), i.e., that \(\mathcal{T}_{\Sigma/E} \models t = t'\) for each \((t = t') \in E\). That is, for each assignment \(a : X \rightarrow \mathcal{T}_{\Sigma/E}\) we must show that \(t\,a_{\mathcal{T}_{\Sigma/E}} = t'\,a_{\mathcal{T}_{\Sigma/E}}\). But the unique \(\Sigma\)-homomorphism \((\_)_{\mathcal{T}_{\Sigma/E}} : \mathcal{T}_\Sigma \rightarrow \mathcal{T}_{\Sigma/E}\) guaranteed by the initiality of \(\mathcal{T}_\Sigma\) is just the passage to equivalence classes \(t \mapsto [t]\) and is therefore **surjective**. Therefore, we can always choose a substitution $\theta : X \rightarrow T_\Sigma$ such that $a = \theta;(\_)_{\mathcal{T}_{\Sigma/E}}$.
Therefore, by the Freeness Corollary we have $a_{\mathcal{T}_{\Sigma/E}} = \theta;(\_)_{\mathcal{T}_{\Sigma/E}}$ (see diagram next page). Therefore, the equality $t\,a_{\mathcal{T}_{\Sigma/E}} = t'\,a_{\mathcal{T}_{\Sigma/E}}$ is just the equality $[t\theta]_E = [t'\theta]_E$, which holds iff $t\theta =_E t'\theta$, which itself holds by $(t = t') \in E$ and the Lemma in the proof of the Soundness Theorem. Therefore, $\mathcal{T}_{\Sigma/E} \models E$.

Initiality Theorem for $\mathcal{T}_{\Sigma/E}$ (II)

Lifting of $a$ to a Substitution $\theta$

[Diagram: $X \hookrightarrow T_{\Sigma}(X)$; the assignment $a : X \rightarrow \mathcal{T}_{\Sigma/E}$ factors as the substitution $\theta : X \rightarrow \mathcal{T}_{\Sigma}$ followed by the passage to equivalence classes $(\_)_{\mathcal{T}_{\Sigma/E}} : \mathcal{T}_{\Sigma} \rightarrow \mathcal{T}_{\Sigma/E}$.]

Let us now show that for each $\mathcal{A} \in \text{Alg}_{(\Sigma, E)}$ there is a unique $\Sigma$-homomorphism $\_^E_{\mathcal{A}} : \mathcal{T}_{\Sigma/E} \rightarrow \mathcal{A}$. We first prove uniqueness. Suppose that we have two homomorphisms $h, h' : \mathcal{T}_{\Sigma/E} \rightarrow \mathcal{A}$. Then, composing with $(\_)_{\mathcal{T}_{\Sigma/E}} : \mathcal{T}_{\Sigma} \rightarrow \mathcal{T}_{\Sigma/E}$ on the left we get $(\_)_{\mathcal{T}_{\Sigma/E}};h,\; (\_)_{\mathcal{T}_{\Sigma/E}};h' : \mathcal{T}_{\Sigma} \rightarrow \mathcal{A}$, and by the initiality of $\mathcal{T}_{\Sigma}$ we must have $(\_)_{\mathcal{T}_{\Sigma/E}};h = (\_)_{\mathcal{T}_{\Sigma/E}};h' = (\_)_{\mathcal{A}}$. But recall that $(\_)_{\mathcal{T}_{\Sigma/E}} : \mathcal{T}_{\Sigma} \rightarrow \mathcal{T}_{\Sigma/E}$ is surjective, and therefore (Ex.9.1) epi, which forces $h = h'$, as desired.

To show existence of $\_^E_{\mathcal{A}} : \mathcal{T}_{\Sigma/E} \rightarrow \mathcal{A}$, given $[t] \in T_{\Sigma/E,s}$, define $[t]^E_{\mathcal{A},s} = t'_{\mathcal{A},s}$, where $t' \in [t] \wedge t' \in T_{\Sigma,s}$. Then show (exercise) that:
- $[t]^E_{\mathcal{A},s}$ is independent of the choice of $t'$, because of the hypothesis $\mathcal{A} \models E$ and the Soundness Theorem; and
- the family of functions $\_^E_{\mathcal{A}} = \{\_^E_{\mathcal{A},s}\}_{s \in S}$ thus defined is indeed a $\Sigma$-homomorphism. q.e.d.

The Mathematical and Operational Semantics Coincide

As stated in pg. 2, the semantics of a Maude functional module \( \text{fmod}(\Sigma, E)\text{endfm} \) is an initial algebra semantics, given by \( \mathcal{T}_{\Sigma/E} \).
Let us call \( \mathcal{T}_{\Sigma/E} \) the module's mathematical semantics. This semantics does not depend on any executability assumptions about \( \text{fmod}(\Sigma, E)\text{endfm} \): it can be defined for any equational theory \( (\Sigma, E) \). Call \( \text{fmod}(\Sigma, E)\text{endfm} \) admissible if the equations \( E \) are ground confluent, sort-decreasing, and terminating. Under these executability requirements we have another semantics for \( \text{fmod}(\Sigma, E)\text{endfm} \): the canonical term algebra \( \mathcal{C}_{\Sigma/E} \) defined in Lecture 5. This is the most intuitive computational model for \( \text{fmod}(\Sigma, E)\text{endfm} \). Call it its operational semantics. But both semantics coincide!

The Canonical Term Algebra is Initial

**Theorem:** If the rules $\vec{E}$ are sort-decreasing, ground confluent and operationally terminating, then $\mathcal{C}_{\Sigma/E}$ is isomorphic to $\mathcal{T}_{\Sigma/E}$ and is therefore initial in $\text{Alg}_{(\Sigma,E)}$.

**Proof:** An easy generalization of Ex.9.3 shows that if $\mathcal{I}$ is initial for a given class of algebras closed under isomorphisms and $\mathcal{J}$ is isomorphic to $\mathcal{I}$, then $\mathcal{J}$ is also initial for that class. Since (Ex.10.2) $\text{Alg}_{(\Sigma,E)}$ is closed under isomorphisms, we just have to show $\mathcal{T}_{\Sigma/E} \cong \mathcal{C}_{\Sigma/E}$. Define $\text{can}_E = \{\text{can}_{E,s} : T_{\Sigma/E,s} \rightarrow C_{\Sigma/E,s}\}_{s \in S}$ by
$$\text{can}_{E,s}([t]) = \text{can}_E(t).$$
This is independent of the choice of $t$, since $t =_E t'$ iff $E \vdash t = t'$, iff (by $E$ confluent) $t \Downarrow_E t'$, iff $\text{can}_E(t) = \text{can}_E(t')$. $\text{can}_{E,s}$ is surjective by construction and injective by these equivalences; therefore $\text{can}_E$ is bijective.

Let us see that $\text{can}_E : \mathcal{T}_{\Sigma/E} \rightarrow \mathcal{C}_{\Sigma/E}$ is a $\Sigma$-homomorphism. Preservation of constants is trivial.
Let $f : s_1 \ldots s_n \rightarrow s$ in $\Sigma$, and $[t_i] \in T_{\Sigma/E,s_i}$, $1 \leq i \leq n$. We must show,
$$\text{can}_{E,s}(f^{s_1 \ldots s_n,s}_{\mathcal{T}_{\Sigma/E}}([t_1], \ldots, [t_n])) = f^{s_1 \ldots s_n,s}_{\mathcal{C}_{\Sigma/E}}(\text{can}_E(t_1), \ldots, \text{can}_E(t_n)).$$
The key observation is that $\text{can}_E(t_i) \in T_{\Sigma,s_i}$, $1 \leq i \leq n$. This is because:
- by definition of $[t_i]$ there must be a $t'_i =_E t_i$ with $t'_i \in T_{\Sigma,s_i}$, $1 \leq i \leq n$; and
- by the sort-decreasingness assumption for $E$, since $t'_i \rightarrow^*_E \text{can}_E(t'_i) = \text{can}_E(t_i)$, if $t'_i \in T_{\Sigma,s_i}$, $1 \leq i \leq n$, then $\text{can}_E(t_i) \in T_{\Sigma,s_i}$, $1 \leq i \leq n$.

Therefore, choosing the representatives $t'_i = \text{can}_E(t_i)$, we have:
\begin{align*}
\text{can}_{E,s}(f_{\mathcal{T}_{\Sigma/E}}^{s_1 \ldots s_n,s}([t_1], \ldots, [t_n]))
&= \text{can}_{E,s}([f(\text{can}_E(t_1), \ldots, \text{can}_E(t_n))]) && \text{(by definition of } f_{\mathcal{T}_{\Sigma/E}}^{s_1 \ldots s_n,s})\\
&= \text{can}_E(f(\text{can}_E(t_1), \ldots, \text{can}_E(t_n))) && \text{(by definition of } \text{can}_{E,s})\\
&= f_{\mathcal{C}_{\Sigma/E}}^{s_1 \ldots s_n,s}(\text{can}_E(t_1), \ldots, \text{can}_E(t_n)) && \text{(by definition of } f_{\mathcal{C}_{\Sigma/E}}^{s_1 \ldots s_n,s})
\end{align*}
as desired. All now reduces to proving the following easy lemma, which is left as an exercise:

**Lemma.** The bijective $S$-sorted map $\text{can}_{E}^{-1} : \mathcal{C}_{\Sigma/E} \rightarrow \mathcal{T}_{\Sigma/E}$ is a $\Sigma$-homomorphism. q.e.d.

The canonical term algebra $\mathcal{C}_{\Sigma/E}$ is in some sense the most intuitive representation of the initial algebra from a computational point of view. Let us see in a simple example what the coincidence between mathematical and operational semantics means.
For example, the equations in the \texttt{NATURAL} module are ground confluent and terminating. Its canonical forms are the natural numbers in Peano notation, and its operations are the successor and addition functions. Indeed, given two Peano natural numbers $n, m$, the general definition of $f^{s_1 \ldots s_n, s}_{\mathcal{C}_{\Sigma/E}}$ specializes for $f = \_+\_$ to the definition of addition, $n +_{\texttt{NATURAL}} m = \text{can}_{\texttt{NATURAL}}(n + m)$, so that $+_{\texttt{NATURAL}}$ is the addition function.

[Figure: equivalence classes of $\mathcal{T}_{\Sigma_{\texttt{NATURAL}}/E_{\texttt{NATURAL}}}$, such as $\{0 + 0, 0, \ldots\}$, $\{0 + s0, s0 + 0, s0, \ldots\}$, $\{s0 + s0, ss0 + 0, ss0, \ldots\}$, mapping onto their canonical forms $0, s0, ss0, \ldots$ in $\mathcal{C}_{\Sigma_{\texttt{NATURAL}}/E_{\texttt{NATURAL}}}$.]

More generally, we are interested in the agreement between the mathematical and operational semantics of an admissible Maude module of the form $\text{fmod}(\Sigma, E \cup B)\text{endfm}$, with $B$ a (possibly empty) set of associativity, commutativity, and identity axioms. The following, easy but nontrivial, generalization of the above theorem is left as an exercise.

**Theorem:** Let the equations $E$ in $(\Sigma, E \cup B)$ be sort-decreasing, ground confluent and weakly operationally terminating modulo $B$; and let $\Sigma$ be preregular modulo $B$. Then, $\mathcal{C}_{\Sigma, E/B}$ is isomorphic to $\mathcal{T}_{\Sigma/E \cup B}$ and is therefore initial in $\text{Alg}_{(\Sigma, E \cup B)}$.

Verification of Maude Functional Modules

We are now ready to begin discussing program verification for deterministic declarative programs, and, more specifically, for Maude functional modules of the form \( \text{fmod}(\Sigma, E \cup B)\text{endfm} \), where we assume \( E \) ground confluent, sort-decreasing, and weakly operationally terminating modulo \( B \), and \( \Sigma \) preregular modulo \( B \). Their mathematical semantics is given by the initial algebra \( \mathcal{T}_{\Sigma/E \cup B} \).
Their (concrete) operational semantics is given by equational simplification with \( E \) modulo \( B \). Both semantics coincide in the canonical term algebra, since we have the \( \Sigma \)-isomorphism,
\[ \mathcal{T}_{\Sigma/E \cup B} \cong \mathcal{C}_{\Sigma,E/B}. \]

What are properties of a module $\text{fmod}(\Sigma, E \cup B)\text{endfm}$? They are sentences $\varphi$, perhaps in equational logic, or, more generally, in first-order logic, in the language of a signature containing $\Sigma$. When do we say that the above module satisfies property $\varphi$? When we have,
$$\mathcal{T}_{\Sigma/E \cup B} \models \varphi.$$
How do we verify such properties?

A Simple Example: Associativity of Addition

Consider the module,

fmod NATURAL is
  sort Natural .
  op 0 : -> Natural [ctor] .
  op s : Natural -> Natural [ctor] .
  op _+_ : Natural Natural -> Natural .
  vars N M L : Natural .
  eq N + 0 = N .
  eq N + s(M) = s(N + M) .
endfm

A property \( \varphi \) satisfied by this module is the \textbf{associativity} of addition, that is, the equation,
\[ (\forall N, M, L) \; N + (M + L) = (N + M) + L. \]

Need More than Equational Deduction

Since the initial algebra $\mathcal{T}_{\Sigma/E \cup B}$ associated to a module \( \text{fmod}(\Sigma, E \cup B)\text{endfm} \) satisfies the equations $E \cup B$, by the **Soundness Theorem** for equational deduction, whenever we can prove an equation $\varphi$ by $E \cup B \vdash \varphi$, we must have $\mathcal{T}_{\Sigma/E \cup B} \models \varphi$, and therefore the module satisfies $\varphi$. Therefore, equational deduction is always a **sound proof method** to verify properties of functional modules. However, it is quite limited, and generally insufficient for many properties. In particular, for $\varphi$ the associativity of addition and $E$ the equations in \textsc{natural} (in this case $B = \emptyset$) we cannot prove $E \vdash \varphi$.
This is easy to see, since associativity is not a property satisfied by all models of $E$. Consider, for example, the initial model obtained by adding a nonstandard number $a$,

fmod NON-STANDARD-NAT is
  sort Natural .
  ops 0 a : -> Natural [ctor] .
  op s : Natural -> Natural [ctor] .
  op _+_ : Natural Natural -> Natural .
  vars N M L : Natural .
  eq N + 0 = N .
  eq N + s(M) = s(N + M) .
endfm

This initial model satisfies $E$, but does not satisfy associativity, since $a + (a + a) \neq (a + a) + a$. In fact, no equations apply to either side. Therefore, $E \not\vdash x + (y + z) = (x + y) + z$. The point is that associativity is an inductive property of natural number addition; that is, one satisfied by the initial model of $E$, but not in general by other models of $E$. What we need are inductive proof methods based on a more powerful proof system $\vdash_{ind}$, satisfying the soundness requirement,
$$E \cup B \vdash_{ind} \varphi \Rightarrow \mathcal{T}_{\Sigma/E \cup B} \models \varphi.$$
Also, it should prove all that equational deduction can prove and more. That is, for formulas $\varphi$ that are equations it should satisfy,
$$E \cup B \vdash \varphi \Rightarrow E \cup B \vdash_{ind} \varphi.$$
Because of Gödel's Incompleteness Theorem, in general we cannot hope to have completeness of inductive inference, that is, to have an equivalence
\[ E \cup B \vdash_{\text{ind}} \varphi \iff \mathcal{T}_{\Sigma/E \cup B} \models \varphi \]
although this may be possible for some very specific theories \((\Sigma, E)\) for which a complete proof system, or even an algorithm (a decision procedure), providing this equivalence exists. The inductive inference system that we will justify and use generalizes the usual proofs by natural number induction. In fact, in our example of associativity of natural number addition it actually specializes to the usual proof method by natural number induction.
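As a worked illustration (added here, not part of the original notes), the natural number induction proof that this specializes to uses only the two equations of NATURAL, with induction on $L$:

```latex
\begin{align*}
&\textbf{Base case } (L = 0):\\
&\quad N + (M + 0) \;=\; N + M \;=\; (N + M) + 0
  && \text{(both steps by eq. } N + 0 = N)\\[4pt]
&\textbf{Inductive step } (L = s(L')),\ \text{assuming } N + (M + L') = (N + M) + L':\\
&\quad N + (M + s(L')) \;=\; N + s(M + L') \;=\; s(N + (M + L'))
  && \text{(twice by eq. } N + s(M) = s(N + M))\\
&\quad \phantom{N + (M + s(L'))} \;=\; s((N + M) + L') \;=\; (N + M) + s(L')
  && \text{(ind. hyp., then the same eq.)}
\end{align*}
```

Both the base case and the step use only equational reasoning plus the induction hypothesis, which is exactly the extra power that $\vdash_{ind}$ adds over plain equational deduction.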
Sufficient Completeness is Crucial for Inductive Proofs fmod NON-STANDARD-NAT is sort Natural . op 0 : -> Natural [ctor] . op s : Natural -> Natural [ctor] . op a : -> Natural . op _+_ : Natural Natural -> Natural . vars N M L : Natural . eq N + 0 = N . eq N + s(M) = s(N + M) . endfm For the above signature $\Sigma$ and equations $E$, and $T_{\Sigma/E}$ the initial algebra, $T_{\Sigma/E} \not\models a + (a + a) = (a + a) + a$, since both terms are in canonical form and the equations are confluent. However, natural number induction on the declared constructors easily proves associativity of $+$. Therefore, induction without sufficient completeness is unsound. Exercises • Consider the \texttt{NAT-PREFIX} specification of Lecture 2. Prove that the natural numbers \( \mathbb{N} \), with zero, successor and the addition function are isomorphic to the initial algebra of that specification. • Give your own algebraic specification of the Booleans in Maude (use a sort, say \texttt{Truth}, and constants \texttt{tt}, \texttt{ff}, to avoid any confusion with the built-in module \texttt{BOOL} in Maude) with disjunction, conjunction, and negation, and prove that the standard Booleans are isomorphic to the initial algebra of your specification.
Validating System-Level Error Recovery for Spacecraft

Robyn R. Lutz, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109

Johnny S. K. Wong, Department of Computer Science, Iowa State University, Ames, IA 50011

Abstract

The system-level software onboard a spacecraft is responsible for recovery from communication, thermal, power, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This paper provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering events on the spacecraft.

### 1 Introduction

Spacecraft software processes are composed of commands. Commands are instructions to the spacecraft to take specific actions at specific times. The processes are constrained at the command level by documented rules. Intercommand constraints are rules that govern the ordering of the commands, the timing relationships that must exist between certain commands, and the commands’ access to shared variables. Every possible interleaving of commands from the asynchronous processes that cooperate during error recovery must satisfy these intercommand constraints. A failure to do so can jeopardize the collection of scientific data, a spacecraft subsystem, or even the spacecraft itself. The research described here provides a technique for validating that the intercommand constraints are satisfied during error recovery onboard the spacecraft. The work was carried out in the context of the Galileo spacecraft.
The examples are drawn from Galileo’s error-recovery processes and from the preliminary sequence for Galileo’s planetary encounter with Jupiter. Ongoing research is evaluating the applicability of these results to other spacecraft as well as to other asynchronous systems with critical precedence and timing constraints.

### 1.1 The Problem

Validating the system-level error recovery on the spacecraft requires the capability to analyze the possible interactions among concurrent processes. This is a difficult endeavor. A single fault on the spacecraft may at times trigger several different processes whose commands must not interfere with each other. More than one fault may also occur at a time, causing several error-recovery processes to be invoked. In addition, there is at any time a unique sequence of uplinked commands executing on the spacecraft. Each sequence is a set of time-tagged commands relating to the upcoming mission activities. A sequence is periodically sent to the spacecraft from the ground and stored in the spacecraft’s temporary memory until the time comes for each command in it to execute. Some stored sequences of commands are so critical to the success or failure of the mission that they are labeled “critical sequences.” The sequences of commands used at launch or to direct Galileo’s probe relay and orbital insertion activities at Jupiter are examples of critical sequences. A critical sequence, unlike a non-critical sequence, must continue to execute even during system-level error recovery. The constraints imposed on the commands are an effort to preclude conflicting interactions among the possibly concurrent processes. Some commands can interfere with the effect of other commands if they are executed too closely or too far apart in time. Certain commands must precede or follow other commands to accomplish the desired action. Some commands change the values of parameters used by other commands.
Commands relating to power or propellant usage, to temperature or attitude control, to spacecraft or data modes, can endanger the collection of scientific data, a subsystem, or the spacecraft if intervening commands issued by another process leave the spacecraft in an unexpected state.

### 1.2 Background

The solution to the problem of validating system-level error recovery must take into account both precedence constraints (the order in which commands occur) and timing constraints (the time between commands). These two types of constraints are fundamentally different in that precedence does not require a notion of duration [16]. Consequently, the tools that currently exist to model precedence constraints tend to ignore timing requirements and to be inadequate for modeling the timing constraints on spacecraft [6, 12, 15]. On the other hand, many techniques that are currently available to model timing constraints tend to ignore precedence constraints.
A wide variety of powerful formalisms exists to model the specifications and behavior of real-time concurrent systems. Many of these formalisms address to some degree the problem of checking timing constraints. However, none of the available methods readily translates to the domain of validating error recovery on spacecraft. Petri net extensions model periodic events and deadlines [4, 9]. Automata-based methods model processes as a machine and try to prove a predicate (which may involve upper and lower time bounds on events) true for the reachable states in the machine [11]. Real Time Logic models the timing aspects of a system specification to establish timing properties (periodic events and deadlines) [8]. Various extensions to temporal logic and temporal logic model checking have been developed to formally describe timing requirements and to verify automatically that the system satisfies them [16, 3]. These methods provide a good basis for specifying timing requirements but are either more ambitious (in that they model states) or less expressive (in that they only model a subset of timing constraints) than is needed for the spacecraft. The work described here discusses many of the same timing issues addressed by recent work in interval temporal logic [17, 18]. However, the emphasis there is on specifying and verifying system requirements (what the spacecraft can do) while the emphasis here is on verifying operational constraints (what the spacecraft may do). Autonomous error recovery onboard the spacecraft requires the spacecraft to have capabilities that it is only permitted to exercise subject to certain constraints. This paper offers a partial solution to the problem of modeling distance in time within the context of spacecraft error recovery. Operationally, several stages of the software development process involve analysis of the interactions among spacecraft processes. 
The subsystem software designers analyze the interactions as the software is designed and written. During subsystem testing, system integration, and prelaunch testing, a limited set of the interactions is simulated on testbed hardware. The process of developing sequences of commands also includes checks for constraint violations. Limited simulation of the sequence may occur. Critical sequences—those which must remain active even during a spacecraft error recovery—receive special attention during the sequence-development process. Similarly, changes and updates to the error-recovery software are checked and double-checked. Since this software is usually invoked only when a failure has already occurred on the spacecraft, possibly leaving the system in a vulnerable state, it is essential that the error recovery occur quickly, correctly, and predictably. With the software tools currently available, analyzing the possible intercommand constraint violations is a tedious and complex task. The interactions among the error-recovery processes and the sequences of commands are both important to the spacecraft’s safety and difficult to visualize fully. The method described here, which provides a graphical representation of the relevant command constraints and an algorithm to aid in detecting unsafe interleavings of the commands in the processes, can facilitate and improve the analysis of whether the intercommand constraints are always satisfied.

### 1.3 Proposed Solution

The model proposed here represents precedence, timing, and data-dependency constraints on spacecraft commands by means of a labeled graph in which the nodes represent commands and the edges represent documented constraints on those commands. A classification of the types of intercommand constraints is presented in Section 2.
Section 3 describes an algorithm, called the constraints checker, which accepts as input the constraints graph, a number of potentially concurrent processes, and user-provided intervals during which the processes can execute. The constraints checker associates each edge type in the graph with an algebraic predicate. The predicate relates the time of issuance of the commands which are the edge’s endpoints to the constraint represented by the edge. The constraints checker tests whether the appropriate predicate holds for each edge in the graph. An edge which fails to satisfy the required predicate is flagged as potentially unsafe. In that case there exists some interleaving of the commands in the processes which can cause the constraint represented by that edge to be violated. Section 4 offers concluding remarks in the context of the spacecraft.

### 2 Modeling the Intercommand Constraints in a Graph

Intercommand constraints are rules that govern the ordering or timing relationships between commands. There are two main classes of intercommand constraints: precedence constraints and timing constraints. Data-dependency constraints are typically represented as precedence constraints. The classification into timing and precedence constraints corresponds loosely to the standard formal division of program correctness into safety properties and liveness properties. Safety properties can be stated informally as “nothing bad ever happens” and liveness properties as “something good eventually happens” [21].

### 2.1 Timing Constraints

Intercommand timing constraints are safety properties. They impose a quantitative temporal relationship between the commands. If \( c_i \) and \( c_j \) are distinct commands and \( t_1 \) and \( t_2 \) are time parameters, then the following five types of timing constraints can occur (examples are paraphrased from Galileo documentation):

1. Minimum-interval constraint. If \( c_i \) occurs then \( c_j \) cannot occur within time \( t_1 \) of it.
An example is, “A SCAN command shall not be sent within 10 seconds of a SLEW command.”

2. Outside-interval constraint. If \( c_i \) occurs, then \( c_j \) can only occur outside a range \( t_1 \) to \( t_2 \). An example is, “The time separation between powering on the S-Band transmitter and powering on the X-Band transmitter shall be either less than one-half minute or greater than 6 minutes.” Note that a minimum-interval constraint is a special case of an outside-interval constraint with \( t_1 = 0 \).

3. Forbidden combination constraint. If \( c_i \) occurs then \( c_j \) cannot occur. This is a special case of an outside-interval constraint in which \( t_1 = -\infty \) and \( t_2 = +\infty \). An example is that if one of the two optics heaters is commanded on, then the other optics heater cannot be commanded on.

4. Maximum-interval constraint. If \( c_i \) occurs, then \( c_j \) can only occur within time \( t_1 \) of it. An example is, “The initialization command shall be sent within 8 seconds after turn-on.”

5. Inside-interval constraint. If \( c_i \) occurs then \( c_j \) can only occur within a range \( t_1 \) to \( t_2 \). An example is, “Each Low-Gain Antenna Motor power on command shall be followed no sooner than 9 seconds and no later than 30 seconds by the Low-Gain Antenna motor power off command.” Note that a maximum-interval constraint is a special case of an inside-interval constraint with \( t_1 = 0 \).

In some cases a “nominal-to-worst-case” execution time may be documented for command \( c_i \). This pair of values indicates the expected time that it takes command \( c_i \) to complete as well as the longest completion time for which the developers must plan. The worst-case execution time for \( c_i \) can be considered to be an additional timing constraint on when command \( c_j \) can occur. In the case of a minimum-interval timing constraint, call it \( t_1 \), the worst-case execution time can be added to \( t_1 \) in the graph.
However, in the case of a maximum-interval timing constraint, adding the worst-case execution time to the initial timing constraint can mask a violation of the constraint by lessening the time interval between commands \( c_i \) and \( c_j \). Instead, the nominal execution time (or, to be conservative, 0) is added to the timing constraint. The cases for the inside-interval and the outside-interval edge types follow accordingly. In this way the time required to execute the command \( c_i \) can be modeled.

### 2.2 Precedence Constraints

While intercommand timing constraints are clearly safety properties, intercommand precedence constraints contain aspects of both safety and liveness properties [11]. Precedence constraints enforce an ordering of commands and so involve functional correctness, a concern of safety properties. Precedence constraints also involve liveness properties since they assert that if one command occurs, then another command must precede it: "If \( c_j \) occurs, then a \( c_i \) must precede it." An example is, "Spin Detector B can only be powered on after Spin Detector A is turned off." Thus, whereas timing constraints assert that "every \( c_i \) can only occur with timing relationship \( \tau \) to \( c_j \)", precedence constraints assert that "for every \( c_j \), there must exist a \( c_i \) that precedes it." If a timing constraint exists between commands \( c_i \) and \( c_j \), either command can legally occur alone. However, if a precedence constraint exists between commands \( c_i \) and \( c_j \), for example, "If \( c_j \) occurs then \( c_i \) must precede it," then \( c_j \) cannot occur in isolation from \( c_i \). Many constraints of the form "State \( A \) is a precondition for issuing command \( c_j \)," where state \( A \) can be commanded, can be adequately though imperfectly modeled as precedence constraints. An example is the rule that "10-Newton thruster firings must be performed with the scan platform in a safe position."
In the context of the spacecraft, it suffices to ensure that the command to place the scan platform in a safe position precedes the command for thruster firing. By representing the state (scan platform in the safe position) by a command (place the scan platform in the safe position), the constraint can be modeled in the graph. If the required state cannot be commanded (e.g., “low-radiation environment”), then it cannot be modeled in the graph. A similar abstraction occurs with forbidden-combination constraints. For example, the constraints graph approximates a constraint forbidding the issuance of a command to turn on optics heater B if optics heater A is on by means of an edge that forbids the issuance of a command to turn on optics heater B following the issuance of a command to turn on optics heater A.

### 2.3 Data-dependency Constraints

Data-dependency constraints are restrictions placed on the order of commands when two or more processes access the same variable and at least one process changes the value of the variable [1]. In such cases a concurrent execution of the processes can lead to a result different from the sequential execution of the processes. To forestall the data inconsistency that could result from this, a data-dependency constraint is used to specify the order in which the commands that read/write the variable must occur. Such a data-dependency constraint is expressed as a precedence constraint.

### 2.4 The Constraints Graph Model

The intercommand constraints are modeled in a directed graph, called a constraints graph. Each edge in the constraints graph represents a constraint on the relationship of the commands which form that edge's nodes. For example, the timing constraint "A SCAN command shall not be sent within 10 seconds of a SLEW command" would be modeled as a labeled edge from a node labeled SLEW to a node labeled SCAN.
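To make the representation concrete, here is a minimal sketch of such a labeled constraints graph as an adjacency list, seeded with the SLEW/SCAN rule above. This is my own illustration, not the paper's implementation; the class and field names (`Edge`, `kind`, `doc_key`, etc.) are assumptions, and node labels such as execution times are omitted for brevity.

```python
# Illustrative sketch: a constraints graph as a labeled adjacency list.
# Each edge carries the constraint type, up to two time fields, a
# documentation key, and (for data-dependency edges) a variable name.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Edge:
    target: str                    # command mnemonic at the edge's head
    kind: str                      # e.g. "min-interval", "precedence"
    t1: Optional[float] = None     # first time field, if any
    t2: Optional[float] = None     # second time field, if any
    doc_key: str = ""              # where the constraint is documented
    variable: Optional[str] = None # shared variable, for data deps

@dataclass
class ConstraintsGraph:
    adj: dict = field(default_factory=dict)   # mnemonic -> list[Edge]

    def add_edge(self, source: str, edge: Edge) -> None:
        self.adj.setdefault(source, []).append(edge)

# "A SCAN command shall not be sent within 10 seconds of a SLEW command"
g = ConstraintsGraph()
g.add_edge("SLEW", Edge(target="SCAN", kind="min-interval", t1=10.0))
```

Because only commands that actually appear in some constraint get adjacency entries, the structure stays sparse, matching the O(|V| + |E|) storage noted in the paper.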
The subgraph composed of just the precedence edges must be acyclic since otherwise every occurrence of a command in a cycle would have to be preceded by another occurrence of the same command. A node (representing a command) has three labels associated with it: the command mnemonic that identifies the command, the nominal (predicted) execution time of the command, if any, and the worst-case execution time of the command, if any. An edge (representing a constraint) has five labels associated with it: the type of edge, up to two time fields, a key to where the constraint is documented, and the variable associated with the edge, if it represents a data-dependency constraint. The constraints graph is sparse, and can be stored in \( O(|V| + |E|) \) space via an adjacency list representation [14].

### 3 Detecting Constraint Violations During Error Recovery

### 3.1 Overview of the Constraints Checker

The constraints graph and a set of processes (time-tagged lists of commands) are input to the constraints checker. Because there is little branching in spacecraft system-level error-recovery processes and command sequences, the commands in the alternative paths of a process can be interleaved for input to the constraints checker. The constraints checker algorithm fixes one process' timeline and determines the range of start times that the fixed timeline and the constraint represented by each edge impose on the other process' start time. Each edge type is associated with an algebraic predicate which relates the time of issuance of the commands which are the edge's endpoints to the constraint represented by the edge. The constraints checker tests whether the appropriate predicate holds for each edge in the constraints graph. An edge which fails to satisfy the required predicate is flagged. In such a case some interleaving of the commands in the processes can cause the constraint represented by that edge to be violated.
The algorithm and its variables are described briefly here, with details given in [10].

### 3.2 The Constraints Checker Algorithm

Let \( (c_i, c_j) \) be an edge, with process \( P_i \) issuing command \( c_i \) and process \( P_j \) issuing command \( c_j \). If an edge violation occurs as the result of interleaving several processes, that same edge violation still occurs as the result of interleaving the two processes that issue the edge's nodes. Any interleaving of two processes that can occur via the concurrent execution of more than these two processes can occur with the concurrent execution of only these two processes. It thus suffices to check for each edge every ordered pair \( (c_i', c_j') \) where \( c_i' \) is an instance of \( c_i \), \( c_j' \) is an instance of \( c_j \), and \( c_i' \) and \( c_j' \) are issued by distinct processes. The constraints checker distinguishes between precedence edges and time-constrained edges. A precedence edge requires that every \( c_j \) be preceded by a \( c_i \). The algorithm in effect examines all instances of \( c_i \) for each instance of \( c_j \). To capture the existential quantifier in the predicate for a precedence edge ("There exists a \( c_i \) that precedes this \( c_j \)"), the constraints checker refers to information from the user. The user decides which of the processes can be considered to always execute in the current scenario. A \( c_i \) in a process that may or may not execute cannot satisfy a precedence edge. The variable \( Gc_i \) is the earliest time by which a process that is guaranteed to execute can guarantee that \( c_i \) will occur. \( Gc_i \) is user-provided for \( c_i \) on precedence edges. \( Gc_i = \infty \) if no such guarantee can be made. The constraints checker uses \( Gc_i \) to detect a possible violation of a precedence edge. The precedence edge requires that \( c_j \) not occur before \( Gc_i \). Let \( StartP_j \) be the actual start time of the process issuing \( c_j \),
\( Delta_j \) be the time interval from \( StartP_j \) until command \( c_j \) is issued, and \( EInitP_j \) be the earliest time at which process \( P_j \) can start. When \( StartP_j \) is fixed, the command \( c_j \) occurs at \( StartP_j + Delta_j \). However, when the start time of the process issuing \( c_i \) is fixed and the value of \( StartP_j \) is not fixed, the algorithm then considers the earliest time at which a \( c_j \) can occur—namely at \( EInitP_j + Delta_j \). If the earliest time at which a \( c_j \) can occur precedes or equals \( Gc_i \), then the edge is flagged. This means that the constraint that the edge represents is not always satisfied by the process interactions. A time-constrained edge, on the other hand, requires that if command \( c_i \) occurs, then command \( c_j \) does not occur within some forbidden time interval. Whereas a precedence edge requires a \( c_i \) for every \( c_j \), in a time-constrained edge the presence of one command does not require the presence of the other. The predicates for the time-constrained edges are of the form "every \( c_j \) that follows this \( c_i \) must satisfy a certain timing constraint." For a timing constraint, define the time interval \( Poss \) to be the range of possible times at which the process whose timeline is variable may start according to the user. Define \( Safe \) to be the set of safe times at which that process may start, that is, the set of all times that satisfy the predicate for that edge type. Then a time-constrained edge is satisfied if the set of possible times for the process whose timeline is variable is contained within the set of safe times for that process: \( Poss \subseteq Safe \). Each of the five time-constrained edge types described in Section 2 has associated with it an algebraic predicate that is used to determine the values of the set \( Safe \).
For example, the predicate for a maximum-interval time-constrained edge (“command \( c_j \) cannot occur more than time \( t_1 \) after command \( c_i \)”) is: \( StartP_i + Delta_i \leq StartP_j + Delta_j \leq StartP_i + Delta_i + t_1 \). The predicates for the other edge types follow accordingly [10].

The Constraints Checker Algorithm:

```plaintext
for each edge (c_i, c_j) in the constraints graph
begin
  if edge type = precedence then
    begin
      for each instance c'_j of c_j
        fix P_i or P_j according to the rules;
        if P_j is fixed and (StartP_j + Delta_j) <= Gc_i
          then output warning flag;
        if P_i is fixed and (EInitP_j + Delta_j) <= Gc_i
          then output warning flag
    end
  else if edge type = time-constrained edge then
    begin
      for each pair (c'_i, c'_j) of instances of c_i and c_j
          where P_i != P_j
        fix P_i or P_j according to the rules;
        if not (Poss ⊆ Safe) then output warning flag
    end
end
```

The time complexity of the algorithm is \( O(|E| \cdot n^2 \cdot K^2) \), where \( E \) is the set of edges, \( n \) is the number of processes, and \( K \) depends on the number of instances of a command per process [10]. The constraints checker’s runtime is reduced by the facts that the constraints graph is sparse, that there are usually few instances of each command per process, that there is minimal branching in the processes, and that the number of concurrent error-recovery processes is small.

### 3.3 Output

The constraints checker makes an assertion about the allowable start times of the process whose timeline is not fixed. It makes this assertion based on the edge type currently being surveyed, on the constraint represented by the edge, on the offset between the processes’ start times and their issuances of \( c_i \) and \( c_j \), and on the fixed start time of one process.
If the constraints checker’s assertion concerning when the other processes should start is inconsistent with the user-provided range of start times, then the edge is flagged. In order for the user to be able to reconstruct the concurrent execution which caused an edge to be flagged (i.e., the intercommand constraint that the edge represents to be violated), the constraints checker outputs the identity of any flagged edge and its nodes, as well as the identity of the two processes whose concurrent execution caused the constraint violation.

If the edge was flagged due to erroneous information in one of the edge or node labels, the user can readily correct the input data and run the constraints checker again to verify the adequacy of the correction. If the edge was flagged due to a problem with the existing error-recovery schedule, the data output with the flagged edge helps the user identify the problem. The user responds by shifting the processes’ timelines or by curtailing the concurrency that allowed the intercommand constraint to be violated. The goal is to adjust or limit the concurrent execution of the processes so that the edge will not be flagged in a subsequent run.

If an edge is not flagged either because the calculated start time is within the user-provided time range or because the user did not provide a range of possible start times, then the predicate is stored. Each pair of processes that forms the nodes of an edge yields a predicate relating the fixed start time of one process to the variable start time of the other process. As the edges are considered one-by-one in the constraints checker, the constraints that the edges impose on the scheduling of the processes accumulate. After all the edges have been surveyed, the constraints checker computes for every distinct pair of processes the range of safe start times of one process relative to the other. Within this time interval the intercommand constraints are satisfied.
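As a concrete illustration of the interval test, the sketch below (my own, under simplifying assumptions: a closed interval Poss of candidate start times, one instance of each command, and P_i's timeline fixed; it is not the authors' implementation) computes the Safe set for a maximum-interval edge from the predicate \( StartP_i + Delta_i \leq StartP_j + Delta_j \leq StartP_i + Delta_i + t_1 \) and flags the edge when Poss is not contained in Safe.

```python
# Illustrative Poss-within-Safe test for a maximum-interval edge
# ("c_j must occur within t1 of c_i"), with P_i's timeline fixed and
# P_j's start time variable. Intervals are closed pairs (lo, hi).

def safe_interval_max(start_i, delta_i, delta_j, t1):
    """Start times of P_j for which c_j lands in [c_i, c_i + t1]."""
    issue_i = start_i + delta_i          # when c_i is issued
    return (issue_i - delta_j, issue_i + t1 - delta_j)

def edge_flagged(poss, safe):
    """Flag the edge unless every possible start time is safe."""
    return not (safe[0] <= poss[0] and poss[1] <= safe[1])

# P_i starts at t=0 and issues c_i at t=5; P_j issues c_j 2 s after it
# starts; c_j must occur within t1 = 10 s of c_i.
safe = safe_interval_max(start_i=0.0, delta_i=5.0, delta_j=2.0, t1=10.0)
# Safe start times for P_j: [3.0, 13.0]
print(edge_flagged(poss=(4.0, 12.0), safe=safe))   # False: no violation
print(edge_flagged(poss=(4.0, 20.0), safe=safe))   # True: some start
                                                   # times violate it
```

The other four timing-edge types would differ only in how the Safe set is derived from their predicates; for an outside-interval edge Safe is a union of two intervals rather than a single one.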
### 4 Conclusion

This paper has described a partial solution to the problem of validating that the concurrent execution of system-level error recovery processes with the critical command sequences satisfies intercommand constraints. The paper has presented a method by which the timing, precedence, and data-dependency constraints on the commands can be modeled in a constraints graph. A constraints checker algorithm is provided which uses the constraints graph to check for command interleavings that can violate the constraints. The error-recovery scenarios chosen to test the algorithm involved failures during the execution of the critical command sequence that controls Galileo's arrival at Jupiter. The activities of the processes that must cooperate during error recovery are highly constrained due to the complexity and time criticality of the engineering and science during the planetary encounter. There are thus many opportunities for unsafe error-recovery schedules. The constraints checker offers a way to discover such process interactions early in the software development process. The constraints checker algorithm is designed specifically to help answer the question of whether existing system-level error recovery is adequate. It offers a flexible, embeddable, and relatively simple alternative to simulation of error-recovery scenarios. In the context of the spacecraft, the algorithm identifies the unexpected effects resulting from the interleaving of error-recovery processes and mission-critical sequences of commands. In a broader context, the research presented here is part of an ongoing effort to investigate the behavior of concurrently executing processes subject to precedence and timing constraints.

Acknowledgments

The first author thanks Chris P. Jones of the Jet Propulsion Laboratory for many helpful insights.
The work of the first author described in this paper was started at Iowa State University, supported by grant NGT-50269 from the National Aeronautics and Space Administration, and was completed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. References
A Novel Authentication Procedure for Secured Web Login using Coloured Petri Net

D. Lalitha 1*, S. Vaithyasubramanian 2*, K. Vengatkrishnan 3, A. Christy 4, M. I. Mary Metilda 5
1, 2, 3, 5 Department of Mathematics, 4 Department of CSE, Sathyabama Institute of Science and Technology, Chennai, India.
* Corresponding Authors: 1 lalkrish2007@gmail.com, 2 discretevs@gmail.com, 3 vengat0809@gmail.com, 4 ac.christy@gmail.com, 5 metilda81@gmail.com

Abstract - To secure information on the World Wide Web, websites ask users to create their own login identification and password, where a password is, in general, a simple string of characters. To make the information more secure, an access code in the form of an array of alphanumeric and special characters can be generated using a Petri net model. A string language can be generated by labelling the transitions of a Petri net, and two-dimensional array languages can be generated in a similar way. Coloured Petri nets have also been defined to generate array languages. Access of this kind amounts to three-factor authentication. In this paper we propose and develop an application of array-generating Petri nets to enhance information security.

Keywords - Access Code, Array Languages, Coloured Tokens, Inhibitor Arcs, Information Security, Petri net, Three Factor Authentication

I. INTRODUCTION

A Petri net [3] is one of many mathematical models available for modelling distributed systems. Some components of such systems may exhibit concurrency and parallelism. Tokens in a Petri net generally represent the resources required for activities to take place. All tokens in a basic Petri net are black dots. In a complex system, however, the resources may have different attributes, and it is not possible to represent these attributes with plain black tokens; hence basic Petri nets are not suitable for modelling such systems. Several extensions of the basic Petri net are available in the literature.
In Coloured Petri Nets (CPN) [1, 2, 4], tokens carry different values. String languages and two-dimensional array languages have been discussed in detail in formal language theory, and array languages can be generated by Petri nets [5-9]. Tokens are rectangular or hexagonal arrays over a given alphabet, and catenation rules are defined as labels of the transitions. When a transition fires, the array grows in size and reaches the output place; as enabled transitions fire, arrays move around the net. The set of arrays that reach the final places is defined as the array language generated by the net. Coloured tokens have also been used to generate arrays [5]. CPN is used to give more control over firing and more variety in the data used.

The Petri net model introduced in this paper has tokens with three attributes. The first attribute is the identification of the token: it differentiates the various tokens that may reside in the same place. The second attribute is the position of the token, i.e. the place in which it resides. The third attribute is the value it takes at a given point of time. A transition is enabled or disabled depending on the conditions placed on it, and the attributes of the tokens released depend on those conditions and on the attributes of the tokens consumed.

A password is, in general, a string of characters used to gain access to a resource: the password entered by the user has to match the password originally created, otherwise the resource is inaccessible. Passwords are generally kept short so that they are easy to memorize, but the easier a password is to remember, the easier it is for hackers to crack. Several papers have been published on this issue [10-12]. The user has to balance the need for a highly secure password against one that can be easily recalled.
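The three token attributes described above can be sketched as a small record type; this is our own minimal illustration, and the class and field names are not from the paper.

```python
# Minimal sketch of a token with the three attributes described above:
# identification, position (the place it resides in), and current value.
from dataclasses import dataclass
from typing import Any


@dataclass
class Token:
    ident: str   # distinguishes tokens that may share a place
    place: str   # the place the token currently resides in
    value: Any   # current value: an integer, a string, or a 2-D array


t = Token("A", "P1", [["a"]])
print(t.place)   # P1
```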
Creating a string password is like handling a double-edged sword. The number of combinations for a string password of length 5 over lowercase letters and digits is 36^5, and for a cracker using a standard personal computer the maximum time required to guess a password of length 5 is about one second. The length of the password is thus a factor in authenticating a string password: with six characters the number of combinations is 36^6. As an alternative to alphanumeric passwords, array passwords were defined [9, 10]. If six characters are used in an array password, the array can be shaped in 4 different ways: 1x6, 6x1, 2x3 or 3x2. Hence there are 4 x 36^6 different combinations with 6 characters, so it would obviously take longer to guess an array password made up of the same number of characters as a string password.

This paper is organized as follows. The second section defines the basic Petri net model which generates rectangular arrays over an alphabet; examples show how introducing inhibitor arcs into the net increases the generative power of the model. The third section defines, with examples, the Coloured Petri net model which generates rectangular arrays. In the fourth section, a Petri net is used to create an access code: an array made up of any of the 95 printable characters. A flow chart shows how these Petri nets are called during creation of the access code. Whenever the user logs in to use the resource, the access code has to be authenticated, so another flow chart gives the steps for authenticating the user. Three factors are involved in this authentication process: the number of rows and the number of columns are two of them.

DOI 10.5013/IJSSST.a.19.06.33, ISSN: 1473-804x online, 1473-8031 print
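The counting argument above can be checked directly; this is our own worked arithmetic, not code from the paper.

```python
# Worked check of the counting argument: 36 alphanumeric symbols
# (26 lowercase letters + 10 digits).
string_5 = 36 ** 5        # length-5 string passwords
string_6 = 36 ** 6        # length-6 string passwords

# A 6-character array password can be shaped 1x6, 6x1, 2x3 or 3x2,
# and each shape admits 36**6 fillings.
array_6 = 4 * 36 ** 6

print(string_5)   # 60466176
print(string_6)   # 2176782336
print(array_6)    # 8707129344
```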
The user also has to decide, when creating the access code, the order in which the characters are keyed in: row-wise or column-wise. To access the resource, the same order must be used for entering the characters, otherwise access is denied. So before the exact characters of the access code are even checked, three different factors are verified. The last section is the conclusion, which ties together all the sections of the paper.

II. BASIC NOTATIONS

All the preliminary notation required is explained in this section, and the differences between an ordinary Petri net and an array-generating Petri net are explained in detail. The Petri net model called Array Generating Petri net (AGPn) is defined and illustrated with an example.

Notation used in array languages: Two arrays can be joined column-wise (side by side) if they have the same number of rows, and row-wise (one below the other) if they have the same number of columns. If A and B have the same number of columns, then \( A \ominus B \) denotes the array obtained by joining the two arrays row-wise; if they have the same number of rows, then \( A \oplus B \) denotes the array obtained by joining them column-wise. In formal language theory, \( a_n \) denotes a column of \( n \) a's and \( a^m \) denotes a row of \( m \) a's. For an \( m \times n \) array A, \( A \ominus b^n \) denotes joining a row of n b's to A after its last row, and \( b^n \ominus A \) denotes joining a row of b's before its first row. Similarly, \( A \oplus h_m \) denotes joining a column of m h's to A after its last column, and \( h_m \oplus A \) denotes joining a column of h's before its first column. If A is an \( m \times n \) array and B is a \( k \times n \) array, then \( A \ominus b^n \ominus B \) denotes A, followed by a row of b's, followed by the array B.
The resulting array is of size \((m+k+1) \times n\). If A is an \( m \times n \) array and B is an \( m \times k \) array, then \( A \oplus h_m \oplus B \) denotes A, followed by a column of h's, followed by the array B; the resulting array is of size \( m \times (n+k+1) \).

Differences between an ordinary Petri net and an array-generating Petri net: In a basic Petri net tokens are just black dots, and firing a transition moves the tokens from input places to output places. In the array-generating Petri net model, tokens have attributes, and an array over a given alphabet is one of the attributes of the token. A function is defined for every transition of the net; this function defines the changes that take place in the attributes of the token.

CPnAL: Coloured Petri net Array Language. A Coloured Petri net is used whenever a variety of tokens is required for the model. In this section an Array Generating Coloured Petri net (AGCPn) is defined. In this model each token has three attributes, called the token attributes (TA). The language generated by an AGCPn is denoted CPnAL. The model is explained with an example. The declaration of the colour set is done in the net: integer values are denoted 'int', two-dimensional arrays 'ARRAY2', and strings 'str'.

Definition of AGCPn: An AGCPn is defined as a nine-tuple \((\Sigma, P, V, T, I, O, TA, \phi, F)\), where \( \Sigma \) is a set of characters, \( P \) is the set of places of the net, \( V \) is the colour set, \( T \) is the set of transitions with \( P \cap T = \emptyset \), \( I \) is the input function from \( T \) to bags of places, \( O \) is the output function from \( T \) to sets of places, and \( TA \) is the initial set of token attributes: tuples of the form \((id, p, v)\), where id is the identification of the token, \( p \) the position of the token, and \( v \) the value of the token.
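The two catenation operations above can be sketched on arrays represented as lists of rows; this is our own illustration, and the function names are not from the paper.

```python
# Sketch of the two catenation operators on arrays stored as lists of rows.

def row_cat(A, B):
    """Row-wise join (B below A); both arrays need the same number of columns."""
    assert len(A[0]) == len(B[0])
    return A + B


def col_cat(A, B):
    """Column-wise join (B to the right of A); both need the same number of rows."""
    assert len(A) == len(B)
    return [ra + rb for ra, rb in zip(A, B)]


A = [["a"] * 3 for _ in range(2)]   # a 2 x 3 array of a's (m = 2, n = 3)
B = [["a"] * 3 for _ in range(4)]   # a 4 x 3 array of a's (k = 4)
sep = [["b"] * 3]                   # a single row of n = 3 b's

# A, then a row of b's, then B: size (m + k + 1) x n = 7 x 3
C = row_cat(row_cat(A, sep), B)
print(len(C), len(C[0]))            # 7 3
```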
\( \phi \) is a transition function, defined as follows: for each \( t \in T \), \( \phi_t \) is a partial function from a set of attributes to a set of attributes. \( F \subseteq P \) is the set of final places.

Definition of CPnAL: The language CPnAL generated by an AGCPn is the set of all arrays that arrive in the final set of places. An enabled transition \( t \) fires and moves tokens from its input places to its output places; the attributes associated with a token change according to the function \( \phi_t \), and the resulting token also depends on the conditions satisfied by the attributes of the tokens consumed. Example 1 defines a net which generates arrays of a's of size \( n \times n^2 \).

Example 1: AGCPn2 = \((\Sigma, P, V, T, I, O, TA, \phi, F)\), where \(\Sigma = \{a, b\}\), \(P = \{P1, P2, P3, P4\}\), \(T = \{t1, t2\}\); the input and output functions can be seen in the net in Figure 1. V: the tokens used are arrays and integers; the declarations are given in Figure 1. A is a two-dimensional array whose initial value is assigned, and I and J are integers whose initial values are also fixed. The initial token attributes are the set \(\{(A, p_1, a), (I, p_3, 1), (J, p_4, 1)\}\), the function \(\phi\) is defined in the net, and \(F = \{p_1\}\).

The functions \(\phi_1\) and \(\phi_2\) describe the actions of the transitions. Initially t1 is enabled: whenever t1 fires, a column of b's is joined to the array in P1 and the result is put in P2. This happens whenever an array reaches P1. Then t2 is enabled. Since t2 has three input places, the result of firing t2 depends on the values of I and J. If I > 0, two columns of a's are joined to A and the result is put back in P2; in P3, the value of I is replaced by I - 1, and in P4 the value of J is unaltered. If I = 0, a row of a's is joined and the result is put in P1, and the values of both I and J are replaced by J + 1. The arrays that reach P1 are included in the language.
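The overall effect of one full firing cycle, taking the array from size k x k² to (k+1) x (k+1)², can be sketched as follows; this is our own simulation of the net's effect, not the net itself.

```python
# Sketch of one growth step of Example 1: add a column of b's, then a
# k x 2k block of a's, then a row of a's, taking a k x k**2 array to
# a (k+1) x (k+1)**2 array.

def next_array(A):
    k = len(A)
    A = [row + ["b"] for row in A]               # t1: append a column of b's
    A = [row + ["a"] * (2 * k) for row in A]     # t2 (I > 0): 2k columns of a's
    A = A + [["a"] * len(A[0])]                  # t2 (I = 0): append a row of a's
    return A


A = [["a"]]                                      # k = 1: a 1 x 1 array of a's
for k in range(1, 5):
    assert (len(A), len(A[0])) == (k, k * k)     # size k x k**2 at each stage
    A = next_array(A)
```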
The \((k+1)\)-th array is obtained from the \(k\)-th array by adding a column of b's, followed by an array of size \(k \times 2k\) of a's, and finally a row of a's. The size of the \((k+1)\)-th array is therefore \((k+1) \times (k^2 + 2k + 1) = (k+1) \times (k+1)^2\). The language \(CPn_2AL\) consists of arrays of size \(n \times n^2\).

III. CREATING AND AUTHENTICATING AN ACCESS CODE

The AGCPn defined in the previous section is used to create an access code for network security. For information security, string passwords made up of alphanumeric and special characters are used for login authentication, and constant effort goes into creating secure passwords that are not easy to guess. In spite of all the security provided, information still gets hacked, since string passwords are easy to crack and tools exist for doing so. In this section a code in the form of a matrix is generated by an AGCPn. In generating the array password the user decides three different factors. First, the user chooses the array size: the row size (m) and the column size (n). Second, the user decides how the characters are entered into the array: row-wise (R) or column-wise (C). Third, the user decides every character of the array. Hence, when creating the access code, the user chooses the values of all the parameters (m, n, R/C); every time the user logs into the website, all these parameters have to match, otherwise access is denied.

Two different Petri net models are used to generate the access code. In the first model, the array elements are entered one by one as single characters, without using the "Tab" or "Enter" keys.
In the second model, a blank array of the required dimensions is created, and the array elements are entered one by one as single characters, with the "Tab" key pressed between every two characters to move the control to the next position in the array. Flow charts give the step-by-step procedure for calling the Petri net. The first time the net is called, to generate the access code, the required parameters are supplied; using the parameters and the characters entered, the access code is generated and returned to the flow chart, and the access code is saved for future reference. The second flow chart shows the steps involved in allowing or refusing the user access to the website.

In the flow chart in Figure 2, the values of m and n are taken from the user to generate the array. The user has two options for entering the elements of the array, row-wise or column-wise, and this order is also taken as the specified order (R/C) from the user. The flow chart supplies the values of the parameters m and n and calls RPn(m, n) or CPn(m, n) depending on the choice of order. The relevant Petri net generates the access code and sends it back to be stored. The user can use any of the 95 printable characters in the array password; control characters cannot be used, and if any other character is used the array is not generated.

**Figure 2 Working model for calling the relevant Petri net.**

AGCPn to generate the access code: If R is chosen by the user, the flow chart calls RPn(m, n), passing the values of m and n as parameters. The Petri net in Example 2 is used to generate the array password.

**Example 2** RPn(m, n) = \((\Sigma, P, V, T, I, O, TA, \phi, F)\); the input and output functions are shown in Figure 3. The colour set comprises two-dimensional arrays, strings and integers, with the arrays and strings made up of elements of \( \Sigma \).
\( \lambda \) denotes the empty string. The initial attributes of the tokens are \( TA = \{(S, P_1, \lambda), (C, P_2, C), (i, P_3, 0), \cdots \} \). The partial function $\varphi$ states how the attributes of each token change when an enabled transition fires, and $F = \{P_8\}$: the final place stores the access code, which is an $m \times n$ array.

Initially there is an empty string in $P_1$, $i = 1$ and $j = 0$. Suppose the user is entering an access code made up of two rows and three columns. When the first character is entered, the condition for firing $t_1$ is satisfied, so the character is joined column-wise with $\lambda$ and the result is put in $P_1$; the value of $i$ is retained but the value of $j$ is incremented by 1. The same condition holds until all three characters of the first row have been entered and joined into a string. At this stage $j = 3$, so the condition for firing $t_1$ no longer holds but the condition for firing $t_2$ does. When $t_2$ fires, the string from $P_1$ is put into $P_7$ and stored as $R[1]$; the value of $i$ is incremented by 1, $j$ is reset to 0, and the empty string is put back in $P_1$. The whole process is repeated, and the second row is built from the next three characters put in $P_2$. When $j$ again reaches three, with $i = 2$, $t_2$ fires, moving the string into $P_7$, where it is stored as $R[2]$. Now $i = 3$, which is greater than $m$, so $t_3$ fires: $R[1]$ and $R[2]$ are joined row-wise and the result is moved to the place $P_8$. The array that arrives in $P_8$ is called the access code. If any control character other than the 95 printable characters is used, the array does not reach the output place.

If C is chosen by the user, the flow chart calls CPn(m, n), passing the values of m and n as parameters. In this case the Petri net in Example 3 is used to generate the array password.
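What the row-wise net RPn(m, n) computes can be sketched as a plain function; this is our own simulation of the firing sequence described above, not the paper's net.

```python
# Sketch of the row-wise generation: consume m*n characters one at a time,
# build each row as a string (t1/t2), then collect the rows (t3). Returns
# None if any character falls outside the range the paper uses (ASCII 33-126).

def row_wise_access_code(chars, m, n):
    if len(chars) != m * n:
        return None
    rows, current = [], ""
    for ch in chars:
        if not (33 <= ord(ch) <= 126):   # reject control characters
            return None
        current += ch                    # t1: join the character to the row
        if len(current) == n:            # t2: row complete, store it as R[i]
            rows.append(current)
            current = ""
    return rows                          # t3: rows joined into the access code


code = row_wise_access_code(list("aB9$x!"), 2, 3)
print(code)   # ['aB9', '$x!']
```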
**Example 3** CPn(m, n) = \((\Sigma, P, V, T, I, O, TA, \varphi, F)\), where $\Sigma$ is the set of all alphanumeric and special characters, $P = \{P_1, P_2, \ldots, P_8\}$ and $T = \{t_1, t_2, t_3\}$; the input and output functions are shown in Figure 4. The colour set comprises two-dimensional arrays, strings and integers, and $\lambda$ denotes the empty string. Initial attributes of the tokens: The partial function $\varphi$ states how the attributes of each token change when an enabled transition fires, and $F = \{P_8\}$: the final place stores the access code, which is an $m \times n$ array.

In this example, the first column is filled character by character. As each character is taken, the value of $i$ increases until it reaches $m$, while $j$ is not incremented; once $i$ reaches $m$, $i$ is reset to 0 and $j$ is incremented. Finally, all the columns reaching $P_7$ are joined column-wise. If the ASCII values of the characters used are in the range 33 to 126, the access code is generated; otherwise no array is sent from the net, and the user has to start over and re-enter the characters. When all $mn$ characters are entered properly, the generated access code is sent back and stored as the login credential of the user.

When the user later tries to access the website, the user is asked to enter the size of the code along with the order in which the characters are entered into the matrix. If any of them does not match, access is denied. If these details match what the user has already saved, the array entered is checked against the stored access code; if the arrays match, access is authorized. The flow chart in Figure 5 shows the steps involved in checking the credentials: if the access code is entered correctly, access is granted, otherwise it is denied.
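The three-factor check of Figure 5 can be sketched as follows; this is our own illustration of the described flow, and the function and key names are assumptions.

```python
# Sketch of the authentication flow: row size, column size and entry order
# must all match before the array itself is compared.

def authenticate(stored, attempt):
    """stored/attempt: dicts with keys 'm', 'n', 'order' ('R' or 'C'), 'code'."""
    for factor in ("m", "n", "order"):
        if stored[factor] != attempt[factor]:
            return False                  # any factor mismatch denies access
    return stored["code"] == attempt["code"]


stored = {"m": 2, "n": 3, "order": "R", "code": ["aB9", "$x!"]}
print(authenticate(stored, dict(stored)))              # True
print(authenticate(stored, {**stored, "order": "C"}))  # False
```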
In Examples 2 and 3, the entries of the array are taken one by one in the place $P_2$ of the net, without using a "Tab" or "Enter" key. Once the three parameters are cross-checked, the array characters are entered just like a string password; the only difference between a string password and an array password is that three factors are involved in authentication. In Example 4 the array characters are not taken as a single string: between any two characters the "Tab" key is used. A blank array of the required size is taken as the start array, and any of the 95 printable characters can be used to fill it. Suppose the user fills the array row-wise. The first $n$ printable characters are stored in the first row, with the "Tab" key pressed between any two of these $n$ characters; this moves the control horizontally. Once all $n$ positions are filled, the control moves to the first element of the next row, and this process is repeated until all the rows of the array are filled. The "End" key is used last, to indicate the end of the array.

```plaintext
Color Int = int;
Color C   = Char;
Color S   = string;
Color Arr = ARRAY2;
Val I = 0;
Val j = 1;
Val S = '' ; (Empty String)
```

In the next model, only one net is used to generate the access code, irrespective of how the user enters the characters. If the user enters the characters row-wise, the values of m and n are passed as the row size and column size respectively, and the access code generated by the net is stored as the array password. If the user enters the characters column-wise, the values of m and n are passed as the column size and row size respectively; in this case the access code generated by the net is of size n x m, and the transpose of the received array is stored as the array password. If the full array is not entered by the user, the net sends an error message, which is a string.
Hence, if the return value is an error message, the user has to re-create the array password. The flow chart in Figure 6 shows the steps involved. A blank matrix with m rows and n columns is denoted \( B(m\lambda, n\lambda) \), and a blank entry is denoted \( \lambda \); all mn entries of \( B(m\lambda, n\lambda) \) are blank:
\[ B(2\lambda, 3\lambda) = \begin{bmatrix} \lambda & \lambda & \lambda \\ \lambda & \lambda & \lambda \end{bmatrix}. \]
To start with, a blank matrix of size m x n is in the place \( P_1 \), and the array is built by the user; initially the values of i and j are 1. The following keys are used: any of the 95 printable characters; "Tab" as a separator between any two characters; and "End" to indicate the end of the array password. Each printable character is stored in the (i, j)-th position of the blank array in the place \( P_1 \). When the "Tab" key is pressed, the value of \( j \) is checked: if \( j \) is less than \( n \), it is incremented by 1; if \( j \) equals \( n \), then \( i \) is incremented by 1 and \( j \) is reset to 1. This process is repeated until the array is filled with printable characters; by pressing the "End" key the user signals that all the characters have been entered.

**Example 4** The parameters \( m \) and \( n \) are given by the flow chart which invokes the Petri net. BAPn(m, n) = \((\Sigma, P, V, T, I, O, TA, \varphi, F)\), where \( \Sigma \) is the set of all 95 printable characters together with "Tab", \( P = \{P_1, P_2, \ldots, P_8\} \) and \( T = \{t_1\} \); the input and output functions are shown in Figure 7. The colour set comprises a two-dimensional array, strings, characters and integers: the two-dimensional array is over \( \Sigma \), a character can be any element of \( \Sigma \) or the "Tab" key, and strings are made up of \( \Sigma \).
The initial attributes of the tokens are \[ TA = \{(B, P_1, B), (C, P_2, C), (i, P_3, 1), (j, P_4, 1), (r, P_5, m), (c, P_6, n)\}. \] The partial function \( \varphi \) states how the attributes of each token change when an enabled transition fires, and \( F = \{P_7, P_8\} \).

To start with, a blank array of size \( m \times n \) is in the place \( P_1 \), and the array is built by the user; initially the values of \( i \) and \( j \) are 1. At any given time, only one character is entered by the user. The first character is stored in \( b_{11} \); the next keystroke is "Tab", so the value of \( j \) is incremented, the head moves to \( b_{12} \), and the next character is stored in \( b_{12} \). Once \( j \) reaches the value of \( n \), \( i \) is incremented by 1 and \( j \) is reset to 1. This continues until the "Tab" key is pressed at \( i = m \) and \( j = n \); the blank matrix is then filled with all the required characters and the array is moved to \( P_7 \) as the access code. In any other case, an error message is put in \( P_8 \). If the user enters all \( mn \) printable characters with "Tab" between every two of them, the access code is formed as an array; in any other case the array does not reach \( P_7 \), and only an error message reaches the place \( P_8 \). The flow chart in Figure 6 receives either an array (the access code) or a string (an error message) from the net in Figure 7. If an error message is received, the user has to start over to form the access code; until all \( mn \) printable characters are in the positions of the array, the net keeps returning error messages. The flow chart in Figure 5 is then used once again to authenticate access.

IV. CONCLUSION

Digital technology forces people to use various websites on a day-to-day basis, and passwords are the key to accessing any application on a computer or mobile phone.
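The single-net model of Example 4 can be sketched as a function over a keystroke stream; this is our own simulation of the described behaviour, and the key encodings ("\t" for Tab, "END" for the End key) are assumptions.

```python
# Sketch of Example 4: fill a blank m x n array from a keystroke stream in
# which "\t" separates characters and "END" marks the end. Returns the
# filled array, or an error string in any other case.

def blank_array_password(keys, m, n):
    grid = [[None] * n for _ in range(m)]   # the blank matrix B(m-lambda, n-lambda)
    i = j = 0                               # 0-based head position (i, j)
    expect_char = True                      # characters and Tabs alternate
    for k in keys:
        if k == "END":
            break
        if expect_char:
            if len(k) != 1 or not (33 <= ord(k) <= 126):
                return "error: not a printable character"
            grid[i][j] = k
        else:
            if k != "\t":
                return "error: expected Tab between characters"
            j += 1
            if j == n:                      # end of row: head moves down
                i, j = i + 1, 0
        expect_char = not expect_char
    if any(cell is None for row in grid for cell in row):
        return "error: array not completely filled"
    return grid


keys = ["a", "\t", "B", "\t", "9", "\t", "$", "\t", "x", "\t", "!", "END"]
print(blank_array_password(keys, 2, 3))   # [['a', 'B', '9'], ['$', 'x', '!']]
```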
Alphanumeric passwords, graphical passwords and biometric authentication are the different types of validation process available at present, with alphanumeric passwords the most common mode of authentication. Since information security is always a concern for network users, an array password can be used instead of a string password. A Petri net model is used to generate an access code: Coloured Petri net models that generate array languages are defined, and token attributes are used to impose additional conditions for enabling transitions. The entries of the array password are taken from the user in two different ways: either the "Tab" key is used between every two characters, moving the control accordingly, or the characters are entered without the "Tab" key. Petri net models have been designed for both cases. When an array password is used, the possibility of the information being hacked is considerably lower.
Chapter 4: More NP-Complete Problems
CS 573: Algorithms, Fall 2013. September 5, 2013

4.0.0.1 Recap
NP: languages that have polynomial-time certifiers/verifiers. A language $L$ is NP-Complete $\iff$
- $L$ is in NP
- for every $L'$ in NP, $L' \leq_P L$
$L$ is NP-Hard if for every $L'$ in NP, $L' \leq_P L$.
Theorem 4.0.1 (Cook-Levin). Circuit-SAT and SAT are NP-Complete.

4.0.0.2 Recap contd
Establish NP-Completeness via reductions:
- SAT $\leq_P$ 3-SAT, and hence 3-SAT is NP-Complete
- 3-SAT $\leq_P$ Independent Set (which is in NP), and hence Independent Set is NP-Complete
- Vertex Cover is NP-Complete
- Clique is NP-Complete
- Set Cover is NP-Complete

4.0.0.3 Today
Prove:
- the Hamiltonian Cycle problem is NP-Complete
- 3-Coloring is NP-Complete
- Subset Sum is NP-Complete

4.1 NP-Completeness of Hamiltonian Cycle
4.1.1 Reduction from 3SAT to Hamiltonian Cycle
4.1.1.1 Directed Hamiltonian Cycle
Input: a directed graph $G = (V, E)$ with $n$ vertices.
Goal: does $G$ have a Hamiltonian cycle, i.e. a cycle that visits every vertex in $G$ exactly once?

4.1.1.2 Directed Hamiltonian Cycle is NP-Complete
- Directed Hamiltonian Cycle is in NP. Certificate: a sequence of vertices. Certifier: check that every vertex (except the first) appears exactly once, and that consecutive vertices are connected by a directed edge.
- Hardness: we show 3-SAT $\leq_P$ Directed Hamiltonian Cycle.

4.1.1.3 Reduction
Given a 3-SAT formula $\varphi$, create a graph $G_\varphi$ such that
- $G_\varphi$ has a Hamiltonian cycle if and only if $\varphi$ is satisfiable
- $G_\varphi$ is constructible from $\varphi$ by a polynomial-time algorithm $\mathcal{A}$
Notation: $\varphi$ has $n$ variables $x_1, x_2, \ldots, x_n$ and $m$ clauses $C_1, C_2, \ldots, C_m$.
4.1.1.4 Reduction: First Ideas - Viewing SAT: Assign values to \( n \) variables, and each clause has 3 ways in which it can be satisfied. - Construct graph with \( 2^n \) Hamiltonian cycles, where each cycle corresponds to some boolean assignment. - Then add more graph structure to encode constraints on assignments imposed by the clauses. 4.1.1.5 The Reduction: Phase I - Traverse path $i$ from left to right if and only if $x_i$ is set to true. - Each path has $3(m + 1)$ nodes where $m$ is the number of clauses in $\varphi$; nodes numbered from left to right (1 to $3m + 3$) 4.1.1.6 The Reduction: Phase II - Add vertex $c_j$ for clause $C_j$. $c_j$ has an edge from vertex $3j$ and to vertex $3j + 1$ on path $i$ if $x_i$ appears in clause $C_j$, and has an edge from vertex $3j + 1$ and to vertex $3j$ if $\neg x_i$ appears in $C_j$. (Figure: the variable paths with clause vertices spliced in between positions $3j$ and $3j + 1$.) 4.1.1.7 Correctness Proof Proposition 4.1.1. $\varphi$ has a satisfying assignment $\iff G_\varphi$ has a Hamiltonian cycle. Proof: ⇒ Let $\alpha$ be a satisfying assignment for $\varphi$. Define a Hamiltonian cycle as follows - If $\alpha(x_i) = 1$ then traverse path $i$ from left to right - If $\alpha(x_i) = 0$ then traverse path $i$ from right to left. - For each clause, the path of at least one variable is in the “right” direction to splice in the node corresponding to the clause.
4.1.1.8 Hamiltonian Cycle $\Rightarrow$ Satisfying assignment Proof continued Suppose $\Pi$ is a Hamiltonian cycle in $G_\varphi$ - If $\Pi$ enters $c_j$ (vertex for clause $C_j$) from vertex $3j$ on path $i$ then it must leave the clause vertex on the edge to $3j + 1$ on the same path $i$ - If not, then the only unvisited neighbor of $3j + 1$ on path $i$ is $3j + 2$ - Thus, $3j + 1$ would not have two unvisited neighbors (one to enter from, and the other to leave), as a Hamiltonian cycle requires $\blacksquare$ • Similarly, if $\Pi$ enters $c_j$ from vertex $3j + 1$ on path $i$ then it must leave the clause vertex $c_j$ on the edge to $3j$ on path $i$ 4.1.1.9 Example 4.1.1.10 Hamiltonian Cycle $\implies$ Satisfying assignment (contd) • Thus, the vertices visited immediately before and after $c_j$ are connected by an edge • We can remove $c_j$ from the cycle, and get a Hamiltonian cycle in $G - c_j$ • Consider the Hamiltonian cycle in $G - \{c_1, \ldots, c_m\}$; it traverses each path in only one direction, which determines the truth assignment 4.1.2 Hamiltonian cycle in undirected graph 4.1.2.1 (Undirected) Hamiltonian Cycle Problem 4.1.2 (Undirected Hamiltonian Cycle). Input: Given undirected graph $G = (V, E)$. Goal: Does $G$ have a Hamiltonian cycle? That is, is there a cycle that visits every vertex exactly once (except the start/end vertex)? 4.1.2.2 NP-Completeness Theorem 4.1.3. The Hamiltonian cycle problem for undirected graphs is NP-Complete. Proof: • The problem is in NP; proof left as exercise. • Hardness proved by reducing Directed Hamiltonian Cycle to this problem.
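The reduction sketched in the next section (split each directed vertex $v$ into $v_{in}$, $v$, $v_{out}$) is mechanical; a hedged sketch in Python, with vertices of $G'$ represented as (vertex, tag) tuples purely for illustration:

```python
def directed_to_undirected(V, E):
    """Reduce Directed Hamiltonian Cycle to undirected Hamiltonian Cycle.

    Each vertex v becomes the path v_in -- v_mid -- v_out; a directed
    edge (a, b) becomes the undirected edge {a_out, b_in}.
    """
    Vp = [(v, tag) for v in V for tag in ("in", "mid", "out")]
    # Internal edges forcing any Hamiltonian cycle through v_in, v_mid, v_out.
    Ep = {frozenset({(v, "in"), (v, "mid")}) for v in V}
    Ep |= {frozenset({(v, "mid"), (v, "out")}) for v in V}
    # One undirected edge per original directed edge.
    Ep |= {frozenset({(a, "out"), (b, "in")}) for (a, b) in E}
    return Vp, Ep
```

The construction is linear in $|V| + |E|$, so the reduction runs in polynomial time.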
4.1.2.3 Reduction Sketch **Goal:** Given directed graph $G$, construct undirected graph $G'$ such that $G$ has a Hamiltonian cycle if and only if $G'$ has a Hamiltonian cycle Reduction - Replace each vertex $v$ by 3 vertices: $v_{in}$, $v$, and $v_{out}$, joined by the undirected edges $(v_{in}, v)$ and $(v, v_{out})$ - A directed edge $(a, b)$ is replaced by the undirected edge $(a_{out}, b_{in})$ 4.1.2.4 Reduction: Wrapup - The reduction is polynomial time (exercise) - The reduction is correct (exercise) 4.2 NP-Completeness of Graph Coloring 4.2.0.5 Graph Coloring **Graph Coloring** - **Instance:** $G = (V, E)$: Undirected graph, integer $k$. - **Question:** Can the vertices of the graph be colored using $k$ colors so that vertices connected by an edge do not get the same color? 4.2.0.6 Graph 3-Coloring **3 Coloring** - **Instance:** $G = (V, E)$: Undirected graph. - **Question:** Can the vertices of the graph be colored using 3 colors so that vertices connected by an edge do not get the same color? 4.2.0.7 Graph Coloring **Observation:** If $G$ is colored with $k$ colors then each color class (nodes of the same color) forms an independent set in $G$. Thus, $G$ can be partitioned into $k$ independent sets $\iff G$ is $k$-colorable. - Graph 2-Coloring can be decided in polynomial time. - $G$ is 2-colorable $\iff G$ is bipartite! There is a linear time algorithm to check if $G$ is bipartite using BFS (we saw this earlier). 4.2.1 Problems related to graph coloring 4.2.1.1 Graph Coloring and Register Allocation Register Allocation Assign variables to (at most) $k$ registers such that variables needed at the same time are not assigned to the same register. Interference Graph Vertices are variables, and there is an edge between two vertices if the two variables are “live” at the same time.
Observations - [Chaitin] Register allocation problem is equivalent to coloring the interference graph with $k$ colors - Moreover, 3-COLOR $\leq_P$ k-Register Allocation, for any $k \geq 3$ 4.2.1.2 Class Room Scheduling Given $n$ classes and their meeting times, are $k$ rooms sufficient? Reduce to Graph $k$-Coloring problem Create graph $G$ - a node $v_i$ for each class $i$ - an edge between $v_i$ and $v_j$ if classes $i$ and $j$ conflict Exercise: $G$ is $k$-colorable $\iff$ $k$ rooms are sufficient 4.2.1.3 Frequency Assignments in Cellular Networks Cellular telephone systems that use Frequency Division Multiple Access (FDMA) (example: GSM in Europe and Asia and AT&T in USA) - Breakup a frequency range $[a, b]$ into disjoint bands of frequencies $[a_0, b_0], [a_1, b_1], \ldots, [a_k, b_k]$ - Each cell phone tower (simplifying) gets one band - Constraint: nearby towers cannot be assigned same band, otherwise signals will interfere Problem: given $k$ bands and some region with $n$ towers, is there a way to assign the bands to avoid interference? Can reduce to $k$-coloring by creating interference/conflict graph on towers. 4.2.2 Showing hardness of 3 COLORING 4.2.2.1 3-Coloring is NP-Complete - 3-Coloring is in NP. - Certificate: for each node a color from $\{1, 2, 3\}$. - Certifier: Check if for each edge $(u, v)$, the color of $u$ is different from that of $v$. - Hardness: We will show 3-SAT $\leq_P$ 3-Coloring. 4.2.2.2 Reduction Idea Start with \textbf{3SAT} formula (i.e., 3CNF formula) $\varphi$ with $n$ variables $x_1, \ldots, x_n$ and $m$ clauses $C_1, \ldots, C_m$. Create graph $G_\varphi$ such that $G_\varphi$ is 3-colorable $\iff$ $\varphi$ is satisfiable (A) Need to establish truth assignment for $x_1, \ldots, x_n$ via colors for some nodes in $G_\varphi$. (B) Create triangle with nodes \textbf{true}, \textbf{false}, \text{base}. (C) For each variable $x_i$ two nodes $v_i$ and $\bar{v}_i$ connected in a triangle with the special node \text{base}. 
(D) If the graph is 3-colored, either $v_i$ or $\bar{v}_i$ gets the same color as \textbf{true}. Interpret this as a truth assignment to $v_i$. (E) Need to add constraints to ensure clauses are satisfied (next phase). 4.2.2.3 Figure 4.2.2.4 Clause Satisfiability Gadget For each clause $C_j = (a \lor b \lor c)$, create a small gadget graph - gadget graph connects to nodes corresponding to $a, b, c$ - needs to implement OR OR-gadget-graph: 4.2.2.5 OR-Gadget Graph Property: if $a, b, c$ are colored \textbf{false} in a 3-coloring then the output node of the OR-gadget has to be colored \textbf{false}. Property: if one of $a, b, c$ is colored \textbf{true} then the OR-gadget can be 3-colored such that the output node of the OR-gadget is colored \textbf{true}. 4.2.2.6 Reduction (A) Create triangle with nodes true, false, base. (B) For each variable $x_i$ two nodes $v_i$ and $\bar{v}_i$ connected in a triangle with the above base vertex. (C) For each clause $C_j = (a \lor b \lor c)$, add OR-gadget graph with input nodes $a, b, c$ and connect the output node of the gadget to both false and base. 4.2.2.7 Reduction Claim 4.2.1. There is no legal 3-coloring of the above graph (with the coloring of nodes $T, F, B$ fixed) in which $a, b, c$ are all colored false. If any of $a, b, c$ is colored True then there is a legal 3-coloring of the above graph. 4.2.2.8 3 coloring of the clause gadget 4.2.2.9 Reduction Outline Example 4.2.2. $\varphi = (u \lor \neg v \lor w) \land (v \lor x \lor \neg y)$ 4.2.2.10 Correctness of Reduction $\varphi$ is satisfiable implies $G_\varphi$ is 3-colorable (A) If $x_i$ is assigned 1, color $v_i$ true and $\bar{v}_i$ false. (B) For each clause $C_j = (a \lor b \lor c)$ at least one of $a, b, c$ is colored True. The OR-gadget for $C_j$ can be 3-colored such that the output is True. $G_\varphi$ is 3-colorable implies $\varphi$ is satisfiable (A) If $v_i$ is colored true then set $x_i$ to be 1; this is a legal truth assignment. (B) Consider any clause $C_j = (a \lor b \lor c)$. It cannot be that $a, b, c$ are all colored false.
If so, output of OR-gadget for $C_j$ has to be colored false but output is connected to base and false! 4.2.3 Graph generated in reduction... 4.2.3.1 ... from 3SAT to 3COLOR 4.3 Hardness of Subset Sum 4.3.0.2 Subset Sum **Subset Sum** *Instance*: $S$ - set of positive integers, $t$: an integer number (Target) *Question*: Is there a subset $X \subseteq S$ such that $\sum_{x \in X} x = t$? Claim 4.3.1. *Subset Sum* is NP-Complete. 4.3.0.3 Vector Subset Sum We will prove following problem is NP-Complete... **Vec Subset Sum** *Instance*: $S$ - set of $n$ vectors of dimension $k$, each vector has non-negative numbers for its coordinates, and a target vector $\vec{t}$. *Question*: Is there a subset $X \subseteq S$ such that $\sum_{\vec{x} \in X} \vec{x} = \vec{t}$? Reduction from 3SAT. 4.3.1 Vector Subset Sum 4.3.1.1 Handling a single clause Think about vectors as being lines in a table. **First gadget** Selecting between two lines. <table> <thead> <tr> <th>Target</th> <th>??</th> <th>??</th> <th>01</th> <th>??</th> </tr> </thead> <tbody> <tr> <td>$a_1$</td> <td>??</td> <td>??</td> <td>01</td> <td>??</td> </tr> <tr> <td>$a_2$</td> <td>??</td> <td>??</td> <td>01</td> <td>??</td> </tr> </tbody> </table> Two rows for every variable $x$: selecting either $x = 0$ or $x = 1$. 4.3.1.2 Handling a clause... We will have a column for every clause... 
<table> <thead> <tr> <th>numbers</th> <th>...</th> <th>$C \equiv a \lor b \lor \bar{c}$</th> <th>...</th> </tr> </thead> <tbody> <tr> <td>$a$</td> <td>...</td> <td>01</td> <td>...</td> </tr> <tr> <td>$\bar{a}$</td> <td>...</td> <td>00</td> <td>...</td> </tr> <tr> <td>$b$</td> <td>...</td> <td>01</td> <td>...</td> </tr> <tr> <td>$\bar{b}$</td> <td>...</td> <td>00</td> <td>...</td> </tr> <tr> <td>$c$</td> <td>...</td> <td>00</td> <td>...</td> </tr> <tr> <td>$\bar{c}$</td> <td>...</td> <td>01</td> <td>...</td> </tr> <tr> <td>$C$ fix-up 1</td> <td>000</td> <td>07</td> <td>000</td> </tr> <tr> <td>$C$ fix-up 2</td> <td>000</td> <td>08</td> <td>000</td> </tr> <tr> <td>$C$ fix-up 3</td> <td>000</td> <td>09</td> <td>000</td> </tr> <tr> <td><strong>TARGET</strong></td> <td></td> <td>10</td> <td></td> </tr> </tbody> </table> 4.3.1.3 3SAT to Vec Subset Sum (Table: the full construction for a 3SAT instance over variables $a, b, c, d$; two rows per variable, three fix-up rows per clause, one column per variable and one per clause, with the TARGET vector in the last row.) 4.3.1.4 Vec Subset Sum to Subset Sum <table> <thead> <tr> <th>numbers</th> </tr> </thead> <tbody> <tr> <td>010000000001</td> </tr> <tr> <td>010000000000</td> </tr> <tr> <td>000100000001</td> </tr> <tr> <td>000100000010</td> </tr> <tr> <td>000001000100</td> </tr> <tr> <td>000001000001</td> </tr> <tr> <td>000000010000</td> </tr> <tr> <td>000000001000</td> </tr> <tr> <td>000000001010</td> </tr> <tr> <td>000000000007</td> </tr> <tr> <td>000000000008</td> </tr> <tr> <td>000000000009</td> </tr> <tr> <td>000000000070</td> </tr> <tr> <td>000000000800</td> </tr> <tr> <td>000000000900</td> </tr> <tr> <td>010101011010</td> </tr> </tbody> </table> 4.3.1.5 Other NP-Complete Problems - 3-Dimensional Matching - Subset Sum Read book. 4.3.1.6 Need to Know NP-Complete Problems - 3SAT. - Circuit-SAT. - Independent Set. - Vertex Cover. - Clique. - Set Cover. - Hamiltonian Cycle (in Directed/Undirected Graphs). - 3Coloring. - 3-D Matching. - Subset Sum. 4.3.1.7 Subset Sum and Knapsack Subset Sum Problem: Given $n$ integers $a_1, a_2, \ldots, a_n$ and a target $B$, is there a subset $S$ of $\{a_1, \ldots, a_n\}$ such that the numbers in $S$ add up precisely to $B$? Subset Sum is **NP-Complete** (see book). Knapsack: Given $n$ items with item $i$ having size $s_i$ and profit $p_i$, a knapsack of capacity $B$, and a target profit $P$, is there a subset $S$ of items that can be packed in the knapsack such that the profit of $S$ is at least $P$?
Show Knapsack problem is **NP-Complete** via reduction from Subset Sum (exercise). 4.3.1.8 Subset Sum and Knapsack Subset Sum can be solved in $O(nB)$ time using dynamic programming (exercise). Implies that the problem is hard only when the numbers $a_1, a_2, \ldots, a_n$ are exponentially large compared to $n$, i.e., when representing each $a_i$ requires a number of bits polynomial in $n$. *Number problems* of the above type are said to be **weakly NP-Complete**.
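The $O(nB)$ dynamic program mentioned above can be sketched as a boolean reachability table; iterating sums in decreasing order ensures each number is used at most once:

```python
def subset_sum(a, B):
    """Decide Subset Sum in O(n*B) time.

    reachable[s] is True iff some subset of the numbers processed
    so far sums to exactly s.
    """
    reachable = [True] + [False] * B
    for x in a:
        # Descending order so x is not reused within the same iteration.
        for s in range(B, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[B]
```

The running time is polynomial in $n$ and the *value* $B$, not in the number of bits of $B$: exactly the pseudo-polynomial behavior that makes Subset Sum only weakly NP-Complete.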
Making RCU Respect Your Device's Battery Lifetime On-The-Job Energy-Efficiency Training For RCU Maintainers Overview - What is RCU? - “The Good Old Days” - Overview of RCU's many variants of energy efficiency - Current state of RCU energy efficiency - Future directions What is RCU? - For an overview, see [http://lwn.net/Articles/262464/](http://lwn.net/Articles/262464/) - For the purposes of this presentation, think of RCU as something that defers work, with one work item per callback - Each callback has a function pointer and an argument - Callbacks are queued on per-CPU lists, invoked after grace period - Invocation can result in OS jitter and real-time latency - Deferring the work a bit longer than needed is OK, deferring too long is bad – but failing to defer long enough is fatal What is RCU? - RCU uses a state machine driven out of the scheduling-clock interrupt to determine when it is safe to invoke callbacks. - Actual callback invocation is done from softirq. RCU Area of Applicability - **Read-Mostly, Stale & Inconsistent Data OK** (RCU Works Great!!!) - **Read-Mostly, Need Consistent Data** (RCU Works OK) - **Read-Write, Need Consistent Data** (RCU Might Be OK...) - **Update-Mostly, Need Consistent Data** (RCU is Really Unlikely to be the Right Tool For The Job, But SLAB_DESTROY_BY_RCU Is A Possibility) Use the right tool for the job!!! For More Information on RCU... - Documentation/RCU in the Linux® kernel source code - “User-Level Implementations of Read-Copy Update” (Mathieu Desnoyers et al.) - http://doi.ieeecomputersociety.org/10.1109/TPDS.2011.159 - “The RCU API, 2010 Edition” - http://lwn.net/Articles/418853/ - “What is RCU” LWN series - http://lwn.net/Articles/262464/ (What is RCU, Fundamentally?) - http://lwn.net/Articles/263130/ (What is RCU's Usage?) - http://lwn.net/Articles/264090/ (What is RCU's API?)
- “Introducing technology into the Linux kernel: a case study” - http://doi.acm.org/10.1145/1400097.1400099 - “Meet the Lockers” (Neil Brown) - http://lwn.net/Articles/453685/ - “Read-Copy Update” (2001 OLS paper, still used in a number of college courses) - Plus more at: http://www.rdrop.com/users/paulmck/RCU RCU: Tapping The Awesome Power of Procrastination For Two Decades!!! “The Good Old Days” Not Much “Good Old Days” Code Left in RCU Why did I wait so long to conserve energy??? Why Did I Wait Until 2011 to Conserve Energy? - The fact is that I didn't wait that long!!! - But RCU's energy-efficiency code is unusual in that it has been rewritten a great many times - RCU has been concerned about energy efficiency for about ten years - Not much energy-efficiency code in RCU in the 1990s: Why? - Other minor changes: - Expedited grace periods - Additions to rcutorture - Additional list-traversal primitives - Upgrading real-time response - Plus the usual list of fixes, improvements, and adaptations “The Good Really Old Days” - RCU used by DYNIX/ptx: Heavy database servers - Used for a number of applications: - Fraud detection in large financial systems - Inventory monitoring/control for large retail firms - Rental car tracking/billing - Manufacturing coordination/control - Including manufacturing of airliners Airliner Manufacturing Plants Had Lots of These, At About 40KW Each Author: William M. Plate Jr. (Public Domain) And if You Think That *Welders* Are Power-Hungry... If You Are Running a Bunch of Welders or Turbines...
- Not only are you not going to care much about RCU's contribution to power consumption... - You are not going to care much about the whole server's contribution to power consumption! - But of course things look very different for small battery-powered devices... RCU's Many Energy-Efficiency Implementations Initial RCU Did Have One Energy-Efficiency Feature - Initial DYNIX/ptx RCU had light-weight read-side primitives - “Free” is a very good price!!! - This meant that the system returned to idle more quickly than it would with heavier-weight synchronization primitives - But 1990s systems consumed more power idle than when running! - This was because the idle loop fit into cache, thus allowing the CPU to execute useless instructions at a very high rate - But today's CPUs have many energy-efficiency features - And have very low idle power, especially for long-duration idle periods - So why does RCU need to worry about energy efficiency??? - After all, it is just a synchronization primitive!!! RCU Driven From Scheduling Clock Interrupt What RCU Did (2003) Scheduling-Clock Interrupts RCU's Use of Scheduling-Clock Interrupts Wastes Power and Prevents Deep CPU Sleep States What Is Required No Scheduling-Clock Interrupts, CPU Enters Deep Sleep Which is why RCU has a dyntick-idle subsystem!
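The callback model from the opening slides (one function pointer plus argument per callback, queued and invoked only after a grace period has elapsed) can be caricatured in a few lines; this is a toy single-threaded sketch of the queue-then-wait-then-invoke shape only, and resembles nothing in the actual kernel implementation:

```python
class ToyRCU:
    """Toy single-threaded model of RCU-style callback deferral."""

    def __init__(self):
        self.grace_period = 0
        self.pending = []  # (grace period at registration, func, arg)

    def call(self, func, arg):
        # Defer the work: it must not run before the current grace period ends.
        self.pending.append((self.grace_period, func, arg))

    def end_grace_period(self):
        # All callbacks registered before this point are now safe to invoke.
        self.grace_period += 1
        ready = [(f, a) for g, f, a in self.pending if g < self.grace_period]
        self.pending = [p for p in self.pending if p[0] >= self.grace_period]
        for f, a in ready:
            f(a)
```

Deferring a bit longer than needed is harmless here, just as the slides say; invoking before the grace period ends would be the fatal case.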
RCU and Dyntick Idle (AKA CONFIG_NO_HZ=y) - List of implementations: - 2004: Dyntick-idle bit vector - Manfred Spraul locates theoretical bug - A few months before the mainframe guys encounter it - But after it had been in-tree for *four years* - 2008: -rt version (with Steven Rostedt) - Very complex: http://lwn.net/Articles/279077/ - 2009: Separate out NMI accounting - Greatly simplified: No proof of correctness required ;-) RCU and Dyntick Idle as of Early 2010 Dyntick-Idle Mode Enables CPU Deep-Sleep States Enter Dyntick-Idle Mode Scheduling-Clock Interrupts Need to Process RCU Callbacks Before Entering Dyntick-Idle Mode RCU Grace Period Ends So RCU is Perfectly Energy Efficient, Right? - Well, I thought that RCU was very energy efficient - Then in early 2010 I got a call from someone working on a battery-powered multicore system - And he was very upset with RCU - Why? RCU Energy Inefficiency No RCU Read-Side Critical Sections! Enter Dyntick-Idle Mode CPU 0 Scheduling-Clock Interrupts RCU Callbacks Prevent Dyntick-Idle Mode Entry CPU is Draining the Battery For No Good Reason!!!
RCU and Dyntick Idle (AKA CONFIG_NO_HZ=y) - List of implementations: - 2004: Dyntick-idle bit vector - Manfred Spraul locates theoretical bug - A few months before the mainframe guys encounter it - But after it has been in-tree for four years - 2008: -rt version (with Steven Rostedt) - Very complex: http://lwn.net/Articles/279077/ - 2009: Separate out NMI accounting - Greatly simplified: No proof of correctness required - 2010: CONFIG_RCU_FAST_NO_HZ for small systems - Force last CPU into dyntick-idle mode CONFIG_RCU_FAST_NO_HZ No RCU Read-Side Critical Sections! Enter Dyntick-Idle Mode CPU 0 Scheduling-Clock Interrupts All Other CPUs Idle, Grace Period Ends Immediately CPU 1 RCU Callbacks Invoked Immediately So RCU is Perfectly Energy Efficient, Right? - This time, I was wiser: - I suspected CONFIG_RCU_FAST_NO_HZ would be needed on large systems - And someone mentioned this to me in late 2011 - But some things never change: He was very upset with RCU - Why? Might Never Have All But One CPU Dyntick-Idled!!!
The more CPUs you have, the worse this effect gets RCU and Dyntick Idle (AKA CONFIG_NO_HZ=y) List of implementations: - 2004: Dyntick-idle bit vector - Manfred Spraul locates theoretical bug - A few months before the mainframe guys encounter it - But after it has been in-tree for four years - 2008: -rt version (with Steven Rostedt) - Very complex: http://lwn.net/Articles/279077/ - 2009: Separate out NMI accounting - Greatly simplified: No proof of correctness required - 2010: CONFIG_RCU_FAST_NO_HZ for small systems - Force last CPU into dyntick-idle mode - 2012: CONFIG_RCU_FAST_NO_HZ for large systems - Force CPUs with callbacks into dyntick-idle, but wake them up later - (See 2012 ELC presentation) Large-System CONFIG_RCU_FAST_NO_HZ: Before Large-System CONFIG_RCU_FAST_NO_HZ: After Extra work at idle entry might (or might not) save work later Large-System CONFIG_RCU_FAST_NO_HZ: Results - Performance work showed equivocal results - Often a great reduction in wakeups, but not always as large of energy savings as desired - Repeated attempts to drain callbacks on idle entry do not seem to be as helpful as desired - Can CONFIG_RCU_FAST_NO_HZ reduce scheduling-clock ticks with less idle-entry RCU-callback work? 
- To find out, let's look at RCU grace-period and callback handling - Grace period: The period of time that RCU defers work Grace-Period Handling In The Good Really Old Days (Figure: per-CPU grace-period progression, GP # 0 through 3, driven by the scheduling-clock interrupt on CPUs 0-3.) RCU Callback Handling In The Good Really Old Days (Figures: CPU 0's ->nxtlist holds callbacks A → B → C → D → E, segmented by the ->nxttail[] pointers RCU_DONE_TAIL, RCU_WAIT_TAIL, RCU_NEXT_READY_TAIL, and RCU_NEXT_TAIL; successive figures show callbacks being advanced, then invoked, then new callbacks arriving.) Grace-Period Handling And TREE_RCU - Problem: Lock contention - Solution: Apply hierarchy in the form of TREE_RCU Grace-Period Handling And TREE_RCU: 4096 CPUs ```
struct rcu_state
struct rcu_node struct rcu_node
struct rcu_data CPU 15 struct rcu_data CPU 4095 struct rcu_data CPU 0 struct rcu_data CPU 4080
Level 0: 1 rcu_node
Level 1: 4 rcu_nodes
Level 2: 256 rcu_nodes
Total: 261 rcu_nodes
``` Grace-Period Handling And TREE_RCU: 4 CPUs CPU 2 & 3 awareness of grace-period start delayed Grace-Period Handling, TREE_RCU, and dyntick-idle Callbacks registered
here ... ... are guaranteed done here Grace-Period Handling, TREE_RCU, and dyntick-idle Callbacks registered here ... ... are guaranteed done here But CPU 3 is asleep and unaware! Dealing With dyntick-idle Grace-Period Latency - Don't allow CPUs with callbacks to go dyntick-idle - Which would unfortunately put us back where we started - Try to force RCU state machine to drain callbacks - Already tried that, consumes too much CPU for too little benefit - Impose time limit on dyntick-idle sojourns with callbacks - About 6 seconds if all lazy and about 4 jiffies if at least one non-lazy - Seems to work reasonably well: times can be adjusted at runtime - But still greatly degrades grace-period latency for dyntick-idle CPUs - Mark callbacks with corresponding grace-period number Grace-Period Handling, TREE_RCU, and dyntick-idle Callbacks registered here are marked with grace period 2 And will be recognized as ready when CPU 3 awakens But What If No Other CPU Needs Grace Period? Callbacks registered and marked here, but grace period 2 never starts!!! Dealing With dyntick-idle Grace-Period Latency - Don't allow CPUs with callbacks to go dyntick-idle - Which would unfortunately put us back where we started - Try to force RCU state machine to drain callbacks - Already tried that, consumes too much CPU for too little benefit - Impose time limit on dyntick-idle sojourns with callbacks - About 6 seconds if all lazy and about 4 jiffies if at least one non-lazy - Seems to work reasonably well: times can be adjusted at runtime - But still degrades grace-period latency for dyntick-idle CPUs, so... - Mark callbacks with corresponding grace-period number - But cannot start later grace periods, so... 
- Register corresponding grace period with RCU core Grace-Period Handling, TREE_RCU, and dyntick-idle Callbacks registered here are marked with grace period 2 And RCU knows to start grace period 2 And that grace period 3 is not needed Preliminary Energy Efficiency Results - Data courtesy of Dietmar Eggemann and Robin Randhawa of ARM on early-silicon big.LITTLE system - Early results equivocal, but RCU_FAST_NO_HZ might not be helping much on big.LITTLE - Looking into kthread throttling and tuning - Also double-checking experiment setup - Alternative approach: no-CBs CPUs! - But what is big.LITTLE??? ARM big.LITTLE Architecture (Figure: Cortex-A15 “big” cores, roughly twice as fast; Cortex-A7 “LITTLE” cores, roughly 3 times more energy efficient.) ARM big.LITTLE Architecture: Strategy - Run on the LITTLE by default - Run on big if heavy processing power is required - In other words, if feasible, run on LITTLE for efficiency, but run on big if necessary to preserve user experience - This suggests that RCU callbacks should run on LITTLE CPUs ARM big.LITTLE Without no-CBs CPUs (Figure: timeline; the big CPU is busy, waits out the grace period, then invokes the callback itself, while the LITTLE CPU stays busy.) ARM big.LITTLE With no-CBs CPUs (Figure: timeline; after the grace period the callback is invoked on the LITTLE CPU instead of the big CPU.) ARM big.LITTLE With no-CBs CPUs: No Free Lunch (Figure: big CPU and LITTLE CPU timelines, busy states, and the grace period.)
ARM big.LITTLE With no-CBs CPUs: Preliminary Results
- Reference System: RCU_NOCB_CPU=n
- Test System: RCU_NOCB_CPU=y, big CPUs offloaded, kthreads confined to LITTLE CPUs
- Approximate power savings:
  - cyclictest: 10%
  - andebench8: 2%
  - audio: 10%
  - bbench_with_audio: 5%
- Next steps:
  - Get no-CBs CPUs to production quality
  - More adjustment to RCU_FAST_NO_HZ

Offloadable RCU Callbacks: Limitations and Futures
- Probably several remaining bugs in no-CBs CPUs
  - Not yet production quality
- Must reboot to reconfigure no-CBs CPUs
  - Should be just fine for many uses
  - Hopefully also OK for HPC and real-time workloads
- No energy-efficiency code: lazy & non-lazy CBs? Non-lazy!
  - But non-lazy CBs are common case, so deferring interpretation of laziness
- No-CBs CPUs' kthreads not subject to priority boosting
  - Probably not a near-term problem
- Setting all no-CBs CPUs' kthreads to RT prio w/out pinning them: bad!
  - At least on large systems: probably OK near-term, maybe long term as well
- Note: I do not expect no-CBs path to completely replace current CB path

To Probe More Deeply Into no-CBs CPUs...
- "Relocating RCU callbacks" by Jon Corbet
  - http://lwn.net/Articles/522262/
- "What Is New In RCU for Real Time (RTLWS 2012)"
  - Slides 21-on
- "Getting RCU Further Out of the Way (Plumbers 2012)"
- "Cleaning Up Linux's CPU Hotplug For Real Time and Energy Management" (ECRTS 2012)

Lessons Learned and Relearned

Lessons Learned, Old and New
- Workload matters!!!
  - Different workloads have different requirements
  - A given workload's requirements change over time
  - More important, one's understanding of requirements changes over time!
  - Supporting a single workload is easier than supporting many of them
- Energy-efficiency and performance benchmarkers
  - You would never believe what either group will do for 5%...
- Median age of randomly chosen line of RCU code: < 2 years
- The guys who request an enhancement are rarely the guys who are willing to test your patches
- The importance of the community

A Brief History of RCU Issues
- ~1993: SMP scalability (30 CPUs) for RDBMS workloads
- 1996: NUMA (64 CPUs) for RDBMS workloads
- 2002: SMP scalability (~30 CPUs) for general workloads
- 2004: SMP scalability (~512 CPUs) for HPC workloads
  - And some concern about energy efficiency
- 2005: Real-time response (~4 CPUs)
- 2008: SMP scalability (>1024 CPUs) for HPC workloads
  - 100s of CPUs for more general workloads
- 2009: Real-time response (~30 CPUs) for general workloads
- 2010: Energy efficiency (~2 CPUs), real-time response when CPU-bound
- 2011: Energy efficiency (lots of CPUs)
- 2012: RCU causes 200-microsecond latency spikes...

And So I Owe The Linux Community Many Thanks
- Because of the many RCU-related challenges from the Linux community, some of my most important work and collaborations have been in the past ten years
- Not many people my age can truthfully say that
- Here is hoping for ten more years!!! ;-)

Legal Statement
- This work represents the view of the author and does not necessarily represent the view of IBM.
- IBM and IBM (logo) are trademarks or registered trademarks of International Business Machines Corporation in the United States and/or other countries.
- Linux is a registered trademark of Linus Torvalds.
- Other company, product, and service names may be trademarks or service marks of others.

Questions
Comparing Line and AST Granularity Level for Program Repair using PyGGI

Gabin An, KAIST, Daejeon, Republic of Korea, agb94@kaist.ac.kr
Jinhan Kim, KAIST, Daejeon, Republic of Korea, jinhankim@kaist.ac.kr
Shin Yoo, KAIST, Daejeon, Republic of Korea, shin.yoo@kaist.ac.kr

ABSTRACT

PyGGI is a lightweight Python framework that can be used to implement generic Genetic Improvement algorithms at the API level. The original version of PyGGI only provided lexical modifications, i.e., modifications of the source code at the physical line granularity level. This paper introduces new extensions to PyGGI that enable syntactic modifications for Python code, i.e., modifications that operate at the AST granularity level. Taking advantage of the new extensions, we also present a case study that compares the lexical and syntactic search granularity levels for automated program repair, using ten seeded faults in a real-world open source Python project. The results show that search landscapes at the AST granularity level are more effective (i.e., eventually more likely to produce plausible patches) due to the smaller sizes of ingredient spaces (i.e., the space from which we search for the material to build a patch), but may require longer time for search because the larger number of syntactically intact candidates leads to more fitness evaluations.

CCS CONCEPTS

• Software and its engineering → Search-based software engineering

1 INTRODUCTION

Most of the currently available Genetic Improvement (GI) [17] tools are meant to be research prototypes, and naturally are either tied or specialised to particular research ideas. For example, astor provides Java re-implementations of a series of widely studied GI approaches [13]: while the re-implementations share common infrastructure provided by astor itself, one may argue that the aim of the framework is to provide a suite of GI approaches, rather than to provide a general framework at the API level.
GIN is designed to be a general framework [21], but its current implementation has some limitations, such as only being able to operate on a single Java source file. Additionally, GIN's implementation language, Java, can pose inherent overhead to fast prototyping. Consequently, in practice, it may be hard to use those implementations as a baseline to build new GI tools and experiments. Since the essential parts of GI, such as program preprocessing, code modifications, or patch management, can impose considerable implementation overhead, a general framework that is extensible and easy to use may serve researchers and practitioners alike. We have previously introduced the initial version of PyGGI [1] as a general framework for GI written in Python†. Our aim was to implement a simple, lightweight, yet extensible framework that can be used by researchers and practitioners to implement GI techniques of various flavours, rather than to promote a specific algorithm for GI. The initial version only provided lexical modification, i.e., insertion, copy, and replacement of physical lines of program source code. The lexical modifications had the benefit of being language agnostic, as found in other techniques such as Observational Slicing [3, 7]. In this paper, we introduce the extended version of PyGGI that supports syntactic modifications for Python (i.e., modifications that operate on parsed Python code) as well as fault localisation support. In addition, exploiting the new extensions, we also present an empirical study that compares the lexical and the syntactic search spaces for automated program repair using seeded faults in an open source project.
The lexical search space is the space of all programs that can be constructed by application of lexical operators (i.e., copy, insertion, and replacement of physical lines), whereas the syntactic search space is the space of all programs that can be constructed by application of syntactic operators (i.e., modifications of abstract syntax tree nodes that correspond to Python statements). The technical contributions of this paper are as follows:

• We present a new version of PyGGI, which now supports syntactic modification at the statement level for Python, in addition to the lexical modification at the physical line level provided by the previous version.

• We conduct a case study that compares the line and AST granularity levels and corresponding search landscapes for statement-level automated program repair (i.e., repairs that consist of copy, insertion, and replacement of program statements). We use a Python open source project, sh, and seeded faults for the evaluation. The results show that the AST granularity level and its search landscape are more effective at producing plausible patches, as patches tend to lead to fewer syntactic errors.

We begin by describing the overall design of PyGGI.

†Available from https://github.com/coinse/pyggi.

2 DESIGN OF PYGGI

Figure 1: PyGGI Overview

Figure 1 illustrates the overall system of PyGGI version 1.1 and shows how it works through an example: green boxes represent either inputs the user provides (i.e., source code to patch or test scripts to execute) or a GI program the user writes using the PyGGI API (e.g., improve.py). To run PyGGI, the user provides a target program and places a configuration file in the root path of the program. The configuration file should contain information about the paths to the target source files and a test command. With the inputs from the user, PyGGI pre-processes the source code into its internal representation according to the given granularity level.
Depending on the algorithm, PyGGI modifies the target program with edit operators and applies them lazily; that is, PyGGI manages a sequence of edit operators and applies it only when the candidate patch needs to be evaluated. After generating a cloned temporary directory, PyGGI executes the test command to evaluate the candidate patch. The test program or script, i.e., run_test.sh in Figure 1, should yield output in a pre-defined format; otherwise, the user should provide a test result parser. The whole process continues until the termination criteria are met.

2.1 PyGGI Classes

PyGGI is composed of several classes, as shown in Figure 2. Compared to the initial version, the Edit class has been replaced with two abstract classes, AtomicOperator and CustomOperator, to allow implementation of operators that correspond to different granularity levels. PyGGI also provides four child classes that actually instantiate operators specific to different granularity levels. While it is not represented in Figure 2, PyGGI also contains a module named algorithms. It currently includes only one abstract class, LocalSearch, which implements the basic skeleton of a local search algorithm. The user can easily inherit and override this to implement multiple local search variants, such as hill climbing or tabu search. PyGGI can also be integrated with evolutionary computation libraries, such as DEAP [4], to take advantage of both single and multi-objective evolutionary algorithms.

2.1.1 Program. Program encapsulates the target program, especially its source code. The actual internal representation of the program depends on the choice of granularity level. See Section 2.2 for details.

2.1.2 Logger. Logger customises a logging object for a program instance to help record various information during the GI process. It has file and stream handlers, and provides five logging levels: debug, info, warning, error, and critical.

2.1.3 GranularityLevel.
GranularityLevel dictates multiple factors: how the program is pre-processed and internally stored, and which operators can be used during the search. It inherits Enum, a Python built-in enumeration class, and currently has two members: line and AST.

2.1.4 Patch. Patch is a sequence of edit operators, both atomic and custom. During search iteration, PyGGI modifies the source code of the target program by applying a patch. To apply a patch, the sequence of edit operators is first translated into a sequence of atomic operators. This is possible since a custom operator is essentially a list of atomic operators. The atomic operators then modify the source code in order.

2.1.5 TestResult. TestResult stores the result of the test suite executions on the patched candidate program. The results contain whether compilation succeeded, the elapsed execution time, as well as any other user-defined test outputs.

2.1.6 AtomicOperator. AtomicOperator is an abstract class for PyGGI-provided operators. Currently, four classes inherit and specialise it, as shown in Figure 2. See Section 2.3 for details of each.

2.1.7 CustomOperator. CustomOperator is an abstract class that provides a skeleton for a user-defined custom operator. Custom operators should be a sequence of atomic operators, as mentioned in Section 2.1.4. When users implement their own operators, they must inherit the CustomOperator class and override some methods to define the intended behaviours. A custom operator can be conceptually described as a function that takes some program elements as input and outputs a list of atomic operators that operate on them.

2.2 Program Preprocessing

PyGGI pre-processes the target program before manipulating its source code. There are currently two granularity levels, line and AST, which correspond to the lexical and the syntactic approaches, respectively.
For the line granularity level, PyGGI transforms the source code into a list of code lines; for the AST granularity level, it parses the source code and stores the AST. The line granularity level offers off-the-shelf, language-agnostic modifications, whereas the AST granularity level provides more structured code modifications. In particular, AST-level modifications produce patches that are more syntactically intact, because any modification of a statement AST node will include its subtree. Figure 3 illustrates this difference.

2.3 Code Manipulation

PyGGI provides atomic operators that are based on the Plastic Surgery Hypothesis [2], which has been the fundamental assumption for other state-of-the-art repair techniques such as GenProg [8]. As mentioned in Section 2.1.6, PyGGI provides four atomic operators for manipulating the source code of the program: LineReplacement and LineInsertion for the line granularity level, and StmtReplacement and StmtInsertion for the statement granularity level. See the descriptions of each operator in Table 1. The critical difference between the line and AST granularity levels is the unit of modification. For example, if PyGGI attempts to insert the third line somewhere else, it would copy and insert only the single line that contains the for in Figure 3, with the risk of a syntax error. However, inserting the third statement means copying the entire loop with its body, thereby avoiding the potential syntax error. Other custom operators can be generated by combining the atomic operators. For example, a deletion can be instantiated as a replacement with an empty line, whereas a move can be instantiated as an insertion followed by a deletion, as shown in Table 2. The operators in Table 2 are already implemented by PyGGI for both line and AST granularity levels. Another new feature in PyGGI allows the user to set modification weights for each modification point: this allows the user to focus modifications on specific parts of the source code.
For example, the suspiciousness scores obtained by fault localisation techniques can be used as modification weights to make PyGGI focus on program elements that are more likely to be faulty. Once entered, the weights are normalised and used as the probability distribution for the roulette wheel selection of modification points by each modification operator. If no weight values are given, PyGGI uses a uniform distribution.

2.4 Validation & Evaluation

We need to apply a patch to the target program in order to evaluate the candidate program. To avoid any unwanted interference with the original program, PyGGI clones the entire original program into a temporary directory when the user requests applying the patch, and makes the edits in that directory. After applying the patch, PyGGI executes the given test command to validate and evaluate the patch. We call a patch valid if and only if it produces a syntactically correct program and the resulting test execution halts within the given time-out limit. The test results, which are printed out by the test execution, are parsed by either PyGGI or a user-provided result parser. PyGGI returns the captured test results to the user so that they can make use of them when evaluating the patch. For example, test results can include the number of failing test cases or information about memory consumption during the test execution; the user can write their own fitness function based on these results.
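The user-side half of this workflow — parsing the test output and computing a fitness value from it — can be sketched as follows. This is an illustration with hypothetical helper names, not PyGGI's actual API; the output format assumed here is the simple `key: value` style that a script like run_test.sh might print:

```python
# Sketch of a user-defined result parser and fitness function
# (hypothetical names and output format; PyGGI's actual TestResult
# fields and parser interface may differ).

import re

def parse_test_output(output):
    """Parse 'key: value' lines (e.g. 'passed: 150', 'failed: 6')
    printed by the test script into a dict of integers."""
    results = {}
    for match in re.finditer(r'(\w+)\s*:\s*(\d+)', output):
        results[match.group(1)] = int(match.group(2))
    return results

def fitness(result, valid=True):
    """Lower is better: an invalid patch (syntax error or time-out)
    gets +inf; otherwise the fitness is the number of failing test
    cases, so 0 means a plausible patch."""
    if not valid:
        return float('inf')
    return result.get('failed', float('inf'))
```

The choice of "number of failing tests, +inf for invalid patches" matches the repair setting described in the case study, where the search stops as soon as a patch with zero failures is found.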
<table>
  <thead>
    <tr><th>Line</th><th>Lexical</th><th>Syntactic</th></tr>
  </thead>
  <tbody>
    <tr><td>1</td><td>numbers = [-3, 4, -5, 6, 7]</td><td>numbers = [-3, 4, -5, 6, 7]</td></tr>
    <tr><td>2</td><td>pos_total = 0</td><td>pos_total = 0</td></tr>
    <tr><td>3</td><td>for num in numbers:</td><td>for num in numbers:</td></tr>
    <tr><td>4</td><td>if num &gt; 0:</td><td>if num &gt; 0:</td></tr>
    <tr><td>5</td><td>pos_total += num</td><td>pos_total += num</td></tr>
    <tr><td>6</td><td>print(num)</td><td>print(num)</td></tr>
  </tbody>
</table>

(a) Example Code (b) Modification Points

Table 1: Atomic operators provided by PyGGI.

<table>
  <thead>
    <tr><th>Gr.</th><th>Operator</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>Line</td><td>LineReplacement(x, y)</td><td>Replace x with y (delete x if y is None)</td></tr>
    <tr><td></td><td>LineInsertion(x, y, pos)</td><td>Insert (copy) y before or after x</td></tr>
    <tr><td>AST</td><td>StmtReplacement(x, y)</td><td>Replace x with y (delete x if y is None)</td></tr>
    <tr><td></td><td>StmtInsertion(x, y, pos)</td><td>Insert (copy) y before or after x</td></tr>
  </tbody>
</table>

Figure 2: Simple Class Diagram of PyGGI

Figure 3: Comparison of modification points between lexical and syntactic approaches. Note that 1) lexical modification points contain indentation whitespace characters, and 2) syntactic statements include the physical lines that correspond to their AST subtree.

3 GRANULARITY COMPARISON STUDY

To introduce the new features of PyGGI, the AST granularity level and the capability of accepting modification weights, we have conducted a small study comparing search landscapes for automated program repair at different granularity levels.
This section presents the details of the experimental settings.

3.1 Research Questions

To compare the line and the AST granularity levels, we ask the following research questions.

- **RQ1. Effectiveness**: which granularity level is more effective at generating patches?

To answer RQ1, we measure various metrics during the PyGGI repair runs. Success count is simply the number of successful repair runs out of the total number of trials. We also compute the ratio of valid patches, i.e., patches that do not break the syntax of the program when applied and do not go over the time limit. Finally, we count the number of unique plausible patches [19] (i.e., patches that pass all given test cases) found for each fault.

- **RQ2. Efficiency**: which granularity level is more efficient to navigate and search?

To answer RQ2, we report the number of fitness evaluations that resulted in plausible patches, as well as the wall clock time required. For both RQ1 and RQ2, we also undertake qualitative analysis of successful and unsuccessful repair attempts to gain insights.

3.2 Subjects

We use the latest version of `sh`\(^2\), which is a full-fledged replacement of the `subprocess` module in Python. The `sh` project is currently 3,583 LOC and comes with 156 test cases. The study uses seeded faults in `sh`: ten faulty versions of the original program have been generated by manually mutating a single statement. All seeded faults are repairable under the Plastic Surgery Hypothesis [2], i.e., the correct version can be obtained by rearranging the statements in the version with the seeded fault.

3.3 Fault Localisation and Test Filtering

We use Ochiai [16, 22] to provide the suspiciousness scores.
Ochiai scores are computed as follows:

\[ \text{Ochiai}(e_p, e_f, n_p, n_f) = \frac{e_f}{\sqrt{(e_f + n_f) \times (e_f + e_p)}} \]

where \(e_p\) and \(e_f\) represent the number of passing and failing test cases that execute the given program element, respectively, and \(n_p\) and \(n_f\) the number of passing and failing test cases that do not execute the given element, respectively. The resulting Ochiai score is expected to correlate with how likely the given element is to be faulty. Intuitively, the higher \(e_f\) and \(n_p\) are, the more suspicious the element is. If a test executes the faulty element, it is more likely to fail (↑ \(e_f\)). Similarly, if a test does not execute the faulty element, it is more likely to pass (↑ \(n_p\)). For the study, we provide pre-computed Ochiai scores as modification weights. First, lines and statements are ranked according to their Ochiai scores, with ties broken by the max tie breaker (i.e., tied elements are placed at the lowest rank). Subsequently, if there are at most ten distinct top-ranked lines or statements, PyGGI considers these to form the suspicious set and only targets elements in this set for modification. If, however, it is not possible to pick the top ten distinct elements, PyGGI targets all lines and statements with equal probability. Since the coverage measurement tool we use reports statement coverage per physical line, Ochiai scores are computed at the line granularity level. A statement is deemed to be in the suspicious set if its first physical line is in that set. Table 3 presents the number of total modification points for the line and AST granularity levels, as well as the number of suspicious modification points. The original test suite contains 156 test cases in total. However, it may be that not all test cases are relevant to the fault under repair.
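The Ochiai formula above transcribes directly into a small helper (plain Python for illustration, not part of PyGGI; the only extra care needed is the zero denominator when an element is executed by no failing test):

```python
# Ochiai suspiciousness, exactly as defined above: e_p/e_f count passing/
# failing tests that execute the element, n_p/n_f those that do not.
import math

def ochiai(e_p, e_f, n_p, n_f):
    denom = math.sqrt((e_f + n_f) * (e_f + e_p))
    # An element never executed by a failing test is not suspicious.
    return e_f / denom if denom > 0 else 0.0
```

For example, an element executed by every failing test and by no passing test scores 1.0, the maximum; note that \(n_p\) does not appear in the formula itself, only in the intuition.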
In cases where the Ochiai scores have identified the top ten suspicious lines or statements, any passing test cases that do not execute these have been filtered out. If we do not have ten distinct top target lines or statements, we do not filter out any test cases. Table 4 lists the number of passing and failing test cases after test filtering. Note that the filtering method we adopted does not eliminate any test cases for faults 4 and 6.

3.4 Search Algorithm and Fitness Function

Since the test suite of `sh` contains deterministic test cases only, it is unnecessary for the search to reconsider previously evaluated patches. As such, we implement and use Tabu Search [6] by extending the LocalSearch class of PyGGI. The search maintains a record of visited solutions, namely the `tabu` list, and does not go back to any solution in the list. The tabu search uses three modification operators: deletion, insertion, and replacement, to generate neighbouring solutions. Algorithm 1 presents the pseudo code. The `tabu` list is initially empty (Line 1). In each iteration, the search continues to explore the neighbourhood of the current best patch until it finds a patch that is not contained in `tabu`. Once such a patch is found, the patch is added to `tabu` before the search proceeds to evaluate it (Lines 19-20).

\(^2\) https://github.com/amoffat/sh

Table 2: The list of custom operators already implemented in PyGGI, where x and y are the indexes of modification points.
<table> <thead> <tr> <th>Operator</th> <th>Translated into</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>LineDeletion(x)</td> <td>[LineReplacement(x, None)]</td> <td>Delete x</td> </tr> <tr> <td>LineMoving(x, y, pos=before or after)</td> <td>[LineInsertion(y, x, pos), LineReplacement(x, None)]</td> <td>Move x before or after y</td> </tr> <tr> <td>StmtDeletion(x)</td> <td>[StmtReplacement(x, None)]</td> <td>Delete x</td> </tr> <tr> <td>StmtMoving(x, y, pos=before or after)</td> <td>[StmtInsertion(y, x, pos), StmtReplacement(x, None)]</td> <td>Move x before or after y</td> </tr> </tbody> </table> We execute PyGGI against each fault 20 times, at the line and AST granularity levels respectively. In each attempt, the tabu search is given a budget of 2,000 fitness evaluations: the stopping criterion is either the budget expiring, or a plausible patch [19] being found. Test executions time out after 20 seconds. As described, we use the Ochiai formula to compute the suspiciousness scores: the coverage data has been collected using a widely used Python coverage measurement tool, coverage.py 3. The experiment has been conducted on a PC with an Intel Core i7-7700 CPU and 32GB memory running Ubuntu 16.04, using Python 3.6.2. PyGGI uses astor 4 to manipulate the Python AST. ### 3.6 Results This section presents the results from our study. Table 5 shows the comparative results from the two granularity levels. #### 3.6.1 Effectiveness (RQ1) Among the total ten faults, we succeed in finding plausible patches for six faults: 1, 2, 3, 7, 8, and 10. In the case of faults 4, 6, and 9, the filtered test suites still do not finish within the time-out limit of 20 seconds, resulting in test failures and +Inf fitness for all candidate patches. Consequently, PyGGI fails to generate any patches for these. --- 3https://bitbucket.org/ned/coveragepy 4https://github.com/berkerpeksag/astor
For fault 5, PyGGI finds no plausible patch at all, even though the actual faulty line was chosen to be suspicious. We investigate this in more detail in Section 3.6.3. To answer RQ1, we first compare the success rates shown in Table 5. On all repaired faults, which have at least one plausible patch, except 1 and 7, both granularity levels succeed a similar number of times. However, for faults 1 and 7, PyGGI succeeds roughly four times more frequently at the AST level than at the line level. We posit that this difference stems from the different nature of these faults. Let us divide the six repaired faults into two categories. The first category includes faults 1, 3, and 7: these faults require specific ingredients to be repaired. The second category includes faults 2, 8, and 10: these faults can be repaired to pass all tests without requiring specific ingredients, i.e., either by deleting the faulty element or by replacing it with a side-effect-free statement. For faults 1 and 7, the line granularity level yields a much larger ingredient space than the AST level (see Table 3). Consequently, at the line level, the probability of finding the correct ingredient within the given budget is lower than at the AST level. Fault 3 is the sole exception in the first category that cannot be explained by the size of the ingredient space. However, as shown in Table 4, fault 3 has twice as many suspicious lines as faults 1 and 7. The larger the number of candidate lines to modify, the wider the search space becomes, and consequently fault 3 is much harder to fix than faults 1 and 7. As a result, it has a very low success rate of 1 out of 20 regardless of the granularity level. To summarise, the first category of faults are harder to fix due to the large search space (either due to the large number of ingredients or suspicious targets), although the AST granularity level seems to be more successful than the line level in the case of faults 1 and 7.
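To make the ingredient-space argument concrete, consider a back-of-the-envelope model (our illustration, not PyGGI's actual sampling scheme): if exactly one of N candidate ingredients repairs the fault and each trial draws one uniformly at random, the chance of success within a budget of k trials is 1 − (1 − 1/N)^k, which shrinks as N grows.

```python
def p_find_ingredient(n_ingredients, trials):
    """Probability of drawing the single correct ingredient at least once
    in `trials` independent uniform draws from `n_ingredients` candidates
    (a simplifying model: one correct ingredient, draws with replacement)."""
    return 1.0 - (1.0 - 1.0 / n_ingredients) ** trials

# A smaller ingredient space markedly improves the odds under a fixed budget,
# e.g. 2,000 trials against 500 vs 5,000 candidate ingredients:
small = p_find_ingredient(500, 2000)   # ~0.98
large = p_find_ingredient(5000, 2000)  # ~0.33
```

The exact counts here are hypothetical; the point is the qualitative effect: under the same 2,000-evaluation budget, a tenfold larger ingredient space cuts the success probability dramatically.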
On the contrary, the repair for the faults in the second category does not require specific ingredients and, as such, their success does not depend on the size of the ingredient space, or even the number of total modification points. Furthermore, the nature of these faults means that it is easier to repair them within the given budget, compared to the faults in the first category. Since the two levels generate a plausible patch for the same set of faults, we tried to establish the correctness of the found patches. First, we executed the un-filtered test suite against all patches: all of the generated plausible patches passed the entire test suite. Next, we manually investigated the correctness of the plausible patches to provide qualitative answers to RQ1. We define a correct patch as a patch that modifies the version with the seeded faults to be semantically equivalent to the original one again. The last column of Table 5 presents the number of unique correct patches in the parentheses. Table 5: Comparison of the line and the AST granularity level in the automated repair of seeded faults in sh. Results are averaged from twenty repeated runs. The last column presents the number of unique correct patches in the parentheses. <table> <thead> <tr> <th>Fault Index</th> <th>Successful Runs</th> <th>Valid Patch Rate (%)</th> <th>Num. of Evaluations</th> <th>Elapsed Time (s)</th> <th>Plausible Patches</th> </tr> </thead> <tbody> <tr> <td></td> <td>Line</td> <td>AST</td> <td>Line</td> <td>AST</td> <td>Line</td> </tr> <tr> <td>1</td> <td>1</td> <td>4</td> <td>51.24</td> <td>98.67</td> <td>1708</td> </tr> <tr> <td>2</td> <td>20</td> <td>20</td> <td>30.21</td> <td>84.96</td> <td>602.1</td> </tr> <tr> <td>3</td> <td>1</td> <td>1</td> <td>33.31</td> <td>94.94</td> <td>2</td> </tr> <tr> <td>4</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-</td> </tr> <tr> <td>5</td> <td>0</td> <td>0</td> <td>39.88</td> <td>99.88</td> <td>-</td> </tr> <tr> <td>6</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-</td> </tr> <tr> <td>7</td> <td>2</td> <td>9</td> <td>39.20</td> <td>93.29</td> <td>792.5</td> </tr> <tr> <td>8</td> <td>20</td> <td>20</td> <td>55.29</td> <td>99.65</td> <td>14.65</td> </tr> <tr> <td>9</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-</td> </tr> <tr> <td>10</td> <td>20</td> <td>19</td> <td>37.50</td> <td>56.06</td> <td>9.6</td> </tr> </tbody> </table>
Figure 4: Correct patch generated for the fault 7: Replacing the 736th statement with the 740th statement. Only the AST granularity level could generate a correct patch for the fault 7, which is shown in Figure 4. It repairs the faulty program to be the same as the original program.
Because the required ingredient (Line 5) originally has a different indentation level from the target modification point (Line 4), it can be repaired only at the AST granularity level. The attempt to perform the same insertion at the line granularity level results in a syntax error, as the insertion would copy the indentation whitespace characters along with the contents of the line, violating the indentation rules Python uses to organise its code. Note that this would not have caused any problem in languages such as Java or C. Manual inspection of patches generated for faults 2, 8, and 10 provides interesting insights into test-based program repair. Fault 2 redirects some of the program output to STDERR, as shown in Figure 5. There is no test case checking that the program output should go to STDOUT. Consequently, PyGGI can replace pipe = OProc.STDERR with any side-effect-free statement to generate plausible patches, but fails to generate a correct patch. Faults 8 and 10 illustrate an interesting contrast between the line and AST granularity levels. Figures 6 and 7 present these faults. 3.6.2 Efficiency (RQ2). To compare efficiency, we report the number of fitness evaluations and the wall clock time required to find the first plausible patch in each trial. The average values are shown in Table 5. For all the faults except 3 and 7, the AST level requires fewer fitness evaluations to find a plausible patch compared to the line level. Conversely, the line level spends less time than the AST level except for faults 1 and 2. Since the AST granularity level yields a significantly larger number of valid (i.e., syntactically intact) patches, more AST-level patches (including unsuccessful ones) are evaluated by test execution, resulting in longer overall execution time of PyGGI. In comparison, the majority of patches created by line-level modifications are syntactically incorrect, and can be evaluated without invoking test executions.
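The indentation issue described above, and the cheap syntactic check that lets line-level patches fail without running tests, can be reproduced with a toy example (illustrative only, not PyGGI code; `ast.unparse` requires Python 3.9+, newer than the 3.6.2 used in the study):

```python
import ast
import textwrap

SRC = textwrap.dedent("""
    def f(x):
        if x > 0:
            y = x + 1
    z = 0
""")

def is_valid(source):
    """Cheap validity check: a patch that fails to parse needs no test run."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Line-level insertion: copy the indented line 'y = x + 1' (with its
# leading whitespace) after the top-level statement 'z = 0'.
lines = SRC.splitlines()
ingredient = next(l for l in lines if "y = x + 1" in l)
patched_lines = "\n".join(lines + [ingredient])
print(is_valid(patched_lines))       # False: stray indentation at top level

# AST-level insertion: splice the statement node instead; indentation is
# regenerated when the tree is unparsed, so the result stays valid.
tree = ast.parse(SRC)
stmt = tree.body[0].body[0].body[0]  # the 'y = x + 1' node
tree.body.append(stmt)
print(is_valid(ast.unparse(tree)))   # True
```

The same `ast.parse` check also explains the efficiency observation: syntactically broken line-level patches are rejected before any test execution, while syntactically intact AST-level patches must go through the full test suite.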
In summary, the number of fitness evaluations required to find a plausible patch is generally lower at the AST level, but the overall execution time is longer due to the larger number of test executions. 3.6.3 Case study of fault 5. In the experiment, PyGGI could not find any plausible patch for fault 5. To investigate the cause, we look at how the fault was generated. Figure 8 shows the difference between the original version and the version with fault 5 seeded. The statement at Line 4 was included in the set of suspicious statements. 4 RELATED WORK Automated Program Repair (APR) is now a rapidly advancing area [10, 12–15]. A large portion of the existing literature can be applied to automatically patch program faults using only dynamic analysis (i.e., test executions) as the fitness function [5, 20]. Langdon and Harman showed that the same GP based approach can be used to improve non-functional properties of software [11]. Petke et al. showed how software systems can be specialised for certain classes of inputs [18]. PyGGI is designed to be used at the API level, whereas GIN requires the end user to interact more directly with the AST. 5 CONCLUSION We present a new version of PyGGI, a Python general framework for Genetic Improvement. It has been designed to be used at the API level: the new version supports AST granularity level modifications for Python, i.e., modifications that handle statement nodes in the AST rather than statements marked by physical lines, which was what the previous version depended on. We use the new features to conduct a study comparing search landscapes at different granularity levels for Automated Program Repair, using a real world Python open source project and seeded faults.
The results show that the AST granularity level and the corresponding search landscape can be more effective (i.e., more frequently resulting in plausible patches) because they yield smaller ingredient spaces (i.e., fewer modification points). However, because the AST granularity level tends to produce a much larger number of syntactically intact candidate patches, it may also lead to lower efficiency and longer execution time due to the higher number of test executions. ACKNOWLEDGEMENT This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (No. 2017M3C4A7068179). REFERENCES
Package ‘satuRn’ May 7, 2024 Type Package Title Scalable Analysis of Differential Transcript Usage for Bulk and Single-Cell RNA-sequencing Applications Date 2022-08-05 Version 1.12.0 Description satuRn provides a highly performant and scalable framework for performing differential transcript usage analyses. The package consists of three main functions. The first function, fitDTU, fits quasi-binomial generalized linear models that model transcript usage in different groups of interest. The second function, testDTU, tests for differential usage of transcripts between groups of interest. Finally, plotDTU visualizes the usage profiles of transcripts in groups of interest. Depends R (>= 4.1) Imports locfdr, SummarizedExperiment, BiocParallel, limma, pbapply, ggplot2, boot, Matrix, stats, methods, graphics Suggests knitr, rmarkdown, testthat, covr, BiocStyle, AnnotationHub, ensembldb, edgeR, DEXSeq, stageR, DelayedArray VignetteBuilder knitr Collate 'data.R' 'satuRn-framework.R' 'allGenerics.R' 'accessors.R' 'fitDTU.R' 'testDTU.R' 'plotDTU.R' License Artistic-2.0 URL https://github.com/statOmics/satuRn BugReports https://github.com/statOmics/satuRn/issues Encoding UTF-8 LazyData false RoxygenNote 7.2.0 biocViews Regression, ExperimentalDesign, DifferentialExpression, GeneExpression, RNASeq, Sequencing, Software, SingleCell, Transcriptomics, MultipleComparison, Visualization **fitDTU** **Description** Parameter estimation of quasi-binomial models. **Usage** ```r fitDTU(object, ...) ## S4 method for signature 'SummarizedExperiment' fitDTU( object, formula, parallel = FALSE, BPPARAM = BiocParallel::bpparam(), verbose = TRUE ) ``` **Arguments** - **object** A ‘SummarizedExperiment’ instance generated with the `SummarizedExperiment` function of the `SummarizedExperiment` package. In the assay slot, provide the transcript-level expression counts as an ordinary ‘matrix’, ‘DataFrame’, a ‘sparseMatrix’ or a ‘DelayedMatrix’.
The ‘rowData’ slot must be a ‘DataFrame’ object describing the rows, which must contain a column ‘isoform_id’ with the row names of the expression matrix and a column ‘gene_id’ with the corresponding gene identifiers of each transcript. ‘colData’ is a ‘DataFrame’ describing the samples or cells in the experiment. Finally, specify the experimental design as a formula in the metadata slot. This formula must be based on the `colData`. See the documentation examples and the vignette for more details. - **formula** Model formula. The model is built based on the covariates in the data object. - **parallel** Logical, defaults to `FALSE`. Set to `TRUE` if you want to parallelize the fitting procedure. - **BPPARAM** Object of class `bpparamClass` that specifies the back-end to be used for computations. See `bpparam` in the `BiocParallel` package for details. - **verbose** Logical, should progress be printed? **Value** An updated ‘SummarizedExperiment’ instance. The instance now includes a new list of models ("fitDTUModels") in its rowData slot, which can be accessed by `rowData(object)[["fitDTUModels"]]`.
**Author(s)** Jeroen Gilis **Examples** ```r data(sumExp_example, package = "satuRn") sumExp <- fitDTU( object = sumExp_example, formula = ~ 0 + group, parallel = FALSE, BPPARAM = BiocParallel::bpparam(), verbose = TRUE ) ``` **Accessor functions for StatModel class** Description Accessor functions for StatModel class getModel(object) to get model getDF(object) to get the residual degrees of freedom of the model getDfPosterior(object) to get the degrees of freedom of the empirical Bayes variance estimator getDispersion(object) to get the dispersion estimate of the model getCoef(object) to get the parameter estimates of the mean model Usage ## S4 method for signature 'StatModel' getModel(object) ## S4 method for signature 'StatModel' getDF(object) ## S4 method for signature 'StatModel' getDfPosterior(object) ## S4 method for signature 'StatModel' getDispersion(object) ## S4 method for signature 'StatModel' getCoef(object) Arguments object StatModel object Value The requested parameter of the StatModel object Examples ## A fully specified dummy model myModel <- StatModel( type = "glm", params = list(x = 3, y = 7, b = 4), varPosterior = c(0.1, 0.2, 0.3), dfPosterior = c(6, 7, 8) ) getModel(myModel) **plotDTU** *Plot function to visualize differential transcript usage (DTU)* **Description** Plot function to visualize differential transcript usage (DTU) **Usage** ```r plotDTU( object, contrast, groups, coefficients, summaryStat = "model", transcripts = NULL, genes = NULL, top.n = 6 ) ``` **Arguments** - **object** A 'SummarizedExperiment' containing the models and results of the DTU analysis as obtained by the 'fitDTU' and 'testDTU' function from this 'satuRn' package, respectively. - **contrast** Specifies the specific contrast for which the visualization should be returned. This should be one of the column names of the contrast matrix that was provided to the 'testDTU' function. - **groups** A 'list' containing two character vectors. 
Each character vector contains the names (sample names or cell names) of the respective groups in the target contrast. - **coefficients** A 'list' containing two numeric vectors. Each numeric vector specifies the model coefficient of the corresponding groups in the selected contrast. - **summaryStat** Which summary statistic for the relative usage of the transcript should be displayed. 'Character' or 'character vector', must be any of the following summary statistics; model (default), mean or median. - **transcripts** A 'character' or 'character vector' of transcript IDs, to specify which transcripts should be visualized. Can be used together with 'genes'. If not specified, 'plotDTU' will check if the 'genes' slot is specified. - **genes** A single 'character' or 'character vector' of gene IDs, to specify the genes for which the individual transcripts should be visualized. Can be used together with 'transcripts'. If not specified, 'plotDTU' will check if the 'transcripts' slot is specified. - **top.n** A 'numeric' value. If neither 'transcripts' nor 'genes' was specified, this argument leads to the visualization of the 'n' most significant DTU transcripts in the contrast. Defaults to 6 transcripts. **Value** A ggplot object that can be directly displayed in the current R session or stored in a list.
**Author(s)** Jeroen Gilis **Examples** ```r data(sumExp_example, package = "satuRn") library(SummarizedExperiment) sumExp <- fitDTU( object = sumExp_example, formula = ~ 0 + group, parallel = FALSE, BPPARAM = BiocParallel::bpparam(), verbose = TRUE ) group <- as.factor(colData(sumExp)$group) design <- model.matrix(~ 0 + group) colnames(design) <- levels(group) L <- matrix(0, ncol = 2, nrow = ncol(design)) rownames(L) <- colnames(design) colnames(L) <- c("Contrast1", "Contrast2") L[c("VISp.L5_IT_VISp_Hsd11b1_Endou", "ALM.L5_IT_ALM_Tnc"), 1] <- c(1, -1) L[c("VISp.L5_IT_VISp_Hsd11b1_Endou", "ALM.L5_IT_ALM_Tmem163_Dmrtb1"), 2] <- c(1, -1) sumExp <- testDTU(object = sumExp, contrasts = L, diagplot1 = FALSE, diagplot2 = FALSE, sort = FALSE) group1 <- rownames(colData(sumExp))[colData(sumExp)$group == "VISp.L5_IT_VISp_Hsd11b1_Endou"] group2 <- rownames(colData(sumExp))[colData(sumExp)$group == "ALM.L5_IT_ALM_Tnc"] plots <- plotDTU( object = sumExp, contrast = "Contrast1", groups = list(group1, group2), coefficients = list(c(0, 0, 1), c(0, 1, 0)), summaryStat = "model", transcripts = c("ENSMUST00000165123", "ENSMUST00000165721", "ENSMUST0000005067"), genes = NULL, top.n = 6) ``` **StatModel** **Description** Function for constructing a new 'StatModel' object. **Usage** ```r StatModel( type = "fitError", params = list(), varPosterior = NA_real_, dfPosterior = NA_real_ ) ``` **Arguments** - **type** Default set to "fitError"; can be "glm". - **params** A list containing the parameters of the fitted glm. - **varPosterior** Numeric, posterior variance of the glm, default is NA. - **dfPosterior** Numeric, posterior degrees of freedom of the glm, default is NA. **Value** A StatModel object **Examples** ```r ## A fully specified dummy model myModel <- StatModel( type = "glm", params = list(x = 3, y = 7, b = 4), varPosterior = c(0.1, 0.2, 0.3), dfPosterior = c(6, 7, 8) ) myModel ``` The `StatModel` class contains a statistical model generated by the DTU analysis.
Models are created by the dedicated user-level function `fitDTU()` or manually, using the `StatModel()` constructor. In the former case, each quantitative feature is assigned its statistical model and the models are stored as a variable in a 'DataFrame' object, which in turn will be stored in the 'rowData' slot of a 'SummarizedExperiment' object. **Slots** - `type` 'character(1)' defining the type of the used model. Default is "fitError", i.e. an error model. If not an error, the class type will be "glm". - `params` A list() containing the parameters of the fitted model. - `varPosterior` numeric() of posterior variance. - `dfPosterior` numeric() of posterior degrees of freedom. **Author(s)** Jeroen Gilis **Examples** ```r ## A fully specified dummy model myModel <- StatModel( type = "glm", params = list(x = 3, y = 7, b = 4), varPosterior = c(0.1, 0.2, 0.3), dfPosterior = c(6, 7, 8) ) myModel ``` **sumExp_example** **Description** A ‘SummarizedExperiment’ derived from our case study which builds on the dataset of Tasic et al. It contains the same cells as the data object used in the vignette (see ‘?Tasic_counts_vignette’ for more information). In this SummarizedExperiment, we performed a filtering with ‘filterByExpr’ of edgeR with more stringent than default parameter settings (min.count = 100, min.total.count = 200, large.n = 50, min.prop = 0.9) to reduce the number of retained transcripts. We used this object to create an executable example in the help files of satuRn. **Usage** ```r data(sumExp_example) ``` **Format** An object of class SummarizedExperiment with 1286 rows and 60 columns. --- **Tasic_counts_vignette** **Description** A ‘Matrix’ with transcript-level counts derived from our case study which builds on the dataset of Tasic et al. We used Salmon (V1.1.0) to quantify all L5IT cells (both for ALM and VISp tissue) from mice with a normal eye condition. From these cells, we randomly sampled 20 cells of each of the following cell types to use for this vignette; L5_IT_VISp_Hsd11b1_Endou, L5_IT_ALM_Tmem163_Dmrtb1 and L5_IT_ALM_Tnc. The data has already been leniently filtered with the ‘filterByExpr’ function of edgeR. **Usage** ```r data(Tasic_counts_vignette) ``` **Format** An object of class matrix (inherits from array) with 22273 rows and 60 columns. **Tasic_metadata_vignette** *Metadata associated with the expression matrix 'Tasic_counts_vignette'. See '?Tasic_counts_vignette' for more information on the dataset.* **Description** Metadata associated with the expression matrix 'Tasic_counts_vignette'. See '?Tasic_counts_vignette' for more information on the dataset. **Usage** ```r data(Tasic_metadata_vignette) ``` **Format** An object of class data.frame with 60 rows and 3 columns. **testDTU** *Test function to obtain a top list of transcripts that are differentially used in the contrast of interest* **Description** Function to test for differential transcript usage (DTU) **Usage** ```r testDTU(object, contrasts, diagplot1 = TRUE, diagplot2 = TRUE, sort = FALSE) ``` **Arguments** - **object** A ‘SummarizedExperiment’ instance containing a list of objects of the ‘StatModel’ class as obtained by the ‘fitDTU’ function of the ‘satuRn’ package. - **contrasts** ‘numeric’ matrix specifying one or more contrasts of the linear model coefficients to be tested. The rownames of the matrix should be equal to the names of parameters of the model that are involved in the contrast.
The column names of the matrix will be used to construct names to store the results in the rowData of the SummarizedExperiment. - **diagplot1** 'boolean(1)' Logical, defaults to TRUE. If set to TRUE, a histogram of the z-scores (computed from p-values) is displayed using the locfdr function of the 'locfdr' package. The blue dashed curve is fitted to the mid 50% of the z-scores, which are assumed to originate from null transcripts, thus representing the estimated empirical null component densities. The maximum likelihood estimates (MLE) and central matching estimates (CME) of this estimated empirical null distribution are given below the plot. If the values for delta and sigma deviate from 0 and 1 respectively, the downstream inference will be influenced by the empirical adjustment implemented in satuRn. - **diagplot2** 'boolean(1)' Logical, defaults to TRUE. If set to TRUE, a histogram of the "empirically adjusted" test statistics and the standard normal distribution will be displayed. Ideally, the majority (mid portion) of the adjusted test statistics should follow the standard normal. - **sort** 'boolean(1)' Logical, defaults to FALSE. If set to TRUE, the output of the test function is sorted according to the empirical p-values. **Value** An updated ‘SummarizedExperiment’ that contains the ‘DataFrames’ that display the significance of DTU for each transcript in each contrast of interest.
Author(s) Jeroen Gilis Examples data(sumExp_example, package = "satuRn") library(SummarizedExperiment) sumExp <- fitDTU( object = sumExp_example, formula = ~ 0 + group, parallel = FALSE, BPPARAM = BiocParallel::bpparam(), verbose = TRUE ) group <- as.factor(colData(sumExp)$group) design <- model.matrix(~ 0 + group) colnames(design) <- levels(group) L <- matrix(0, ncol = 2, nrow = ncol(design)) rownames(L) <- colnames(design) colnames(L) <- c("Contrast1", "Contrast2") L[c("VISp.L5_IT_VISp_Hsd11b1_Endou", "ALM.L5_IT_ALM_Tnc"), 1] <- c(1, -1) L[c("VISp.L5_IT_VISp_Hsd11b1_Endou", "ALM.L5_IT_ALM_Tmem163_Dmrtb1"), 2] <- c(1, -1) sumExp <- testDTU(object = sumExp, contrasts = L, diagplot1 = FALSE, diagplot2 = FALSE, sort = FALSE) Index * datasets sumExp_example, 8 Tasic_counts_vignette, 9 Tasic_metadata_vignette, 10 .StatModel (StatModel-class), 8 fitDTU, 2 fitDTU, SummarizedExperiment-method (fitDTU), 2 getCoef (getModel, StatModel-method), 3 getCoef, StatModel-method (getModel, StatModel-method), 3 getDF (getModel, StatModel-method), 3 getDF, StatModel-method (getModel, StatModel-method), 3 getDfPosterior (getModel, StatModel-method), 3 getDfPosterior, StatModel-method (getModel, StatModel-method), 3 getDispersion (getModel, StatModel-method), 3 getDispersion, StatModel-method (getModel, StatModel-method), 3 getModel (getModel, StatModel-method), 3 getModel, StatModel-method, 3 plotDTU, 5 StatModel, 7 StatModel-class, 8 statModelAccessors (getModel, StatModel-method), 3 sumExp_example, 8 Tasic_counts_vignette, 9 Tasic_metadata_vignette, 10 testDTU, 10
Securing the Grid using Virtualization: The ViSaG Model

Pierre Kuonen, Valentin Clément, Frédéric Bapst
University of Applied Sciences Western Switzerland, HES-SO/Fribourg, Fribourg, Switzerland
Emails: {pierre.kuonen, frederic.bapst}@hefr.ch, clementval@gmail.com

Abstract—Security in large distributed computing infrastructures, peer-to-peer, or clouds, remains an important issue and probably a strong obstacle for a lot of potential users of these types of computing infrastructures. In this paper, we propose an architecture for large-scale distributed infrastructures guaranteeing confidentiality and integrity of both the computation and the host computer. Our approach is based on the use of virtualization, and we introduce the notion of confidence link to safely execute programs. We implemented and tested this approach using the POP-C++ tool, which is a comprehensive object-oriented system to develop applications in large decentralized distributed computing infrastructures.

Keywords—virtualization; security in large distributed systems; grid middleware.

I. INTRODUCTION

Today, more and more applications require punctual access to significant computing power. Purchasing High Performance Computing (HPC) hardware is really profitable only in the case of frequent usage. There are several alternatives to purchasing HPC hardware; the two most popular are Clouds [11] and Grids [12]. Even if these two approaches are not totally identical, they share at least one difficulty: the user has to trust the resources provider. In the case of Grid infrastructures, this problem is compounded by the fact that the resource provider also has to trust the user, to be sure that running the user's tasks will not harm his own resources.
This paper will focus on how, by using virtualization, we can guarantee confidentiality and integrity of computation and resources for the user and the resources provider in decentralized distributed computing environments, such as Grid systems. The main questions we intend to answer are:
- How to ensure the integrity of the user's data and the user's calculations?
- How to ensure the integrity of the host machine?
- How to ensure the confidentiality of the communications?
- How to safely use machines belonging to different private networks in the presence of firewalls?
- How to ensure that different users sharing the same computing resource cannot interfere with each other?
Last, but not least, we want to provide these features while minimizing the loss of performance. We propose an abstract vision of a secure decentralized distributed computing environment. This vision is based on the notion of confidence links. It has been implemented in the ViSaG project (ViSaG stands for Virtual Safe Grid), which is presented in this paper. The rest of this paper is organized as follows: Section II details the main security issues we want to address with the ViSaG project. Section III presents the ViSaG model and Section IV presents the POP-C++ model, which has been used to implement the ViSaG model. Section V details the implementation of the ViSaG model using the POP-C++ middleware and Section VI presents the results we have obtained. Finally, Section VII concludes this paper.

II. CURRENT SECURITY ISSUES IN GRID COMPUTING

As mentioned in the introduction, there are several security issues in current Grid middleware systems that must be addressed. These issues are detailed below.

A. How to ensure the integrity of the user's data and the user's calculations
When a user submits a computation on a remote machine, he must be sure that the owner of the remote resource cannot interfere with his computation or, at least, if there is interference, the user must be aware of it.

B. How to ensure the integrity of the host machine
Consider a user of the Grid willing to provide his computing resources to the infrastructure. As this user does not have strong control over who will execute a job on his resources, he wants the middleware to guarantee that the executed jobs cannot access unauthorized resources, cannot harm his resources, and cannot use more resources than he agreed to allocate to them.

C. How to ensure the confidentiality of the communications
We want to secure communications between nodes. First, we want to prevent communications from being seen by any other person/system, and we also do not want anyone to be able to intercept and modify the transmitted data.

D. How to safely use machines belonging to different private networks (presence of firewalls)
One of the most difficult security problems when deploying decentralized Grid middleware is due to the presence of private networks protected by firewalls. Indeed, most of the time, the available resources in an institution which could be part of a Grid are located on private networks protected by firewalls. The question is: how to make these resources available without creating dangerous security holes in the firewalls?

E. How to ensure that different users using the same computing resource cannot interfere with each other
We also have to ensure that a user of a remote resource cannot harm the processes of other users on the same remote resource. The usage of virtual machines in conjunction with Grids to address security issues has already been proposed in several papers, one of the first being [3]. Santhanam et al. [5] propose four scenarios to deploy virtual machines on a Grid. None of these four scenarios exactly corresponds to our approach, even if the fourth is the closest. Smith et al. [6] propose a Grid environment enabling users to safely install and use custom software on demand, using an image creation station to create user-specific virtual machines. Keahey et al.
[4] focus on creating, configuring and managing execution environments. The authors show how dynamic virtual environments can be modeled as Grid services, thus allowing a client to create, configure and manage remote execution environments. In all these papers, the problem of deploying virtual machines in a Grid is addressed in a general way, although Santhanam et al. [5] have used Condor to test their scenarios. Our approach is different because the model we propose is closely related to an execution scheme based on the paradigm of distributed object-oriented programming. The proposed solution is specifically designed to solve the problems associated with this model, such as the creation and destruction of remote objects (processes) and the passing of remote objects as parameters of remote methods.

III. THE ViSaG MODEL

Unlike most existing Grid middleware, the approach proposed in the ViSaG project is based on the fundamental assumption that a Grid infrastructure is a fully decentralized system which, in a sense, is close to the peer-to-peer (P2P) network concept. At the hardware level, a computing Grid is composed of an unknown, but large, number of computers owning computing resources. None of the computers in the Grid has a global view of the overall infrastructure. Each computer only knows a very limited number of neighbors, to which it is directly connected by two-way confidence links. A confidence link is a bidirectional channel that allows the two computers located at both ends of the link to communicate safely at any time. How the confidence links are established is not part of the ViSaG model; it is a hypothesis which defines our vision of a computing Grid. However, we can give as an example of a confidence link an SSH channel between two computers whose system managers, or users, have manually exchanged their public keys.
The set of all computers together with all the confidence links forms a connected graph, which we call a trusted network, whose nodes are computers and whose edges are the confidence links. In the remainder of this document, when no confusion is possible, we often use the terms nodes and links, respectively, to designate computers and confidence links. Although computers are usually volatile resources, we will not address this aspect in this paper; we make the assumption that, during the execution of a given program, the computers participating in this execution do not fail. Finally, we assume that the confidence links are reliable. Figure 1 illustrates a computing Grid, as defined above, where confidence links have been realized using SSH (Secure Shell [10]) tunnels. In the ViSaG execution model, computing resources are requested on the fly during the execution of the application. Obtaining the requested resources is achieved through a resource discovery algorithm which runs on every node and only uses confidence links. Usually, this algorithm is a variant of the flooding algorithms. The details of this algorithm are not part of the model; they are an implementation issue. When a node, which we call the originator, needs a new computing resource, it issues a request which will be handled by the resource discovery algorithm. When the requested computing resource, provided by a node we will call the target, has been found, the originator of the request must contact the target to launch the computation and possibly communicate with it during the computation. For the originator, one possibility would be to communicate with the target by following confidence links. This option is obviously very inefficient because it excessively loads all intermediary nodes, which have to route all messages. This is especially true when, during computation, nodes must exchange large amounts of data, as is often the case in HPC applications.
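As a rough illustration of this idea (this is not the actual POP-C++ service: node names, the capacity attribute and the TTL bound are our own assumptions), flooding discovery can be sketched as a breadth-first propagation over the confidence-link graph:

```python
from collections import deque

# Sketch: resource discovery as a flood over the trusted network.
# Nodes are computers; 'links' maps each node to its confidence-link
# neighbors. The answer travels back along the discovered chain of links.
class Node:
    def __init__(self, name, capacity=0):
        self.name, self.capacity = name, capacity

def discover(links, originator, needed, ttl=4):
    """Propagate a request along confidence links (breadth-first) and
    return the chain of nodes leading to the first suitable target."""
    seen = {originator}
    queue = deque([(originator, [originator])])
    while queue:
        node, path = queue.popleft()
        if node is not originator and node.capacity >= needed:
            return path  # the positive answer follows this chain back
        if len(path) > ttl:
            continue  # bound the flood instead of covering the whole Grid
        for neighbor in links.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None  # no reachable node can provide the resource
```

In the real middleware the flood is asynchronous and each node forwards the request to its own neighbors; the breadth-first loop above only models the reachability and the path along which the answer returns.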
A better solution would be for the originator to contact the target directly. Unfortunately, it is likely that the originator does not have a direct link (a confidence link) with the target and, in addition, the target does not necessarily desire to create a direct confidence link with the originator. Nevertheless, as the request reached it by following confidence links, the target can accept to launch a virtual machine to provide the necessary computing resources for the originator. The virtual machine will act as a sandbox for the execution of the remote process. If the virtual machine is not permeable, this guarantees that the executing node (the target) cannot be damaged by the execution of the remote process and that the computation made by the process cannot be biased by the node which hosts the computation (the target). To summarize, we can make the following statement: *The security of this execution model is only limited by the security offered by the virtual machine and the security offered by the confidence links.* This is the very basic idea of the ViSaG model. The implementation of such a model raises numerous problems that we are going to address in the next sections.

IV. THE POP-C++ EXECUTION MODEL

A Grid computing infrastructure not only consists of hardware but also requires the presence of a middleware which provides services and tools to develop and run applications on the Grid. Therefore, before presenting how the ViSaG model has been implemented, we need to know which Grid middleware our implementation is based on. The ViSaG model, as presented in the previous section, has been implemented in the POP-C++ Grid middleware [1]. In order to achieve this task we had to adapt to the execution model of POP-C++, which is briefly presented below. For more information on the POP-C++ tool, please visit the POP-C++ web site: http://gridgroup.hefr.ch/popc. The POP-C++ tool implements the POP programming model first introduced by Dr.
Tuan Anh Nguyen in his PhD thesis [2]. The POP programming model is based on the very simple idea that objects are suitable structures to distribute data and executable code over heterogeneous distributed hardware and to make them interact with each other. The object-oriented paradigm has unified the concepts of module and type to create the new concept of class. The next step introduced by the POP model is to unify the concept of class with the concept of task (or process). This is realized by adding to traditional "sequential" classes a new type of class: the parallel class. By instantiating parallel classes we are able to create a new category of objects we call parallel objects. Parallel objects are objects that can be remotely executed. They coexist and cooperate with traditional sequential objects during the application execution. POP-C++ is a comprehensive object-oriented framework implementing the POP model as a minimal extension of the C++ programming language. It consists of a programming suite (language, compiler) and a run-time providing the necessary services to run POP-C++ applications. In the POP-C++ execution model, when a new parallel object is created, the node which requires the creation of the parallel object contacts the POP middleware running locally to ask for a new computing resource for this parallel object. To find this new computing resource, the POP middleware launches the resource discovery service available in the POP-C++ middleware. This service contacts all the neighbors of the node, through its confidence links, to ask for computing resources. Then the request is propagated through the network by following confidence links. When the request reaches a node which is able to provide the requested computing resource, it answers the originator of the request by following back the confidence links.
The originator of the request chooses, among all the positive answers it received, the resource it wants to use to create the parallel object, and remotely launches the execution of the parallel object inside a virtual machine. In order to be able to use the procedure presented above with the POP-C++ runtime, we had to design a dedicated architecture for the nodes of the Grid. This architecture is presented in the next section.

V. IMPLEMENTATION

A. Node architecture
In the presented implementation, a node is a computer running a virtualization platform, or hypervisor. On this platform, two or more virtual machines are deployed. The first virtual machine, called the administrator virtual machine (in short: Admin-VM), is used to run the POP-C++ runtime. The other virtual machines are the worker virtual machines (in short: Worker-VM). They are used by the Admin-VM to run parallel objects. The Admin-VM is connected to its direct neighbors in the Grid by the confidence links. The latter are implemented using SSH tunnels. Figure 2 illustrates this architecture. One of the first questions we have to answer is how many Worker-VMs to launch on a specific node. In other words, do we launch all parallel objects in the same Worker-VM, or do we launch one Worker-VM for each parallel object allocated to this node? In order to guarantee isolation between applications (see Section II-E) we decided to allocate Worker-VMs on an application basis: on a given node, parallel objects belonging to the same application (the same running instance of a POP-C++ program) are executed in the same Worker-VM. This choice implies that we are able to identify applications. For this purpose, we have to generate a unique application identifier, called AppUID, for each POP-C++ program launched in the Grid.
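A minimal sketch of this policy (all names below are illustrative assumptions, not POP-C++ API): each Admin-VM keys its pool of Worker-VMs by an AppUID built, in a decentralized way, from the launching node's IP address, the Unix process ID and the clock:

```python
import os
import socket
import time

def make_app_uid():
    """Sketch of a decentralized unique application identifier: the
    launching node's IP address, the Unix process ID, and the clock.
    The exact format is an illustrative assumption."""
    try:
        ip = socket.gethostbyname(socket.gethostname())
    except socket.gaierror:
        ip = "127.0.0.1"  # fallback when the hostname does not resolve
    return "{}-{}-{}".format(ip, os.getpid(), time.time_ns())

class AdminVM:
    """Sketch of the per-application allocation policy: on a given node,
    one Worker-VM per AppUID. 'spawn' stands in for the hypervisor call."""
    def __init__(self, spawn):
        self._spawn = spawn
        self._workers = {}  # AppUID -> Worker-VM handle

    def worker_for(self, app_uid):
        # Parallel objects of the same application share one Worker-VM;
        # a different application gets its own, isolated Worker-VM.
        if app_uid not in self._workers:
            self._workers[app_uid] = self._spawn(app_uid)
        return self._workers[app_uid]

    def release(self, app_uid):
        # On program termination the Worker-VM is deleted (or reset).
        self._workers.pop(app_uid, None)
```

Because no central authority exists in the trusted network, the uniqueness of the identifier rests entirely on the (IP, PID, clock) triple being unique per program launch.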
As we are in a fully decentralized environment, to guarantee the uniqueness of this identifier we have based it on the IP address of the node where the program is launched, the Unix process ID, and the clock. This AppUID is added to all requests, making it possible to identify parallel objects belonging to the same POP-C++ program. When the Admin-VM launches a Worker-VM to provide a computing resource for the execution of a parallel object, it must ensure that the node that originated the request is able to safely contact this Worker-VM, i.e., is able to establish an SSH tunnel. This is realized through a key exchange process, which is detailed below.

B. Key exchange process
There are two main situations where the POP-C++ middleware needs to exchange keys between virtual machines. The first one is, as mentioned above, when a new Worker-VM is launched; the second is when the reference of a parallel object is sent to another parallel object. Indeed, as POP-C++ is based on the C++ programming language, it is possible to pass the reference of a parallel object as a parameter of a method of another parallel object. As a consequence, these two parallel objects, possibly running on different nodes, must be able to communicate. Let us first consider the situation where a new Worker-VM is launched by the Admin-VM. This operation is the consequence of a resource discovery request sent by a node that asked for the creation of a new parallel object. This request contains, among other information, the public key (rPuK) and the IP address (rIP) of the node that sent the request. The Admin-VM launches the Worker-VM and passes it the rPuK and the rIP address. The newly launched Worker-VM generates a new pair of private/public keys and sends its own public key (lPuK) and IP address (lIP) to the originator of the request along the confidence links. At this stage, the originator of the request and the newly launched Worker-VM both hold each other's public key and are therefore able to establish an SSH tunnel between them.
This key exchange process is illustrated on Figure 3. The second situation we have to consider is when a parallel object running on the virtual machine VMa sends the reference of a parallel object running on a virtual machine VMb to a third parallel object running on a virtual machine VMc. This situation is illustrated on Figure 4. In such a situation, the POP-C++ middleware must ensure that VMb and VMc can establish an SSH tunnel. When this situation occurs, we necessarily have an object running on VMa calling a method of an object running on VMc. Thus, the first thing to do is to add the PuK and the IP address of VMb to the message sent by VMa to VMc. This does not increase the number of messages; it just slightly increases the size of the message sent to execute a remote method call. Next, along the confidence links, VMc sends its PuK to VMb. Now VMb and VMc can establish an SSH tunnel. We claim that the proposed infrastructure solves four of the issues mentioned in Section II, namely issues A, B, C and E. The integrity of the user's data and the user's calculation (issue A) as well as the integrity of the host machine (issue B) are guaranteed by the isolation the virtual machine provides between the host machine and the remotely executed process. The confidentiality of the communications (issue C) is guaranteed by the SSH tunnels. Finally, as each application is executed in a different virtual machine, we guarantee that different users using the same computing resources cannot interfere with each other (issue E). The last issue to solve (issue D) is to be able to safely use machines belonging to different private networks in the presence of firewalls. Indeed, the nodes belonging to the same Grid are not necessarily in the same administrative domain and can be separated by firewalls managed by different authorities. Our goal is to enable these nodes to belong to the same Grid and to be able to safely run parallel objects without opening security holes in firewalls.
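The first key-exchange scenario (originator and newly launched Worker-VM) can be simulated with simple bookkeeping to check that, at the end of the exchange, each side holds the other's public key. All class and field names below are illustrative assumptions; the keys are stand-in strings, not real cryptographic material:

```python
# Sketch of the first key-exchange scenario: the discovery request carries
# the originator's public key and IP (rPuK, rIP); the Admin-VM configures
# the new Worker-VM with them, and the Worker-VM's own key and IP
# (lPuK, lIP) travel back to the originator along the confidence links.
class Peer:
    def __init__(self, name, ip):
        self.name, self.ip = name, ip
        self.public_key = "PuK(%s)" % name  # stands for an SSH public key
        self.known_keys = {}                # peer IP -> peer public key

    def can_tunnel_with(self, other):
        # An SSH tunnel requires each side to hold the other's public key.
        return (other.public_key in self.known_keys.values()
                and self.public_key in other.known_keys.values())

def launch_worker_vm(request):
    """Performed by the Admin-VM on the target node: start a Worker-VM
    pre-configured with the requester's key, and return the Worker-VM's
    own credentials to be routed back over the confidence links."""
    rPuK, rIP = request                  # carried inside the discovery request
    worker = Peer("Worker-VM", "10.0.0.7")
    worker.known_keys[rIP] = rPuK        # configured by the Admin-VM
    return worker, (worker.public_key, worker.ip)
```

The second scenario works the same way, except that VMb's credentials ride along inside an existing method-call message from VMa to VMc, and VMc's key returns to VMb over the confidence links.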
Figure 5 shows a situation where three nodes (A, B and C) are in a private network separated from the rest of the Grid by a firewall. If at least one node of the Grid belonging to the private network (node B on Figure 5) creates at least one confidence link with one node located on the other side of the firewall (node D on Figure 5), then it is possible to make this private network part of a Grid located outside of the private network. To achieve this goal, we follow the procedure below. Suppose that a node X, located somewhere in the Grid but outside the private network, launches a request for resources. By following confidence links, this query can reach nodes located in the private network (thanks to the confidence link B-D). If a node inside the private network, let's say node A, agrees to carry out this execution, it must establish a communication with node X. To achieve this, the Admin-VM of node A will launch a virtual machine (a Worker-VM) and will configure it with the public key of X contained in the request launched by node X. The Worker-VM creates a pair of public/private keys and transmits its public key to its Admin-VM. The latter transmits, by following the confidence links, this public key to the Worker-VM of node X. Then, node X can establish an SSH tunnel with the Worker-VM started on node A. Now, node X and node A, which are not in the same private network, can safely communicate directly. To be able to realize this communication, the firewall must be configured in the following way:
- It must allow the permanent SSH tunnel between nodes B and D.
- It must allow temporary establishment of an SSH tunnel between any node outside the local network and any Worker-VM launched by any Admin-VM inside the local network.
We claim that this configuration of a firewall is perfectly acceptable and does not create security holes if the administrator of the private network follows the recommendations below.
First, node D, which in a sense acts as a bridge toward the outside of the private network, must be under the control of the administrator of the private network. The Admin-VM running on this node must only run the minimal services required by the POP-C++ middleware; in our case, the SSH services with node B and the few fixed neighbors it will manually establish a confidence link with. Second, when installing the POP-C++ middleware in the private network, the administrator must take the following precaution. As the POP-C++ middleware creates and launches virtual machines, a good policy is to reserve a set of well-defined IP addresses only for this purpose and then to open, in the firewall, the SSH service only for this set of IP addresses. This guarantees that nodes external to the private network can only access, through SSH, Worker-VMs handled by the POP-C++ middleware. Of course, we make the assumption that the POP-C++ middleware itself is not malicious. The middleware must ensure that when the execution of a POP-C++ program terminates, all Worker-VMs allocated to this program are deleted (or reset), to ensure that all links established during the program execution are destroyed.

VI. TESTS

To demonstrate the feasibility of the ViSaG model, we have developed a prototype integrated with the POP-C++ middleware. This prototype uses the VMware ESXi hypervisor [8] to manage virtual machines. The virtual machine management layer can start, stop, revert, and clone virtual machines. It also allows exchanging SSH public keys and obtaining the IP address of a virtual machine. All these operations are performed thanks to the libvirt library [9] and the proprietary VMware VIX API. As much as possible, we used libvirt in order to be compatible with different hypervisor virtualization platforms. Unfortunately, not all desired features were available, so we had to partially rely on the proprietary VIX API for a few key features, such as cloning virtual machines and information gathering.
The SSH tunneling management is independent of any API because it uses the installed version of SSH to initiate and manage SSH tunnels. In our infrastructure, the installed version was OpenSSH running on the Ubuntu 10.04 operating system. To test our model and our prototype, we have deployed a Grid on two different sites. The first site was the "École d'ingénieurs et d'architectes" in the city of Fribourg in Switzerland and the second was the "Haute école du paysage, d'ingénierie et d'architecture" in the city of Geneva in Switzerland. These two sites were connected only through the Internet and therefore security was a key point. More importantly, the two sites have completely different network administrations, as required to make the test significant. We have been able to run several distributed applications written with POP-C++ between the two sites in a way that is transparent for the users. The performance loss was acceptable; the main slowdown is due to the startup of the virtual machines. VII. CONCLUSION AND PERSPECTIVES This paper addressed the security issues arising in a fully decentralized Grid infrastructure. Grid consumers, system managers, local users of a shared resource, network administrators, etc., are different actors involved in distributed computing, and as such they need security guarantees before accepting to take part in a Grid infrastructure. Our solution relies on virtualization as an isolation mechanism and on public key cryptography. The existing POP-C++ Grid middleware is taken as the illustration of the decentralized Grid paradigm. POP-C++ offers "parallel objects" as a programming model that essentially hides the complexity of the Grid aspects (local vs remote access, heterogeneous machines, resource discovery, etc.). On top of this architecture, and with no further constraint on the developer, our new implementation adds the wrapping of the parallel objects within virtual machines, as well as secure communications via SSH tunneling.
Combining those two features brings a convincing answer to the security issues in decentralized Grids. Two levels of activity in the Grid are distinguished: - Setup: joining a Grid amounts to configuring and starting a dedicated virtual machine (the Admin-VM), which manages the POP-C++ services. The setup phase establishes connections to other nodes of the Grid; those confidence links ensure the connectivity of the Grid. The Admin-VM never executes user code itself, but has control over a pool of virtual machines for the user jobs. The setup is considered a local event (it does not require stopping the Grid), and it typically involves a manual intervention of a user responsible for the Grid installation. - Grid computing: when a POP-C++ user program is launched, the Admin-VMs communicate to distribute the jobs over the available resources; when it accepts a job, an Admin-VM wraps that job in a virtual machine (a Worker-VM) that will be devoted to that running instance of the Grid program. The necessary encrypted connections with other Worker-VMs are automatically established, as our system takes care of conveying the needed public keys from node to node. Thus, launching a program on the Grid causes the start of several virtual machines dedicated to this computation, with the appropriate communication topology. When the distributed program terminates, no trace of its execution remains (the involved virtual machines are reset before being recycled). Our prototype has been implemented with ESXi virtual machines, but the code relies on libvirt, so that porting it to another virtualization technology is greatly simplified. The value of virtualization as a companion of Grid technology has been recognized for many years. In a centralized Grid architecture, using virtual machines instead of physical systems can, for instance, greatly simplify Grid reconfiguration or load balancing.
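The two-phase lifecycle and the conveyance of public keys can be sketched with a toy simulation; the class names and the key format are our own illustrative choices, not the POP-C++ API:

```python
import itertools

_key_counter = itertools.count()

class WorkerVM:
    def __init__(self):
        self.public_key = f"pub-{next(_key_counter)}"  # stand-in for an SSH keypair
        self.known_keys = set()    # peer keys authorized for tunnels

class AdminVM:
    """Admin-VM: runs no user code, only manages a pool of Worker-VMs."""
    def __init__(self):
        self.workers = []
    def accept_job(self, requester_key):
        w = WorkerVM()
        w.known_keys.add(requester_key)  # configure with the requester's key
        self.workers.append(w)
        return w
    def job_finished(self):
        self.workers.clear()             # reset/recycle: no trace remains

def can_tunnel(a, b):
    return a.public_key in b.known_keys and b.public_key in a.known_keys

# Node X's worker requests a resource; node A's Admin-VM accepts the job.
x_worker = WorkerVM()
admin_a = AdminVM()
a_worker = admin_a.accept_job(x_worker.public_key)
# The new worker's key travels back along the confidence links:
x_worker.known_keys.add(a_worker.public_key)
assert can_tunnel(x_worker, a_worker)
admin_a.job_finished()
assert admin_a.workers == []   # Worker-VMs recycled after the run
```

The simulation only captures the bookkeeping: a tunnel is possible exactly when each side holds the other's public key, and the Admin-VM forgets its workers when the program ends.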
In the decentralized approach that we advocate, virtual machines are used as an isolation wrapper for pieces of distributed computing, a means to guarantee an appropriate security level. In the course of our work, we identified several issues that need to be further investigated: - It would be interesting to port the current ESXi-based version to another hypervisor. The ideal candidate should provide the same level of isolation together with lightweight VM management operations (start, stop, resume, revert, clone, etc.). - A potential issue is ensuring that the different VMs can benefit from system updates. - Concerning the VM installation, it would be worthwhile to define precisely which capabilities have to be included in the OS image. In fact, this leads to the concept of a "harmless Worker-VM": a virtual machine restricted to computing and communicating with other harmless Worker-VMs only, and unable to cause any damage in the hosting environment (in particular, no other network traffic). - In our system, one hypothesis about security is that the POP-C++ installation is safe; we should study how this can be guaranteed and verified by the different Grid nodes. ACKNOWLEDGMENT The ViSaG project has been funded by the University of Applied Sciences Western Switzerland (HES-SO), project No. 24247. REFERENCES
A global state of a distributed transaction system is consistent if no transactions are in progress. A global checkpoint is a transaction which must view a globally consistent system state for correct operation. We present an algorithm for adding global checkpoint transactions to an arbitrary distributed transaction system. The algorithm is non-intrusive in the sense that checkpoint transactions do not interfere with ordinary transactions in progress; however, the checkpoint transactions still produce meaningful results. The views, opinions, and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or decision, unless so designated by other documentation. Key words: distributed systems algorithms, global states, transaction processing, checkpoints, global checkpoints, consistent global states, distributed algorithms, algorithm for adding global checkpoint transactions, non-intrusive algorithm, distributed transaction system, global checkpoint transactions. GLOBAL STATES OF A DISTRIBUTED SYSTEM* Michael J. Fischer, Department of Computer Science, University of Washington, Seattle, Washington 98195. Nancy D. Griffeth and Nancy A. Lynch, School of Information and Computer Science, Georgia Institute of Technology, Atlanta, Georgia 30332. *This research was supported in part by the National Science Foundation under grants MCS77-02474, MCS77-15628, MCS78-01698, and MCS80-03337, U.S. Army Research Office Contract Number DAAG29-79-C-0155, and Office of Naval Research Contracts N00014-79-C-0873 and N00014-80-C-0221. JUNE 1981 ABSTRACT A global state of a distributed transaction system is consistent if no transactions are in progress.
A global checkpoint is a transaction which must view a globally consistent system state for correct operation. We present an algorithm for adding global checkpoint transactions to an arbitrary distributed transaction system. The algorithm is non-intrusive in the sense that checkpoint transactions do not interfere with ordinary transactions in progress; however, the checkpoint transactions still produce meaningful results. 1. Introduction Computing systems operate by a sequence of internal transitions on the global state of the system. The global state represents the collective state of a set of objects which the system controls. Often many primitive state transitions are necessary to accomplish a larger semantically-meaningful task, called a transaction. Transactions are designed to take the system from one meaningful or consistent state to another, but during the execution of the transaction, the system may go through inconsistent intermediate states. Thus, to ensure consistency of the system state, every transaction must either be run to completion or not run at all. Transactions are often the basis for concurrency control. In a distributed database system, a standard criterion for correctness of a system is that all allowable interleavings of transactions be "serializable" (cf. [1]). However, there are systems which can run acceptably with unconstrained interleavings. In a banking system, for example, a transfer transaction might consist of a withdrawal step followed by a deposit step. In order to obtain fast performance, the withdrawals and deposits of different transfers might be allowed to interleave arbitrarily, even though the
users of the banking system are thereby presented with a view of the account balances which includes the possibility of money being "in transit" from one account to another. One useful kind of transaction is a "checkpoint" — a transaction that reads and returns all the current values for the objects of the system. In a bank database, a checkpoint can be used to audit all of the account balances (or the sum of all account balances). In a population database, a checkpoint can be used to produce a census. In a general transaction system, the checkpoint can be used for failure detection and recovery: if a checkpoint produces an inconsistent system state, one assumes that an error has occurred and takes appropriate recovery measures. For a checkpoint transaction to return a meaningful result, the individual read steps of the checkpoint must not be permitted to interleave with the steps of the other transactions; otherwise an inconsistent state can be returned even for a correctly operating system, and it might be quite difficult to obtain useful information from such intermediate results. For example, in a bank database with transfer operations, an arbitrarily-interleaved audit might completely miss counting some money in transit or count some transferred money twice, thereby arriving at an incorrect value for the sum of all the account balances. A checkpoint which is not allowed to interleave with any other transactions is called a global checkpoint. In the bank database, a global checkpoint would only see completed transfers; no money would be overlooked in transit, and a correct sum would be obtained for all account balances. In general, a global checkpoint views a globally consistent state of the system. In this paper, we present a method of implementing global checkpoints in general distributed transaction systems. We assume one starts with an underlying distributed transaction system known to be correct. 
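The money-in-transit hazard can be reproduced with a few lines of illustrative code (ours, not from the paper): an audit whose reads interleave with a transfer misses money that is between accounts.

```python
# A naive (non-global) audit interleaved with a two-step transfer.
balances = {"alice": 100, "bob": 100}

def withdraw(acct, amt): balances[acct] -= amt
def deposit(acct, amt): balances[acct] += amt

# Interleaved schedule: the audit reads both accounts after the withdrawal
# step of a transfer but before the matching deposit step.
withdraw("alice", 30)                          # transfer, step 1
audit = balances["alice"] + balances["bob"]    # naive audit runs here
deposit("bob", 30)                             # transfer, step 2

assert audit == 170    # the 30 units "in transit" were missed
assert balances["alice"] + balances["bob"] == 200  # consistency restored
```

A global checkpoint, by definition, would only ever see the 200-unit states before or after the complete transfer.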
Next we add some checkpoint transactions C which are known to be correct if run when no other transactions are running. Call the resulting system S. Finally, we show how to transform S into a new system S' which does the "same" thing as S and which turns each of the transactions in C into a global checkpoint, i.e. one that always returns a view of a globally consistent system state of the underlying transaction system. Our introduction of the global checkpoints is "non-intrusive" in the sense that no operations of the underlying system need be halted while the global checkpoint is being executed. Because of this, it is not always possible to have the global checkpoint view a consistent state in the recent history of the underlying transaction system, for that system might enter consistent states only infrequently because of heavy transaction traffic. Thus, instead of viewing a consistent state that actually occurs, our global checkpoints view a state that could result by running to completion all of the transactions that are in progress when the global checkpoint begins, as well as some of the transactions that are initiated during its execution. 2. A Model for Asynchronous Parallel Processes The formal model used to state the correctness conditions and describe the algorithm is that of [2]. Only a brief description is provided in this paper; the reader is referred to [2] for a complete, rigorous treatment. The basic entities of the model are processes (automata) and variables. Processes have states (including start states and possibly also final states), while variables take on values. An atomic execution step of a process involves accessing one variable and possibly changing the process' state or the variable's value or both. A system of processes is a set of processes, with certain of its variables designated as internal and others as external.
The external variables may be accessed by the environment (e.g. other processes or users), which can change their values between steps of the given system. The execution of a system of processes is described by a set of execution sequences. Each sequence is a (finite or infinite) list of steps which the system could perform when interleaved with appropriate actions by the environment. Each sequence is obtained by interleaving sequences of steps of the processes of the system. Each process must have infinitely many steps in the sequence unless that process reaches a final state. For describing the external behavior of a system, certain information in the execution sequences is irrelevant. The external behavior of a system S of processes, extbeh(S), is the set of sequences derived from the execution sequences by "erasing" information about process identity, changes of process state and accesses to internal variables. What remains is just the history of accesses to external variables. This history takes the form of a sequence of variable actions, which are triples of the form (u, X, v), where u is the old value read from the variable X, and v is the new value written by an atomic step of the system. The external behavior completely characterizes the system from the user's point of view: two systems with the same external behavior are completely indistinguishable to the user. 3. An Abstract Distributed Transaction System In a database system, a transaction is usually considered to be a sequence of operations on the database entities which should be performed according to some concurrency control policy. For our purposes, we do not need to look inside the transactions -- all that we require is that a particular transaction can be requested at any time, and once requested, it will eventually run to completion. What the transaction does while it is running and how it interacts with other concurrent transactions does not concern us.
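As an illustration of the erasure that defines external behavior (our encoding, not the paper's notation), an execution sequence and its projection onto variable actions might look like:

```python
# Each step records which process acted, on which variable, and the old/new
# values; internal steps carry information the user never observes.
execution = [
    {"process": "p1", "var": "x",    "internal": True,  "old": 0,    "new": 1},
    {"process": "p1", "var": "PORT", "internal": False, "old": "{}", "new": "{t}"},
    {"process": "p2", "var": "y",    "internal": True,  "old": 5,    "new": 6},
    {"process": "p2", "var": "PORT", "internal": False, "old": "{t}", "new": "{}"},
]

def extbeh(steps):
    """Erase process identity, state changes, and internal-variable accesses,
    keeping only (old value, variable, new value) triples on external variables."""
    return [(s["old"], s["var"], s["new"]) for s in steps if not s["internal"]]

assert extbeh(execution) == [("{}", "PORT", "{t}"), ("{t}", "PORT", "{}")]
```

Two systems producing the same such projections are, per the model, indistinguishable to the user.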
We simply assume a distributed system which understands the initiation and completion of transactions at its external variables. We make the technical restriction that each transaction can be invoked only once; thus, our transactions should be thought of as instances of the usual database notion of transaction. We also assume that an infinite number of transactions are possible, although only a finite number can be running at any given time; thus, our systems never stop. Formally, an abstract transaction system is a distributed system whose external variables, called ports, have a special interpretation. Let T be an infinite set of transactions. Each port can contain a finite set of transaction status words, each of which is a triple (t, a, s), where t ∈ T, a is an arbitrary parameter or result value of the transaction, and s ∈ {'RUNNING', 'COMPLETED'} describes the state of the transaction. We require that each port can be accessed by only one process, called the owner of the port. The intended operation of the system is as follows: A user initiates a transaction t with argument a by inserting the triple (t, a, 'RUNNING') into the set of transaction status words in some port. Eventually, the system replaces that triple by a new triple (t, b, 'COMPLETED') in the same port. The value b is the result of the transaction. We assume the user behaves correctly in not trying to initiate the same transaction more than once and in not modifying the transaction status word once a transaction has been initiated. Likewise, a correct abstract transaction system never changes the ports or modifies the transaction status words except as described above. Thus, a correct abstract transaction system running with a correct user maintains a global invariant that for each transaction t ∈ T, there is at most one port containing a transaction status word with t as first component, and there is at most one such word in that port.
We call that word, if it exists, the status word for t, and we say t is running or completed depending on the third component of its status word. We say t is latent if it has no status word. The conditions above imply that the only possible transitions in the status of a transaction are from latent to running and from running to completed; moreover, every running transaction eventually becomes completed. Note also that there is no a priori bound on the number of transactions that can be running simultaneously. 4. Checkpoint Transactions Let C ⊆ T be a distinguished set of transactions called checkpoints. Members of T-C are called ordinary transactions. The execution of a checkpoint transaction and the result it returns are valid if no other transaction is running while the checkpoint is. While we make no restrictions on what a checkpoint does, the intuition is that a checkpoint needs to look at a globally consistent system state in order to work properly, that is, a state of the system that occurs when no transactions are in progress. For example, the checkpoint might be an audit of the account balances in a simple banking system, or it might be a consistency check in a file system. These two examples are pursued further in Section 6. Our goal in this paper is to construct a new transaction system S' which does the "same" thing as S for non-checkpoint transactions and which returns a valid result for each checkpoint transaction. A straightforward implementation of S' would simply suspend further processing of transactions when a checkpoint is requested, wait for any transactions currently in progress to complete, and then run the checkpoint. After the checkpoint has been completed, normal processing of transactions can be resumed. In many practical situations, however, such a solution is highly undesirable, for the entire system must wait while a checkpoint is being performed. 
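The straightforward (intrusive) solution just described can be sketched as follows; the class and method names are our own illustrative choices:

```python
# Blocking checkpoint: suspend new transactions, drain those in progress,
# snapshot the now-consistent state, then resume.
class BlockingSystem:
    def __init__(self, state):
        self.state = dict(state)
        self.in_progress = []      # queued step-lists of running transactions
        self.suspended = False

    def request(self, steps):
        if self.suspended:
            raise RuntimeError("system suspended for checkpoint")
        self.in_progress.append(list(steps))

    def drain(self):
        for steps in self.in_progress:   # run every pending transaction
            for f in steps:
                f(self.state)
        self.in_progress = []

    def checkpoint(self):
        self.suspended = True            # no new transactions admitted
        self.drain()                     # wait for those in progress
        snapshot = dict(self.state)      # a globally consistent state
        self.suspended = False
        return snapshot

sys_ = BlockingSystem({"alice": 100, "bob": 100})
sys_.request([lambda st: st.__setitem__("alice", st["alice"] - 30),
              lambda st: st.__setitem__("bob", st["bob"] + 30)])
snap = sys_.checkpoint()
assert snap["alice"] + snap["bob"] == 200   # audit sees no money in transit
```

The snapshot is correct precisely because the whole system stalls, which is the cost the paper's non-intrusive construction avoids.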
This is likely to take a considerable length of time since checkpoints may require reading the entire system state. In Section 5, we present a solution which permits checkpoints to be run concurrently with the normal processing of ordinary transactions. The price we pay is in having a slightly less appealing correctness condition for the result of the checkpoint. Since normal transactions are not suspended, the system may never reach a globally consistent state, so it is not obvious how a meaningful result can be obtained at all. Our approach is to have the checkpoint view a consistent state that need not actually occur in the history of the system. Consequences of this strategy are: (a) The value returned by a checkpoint does not reflect what actually happened in the history of execution, only what might have happened if certain transactions initiated after the start of the checkpoint had not occurred. (b) Any side-effects of checkpoint transactions are discarded, so other transactions continue to operate as if no checkpoints had ever taken place. With this motivation in mind, we now turn to the definitions needed to state the formal correctness conditions for the system S'. Let X be a port and u, v be sets of transaction status words. We call the variable action (u, X, v) a port action. Let PA be the set of all port actions. A behavior sequence is a finite or infinite sequence of port actions, i.e., a member of B = PA* ∪ PA^ω. Let h be a map which erases checkpoint status words from port values, that is, if u is a set of transaction status words, then h(u) = {(t, a, s) | (t, a, s) ∈ u and t ∈ T−C}. Extend h to port actions by h((u, X, v)) = (h(u), X, h(v)). Extend h further to B by applying it component-wise. Let e ∈ B. Define two functions: running(e) = {t ∈ T | e = e₁(u, X, v)e₂ and (t, a, 'RUNNING') ∈ u for some transaction status word (t, a, 'RUNNING'), where e₁ ∈ PA*, (u, X, v) ∈ PA, and e₂ ∈ B}.
completed(e) = {t ∈ T | e = e₁(u, X, v)e₂ and (t, b, 'COMPLETED') ∈ v for some transaction status word (t, b, 'COMPLETED'), where e₁ ∈ PA*, (u, X, v) ∈ PA, and e₂ ∈ B}. Thus, running(e) is the set of transactions which are running at some time during e, and completed(e) is the set of transactions which have completed in e. Let e ∈ B, t ∈ running(e), and i ∈ ℕ. t starts at step i of e if i is the length of the longest prefix eᵢ of e for which t ∉ running(eᵢ). An abstract distributed transaction system S' is a faithful implementation of a system S with checkpoint set C if the following conditions hold. 1. (Faithfulness). Let e ∈ extbeh(S) such that h(e) = e (i.e. e contains no checkpoint transactions). Let θ: C → ℕ be a partial function with domain dom(θ). Then there exists e' ∈ extbeh(S') such that h(e') = e, running(e') = running(e) ∪ dom(θ), and for all c ∈ dom(θ), c starts at step θ(c) in e'. 2. (Safety). Let e' ∈ extbeh(S'). Then h(e') ∈ extbeh(S). 3. (Validity of checkpoints). Let e' ∈ extbeh(S'), c ∈ C, and suppose c runs to completion in e' and produces result b. Let i be the step at which c starts in e', and let e'₁ be the prefix of e' of length i. Let e'₂ be the shortest word such that e'₁e'₂ is a prefix of e' and c ∈ completed(e'₁e'₂). Then there exists e ∈ extbeh(S) such that c runs to completion in e and produces result b, and e satisfies the following. There exist words e₁, e₂, f such that e₁e₂f is a prefix of e, and (1) h(e₁e₂) = e₁e₂; (2) h(e₁) = h(e'₁); (3) running(e₁e₂) ⊆ running(e'₁e'₂); (4) c ∈ completed(e₁e₂f) and running(e₁e₂) − completed(e₁e₂) = ∅. Conditions (1) and (2) ensure that S' faithfully simulates S on the non-checkpoint transactions and that the presence or absence of checkpoint transactions does not affect the processing of other transactions by S'.
Condition (3) ensures that S' computes acceptable results for the checkpoint transactions. In particular, the result of each checkpoint must be a value obtainable by some computation of S which (i) runs no checkpoints before the given checkpoint, (ii) agrees with the computation of S' up to the point where the checkpoint began (again ignoring other checkpoints), (iii) only initiates transactions thereafter which actually occurred in S', and (iv) runs the checkpoint after all the transactions in progress at the time of the checkpoint request, together with any transactions initiated after the checkpoint, have completed, thereby ensuring a valid result. 5. A Faithful Implementation Given an abstract distributed transaction system S with checkpoint set C, we sketch how to construct a new system S' which faithfully implements S. S' operates by simulating a number of copies of S: a "base" copy S_0 and a copy S_c for each c ∈ C. S_0 processes all of the non-checkpoint transaction requests received by S', and S_c processes checkpoint transaction c. S_0 ignores checkpoints but otherwise acts just like S. S_c, c ∈ C, does exactly the same thing as S_0 up until checkpoint c is requested. At that time, the computation of S_c begins to diverge from that of S_0. S_c continues behaving like S, but it starts ignoring certain new transactions that are being processed by S_0. Eventually, it ceases processing new transactions entirely, and all the transactions currently in progress are run to completion. At that time, S_c runs checkpoint transaction c, and when it completes, S_c writes the result back into the transaction status word at the initiating port. S_c has then completed its task and can terminate. The structure of S' is similar to that of S. Each process and variable of S has a corresponding process or variable in S'. Process k of S' simulates process k in each of the S_i. Similarly, internal variable X of S' simulates internal variable X in each of the S_i.
The states of processes in S' are labelled sets of states of corresponding processes of S, and values of variables in S' are labelled sets of values of corresponding variables of S, where the labels are taken from {0} ∪ C. S and S' have identical ports and port values. We now describe in some detail the operation of the processes in S_c. Each process does exactly the same thing as the corresponding process of S_0 until it learns that checkpoint c has been requested. There are three ways that a process might learn this. It might access its port and see the transaction status word for c. In this case, that process is called the checkpoint initiator. Secondly, it might receive a "message" from the checkpoint initiator informing it of the start of the checkpoint. Finally, it might read an internal variable and detect that the computation of S_c has begun to diverge from that of S_0, enabling it to deduce that the checkpoint has started. When the checkpoint initiator discovers the start of the checkpoint, it broadcasts this fact to the other processes of S_c. Each process of S_c upon learning of the initiation of the checkpoint makes a private copy of its port and thereafter refers to its private copy rather than the real port. In this way, future transaction requests are ignored by S_c, and results of transactions produced by S_c (which might differ from those produced by S_0) do not affect the real ports. When a process of S_c finally discovers that all of the transactions at its port have completed, it sends back an acknowledgement to the checkpoint initiator. When the initiator has received an acknowledgement from each process (including itself), it begins processing the checkpoint by placing the checkpoint request in its own private copy of its port. All of the processes of S_c continue operating and serve collectively to process the checkpoint c.
When $c$ completes, the checkpoint initiator copies the final transaction status word for $c$ from the private copy of its port back into the real port. The correctness conditions of Section 4 are quite strong and do not permit $S'$ to make any accesses to the ports other than those made by $S_0$. Therefore, the simulation of the $S_c$, $c \in C$, must be coordinated with that of $S_0$ so that all real port accesses by $S_c$ are "piggybacked" onto port accesses by $S_0$. The basic strategy is that $S_0$ runs freely, but a process of $S_c$ wishing to access the real port must wait until the corresponding process of $S_0$ is ready to make its next port access. The two (or more) accesses are then combined into one and performed simultaneously. The accesses never conflict because each process of $S_c$ does the exact same thing as the corresponding process of $S_0$ up until the point where it discovers the start of the checkpoint. Thereafter, it only modifies the status word for $c$, whereas processes of $S_0$ only modify status words for ordinary transactions. At any point in the computation, only a finite set $D$ of checkpoints have ever been initiated, so the computation of every $S_c$, $c \in C-D$, is identical to the computation of $S_0$ and need only be represented once. As soon as a process of $S_c$ discovers that checkpoint $c$ is in progress, either by being the checkpoint initiator, receiving a message from the checkpoint initiator, or by reading an internal variable in which the $c$th component differs from the $0$th, it splits the simulation of $S_c$ from that of $S_0$, and from then on the two simulations continue independently, as described above. Hence, $S'$ actually simulates a finite but growing set of computations. In order to carry out the above implementation, $S'$ needs a mechanism which permits the checkpoint initiator to communicate with every other process.
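A toy rendering of this scheme (ours, not the paper's formal construction): the base copy S_0 keeps accepting work, while the forked copy S_c works on a private snapshot, ignores transactions initiated after the fork, runs its in-progress transactions to completion, and only then runs the checkpoint.

```python
# One "copy" of the system: a private state plus the step-lists of the
# transactions it has accepted but not yet run to completion.
class Copy:
    def __init__(self, state, pending):
        self.state = dict(state)
        self.pending = [list(p) for p in pending]
    def run_pending(self):
        for steps in self.pending:
            for f in steps:
                f(self.state)
        self.pending = []

base = Copy({"alice": 100, "bob": 100}, [])        # S_0
transfer = [lambda st: st.__setitem__("alice", st["alice"] - 30),
            lambda st: st.__setitem__("bob", st["bob"] + 30)]
base.pending.append(transfer)

# Checkpoint requested while the transfer is in progress: fork S_c.
sc = Copy(base.state, base.pending)   # private copy of state and in-progress work
base.pending.append([lambda st: st.__setitem__("alice", st["alice"] + 7)])
# S_c ignores the new deposit; it only finishes what was already running.
sc.run_pending()
audit = sc.state["alice"] + sc.state["bob"]
assert audit == 200                   # valid: a consistent state of S
base.run_pending()                    # meanwhile S_0 was never halted
assert base.state == {"alice": 77, "bob": 130}
```

The checkpoint's answer (200) describes a state that need not occur in the real history, but it is exactly a state S could have reached, which is what the validity condition demands.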
In any particular application, such a communication mechanism would probably already exist in the underlying system $S$. However, if it isn't already there, then we require that $S'$ be augmented with such a facility. Theorem. Let $S$ be an abstract distributed transaction system with checkpoint set $C$, and let $S'$ be the system described above. Then $S'$ is a faithful implementation of $S$. Proof Sketch. We omit the tedious but straightforward verification that $S'$ satisfies the conditions for being a faithful implementation of $S$. It remains to verify, however, that $S'$ is a correct abstract distributed transaction system, that is, that every transaction which is requested will eventually run to completion. This property holds for non-checkpoint transactions by the safety property and the fact that it holds for $S$. It holds for checkpoint transactions because each of the phases in processing a checkpoint terminates. Eventually a request for checkpoint $c$ gets noticed by the process of $S_c$ which owns the port; otherwise, $S_c$ and hence $S'$ would fail to process future transactions originating at that port. After the checkpoint request is noticed, the checkpoint initiator notifies all other processes of $S_c$; hence, eventually all of the other processes learn of the request. After each process becomes aware of the checkpoint, it stops accepting requests for new transactions; hence, eventually $S_c$ stops processing new transactions. $S_c$ continues to simulate $S$ on the transactions that it has accepted; they all eventually complete since they would in $S$. Each process eventually acknowledges completion to the initiator, so eventually the checkpoint transaction itself is started. $S_c$ continues to simulate $S$, so eventually the checkpoint transaction will complete and produce a valid result, which is copied back into the port. Hence, $S'$ is an abstract distributed transaction system which faithfully implements $S$, as required.
We remark that under certain naturally-occurring conditions, the efficiency of $S'$ can be made to approach the efficiency of $S$. Namely, assume that all checkpoint transactions originate at the same port. Then it is an easy matter to modify the checkpoint initiator so that only one checkpoint is handled at a time. If several are requested simultaneously, the initiator will pick one to process and wait until it completes before handling another. Since only one checkpoint is running at a time, each process of $S'$ need only simulate two processes: the corresponding process of $S_0$ and the corresponding process of $S'_c$. When a process becomes aware of the request of some checkpoint $d$, $d \neq c$, then it knows that checkpoint $c$ must have completed; hence it terminates the simulation of $S'_c$. Thus, the storage needed by $S'$ for the internal variables and process states is only double that of $S$. (In practice, one would probably only keep duplicate copies of those objects for which the two executions $S_0$ and $S'_c$ really produce different values.) Likewise, the time required by $S'$, when appropriately measured, should be at worst double that of $S$ on the particular computations actually simulated.

Applications of Global Checkpoints

Global checkpoints can play an important role in the design of distributed systems for error detection, error recovery, or both. For error detection, their use is in identifying inconsistencies in global system states that should be consistent. We have already alluded to this use in the simple banking system example in which the only allowable transactions are to transfer funds from one account to another. The sum of the account balances is the same in every globally consistent state. Therefore, our algorithm can be used to obtain that sum by running a global checkpoint transaction which simply reads each of the account balances and adds them all up. An error is indicated if this sum is not what was expected. 
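The banking check just described can be sketched concretely. This is a toy illustration of ours, not the paper's code, with sequential stand-ins (`transfer`, `checkpoint_sum`) for the real transactions: since every transaction is a transfer, the sum of all balances is an invariant of every globally consistent state, and the checkpoint transaction simply recomputes it.

```c
#include <assert.h>

/* Toy version of the banking example: transfers preserve the sum of
 * all balances, so a global checkpoint that reads every balance in a
 * globally consistent state can detect errors by re-computing it. */
#define NACCOUNTS 3
static long balance[NACCOUNTS] = {100, 150, 50};

/* An ordinary transaction: move funds between two accounts. */
static void transfer(int from, int to, long amount) {
    balance[from] -= amount;
    balance[to]   += amount;
}

/* The "global checkpoint" transaction: read every balance and sum. */
static long checkpoint_sum(void) {
    long total = 0;
    for (int i = 0; i < NACCOUNTS; i++)
        total += balance[i];
    return total;
}
```

In the full algorithm the point of the checkpoint machinery is precisely to guarantee that `checkpoint_sum` observes a globally consistent state even while transfers are running concurrently.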
A similar situation occurs in the design of file systems. Often a directory must be kept consistent with the actual contents of a disk. A global checkpoint might read the items in the directory and check that they correspond with what is really on the disk. As long as no directory modification transactions were in progress when the checking was done, then a discrepancy would indicate a true file system error. Our global checkpoint algorithm can be used to detect such inconsistencies. For error recovery, global checkpoints can be used to save the relevant part of the global state of the system so that in the event of a crash, the system can later be restarted from that point in the computation. For example, a global checkpoint could be used to provide a restart capability in the Eden system which is being developed at the University of Washington [3]. That system is object-based and includes as a primitive kernel operation a checkpoint operation that writes a single object to stable storage. The object itself decides when it is in a consistent state and hence when the checkpoint can be performed. If the object later crashes, it is restored from the version on stable storage. To extend this checkpoint facility to groups of related and cooperating Eden objects requires that the objects coordinate their checkpointing activities so that the versions saved on stable storage are globally and not just locally consistent. That is just the problem we have been treating in this paper if we take "transaction" to mean the portion of the computation during which an individual Eden object is in an inconsistent state, and if we assume further that an object only enters an inconsistent state in response to some external stimulus (corresponding to a transaction request). Our global checkpoint algorithms could then be applied to produce a globally consistent system state on stable storage by running the "global checkpoint" transaction which simply checkpoints each of the objects in the group. 
Note that our algorithm requires the independence of transactions. If one transaction can initiate another and then wait for its completion, then completion of the first depends on completion of the second, and our algorithm, which might decide to exclude the second from a checkpoint, would wait forever for the first transaction to complete. Our formal definition of a transaction system excludes the possibility of system-initiated transactions.

Acknowledgement

We are grateful to Mike Merritt for many helpful comments and suggestions.

References
Introduction to Computer Systems 15-213/18-243, fall 2009 22nd Lecture, Nov. 17 Instructors: Roger B. Dannenberg and Greg Ganger Today - Threads: review basics - Synchronization - Races, deadlocks, thread safety Process: Traditional View - Process = process context + code, data, and stack Process: Alternative View - Process = thread + code, data, and kernel context Process with Two Threads Threads vs. Processes - Threads and processes: similarities - Each has its own logical control flow - Each can run concurrently with others - Each is context switched (scheduled) by the kernel - Threads and processes: differences - Threads share code and data, processes (typically) do not - Threads are much less expensive than processes - Process control (creating and reaping) is more expensive than thread control - Context switches for processes much more expensive than for threads Detaching Threads - Thread-based servers: - Use "detached" threads to avoid memory leaks - At any point in time, a thread is either joinable or detached - Joinable thread can be reaped and killed by other threads - Detached thread cannot be reaped or killed by other threads - Must be careful to avoid unintended sharing - For example, what happens if we pass the address of connfd to the thread routine? - Pthread_create(&tid, NULL, thread, (void *)&connfd); Pros and Cons of Thread-Based Designs - + Easy to share data structures between threads - + Threads are more efficient than processes - Unintentional sharing can introduce subtle and hard-to-reproduce errors! - The ease with which data can be shared is both the greatest strength and the greatest weakness of threads Shared Variables in Threaded C Programs - Question: Which variables in a threaded C program are shared variables? - The answer is not as simple as "global variables are shared" and "stack variables are private" - Requires answers to the following questions: - What is the memory model for threads? 
- How are variables mapped to each memory instance? - How many threads might reference each of these instances? Threads Memory Model - Conceptual model: - Multiple threads run within the context of a single process - Each thread has its own separate thread context (thread ID, stack, stack pointer, PC, and registers) - All threads share the remaining process context: - Code, data, heap, and shared library segments of the process virtual address space - Open files and installed handlers - Operationally, this model is not strictly enforced: - Register values are truly separate and protected, but - Any thread can read and write the stack of any other thread - Mismatch between the conceptual and operational model is a source of confusion and errors Thread Accessing Another Thread’s Stack ```c char **ptr; /* global */

int main()
{
    int i;
    pthread_t tid;
    char *msgs[2] = {
        "Hello from foo",
        "Hello from bar"
    };
    ptr = msgs;
    for (i = 0; i < 2; i++)
        Pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

/* thread routine */
void *thread(void *vargp)
{
    int myid = (int)vargp;
    static int svar = 0;
    printf("[%d]: %s (svar=%d)\n", myid, ptr[myid], ++svar);
    return NULL;
}
``` Peer threads access main thread’s stack indirectly through global ptr variable Mapping Variables to Memory Instances - Global var: 1 instance (ptr [data]) - Local vars: 1 instance each (i.m, msgs.m) - Local var: 2 instances (myid.p0 [peer thread 0’s stack], myid.p1 [peer thread 1’s stack]) badcnt.c: Improper Synchronization Shared Variable Analysis Which variables are shared? 
<table> <thead> <tr> <th>Variable</th> <th>Referenced by main thread?</th> <th>Referenced by peer thread 0?</th> <th>Referenced by peer thread 1?</th> </tr> </thead> <tbody> <tr> <td>ptr</td> <td>yes</td> <td>yes</td> <td>yes</td> </tr> <tr> <td>svar</td> <td>no</td> <td>yes</td> <td>yes</td> </tr> <tr> <td>i.m</td> <td>yes</td> <td>no</td> <td>no</td> </tr> <tr> <td>msgs.m</td> <td>yes</td> <td>yes</td> <td>yes</td> </tr> <tr> <td>myid.p0</td> <td>no</td> <td>yes</td> <td>no</td> </tr> <tr> <td>myid.p1</td> <td>no</td> <td>no</td> <td>yes</td> </tr> </tbody> </table> Answer: A variable x is shared if multiple threads reference at least one instance of x. Thus: - ptr, svar, and msgs are shared - i and myid are not shared Assembly Code for Counter Loop C code for counter loop in thread i: for (i = 0; i < NITERS; i++) cnt++; Corresponding assembly code Key: Hi (head), Li (load cnt), Ui (update cnt), Si (store cnt), Ti (tail) Concurrent Execution Key idea: In general, any sequentially consistent interleaving is possible, but some give an unexpected result! - Ii denotes that thread i executes instruction I - %eaxi is the content of %eax in thread i’s context

i (thread) | instr | %eax1 | %eax2 | cnt
1 | H1 | - | - | 0
1 | L1 | 0 | - | 0
1 | U1 | 1 | - | 0
1 | S1 | 1 | - | 1
2 | H2 | - | - | 1
2 | L2 | - | 1 | 1
2 | U2 | - | 2 | 1
2 | S2 | - | 2 | 2
2 | T2 | - | 2 | 2
1 | T1 | 1 | - | 2

OK (cnt == 2) Concurrent Execution (cont) Incorrect ordering: two threads increment the counter, but the result is 1 instead of 2 Concurrent Execution (cont) How about this ordering? 
<table> <thead> <tr> <th>i (thread)</th> <th>instr</th> <th>%eax1</th> <th>%eax2</th> <th>cnt</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>H1</td> <td></td> <td></td> <td></td> </tr> <tr> <td>1</td> <td>L1</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>H2</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>L2</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>U2</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>S2</td> <td></td> <td></td> <td></td> </tr> <tr> <td>1</td> <td>U1</td> <td></td> <td></td> <td></td> </tr> <tr> <td>1</td> <td>S1</td> <td></td> <td></td> <td></td> </tr> <tr> <td>1</td> <td>T1</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>T2</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> - We can analyze the behavior using a progress graph. Progress Graphs A progress graph depicts the discrete execution state space of concurrent threads. Each axis corresponds to the sequential order of instructions in a thread. Each point corresponds to a possible execution state (Inst1, Inst2). E.g., (L1, S2) denotes state where thread 1 has completed L1 and thread 2 has completed S2. Trajectories in Progress Graphs A trajectory is a sequence of legal state transitions that describes one possible concurrent execution of the threads. Example: H1, L1, U1, H2, L2, S1, T1, U2, S2, T2 Critical Sections and Unsafe Regions L, U, and S form a critical section with respect to the shared variable cnt. Instructions in critical sections (wrt some shared variable) should not be interleaved. Sets of states where such interleaving occurs form unsafe regions. Definition: A trajectory is safe iff it does not enter any unsafe region. Claim: A trajectory is correct (wrt cnt) iff it is safe. Semaphores - Question: How can we guarantee a safe trajectory? - We must synchronize the threads so that they never enter an unsafe state. 
- Classic solution: Dijkstra's P and V operations on semaphores - Semaphore: non-negative global integer synchronization variable - P(s): [ while (s == 0) wait(); s--; ] - Dutch for "Proberen" (test) - V(s): [ s++; ] - Dutch for "Verhogen" (increment) - OS guarantees that operations between brackets are executed indivisibly - Only one P or V operation at a time can modify s. - When while loop in P terminates, only that P can decrement s. - Semaphore invariant: (s >= 0) badcnt.c: Improper Synchronization ```c /* shared */ volatile unsigned int cnt = 0; #define NITERS 100000000 int main() { pthread_t tid1, tid2; Pthread_create(&tid1, NULL, count, NULL); Pthread_create(&tid2, NULL, count, NULL); Pthread_join(tid1, NULL); Pthread_join(tid2, NULL); if (cnt != (unsigned)NITERS*2) printf("BOOM! cnt=%d\n", cnt); else printf("OK cnt=%d\n", cnt); } /* thread routine */ void *count(void *arg) { int i; for (i=0; i<NITERS; i++) cnt++; return NULL; } ``` How to fix using semaphores? ```c /* Semaphore sem is initially 1 */ /* Thread routine */ void *count(void *arg) { int i; for (i=0; i<NITERS; i++) { P(&sem); cnt++; V(&sem); } return NULL; } ``` Safe Sharing with Semaphores - One semaphore per shared variable - Initially set to 1 - Here is how we would use P and V operations to synchronize the threads that update cnt: Wrappers on POSIX Semaphores ```c /* Initialize semaphore sem to value */ /* pshared=0 if thread, pshared=1 if process */ void Sem_init(sem_t *sem, int pshared, unsigned int value) { if (sem_init(sem, pshared, value) < 0) unix_error("Sem_init"); } /* P operation on semaphore sem */ void P(sem_t *sem) { if (sem_wait(sem)) unix_error("P"); } /* V operation on semaphore sem */ void V(sem_t *sem) { if (sem_post(sem)) unix_error("V"); } ``` Warning: It's really slow! 
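Returning to the progress-graph slides: the claim that a trajectory is correct (wrt cnt) iff it is safe can be checked mechanically. The sketch below is ours, not from the lecture: encode each thread's instruction stream as H, L, U, S, T, feed in a trajectory as the order in which the two threads take steps, and flag any state where both threads have completed L but not yet completed S (the unsafe region).

```c
#include <assert.h>
#include <stdbool.h>

/* Each thread executes H, L, U, S, T in order.  A trajectory is the
 * order in which the two threads take steps: an array of thread ids
 * (1 or 2).  The unsafe region is the set of states where BOTH
 * threads have completed L but not yet completed S. */
static bool in_critical(int completed) {
    return completed >= 2 && completed <= 3;   /* L done, S not yet done */
}

/* Returns true iff the trajectory never enters the unsafe region. */
static bool is_safe(const int *traj, int len) {
    int completed[2] = {0, 0};                 /* instructions done so far */
    for (int i = 0; i < len; i++) {
        completed[traj[i] - 1]++;
        if (in_critical(completed[0]) && in_critical(completed[1]))
            return false;
    }
    return true;
}
```

For example, the ordering H1, L1, U1, S1, T1, H2, ... is safe, while H1, L1, H2, L2, ... enters the unsafe region as soon as both loads have completed — exactly the interleavings the slides classify as OK and incorrect.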
Sharing With POSIX Semaphores ```c /* properly sync'd counter program */ #include "csapp.h" #define NITERS 100000000 volatile unsigned int cnt; sem_t sem; /* semaphore */ int main() { pthread_t tid1, tid2; Sem_init(&sem, 0, 1); /* sem=1 */ /* create 2 threads and wait */ ... if (cnt != (unsigned)NITERS*2) printf("BOOM! cnt=%d\n", cnt); else printf("OK cnt=%d\n", cnt); exit(0); } /* thread routine */ void *count(void *arg) { int i; for (i=0; i<NITERS; i++) { P(&sem); cnt++; V(&sem); } return NULL; } ``` Today - Threads: basics - Synchronization - Races, deadlocks, thread safety One worry: races - A race occurs when correctness of the program depends on one thread reaching point x before another thread reaches point y. ```c /* a threaded program with a race */ int main() { pthread_t tid[N]; int i; for (i = 0; i < N; i++) pthread_create(&tid[i], NULL, thread, &i); for (i = 0; i < N; i++) pthread_join(tid[i], NULL); exit(0); } /* thread routine */ void *thread(void *vargp) { int myid = *((int *)vargp); printf("Hello from thread %d\n", myid); return NULL; } ``` Where is the race? Race Elimination - Make sure we don’t have unintended sharing of state ```c /* a threaded program without the race */ int main() { pthread_t tid[N]; int i; for (i = 0; i < N; i++) { int *valp = malloc(sizeof(int)); *valp = i; pthread_create(&tid[i], NULL, thread, valp); } for (i = 0; i < N; i++) pthread_join(tid[i], NULL); exit(0); } /* thread routine */ void *thread(void *vargp) { int myid = *((int *)vargp); free(vargp); printf("Hello from thread %d\n", myid); return NULL; } ``` Another worry: Deadlock - Processes wait for a condition that will never be true Typical Scenario - Processes 1 and 2 need two resources (A and B) to proceed - Process 1 acquires A, waits for B - Process 2 acquires B, waits for A - Both will wait forever! 
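One way out of the scenario above, which the next slides develop, is to make both parties acquire A and B in the same fixed order, so no cycle of waits can form. A minimal runnable sketch of ours, using pthread mutexes in place of semaphores and a small iteration count:

```c
#include <assert.h>
#include <pthread.h>

#define ITERS 100000
static pthread_mutex_t m[2] = {PTHREAD_MUTEX_INITIALIZER,
                               PTHREAD_MUTEX_INITIALIZER};
static int shared_cnt;

/* Both threads acquire the two locks in the SAME global order
 * (m[0] then m[1]), so no cycle of waits can form. */
static void *count_fn(void *vargp) {
    (void)vargp;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&m[0]);
        pthread_mutex_lock(&m[1]);
        shared_cnt++;
        pthread_mutex_unlock(&m[1]);
        pthread_mutex_unlock(&m[0]);
    }
    return NULL;
}

/* Run two counting threads to completion and return the final count. */
static int run_pair(void) {
    pthread_t t1, t2;
    shared_cnt = 0;
    pthread_create(&t1, NULL, count_fn, NULL);
    pthread_create(&t2, NULL, count_fn, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_cnt;
}
```

Because the lock order is global, both joins always return and the count is exact; reversing the acquisition order in one thread would recreate the deadlock scenario.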
Deadlocking With POSIX Semaphores ```c int main() { pthread_t tid[2]; Sem_init(&mutex[0], 0, 1); /* mutex[0] = 1 */ Sem_init(&mutex[1], 0, 1); /* mutex[1] = 1 */ Pthread_create(&tid[0], NULL, count, (void*) 0); Pthread_create(&tid[1], NULL, count, (void*) 1); Pthread_join(tid[0], NULL); Pthread_join(tid[1], NULL); printf("cnt=%d\n", cnt); exit(0); } void *count(void *vargp) { int i; int id = (int) vargp; for (i = 0; i < NITERS; i++) { P(&mutex[id]); P(&mutex[1-id]); cnt++; V(&mutex[id]); V(&mutex[1-id]); } return NULL; } ``` Avoiding Deadlock - Acquire shared resources in same order ```c int main() { pthread_t tid[2]; Sem_init(&mutex[0], 0, 1); /* mutex[0] = 1 */ Sem_init(&mutex[1], 0, 1); /* mutex[1] = 1 */ Pthread_create(&tid[0], NULL, count, (void*) 0); Pthread_create(&tid[1], NULL, count, (void*) 1); Pthread_join(tid[0], NULL); Pthread_join(tid[1], NULL); printf("cnt=%d\n", cnt); exit(0); } void *count(void *vargp) { int i; for (i = 0; i < NITERS; i++) { P(&mutex[0]); /* both threads acquire in the same order */ P(&mutex[1]); cnt++; V(&mutex[1]); V(&mutex[0]); } return NULL; } ``` Avoided Deadlock in Progress Graph (progress graph: each thread’s axis runs through P(s0), P(s1), V(s1), V(s0)) - No way for trajectory to get stuck - Processes acquire locks in same order - Order in which locks released immaterial Crucial concept: Thread Safety - Functions called from a thread (without external synchronization) must be thread-safe - Meaning: it must always produce correct results when called repeatedly from multiple concurrent threads Some examples of thread-unsafe functions: - Failing to protect shared variables - Relying on persistent state across invocations - Returning a pointer to a static variable - Calling thread-unsafe functions Thread-Unsafe Functions (Class 1) - Failing to protect shared variables - Fix: Use P and V semaphore operations - Example: goodcnt.c - Issue: Synchronization operations will slow down code - e.g., badcnt requires 0.5s, goodcnt requires 7.9s Thread-Unsafe Functions (Class 2) - Relying on persistent state across multiple function invocations - Example: Random number generator (RNG) that relies on static state /* rand: return pseudo-random integer on 0..32767 */ static unsigned int next = 1; int rand(void) { next = next*1103515245 + 12345; return (unsigned int)(next/65536) % 32768; } /* srand: set seed for rand() */ void srand(unsigned int seed) { next = seed; } Thread-Unsafe Functions (Class 3) - Returning a ptr to a static variable - Fix: - 1. Rewrite code so caller passes pointer to struct - Issue: Requires changes in caller and callee - 2. 
Lock-and-copy - Issue: Requires only simple changes in caller (and none in callee) - However, caller must free memory Making Thread-Safe RNG - Pass state as part of argument - and, thereby, eliminate static state /* rand_r: return pseudo-random integer on 0..32767 */ int rand_r(int *nextp) { *nextp = *nextp*1103515245 + 12345; return (unsigned int)(*nextp/65536) % 32768; } - Consequence: programmer using rand_r must maintain the seed Thread-Unsafe Functions (Class 4) - Calling thread-unsafe functions - Calling one thread-unsafe function makes the entire function that calls it thread-unsafe - Fix: Modify the function so it calls only thread-safe functions Thread-Safe Library Functions - All functions in the Standard C Library (at the back of your K&R text) are thread-safe - Examples: malloc, free, printf, scanf - Most Unix system calls are thread-safe, with a few exceptions: <table> <thead> <tr> <th>Thread-unsafe function</th> <th>Class</th> <th>Reentrant version</th> </tr> </thead> <tbody> <tr> <td>asctime</td> <td>3</td> <td>asctime_r</td> </tr> <tr> <td>ctime</td> <td>3</td> <td>ctime_r</td> </tr> <tr> <td>gethostbyaddr</td> <td>3</td> <td>gethostbyaddr_r</td> </tr> <tr> <td>gethostbyname</td> <td>3</td> <td>gethostbyname_r</td> </tr> <tr> <td>inet_ntoa</td> <td>3</td> <td>(none)</td> </tr> <tr> <td>localtime</td> <td>3</td> <td>localtime_r</td> </tr> <tr> <td>rand</td> <td>2</td> <td>rand_r</td> </tr> </tbody> </table> Threads Summary - Threads provide another mechanism for writing concurrent programs - Threads are very popular - Somewhat cheaper than processes - Easy to share data between threads - Make use of multiple cores for parallel algorithms - However, the ease of sharing has a cost: - Easy to introduce subtle synchronization errors - Tread carefully with threads! - For more info: - D. Butenhof, “Programming with POSIX Threads”, Addison-Wesley, 1997
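The rand_r pattern from the Class 2 fix is easy to verify: because all state lives in the caller-supplied seed, two callers with their own seeds cannot interfere, and the same seed always reproduces the same stream. A small self-contained sketch (renamed `my_rand_r` here only to avoid clashing with the libc symbol):

```c
#include <assert.h>

/* Reentrant RNG in the style of the slide's rand_r: all state lives
 * in the caller-supplied seed, so there is no shared static state. */
static int my_rand_r(unsigned int *nextp) {
    *nextp = *nextp * 1103515245 + 12345;
    return (int)((*nextp / 65536) % 32768);
}
```

Two seeds initialized to the same value march through identical sequences, which is exactly the "programmer must maintain the seed" consequence noted above: reproducibility is now the caller's responsibility.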
A System for Seamless Abstraction Layers for Model-based Development of Embedded Software* Judith Thyssen, Daniel Ratiu, Wolfgang Schwitzer, Alexander Harhurin, Martin Feilkas Technische Universität München Institut für Informatik, Lehrstuhl für Software und Systems Engineering {thyssen,ratiu,schwitzer,harhurin,feilkas}@in.tum.de Eike Thaden OFFIS - Institute for Information Technology eike.thaden@offis.de Abstract: Model-based development aims at reducing the complexity of software development by the pervasive use of adequate models throughout the whole development process, starting from early phases up to implementation. In this paper we present a conceptual framework to holistically classify developed models along different levels of abstraction. We do this by defining adequate abstractions for different development stages while ignoring the information that is not relevant at a particular development step or for a certain stakeholder. The abstraction is achieved in terms of the granularity level of the system under study (e.g. system, sub-system, sub-sub-system) and in terms of the information that the models contain (e.g. specification of functionality, description of architecture, deployment on specific hardware). We also present the relation between models that describe different perspectives of the system or are at different granularity levels. However, we do not address the process to be followed for building these models. 1 Motivation The central challenges faced during the development of today’s embedded systems are twofold: firstly, the rapid increase in the amount and importance of functions realized by software, whose extensive interaction leads to a combinatorial increase in complexity; and secondly, the distribution of the development of complex systems across the boundaries of single companies and its organization as a deep chain of integrators and suppliers. 
Model-based development promises to provide effective solutions through the pervasive use of adequate models in all development phases as the main development artifacts. Today’s model-based software development involves different models at different stages in the process and at different levels of abstraction. Unfortunately, the current approaches do not make clear which kinds of models should be used in which process steps or how the transition between models should be done. This subsequently leads to gaps between models and thereby to a lack of automation, to difficulties in tracing the origins of modeling decisions made in different stages, and to difficulties in performing global analyses or optimizations that transcend the boundaries of a single model. (*This work was mainly funded by the German Federal Ministry of Education and Research (BMBF), grant SPES2020, 01IS08045A.) Our aim is to enable a seamless integration of models into model chains. A first step to achieve this is to understand which kinds of models are built, who are their main stakeholders, and the relation between them. To this end, we introduce a conceptual framework to holistically classify models that describe the system. Outline. In Section 2 we introduce our modeling framework. In the following sections we detail the framework along two dimensions: on the vertical dimension, model elements specified at different granularity levels are mapped onto each other. The specifications of elements at the lower granularity level form a refinement of the specification of their counterparts at the higher granularity level. A special kind of mapping is the mapping of one element on the higher level to a set of elements on the lower level, which is similar to the classical decomposition (Section 3). On the horizontal dimension, different development perspectives are related to each other (Section 4). In Section 6 we tackle the issue of crossing the different abstraction layers. 
Section 7 presents related work and Section 8 concludes the paper, giving an outlook on future work. 2 Achieving abstraction Abstraction means the reduction of complexity by reducing details. There are two typical possibilities to reduce details: Whole-part decomposition. One manner to deal with complexity is to apply the “divide and conquer” principle and to decompose the whole system into smaller, less complex parts. These parts can be regarded as full-fledged systems themselves at a lower level of granularity. We structurally decompose the system into its sub-systems, and sub-systems into sub-sub-systems, until we reach basic blocks that can be regarded as atomic. Distinct development perspectives. The second manner to deal with complexity is to focus only on certain aspects of the system to be developed while leaving out other aspects that are not interesting for the aims related to a given development perspective. The essential complexity of a system is given by the (usage) functionality that it has to implement. By changing the perspective from usage functionality towards realization, the complexity is increased by considering additional implementation details (e.g. design decisions that enable the reuse of existing components). Consequently, our modeling framework comprises two different dimensions, as illustrated in Figure 1: one given by the level of granularity at which the system is regarded and the second one given by different software development perspectives on the system. Levels of granularity. A system is composed of sub-systems which are at a lower granularity level and which can themselves be regarded as systems (Section 3). Often, the sub-systems are developed separately by different suppliers and must be integrated afterwards. Especially for system integrators, the decomposition of a system into sub-systems and subsequently the composition of the sub-systems into a whole are of central importance. 
As a consequence, we explicitly include the different granularity levels of systems in our framework (Figure 1, vertical dimension).

Software development perspectives. A system can be regarded from different perspectives, each representing different kinds of information about the system (Section 4). Our framework contains the following development perspectives: the user perspective, the logical (structural) perspective, the technical perspective, and the geometrical perspective. The perspectives aim to reduce the complexity of software development by supporting a stepwise refinement of information from usage functionality to the realization on hardware: early models capture the usage functionality and are step-by-step enriched with design and implementation information (Figure 1, horizontal dimension).

3 Granularity Levels

In order to cope with the complexity of today's systems, we decompose them into sub-systems. Each sub-system can be considered a system itself and can be further decomposed until we reach basic building blocks. As a result we obtain a set of granularity levels at which we regard the system, e.g. system level – sub-system level – sub-sub-system level – ... – basic block level. These granularity levels enable us to seamlessly regard the system at increasingly finer levels of granularity. Since the system is structured into finer parts which can be modeled independently and aggregated into the overall system afterwards, the granularity levels allow us to reduce the overall complexity following the "divide and conquer" principle. Each system can be decomposed into sub-systems according to different, often competing criteria. For example, we might choose to decompose the system into sub-systems that best match either the user, the logical, or the technical architecture (these software development perspectives are presented in detail in Section 4).
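The whole-part decomposition into granularity levels described above can be sketched as a simple recursive data structure. This is an illustrative sketch only; the class and the example names are ours, not part of the framework.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SystemNode:
    """A system that may be decomposed into sub-systems (whole-part decomposition)."""
    name: str
    parts: List["SystemNode"] = field(default_factory=list)

    def is_basic_block(self) -> bool:
        # Leaves of the decomposition are regarded as atomic basic blocks.
        return not self.parts

    def levels(self, depth: int = 0) -> List[Tuple[int, str]]:
        """Return (granularity level, node name) pairs for the whole hierarchy."""
        result = [(depth, self.name)]
        for part in self.parts:
            result.extend(part.levels(depth + 1))
        return result

# Example hierarchy: system -> sub-systems -> basic blocks
system = SystemNode("System", [
    SystemNode("Sub-system 1", [SystemNode("Block 1a"), SystemNode("Block 1b")]),
    SystemNode("Sub-system 2"),
])
```

Each leaf of such a tree can be handed to a supplier as a system in its own right, following the division of labor discussed below.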
Depending on which of these software development perspectives is prioritized, the resulting system decomposition looks different: the more emphasis is put on decomposing along one of the software development perspectives, the more the modularization with respect to the other perspectives is neglected and other aspects of modularity are lost (a phenomenon known as the "tyranny of the dominant decomposition").

On the one hand, for models situated at coarser granularity levels, the decomposition is often driven by established industry-wide domain architectures. These domain architectures are primarily determined by the physical/technical layout of a system, which is in turn influenced by the vertical organization of the industry and the division of labor between suppliers and integrators (most suppliers deliver a piece of hardware that contains the corresponding control software). On the other hand, for models situated at finer-grained levels, the decomposition is influenced by other factors, e.g. an optimal modularization of the logical component architecture in order to enable the reuse of existing components. However, since the logical architecture acts as a mediator between the user and technical perspectives (see Section 4), we strongly believe that the logical modularization should be the main driver of the part-whole decomposition of the system.

**Integrators vs. suppliers.** Depending on the level of granularity at which we regard the system, we can distinguish different roles among stakeholders (illustrated in Figure 2): the end users are interested only in the top-level functionality of the entire system, the system integrators are responsible for integrating the coarser-grained components into a whole, while the suppliers are responsible for implementing the lower-level components.
An engineer can act as a system integrator with respect to the engineers working at finer granularity levels, and as a supplier with respect to the engineers working at a coarser granularity level. This top-down division of work has different depths depending on the complexity of the end product: for example, when developing an airplane the granularity hierarchy is deep, whereas for a conveyor it is much shallower.

4 Software Development Perspectives

The basic goal of a software system is to offer the required user functionality. There can be other goals that need to be considered, such as efficiency, reliability, reuse of existing systems, or integration with legacy systems. However, in our opinion the functional goals are paramount, since it is pointless to build a highly efficient or reliable system that does not perform the desired functionality. Regarding the system purely from the point of view of the user functionality that it implements offers the highest level of abstraction, since implementation and technical details as well as the concerns arising from non-functional requirements are ignored (abstracted away). By changing the perspective from usage functionality towards implementation, we add more (implementation) details that are irrelevant for the usage, and thus the complexity of the system description grows. We therefore propose a system of software development perspectives that allows us to incrementally add more information to the system models. This refinement of models is inspired by the goal-oriented approach introduced in [Lev00], in which the information at one level acts as the goals with respect to the model at the next level.
Our software development perspectives are: the user perspective, which represents the decomposition of the system according to the behavior needed by its users; the logical perspective, which represents the logical decomposition of the system and the realization of the software architecture; and the technical perspective, which represents the technical implementation of the system. Additionally, the geometrical perspective represents the geometrical (physical) layout of the system, e.g. information about the shape of a hardware element or its concrete position in an airplane. From a systems engineering point of view, the geometrical perspective is essential. In this paper, however, we concentrate on the development of the software of future systems; the geometrical perspective is therefore not further detailed in the following.

User perspective. The user perspective describes the usage functionality that a system offers to its environment/users. A user can be either the end user of the system (at the highest level of granularity) or a system integrator (at a lower level of granularity). The functionality desired from the system represents the most abstract – but nevertheless most important – information about the system. During design and implementation, we add realization details that are irrelevant for the usage of the system (but nevertheless essential for its realization). The central aims of the user perspective are:

- Hierarchical structuring of the functionality from the point of view of the system's users;
- Definition of the boundary between the system functionality and its environment: definition of the syntactical interface and the abstract information flow between the system and its environment;
- Consolidation of the functional requirements by formally specifying the requirements on the system behavior from the black-box perspective;
- Understanding of the functional interrelationships and mastering of feature interaction.

Logical perspective.
The logical perspective describes how the functionality is realized by a network of interacting logical components that determines the logical architecture of the system. The design of the logical component architecture is driven by various considerations, such as achieving maximum reuse of existing components or fulfilling different non-functional properties of the system. The logical architecture bridges the gap between the functional requirements and the technical implementation means. It acts as a pivot that represents a flexibility point in the implementation. The main aims of the logical perspective are:

- Describing the architecture of the system by partitioning it into communicating logical components;
- Supporting the reuse of existing components and designing components so as to facilitate their reuse in the future;
- Optionally: definition of the total behavior of the system (as opposed to the partial specifications in the user perspective), enabling the complete simulation of all desired functionalities;
- Mediation between the structure of the function hierarchy and that of the already existing technical platform on which the system should run.

Since the functions of the user perspective are defined by given user requirements and the prerequisites of the technical layer are largely given a priori, from a software development point of view the main engineering activities are concentrated on the logical component architecture. The logical architecture should be designed to capture the central domain abstractions and to support reuse. As a consequence, it should be as insensitive as possible to changes of the desired user functionality or of the technical platform. It should be the artifact in the development process with the highest stability and the highest potential for reuse.

Technical perspective.
The technical perspective comprises the hardware topology on which the logical model is to be deployed. By hardware we mean entities on which the software runs (ECUs) or that directly interact with the software (sensors/actuators). At higher granularity levels, hardware entities can also be abstractions/aggregations of such entities. In the technical perspective, engineers need to consider hardware-related issues such as communication throughput, bandwidth, timing properties, the location of different hardware parts, or the exact interaction of the software system with the environment. The main aims of the technical perspective are:

- Describing the hardware topology on which the system will run, including important characteristics of the hardware;
- Describing the actuators, sensors, and the HMI (human-machine interaction) used to interact with the environment;
- Implementation and verification of real-time properties in combination with a deployment of logical components;
- Ensuring that the behavior of the deployed system (i.e., the hardware and the software running on it) conforms to the specifications of the logical layer.

Note: one of the main advantages of the clear distinction between the logical and the technical architecture is that it enables a flexible (re-)deployment of the logical components to a distributed network of ECUs. If the hardware platform changes, the logical components only need to be (re-)deployed; the logical architecture does not need to be redesigned.

5 Core-models and their Decorations (Viewpoints)

In order to enable the extensibility of models with additional information relevant for the realization of non-functional requirements (e.g. failure models) or even for other development disciplines (e.g. mechanical information), we enable the use of decorators (also called viewpoints). Each of the three software development perspectives is usually represented by a dominating core-model (e.g.
function hierarchies in the user perspective, networks of components in the logical perspective) and may provide a number of additional decorator-models. Decorator-models are specialized for describing distinct classes of functional and non-functional requirements relevant to their respective software development perspectives. They enrich the core-models with additional information that is needed in later steps of the software development process or for the integration with other disciplines, for example safety analyses, schedule generation, or deployment optimizations. The complexity of decorator-models can be arbitrary and their impact on the overall system functionality may be significant. Failure-models are an example of usually quite complex decorator-models attached to the models of the logical architecture, and their impact on the overall functionality is typically critical as well. Other examples of existing decorations in the technical perspective are information concerning the physical, mechanical, or electrical properties of the technical system under design.

6 Relating the Models

In the previous sections (Sections 3 and 4) we detailed two different ways to achieve abstraction: by providing mappings between elements at different granularity levels and by using different software development perspectives. We thereby obtain models (contained in the cells of the table in Figure 1) that describe the system either at different granularity levels or from different software development points of view. In this section we discuss the relation between models in adjacent cells (horizontally from left to right, and vertically between two consecutive rows).

Horizontal allocation (mapping models at the same granularity level).
In general, there is a many-to-many (n:m) relation between functions (user perspective) and the logical components (logical perspective) that implement them, and likewise between logical components and the hardware on which they run. However, in order to keep the relations between models simple, we require the allocation to be done in a many-to-one manner. In particular, we do not allow a function to be scattered over multiple logical components, or a logical component to run on multiple hardware entities. If necessary, the user perspective should be further decomposed into finer-grained parts until each function can be allocated to an individual component. Similarly, the logical components should be fine-grained enough to allow a many-to-one (n:1) deployment on hardware units. Allocations represent decision points: each time a transition from a more abstract to a more concrete perspective is made, engineers have to decide for one of several possible allocations/deployments.

Note. It can happen (especially at coarse levels of granularity) that there is an isomorphism between the decompositions realized in the different perspectives (e.g. that the main sub-functions of a complex product are realized by dedicated logical components running on dedicated hardware).

Vertical allocation (transition from systems to sub-systems). Vertical allocation means the top-down transition from systems to sub-systems that typically happens at the border between suppliers and integrators: sub-systems built by suppliers are integrated into larger systems by integrators (see Figure 2). In Figure 3 we illustrate the transition between two subsequent granularity levels, generically named "system" and "sub-systems". The sub-systems of a system can be determined by the structural decomposition in the logical or hardware perspective (see Section 3).
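The many-to-one rule for horizontal allocation stated above (no function scattered over multiple logical components, no component deployed on multiple hardware units) can be expressed as a small consistency check over an allocation relation. The representation below is a hypothetical sketch of ours, not an interface from the framework.

```python
def is_many_to_one(allocation):
    """Check that an allocation relation, given as (source, target) pairs
    (e.g. (function, logical component) or (component, hardware unit)),
    maps each source element to exactly one target element."""
    targets = {}
    for source, target in allocation:
        if source in targets and targets[source] != target:
            return False  # source scattered over several targets: not allowed
        targets[source] = target
    return True

# Allowed: several functions on one component (n:1)
ok = [("F1.1.3", "Component 1"), ("F1.2.2", "Component 1"), ("F2", "Component 2")]
# Forbidden: one function scattered over two components
bad = ok + [("F2", "Component 1")]
```

If such a check fails, the remedy prescribed above is to decompose the offending source element further until a many-to-one allocation becomes possible.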
More complex mappings between elements on different granularity levels are possible as well, but they are not further considered in this paper. The structure of the system defines its set of components and how they are composed. Each leaf component of the system structure determines a new system at the next granularity level. For example, in Figure 3 (top right) the structural decomposition at the system level contains three components; subsequently, at the next level of granularity we have three sub-systems corresponding to these components.

Generally, the functionality allocated to a component of the system defines the functional requirements for its corresponding sub-system: each sub-system carries the user functionality allocated to its corresponding component at the higher granularity level. For example, in Figure 3 the functions F1.1.3 and F1.2.2 were allocated to "Component 1"; these functions should be implemented by "Sub-system 1", which corresponds to "Component 1" at the next level of granularity. In addition to the original user functions allocated to components at the system level, new functionality is required at the sub-system level due to design (and implementation) decisions taken at the system level. This functionality is needed to allow the integration of the sub-systems into the system and can be seen as "glue" functionality. In Figure 3 the glue functionality is depicted by hatched circles; the root representing the entire functionality of a sub-system (which does not exist at the system level) is depicted as a dotted circle.

Figure 3: Structural decomposition of the system is one way to define the sub-systems situated at the subsequent granularity level.

7 Related Work

Our approach of reducing complexity through systematic software development along different abstraction layers is part of a more general area of research on the pervasive and disciplined use of models along the development process.
MDA [MM03] is a well-known approach to mastering the complexity of today's systems by describing the system on different levels of abstraction, starting with an informal description of the system (the "Computation Independent Model"). Based on this, the Platform Independent Model (PIM) defines the pure system functionality independently of its technical realization, and finally the PIM is translated into one or more Platform Specific Models (PSMs) that can run on a specific platform. While we are aiming at a modeling framework for the development of embedded systems, or even for specific domains, the MDA is a general-purpose approach. The different development perspectives of our modeling framework can be seen as instantiations of the MDA layers. Furthermore, the MDA layers do not address issues related to the vertical decomposition of systems into sub-systems down to basic components.

The EAST ADL (Electronics Architecture and Software Technology – Architecture Definition Language) [ITE08] has been designed for the automotive domain and describes software-intensive electric/electronic systems in vehicles on five different abstraction layers, starting from high-level requirements and features visible to the user down to details close to the implementation, such as operating system constructs (e.g. tasks) and electronic hardware. In contrast to our approach, the EAST ADL does not clearly distinguish the different dimensions of abstraction, namely the decomposition into sub-systems and the different perspectives on a system. However, the EAST ADL can be seen as an instantiation of our modeling framework.
With regard to contents and aims, the Vehicle Feature Model and the Functional Analysis Architecture of the EAST ADL can be seen as counterparts of our user (functional) perspective, the Functional Design Architecture roughly corresponds to the logical perspective, and the Function Instance Model, the Platform Model, and the Allocation Model roughly correspond to the technical perspective.

The idea of describing a system from different perspectives is not new. For example, the 4+1 View Model by Kruchten [Kru95] provides four different views of a system: the logical, process, development, and physical views. The views are not fully orthogonal or independent – elements of one view are connected to elements in other views. The elements in the four views are tied together by a set of important scenarios; this fifth view is in some sense an abstraction of the most important requirements. However, since this approach is not based on a proper theory and sufficient formalization, deeper benefits are not achieved: the possibilities to perform analyses across these views are weak, and the models are applied only at particular stages of the development process without a formal connection between them. In contrast, the approach introduced here aims at "theoretical foundations of a strictly model based development in terms of an integrated, homogeneous, but yet modular construction kit for models" [BR07].

In [BFG+08], a first step has been made to integrate the existing research results into an integrated architectural model for the development of embedded software systems. The architectural model comprises three subsequent layers, namely the service layer, the logical layer, and the technical layer. This architectural model served as the basis for the modeling framework presented here; the different layers are reflected by the different software development perspectives of our modeling framework.
In the current work we extended the architectural model by introducing granularity levels as a second dimension.

8 Future Work

In this paper we presented different abstraction layers along which the models used to realize a software product should be categorized. There are, however, many open issues that are subject to current and future work: 1) methodology – the steps that should be performed to instantiate the layers are of central importance for the realization of the software product; 2) allocation – on which criteria should the allocation of functionality to logical components and of logical components to the technical platform be based; 3) models – which modeling techniques fit best to describe particular aspects of the system (e.g. functionality) at a particular granularity level (e.g. the entire system).

Acknowledgements: Early ideas of this paper originate from a discussion with Carsten Strobel, Alex Metzner, and Ernst Sikora.

References
{"Source-Url": "http://subs.emis.de/LNI/Proceedings/Proceedings160/137.pdf", "len_cl100k_base": 4957, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 22016, "total-output-tokens": 5751, "length": "2e12", "weborganizer": {"__label__adult": 0.0003609657287597656, "__label__art_design": 0.0005102157592773438, "__label__crime_law": 0.0002655982971191406, "__label__education_jobs": 0.0005135536193847656, "__label__entertainment": 4.941225051879883e-05, "__label__fashion_beauty": 0.00014913082122802734, "__label__finance_business": 0.00018167495727539065, "__label__food_dining": 0.000331878662109375, "__label__games": 0.0005030632019042969, "__label__hardware": 0.001361846923828125, "__label__health": 0.0003790855407714844, "__label__history": 0.0002388954162597656, "__label__home_hobbies": 8.33272933959961e-05, "__label__industrial": 0.0004549026489257813, "__label__literature": 0.00022208690643310547, "__label__politics": 0.0002180337905883789, "__label__religion": 0.0005044937133789062, "__label__science_tech": 0.01294708251953125, "__label__social_life": 5.823373794555664e-05, "__label__software": 0.0036525726318359375, "__label__software_dev": 0.97607421875, "__label__sports_fitness": 0.00027251243591308594, "__label__transportation": 0.0006461143493652344, "__label__travel": 0.0001919269561767578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28489, 0.01517]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28489, 0.6186]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28489, 0.92086]], "google_gemma-3-12b-it_contains_pii": [[0, 2557, false], [2557, 5753, null], [5753, 7045, null], [7045, 10096, null], [10096, 12201, null], [12201, 15604, null], [15604, 18368, null], [18368, 21478, null], [21478, 22669, null], [22669, 26133, null], [26133, 28489, null], [28489, 28489, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 2557, true], [2557, 5753, null], [5753, 7045, null], [7045, 10096, null], [10096, 12201, null], [12201, 15604, null], [15604, 18368, null], [18368, 21478, null], [21478, 22669, null], [22669, 26133, null], [26133, 28489, null], [28489, 28489, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28489, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28489, null]], "pdf_page_numbers": [[0, 2557, 1], [2557, 5753, 2], [5753, 7045, 3], [7045, 10096, 4], [10096, 12201, 5], [12201, 15604, 6], [15604, 18368, 7], [18368, 21478, 8], [21478, 22669, 9], [22669, 26133, 10], [26133, 28489, 11], [28489, 28489, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28489, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
3c0d16fdee6c55971ab06b3b3c230cbc366f8db9
Agent-Based Electronic Market With Ontology-Services

Nuno Silva, Maria João Viamonte, Paulo Maio
GECAD - Knowledge Engineering and Decision Support Research Group
Institute of Engineering of Porto
{nps, mjv, pam}@isep.ipp.pt

Abstract

This paper proposes a semantic information integration approach for agent-based electronic markets based on ontology technology, improved by the application and exploitation of the trust relationships captured by social networks. We intend to address the problems raised by the growth of e-commerce by using software agents to support both customers and suppliers in buying and selling products. The diversity of the involved actors leads to different conceptualizations of their needs and capabilities, giving rise to semantic incompatibilities between them. It is hard to find two agents using precisely the same vocabulary; they usually have heterogeneous private vocabularies defined in their own private ontologies. In order to support the conversation among different agents, we propose what we call ontology-services to facilitate agents' interoperability. More specifically, this work proposes an ontology-based information integration approach that exploits the ontology mapping paradigm by aligning consumer needs and market capabilities in a semi-automatic mode, improved by the application and exploitation of the trust relationships captured by social networks.

1. Introduction

In an efficient agent-mediated electronic market, where all partners, both sending and receiving messages, have to reach acceptable and meaningful agreements, it is necessary to have common standards: an interaction protocol to achieve deals, a language for describing the messages' content, and ontologies for describing the domain's knowledge. The need for these standards emerges from the nature of the goods/services traded in business transactions. The goods/services are described through multiple attributes (e.g.
price, features, and quality), which implies that negotiation processes and final agreements between sellers and buyers must be enhanced with the capability to understand the terms and conditions of the transaction (e.g. vocabulary semantics, currencies to denote different prices, different units to represent measures, or mutual dependencies of products). A critical factor for the efficiency of future negotiation processes and the success of the potential settlements is an agreement among the negotiating parties about how the issues of a negotiation are represented and what this representation means to each of the negotiating parties. This problem is referred to as the ontology problem of electronic negotiations [1]. Distributors, manufacturers, and service providers may have radically different ontologies that differ significantly in format, structure, and meaning. Given the increasingly complex requirements of applications, the need for rich, consistent, and reusable semantics, the growth of semantically interoperable enterprises into knowledge-based communities, and the evolution and adoption of semantic web technologies need to be addressed. Ontologies represent the best answer to the demand for intelligent systems that operate closer to the human conceptual level [2]. To achieve this degree of automation and move to new-generation e-commerce applications, we believe that a new model of software is needed. In order to support the conversation among different agents, we propose what we call ontology-services to facilitate agents' interoperability. More specifically, this work proposes an ontology-based information integration approach that exploits the ontology mapping paradigm by aligning consumer needs and market capabilities in a semi-automatic mode, improved by the application and exploitation of the trust relationships captured by social networks.
In this paper we present an ontology-mapping service aligned with a negotiation mediation service, allowing negotiation to take place between entities using different domain ontologies. Although the negotiation of the ontology mapping is not part of the business negotiation (B2C), the same infrastructure can be applied, minimizing the development effort. On the other hand, ontologies are perceived as socially evolving descriptive artifacts of a domain of discourse, namely that of B2C, which can be used as a source of social and web-of-trust information to support the ontology-services.

The paper is organized as follows. This section gives an introduction. Section 2 contextualizes the usage of an ontology-mapping service in agent-based automated negotiations. Section 3 details the service itself. Section 4 illustrates an ontology mapping negotiation example, and Section 5 draws some conclusions.

2. The Marketplace Ontology Services Model

To study our proposal we combine two tools developed at our research group, namely ISEM [3] and the MAFRA Toolkit [4], into a novel electronic marketplace approach, together with the exploitation of social semantic network services. ISEM (Intelligent System for Electronic MarketPlaces) is a multi-agent market simulator designed for analysing agent market strategies. The main characteristics of ISEM are: first, it addresses the complexities of on-line buyers' behaviour by providing a rich set of behaviour parameters; second, it provides available market information that allows sellers to make assumptions about buyers' behaviour and preference models; third, the different agents customise their behaviour adaptively by learning each user's preference models and business strategies. The MAFRA Toolkit is the instantiation of MAFRA – the MApping FRAmework – addressing the fundamental phases of the ontology mapping process.
In particular, it allows the identification, specification, and representation of semantic relations between two different ontologies. These semantic relations are then applied in the execution phase of the interoperation, transforming the data as understood by one of the actors into the data understood by the other. In this sense, ontology mapping allows actors to keep their knowledge bases unchanged while supporting the semantic alignment between their conceptualizations (ontologies).

2.1. The Marketplace Model

The marketplace facilitates agent meeting and matching, besides supporting the negotiation model. In order to obtain results and feedback to improve the negotiation models, and consequently the behaviour of user agents, we simulate a series of negotiation periods, \( D = \{1, 2, \ldots, n\} \), each composed of a fixed interval of time \( T = \{0, 1, \ldots, m\} \). Furthermore, each agent has a deadline \( D_{\text{max}} \) by which to achieve its business objectives. In a particular negotiation period, each agent has an objective that specifies its intention to buy or sell a particular good or service and under what conditions. The available agents can establish their own objectives and decision rules. Moreover, they can adapt their strategies as the simulation progresses on the basis of previous successes or failures. The simulator probes the conditions and effects of market rules by simulating the participants' strategic behaviour. ISEM is flexible: the user completely defines the model he or she wants to simulate, including the number of agents and each agent's type and strategies.

2.2. The Negotiation Model and Protocol

The negotiation model used in ISEM is bilateral contracting, where buyer agents look for sellers that can provide them with the desired products under the best conditions (Figure 1).

Figure 1. Sequence of Bilateral Contracts

Negotiation starts when a buyer agent sends a request for proposal (RFP) (Figure 1).
In response, a seller agent analyses its own capabilities, current availability, and past experiences, and formulates a proposal (PP). Sellers can formulate two kinds of proposals: a proposal for the product requested or a proposal for a related product, according to the buyer's preference model. On the basis of the bilateral agreements made among market players and the lessons learned from previous bid rounds, both agents revise their strategies for the next negotiation round and update their individual knowledge modules.

The negotiation protocol of the ISEM simulator has three main actors (Figure 2):

- **Buyer (B)** is the agent that represents a consumer or a buyer coalition. Multiple Buyers normally exist in the marketplace at any instant;
- **Seller (S)** is the agent that represents a supplier. Multiple Sellers normally exist in the marketplace at any instant;
- **Market Facilitator agent (MF)**, usually one per marketplace, coordinates the market and ensures that it works correctly. MF identifies all the agents in the market, regulates negotiation, and assures that the market operates according to the established rules. Before entering the market, agents must first register with the MF agent.

2.3. The Ontology Service Model

While the use of ontologies allows e-commerce actors to describe their needs and capabilities in proprietary repositories, the use of the ontology-mapping paradigm allows transparent semantic interoperability between them. This is the technological basis for the alignment between the needs and capabilities of consumer and supplier, even when they use different ontologies.
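As a toy illustration of this idea (the vocabularies, field names and the dictionary-based "mapping" below are invented for this sketch; the paper's actual mappings are SBO documents carrying much richer semantics than simple renamings), an ontology mapping can be applied to re-express data from one actor's terms into the other's:

```python
# Toy sketch: an ontology "mapping" as a term-to-term translation table.
# Everything here is illustrative; a real mapping also carries structural
# and value transformations, not just key renamings.

BUYER_TO_SELLER = {           # terms of the buyer's ontology (O1)...
    "laptop": "notebook",     # ...mapped to terms of the seller's (O2)
    "price_eur": "cost",
    "qty": "amount",
}

def transform(record, mapping):
    """Re-express a record's keys using the target ontology's terms.
    Keys without a known bridge are kept unchanged (a gap the
    negotiation process would have to resolve)."""
    return {mapping.get(key, key): value for key, value in record.items()}

rfp = {"laptop": "15-inch", "price_eur": 800, "qty": 2}
print(transform(rfp, BUYER_TO_SELLER))  # keys now use the seller's terms
```

Both knowledge bases stay unchanged; only the exchanged message is rewritten, which is exactly the property the ontology-mapping paradigm is meant to provide.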
Based on this approach we can obtain the essential requirements to support the proposed solution, and a new system infrastructure is proposed, recognizing two new types of actors:

- **Ontology Mapping Intermediary (OM-i)** is the agent that supports the information integration process during market interoperability, typically one per marketplace;
- **Social Networks Intermediary (SN-i)** is the agent that provides trust relationship information holding among B, S and other agents that have undertaken similar experiences (e.g. a trader agent), typically one per marketplace.

These actors deploy a set of relationship types whose goal is to automate and improve the quality of the results achieved in e-commerce transactions. Figure 3 depicts the types of interactions between the marketplace supporting agents (i.e. MF, OM-i and SN-i) and the operational agents (i.e. B and S):

1. The Registration phase is initiated by the B or S agent, and allows these agents to identify themselves to the marketplace and specify their roles and services;
2. The Ontology Publication phase is the set of transactions allowing B and S to specify their ontologies to the marketplace;
3. The Mapping phase is the set of transactions driven by OM-i to align the ontologies of B and S;
4. The Transformation phase is the set of information transactions through OM-i that transforms (i.e. converts) the interaction data described in two different ontologies.

Figure 3. Marketplace's Actors and Interactions

Considering the previous descriptions, a more complete and complex protocol is now detailed, including the OM-i and SN-i agents in the system (Figure 4). The integration starts when the B agent sends a request for proposal message (ReqProposal) to the MF agent. In response, the MF sends the OM-i a request for mapping message (ReqMapping) between B's and S's ontologies.
Once OM-i receives the ReqMapping message, it starts the ontology mapping specification process with the support of other entities, including matching agents, ontology mapping repositories and SN-i. SN-i is responsible for retrieving the relevant information from ontology mapping repositories and social networks. Past similar ontology mapping experiences, undertaken by agents with trust relationships with B and S, are used by SN-i to compile the social network repository information (i.e. the SNInf message). Because the ReqSNInf is the exclusive responsibility of OM-i, both B and S are advised to perform a similar verification (possibly using another SN-i) once the ontology mapping is submitted for acceptance (i.e. ReqAcceptance(M)). Although Figure 4 represents only the acceptance scenario, a rejection scenario is also possible, in which case no further interaction occurs between B and S.

In case the mapping is accepted, MF resumes the protocol by requesting the RFP data transformation from OM-i. Using the ontology mapping document, the RFP data represented according to B's ontology is transformed into data represented according to S's ontology. The transformed data (RFP') is forwarded to S, which processes it and replies to MF. MF then requests the transformation of the proposal data (P) and forwards P' to B. B processes it and either accepts or formulates a counter-proposal (CP). As can be seen, once a mutually acceptable ontology mapping is established between B's ontology and S's ontology, all messages exchanged between B and S through MF are forwarded to OM-i for transformation. Notice that Figure 4 represents a single S in the system, but in fact multiple S's capable of replying to the request may exist in the marketplace. In that case, the protocol above is replicated for each capable S.
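Under the stated assumptions (a unidirectional, already accepted mapping per direction), the MF-coordinated transformation round-trip described above (RFP to RFP', P to P') might be sketched as follows. All names are illustrative stand-ins, not the actual platform API, and the key-renaming "transformation" is a deliberate simplification:

```python
# Sketch of the MF-coordinated exchange once a mapping is accepted.
# OM-i transforms each message between the two ontologies; here the
# "transformation" is a simple key renaming stand-in.

B_TO_S = {"price_eur": "cost"}              # agreed (illustrative) mapping B -> S
S_TO_B = {v: k for k, v in B_TO_S.items()}  # second mapping for the reverse way

def om_i_transform(message, mapping):
    """OM-i stand-in: rewrite message keys using the agreed mapping."""
    return {mapping.get(k, k): v for k, v in message.items()}

def market_facilitator(rfp, sellers):
    """Forward the buyer's RFP to every capable seller, routing each
    message through OM-i for transformation in both directions."""
    proposals = []
    for seller in sellers:
        rfp_prime = om_i_transform(rfp, B_TO_S)      # RFP -> RFP'
        p = seller(rfp_prime)                        # seller replies with P
        proposals.append(om_i_transform(p, S_TO_B))  # P -> P' for the buyer
    return proposals

# A trivial seller that quotes a cost for whatever item is requested.
seller = lambda rfp: {"cost": 750, "item": rfp.get("item")}
print(market_facilitator({"item": "laptop", "price_eur": 800}, [seller]))
```

The per-seller loop mirrors the remark that the protocol is replicated for each capable S.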
In order to decide which S's are capable of answering the request, a simple approach based on a keyword matching algorithm is taken. The B agent specifies a few keywords along with its formal request (RFP). The MF, with the aid of SN-i, matches this list against every S's publicized keyword list. If the match succeeds above a certain level, the S is classified as capable.

Figure 4. The Integration Protocol

An important goal is to maintain the identification stage of the CBB model by using ontologies; the main idea is to construct the most accurate model of the consumer's needs. Moreover, at the product brokering, buyer coalition formation, merchant brokering and negotiation CBB stages, the ontology mapping process provides the integration of the seller's and consumer's models. In fact, in every stage of the CBB model, both the SN-i and OM-i are major players in the proposed solution. Notice that the social network information and trust component of the system is orthogonal to the previous processes, as depicted in Figure 5.

Figure 5. Marketplace's Ontology-Based Services

Complementarily, the repository of relationships provided by emergent social networks will support establishing more accurate trust relationships between businesses and customers, as well as providing a better alignment (mapping) between their models. This new information is very important to feed the agents' knowledge bases and improve their strategic behaviour, which is very significant in the context of competition. In particular, the Social Network component is envisaged as a source of information for disambiguation and decision-making for the other processes, along with trust relationships between users and groups:

- The Registration process will profit from the Trust component in several ways.
For example, the S agents can better decide which services to provide in a marketplace, depending on the segment of customers traditionally found in that specific marketplace. This is achieved by the social characterization of the B agents according to the social networks they belong to. In the same sense, B agents can more accurately choose the marketplaces to register to, depending on the social network advice, based on a social characterization of the other marketplace participants (i.e. Buyers and Sellers);
- During the Ontology Publication process, agents need to decide which ontologies are advisable in that particular marketplace (e.g. simple or more detailed). The agent is able to choose the ontology that conveniently describes the semantics of its data in a certain context. In order to decide on the most convenient ontology, S agents require a social characterization of the marketplace. Similar decisions are taken by B agents. Note, however, that the agent's published ontology should not be understood as the complete representation of its internal data, but as the semantics the agent intends to exteriorize through the Ontology Publication process. As a consequence, the agent should include mechanisms allowing the internal transformation between the internal data semantics (e.g. data schema) and the external semantics (ontology), and vice-versa;
- The Ontology Mapping Specification process is typically very ambiguous, so it can potentially profit from the social characterization and social trust relationships provided by SN-i. This process is understood as a negotiation process, in which B and S try to achieve a consensus about the ontology mapping. The SN-i agent participates in this process as an information provider to the OM-i, in order to disambiguate the ontology mapping achieved through automatic mechanisms and protocols.
The rest of the paper addresses this dimension;
- The Ontology Mapping Execution process is very systematic (in accordance with the ontology mapping specification document). Yet, the resulting messages' data may be inconsistent with respect to B's and S's data repositories. In such cases, social knowledge is often required in order to decide on or correct the consistency of the data. Through the use of social relationships, SN-i is a facilitator of this process.

3. The Ontology Mapping Process

There are several ontology mapping formats, but only a few are able to thoroughly describe the semantic relations established between any two ontologies as required in the B2C and B2B contexts. SBO is one of the most thorough formats, as its building blocks semantically constrain the relevant and useful relationships between ontologies for data integration in B2B and B2C contexts.

As in any negotiation process, the ontology mapping negotiation problem is mainly characterized by the type of object to negotiate. In this case, the negotiation objects are part of the ontology mapping domain. According to SBO, several types of objects might be considered in the negotiation:

- The ontology mapping document, when the whole specification is the subject of negotiation;
- The semantic bridges, when each of the semantic bridges composing the mapping is a subject of negotiation;
- Parameters of the semantic bridges (e.g. the set of related entities).

However, the more elements are subject to negotiation, the longer and more difficult it is to achieve a consensus among agents. Notice that a coarse-grained negotiation (upon the whole mapping) is very fast, but a consensus is very hard to achieve, due to the lack of negotiation parameters. On the other hand, a fine-grained negotiation (e.g. about the semantic bridge parameters) converges more easily, but it might take too long and therefore be computationally inefficient.
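A minimal data model for the semantic bridge, the intermediate granularity discussed above, can make the later discussion concrete. This is a hypothetical simplification, not the actual SBO schema, and it keeps only the fields used in the rest of the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticBridge:
    """Hypothetical, simplified stand-in for an SBO semantic bridge.

    The bridge as a whole is the unit of negotiation: its internal
    parameters (source/target entities, transformation) are carried
    along but are not independently negotiable, per the stated
    constraints."""
    source: str        # entity in the source ontology
    target: str        # entity in the target ontology
    confidence: float  # confidence value evaluated during bridging (0..1)

sb1 = SemanticBridge("O1:price_eur", "O2:cost", confidence=0.92)
print(sb1)
```

Making the bridge immutable (`frozen=True`) reflects the constraint that agents negotiate over whole bridges, accepting or rejecting them, rather than editing their parameters.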
Another important dimension to consider is the value associated with the object of negotiation. In the ontology mapping negotiation scenario, the value of the object (i.e. the semantic bridge) is a function relating:

- The correctness of the object, i.e. the correctness of the mapping, of the semantic bridges or of their parameters;
- The pertinence of the object with respect to its envisaged application. In fact, the ontologies might be larger than is necessary for the transaction, hence the focus on the relevant parts.

Other dimensions are also relevant for the negotiation process, but in order to reduce the negotiation space, the following constraints have been stated:

- The negotiation always occurs between two honest, non-bluffing agents;
- The ontology mapping to agree on is unidirectional, which means that for a bi-directional conversation two ontology mapping negotiation processes are required. This is especially due to the fact that many semantic bridges represent non-bijective relations;
- The negotiation objects are the semantic bridges only. This means that no internal parameter of a semantic bridge is independently negotiable.

3.1. Hypothesis

The proposed negotiation process is based on the idea that each agent is able to derive the correct semantic bridges and decide which semantic bridges are required in order to interoperate with the other entity. The suggested approach aims to further exploit the multidimensional service-oriented architecture adopted in the semi-automatic semantic bridging process [5]. In this process, a confidence value is evaluated for every candidate semantic bridge. This evaluation aggregates different similarity values resulting from the analysis carried out upon different dimensions of the ontologies (e.g. lexical, structural and semantic). Different evaluation functions are applied depending on the most relevant dimensions of the ontologies and the required semantic relations, i.e.
the semantic heterogeneity arising from different ontologies requires different semantic relations. These evaluation functions are referred to as confidence functions. Yet, an agent might not know the other's evaluation of the same semantic relation. Based on these confidence functions, semantic bridges are categorized as:

- Rejected semantic bridges (i.e. those whose confidence value is smaller than the rejection threshold \(t_r\));
- Accepted semantic bridges (i.e. those whose confidence value is greater than or equal to the rejection threshold \(t_r\)).

As referred to previously, one of the major problems faced in negotiation scenarios relates to the difficulty in determining and supplying convergence mechanisms to the agents. Negotiation suggests the relaxation of the goals to be achieved by one (or both) agents, so that both achieve an acceptable, and as good as possible, contract. This introduces two distinct concepts:

- The goals of the negotiation (the features of the contract to achieve);
- The relaxation mechanisms.

Mathematically, these concepts might be represented respectively as:

- A utility function \(u\), representing the overall goal of the negotiation of the semantic bridge, in which each parameter of the function is a sub-goal of the negotiation: \(u(p_1, p_2, \ldots, p_n)\);
- A meta-utility function \(U\), defining the conditions in which the parameters may vary: \(U(p_1, p_2, \ldots, p_n)\).

It is fundamental to identify the ontology mapping concepts that are able to play these roles in the negotiation process.

3.2. Negotiation Phase

The confidence evaluation function applied in the generation of the ontology mapping is a good candidate to play the role of the utility function \(u\). This function plays a major role in the negotiation process, and reusing it reduces the efforts of parameterization and customization, two hard, time-consuming and human-demanding tasks of the ontology mapping process. However, it is our proposal to distinguish the semantic bridging phase from the negotiation phase, i.e.
both phases occur consecutively. First, each agent performs its own semantic bridging process, generating a valid and meaningful mapping document. After that, the set of semantic bridges composing the mapping is subject to negotiation between both agents. The confidence value evaluated for each semantic bridge \((c_{sb})\) is then used as the negotiation value of the semantic bridge, corresponding to the agent's confidence in proposing the semantic bridge to the other agent. Several situations might occur when negotiating a specific semantic bridge:

- Both agents propose the semantic bridge;
- Only one of the agents proposes the semantic bridge.

If the latter situation arises, one of two things happens:

- The other agent relaxes the confidence value and accepts the semantic bridge;
- The other agent is not able to relax the confidence value and rejects the semantic bridge.

If the latter occurs, again one of two things happens:

- The agent proposing the semantic bridge cannot accept the rejection. In this case, the proposed semantic bridge is considered mandatory;
- The agent proposing the semantic bridge can accept the rejection (i.e. the semantic bridge is not mandatory).

Since the goal of the process is to negotiate, it is important to provide mechanisms so that the agents are able to revise their proposals about the semantic bridges, relaxing their sub-goals (i.e. individual semantic bridges) in favor of a larger goal, i.e. a valid, agreed mapping document. In this sense, an agent should not decide a priori on the acceptance/rejection of a semantic bridge. Instead, it should admit that certain semantic bridges are neither accepted nor rejected: they are negotiable. Confidence categories account for the pertinence of the semantic bridge to the mapping and to the interoperability according to the agent.
As a consequence, the rejection threshold borderline \((t_r)\) defined for the generation of the agent's mapping is insufficient and should be replaced by a multi-threshold approach:

- **Mandatory threshold** \((t_m)\), which determines the utility function value above which it is fundamental that the semantic bridge is accepted by the other agent;
- **Proposal threshold** \((t_p)\), above which the semantic bridge is proposed to the other agent;
- **Negotiation threshold** \((t_n)\), above which the semantic bridge is negotiable.

Therefore, four distinct categories of semantic bridges are defined according to the confidence value and the previously identified thresholds (Figure 6). Both Buyer and Seller classify their semantic bridges according to these categories (i.e. \(SB_x^R\), \(SB_x^N\), \(SB_x^P\), \(SB_x^M\), with \(x \in \{Buyer, Seller\}\)).

<table>
<thead>
<tr>
<th>Rejected (\(SB^R\))</th>
<th>Negotiable (\(SB^N\))</th>
<th>Proposed (\(SB^P\))</th>
<th>Mandatory (\(SB^M\))</th>
</tr>
</thead>
<tbody>
<tr>
<td>\([0, t_n)\)</td>
<td>\([t_n, t_p)\)</td>
<td>\([t_p, t_m)\)</td>
<td>\([t_m, 1]\)</td>
</tr>
</tbody>
</table>

**Figure 6. Semantic bridges classified according to the utility function \(u(p_1, p_2, \ldots, p_n)\) and the thresholds**

Furthermore, it is necessary to provide mechanisms so that the agent is able to revise its perception of the negotiable semantic bridges. These mechanisms should be embodied in the meta-utility function, as defined in the hypothesis, but not yet contemplated in the applied ontology mapping process [5]. The meta-utility function \((U)\) is responsible for the definition of:

- The variation possibilities of the parameters;
- The priorities over parameter variations;
- The conditions under which the variation may take place.

Through these elements, an updated confidence value \((c_u)\) is evaluated for the negotiable semantic bridges that were proposed by the other agent. If \(c_u \geq t_p\), the negotiable semantic bridge is categorized as tentatively agreed \((SB^T)\).
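The multi-threshold classification above can be sketched directly; the threshold values below are invented for illustration, not taken from the paper:

```python
def categorize(confidence, t_n=0.4, t_p=0.6, t_m=0.85):
    """Classify a semantic bridge by its confidence value against the
    negotiation (t_n), proposal (t_p) and mandatory (t_m) thresholds.
    Threshold defaults are illustrative only."""
    if confidence >= t_m:
        return "mandatory"    # the other agent must accept it
    if confidence >= t_p:
        return "proposed"     # proposed to the other agent
    if confidence >= t_n:
        return "negotiable"   # neither accepted nor rejected a priori
    return "rejected"

for c in (0.9, 0.7, 0.5, 0.2):
    print(c, categorize(c))
```

Each agent applies this classification to its own bridges before the exchange, producing the four per-agent sets of Figure 6.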
Since the meta-utility function determines priorities and conditions for the variation of the parameters, it is possible that, for some variations, \(c_u < t_p\). It is therefore necessary to iterate across the different variation possibilities, following the defined priorities and conditions. If no variation yields \(c_u \geq t_p\), the semantic bridge is not re-categorized and therefore remains rejected. As a result of this process, three groups of semantic bridges are generated:

- Accepted Semantic Bridges \((SB_a)\): those that were proposed by one of the agents and accepted by the other without any relaxation;
- Tentative Semantic Bridges \((SB_t)\): those that were proposed by one of the agents and whose confidence value was successfully relaxed during negotiation. Tentatively agreed semantic bridges are subject to the definitive decision phase;
- Backup Semantic Bridges \((SB_b)\): those that were negotiable for both agents, or negotiable for one of the agents but whose relaxation was not successful.

The effort made by the agent to re-categorize a semantic bridge varies according to the priorities and values of the parameters. The meta-utility function is also responsible for the evaluation of this effort, named the convergence effort \((e_{sb})\). In its simplest form, the convergence effort may amount to \(e_{sb} = c_u - c_{sb}\), but it can be arbitrarily complex depending on the several parameters of the (meta-)utility function(s).

3.3. Decision Phase

In order to ensure that the proposed agreement is advantageous for both agents, it is necessary to decide whether the agreement is globally advantageous and not only locally advantageous. The problem arises due to the convergence efforts made during the negotiation process. For every \(sb\) re-categorized into \(SB_t\), a convergence effort \(e_{sb}\) is evaluated by the meta-utility function. Convergence efforts should be considered inconvenient to the agent and treated as a loss.
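One way to sketch the relaxation step for a negotiable bridge proposed by the other agent is shown below. The list of variation candidates is a hypothetical stand-in for the meta-utility function \(U\), and the effort formula \(e_{sb} = c_u - c_{sb}\) is the "simplest form" assumption, not the only possibility:

```python
def try_relax(c_sb, variations, t_p=0.6):
    """Iterate over parameter-variation candidates in priority order,
    looking for an updated confidence c_u >= t_p.

    `variations` stands in for the meta-utility function U: each entry
    is the confidence the bridge would receive under one admissible
    variation of its parameters, listed by decreasing priority.

    Returns (c_u, effort) on success, where effort = c_u - c_sb is the
    assumed simplest-form convergence effort, or None if no variation
    reaches the proposal threshold (the bridge stays rejected)."""
    for c_u in variations:
        if c_u >= t_p:
            return c_u, c_u - c_sb   # tentatively agreed, with its effort
    return None

print(try_relax(0.5, [0.55, 0.65, 0.7]))  # 0.65 is the first admissible c_u
print(try_relax(0.5, [0.52, 0.58]))       # no variation reaches t_p
```

Successful calls populate the tentative set, and the efforts they return feed the loss side of the decision-phase balance.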
On the other hand, the agreement upon a re-categorized semantic bridge also provides some profit for the agent, denoted by its confidence value \((c_{sb})\). In that sense, the balance between profits and losses is a function such that:

\[ \text{balance} = \sum_{sb \in SB_t} c_{sb} - \sum_{sb \in SB_t} e_{sb} \]

Depending on the balance value, the entity decides to agree on the negotiation agreement or to propose a revision of the mapping.

3.4. Completion Phase

While \(SB_a \cup SB_t\) is the minimum agreed semantic bridge set, it does not necessarily correspond to a valid ontology mapping. This is primarily due to the semantic constraints holding between semantic bridges. For example, a semantic bridge between properties (i.e. a property bridge) should be enclosed in the scope of a semantic bridge that relates the domain concepts (i.e. a concept bridge) of those properties: because a property value exists in the context of a concept instance, a property bridge only makes sense in the scope of a concept bridge. This leads to the necessity of a completion phase, in which the semantic bridges logically required by those in \(SB_a \cup SB_t\) are identified and negotiated, with candidates taken from the remaining bridges. For that, a flooding algorithm is applied, emphasizing the similarity/dissimilarity of neighboring semantic bridges. Through this, a high-confidence semantic bridge is able to positively affect a "near" semantic bridge, and a low-confidence semantic bridge will negatively affect a "near" semantic bridge. Yet, because the completion phase potentially affects the negotiation balance, the cyclic decision-completion process runs until no change occurs in either phase.

4. An Ontology Mapping Negotiation Example

Consider the e-commerce scenario where the Buyer uses ontology O1 and the Seller uses ontology O2. The correct ontology mapping between O1 and O2 is depicted in Figure 7.

![Figure 7. Perfect ontology mapping](image)

Buyer and Seller internally generate their ontology mappings, further classifying the semantic bridges. The first negotiation phase deals with synchronizing these sets in the MF. As a result, the semantic bridges are grouped according to the accepted, tentative and backup categories (Figure 8). Notice that sb7 was proposed by the Seller but rejected by the Buyer. During the decision phase, the OM-i supports Buyer and Seller in achieving a consensus about the tentative semantic bridges. The effort for sb3 is acceptable for the Seller, so the process moves to the completion phase with \(SB_c = \{sb1, sb2, sb4, sb3\}\). OM-i detects that sb4 requires sb5, so it proposes the re-classification of sb5 to \(SB_t\), so that it can be decided by both agents. At this point \(SB_b = \{sb6\}\). Because this phase changed the sets, the decision phase takes place once again. Now, the agents have to decide whether \(SB_c\) is acceptable for both. In fact, the resulting balance is even closer to zero than the previous one, and therefore \(SB_c = \{sb1, sb2, sb4, sb3, sb5\}\). According to \(SB_c\), no further completion changes are required and the process ends.

<table>
<thead>
<tr>
<th>Category</th>
<th>Semantic bridges (Buyer / Market Facilitator / Seller)</th>
</tr>
</thead>
<tbody>
<tr><td>Accepted \(SB_a\)</td><td>\(\{sb1, sb2, sb4\}\)</td></tr>
<tr><td>Tentative \(SB_t\)</td><td>\(\{sb3\}\)</td></tr>
<tr><td>Backup \(SB_b\)</td><td>\(\{sb5, sb6\}\)</td></tr>
<tr><td>Rejected</td><td>\(\{sb7\}\)</td></tr>
</tbody>
</table>

**Figure 8. Earlier negotiation state**

5. Conclusions and Future Work

A major challenge for communicating software agents is to resolve the problem of interoperability. Through the use of a common ontology it is possible to have consistent and compatible communication.
However, we maintain that each actor involved in the marketplace must be able to independently describe its universe of discourse, while the market has the responsibility of providing a technological framework that promotes the semantic integration between the parties through the use of ontology mapping. In addition, we think that the solution to overcome these problems has to take into consideration the already existing technological support, namely a well-proven e-commerce platform where agents with strategic behaviour represent consumers and suppliers. The ontology mapping negotiation mechanism proposed in this paper is our effort in that direction. This approach is being applied and tested in our ISEM+MAFRA platform, as part of a larger effort to provide support for overall e-commerce interoperability. Earlier experiences show that this mechanism provides a considerable mitigation of interoperability risks and substantially reduces the human participation in the interoperability setup phase.

The hardest part of the described ontology mapping negotiation process lies in the specification, configuration, adaptation and evolution of the utility and meta-utility functions. These tasks are very complex and recurrent, as they adapt the negotiation to past experiences, both of the agent itself and of other "friend" agents. This is where social relationships and reports of past experiences are useful. In fact, as occurs in human-driven negotiations of physical goods, experience, reputation and trust play a fundamental role in the process. With the emergence of the social web in general, and of social networks in particular, users have frenetically started producing classifications and reports of their experiences. It is our conviction that the established infrastructure is a starting point for the automation and quality improvement of e-commerce negotiation in general and of ontology mapping negotiation in particular.
Trust providers might emerge from this infrastructure, collecting agents' experiences and providing their insights in an aggregated, concise and useful format to the market facilitator and to the buyer and seller agents. The Market Facilitator (MF) could play this role, as it manages, and therefore collects and evaluates, the business results. MF is then called to actively participate in the negotiation process, integrating information from different sources, including social networks and tagging repositories. Furthermore, MF must collect (in a legal way) relevant information regarding the negotiation, including the established ontology mapping contracts, and relate it to the success measures of the interaction.

ACKNOWLEDGEMENTS

The authors would like to acknowledge FCT, FEDER, POCTI, POSI, POCI and POSC for their support to R&D Projects and the GECAD Unit.

6. References
An Intelligent Web Service Workflow: A Petri Net Based Approach

E. Hossny, S. AbdElrahman and A. Badr*

Department of Computer Science, Faculty of Computers and Information, Cairo University, Egypt

Abstract: Fuzzy Petri Net for Web Service Composition (FPN4WSC) aims to compose individual web services into a more complex one. It is a workflow model that hybridizes Petri nets, SHOP2, and fuzzy logic. Petri nets allow the user to specify his request as a workflow. SHOP2 is used as an Artificial Intelligence (AI) planning system to produce a plan for the user request. However, SHOP2 fails to capture uncertainty, so fuzzy logic is used as a refinement engine to obtain the best solution based on the user's preferences. FPN4WSC therefore presents a simple, graphical, intelligent, and automatic web service composition model that is extensible to any workflow-based domain. As a case study, it is applied to the travel reservation domain, where the user specifies his preferences and the fuzzy engine tries to find the best solution depending on them.

Keywords: Web Services, Fuzzy Petri Net, SHOP2, Web Service Composition.

INTRODUCTION

Nowadays, the internet has become a service-oriented model instead of a repository of information, where many companies put their core business on the web as a collection of web services [1-3]. A web service [1, 4] is a self-contained, self-describing, modular application that can be described, advertised, discovered, and executed through the web. It is identified by a URL, and its interfaces and bindings are described using XML. The emergence of web services has led to increased interest in composing individual web services into more complex ones. The ability to efficiently and effectively assemble autonomous and heterogeneous web services on the internet at runtime is a critical step towards the development of web service applications.
Indeed, when no single web service can fulfill a user request, it should be possible to integrate a set of existing web services so that together they satisfy the required functionality [5]. Web service composition has therefore been an active area of research in the past few years, and the study of workflow models is one of its most important parts. Several approaches have been proposed to investigate and model the web service composition process, but they do not provide a formal framework for modeling automated, complex compositions. Moreover, current technologies such as Universal Description, Discovery, and Integration (UDDI) [6], the Web Service Description Language (WSDL) [7], and the Simple Object Access Protocol (SOAP) [8] do not realize complex web service integrations, so they give only limited support for composition [1]. The proposed system is a fuzzy-Petri-net-based model for composing web services, called Fuzzy Petri Net for Web Service Composition (FPN4WSC). Web services are modeled as Petri nets by assigning transitions to services and places to states. The model uses SHOP2 as an AI planner to generate a plan for the user request; since SHOP2 cannot capture uncertainty, fuzzy logic is used as a refinement engine to select the best solution based on the user's preferences.

BACKGROUND

Petri Net

A Petri Net (PN) [9] is a directed, connected, bipartite graph, invented in 1962 by Carl Adam Petri. A Petri net has four components: place nodes, transition nodes, directed arcs connecting places with transitions, and tokens occupying places. Petri nets have a well-defined mathematical foundation and an easy-to-understand graphical notation. Their graphical nature makes them self-documenting and a powerful design tool that facilitates visual communication between the people engaged in the design [10].
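The four components just listed (places, transitions, arcs, and tokens) can be captured in a small data structure. The following is an illustrative Python sketch; the class, the transition name, and the example net are invented here and are not part of the FPN4WSC implementation, which is written in Java.

```python
class PetriNet:
    def __init__(self, places, transitions, arcs, marking):
        self.places = set(places)            # P: finite set of places
        self.transitions = set(transitions)  # T: finite set of transitions
        self.arcs = set(arcs)                # F: arcs in (P x T) union (T x P)
        self.marking = dict(marking)         # tokens currently held by each place

    def enabled(self, t):
        # a transition may fire only if every input place holds a token
        return all(self.marking.get(p, 0) > 0
                   for (p, x) in self.arcs if x == t)

    def fire(self, t):
        # consume one token from each input place, produce one in each output place
        assert self.enabled(t), f"{t} is not enabled"
        for (p, x) in self.arcs:
            if x == t:
                self.marking[p] -= 1
        for (x, p) in self.arcs:
            if x == t:
                self.marking[p] = self.marking.get(p, 0) + 1

# a tiny net: one service transition moving a token from state p0 to p1
net = PetriNet(places={"p0", "p1"},
               transitions={"searchFlight"},
               arcs={("p0", "searchFlight"), ("searchFlight", "p1")},
               marking={"p0": 1, "p1": 0})
net.fire("searchFlight")
print(net.marking)  # {'p0': 0, 'p1': 1}
```

Firing a transition here corresponds to executing one web service: the token moves from the state before the service to the state after it.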
Definition 1 (Petri Net) A Petri net is an algebraic structure \((P, T, F, M_0)\) where:
1. \(P\) is a finite set of places, \(T\) is a finite set of transitions, and \(P \cap T = \emptyset\);
2. \(F \subseteq (P \times T) \cup (T \times P)\) is a set of directed arcs from \(P\) to \(T\) and from \(T\) to \(P\);
3. \(M_0\) is the initial marking.

SHOP2

SHOP2 [11] is a domain-independent Hierarchical Task Network (HTN) planning system. HTN planning is an AI planning methodology that creates plans by task decomposition: the planning system decomposes tasks into smaller and smaller subtasks until primitive tasks are found that can be performed directly. This makes HTN planning a good candidate for automatic web service composition. One difference between SHOP2 and other HTN planning systems is that SHOP2 plans for tasks in the same order in which they will later be executed, which makes it possible to know the current state of the world at each step of the planning process. While SHOP2 allows tasks to be sequentially ordered, it has no mechanism to handle the control constructs related to concurrency, namely parallel split, synchronization, and exclusive choice. The proposed model overcomes this problem by adding a new keyword (:concurrent) to handle concurrency. To plan in a given domain, SHOP2 needs knowledge about that domain: its knowledge base contains a set of operators and methods. Each operator describes what must be done to execute some primitive task, and each method tells how to decompose some compound task into partially ordered subtasks. There is an open-source Java implementation of SHOP2 called JSHOP2 [12]. The inputs to JSHOP2 are a planning domain and a planning problem; the planning domain is composed of operators and methods.
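The method-driven decomposition that SHOP2 performs can be illustrated with a minimal sketch, written here in Python rather than SHOP2's Lisp syntax. The task names and method table are invented for illustration: compound tasks are replaced, in execution order, by their subtasks until only primitive tasks (marked with a leading "!") remain.

```python
# illustrative method table: compound task -> ordered subtasks
methods = {
    "BookTrip": ["SearchFlight", "ReserveHotel"],
    "SearchFlight": ["!searchFlight"],
    "ReserveHotel": ["!searchHotel", "!reserveHotel"],
}

def decompose(task):
    if task.startswith("!"):          # primitive: executable directly
        return [task]
    plan = []
    for sub in methods[task]:         # decompose in execution order,
        plan.extend(decompose(sub))   # as SHOP2 does
    return plan

print(decompose("BookTrip"))
# ['!searchFlight', '!searchHotel', '!reserveHotel']
```

A real SHOP2 method also carries preconditions evaluated against the current state; they are omitted here to keep the decomposition idea visible.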
The planning problem is composed of an initial state and a list of tasks to be performed. The planning domain and problem are written in the Lisp language [13].

RELATED RESEARCH

Several approaches have investigated the process of web service composition, but none of them offers a formal framework for modeling the automated, complex composition process. The proposed model addresses a particular subset of this problem with FPN4WSC, a fuzzy-Petri-net-based workflow model for web service composition. This section briefly describes the approaches most closely related to our work. Web service composition requires complex functionalities such as transactions, workflow, negotiation, management, and security, which are not provided by the current technologies based on WSDL, SOAP, and UDDI. Several efforts aim at providing such functionalities. For example, the Business Process Execution Language for Web Services (BPEL4WS) [14] is positioned to become the basis of a standard for web service composition, but it is a complex procedural language that is hard to implement and deploy [2]: its XML representation is very verbose, it has many constructs, and it supports orchestration only, i.e., the execution of a manually constructed composition. Proposals such as OWL-S [15] aim at realizing the semantic web concept, but OWL-S is also a complex procedural language for web service composition. In the workflow community, much attention has been paid to making systems adaptive and to separating the interface of a process from its implementation. eFlow [16] is one of the recent workflow projects focusing on loosely coupled processes; it supports adaptive and dynamic service composition but lacks a formal model to specify and verify the composition [2].
Petri nets have been widely used as a tool for workflow modeling and analysis since their introduction by C.A. Petri in 1962. Hamadi [2] proposed a Petri-net-based algebra to model control flows as a necessary constituent of a reliable web service composition process. The model can capture the semantics of complex compositions, enables declarative composition of web services, and aims to support correct compositions by ensuring the absence of deadlocks and livelocks; however, it lacks management of time and resources. The Service/Resource Net (SRN) [1] is an extended Petri-net-based model for web service composition with new elements such as time, a resource taxonomy, and conditions. It proposed the Web Service Semigroup (WSSG) and meta-services based on group theory as a theoretical foundation for the SRN service taxonomy; this model is closely related to that of Hamadi [2]. Bing and Huaping [10] proposed a Petri-net-based algebra to capture the semantics of complex web service composition, in which web services are represented as Petri nets with transitions assigned to methods and places assigned to states. The model uses the Petri net to test a set of non-functional properties such as reachability, safety, and freedom from deadlock, but it does not provide a formal framework for modeling automated, complex composition. Evren et al. [11] integrated the SHOP2 planning system with DAML-S web service descriptions to compose web services automatically. The authors used DAML-S for semantic markup of web services, gave a detailed description of how to translate DAML-S process definitions into a SHOP2 domain, and implemented a converter from a SHOP2 plan to the DAML-S format, which can be executed directly by a DAML-S executor. However, that approach cannot handle the control constructs related to concurrency. Fu et al.
[17] proposed an optimized web service composition algorithm based on fuzzy Petri nets and the semantic web. In this algorithm, web services are described by fuzzy Horn clauses, and a Quality of Service (QoS)-oriented composition model is built with a fuzzy Petri net.

FPN4WSC MODEL

The process of web service composition can be regarded as a workflow, and a workflow model is the precondition of a workflow. Petri nets can be used as a practical method and tool to model workflows; their basic definitions and concepts can be found in [9]. The relation between a Petri net and a workflow net is defined as follows [1]:

**Definition 2** A Petri net PN is a workflow net iff:
1. PN has two special places: a source place \(i\) with \(\bullet i = \emptyset\) and a sink place \(o\) with \(o\bullet = \emptyset\);
2. if a new transition \(t\) is added to connect place \(o\) with place \(i\), i.e., \(\bullet t = \{o\}\) and \(t\bullet = \{i\}\), the resulting net is strongly connected.

To describe the workflow of web service composition, a new model is proposed as a hybridization of Petri nets, SHOP2, and fuzzy logic. The Petri net helps the user draw his request as a workflow, and SHOP2 serves as an AI planning system to produce a plan for the request. Since SHOP2 fails to capture uncertainty, fuzzy logic acts as a refinement engine to select the best solution based on the user's preferences. This model is called Fuzzy Petri Net for Web Service Composition (FPN4WSC). More precisely, the process of automatic service composition includes the following phases [5].

**Presentation of Service**

The service providers publish their atomic services in a global service registry such as UDDI. The essential attributes describing a web service include its inputs, outputs, and exceptions. In the proposed model, services are represented using WSDL.

**Translation of the Languages**

Most service composition systems distinguish between external and internal service specification languages.
External languages are used by service requesters to express what they want in a relatively easy manner. They usually differ from the internal languages used by the composition process generator, because the generator requires more formal and precise languages, for example, logic programming languages and SHOP2 [18].

**Generation of Composition Process Model**

The service requester expresses the requirement in a service specification language, and a process generator then tries to satisfy it by composing the atomic services advertised by the service providers. The process generator usually takes the functionalities of services as input and outputs a plan describing the composite service.

**Evaluation of Composite Service**

It is quite common that many services have the same or similar functionalities, so the planner might generate more than one composite service fulfilling the requirement. In that case, the composite services are evaluated by their overall utilities using information from the non-functional attributes. The most commonly used method is a utility function: the requester assigns a weight to each non-functional attribute, and the best composite service is the one ranked highest.

**Process Execution Engine**

Once a single composite process is selected, the composite service is ready to be executed. Executing a composite web service can be thought of as a sequence of message passing according to the plan. The dataflow of the composite service is defined by the actions in which the output data of an earlier executed service is transferred to the input of a later executed atomic service.
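The dataflow just described, where outputs of earlier services feed the inputs of later ones, can be sketched in a few lines. The two service functions below are stand-ins invented for illustration, not real web service invocations.

```python
def search_flight(data):
    # stand-in for a real SearchFlight service call
    return {"flight": f"{data['origin']}->{data['destination']}"}

def reserve_hotel(data):
    # stand-in for a real ReserveHotel service call
    return {"hotel": f"room in {data['destination']}"}

def execute(plan, data):
    for service in plan:
        # merge each service's output into the data seen by later services
        data = {**data, **service(data)}
    return data

result = execute([search_flight, reserve_hotel],
                 {"origin": "Cairo", "destination": "Paris"})
print(result["flight"], "/", result["hotel"])  # Cairo->Paris / room in Paris
```

Each merge step models one message pass: the accumulated dictionary plays the role of the messages exchanged between consecutive services in the plan.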
**RESEARCH CONTRIBUTION**

A hybrid system is introduced to achieve simple, graphical, intelligent, and automatic web service composition. Its main contributions are:
- The user's preferences are satisfied, with uncertainty values for each preference, through the automatic generation of suitable plans.
- The proposed system is applicable to any workflow-based domain: the user can obtain the WSDL descriptions for a set of web services and add them to the system.
- The limitations of each individual component are overcome. For example, a Petri net can only describe workflows and SHOP2 can only generate plans; combining these two components with fuzzy logic yields an intelligent, graphical, automatic web service composition model.

**SYSTEM DESIGN**

FPN4WSC can be modeled as a client-server system, in which the client asks the server to execute a set of web services represented as a Petri net workflow and the server tries to satisfy the request and provide the best solution. The architecture of the proposed model is depicted in Fig. (1). The composition system has two types of participants, service providers and service requesters: providers publish their services for use, and requesters use the services offered by the providers. The composition system also contains the following components: WoPeD (Workflow Petri net Designer), a translator, JSHOP2, a plan generator, a service repository, a discovery engine, an evaluator (fuzzy engine), and an execution engine. A full description of each component is given in the following subsections. Algorithm 1 specifies how all these components are integrated with each other.

Algorithm 1 System Components Integration
1. The service requester specifies his request (i.e., a set of web services to be composed) using WoPeD.
2. WoPeD generates a Petri net workflow that represents the user request.
3.
The translator component (Algorithm 2) parses the output of WoPeD (i.e., a Petri net workflow) into a Lisp problem, which is passed as input to SHOP2.
4. SHOP2 executes the Lisp problem and generates a plan represented as Java code.
5. The plan generator component (Algorithm 3) executes the Java code of the plan and produces a plan, i.e., a list of tasks (web services) to be executed.
6. An input form is displayed to the service requester asking for a set of required data and the user's preferences.
7. The discovery component (Algorithm 4) checks whether the required web services and their inputs are available.
8. If the discovery component fails in the previous step, a message is displayed telling the user that the required services are not available.
9. Otherwise, the discovery component sends the user's preferences to the fuzzy engine.
10. The fuzzy engine is used as a refinement engine to select the best solution based on a set of non-functional attributes, such as uncertainty constraints, represented through the user's preferences.
11. The WSDL2OWLS [19] converter converts the WSDL description of each web service into its equivalent OWL-S.
12. Finally, the process execution engine (Algorithm 5) executes the OWL-S of the required web services according to the plan.

A full description of the proposed system components is given in the following subsections.

WoPeD (Workflow Petri Net Designer)

WoPeD [20] is an open-source Java tool for designing Petri net workflows. Its friendly graphical user interface lets users express what they want in a relatively easy manner by dragging and dropping elements, so the user can specify his request as a set of places and transitions.
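The control flow of Algorithm 1 can be sketched end to end. Every function below is a stub invented here to stand in for the real WoPeD, JSHOP2, discovery, and execution components, so only the wiring mirrors the algorithm.

```python
def translate(workflow):
    # step 3: WoPeD Petri net -> Lisp problem (stub)
    return {"tasks": list(workflow)}

def plan(problem):
    # steps 4-5: JSHOP2 planning plus plan generation (stub)
    return problem["tasks"]

def discover(services, registry):
    # step 7: check that every required service is available
    return all(s in registry for s in services)

def run_plan(services):
    # step 12: process execution engine (stub)
    return [f"ran {s}" for s in services]

registry = {"SearchFlight", "SearchHotel", "ReserveHotel"}
workflow = ["SearchFlight", "SearchHotel", "ReserveHotel"]

tasks = plan(translate(workflow))
if discover(tasks, registry):
    print(run_plan(tasks))
else:
    print("required services are not available")  # step 8
```

The fuzzy refinement of step 10 is omitted from this sketch; it would sit between discovery and execution, narrowing the plan to the user's preferred alternative.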
The places represent states (i.e., the jump from one web service to another), and the transitions represent the web services. For example, WoPeD can be used to draw two concurrent web services, SearchFlight and SearchHotel, followed by one sequential service, ReserveHotel. The Petri net for this example is shown in Fig. (2).

JSHOP2

As mentioned above, JSHOP2 [12] is an open-source Java planner that orders tasks sequentially and has no mechanism to handle the control constructs related to concurrency. The proposed model overcomes this problem by adding a new keyword (:concurrent) to JSHOP2 so that tasks can be executed concurrently. The inputs to JSHOP2 are planning domain and problem files, written in the Lisp language.

Fig. (2). Petri net example designed by WoPeD.

A sample of the planning domain for the Petri net of Fig. (2) is depicted as follows:

**Lisp code for the planning domain:**

```lisp
(defdomain webservices (
  ;; primitive operators: empty precondition, delete, and add lists
  (:operator (!searchFlight ?from ?to) () () ())
  (:operator (!reserveFlight ?from ?to) () () ())
  (:operator (!searchHotel ?destination) () () ())
  ;; methods decompose each compound task into its primitive operator
  (:method (SearchFlight ?from ?to) () ((!searchFlight ?from ?to)))
  (:method (ReserveFlight ?from ?to) () ((!reserveFlight ?from ?to)))
  (:method (SearchHotel ?destination) () ((!searchHotel ?destination)))
))
```

**Translation of the Languages**

The language used by JSHOP2 differs from the language WoPeD uses to represent Petri nets: WoPeD generates a Petri net workflow, while JSHOP2 needs the problem to be expressed in Lisp. A translation component between the WoPeD language and the internal Lisp language was therefore developed; Algorithm 2 specifies its functionality.

**Algorithm 2 Translator Component**
1. Traverse the paths of the Petri net and save them as a vector, where each element contains a path and its type (sequential or concurrent).
2.
Translate the paths vector into Lisp code according to the Lisp syntax, such that sequential tasks are written as they are, between parentheses, and concurrent tasks are preceded by the keyword :concurrent.
3. Finally, the generated Lisp code represents the problem and is passed as input to SHOP2.

**Generation of Composition Process Model**

JSHOP2 outputs Java code representing the problem and the domain. This Java code is passed to the plan generator component, which generates suitable plans for the composed web services; Algorithm 3 describes its functionality.

**Algorithm 3 Plan Generator Component**
1. Compile the Java code of the problem and domain; this compilation produces executable code for the problem.
2. Run the executable code of the problem and save the plan to a text file.
3. Parse the plan text file and determine the concurrent and sequential tasks.
4. Finally, save the parsed data as a vector, where each element represents a web service name.

**Discovery**

A database stores the web services available in the current domain. The discovery component detects whether the required web services and their inputs are available; Algorithm 4 shows how it works.

**Algorithm 4 Discovery Component**
1. Create an SQL statement to search the database for each web service with the given data.
2. If the previous step fails, a message is displayed telling the user that the required services are not available.
3. Otherwise, the discovery component sends the user's preferences to the fuzzy engine to obtain the best solution for this user.

**Evaluation of Composite Service**

Evaluation is performed by the fuzzy engine, which contains a set of fuzzy rules and linguistic variables. In the proposed model, the fuzzy rules are static and must be defined for the given domain.
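The translation step of Algorithm 2 can be sketched as follows. The exact problem wrapper emitted here is an assumption for illustration, not JSHOP2's verbatim problem format; only the rule that concurrent paths are prefixed with :concurrent comes from the text.

```python
def to_lisp(paths):
    # paths: vector of (type, task-list) pairs, as produced by step 1
    parts = []
    for kind, tasks in paths:
        body = " ".join(f"({t})" for t in tasks)
        if kind == "concurrent":
            # concurrent paths are preceded by the added :concurrent keyword
            parts.append(f"(:concurrent {body})")
        else:
            # sequential paths are written as they are, between parentheses
            parts.append(body)
    return "(" + " ".join(parts) + ")"

paths = [
    ("concurrent", ["SearchFlight", "SearchHotel"]),
    ("sequential", ["ReserveHotel"]),
]
print(to_lisp(paths))
# ((:concurrent (SearchFlight) (SearchHotel)) (ReserveHotel))
```

For the net of Fig. (2), this yields a problem in which SearchFlight and SearchHotel may run in parallel, followed sequentially by ReserveHotel.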
All fuzzy evaluations are based on rules in a symbolic representation. The format of a symbolic fuzzy rule is [21]:

\[ \text{if } LV_1 \text{ is } MF_1 \; \langle \text{and/or} \rangle \; LV_2 \text{ is } MF_2 \ldots \text{then } LV_N \text{ is } MF_N \]

This format is used to define the static fuzzy rules, which are then loaded into the fuzzy engine. The user provides the system with a set of preferences, and the fuzzy engine runs the rules to find the best solution for them. Since a web service involves a set of uncertainty values, the user specifies at least two preferences and marks the status of each as rigid or soft. The fuzzy engine first runs to satisfy the first preference, then runs again to satisfy the second, and finally filters the results based on the status of the two preferences to obtain the best solution; if the preferences contradict each other, the engine satisfies the rigid one. For example, suppose the first preference is "weight is large" with status rigid, and the second is "budget is large" with status soft. Because the weight is large, the user cannot travel by plane; because the budget is large, the user can take the first class of a vehicle. The final output of the fuzzy engine is therefore "You can take a bus and the class of service is first".

**Process Execution Engine**

The plan of composed services is run by the process execution engine. The required services must be described in OWL-S and are given as inputs to the engine; Algorithm 5 specifies how it works.

**Algorithm 5 Process Execution Engine**
1. Traverse the plan vector and get the web service name in each element.
2.
Load the OWL-S description of the current web service, such that each concurrent web service runs in a separate thread and the sequential web services run one after another in the same thread.

**IMPLEMENTATION**

The proposed model is implemented mainly in the Java language using the Eclipse Integrated Development Environment (IDE). Since Java is platform independent, the system can be used on any platform. The end user can run the system in two ways, as a standalone Java application or as a Java applet, but in both cases he must specify his request as a Petri net workflow using WoPeD's simple graphical user interface. The three main components used in the proposed system, WoPeD, the fuzzy engine, and JSHOP2, are open-source Java code, available from [20-22], and some modifications were added to enhance each of them:

**JSHOP2**

Since JSHOP2 has no mechanism to handle the control constructs related to concurrency, the proposed model overcame this problem by adding a new keyword (:concurrent).

A large number of approaches have been proposed to tackle the problem of web service composition, most based on either workflow or AI planning techniques; the proposed model combines both. Despite all these efforts, establishing web service composition has largely been an ad hoc, time-consuming process that is beyond human capability to handle entirely manually, because the web service environment is highly complex and it is not feasible to generate everything automatically [5]. Table 1 summarizes a comparison between the proposed model and some current web service composition approaches, based on features of particular importance for developing composite services [4, 23]:
- Service connectivity: how the framework connects to a service and reasons about its inputs and outputs; all composition approaches specify this.
- Execution monitor: whether service execution can be monitored and traced.
- QoS modeling: most approaches neglect the specification of non-functional QoS properties such as security, dependability, performance, or user preferences.
- Service definition: the language used to define a service.

It should be noted that the time consumed by the different system components is proportional to the execution of the web services, so the performance of the proposed model depends entirely on the execution of the selected web services. Furthermore, since the main actors of the system are its web services, the system can handle any number of requests based on these actors.

CASE STUDY

FPN4WSC is a formal model for describing and evaluating the web service composition process; it is applied to a travel reservation system as a case study.

Table 1. Comparison Between FPN4WSC and the Available Web Service Composition (WSC) Frameworks

<table> <thead> <tr> <th>Framework</th> <th>Service Connectivity</th> <th>Composition Strategy</th> <th>Execution Monitor</th> <th>QoS Modeling</th> <th>Service Definition</th> <th>Graph Support</th> </tr> </thead> <tbody> <tr> <td>BPEL4WS</td> <td>✓</td> <td>workflow composition</td> <td>×</td> <td>×</td> <td>WSDL</td> <td>×</td> </tr> <tr> <td>E-flow</td> <td>✓</td> <td>workflow composition</td> <td>✓</td> <td>×</td> <td>WSDL</td> <td>✓</td> </tr> <tr> <td>Petri net based algebra for WSC</td> <td>✓</td> <td>workflow composition</td> <td>×</td> <td>✓</td> <td>Not defined</td> <td>✓</td> </tr> <tr> <td>SRN</td> <td>✓</td> <td>workflow composition</td> <td>×</td> <td>✓</td> <td>WSDL</td> <td>×</td> </tr> <tr> <td>Automatic WSC using SHOP2</td> <td>✓</td> <td>AI composition</td> <td>✓</td> <td>×</td> <td>DAML-S</td> <td>×</td> </tr> <tr> <td>WSC based on Fuzzy Petri net and Semantic Web</td> <td>✓</td> <td>workflow
composition</td> <td>×</td> <td>✓</td> <td>Fuzzy Petri net</td> <td>×</td> </tr> <tr> <td>FPN4WSC</td> <td>✓</td> <td>workflow and AI composition</td> <td>✓</td> <td>✓</td> <td>Fuzzy engine</td> <td>✓</td> </tr> </tbody> </table>

Fig. (5). An empty input form.

Although many alternatives are available for travel reservation, FPN4WSC is the most suitable one because it provides two main added values: first, it gives the service requester a set of suitable plans for the required composition; second, it provides the user with uncertainty values for each web service. Furthermore, it has a very easy GUI. A set of snapshots taken while testing the model is shown below. First, the user registers before using the system's functionality; Fig. (4) shows the registration form. Since the proposed model is applied to travel reservation, the registration form contains some domain-specific questions, together with information about the user's health, such as whether he has any medical conditions or allergies, or whether he smokes. This information helps the model find the most suitable trip for the user's health. After registering, the user can run the model to compose his web services. To compose a set of web services, the user specifies a Petri net diagram of the required services. For example, the diagram in Fig. (2) specifies three web services, SearchFlight, SearchHotel, and ReserveHotel, to be composed into a new web service such that the first two run concurrently and, after they finish their execution, the final web service starts. The user is then ready to start executing the composition process by running the FPN4WSC model.
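The rigid/soft preference filtering that the fuzzy engine applies, as described in the Evaluation of Composite Service section, can be sketched without a full fuzzy rule base. The option values and predicates below are invented for illustration; they mirror the weight/budget example from that section.

```python
def resolve(options, first, second):
    a = {o for o in options if first["test"](o)}
    b = {o for o in options if second["test"](o)}
    if a & b:
        return a & b          # both preferences can be satisfied together
    # contradiction: the rigid preference wins over the soft one
    return a if first["status"] == "rigid" else b

options = {"plane-first", "bus-first", "bus-economy"}
weight_large = {"status": "rigid",            # weight is large: no plane
                "test": lambda o: o.startswith("bus")}
budget_large = {"status": "soft",             # budget is large: first class
                "test": lambda o: o.endswith("first")}
print(sorted(resolve(options, weight_large, budget_large)))  # ['bus-first']
```

The surviving option, bus in first class, matches the engine's answer in the running example: "You can take a bus and the class of service is first". A real fuzzy engine would grade options with membership functions instead of boolean predicates; the crisp sets here only show the rigid-over-soft resolution order.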
The model parses the given Petri net diagram and generates a vector of concurrent and sequential web services, then uses this vector to formulate a Lisp problem to be planned by JSHOP2. Depending on the required web services, the system generates a new web form for the data each service needs and displays it to the user. The snapshots in Figs. (5) and (6) show such an input form, before and after filling in the required data, for the web services specified in Fig. (2). For each web service, the model asks the user for a set of data; for example, for the SearchFlight web service it asks about the class of service, trip type, trip speed, and budget, and it also asks the user to select the first and second preferences and mark each as rigid or soft. All this information is used later by the fuzzy engine to find the most suitable trip for the user. For the ReserveHotel web service, the model asks about the class of service and the room type, and from this information the fuzzy engine determines the price of the requested hotel. After filling in the input form, the user clicks the Ok button to complete the execution of the composition process. The model then uses the vector of web services to create a Lisp problem and sends it to JSHOP2 to generate a plan. Once JSHOP2 has generated a plan for the given problem, the model runs the discovery engine to check whether the required web services are available. If the discovery component succeeds, the fuzzy engine runs to obtain the best solution for the required web services. Finally, the process execution engine executes the generated plan by running the OWL-S file of each web service, and the results of the composed web services are displayed in Fig. (7).

CONCLUSION AND FUTURE WORK

Web service composition provides new services for web-based cooperation and has attracted growing interest.
To describe and model the process of web service composition, a new Petri-net-based model, FPN4WSC, is proposed with two additional components, SHOP2 and a fuzzy engine. The proposed model uses static fuzzy rules that depend on the domain; in future work, the fuzzy rules will be managed dynamically, for example by using a genetic algorithm to generate them.

REFERENCES
LOFT: Enhancing Faithfulness and Diversity for Table-to-Text Generation via Logic Form Control

Yilun Zhao∗1, Zhenting Qi2, Linyong Nan1, Lorenzo Jaime Yu Flores1, Dragomir Radev1
1Yale University 2Zhejiang University
yilun.zhao@yale.edu zhenting.19@intl.zju.edu.cn

Abstract

Logical Table-to-Text (LT2T) generation is tasked with generating logically faithful sentences from tables. There currently exist two challenges in the field: 1) Faithfulness: how to generate sentences that are factually correct given the table content; 2) Diversity: how to generate multiple sentences that offer different perspectives on the table. This work proposes LOFT, which utilizes logic forms as fact verifiers and content planners to control LT2T generation. Experimental results on the LOGICNLG dataset demonstrate that LOFT is the first model that addresses the unfaithfulness and lack-of-diversity issues simultaneously. Our code is publicly available at https://github.com/Yale-LILY/LoFT.

1 Introduction

Table-to-Text (T2T) generation aims to produce natural language descriptions from structured tables. A statement generated from tabular data can be inferred based on different levels of information (e.g., the value of a specific cell, or the result of a logical operation across multiple cells). Although current T2T models (Lebret et al., 2016; Wiseman et al., 2017; Puduppully et al., 2019; Parikh et al., 2020) have shown remarkable progress in fluency and coherence, they mainly focus on surface-level realizations without much logical inference. Recently, Chen et al. (2020a) proposed LOGICNLG, which is tasked with generating textual descriptions that require logical reasoning over tabular data (i.e., LT2T generation). LT2T generation is challenging, as it requires a model to learn logical inference knowledge from table-text pairs and generate multiple factually correct sentences. Another challenge for LT2T generation is the diversity of generated text.
Natural Language Generation (NLG) encourages diverse output of statements over a single input, as it provides various perspectives on the data and offers users more choices. In LT2T generation, requirements for diversity naturally emerge from the need to apply different logical operations to extract different levels of table information. However, current methods (Chen et al., 2021; Nan et al., 2022; Liu et al., 2022a; Zhao et al., 2022b) that address issues of unfaithfulness have overlooked the importance of diversity. As shown in Figure 1, multiple statements generated using current methods (Nan et al., 2022) might only cover information from the same table region or logical operation. Such issues related to lack of diversity could limit the deployment of LT2T models in the real world.

In this work, we attribute unfaithfulness and lack of diversity to the absence of controllability over generation. Specifically, due to the large number of combinations of different logical operations and table regions, the space of factually correct statements is exponentially large. However, LOGICNLG uses the whole table as the input, without providing annotations for any other explicit control attribute. As a result, it is hard for neural models to make a favorable choice of logical selections solely based on the table input. We believe such uncontrollability leads to the unfaithfulness and lack-of-diversity issues.

This work proposes LoFT, a framework that utilizes logic forms as mediators to enable controllable LT2T generation. Logic forms (Chen et al., 2020d,b) are widely used to retrieve evidence and explain the reasons behind table fact verification (Yang et al., 2020; Yang and Zhu, 2021; Ou and Liu, 2022). In this work, logic forms are used as: 1) fact verifiers to ensure the factual correctness of each generated sentence; and 2) content planners to control which logical operation and table region to use during generation.
Experimental results show that LoFT surpasses previous methods in faithfulness and diversity simultaneously.

2 Related Work

Logical Table-to-Text (LT2T) Generation: LOGICNLG (Chen et al., 2020a) is tasked with generating logically faithful sentences from tables. To improve the faithfulness of generated statements, Nan et al. (2022) trained a system both as a generator and a faithfulness discriminator, with additional replacement detection and unlikelihood learning tasks. Liu et al. (2022a) pre-trained a model on a synthetic corpus of table-to-logic-form generation. Zhao et al. (2022b) demonstrated that the faithfulness of LT2T can be improved by pre-training a generative language model over synthetic Table QA examples. However, these methods overlook the importance of diversity in T2T generation, and might generate multiple statements that cover the same table regions or reasoning operations. Previous methods in NLG proposed to improve diversity by modifying the decoding techniques (Li et al., 2016); however, these approaches degrade faithfulness relative to baselines (Perlitz et al., 2022). To enable controllable generation and improve diversity, Perlitz et al. (2022) used the logical types of statements as a control. However, such methods still suffer from unfaithfulness, and may generate statements covering limited table regions. This work proposes to leverage the logic form as a fact checker and content planner to control LT2T generation, tackling the challenges of faithfulness and diversity at the same time.

Table Fact Verification via Logic Form: Logic forms are widely used in table fact verification (Chen et al., 2020b). Specifically, given an input statement, the model (Yang et al., 2020; Yang and Zhu, 2021; Ou and Liu, 2022) first translates it into a logic form. The logic form is then executed over the table, returning true/false as the entailment label for the given statement.
While several works (Chen et al., 2020d; Shu et al., 2021; Liu et al., 2021) focused on generating fluent statements from logic forms, the utilization of logic forms to benefit LT2T generation is still unexplored.

3 LoFT

This section first introduces the logic form we utilize, then describes the training and inference process of LoFT, and finally explains how the use of logic forms can enhance both faithfulness and diversity in LT2T generation.

3.1 Logic Form Implementation

Logic forms are widely used to retrieve evidence and explain the reasons behind table fact verification. We use the same implementation as Chen et al. (2020d), which covers 8 types of the most common logical operations (e.g., count, aggregation) to describe a structured table. Each logical operation corresponds to several Python-based functions. For example, the function all_greater(view, header, value) under the "majority" category checks whether all the values under the header column are greater than value, within the scope (i.e., view) of all or a subset of table rows. The complete list of logical operation types and corresponding function definitions is shown in Table 4 in the Appendix.

3.2 LoFT Training

Training Task Formulation: Given the serialized tabular data with selected columns as $T$ and a translated logic form, LoFT is trained to generate the reference statement.

3.3 LoFT Inference

At inference time, a logic form synthesis pipeline is first applied to synthesize candidate logic forms that cover different table regions and logical operations. LoFT is applied to generate a statement for each candidate logic form, and a statement verifier is then used to filter out potentially unfaithful statements.
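As an illustration of the "majority"-category function described in Section 3.1, the check performed by all_greater can be sketched in plain Python. This is a toy sketch assuming a view is represented as a list of row dicts; the paper's actual implementation follows Chen et al. (2020d):

```python
# Sketch of a "majority"-category logic-form function. Assumes a table
# view is a list of row dicts mapping column headers to values — a
# hypothetical toy representation, not the authors' code.

def all_greater(view, header, value):
    """Return True iff every value under `header` exceeds `value`."""
    return all(float(row[header]) > float(value) for row in view)

view = [
    {"nation": "france", "gold": 7},
    {"nation": "italy", "gold": 5},
]
print(all_greater(view, "gold", 4))  # True: both rows have gold > 4
print(all_greater(view, "gold", 5))  # False: italy has exactly 5
```

Executing such a function over the table is what gives logic forms their unambiguous, machine-checkable semantics.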
Logic Form Synthesis: To generate a candidate logic form, the pipeline first samples a template from the function definitions, where each function category corresponds to one unique table reasoning operation; similar functions (e.g., max/min, greater/less) are grouped into smaller categories to obtain more abstract templates. To instantiate the sampled template, a bottom-up sampling strategy is adopted to fill in each placeholder of the template and finally generate the logic form.

Statement Generation & Verification: Through the logic form synthesis pipeline, we obtained a large number of candidate logic forms. For each logic form, we used LoFT to generate the corresponding statement. The candidate statements might still contain factual errors, so we applied an NLI-based verifier to filter out potentially unfaithful generations. Specifically, we used the TABFACT (Chen et al., 2020b) dataset to train a classifier, which adopts RoBERTa-base as its backbone. We fed each generated statement and its corresponding table into the classifier, and kept only those statements predicted as entailed. We then randomly sampled five statements as the output for each table in LOGICNLG. As a result, LoFT can generate a diverse set of faithful statements covering different table regions and reasoning operations.

3.4 Enhancing LT2T via Logic Form Control

This subsection provides two perspectives to explain why logic forms can help improve both faithfulness and diversity of LT2T generation.
Logic Form as Content Planner: Logic forms pass column or cell values as arguments, guiding the model to focus on relevant table regions. The function category of the logic form, such as count, helps the model better organize logical-level content planning.

Logic Form as Fact Verifier: Logic forms are defined with unambiguous semantics, and hence are reliable mediators for achieving faithful and controllable logical generation. During the inference stage, we synthesize candidate logic forms with 100% execution correctness. The sampled logic form serves as a fact verifier and conveys accurate logical-level facts for controllable LT2T generation.

4 Experimental Setup

We next discuss the evaluation metrics, baselines, and implementation details for the experiments.

4.1 Evaluation Metrics

We applied automated evaluation metrics at different levels to evaluate model performance from multiple perspectives.

Surface-level: Following Chen et al. (2020a), we used BLEU-1/2/3 to measure the consistency of generated statements with the references.

Diversity-level: We used Distinct-n (Li et al., 2016) and Self-BLEU-n (Zhu et al., 2018) to measure the diversity of the five generated statements for each table. Distinct-n is defined as the total number of distinct n-grams divided by the total number of tokens in the five generated statements; Self-BLEU-n measures the average n-gram BLEU score between generated statements. We measured Distinct-2 and Self-BLEU-4 in our experiments.

Faithfulness-level: Similar to previous works (Chen et al., 2020a; Nan et al., 2022; Liu et al., 2022a), we used a parsing-based evaluation metric (SP-Acc) and two NLI-based evaluation metrics (NLI-Acc and TAPEX-Acc) to measure the faithfulness of generation. SP-Acc directly extracts a meaning representation from the generated sentence and executes it against the table to verify correctness.
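Of the metrics above, Distinct-n follows directly from its definition (distinct n-grams divided by total tokens) and can be sketched in a few lines. This is a toy illustration, not the authors' evaluation code:

```python
# Sketch of the Distinct-n diversity metric: the number of distinct
# n-grams across a set of generated statements, divided by the total
# number of tokens in those statements.

def distinct_n(statements, n=2):
    tokens_total = 0
    ngrams = set()
    for s in statements:
        toks = s.split()
        tokens_total += len(toks)
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams) / tokens_total if tokens_total else 0.0

same = ["the score is high"] * 5
varied = ["france won 7 golds", "italy ranked second overall"]
print(distinct_n(same))    # 0.15 — repeated statements share all bigrams
print(distinct_n(varied))  # 0.75 — the two statements share no bigrams
```

Higher Distinct-2 therefore directly rewards outputs that cover different table regions and wordings rather than near-duplicates.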
NLI-Acc and TAPEX-Acc use TableBERT (Chen et al., 2020b) and TAPEX (Liu et al., 2022b) respectively as their backbones, and were finetuned on the TABFACT dataset (Chen et al., 2020b). Liu et al. (2022a) found that NLI-Acc is overly positive about the predictions, while TAPEX-Acc is more reliable for evaluating the faithfulness of generated sentences.

4.2 Baseline Systems

We implemented the following baseline systems for performance comparison: GPT2-TabGen (Chen et al., 2020a) directly fine-tunes GPT-2 on the LOGICNLG dataset; GPT2-C2F (Chen et al., 2020a) first produces a template which determines the global logical structure, and then generates the statement conditioned on the template; DCVED (Chen et al., 2021) applies a de-confounded variational encoder-decoder to reduce spurious correlations during LT2T generation training; DEVTC (Perlitz et al., 2022) utilizes reasoning operation types as an explicit control to increase the diversity of LT2T generation; and R2D2 (Nan et al., 2022) trains a generative language model both as a generator and a faithfulness discriminator, with additional replacement detection and unlikelihood learning tasks, to enhance the faithfulness of LT2T generation.

4.3 Implementation Details

Following Shu et al. (2021), we converted each logic form into a more human-readable form for both LoFT training and inference data. LoFT was implemented using the fairseq library (Ott et al., 2019), with BART-Large (Lewis et al., 2020) as the backbone. All experiments were conducted on a cluster of eight NVIDIA RTX-A5000 24GB GPUs. Both LoFT and the statement verifier were trained for 5,000 steps with a batch size of 128. The best checkpoints were selected by validation loss.

5 Experimental Results

This section discusses automated and human evaluation results of the different systems.

5.1 Main Results

Table 1 presents the results on LOGICNLG.
LoFT outperforms all the baselines on the criteria of diversity and faithfulness, and is the first model to achieve state-of-the-art results at both the faithfulness and diversity levels. It is worth noting that in the LOGICNLG setting, a generated statement is allowed to cover different table regions or reasoning operations from the references, as long as it is fluent and factually correct. In such cases, however, reference-based metrics will be low, which explains why the BLEU-1/2/3 scores of LoFT are lower than those of other models.

5.2 Human Evaluation

We conducted the human evaluation with four expert annotators using the following three criteria: (1) Faithfulness (scoring 0 or 1): whether all facts contained in the generated statement are entailed by the table content; (2) Diversity (voting the best & worst): whether the five generated statements cover information from different table regions and use different reasoning operations; (3) Fluency (scoring 0 or 1): whether the five generated statements are fluent and without any grammar mistakes. We chose R2D2 (Nan et al., 2022) and DEVTC (Perlitz et al., 2022) for comparison, as they achieved the best results in faithfulness and diversity, respectively. We sampled 50 tables from the LOGICNLG test set. For each table, we selected all five generated statements from each model's output. To ensure fairness, the model names were hidden from the annotators, and the display order among the three models was randomly shuffled. Human evaluation results show that LoFT delivers improvements in both faithfulness (Table 3) and diversity (Table 2), while achieving comparable performance in fluency (Table 3).

6 Conclusions

This work proposes LoFT, which utilizes logic forms as fact verifiers and content planners to enable controllable LT2T generation. Experimental results on LOGICNLG demonstrate that LoFT delivers a great improvement in both the diversity and faithfulness of LT2T generation.
Limitations

The first limitation of our approach is that LoFT does not explore long text generation (Moosavi et al., 2021); LoFT only supports the generation of multiple single sentences. To enable long text generation (i.e., generating a long paragraph that delivers various perspectives on the table data), a global content planner (Su et al., 2021) needs to be designed to determine which candidate sentences should be mentioned and in which order. Additionally, we believe that LoFT can also be applied to text generation over hybrid contexts with both textual and tabular data (Chen et al., 2020c; Zhao et al., 2022a; Nakamura et al., 2022). The second limitation of our work is that the statement verifier discussed in Section 3.3 was trained on the same data as NLI-Acc and TAPEX-Acc. This might introduce some bias into the NLI-based metrics for faithfulness-level evaluation. In the future, we will explore a more robust automated evaluation system (Fabbri et al., 2021; Liu et al., 2022c) to comprehensively evaluate LT2T model performance from different perspectives. Moreover, we applied the SASP model (Ou and Liu, 2022) to convert statements into logic forms (Section 3.2); some converted logic forms may be inconsistent with the original statement. We believe that future work could incorporate the Logic2Text (Chen et al., 2020d) dataset into the training data to further improve LoFT's performance.

Ethical Consideration

We used the LOGICNLG (Chen et al., 2020a) dataset for training and inference. LOGICNLG is publicly available under the MIT license\(^1\) and widely used in NLP research and industry.

References

Ao Liu, Haoyu Dong, Naoaki Okazaki, Shi Han, and Dongmei Zhang. 2022a. PLOG: Table-to-logic pretraining for logical table-to-text generation. In EMNLP 2022.

Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2022c.
Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. \(^1\)https://opensource.org/licenses/MIT A Appendix <table> <thead> <tr> <th>Reasoning Op</th> <th>Function Category</th> <th>Name</th> <th>Arguments</th> <th>Output</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Unique</td> <td>UNIQUE</td> <td>only</td> <td>view</td> <td>bool</td> <td>returns whether there is exactly one row in the view</td> </tr> <tr> <td>Aggregation</td> <td>AGGREGATION</td> <td>avg/sum</td> <td>view, header, string</td> <td>number</td> <td>returns the average/sum of the values under the header column</td> </tr> <tr> <td>Count</td> <td>COUNT</td> <td>count</td> <td>view</td> <td>number</td> <td>returns the number of rows in the view</td> </tr> <tr> <td>Ordinal</td> <td>ORD_ARG</td> <td>nth_argmax/nth_argmin</td> <td>view, header, string</td> <td>view</td> <td>returns the row with the n-th maximum value in header column</td> </tr> <tr> <td></td> <td>ORIGINAL</td> <td>nth_max/nth_min</td> <td>view, header, string</td> <td>number</td> <td>returns the n-th max/n-th min of the values under the header column</td> </tr> <tr> <td></td> <td>SUPER_ARG</td> <td>argmax/argmin</td> <td>view, header, string</td> <td>view</td> <td>returns the row with the maximum value in header column</td> </tr> <tr> <td>Comparative</td> <td>COMPARE</td> <td>eq/not_eq</td> <td>object, object</td> <td>bool</td> <td>returns if the two arguments are equal</td> </tr> <tr> <td></td> <td></td> <td>round_eq</td> <td>object, object</td> <td>bool</td> <td>returns if the two arguments are roughly equal under certain tolerance</td> </tr> <tr> <td></td> <td></td> <td>greater/less</td> <td>object, object</td> <td>bool</td> <td>returns if 1st argument is greater/less than 2nd argument</td> </tr> <tr> <td></td> <td></td> <td>diff</td> <td>object, object, object</td> <td>object</td> <td>returns the difference between two arguments</td> </tr> <tr> <td>Majority</td> <td>MAJORITY</td> 
<td>all_eq/not_eq</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether all the values under the header column are equal/not equal to 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>all_greater/less</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether all the values under the header column are greater/less than 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>all_greater_eq/less_eq</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether all the values under the header column are greater/less or equal to 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>most_eq/not_eq</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether most of the values under the header column are equal/not equal to 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>most_greater/less</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether most of the values under the header column are greater/less than 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>most_greater_eq/less_eq</td> <td>view, header, string, object</td> <td>bool</td> <td>returns whether most of the values under the header column are greater/less or equal to 3rd argument</td> </tr> <tr> <td>Conjunction</td> <td>FILTER</td> <td>filter_eq/not_eq</td> <td>view, header, string, object</td> <td>view</td> <td>returns the subview whose values under the header column are equal/not equal to 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>filter_greater/less</td> <td>view, header, string, object</td> <td>view</td> <td>returns the subview whose values under the header column are greater/less than 3rd argument</td> </tr> <tr> <td></td> <td></td> <td>filter_greater_eq/less_eq</td> <td>view, header, string, object</td> <td>view</td> <td>returns the subview whose values under the header column are greater/less or equal to 3rd argument</td> </tr> <tr> <td>Other</td> <td>OTHER</td> <td>filter_all</td> <td>view, header, 
string</td> <td>view</td> <td>returns the view itself for the case of describing the whole table</td> </tr> <tr> <td></td> <td></td> <td>hop</td> <td>view, header string</td> <td>object</td> <td>returns the value under the header column of the row</td> </tr> <tr> <td></td> <td></td> <td>and</td> <td>bool, bool</td> <td>bool</td> <td>returns the boolean operation result of two arguments</td> </tr> </tbody> </table> Table 4: A complete list of function definitions for the logic forms (Similar to Chen et al. (2020d)).
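The function definitions in Table 4 can be mirrored directly in plain Python. Below is a sketch of a small executable subset, assuming a view is a list of row dicts and numeric cells parse as floats (a hypothetical toy representation, not the authors' code):

```python
# Sketch: a small executable subset of the Table 4 logic-form functions,
# assuming a view is a list of row dicts (hypothetical representation).

def count(view):
    """COUNT: number of rows in the view."""
    return len(view)

def filter_greater(view, header, value):
    """FILTER: subview whose values under `header` exceed `value`."""
    return [row for row in view if float(row[header]) > float(value)]

def argmax(view, header):
    """SUPER_ARG: the row with the maximum value in `header`."""
    return [max(view, key=lambda row: float(row[header]))]

def hop(view, header):
    """OTHER: the value under `header` of the (single) row."""
    return view[0][header]

table = [
    {"nation": "france", "gold": "7"},
    {"nation": "italy", "gold": "5"},
    {"nation": "spain", "gold": "2"},
]

# Nested calls mirror a logic form such as
# hop(argmax(all_rows, gold), nation).
print(count(filter_greater(table, "gold", "3")))  # 2
print(hop(argmax(table, "gold"), "nation"))       # france
```

Because each function is deterministic, executing a synthesized logic form against the table directly yields the logical-level fact that the generated statement must convey.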
RAPID KNOWLEDGE FORMATION (RKF) INFORMATION & TRANSITION SUPPORT

Teknowledge Corporation

Sponsored by Defense Advanced Research Projects Agency, DARPA Order No. M185

APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.

The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government. This report has been reviewed by the Air Force Research Laboratory, Information Directorate, Public Affairs Office (IFOIPA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations.

AFRL-IF-RS-TN-2005-1 has been reviewed and is approved for publication.

APPROVED: /s/ RAYMOND A. LIUZZI, Project Engineer

FOR THE DIRECTOR: /s/ JAMES A. COLLINS, Acting Chief, Advanced Computing Division, Information Directorate

The prime objective of this effort was to create and maintain the Rapid Knowledge Formation (RKF) community web site. This site was operative for the entire RKF project. This effort also investigated development of SCOOP, the System for Collaborative Open Ontology Production. The SCOOP system manipulates logic expressions and checks for redundancies or contradictions between the products developed by different engineers. SCOOP also includes an automated workflow process that supports recommendations for changes and voting to agree on changes. Future work needs to investigate more flexible and powerful methods for specifying interest and relevance in knowledge products. While a simple keyword matching scheme is currently employed, the presence of formally encoded knowledge provides a rich basis for more powerful methods: full logical queries could specify interest, with theorem provers posing each query against new knowledge products.
This also suggests techniques in which alternate encodings of knowledge may facilitate knowledge sharing.

Table of Contents
Summary of Results
Appendix 1

Summary of Results

This effort investigated development of SCOOP, the System for Collaborative Open Ontology Production. The SCOOP system manipulates logic expressions and checks for redundancies or contradictions between the products developed by different engineers. SCOOP also includes an automated workflow process that supports recommendations for changes, with voting to agree on changes. There are four primary features of the SCOOP collaboration process.

1. Knowledge is organized hierarchically as a set of files with inheritance of content. Consistency among these files is maintained vertically by enforcing that any assertion must not be contradictory with respect to the files of knowledge inherited by the file where the assertion is to be placed. Consistency is encouraged horizontally by alerting individuals to contradictions or redundancies with respect to the files of their colleagues, and providing them a structured process for being aware of and resolving inconsistencies if they so choose.

2. Knowledge developers can influence their colleagues in several ways. They can register supervisory authority over another developer's content. They can register interest in particular keywords that are dynamically matched to their colleagues' content, which allows them to be alerted when new content matching the keywords is created. They can also register interest in specific files or documents their colleagues are creating.

3. SCOOP assists developers in resolving conflicts and inconsistencies by identifying the statements by different developers that are problematic.
The system accomplishes this by analyzing the results of theorem proving that detects the problem, and extracting only those statements that were authored directly, rather than automatically deduced.

4. Authors have primary control over their own content but are given information that can help maintain compatibility with the content created by their colleagues. When an author is alerted to a redundancy with respect to a colleague's content, they have several choices: they may withdraw their knowledge, elect to keep it, or vote to have it elevated to a more general document. A majority must vote to elevate the knowledge for this to occur. When a contradiction is discovered, the developer may either retract the knowledge, vote to keep it, or vote to have the colleague's knowledge removed. Such a vote is informational, since only the author, or someone who has registered authority over the author's content, may change the content.

SCOOP is unique in the following aspects: (1) SCOOP concentrates on support for knowledge and ontology authoring and formation activities. (2) SCOOP mainly supports collaborating authors who are formalizing knowledge in the same domains to create harmonious knowledge for those domains; it is not a general-purpose collaborative tool that brings knowledge owners and seekers together. (3) SCOOP provides a conflict detection and limited conflict resolution mechanism on the content the authors generate; it is not simply a distributed coauthoring environment that does not check the correctness of the content. (4) SCOOP not only provides notifications to the coauthors when conflicts are detected, but also conducts voting among the related authors and seeks resolutions.

The Sigma ontology workbench is a Web application that allows a user to load and browse one or more ontology files, export to different formats, query the combined Knowledge Base, and perform certain types of checking.
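The keyword-interest alerts of feature 2 can be sketched as a simple matcher. This is a minimal sketch under the assumption that interests are plain keyword sets matched against new assertions; the developer names and data shapes are hypothetical, and SCOOP's actual matching is backed by its ontology content:

```python
# Minimal sketch of SCOOP-style keyword-interest alerts: developers
# register keyword sets, and new content matching a registration
# triggers a notification. Names and data shapes are hypothetical.

interests = {
    "alice": {"vehicle", "engine"},
    "bob": {"ontology"},
}

def alerts_for(new_assertion):
    """Return the developers whose registered keywords match the assertion."""
    words = set(
        new_assertion.lower().replace("(", " ").replace(")", " ").split()
    )
    return sorted(dev for dev, kws in interests.items() if kws & words)

print(alerts_for("(subclass Truck vehicle)"))  # ['alice']
```

A full logical-query scheme, as the report's future-work section suggests, would replace the keyword intersection with a theorem-prover call posing each registered query against the new knowledge product.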
Teknowledge accomplished integration of SCOOP checking with Sigma by testing the Vampire theorem prover. In the process of testing Vampire, Teknowledge extended the Sigma test suite and moved from a client-server architecture to a more tightly integrated single-user version called "Personal Sigma". As part of this testing, Teknowledge discovered an outstanding problem with a new version of Vampire, in which there appears to be a limit of about 7,500 axioms that was not present in earlier versions. Future work will investigate more flexible and powerful methods for specifying interest and relevance in knowledge products. While a simple keyword matching scheme is currently employed, the presence of formally encoded knowledge provides a rich basis for more powerful methods: full logical queries could specify interest, with theorem provers posing each query against new knowledge products. This also suggests techniques in which alternate encodings of knowledge may facilitate knowledge sharing. Teknowledge also prepared a final presentation titled "SCOOP: System for Collaborative Open Ontology Production". Teknowledge presented the SCOOP collaboration tool to customers in an ARDA-sponsored program for possible inclusion in future work, and conducted a design review of SCOOP and its proposed integration with the Sigma ontology workbench. Teknowledge also investigated further development of SUMO (the Suggested Upper Merged Ontology), an ontology created at Teknowledge Corporation with extensive input from the IEEE Standard Upper Ontology (SUO) mailing list. SUMO has been proposed as a starter document for the IEEE-sanctioned SUO Working Group. The SUMO was initially created by merging publicly available ontological content into a single, comprehensive, and cohesive ontology.
The ontology can be browsed online (http://ontology.teknowledge.com), and source files for all of the versions of the ontology can be freely downloaded. SUMO has also been mapped, by hand, to the complete set of 100,000 WordNet noun, verb, adjective, and adverb word senses; this mapping supports keyword matching within SCOOP. Another key objective of the RKF project was to create and maintain the RKF community web site. This site was operational for the entire project. Teknowledge continually updated the community web site and, at the conclusion of this effort, transferred the web site and discussion groups to CycCorp. The following paper provides more technical detail on SCOOP. Appendix 1: Agent-Mediated Knowledge Engineering Collaboration Adam Pease, John Li Teknowledge [apease | jli]@teknowledge.com Abstract: Knowledge Management is most necessary and valuable in a collaborative and distributed environment. A problem with commercial knowledge management tools is that they do not understand at a deep level the content that they are managing. In this paper we discuss the System for Collaborative Open Ontology Production (SCOOP), which manipulates logic expressions and checks for redundancies or contradictions between the products developed by different engineers. SCOOP also includes an automated workflow process that supports recommendations for changes and voting to agree on changes. Introduction The current state of practice in knowledge management hinges on human review of content. Tools exist for organizing content or facilitating human collaboration, but they do not understand the knowledge that they are managing. The System for Collaborative Open Ontology Production (SCOOP) works with logic expressions (among other knowledge products), which can be understood, to some degree, by a machine. There are four primary features of the SCOOP collaboration process. 1. Knowledge is organized hierarchically as a set of files with inheritance of content.
Consistency among these files is maintained vertically by enforcing that any assertion must not be contradictory with respect to files of knowledge that is inherited by the file where the assertion in question is to be placed. Consistency is encouraged horizontally by alerting individuals to contradictions or redundancies with respect to the files of their colleagues, and providing them a structured process for being aware of and resolving inconsistencies if they so choose. 2. Knowledge developers can influence their colleagues in several ways. They can register supervisory authority over another developer’s content. They can register interest in particular keywords that are dynamically matched to their colleagues’ content, which allows them to be alerted when new content matching the keywords is created. They can register interest in specific files or documents their colleagues are creating. 3. SCOOP assists developers in resolving conflicts and inconsistencies by identifying the statements by different developers that are problematic. The system accomplishes this by analyzing the results of theorem proving that detects the problem, and extracting only those statements that were authored directly, rather than automatically deduced. The full proof remains available for analysis by the authors if desired. 4. Authors have primary control over their own content but are given information that can help them maintain compatibility with the content created by their colleagues. When an author is alerted to a redundancy with respect to a colleague’s content, he has several choices. He may withdraw the knowledge, elect to keep it, or vote to have it and the colleague’s knowledge elevated to a more general document. A majority must vote to elevate the knowledge for this to occur. 
When a contradiction is discovered, the developer may either retract the knowledge, vote to keep it as knowledge which is contextually valid, although inconsistent with another view, or vote to have the colleague’s knowledge removed. Such a vote is informational, though, since only the author, or someone who has registered authority over the author’s content, may change the author’s content. This approach could of course be made stricter to be in keeping with a more authoritarian work environment. General Architecture and Mechanism A general architecture of SCOOP and its environment is shown in the figure below. To the knowledge authors, SCOOP is an invisible agent working in the background. Its existence becomes known to the authors only when conflicts or errors are detected in their products. To perform the error detection and limited correction functions described above, SCOOP has an inference engine that it uses to process diagnostic queries. This architecture allows any inference engine to be used for this purpose as long as it can answer logic queries and provide justifications. Our current implementation of SCOOP uses a first-order logic (FOL) theorem prover that accepts KIF (Genesereth, 1991), the language and grammar in which the assertions created by users are written. With small modifications, SCOOP can employ other inference engines that accept other languages the end users use. SCOOP depends on a distributed scheme of inference. Each knowledge developer runs a local inference engine that handles vertical consistency with the developer’s products, and those products he uses directly. A central inference engine handles the detection of conflicts between knowledge developers. Each assertion made by its author is first verified locally as vertically consistent and then sent to SCOOP for a horizontal consistency check.
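The two-stage routing just described — a local engine for vertical consistency, a central engine for horizontal consistency — can be sketched in a few lines. This is an illustrative Python sketch, not SCOOP's actual implementation; the trivial `entails` check stands in for a real KIF theorem prover, and all names are assumptions.

```python
class InferenceEngine:
    """Toy stand-in for a theorem prover: 'proving' a statement here
    just means it was already asserted into the engine."""
    def __init__(self, facts=None):
        self.facts = set(facts or [])

    def entails(self, statement):
        return statement in self.facts

    def assert_(self, statement):
        self.facts.add(statement)


def negate(statement):
    # Toy negation for string-encoded statements.
    return statement[4:] if statement.startswith("not ") else "not " + statement


def route_assertion(statement, local_engine, central_engine):
    """Vertical check on the author's machine, then a horizontal check
    against all authors' verified assertions on the central engine."""
    if local_engine.entails(negate(statement)):
        return "vertical contradiction"     # conflicts with inherited files
    if central_engine.entails(statement):
        return "redundancy"                 # someone already said this
    if central_engine.entails(negate(statement)):
        return "horizontal contradiction"   # conflicts with a colleague
    central_engine.assert_(statement)       # becomes part of the shared context
    return "accepted"
```

With this sketch, once one author's assertion is accepted centrally, a colleague re-asserting it is flagged as a redundancy, and asserting its negation is flagged as a horizontal contradiction.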
As required by this co-authoring task, when the developers start to work on their own products, their inference engines, as well as SCOOP’s central inference engine, have the same background knowledge base loaded. When a knowledge developer asserts a statement into a file, SCOOP is notified of this action. The content of the assertion is sent to SCOOP by the Knowledge Engineering (KE) Editor that the author uses to enter the assertion. SCOOP stores that assertion and its authorial information in its own temporary storage for possible future use. When there are multiple assertions, SCOOP treats them in the order of their arrival. To detect redundancies, SCOOP asks the central inference engine whether it can prove that the statement is true in the existing context. If it is not redundant, SCOOP then asks whether the engine can prove that the negation of the statement is true in the existing context. If both answers are negative, the statement is asserted into the inference engine and becomes part of the context against which the next statement, by the same or a different author, is tested. This mechanism of diagnosis sounds simple but is very powerful when more domain-specific axioms, diagnostic rules, and term interpretations (such as synonyms and antonyms) are available in the background knowledge base. For example, SCOOP can detect a contradiction between (instance MyCar-1 FastMovingObject) and (instance MyCar-1 SlowMovingObject) when there is an axiom in the knowledge base stating that the terms FastMovingObject and SlowMovingObject are opposites. If an error is detected, SCOOP sends the diagnosis and the justification to the author and other affected knowledge developers. The justification comes from the proof by SCOOP’s inference engine but is not a full proof: it lists only the facts and axioms used by the inference engine to reach the diagnosis, not the further deductions. The full proof is generated by SCOOP as a web page that the user can view if desired.
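The diagnosis order above (redundancy query first, then contradiction query) and the reduction of a justification to directly authored premises can be sketched as follows. This is an illustrative Python sketch; `prove` stands in for a theorem prover that returns success plus the premises it used, and the function names are assumptions, not SCOOP's API.

```python
def diagnose(statement, prove, negated):
    """prove(goal) -> (succeeded, premises_used) for a hypothetical prover."""
    ok, premises = prove(statement)
    if ok:
        return "REDUNDANCY", premises      # the statement is already derivable
    ok, premises = prove(negated(statement))
    if ok:
        return "CONTRADICTION", premises   # its negation is derivable
    return "OK", []                        # safe to assert into the context


def need_to_know(premises, authored_by):
    """Keep only premises that were authored directly (automatically deduced
    statements have no author entry) and map them back to their authors, so
    only the affected authors are notified."""
    return sorted({authored_by[p] for p in premises if p in authored_by})
```

Given a contradiction whose proof cites Jane's fact and Joe's irreflexivity declaration, `need_to_know` would single out Jane and Joe as the voters to contact.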
The justification serves two purposes. Besides giving a brief view of the assertions involved in the diagnosis, it also provides a basis for an algorithm to identify the other authors who are related to the problem because of their contributions to the proof of the diagnostic result. Since the authorial information of each assertion is stored in SCOOP, SCOOP can easily identify the authors from the assertions and seek a solution only from those who need to know, rather than from all authors. An author has three choices regarding a redundancy warning: withdraw the knowledge, elect to keep it, or vote to have it and the colleague’s knowledge elevated to a more general document. In a contradiction case, the developer may either retract the knowledge or vote to keep it (in which case the contradictory knowledge will be removed by the colleague author). If the author does not want to retract an assertion diagnosed as redundant or contradictory to the existing knowledge base, a request to vote is sent out by SCOOP to the need-to-know authors and reviewers, together with the diagnosis message. When all votes for a case are cast, SCOOP tallies them. The fate of the statement is determined based on the voting results and a pre-set policy. The voting authors are then notified of the decision made on the statement. An Example of SCOOP Usage The following example may provide a better understanding of our current implementation of SCOOP. Suppose there are three knowledge engineers or subject matter experts, Joe, Jane, and Arnold, co-authoring a knowledge base in a distributed environment. Each of them works with a KE editor that is connected to its local inference engine. To work with SCOOP, a KE editor must be able to communicate with SCOOP and display messages as appropriate. We implemented a simple KE editor as shown in the figure below.
Our KE editor has three major windows: the Assertion Editor, the Results Window, and a window for Redundant and Contradictory Items. As the authors log in, the KE editor sends their user names to SCOOP and SCOOP establishes a user session for each of them. After an author adds assertions in the KE editor, the diagnostic result for each assertion is shown in the Results Window. If a redundancy or a contradiction is detected, a warning appears and a new indexed entry appears in the Redundant and Contradictory Items window. The author can see the detailed report (with justification) or the full proof by highlighting the item in the window and then clicking the View Details or View Complete Proof button. The voting buttons are also enabled when a redundant or contradictory item is selected. As an example, suppose that Joe defines the sibling relationship to be an irreflexive relation and his assertion gets an OK from SCOOP: (instance sibling IrreflexiveRelation) In SCOOP’s background, there is an axiom about IrreflexiveRelation: (=> (instance ?REL IrreflexiveRelation) (forall (?X) (not (holds ?REL ?X ?X)))) Jane asserts some facts about the sibling relation: (mother Bill Jane) (mother Bob Jane) (sibling Bob Bill) SCOOP accepts each of these statements and returns an OK message. Arnold tries to assert an axiom that defines the sibling relationship in terms of father and mother relationships: (=> (or (exists (?F) (and (father ?S1 ?F) (father ?S2 ?F))) (exists (?M) (and (mother ?S1 ?M) (mother ?S2 ?M)))) (sibling ?S1 ?S2)) As Arnold enters the assertion, SCOOP sends him the result of the diagnosis, which is a contradiction, and adds an entry “[ITEM-1] CONTRADICTION” in the Redundant and Contradictory Items window. By highlighting this entry, Arnold can choose to view the detailed report or the full proof. These two reports are as follows: Sample XML-formatted Report Note that some XML tags below have been removed for brevity.
```xml <DETAILS:> <ITEM-ID:1> <assertion: (=> (or (exists (?F) (and (father ?S1 ?F) (father ?S2 ?F))) (exists (?M) (and (mother ?S1 ?M) (mother ?S2 ?M)))) (sibling ?S1 ?S2))> <author: ARNOLD> <diagnostics: CONTRADICTION> <justification> <premises> <statement:(mother Bob Jane)> <statement: (=> (instance ?X0 IrreflexiveRelation) (forall (?X1) (not (holds ?X0 ?X1 ?X1))))> <statement: (instance sibling IrreflexiveRelation)> <conclusion> <statement: (not (=> (or (exists (?F) (and (father Bob ?F) (father Bob ?F))) (exists (Jane) (and (mother Bob Jane) (mother Bob Jane)))) (sibling Bob Bob))> ``` Sample Proof Result: There is 1 answer. Answer 1: [definite] ?S2 = Bob, ?S1 = Bob 1. (mother Bob Jane)[KB] 2. (=> (instance ?X0 IrreflexiveRelation) (forall (?X1) (not (holds ?X0 ?X1 ?X1))))[KB] 3. (forall (?X0) (=> (instance ?X0 IrreflexiveRelation) (forall (?X1) (not (holds ?X0 ?X1 ?X1))))) [2] 4. (forall (?X0) (or (not (instance ?X0 IrreflexiveRelation)) (forall (?X1) (not (holds ?X0 ?X1 ?X1))))) [3] 5. (or (not (instance ?X0 IrreflexiveRelation)) (not (holds ?X0 ?X1 ?X1))) [4] 6. (instance sibling IrreflexiveRelation)[KB] 7. (not (not (=> (exists (?X4) (and (father ?X3 ?X4) (father ?X2 ?X4))) (exists (?X5) (and (mother ?X3 ?X5) (mother ?X2 ?X5)))) (sibling ?X3 ?X2))) [Negated Query] 8. (forall (?X2 ?X1) (not (not (=> (exists (?X0) (and (father ?X1 ?X0) (father ?X2 ?X0))) (exists (?X3) (and (mother ?X1 ?X3) (mother ?X2 ?X3)))) (sibling ?X1 ?X2)))) [7] 9. (forall (?X2 ?X1) (or (forall (?X0) (or (not (father ?X1 ?X0)) (not (father ?X2 ?X0))) (forall (?X3) (or (not (mother ?X1 ?X3)) (not (mother ?X2 ?X3)))) (sibling ?X1 ?X2)))) [8] 10. (or (and (or (not (father ?X1 ?X0)) (not (father ?X2 ?X0))) (or (not (mother ?X1 ?X3)) (not (mother ?X2 ?X3))) (sibling ?X1 ?X2)) [9] 11. (or (sibling ?X1 ?X2) (not (mother ?X2 ?X3)) (not (mother ?X1 ?X3))) [10] 12. (not (sibling ?X0 ?X0)) [5, 6] 13. (or (sibling ?X0 Bob) (not (mother ?X0 Jane))) [11, 1] 14. 
(and (= ?S2 Bob) (= ?S1 Bob)) [1, 12, 13] Note that in the proof above, [N] after a formula means the formula is derived from the formula in proof step #N. [KB] means the assertion already exists in the knowledge base, either by assertion or in a preloaded background context. The proof shows that the axiom developed by Arnold has a flaw that can lead to (sibling Bob Bob), violating the declared irreflexive nature of the predicate. The author may choose to retract it to resolve the conflict with the other axioms. If the author chooses to retain the statement, SCOOP will try to resolve the issue by sending a vote request to Jane and Joe, because their products are involved in the proof. Version Control and Change Notification To perform the centralized knowledge harmonization functions described above, SCOOP is connected to a Concurrent Versions System (CVS) repository that stores the files developed by all authors at different stages. The repository is organized into knowledge modules so the products at different development stages can be stored together under a theme. SCOOP allows users to specify relationships between themselves, others, and knowledge products in the system that influence the workflow process employed when a change or conflict occurs. A knowledge developer can designate another developer as the knowledge reviewer of his or her products. All users can also register their interest in the products of other authors via keywords. If any of the keywords match the contents of files in the repository, existing now or added in the future, the user is automatically informed of the new or changed content. When an author submits a file for review, the reviewer also automatically receives a notice. The reviewers can vote to approve or reject a document and determine whether a file should pass the reviewing process. All the notifications and votes are carried via automatically generated email messages.
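The notification and voting workflow described above can be sketched in miniature: keyword interest registrations are matched against new content, and a simple majority of cast votes decides a contested statement's fate. This is a hypothetical Python sketch; the function names and the tie-breaking rule (keep the statement on a tie) are assumptions, not documented SCOOP behavior.

```python
import re

def interested_users(content, registrations):
    """registrations: user -> set of keywords the user registered.
    Returns the users whose keywords match the new content."""
    words = set(re.findall(r"\w+", content.lower()))
    return sorted(user for user, keywords in registrations.items()
                  if words & {k.lower() for k in keywords})

def tally(votes):
    """votes: user -> 'keep' or 'remove'. Majority of cast votes decides;
    a tie defaults to keeping the statement (assumed policy)."""
    removals = sum(1 for v in votes.values() if v == "remove")
    keeps = len(votes) - removals
    return "remove" if removals > keeps else "keep"
```

For instance, if only Joe registered the keyword "sibling", the new assertion (sibling Bob Bill) would trigger a notification to Joe alone, and a subsequent 2-to-1 vote to remove a contested statement would carry.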
SUMO SCOOP is especially useful when there is a large body of formal knowledge. The example in the previous section displays the use of a background axiom in the proof. One key to providing informed and intelligent assistance to the knowledge developer is having existing knowledge that can support machine analysis of new content. We developed a large, formal ontology that can support that need. The SUMO (Suggested Upper Merged Ontology) is an ontology that was created at Teknowledge Corporation with extensive input from the SUO mailing list, and it has been proposed as a starter document for the IEEE-sanctioned SUO Working Group. The SUMO was initially created by merging publicly available ontological content into a single, comprehensive, and cohesive structure (Niles & Pease, 2001; Pease et al., 2002). As of January 2003, the ontology contains 1000 terms and 4000 assertions, including 750 rules. The ontology can be browsed online (http://ontology.teknowledge.com), and source files for all of the versions of the ontology can be freely downloaded. SUMO has also been mapped, by hand, to the complete set of 100,000 WordNet (Miller et al., 1993) noun, verb, adjective, and adverb word senses. This supports keyword matching within SCOOP. Related Work The Ontolingua (Farquhar et al., 1996) ontology server provides user and group access control that can facilitate group work. It also allows simultaneous access to ontologies and change highlighting. Some work on detecting inconsistencies during ontology merging has been reported (Noy & Musen, 2000). Other relevant work includes (Elst, 2001) and (Dignum, 2002). Previous work on collaborative ontology construction environments and recent developments in CSCW (Computer Supported Cooperative Work), knowledge sharing models, meaning negotiation approaches, and ontological approaches to knowledge management have provided a broad base and motivation for our research. However, SCOOP is unique in the following aspects: 1.
SCOOP concentrates on support for knowledge and ontology authoring and formation activities. It is not a general-purpose CSCW tool. 2. SCOOP mainly supports collaborating authors who are formalizing knowledge in the same domains to create harmonious knowledge for the domains. It is not a general-purpose collaborative tool that brings the knowledge owners and the seekers together. 3. SCOOP provides a conflict detection and limited conflict resolution mechanism on the contents the authors generate. It is not simply a distributed co-authoring environment that does not check the correctness of the contents. 4. SCOOP not only provides notifications to the co-authors when conflicts are detected, but also conducts voting among the related authors and seeks resolutions. Future Work A primary area of effort is to make it possible for users to author content that SCOOP can check for consistency. We have developed a system for translating a restricted natural language to logic. Without such a system, we would be faced with having developers author knowledge in formal logic, which would dramatically limit the applicability of this work. A second area of effort is improving the quality of proofs of contradiction and redundancy. We have done preliminary work in removing repeated applications of the same axiom in a particular line of reasoning. Additional efforts are in removing proof steps that are conceptually very small, such as steps that merely exploit the associativity and commutativity of the logical operator “and” to regroup or reorder clauses. We can currently generate natural language from logic, as well as translate from language to logic, although considerable work remains in making this output more colloquial. A potentially very difficult problem is guessing how long to let a theorem prover run in order to find contradictions or redundancies. It is quite possible that, however long the system searches for a problem, other undiscovered problems remain.
Users are not going to be willing to wait indefinitely before their knowledge is declared consistent and compatible with that of their colleagues. As a result, SCOOP must consider that an undiscovered contradiction may at some point cause all subsequent proofs to declare the existence of an erroneous contradiction: if the system is asked to prove a contradiction, and another real contradiction exists, the system will always respond “true”, because anything can be proved from a contradictory knowledge base. Future work will also include a more flexible and powerful method for specifying interest and relevance in knowledge products. While currently a simple keyword matching scheme is employed, the presence of formally encoded knowledge provides a rich basis for more powerful methods. We anticipate providing full logical queries to specify interest, and employing our theorem prover to pose each query against new knowledge products. Finally, we anticipate providing alternate encodings of knowledge to facilitate knowledge sharing. Our work on the DARPA DAML project (Pease et al., 2002) has allowed us to develop translators to convert formal ontologies to a form suitable for the semantic web. We have also developed a semantic web crawler and search system, which should facilitate knowledge discovery on a much larger scale than any individual collaboration system will allow. References Statement of Interest of the Authors Adam Pease is Program Manager and Director of Knowledge Systems at Teknowledge Corporation. He led an integration team for the DARPA High Performance Knowledge Bases project and he participates in the DARPA DAML project. He was chair of the IJCAI-2001 Workshop on the IEEE Standard Upper Ontology and initiated Teknowledge's work on a Suggested Upper Merged Ontology proposal to the IEEE SUO group. He is the author of the Core Plan Representation (CPR) and the article "Knowledge Bases" in the Wiley Encyclopedia of Software Engineering.
He worked previously at NASA/Ames and at the Naval Undersea Warfare Center. He holds M.S. and B.S. degrees in Computer Science from Worcester Polytechnic Institute. http://projects.teknowledge.com/apease John Li is a senior scientist/project leader for Teknowledge's Knowledge Systems Division, currently working on DARPA's Rapid Knowledge Formation (RKF) project and the DARPA Agent Markup Language (DAML) project. Previously, he worked for Teknowledge on DARPA's High Performance Knowledge Base (HPKB) project and other government projects. Dr. Li received his Ph.D. in Communication and Information Sciences from the University of Hawaii in 1992. Prior to joining Teknowledge, John was a Research System Developer at the Research Corporation of the University of Hawaii.
Declarative Computation Model
Single assignment store (VRH 2.2)
Kernel language syntax (VRH 2.3)

Carlos Varela, RPI, March 4, 2010
Adapted with permission from: Seif Haridi (KTH) and Peter Van Roy (UCL)

Sequential declarative computation model
- The single assignment store
  - declarative (dataflow) variables
  - partial values (variables and values are also called *entities*)
- The kernel language syntax
- The kernel language semantics
  - The environment: maps textual variable names (variable identifiers) into entities in the store
  - Interpretation (execution) of the kernel language elements (statements) by the use of an abstract machine
  - The abstract machine consists of an execution stack of statements transforming the store

Single assignment store
- A single assignment store is a store (set) of variables.
- Initially the variables are unbound, i.e., they do not have a defined value.
- Example: a store with three variables, $x_1$, $x_2$, and $x_3$.

Single assignment store (2)
- Variables in the store may be bound to values.
- Example: assume we allow integers and lists of integers as values.

Single assignment store (3)
- Example: $x_1$ is bound to the integer 314, $x_2$ is bound to the list [1 2 3], and $x_3$ is still unbound.

Declarative (single-assignment) variables
- A declarative variable starts out as being unbound when created.
- It can be bound to exactly one value.
- Once bound, it stays bound throughout the computation and is indistinguishable from its value.

Value stores
- A store where all variables are bound to values is called a value store.
- Example: a value store where $x_1$ is bound to the integer 314, $x_2$ to the list [1 2 3], and $x_3$ to the record (labeled tree) `person(name: “George” age: 25)`.
- Functional programming computes functions on values and needs only a value store.
- This notion of value store is enough for functional programming (ML, Haskell, Scheme).
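The behavior of a declarative variable — created unbound, bound at most once, with compatible re-binding tolerated — can be illustrated outside Oz. The following is a minimal Python sketch, not Oz/Mozart code; real Oz dataflow variables additionally suspend any thread that reads an unbound variable.

```python
# Minimal single-assignment ("declarative") variable sketch in Python.
# Illustrative only; names and the compatibility test are assumptions.

class _Unbound:
    """Sentinel marking a variable that has no value yet."""

UNBOUND = _Unbound()

class DeclarativeVar:
    def __init__(self):
        self.value = UNBOUND              # starts out unbound

    def bind(self, value):
        if self.value is UNBOUND:
            self.value = value            # bound exactly once...
        elif self.value != value:         # ...re-binding must be compatible
            raise ValueError("incompatible binding")

    def is_bound(self):
        return self.value is not UNBOUND
```

Here `x.bind(314)` binds the variable; a second `x.bind(314)` is accepted as compatible, while `x.bind(315)` raises an error, mirroring the compatibility test of the single assignment operation.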
Operations on the store (1)
- Single assignment: $\langle x \rangle = \langle v \rangle$
  - $x_1 = 314$
  - $x_2 = [1\ 2\ 3]$
- This assumes that $\langle x \rangle$ is unbound.

Single-assignment (2): $\langle x \rangle = \langle v \rangle$
- The single assignment operation (‘=’) constructs the value $\langle v \rangle$ in the store and binds the variable $\langle x \rangle$ to this value.
- If the variable is already bound, the operation tests the compatibility of the two values.
- If the test fails, an error is raised.

The store now holds: $x_1 \rightarrow 314$, $x_2 \rightarrow$ 1|2|3|nil, $x_3 \rightarrow$ unbound.

Variable identifiers
- Variable identifiers refer to store entities (variables or values).
- The environment maps variable identifiers to variables.
- `declare X` or `local X in ... end` introduces the (variable) identifier "X".
- This corresponds to the environment {"X" → x₁}.

Variable-value binding revisited (1)
- $X = [1\ 2\ 3]$
- Once bound, the variable is indistinguishable from its value.

Variable-value binding revisited (2)
- The operation of traversing variable cells to get the value is known as *dereferencing* and is invisible to the programmer.

Partial Values
- A partial value is a data structure that may contain unbound variables.
- The store contains the partial value: `person(name: “George” age: x_2)`
- `declare Y X`
  `X = person(name: “George” age: Y)`
- The identifier ’Y’ refers to $x_2$.

Partial values may be complete
- `declare Y X`
  `X = person(name: “George” age: Y)`
  `Y = 25`

Variable-variable binding: $\langle x_1 \rangle = \langle x_2 \rangle$
- The bind operation can also be performed between two variables.
- Example: $X = Y$, then $X = [1\ 2\ 3]$
- The operation equates (merges) the two variables, forming an equivalence class.
- After $X = [1\ 2\ 3]$, all variables in the class ($X$ and $Y$) are bound to $[1\ 2\ 3]$.
- (Figure: the store, where $x_1$ and $x_2$, referred to by X and Y, are both bound to the list 1|2|3|nil.)

Summary: variables and partial values
- **Declarative variable:**
  - an entity that resides in a single-assignment store, is initially unbound, and can be bound to exactly one (partial) value
  - it can be bound to several (partial) values as long as they are compatible with each other
- **Partial value:**
  - a data structure that may contain unbound variables
  - when one of the variables is bound, it is replaced by the (partial) value it is bound to
  - a complete value, or *value* for short, is a data structure that does not contain any unbound variables

Declaration and use of variables
- Assume that variables can be declared (introduced) and used separately.
- What happens if we try to use a variable before it is bound?
  1. Use whatever value happens to be in the memory cell occupied by the variable (C, C++)
  2.
The variable is initialized to a default value (Java); use the default.
  3. An error is signaled (Prolog). This makes sense if there is a single activity running (pure sequential programs).
  4. An attempt to use the variable waits (suspends) until another activity binds the variable (Oz/Mozart).

Declaration and use of variables (2)
- An attempt to use the variable waits (suspends) until another activity binds the variable (Oz/Mozart).
- Declarative (single-assignment) variables that have this property are called dataflow variables.
- They allow multiple operations to proceed concurrently while still giving the correct result.
- Example: A = 23 running concurrently with B = A+1
- Functional (concurrent) languages do not allow the separation between declaration and binding (ML, Haskell, and Erlang).

Kernel language syntax

The following defines the syntax of a statement; ⟨s⟩ denotes a statement.

⟨s⟩ ::= skip                                              empty statement
     |  ⟨x⟩ = ⟨y⟩                                         variable-variable binding
     |  ⟨x⟩ = ⟨v⟩                                         variable-value binding
     |  ⟨s₁⟩ ⟨s₂⟩                                         sequential composition
     |  local ⟨x⟩ in ⟨s₁⟩ end                             declaration
     |  if ⟨x⟩ then ⟨s₁⟩ else ⟨s₂⟩ end                    conditional
     |  {⟨x⟩ ⟨y₁⟩ … ⟨yₙ⟩}                                 procedure application
     |  case ⟨x⟩ of ⟨pattern⟩ then ⟨s₁⟩ else ⟨s₂⟩ end     pattern matching

⟨v⟩ ::= …            value expression
⟨pattern⟩ ::= …

Variable identifiers
- x, y, z stand for variables.
- In the concrete kernel language,
variables begin with an upper-case letter followed by a (possibly empty) sequence of alphanumeric characters or underscores
- Any sequence of printable characters within back-quotes is also a variable identifier
- Examples:
  - `X`
  - `Y1`
  - `Hello_World`
  - `` `hello this is a $5 bill` `` (back-quote)

Values and types

- A *data type* is a set of values and a set of associated operations.
- Example: `Int` is the data type "Integer", i.e., the set of all integer values.
- 1 is of type `Int`.
- `Int` has a set of operations including `+`, `-`, `*`, `div`, etc.
- The model comes with a set of basic types.
- Programs can define other types, e.g., *abstract data types* (ADTs).

Data types

- Value
  - Number
    - Int
      - Char
    - Float
  - Record
    - Tuple
      - Literal
        - Atom
        - Boolean (`true`, `false`)
      - List
        - String
  - Procedure

Value expressions

\[ \langle v \rangle ::= \langle \text{procedure} \rangle \mid \langle \text{record} \rangle \mid \langle \text{number} \rangle \]
\[ \langle \text{procedure} \rangle ::= \text{proc} \, \{ \$ \; \langle y \rangle_1 \ldots \langle y \rangle_n \} \, \langle s \rangle \, \text{end} \]
\[ \langle \text{record} \rangle, \langle \text{pattern} \rangle ::= \langle \text{literal} \rangle \mid \langle \text{literal} \rangle (\langle \text{feature} \rangle_1 : \langle x \rangle_1 \ldots \langle \text{feature} \rangle_n : \langle x \rangle_n) \]
\[ \langle \text{literal} \rangle ::= \langle \text{atom} \rangle \mid \langle \text{bool} \rangle \]
\[ \langle \text{feature} \rangle ::= \langle \text{int} \rangle \mid \langle \text{atom} \rangle \mid \langle \text{bool} \rangle \]
\[ \langle \text{bool} \rangle ::= \text{true} \mid \text{false} \]
\[ \langle \text{number} \rangle ::= \langle \text{int} \rangle \mid \langle \text{float} \rangle \]

Numbers

- Integers
  - 314, 0
  - ~10 (minus 10)
- Floats
  - 1.0, 3.4, 2.0e2, 2.0E2 (\( 2 \times 10^2 \))

Atoms and booleans

- A sequence starting with a lower-case character followed by
letters or digits, e.g.:
  - `person`, `peter`
- Or any sequence of printable characters within single quotes:
  - `'Seif Haridi'`
- Booleans:
  - `true`
  - `false`

Records

- Compound representations (data structures)
  - \( \langle l \rangle (\langle f_1 \rangle : \langle x_1 \rangle \ldots \langle f_n \rangle : \langle x_n \rangle ) \)
  - \( \langle l \rangle \) is a literal
- Examples
  - `person(age:X1 name:X2)`
  - `person(1:X1 2:X2)`
  - `'|'(1:H 2:T)`
  - `nil`
  - `person`

Syntactic sugar (tuples)

- Tuples \( \langle l \rangle (\langle x_1 \rangle \ldots \langle x_n \rangle) \)
- This is equivalent to the record \( \langle l \rangle (1: \langle x_1 \rangle \ldots n: \langle x_n \rangle) \)
- Example: `person('George' 25)`
  - This is the record `person(1:'George' 2:25)`

Syntactic sugar (lists)

- Lists \( \langle x_1 \rangle \,|\, \langle x_2 \rangle \) (a cons with the infix operator `|`)
- This is equivalent to the tuple `'|'(`\( \langle x_1 \rangle \; \langle x_2 \rangle \)`)`
- Example: `H | T`
  - This is the tuple `'|'(H T)`

Syntactic sugar (lists, continued)

- Lists \( \langle x_1 \rangle \,|\, \langle x_2 \rangle \,|\, \langle x_3 \rangle \)
- `|` associates to the right: \( \langle x_1 \rangle \,|\, (\langle x_2 \rangle \,|\, \langle x_3 \rangle) \)
- Example: `1 | 2 | 3 | nil`
  - Is `1 | (2 | (3 | nil))`

Syntactic sugar (complete lists)

- Complete lists
- Example: `[1 2 3]`
  - Is `1 | (2 | (3 | nil))`

Strings

- A string is a list of character codes, enclosed in double quotes
- Example: `"E=mc^2"`
- Means the same as `[69 61 109 99 94 50]`

Procedure declarations

- According to the kernel language, \( \langle x \rangle = \text{proc} \{\$ \; \langle y_1 \rangle \ldots \langle y_n \rangle \} \langle s \rangle \text{ end} \) is a legal statement
- It binds \( \langle x \rangle \) to a procedure value
- This statement actually declares (introduces) a procedure
- Another syntactic variant, which is more familiar, is \( \text{proc} \{ \langle x \rangle \; \langle y_1 \rangle \ldots \langle y_n \rangle \} \langle s \rangle \text{ end} \)
- This introduces (declares) the procedure \( \langle x \rangle
\)

Operations of basic types

- **Arithmetics**
  - Floating-point numbers: `+`, `-`, `*`, and `/`
  - Integers: `+`, `-`, `*`, `div` (integer division, i.e., truncate the fractional part), `mod` (the remainder after a division, e.g., `10 mod 3 = 1`)
- **Record operations**
  - `Arity`, `Label`, and `.`
  - `X = person(name:"George" age:25)`
  - `{Arity X} = [age name]`
  - `{Label X} = person`, `X.age = 25`
- **Comparisons**
  - Boolean comparisons, including `==` (equality) and `\=` (inequality)
  - Numeric comparisons `=<`, `<`, `>`, `>=` compare integers, floats, and atoms

Value expressions

\[ \langle v \rangle ::= \langle \text{procedure} \rangle \mid \langle \text{record} \rangle \mid \langle \text{number} \rangle \mid \langle \text{basicExpr} \rangle \]
\[ \langle \text{basicExpr} \rangle ::= \ldots \mid \langle \text{numberExpr} \rangle \mid \ldots \]
\[ \langle \text{numberExpr} \rangle ::= \langle x \rangle_1 + \langle x \rangle_2 \mid \ldots \]

Syntactic sugar (multiple variables)

- Multiple variable introduction

```
local X Y in ⟨statement⟩ end
```

- is transformed to

```
local X in local Y in ⟨statement⟩ end end
```

Syntactic sugar (basic expressions)

- Basic expression nesting

\[ \text{if } \langle \text{basicExpr} \rangle \text{ then } \langle \text{statement} \rangle_1 \text{ else } \langle \text{statement} \rangle_2 \text{ end} \]

- is transformed to

\[
\begin{aligned}
&\text{local } T \text{ in} \\
&\quad T = \langle \text{basicExpr} \rangle \\
&\quad \text{if } T \text{ then } \langle \text{statement} \rangle_1 \text{ else } \langle \text{statement} \rangle_2 \text{ end} \\
&\text{end}
\end{aligned}
\]

- where \( T \) is a fresh ('new') variable identifier

Syntactic sugar (variables)

- Variable initialization

\[ \text{local } X = \langle \text{value} \rangle \text{ in } \langle \text{statement} \rangle \text{ end} \]

- is transformed to

\[
\begin{aligned}
&\text{local } X \text{ in} \\
&\quad X = \langle \text{value} \rangle \\
&\quad \langle \text{statement} \rangle \\
&\text{end}
\end{aligned}
\]

Exercises

38.
Using Oz, perform a few basic operations on numbers, records, and booleans (see Appendix B1-B3)
39. Explain the behavior of the `declare` statement in the interactive environment. Give an example of an interactive Oz session where `declare` and `declare ... in` produce different results. Explain why.
40. VRH Exercise 2.9.1
41. Describe what an anonymous procedure is, and write one in Oz. When are anonymous procedures useful?
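The single-assignment store and dataflow binding described in these slides can be modeled outside Oz. Below is a minimal Python sketch (the names `Var` and `bind` are invented for illustration, and this is only a partial model of Oz semantics): variable-variable binding merges two variables into one equivalence class via union-find, and a later value binding applies to the whole class.

```python
# Illustrative Python model (not Oz) of a single-assignment store.
# Dataflow variables start unbound; X = Y merges their equivalence
# classes; binding a value assigns it to the whole class, and a
# second, incompatible binding is an error.

class Var:
    """A declarative (single-assignment) variable."""
    def __init__(self):
        self.parent = self   # union-find parent pointer
        self.value = None    # None means "unbound"

    def find(self):
        v = self
        while v.parent is not v:
            v = v.parent
        return v

def bind(a, b):
    """X = Y (variable-variable) or X = value (variable-value)."""
    if isinstance(b, Var):
        ra, rb = a.find(), b.find()
        if ra is rb:
            return
        if ra.value is not None and rb.value is not None and ra.value != rb.value:
            raise ValueError("incompatible bindings")
        rb.parent = ra                                   # merge classes
        ra.value = ra.value if ra.value is not None else rb.value
    else:
        r = a.find()
        if r.value is not None and r.value != b:
            raise ValueError("variable is already bound")
        r.value = b

X, Y = Var(), Var()
bind(X, Y)           # X = Y : X and Y now denote the same store variable
bind(X, [1, 2, 3])   # X = [1 2 3] : both X and Y are bound to [1, 2, 3]
```

Attempting a further incompatible binding (say `bind(Y, [4])`) raises an error, which mirrors the "compatible partial values" rule in the summary slide.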
Domain-Independent Structured Duplicate Detection

Rong Zhou
Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304
rzhou@parc.com

Eric A. Hansen
Dept. of Computer Science and Eng.
Mississippi State University
Mississippi State, MS 39762
hansen@cse.msstate.edu

Abstract

The scalability of graph-search algorithms can be greatly extended by using external memory, such as disk, to store generated nodes. We consider structured duplicate detection, an approach to external-memory graph search that limits the number of slow disk I/O operations needed to access search nodes stored on disk by using an abstract representation of the graph to localize memory references. For graphs with sufficient locality, structured duplicate detection outperforms other approaches to external-memory graph search. We develop an automatic method for creating an abstract representation that reveals the local structure of a graph. We then integrate this approach into a domain-independent STRIPS planner and show that it dramatically improves scalability for a wide range of planning problems. The success of this approach strongly suggests that similar local structure can be found in many other graph-search problems.

Introduction

The scalability of graph-search algorithms such as breadth-first search, Dijkstra's algorithm, and A* is limited by the memory needed to store generated nodes in the Open and Closed lists, primarily for use in detecting duplicate nodes that represent the same state. Although depth-first search of a graph uses much less memory, its inability to detect duplicates leads to an exponential increase in time complexity that makes it ineffective for many graph-search problems.
Recent work shows that the scalability of breadth-first and best-first search algorithms can be significantly improved without sacrificing duplicate detection by storing only the frontier nodes, using special techniques to prevent regeneration of closed nodes, and recovering the solution path by a divide-and-conquer technique (Korf et al. 2005; Zhou & Hansen 2006). But even with this approach, the amount of memory needed to store the search frontier eventually exceeds available internal memory. This has led to growing interest in external-memory graph-search algorithms that use disk to store the nodes that are needed for duplicate detection. Because duplicate detection potentially requires comparing each newly-generated node to all stored nodes, it can lead to crippling disk I/O if the nodes stored on disk are accessed randomly. Two different approaches to performing duplicate detection efficiently in external-memory graph search have been proposed. Delayed duplicate detection (DDD) expands a set of nodes (e.g., the nodes on the frontier) without checking for duplicates, stores the generated nodes (including duplicates) in one or more disk files, and eventually removes duplicates by either sorting or hashing (Korf 2004; Edelkamp, Jabbar, & Schrödl 2004; Korf & Schultze 2005). The overhead for generating duplicates and removing them later is avoided in an approach called structured duplicate detection (SDD). It leverages local structure in a graph to partition stored nodes between internal memory and disk in such a way that duplicate detection can be performed immediately, during node expansion, and no duplicates are ever generated (Zhou & Hansen 2004). Although SDD is more efficient than DDD, it requires the search graph to have appropriate local structure. In this paper, we develop an automatic approach for uncovering the local structure in a graph that can be leveraged by SDD.
We integrate this approach into a domain-independent STRIPS planner, and show that, for a wide range of planning domains, it improves the scalability of the graph-search algorithm that solves the planning problems, often dramatically. We analyze the reasons for the success of this approach, and argue that similar local structure can be found in many other graph-search problems.

Memory-efficient graph search

Before reviewing approaches to external-memory graph search, we briefly review approaches to graph search that use internal memory as efficiently as possible. Since methods for duplicate detection in external-memory graph search are built on top of an underlying graph-search algorithm, the more efficiently the underlying search algorithm uses internal memory, the less it needs to access disk, and the more efficient the overall search. Frontier search, introduced by Korf et al. (2005), is a memory-efficient approach to graph search that stores only nodes on the search frontier, uses special techniques to prevent regeneration of closed nodes, and recovers the solution path by a divide-and-conquer technique. It is a general approach to reducing memory requirements in graph search that can be used with A*, Dijkstra's algorithm, breadth-first search, and other search algorithms.

Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

We previously showed that, when frontier search or one of its variants is adopted, breadth-first branch-and-bound search can be more memory-efficient than A* in solving search problems with unit edge costs (Zhou & Hansen 2006). The reason for this is that a breadth-first frontier is typically smaller than a best-first frontier. We introduced the phrase breadth-first heuristic search to refer to this memory-efficient approach to breadth-first branch-and-bound search and to a breadth-first iterative-deepening A* algorithm that is based on it.
We adopt breadth-first heuristic search as the underlying search algorithm for the external-memory STRIPS planner developed in this paper, although we note that SDD and the automatic approach to abstraction that we develop can be used with other search strategies too.

**External-memory graph search**

In this section, we review structured duplicate detection and compare it to other approaches to duplicate detection in external-memory graph search.

**Sorting-based delayed duplicate detection**

The first algorithms for external-memory graph search used delayed duplicate detection (Stern & Dill 1998; Munagala & Ranade 1999; Korf 2004; Edelkamp, Jabbar, & Schrödl 2004). In its original and simplest form, delayed duplicate detection takes a file of nodes on the search frontier (e.g., the nodes in the frontier layer of a breadth-first search graph), generates their successors and writes them to another file without checking for duplicates, sorts the file of generated nodes by the state representation so that all duplicate nodes are adjacent to each other, and scans the file to remove duplicates. The I/O complexity of this approach is dominated by the I/O complexity of external sorting, and experiments confirm that external sorting is its bottleneck.

**Structured duplicate detection**

Sorting a disk file in order to remove duplicates is only necessary if duplicate detection is delayed. Structured duplicate detection (Zhou & Hansen 2004) detects duplicates as soon as they are generated by leveraging the local structure of a search graph. As long as SDD is applicable, it has been proved to have better I/O complexity than DDD. Moreover, the more duplicates are generated by DDD in the course of searching a graph, the greater the relative advantage of SDD.
To identify the local structure in a graph that is needed for SDD, a state-space projection function is used to create an abstract state space in which each abstract state corresponds to a set of states in the original state space. Typically, the method of projection corresponds to a partial specification of the state. For example, if the state is defined by an assignment of values to state variables, an abstract state corresponds to an assignment of values to a subset of the state variables. In the abstract state-space graph created by the projection function, an abstract node \( y' \) is a successor of an abstract node \( y \) if and only if there exist two states \( x' \) and \( x \) in the original state space, such that (1) \( x' \) is a successor of \( x \), and (2) \( x' \) and \( x \) map to \( y' \) and \( y \), respectively, under the projection function. The abstract state-space graph captures local structure in the problem if the maximum number of successors of any abstract node is small relative to the total number of abstract nodes. We refer to this ratio as the locality of the graph. The duplicate-detection scope of a node in the original search graph is defined as all stored nodes that map to the successors of the abstract node that is the image of the node under the projection function. The importance of this concept is that the search algorithm only needs to check for duplicates in the duplicate-detection scope of the node being expanded. Given sufficient locality, this is a small fraction of all stored nodes. An external-memory graph search algorithm uses RAM to store nodes within the current duplicate-detection scope, and can use disk to store nodes that fall outside the duplicate detection scope, when RAM is full. 
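The bucket mechanism just described can be sketched in a few lines of Python (the projection, the toy abstract graph, and all names below are illustrative stand-ins, not the paper's implementation): the duplicate-detection scope of a node is the union of the buckets associated with the successors of its abstract image.

```python
# Sketch of SDD's duplicate-detection scope. Nodes are grouped into
# buckets keyed by their abstract image; only the buckets for the
# successors of a node's abstract image must be RAM-resident while
# that node is expanded.

from collections import defaultdict

def project(state):
    """Toy projection: abstract a (package, truck) state to the package."""
    return state[0]

# Abstract state-space graph: successors of each abstract node
# (including the node itself, since a successor may share the image).
abstract_succ = {
    'pkg_at_loc1':  {'pkg_at_loc1', 'pkg_in_truck1'},
    'pkg_in_truck1': {'pkg_in_truck1', 'pkg_at_loc1', 'pkg_at_loc2'},
    'pkg_at_loc2':  {'pkg_at_loc2', 'pkg_in_truck1'},
}

buckets = defaultdict(set)   # abstract node -> stored search nodes

def store(state):
    buckets[project(state)].add(state)

def duplicate_detection_scope(state):
    """All stored nodes that could duplicate a successor of state."""
    scope = set()
    for y in abstract_succ[project(state)]:
        scope |= buckets[y]
    return scope

store(('pkg_at_loc1', 'truck_at_loc1'))
store(('pkg_in_truck1', 'truck_at_loc2'))
store(('pkg_at_loc2', 'truck_at_loc2'))

scope = duplicate_detection_scope(('pkg_at_loc1', 'truck_at_loc2'))
```

Note that the bucket for `pkg_at_loc2` is outside the scope of the expanded node, so in an external-memory search it could stay on disk.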
Structured duplicate detection is designed to be used with a search algorithm that expands a set of nodes at a time, like breadth-first search, where the order in which nodes in the set are expanded can be adjusted to minimize disk I/O. Stored nodes are partitioned into “buckets,” where each bucket corresponds to a set of nodes in the original search graph that map to the same abstract node. Because nodes in the same bucket have the same duplicate-detection scope, expanding them at the same time means that no disk I/O needs to be performed while they are being expanded. When a new bucket of nodes is expanded, nodes stored on disk are swapped into RAM if they are part of the duplicate-detection scope of the new bucket, and buckets outside the current duplicate-detection scope can be flushed to disk when RAM is full. The general approach to minimizing disk I/O is to order node expansions so that changes of duplicate-detection scope occur as infrequently as possible, and, when they occur, they involve change of as few buckets as possible. That is, expanding buckets with overlapping duplicate-detection scopes consecutively also tends to minimize disk I/O. **Hash-based delayed duplicate detection** Hash-based delayed duplicate detection (Korf & Schultze 2005) is a more efficient form of delayed duplicate detection. To avoid the time complexity of sorting in DDD, it uses two orthogonal hash functions. During node expansion, successor nodes are written to different files based on the value of the first hash function, and all duplicates are mapped to the same file. Once a file of successor nodes has been generated, duplicates can be removed. To avoid the time complexity of sorting to remove duplicates, a second hash function is used that maps all duplicates to the same location of a hash table. 
Since the hash table corresponding to the second hash function must fit in internal memory, this approach requires some care in designing the hash functions (which are problem-specific) to achieve efficiency. As a further enhancement, the amount of disk space needed for hash-based DDD can be reduced by interleaving expansion and duplicate removal. Interestingly, though, this is only possible if the successor nodes written to a file are generated from a small subset of the files being expanded; that is, whether it is possible depends on the same kind of local structure leveraged by SDD. Korf and Schultze (2005) give some reasons for preferring hash-based DDD to SDD. Their observation that "hash-based DDD only reads and writes each node at most twice" applies to duplicate nodes of the same state, and since hash-based DDD allows many duplicates to be generated, it does not actually bound the number of reads and writes per state. Their observation that hash-based DDD does not require the same local structure as SDD is an important difference (even though we noticed that some of the techniques used by hash-based DDD to improve efficiency over sorting-based DDD exploit similar local structure). They also point out that SDD has a minimum memory requirement that corresponds to the largest duplicate-detection scope. However, the minimum memory requirement of SDD can usually be controlled by changing the granularity of the state-space projection function. Given appropriate local structure, a finer-grained state-space projection function creates smaller duplicate-detection scopes, and thus lower internal-memory requirements. In the end, whether a search graph has appropriate local structure for SDD is the crucial question, and we address this in the rest of the paper. An advantage of SDD worth mentioning is that it can be used to create external-memory pattern databases (Zhou & Hansen 2005).
Because a heuristic estimate is needed as soon as a node is generated, the delayed approach of DDD cannot be used for this. But the most important advantage of SDD is that, when applicable, it is always more efficient than hash-based DDD. The reason for this is that even though hash-based DDD significantly reduces the overhead of removing duplicates compared to sorting-based DDD, it still incurs overhead for generating and removing duplicates, and this overhead is completely avoided by SDD.

Domain-independent abstraction

We have seen that SDD is preferable to DDD when it is applicable. To show that it is widely applicable, we introduce the main contribution of this paper: an algorithm that automatically creates an abstract representation of the state space that captures the local structure needed for SDD. We show how to do this for a search graph that is implicitly represented in a STRIPS planning language. But the approach is general enough to apply to other graph-search problems. We begin by introducing some notation related to domain-independent STRIPS planning and state abstraction.

STRIPS planning and abstraction

A STRIPS planning problem is a tuple \( \langle A, O, I, G \rangle \), where \( A \) is a set of atoms, \( O \) is a set of grounded operators, and \( I \subseteq A \) and \( G \subseteq A \) describe the initial and goal situations, respectively. Each operator \( o \in O \) has a precondition list \( Pre(o) \subseteq A \), an add list \( Add(o) \subseteq A \), and a delete list \( Del(o) \subseteq A \). The STRIPS planning problem defines a state space \( \langle S, s_0, S_G, T \rangle \), where \( S \subseteq 2^A \) is the set of states, \( s_0 = I \) is the start state, \( S_G = \{ s \in S \mid G \subseteq s \} \) is the set of goal states, and \( T \) is a set of transitions that transform one state \( s \) into another state \( s' \).
In progression planning, the transition set is defined as \( T = \{ (s, s') \mid \exists o \in O,\; Pre(o) \subseteq s \,\wedge\, s' = (s \setminus Del(o)) \cup Add(o) \} \). In regression planning, which involves searching backwards from the goal to the start state, the transition set is defined as \( T = \{ (s, s') \mid \exists o \in O,\; Add(o) \cap s \neq \emptyset \,\wedge\, Del(o) \cap s = \emptyset \,\wedge\, s' = (s \setminus Add(o)) \cup Pre(o) \} \). A sequential plan is a sequence of operators that transforms the start state into one of the goal states. Since STRIPS operators have unit costs, an optimal sequential plan is also a shortest plan. A state-space abstraction of a planning problem is a projection of the state space into a smaller abstract state space. The projection is created by selecting a subset of the atoms \( P \subseteq A \). The projection function \( \phi_P : 2^A \rightarrow 2^P \) is defined as \( \phi_P(s) = s \cap P \), i.e., the projected state is the intersection of \( s \) and \( P \), and the projected or abstract state space is \( S_P = \{ \phi_P(s) \mid s \in S \} \). The projected operators are similarly defined by intersecting the subset of atoms \( P \) with the precondition, add, and delete lists of each operator.

Locality-preserving abstraction

The number of possible abstractions is huge: exponential in the number of atoms. For SDD, we need to find an abstraction that has appropriate local structure. Recall that we defined the locality of an abstract state-space graph as the maximum number of successors of any abstract node compared to the overall number of abstract nodes. The smaller this ratio, the more effective SDD can be in leveraging external memory. To see why this is so, first assume that the abstract nodes evenly partition the stored nodes in the original graph. Although this is never more than approximately true, it is a reasonable assumption to make for our analysis.
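The projection \( \phi_P \) and the projected operators can be sketched as follows (the logistics-style atoms and the operator below are hypothetical examples used only to illustrate the definition):

```python
# Sketch of the state-space projection used for abstraction: a state
# is a set of atoms, and phi_P keeps only the atoms in P; a STRIPS
# operator is projected by intersecting each of its lists with P.

def phi(state, P):
    """Project a state (set of atoms) onto the atom subset P."""
    return frozenset(state) & frozenset(P)

def project_op(op, P):
    """Project a STRIPS operator (pre, add, delete lists) onto P."""
    pre, add, dele = op
    P = frozenset(P)
    return (frozenset(pre) & P, frozenset(add) & P, frozenset(dele) & P)

P = {'at(pkg1,loc1)', 'in(pkg1,truck1)'}        # selected atoms
s = {'at(pkg1,loc1)', 'at(truck1,loc1)'}        # a concrete state

load = ({'at(pkg1,loc1)', 'at(truck1,loc1)'},   # preconditions
        {'in(pkg1,truck1)'},                    # add list
        {'at(pkg1,loc1)'})                      # delete list
```

Applying `phi(s, P)` drops the truck atom, leaving only the package's location, which is exactly the kind of partial state specification the text describes.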
Next, recall that the duplicate-detection scope of any node \( s \in S \) in the original search graph is defined as all stored nodes that map to the successors of the abstract node that is the image of the node under the projection function \( \phi_P \). The largest duplicate-detection scope determines the minimal memory requirement of SDD, since it is the largest number of nodes that need to be stored in RAM at one time in order to perform duplicate detection. From the assumption that the abstract nodes evenly partition the stored nodes in the original graph, it follows that the locality of the abstract state-space graph determines the largest duplicate-detection scope. To measure the degree of the local structure captured by a state-space projection function \( \phi_P \), we define the maximum duplicate-detection scope ratio, \( \delta \), as follows:

\[ \delta(P) = \max_{y \in S_P} \frac{|Successors(y)|}{|S_P|} \tag{1} \]

Essentially, \( \delta \) is the maximum fraction of abstract nodes in the abstract state-space graph that belong to a single duplicate-detection scope. The smaller the ratio, the smaller the percentage of nodes that need to be stored in RAM for duplicate detection, compared to the total number of stored nodes. Reducing this ratio allows SDD to leverage more of external memory to improve scalability. Thus, we propose searching for an abstraction that minimizes this ratio as a way of finding a good abstraction for SDD. In our experience, this ratio can almost always be reduced by increasing the resolution of the abstraction, that is, by adding atoms to the projection function. But adding atoms also increases the size of the abstract graph, so we impose an upper bound \( M \) on the size of the abstract state-space graph. To stay within this bound, we exploit state constraints: atoms can often be grouped into XOR groups, sets of atoms of which exactly one holds in any reachable state, and these XOR constraints limit the size of the abstract state space. The algorithm can also start with a very coarse abstraction and, if it does not yield enough locality, increase the granularity of the abstraction and resume the search. To see how this works, consider an abstraction created by selecting ten atoms. If there are no state constraints, the abstract graph has 2^{10} = 1024 nodes. But if the ten atoms consist of two XOR groups of five atoms each, the abstract graph has only 5^2 = 25 nodes.

Greedy abstraction algorithm

We use a greedy algorithm to try to find a projection function that minimizes Equation (1). Let \( P_1, P_2, \ldots, P_m \) be the XOR groups in a domain. Instead of attempting to minimize \( \delta(P) \) over all possible combinations of XOR groups, the greedy algorithm adds one XOR group to \( \mathcal{P} \) at a time. The algorithm starts with \( \mathcal{P} = \emptyset \). Then it tries every single XOR group \( P_i \in \{ P_1, P_2, \ldots, P_m \} \) by computing the corresponding \( \delta(P_i) \), finds the best XOR group \( P^*_i \) that minimizes \( \delta(P_i) \) over all \( i \in \{ 1, 2, \ldots, m \} \), and adds \( P^*_i \) to \( \mathcal{P} \). Suppose the best XOR group added is \( P_1 \). The greedy algorithm then picks the best remaining XOR group, the one that minimizes \( \delta(P_1 \cup P_i) \) over all \( i \neq 1 \), and adds it to \( \mathcal{P} \). The process repeats until either (a) the size of the abstract graph exceeds the upper bound \( M \), or (b) there is no XOR group left. Typically, we don't run out of XOR groups before exceeding the size bound. If we did, we could continue to add single atoms to try to improve the abstraction.

Example

We use an example in the logistics domain to illustrate the algorithm. The goal in logistics is to deliver a number of packages to their destinations by using trucks to move them within a city, or airplanes to move them between cities.
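The greedy selection loop can be sketched as follows. The `delta_of` and `size_of` callbacks below encode a toy model of the logistics analysis (with \( n \) package groups, \( 7^n \) abstract nodes and maximum out-degree \( 2n + 1 \)); they are illustrative stand-ins, not the real abstract-graph construction.

```python
# Sketch of the greedy abstraction search: repeatedly add the XOR
# group that minimizes delta, stopping when the abstract graph would
# exceed the size bound M or no groups remain.

def greedy_abstraction(groups, delta_of, size_of, M):
    """groups: candidate XOR groups; delta_of(selection) -> delta;
    size_of(selection) -> abstract-graph size; M: size bound."""
    selected = []
    remaining = list(groups)
    while remaining:
        best = min(remaining, key=lambda g: delta_of(selected + [g]))
        if size_of(selected + [best]) > M:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy model of the logistics example: each package group contributes
# 7 abstract locations, so n groups give 7**n nodes and maximum
# out-degree 2*n + 1.
def size_of(sel):
    return 7 ** len(sel)

def delta_of(sel):
    return (2 * len(sel) + 1) / size_of(sel) if sel else 1.0

groups = ['pkg1', 'pkg2', 'pkg3']
sel = greedy_abstraction(groups, delta_of, size_of, M=100)
```

With the bound `M=100`, the loop selects two package groups (an abstract graph of 49 nodes) and stops before the third, whose graph of 343 nodes would exceed the bound; the resulting delta of 5/49 matches the worked example below.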
Consider a simple example in which there are two packages \{ pkg1, pkg2 \}, two locations \{ loc1, loc2 \}, two airports \{ airport1, airport2 \}, two trucks \{ truck1, truck2 \}, and one airplane \{ plane1 \}. A domain constraint analysis discovers several candidate abstract state-space graphs, two of which are shown in Figure 1. An oval represents an abstract state and the label inside an oval shows the atom(s) in that abstract state. An arrow represents a projected operator that transforms one abstract state into another.

Figure 1: Abstract state-space graphs (with self loops omitted) for logistics. Panel (a) shows an abstract state-space graph based on the location of pkg1. Panel (b) shows another abstract state-space graph based on the location of truck1.

Figure 1(a) shows an abstract state-space graph created by using a projection function based on the 7 possible locations of pkg1. Note that all the atoms shown in Figure 1(a) belong to a single XOR group. Figure 1(b) shows another abstract state-space graph based on the location of truck1. For each candidate abstract graph, the algorithm computes its corresponding δ. For the abstract graph in Figure 1(a), δ = 3/7, since the maximum number of successors of any abstract node is 3 (note that self loops are omitted) and the graph has a total of 7 nodes. For the abstract graph in Figure 1(b), δ = 2/2 = 1, which means it has no locality at all. Thus, the abstract graph shown in Figure 1(a) is preferred, since it has a smaller δ value. Of course, a δ of 3/7 may still be too large. The algorithm can decrease δ by increasing the granularity of the abstract graph. It does so by adding another XOR group to \( \mathcal{P} \). Suppose it adds a new XOR group that is based on the location of pkg2. Since pkg2 also has 7 possible locations, the number of combinations for the locations of these two packages is \( 7 \times 7 = 49 \), the size of the new abstract graph.
But the maximum number of successors of any abstract node only increases from 3 to 5. Thus, the new δ is 5/49. In the more general case where there are \( n \) packages, δ can be made as small as \( \frac{1+2n}{7^n} \), a number close to zero even for small \( n \). **Implementation** An important implementation detail in external-memory graph search is how to represent pointers that are used to keep track of ancestor nodes along a least-cost path. One possibility is to use conventional pointers for referencing nodes stored in RAM and a separate type of pointer for referencing nodes stored on disk. A drawback of this approach is that every time a node is swapped between RAM and disk, its successors along the least-cost path (including those stored on disk) must change the type of their ancestor pointer, and this can introduce substantial time and space overhead. In our implementation of external-memory breadth-first heuristic search, a pointer to a node at a given depth contains two pieces of information: (a) the abstract node to which the node maps, and (b) the order in which the node appears in the bucket of nodes of the same depth that map to the same abstract node. It can be shown that these two pieces of information about a node do not change after it is generated by a breadth-first search algorithm. Thus, in our implementation, the same pointer is always valid, whether the node it points to is stored in RAM or on disk. **Results** We implemented our abstraction-finding algorithm in a domain-independent STRIPS planner that uses breadth-first heuristic search (BFHS) with SDD to find optimal sequential plans. We used regression planning. As an admissible heuristic, we used the max-pair heuristic (Haslum & Geffner 2000). We tested the planner in eight different domains from the biennial planning competition. Experiments were performed on an AMD Opteron 2.4 GHz processor with 4 GB of RAM and 1 MB of L2 cache.
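The pointer scheme described under Implementation can be illustrated with a small sketch. The class and method names below are hypothetical, not the paper's code; the point is only that a reference built from (depth, abstract node, position in bucket) is fixed at generation time and so never needs rewriting when a bucket moves between RAM and disk.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeRef:
    """Storage-independent ancestor pointer: the depth layer the node
    was generated in, the abstract node it maps to, and its position in
    that (depth, abstract node) bucket. None of these change after the
    node is generated by breadth-first search."""
    depth: int
    abstract_id: int
    position: int

class BucketStore:
    """Toy store of search nodes keyed by (depth, abstract node)."""
    def __init__(self):
        self.buckets = {}

    def add(self, depth, abstract_id, node):
        bucket = self.buckets.setdefault((depth, abstract_id), [])
        bucket.append(node)
        return NodeRef(depth, abstract_id, len(bucket) - 1)

    def get(self, ref):
        # In a real planner the bucket might first be paged in from
        # disk; the lookup key derived from the NodeRef is the same
        # whether the bucket lives in RAM or on disk.
        return self.buckets[(ref.depth, ref.abstract_id)][ref.position]

store = BucketStore()
ref = store.add(0, abstract_id=3, node="start")
print(store.get(ref))  # start
```

Because a `NodeRef` never encodes a memory address, swapping a bucket to disk requires no update to the references held by its successors, which is exactly the overhead the two-pointer-type alternative suffers from.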
The first eight rows of Table 1 compare the performance of BFHS with and without SDD. The eight problems are the largest that BFHS can solve without external memory in each of the eight planning domains. The slight difference in the total number of node expansions for the two algorithms is due to differences in tie breaking. The results show several interesting things. First, they show how much less RAM is needed when using SDD. Second, the ratio of the peak number of nodes stored on disk to the peak number of nodes stored in RAM shows how much improvement in scalability is possible. Although the ratio varies from domain to domain, there is improvement in every domain (and more can be achieved by increasing the resolution of the abstraction). The extra time taken by the external-memory algorithm includes the time taken by the abstraction-finding algorithm. The greedy algorithm took no more than 25 seconds for any of the eight problems, and less than a second for some. (Its running time depends on the number of XOR groups in each domain as well as the size of the abstract state-space graph.) The rest of the extra time is for disk I/O, and this is relatively modest. The last column shows the maximum duplicate-detection scope ratio, where the numerator is the maximum number of successors of any node in the abstract graph and the denominator is the total number of abstract nodes. The next-to-last column shows the number of atoms, $|\mathcal{P}|$, used in the projection function that creates the abstract graph. Comparing $2^{|\mathcal{P}|}$ to the number of nodes in the abstract graph shows how effective the state constraints are in limiting the size of the abstract graph. The last four rows of Table 1 show the performance of the planner in solving some particularly difficult problems that cannot be solved without external memory. As far as we know, this is the first time that optimal solutions have been found for these problems.
It is interesting to note that the ratio of the peak number of nodes stored on disk to the peak number of nodes stored in RAM is greater for these more difficult problems. This suggests that the effectiveness of SDD tends to increase with the size of the problem, an appealing property for a technique that is designed for solving large problems. **Conclusion** For search graphs with sufficient local structure, SDD outperforms other approaches to external-memory graph search. In this paper, we have addressed two important questions about the applicability of SDD: how common is this kind of local structure in AI graph-search problems, and is it possible to identify it in an automatic and domain-independent way? Previous work shows that the local structure needed for SDD is present in the sliding-tile puzzle, four-peg Towers of Hanoi, and multiple sequence alignment problems (Zhou & Hansen 2004; 2005). The empirical results presented in this paper show that it can be found in a wide range of STRIPS planning problems. Although it is no guarantee, this encourages us to believe that similar structure can also be found in many other graph-search problems. In previous work, the abstractions of the state space that capture this local structure were hand-crafted. An advantage of automating this process is that locality-preserving abstractions may not be obvious to a human designer, especially in domains with hundreds of variables and complex state constraints. In addition, automating this process makes it possible to search systematically for the best abstraction, which even a knowledgeable human designer may not find. Many further improvements of this approach are possible. In the near future, we will explore ways to improve the algorithm for identifying locality-preserving abstractions and test it on additional problems. We will also consider whether SDD can leverage other forms of abstraction.
So far, we have used state abstraction to partition the nodes of a graph in order to create an abstract state-space graph that reveals local structure. For problems where state abstraction (node partitioning) does not reveal enough local structure, operator abstraction (edge partitioning) may provide another way of creating an abstract state-space graph that reveals useful local structure. Given that most AI search graphs are generated from a relatively small set of rules, we believe they are likely to contain local structure of one form or another that can be leveraged for SDD. Finally, we believe the local structure used for SDD can eventually be leveraged in parallel graph search as well as external-memory graph search. Table 1: Comparison of breadth-first heuristic search with and without using domain-independent structured duplicate detection on STRIPS planning problems. Columns show solution length (Len), peak number of nodes stored in RAM (RAM), peak number of nodes stored on disk (Disk), number of node expansions (Exp), running time in CPU seconds (Secs), size of projection atom set (|P|), and maximum duplicate-detection scope ratio (δ). A ‘-’ symbol indicates that the algorithm cannot solve the problem without storing more than 64 million nodes in RAM. 
<table>
<thead>
<tr>
<th rowspan="2">Problem</th>
<th rowspan="2">Len</th>
<th colspan="3">No external memory</th>
<th colspan="6">External memory</th>
</tr>
<tr>
<th>RAM</th><th>Exp</th><th>Secs</th>
<th>RAM</th><th>Disk</th><th>Exp</th><th>Secs</th><th>|P|</th><th>δ</th>
</tr>
</thead>
<tbody>
<tr><td>logistics-6</td><td>25</td><td>151,960</td><td>338,916</td><td>2</td><td>1,725</td><td>165,418</td><td>339,112</td><td>89</td><td>28</td><td>9</td></tr>
<tr><td>satellite-6</td><td>20</td><td>2,364,696</td><td>3,483,817</td><td>106</td><td>51,584</td><td>2,315,307</td><td>3,484,031</td><td>359</td><td>33</td><td>31/1728</td></tr>
<tr><td>freecell-3</td><td>18</td><td>3,662,111</td><td>5,900,780</td><td>221</td><td>1,069,901</td><td>2,764,364</td><td>5,900,828</td><td>302</td><td>22</td><td>81/1764</td></tr>
<tr><td>elevator-12</td><td>40</td><td>4,068,538</td><td>12,088,717</td><td>256</td><td>172,797</td><td>3,898,692</td><td>12,829,226</td><td>349</td><td>33</td><td>25/1600</td></tr>
<tr><td>depots-7</td><td>21</td><td>10,766,726</td><td>16,794,797</td><td>270</td><td>2,662,253</td><td>9,524,314</td><td>16,801,412</td><td>342</td><td>42</td><td>4/2150</td></tr>
<tr><td>blocks-16</td><td>52</td><td>7,031,949</td><td>18,075,779</td><td>290</td><td>3,194,703</td><td>5,527,227</td><td>18,075,779</td><td>387</td><td>54</td><td>43/4906</td></tr>
<tr><td>driverlog-11</td><td>19</td><td>15,159,114</td><td>18,606,835</td><td>274</td><td>742,988</td><td>15,002,445</td><td>18,606,485</td><td>507</td><td>49</td><td>15/3848</td></tr>
<tr><td>gripper-8</td><td>53</td><td>15,709,123</td><td>81,516,471</td><td>606</td><td>619,157</td><td>15,165,293</td><td>81,516,471</td><td>1,192</td><td>29</td><td>37/4212</td></tr>
<tr><td>freecell-4</td><td>26</td><td>-</td><td>-</td><td>-</td><td>11,447,191</td><td>114,224,688</td><td>208,743,830</td><td>15,056</td><td>22</td><td>81/1764</td></tr>
<tr><td>elevator-15</td><td>46</td><td>-</td><td>-</td><td>-</td><td>1,540,657</td><td>126,194,100</td><td>430,804,933</td><td>17,087</td><td>42</td><td>32/7036</td></tr>
<tr><td>logistics-9</td><td>36</td><td>-</td><td>-</td><td>-</td><td>5,159,767</td><td>540,438,586</td><td>1,138,753,911</td><td>41,028</td><td>40</td><td>13/14641</td></tr>
<tr><td>driverlog-13</td><td>26</td><td>-</td><td>-</td><td>-</td><td>49,533,873</td><td>2,147,482,093</td><td>2,766,380,501</td><td>145,296</td><td>92</td><td>25/21814</td></tr>
</tbody>
</table>
References
Tutorial on Substructural Logic Hongseok Yang MICROS, KAIST Email: hyang@kaist.ac.kr August 21 2003 Resource-Sensitive Logic - Logic provides a language to express a property, and a proof system for reasoning about sentences in the language. - In computer science, we often need to express resource-sensitive properties and to reason about them. - I have enough money to buy both a computer and PS2. - This sentence can be split into two so that the first part ends with “sentence” and the second part starts with “can”. - If I obtain all the access capabilities of user $A$ in addition to what I have already, then I can change any files in this computer. - Substructural logic allows one to express and reason about such resource-sensitive properties. What is substructural logic? - Model-theoretically, substructural logic is a logic about resource-sensitive connectives. - Proof-theoretically, substructural logic is a proof system that does not have certain structural rules. Substructural logic is currently being studied in many areas of computing. - Separation Logic - Typed Assembly Language: Alias Type, Stack Typing - Logic for Hierarchical Storage - Logic for Processes: Ambient Logic - Query Language for Semi-structured Data - Resource-sensitive Logic Programming Outline 1. Model of Resources 2. Syntax and Semantics of Formulas 3. Proof Rules and Soundness 1. Model of Resources About what do we want to reason? What are resources? We model resources using a monoid \( M \). - A monoid \( M \) is a set \( M \) with a unit element \( e \in M \) and a binary operator \( * : M \times M \to M \) such that 1. \( e \) is the unit for \( * \): \[ \forall m \in M. \ e * m = m = m * e, \text{ and} \] 2. \( * \) is associative: \[ \forall m, m', m'' \in M. \ m * (m' * m'') = (m * m') * m''. \] - Each \( m \) in \( M \) represents a resource. - The unit element \( e \) denotes the “empty” resource. - The \( * \) operator combines resources. 
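The monoid laws can be spot-checked directly in code. A minimal sketch (the sample elements are arbitrary choices of mine): strings under concatenation, money under addition, and capability sets under union are all resource monoids in this sense.

```python
def check_monoid(sample, unit, op):
    """Spot-check the monoid laws (unit and associativity) on a
    finite sample of elements."""
    for m in sample:
        assert op(unit, m) == m == op(m, unit), "unit law fails"
    for a in sample:
        for b in sample:
            for c in sample:
                assert op(a, op(b, c)) == op(op(a, b), c), "associativity fails"
    return True

# Strings under concatenation, unit "".
assert check_monoid(("", "this ", "sentence"), "", lambda a, b: a + b)
# Money (natural numbers) under addition, unit 0.
assert check_monoid((0, 5, 7), 0, lambda a, b: a + b)
# Capability sets under union, unit the empty set.
caps = (frozenset(), frozenset({"access0"}), frozenset({"access0", "access1"}))
assert check_monoid(caps, frozenset(), lambda a, b: a | b)
```

A finite sample proves nothing in general, of course, but it quickly catches a candidate operator that is not actually a monoid.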
Examples \[ \begin{align*} \text{Sentences} & \overset{\text{def}}{=} \text{ASCII}^* \\ \text{Money} & \overset{\text{def}}{=} \{0, 1, 2, \ldots\} \\ \text{Capabilities} & \overset{\text{def}}{=} \{\text{access}(n) \mid n \in \text{Nats}\} \end{align*} \] - Sentence Model: \((\text{Sentences}, \epsilon, \cdot)\) - Money Model: \((\text{Money}, 0, +)\) - Capability Model: \((\mathcal{P}(\text{Capabilities}), \emptyset, \cup)\) Suppose that we have predicates \textit{correct}, and \textit{startWith}(s) and \textit{endWith}(s) for all strings \(s\) in the sentence model. \[ s' \models \textit{correct} : \text{s'} \text{ is grammatically correct.} \] \[ s' \models \textit{startWith}(s) : \text{s'} \text{ starts with } s. \] \[ s' \models \textit{endWith}(s) : \text{s'} \text{ ends with } s. \] Can you express the following? - This sentence can be split into two so that the first part ends with “sentence” and the second part starts with “can”. - Even if I substitute “semantically.” for “grammatically.” at the end of this sentence, the sentence still remains correct grammatically. 2. Syntax and Semantics How to express properties about resources? A formula $P$ is given by the following context free grammar: $$P ::= A \mid 0 \mid P \circ P \mid P \rightarrow P \mid P \leftarrow P$$ Semantics of Formulas For a given monoid \((M, e, \ast)\), each formula \(P\) expresses a set of resources \(m \in M\) that satisfy \(P\), denoted \(m \models P\). - \(m \models A\) iff \(m \in [A]\) - \(m \models 0\) iff \(m = e\) - \(m \models P \circ Q\) iff \(\exists m_1, m_2 \in M.\ s.t. 
\ m_1 \ast m_2 = m\) and \(m_1 \models P\) and \(m_2 \models Q\) - \(m \models P \rightarrow Q\) iff \(\forall m_1 \in M.\ if \ m_1 \models P,\ then \ m \ast m_1 \models Q\) - \(m \models Q \leftarrow P\) iff \(\forall m_1 \in M.\ if \ m_1 \models P,\ then \ m_1 \ast m \models Q\) Examples from Sentence Model \[ s' \models \text{correct} \quad \text{iff} \quad s' \text{ is grammatically correct} \] \[ s' \models \text{startWith}(s) \quad \text{iff} \quad s' \text{ starts with } s \] \[ s' \models \text{endWith}(s) \quad \text{iff} \quad s' \text{ ends with } s \] \[ s' \models \text{is}(s) \quad \text{iff} \quad s' \text{ is } s \] A sentence \( s \) can be split into two so that the first part ends with “sentence” and the second part starts with “can”. \[ \text{endWith(sentence)} \circ \text{startWith(can)}. \] Even if I substitute “semantically.” for “grammatically.” at the end of this sentence, the sentence still remains correct grammatically. \[ (\text{is(semantically.)} \rightarrow \text{correct}) \circ \text{is(grammatically.)} \] Examples from Money Model \[ m \models \text{canBuy(Computer)} \iff m \geq 1000000 \] \[ m \models \text{canBuy(PS)} \iff m \geq 320000 \] \[ m \models \text{lotto} \iff m = 100,000,000 \] \[ m \models \text{poor} \iff m \leq 200,000,010 \] I have enough money to buy both a computer and PS2. \[ \text{canBuy(PS)} \circ \text{canBuy(Computer)}. \] Even if I hit the jackpot twice in lotto, I am still poor. \[ \text{lotto} \rightarrow (\text{lotto} \rightarrow \text{poor}). \] Examples from Capability Model \[ m \models \text{capability}(A) \iff m = \{\text{access}(0)\} \] \[ m \models \text{crash} \iff \{\text{access}(0), \text{access}(1)\} \subseteq m \] \[ m \models \text{cheat} \iff \text{access}(0) \notin m \text{ and access}(2) \notin m \] If I obtain the capacities of the user \( A \) in addition to what I already have, I can crash the machine. 
\[ \text{capability}(A) \rightarrow \text{crash} \] I have capabilities enough to crash the machine; and when I throw away some of the capabilities for crashing the machine, I can cheat the system administrator. \[ \text{crash} \circ \text{cheat} \] 3. Proof Rules and Soundness How to reason about formulas, that is, properties about resources? A context, denoted $\Gamma$, is a finite sequence of formulas. A context “$P_1, P_2, \ldots, P_n$” means $P_1 \circ P_2 \circ \ldots \circ P_n$. A sequent is a pair of a context $\Gamma$ and a formula $P$, denoted $\Gamma \vdash P$. For a given monoid $(M, e, \ast)$, a sequent $\Gamma \vdash P$ means that for all $m \in M$, if $m \models \Gamma$, then $m \models P$. We will consider proof rules about sequents. Natural Deduction Our proof system is in the style of natural deduction. That is, the rules have one of the following three forms: 1. a proof rule for using an assumption, \[ \frac{\;}{P \vdash P}\ \text{Id} \] 2. introduction and elimination rules for each connective, and \[ \frac{\Gamma_1 \vdash P_1 \quad \ldots \quad \Gamma_n \vdash P_n}{\Gamma \vdash P \mathbin{?} P'} \text{ ?I} \quad \frac{\Gamma \vdash P \mathbin{?} P' \quad \Gamma_1 \vdash P_1 \quad \ldots \quad \Gamma_n \vdash P_n}{\Gamma'' \vdash P''} \text{ ?E} \] 3. structural rules. \[ \frac{\Gamma \vdash P}{\Gamma' \vdash P} \] A *proof* is a tree whose leaf nodes use the Id rule, and whose internal nodes use one of the other rules.
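When the monoid is finite, the sequent semantics just defined can be checked by brute force. The sketch below is illustrative (the tuple-based formula encoding and the atom names are my own, not from the slides): it evaluates \( \models \) directly from the semantic clauses, then tests \( \Gamma \vdash P \) by quantifying over the whole carrier, using a small capability-style monoid of sets under union.

```python
def models(m, f, interp, M, unit, op):
    """m |= f, following the semantic clauses for atoms, 0, o, ->, <-."""
    kind = f[0]
    if kind == "atom":                      # m |= A  iff  m in [A]
        return m in interp[f[1]]
    if kind == "unit":                      # m |= 0  iff  m = e
        return m == unit
    if kind == "o":                         # exists m1 * m2 = m
        return any(op(m1, m2) == m and
                   models(m1, f[1], interp, M, unit, op) and
                   models(m2, f[2], interp, M, unit, op)
                   for m1 in M for m2 in M)
    if kind == "->":                        # forall m1 |= P: m * m1 |= Q
        return all(models(op(m, m1), f[2], interp, M, unit, op)
                   for m1 in M if models(m1, f[1], interp, M, unit, op))
    if kind == "<-":                        # f = ("<-", Q, P): m1 * m |= Q
        return all(models(op(m1, m), f[1], interp, M, unit, op)
                   for m1 in M if models(m1, f[2], interp, M, unit, op))
    raise ValueError(kind)

def sequent_true(ctx, p, interp, M, unit, op):
    """Gamma |- P: every m satisfying P1 o ... o Pn satisfies P."""
    gamma = ("unit",)
    for q in reversed(ctx):
        gamma = ("o", q, gamma)
    return all(models(m, p, interp, M, unit, op)
               for m in M if models(m, gamma, interp, M, unit, op))

# Finite capability-style monoid: subsets of {a0, a1} under union.
M = [frozenset(s) for s in ([], ["a0"], ["a1"], ["a0", "a1"])]
unit, op = frozenset(), lambda a, b: a | b
interp = {"cheat": {m for m in M if "a0" not in m},
          "hasA": {m for m in M if "a0" in m}}
cheat, hasA = ("atom", "cheat"), ("atom", "hasA")

print(sequent_true([cheat], ("o", cheat, cheat), interp, M, unit, op))  # True
print(sequent_true([cheat], hasA, interp, M, unit, op))                 # False
```

The first check succeeds precisely because union is idempotent, foreshadowing the soundness of Contraction for this model; the second is a genuine counterexample (the empty set satisfies cheat but not hasA).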
**Introduction and Elimination Rules for →, ←, and ◦**

\[
\frac{\Gamma, P \vdash Q}{\Gamma \vdash P \rightarrow Q}\ {\rightarrow}I
\qquad
\frac{P, \Gamma \vdash Q}{\Gamma \vdash Q \leftarrow P}\ {\leftarrow}I
\qquad
\frac{\Gamma \vdash P \quad \Gamma' \vdash Q}{\Gamma, \Gamma' \vdash P \circ Q}\ {\circ}I
\]

\[
\frac{\Gamma \vdash P \rightarrow Q \quad \Gamma' \vdash P}{\Gamma, \Gamma' \vdash Q}\ {\rightarrow}E
\qquad
\frac{\Gamma \vdash Q \leftarrow P \quad \Gamma' \vdash P}{\Gamma', \Gamma \vdash Q}\ {\leftarrow}E
\qquad
\frac{\Gamma \vdash P \circ Q \quad \Gamma', P, Q, \Gamma'' \vdash R}{\Gamma', \Gamma, \Gamma'' \vdash R}\ {\circ}E
\]

**Example Derivation:**

\[
\frac{\dfrac{R \rightarrow (P \rightarrow Q) \vdash R \rightarrow (P \rightarrow Q) \qquad R \vdash R}{R \rightarrow (P \rightarrow Q),\ R \vdash P \rightarrow Q}\ {\rightarrow}E \qquad P \vdash P}{R \rightarrow (P \rightarrow Q),\ R,\ P \vdash Q}\ {\rightarrow}E
\]

\[
\frac{R \circ P \vdash R \circ P \qquad R \rightarrow (P \rightarrow Q),\ R,\ P \vdash Q}{R \rightarrow (P \rightarrow Q),\ R \circ P \vdash Q}\ {\circ}E
\]

Structural Rules and Substructural Logic

\[
\text{Permutation:}\ \frac{\Gamma, Q, P, \Gamma' \vdash R}{\Gamma, P, Q, \Gamma' \vdash R}
\qquad
\text{Weakening:}\ \frac{\Gamma, \Gamma' \vdash R}{\Gamma, P, \Gamma' \vdash R}
\qquad
\text{Contraction:}\ \frac{\Gamma, P, P, \Gamma' \vdash R}{\Gamma, P, \Gamma' \vdash R}
\]

We often do not include some structural rules, depending on the resource monoid of interest. When the structural rules are restricted in a proof system, the system is called \textit{substructural logic}.
<table> <thead> <tr> <th>Logic</th> <th>Permutation</th> <th>Weakening</th> <th>Contraction</th> </tr> </thead> <tbody> <tr> <td>Ordered Linear Logic</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <td>Linear Logic (BCI)</td> <td>Yes</td> <td>No</td> <td>No</td> </tr> <tr> <td>Relevant Logic</td> <td>Yes</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>Affine Logic</td> <td>Yes</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Classical Logic</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> </tbody> </table> A proof rule \[ \frac{\Gamma_1 \vdash P_1 \quad \ldots \quad \Gamma_n \vdash P_n}{\Gamma \vdash P} \] is called \textit{sound} for a monoid \(M\) if and only if, whenever all of the \(\Gamma_i \vdash P_i\) are true in \(M\), \(\Gamma \vdash P\) is also true in \(M\). \textbf{Theorem (Soundness)}: For all monoids \(M\), the identity rule and the rules for connectives are sound for \(M\). However, structural rules are sound only for particular monoids. The soundness of each structural rule depends on the properties of the resource-combining operator $\ast$ in $\mathcal{M} = (M, e, \ast)$. 1. If $\ast$ is commutative, then Permutation is sound for $\mathcal{M}$. 2. If $\ast$ is absorbing, then Weakening is sound for $\mathcal{M}$. $\ast$ is absorbing iff for all $m, m' \in M$, \[ m \ast m' = m' \ast m = m. \] 3. If $\ast$ is idempotent, then Contraction is sound for $\mathcal{M}$. $\ast$ is idempotent iff for all $m \in M$, $m \ast m$ is equal to $m$.
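The three semantic conditions above can be spot-checked mechanically on a finite sample of each monoid's carrier. The sketch below is illustrative (the sample sets are my own choices); it predicts which structural rules are sound for each model, matching the classification in the preceding table.

```python
def sound_structural_rules(sample, op):
    """Test the operator on a finite sample: commutative predicts
    Permutation, absorbing predicts Weakening, idempotent predicts
    Contraction. Returns (Permutation, Weakening, Contraction)."""
    commutative = all(op(a, b) == op(b, a) for a in sample for b in sample)
    absorbing = all(op(a, b) == a and op(b, a) == a
                    for a in sample for b in sample)
    idempotent = all(op(a, a) == a for a in sample)
    return (commutative, absorbing, idempotent)

sentences = ("", "a", "b")              # concatenation: none hold
money = (0, 1, 2)                       # addition: commutative only
caps = (frozenset(), frozenset({"x"}))  # union: commutative, idempotent
trivial = (0,)                          # one-element monoid: all hold

assert sound_structural_rules(sentences, lambda a, b: a + b) == (False, False, False)
assert sound_structural_rules(money, lambda a, b: a + b) == (True, False, False)
assert sound_structural_rules(caps, lambda a, b: a | b) == (True, False, True)
assert sound_structural_rules(trivial, lambda a, b: 0) == (True, True, True)
```

A negative answer on a finite sample is a genuine counterexample; a positive answer is only evidence, since the property must hold for the entire carrier.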
### Properties of $*$ for Each Model

<table> <thead> <tr> <th>Model</th> <th>Commutative</th> <th>Absorbing</th> <th>Idempotent</th> <th>Logic</th> </tr> </thead> <tbody> <tr> <td>Sentences Model</td> <td>No</td> <td>No</td> <td>No</td> <td>Ordered Linear Logic</td> </tr> <tr> <td>Money Model</td> <td>Yes</td> <td>No</td> <td>No</td> <td>Linear Logic</td> </tr> <tr> <td>Capability Model</td> <td>Yes</td> <td>No</td> <td>Yes</td> <td>Relevant Logic</td> </tr> <tr> <td>Trivial Model</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Classical Logic</td> </tr> </tbody> </table>

Example Proofs

\[
\frac{\text{startWith}(I) \vdash \text{startWith}(I) \qquad \text{endWith}(\text{happy}) \vdash \text{endWith}(\text{happy})}{\text{startWith}(I), \text{endWith}(\text{happy}) \vdash \text{startWith}(I) \circ \text{endWith}(\text{happy})}\ {\circ}I
\]

\[
\frac{\dfrac{\text{canBuy}(\text{Computer}) \vdash \text{canBuy}(\text{Computer}) \qquad \text{canBuy}(\text{PS}) \vdash \text{canBuy}(\text{PS})}{\text{canBuy}(\text{Computer}), \text{canBuy}(\text{PS}) \vdash \text{canBuy}(\text{Computer}) \circ \text{canBuy}(\text{PS})}\ {\circ}I}{\text{canBuy}(\text{PS}), \text{canBuy}(\text{Computer}) \vdash \text{canBuy}(\text{Computer}) \circ \text{canBuy}(\text{PS})}\ \text{Permutation}
\]

\[
\frac{\dfrac{\text{cheat} \vdash \text{cheat} \qquad \text{cheat} \vdash \text{cheat}}{\text{cheat}, \text{cheat} \vdash \text{cheat} \circ \text{cheat}}\ {\circ}I}{\text{cheat} \vdash \text{cheat} \circ \text{cheat}}\ \text{Contraction}
\]

But note that cheat, crash \vdash cheat \circ cheat does not hold. Substructural logic is a proof system where structural rules are used in a restricted way. A connective in substructural logic is resource-sensitive: $\circ$ denotes a resource splitting, and $\rightarrow$ the addition of resources. The interpretation of substructural logic by a monoid in these slides is a special case of a more general model.
For the more general model theory, look at - “Introduction to Substructural Logics” by Greg Restall - “Possible Worlds and Resources: Semantics of BI” by Pym, O’Hearn, and Yang
How to Build Various Common Applications and Libraries on Beagle2

While many applications and libraries are already provided by the Beagle support staff, you may find you need to compile these yourself with local modifications or requirements. Below we describe how we compiled the various applications we provide. This document assumes you are already familiar with how to compile code on Beagle. If you are not, please make sure you understand the Beagle programming environment before moving on.

- BAMTOOLS
- BLAT
- Charm++
- CGA tools
- Cufflinks
- Gromacs
  - Single precision, No MPI
  - Single precision, With MPI
  - Double precision, No MPI
  - Double precision, With MPI
  - Gromacs 4.5.5 single precision with MPI: custom installation
- Hoard lib
- Sun java
- Swift
- GSL (GNU Scientific Library)
- LAGAN
  - GNU Compiler
  - PGI Compiler
- LAMMPS
  - GNU Compiler
- METIS
- NAMD
  - GNU Compiler
  - PGI Compiler
- NCBI Blast+
  - GNU Compiler
- Netlib CBLAS
- PCRE
  - GNU compiler
- Python
  - Biopython
  - NumPy
  - PIL
  - SciPy
- R
  - GNU Compiler
- SuiteSparse
- SGA -- String Graph Assembler
  - GNU Compiler
- SKAT
- SWIG
  - GNU Compiler
- Valgrind
  - GNU Compiler
- VARIANT TOOLS

BAMTOOLS

Currently only compiles with the GNU compiler.

```
git clone git://github.com/pezmaster31/bamtools.git
module switch PrgEnv-cray PrgEnv-gnu
module load cmake
mkdir build
cd build
XTPE_LINK_TYPE=dynamic CC="cc -dynamic" CXX="CC -dynamic" BamTools_SOURCE_DIR="/lustre/beagle/<bamtools_src>" cmake ..
make ``` Edit makefile to add "-dynamic" to the the last step, building the binaries: ``` make install DESTDIR=<you_dest_dir> ``` Move following libraries into the path, as the installation doesn't seem to work properly: ```bash #move files to the library path of bamtools (for some reason they did not get copied) cp /lustre/beagle2/denovo/src/bamtools/lib/libbamtools-utils.so.2.1.0 <your_dest_dir>/lib cp /lustre/beagle2/denovo/src/bamtools/lib/libjsoncpp.so.1.0.0 <your_dest_dir>/lib/ ``` **BLAT** Currently only compiles with GNU compiler. Setup your environment: ```bash module switch PrgEnv-cray PrgEnv-gnu ``` Apply the patch file and compile: ```bash export BLAT_INSTALL=/soft/blat/gnu/3.4/bin mkdir -p ${BLAT_INSTALL} export MACHTYPE=x86_64 make BINDIR=${BLAT_INSTALL} CC=cc CFLAGS=-O3 ``` **Charm++** Charm++ currently only compiles with the PGI and GNU compilers and there is no difference in how you compile with either. Below is an example using the PGI compiler. NOTE: for the time being we do not have PGI compiler, it might become available in the future. Setup your environment: ```bash module load rca petsc ``` Configure and compile: ```bash env MPICXX=CC MPICC=cc ./build \ charm++ \ mpi-crayxt \ --no-build-shared \ --with-production \ -j4 \ -optimize \ -DCMK_OPTIMIZE=1 env MPICXX=CC MPICC=cc ./build \ AMPI \ mpi-crayxt \ --no-build-shared \ --with-production \ -j4 \ -optimize \ -DCMK_OPTIMIZE=1 env MPICXX=CC MPICC=cc ./build \ FEM \ mpi-crayxt \ --no-build-shared \ --with-production \ -j4 \ -optimize \ -DCMK_OPTIMIZE=1 cd tests make ``` *Verify there are no errors when compiling the tests* CGA tools NOTE: for the time being we do not have PGI compiler, it might become available in the future. 
```bash
vers=1.7.1
subvers=${vers}.5
wget http://sourceforge.net/projects/cgatools/files/${vers}/cgatools-${subvers}-source.tar.gz/download
tar -xvf cgatools-${subvers}-source.tar.gz
module load cmake
module swap PrgEnv-cray PrgEnv-gnu
module load boost
cd cgatools-${subvers}-source
cd build
# WARNING: check the current version and make sure that the
# version of gcc used is compatible with cgatools and with the libraries
# it is using (e.g., boost)
module swap PrgEnv-gnu/4.1.40 PrgEnv-gnu/3.1.49A
module swap gcc/4.7.2 gcc/4.6.1
module swap xt-mpt xt-mpich2
module swap xt-mpich2/5.6.1 xt-mpich2/5.3.2
cmake -DBOOST_ROOT=/$BOOST_ROOT \
  -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc \
  -DCMAKE_INSTALL_PREFIX=/soft/cgatools/gnu/${vers} \
  -DCMAKE_BUILD_TYPE=Release \
  /lustre/beagle2/genomicsTools/soft/src/cgatools-${subvers}-source
make -j8
cd /soft/modulefiles/applications/cgatools
ln -s ../../templates/cgatools $vers
```

Cufflinks

```bash
wget http://cufflinks.cbcb.umd.edu/downloads/cufflinks-1.3.0.tar.gz
tar -xvf cufflinks-1.3.0.tar.gz
module load python
module load boost
module load samtools
```

(Needed to create an include file layout for samtools: create a subdirectory `bam` there and copy all the `.h` files into it.)

```bash
CXX=CC cc=cc PYTHON=`which python` ./configure --prefix=/soft/cufflinks/gnu/1.3.0 --with-boost=$BOOST_ROOT --with-bam=$SAM_HOME
for x in *.cpp *.h; do sed 's/foreach/for_each/' $x > x; mv x $x; done
```

Replace line 24 of `common.h`:

```
//#include <boost/foreach.hpp>
```

with

```
#include <boost/foreach.hpp>
```

Then:

```bash
# because boost was made with 4.6.1
module swap gcc/4.6.1 gcc/4.5.2
make
make install
```

Gromacs

Gromacs can be built 4 different ways: single precision no MPI, single precision with MPI, double precision no MPI, double precision with MPI. There is no difference in how you compile Gromacs with the different compilers. Below are examples using the PGI compiler.
You can find more details here: [http://www.gromacs.org/Documentation/Installation_Instructions#Details_for_building_the_FFTW_prerequisite](http://www.gromacs.org/Documentation/Installation_Instructions#Details_for_building_the_FFTW_prerequisite)

Before building Gromacs, you need to setup your environment:

**NOTE:** for the time being we do not have the PGI compiler; it might become available in the future.

```
module unload dmapp
module load fftw gsl
```

### Single precision, No MPI

```
./configure \
  CC=cc \
  CXX=CC \
  F77=ftn \
  CFLAGS="$FFTW_INCLUDE_OPTS $GSL_INCLUDE_OPTS" \
  --prefix=/soft/gromacs/pgi/4.5.3 \
  --disable-shared \
  --enable-fortran \
  --with-fft=fftw3 \
  --with-gsl \
  --disable-threads \
  --enable-all-static \
  --enable-static \
  --disable-mpi \
  --with-external-blas \
  --with-external-lapack \
  --without-xml
make
make install
```

### Single precision, With MPI

```
./configure \
  CC=cc \
  CXX=CC \
  F77=ftn \
  CFLAGS="$FFTW_INCLUDE_OPTS $GSL_INCLUDE_OPTS" \
  --prefix=/soft/gromacs/pgi/4.5.3 \
  --disable-shared \
  --enable-fortran \
  --with-fft=fftw3 \
  --with-gsl \
  --disable-threads \
  --enable-all-static \
  --enable-static \
  --enable-mpi \
  --program-suffix="_mpi" \
  --with-external-blas \
  --with-external-lapack \
  --without-xml
make
make install
```

### Double precision, No MPI

```
./configure \
  CC=cc \
  CXX=CC \
  F77=ftn \
  CFLAGS="${FFTW_INCLUDE_OPTS} ${GSL_INCLUDE_OPTS}" \
  --prefix=/soft/gromacs/pgi/4.5.3 \
  --disable-shared \
  --enable-fortran \
  --with-fft=fftw3 \
  --with-gsl \
  --disable-float \
  --disable-threads \
  --enable-all-static \
  --enable-static \
  --disable-mpi \
  --with-external-blas \
  --with-external-lapack \
  --without-xml
make
make install
```

### Double precision, With MPI

```
./configure \
  CC=cc \
  CXX=CC \
  F77=ftn \
  CFLAGS="${FFTW_INCLUDE_OPTS} ${GSL_INCLUDE_OPTS}" \
  --prefix=/soft/gromacs/pgi/4.5.3 \
  --disable-shared \
  --enable-fortran \
  --with-fft=fftw3 \
  --with-gsl \
  --disable-float \
  --disable-threads \
  --enable-all-static \
  --enable-static \
  --enable-mpi \
  --program-suffix="_mpi_d" \
  --with-external-blas \
  --with-external-lapack \
  --without-xml
make
make install
```

### Gromacs 4.5.5 single precision with MPI: custom installation

Before you start installation, make sure to have the following in your module list:

```
Currently Loaded Modulefiles:
  1) modules/3.2.6.7
  2) nodestat/2.2-1.0502.53712.3.109.gem
  3) sdb/1.0-1.0502.55976.5.27.gem
  4) alps/5.2.1-2.0502.9072.13.1.gem
  5) lustre-cray_gem_s/2.5_3.0.101_0.31_1.1_0.0502.8394.10.1-1.0502.17198.8.50
  6) udreg/2.3.2-1.0502.9275.1.25.gem
  7) ugni/5.0-1.0502.9685.4.24.gem
  8) gnl-headers/3.0-1.0502.9684.5.2.gem
  9) dmapp/7.0.1-1.0502.9501.5.211.gem
 10) xpmem/0.1-2.0502.55507.3.2.gem
 11) hss-lm/7.2.0
 12) Base-opts/1.0.2-1.0502.53325.1.2.gem
 13) craype-network-gemini
 14) craype/2.2.1
 15) craype-abudhabi
 16) ci
 17) moab/6.1.1
 18) torque/2.5.7
 19) cmake/2.8.4
 20) gcc/4.9.1
 21) totalview-support/1.2.0.3
 22) totalview/8.14.1
 23) cray-libsci/13.0.1
 24) pmi/5.0.6-1.0000.10439.140.3.gem
 25) atp/1.7.5
 26) PrgEnv-gnu/5.2.40
 27) cray-mpich/7.0.5
 28) fftw/3.3.0.4
 29) gsl/1.15
```

Then configure and build:

```
export CPPFLAGS="-I/opt/fftw/3.3.0.4/abudhabi/include"
export CC='cc'
export CXX='CC'
export F77='ftn'
./configure \
  --prefix=/lustre/beagle2/<your_installation_directory> \
  --disable-shared \
  --enable-fortran \
  --with-fft=fftw3 \
  --with-gsl \
  --disable-threads \
  --enable-all-static \
  --enable-static \
  --enable-mpi \
  --program-suffix="_mpi" \
  --with-external-blas \
  --with-external-lapack \
  --without-xml
make
make install
```

To run Gromacs 4.5.5 on Beagle2, put this in your PBS script:

```bash
#!/bin/bash
#PBS -N <job_name>
#PBS -j oe
#PBS -l walltime=48:00:00
#PBS -l mppwidth=64  # any multiple of 32 (32 cores/node); must match the aprun -n value used for mdrun (64 here)
source /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
module load fftw/3.3.0.4
module load gromacs/4.5.5-single
aprun -n 1 -N 1 /soft/gromacs-gnu/4.5.5-single/bin/grompp -f tip3pBox60_equil.mdp -c tip3pBox60_preequil.gro -n tip3pBox60.ndx -p tip3pBox60.top -o tip3pBox60_equil.trr
aprun -n 64 /soft/gromacs-gnu/4.5.5-single/bin/mdrun -s tip3pBox60_equil.tpr -o tip3pBox60_equil.trr -x tip3pBox60_equil.xtc -e tip3pBox60_equil.edr -g tip3pBox60_equil.log
```

To learn more about Gromacs installation, please visit this page: http://www.gromacs.org/Documentation/Installation_Instructions_4.5

To learn more about how to run Gromacs, please visit this page: http://www.gromacs.org/@api/deki/files/213/=gromacs_parallelization_acceleration.pdf

Hoard lib

```
http://www.cs.umass.edu/~emery/hoard/hoard-3.8/source/hoard-38.tar.gz
module swap PrgEnv-cray PrgEnv-gnu
```

Replace all `g++` with `CC` in the makefile. Create a `lib` directory and move the `.so` files there; create an `include` directory and move the `.h` files there.

Sun java

This is an alternate version of java, other than the IBM java that is installed by default on Beagle.

Setup your environment:

```
module load sun-java
```

Swift

The Swift parallel scripting framework.

Setup your environment:

```
module load swift
```

GSL (GNU Scientific Library)

There is one slight difference in how you compile GSL with the PGI and Cray compilers versus the GNU compiler. With the PGI and Cray compilers, you need to do the following immediately after running configure:

```
sed 's;#define HAVE_INLINE 1;#undef HAVE_INLINE;g' -i config.h
```

With GNU this is not required. Below is an example using the PGI compiler.

NOTE: for the time being we do not have the PGI compiler; it might become available in the future.

```
export GSL_VERSION=1.14
./configure \
  CC=cc \
  --prefix=/soft/gsl/pgi/${GSL_VERSION} \
  --disable-shared
sed 's;#define HAVE_INLINE 1;#undef HAVE_INLINE;g' -i config.h
make
make install
```

LAGAN

LAGAN compiles with both PGI and GNU compilers.

NOTE: for the time being we do not have the PGI compiler; it might become available in the future.
GNU Compiler

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
```

Unzip the source and move the resulting directory to the installation destination:

```
mkdir -p /soft/lagan-gnu
cd /soft/lagan-gnu
tar zxvf lagan20.tar.gz
mv lagan20 2.0
```

Apply the `src/Makefile.patch` and `src/local/Makefile.patch` and compile:

```
rm -f prolagan
make
```

PGI Compiler

NOTE: for the time being we do not have the PGI compiler; it might become available in the future.

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
```

Unzip the source and move the resulting directory to the installation destination:

```
mkdir -p /soft/lagan/pgi
cd /soft/lagan/pgi
tar zxvf lagan20.tar.gz
mv lagan20 2.0
```

Apply the `src/Makefile.patch` and `src/local/Makefile.patch` and compile:

```
rm -f prolagan
make
```

LAMMPS

LAMMPS currently only compiles with the GNU compiler.

GNU Compiler

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
module load fftw/2.1.5.2
```

- Install the `atc` Makefile in `lib/atc`
- Install the `meam` Makefile in `lib/meam`
- Install the `poems` Makefile in `lib/poems`
- Install the `reax` Makefile in `lib/reax`
- Install the `Beagle` Makefile in `src/MAKE`

Compile LAMMPS:

```bash
cd src
make yes-all
make no-gpu
```

```bash
cd ../lib/atc
make -f Makefile-atc.beagle
```

```bash
cd ../meam
make -f Makefile-meam.beagle
```

```bash
cd ../poems
make -f Makefile-poems.beagle
```

```bash
cd ../reax
make -f Makefile-reax.beagle
```

```bash
cd ../../src
sed -i -e 's/fftw.h/dfftw.h/g' fft3d.h
make beagle
```

**METIS**

Download either the [PGI Makefile](http://example.com/pgi.make) or [GNU Makefile](http://example.com/gnu.make). The following example assumes PGI:

**NOTE:** for the time being we do not have the PGI compiler; it might become available in the future.
```bash
rm -f Makefile.in; ln -sf Makefile-pgi.in Makefile.in
make
mkdir -p /soft/metis/pgi/4.0.1/bin
mkdir -p /soft/metis/pgi/4.0.1/lib
cp graphchk mesh2dual oemetis partdmesh pmetis kmetis mesh2nodal onmetis partnmesh /soft/metis/pgi/4.0.1/bin/
cp libmetis.a /soft/metis/pgi/4.0.1/lib/
```

**NAMD**

**GNU Compiler**

Setup your environment:

```bash
module switch PrgEnv-cray PrgEnv-gnu
module load rca petsc charm++ tcl fftw/2.1.5.2
```

Edit `Make.charm` and make sure `CHARMBASE` points to the install of Charm++:

```bash
CHARMBASE = $(CHARM_DIR)
```

Edit `arch/CRAY-XT.fftw` so it looks like:

```bash
FFTDIR=$(FFTW_DIR)
FFTINCL=$(FFTW_INCLUDE_OPTS)
FFTLIB=$(FFTW_POST_LINK_OPTS) -lfftw -ldfftw
FFTFLAGS=-DNAMD_FFTW
FFT=$(FFTINCL) $(FFTFLAGS)
```

Edit `arch/CRAY-XT.tcl` so it looks like:

```
TCLDIR=$(TCL_DIR)
TCLINCL=$(TCL_INCLUDE_OPTS)
TCLLIB=$(TCL_POST_LINK_OPTS) -ldl
TCLFLAGS=-DNAMD_TCL
TCL=$(TCLINCL) $(TCLFLAGS)
```

Edit `arch/CRAY-XT-g++.arch` so it looks like:

```
NAMD_ARCH = CRAY-XT
CHARMARCH = mpi-crayxt
# The GNU compilers produce significantly faster NAMD binaries than PGI.
# You must run the following to switch CC/cc to the GNU compiler
# environment before building either Charm++ or NAMD:
#   module swap PrgEnv-cray PrgEnv-gnu
# Users of psfgen might also need to do 'module remove acml' in order for
# the psfgen compilation to succeed.
CXX = CC -DNANO_SOCKET -DDUMMY_VMDSOCK -DNO_HOSTNAME -DNANO_NO_STDOUT_FLUSH -DNANO_NO_O_EXCL
CXXOPTS = -O3 -ffast-math -fexpensive-optimizations -fomit-frame-pointer
CC = cc
COPTS = -O3 -ffast-math -fexpensive-optimizations -fomit-frame-pointer
```

Compile NAMD:

```
./config CRAY-XT-g++.arch
cd CRAY-XT-g++
make -j4
```

PGI Compiler

NOTE: for the time being we do not have the PGI compiler; it might become available in the future.

Setup your environment:

```
module load rca petsc charm++ tcl fftw/2.1.5.2
```

Edit `Make.charm` and make sure `CHARMBASE` points to the install of Charm++:

```
CHARMBASE = $(CHARM_DIR)
```

Edit `arch/CRAY-XT.fftw` so it looks like:

```
FFTDIR=$(FFTW_DIR)
FFTINCL=$(FFTW_INCLUDE_OPTS)
FFTLIB=$(FFTW_POST_LINK_OPTS) -lfftw -ldfftw
FFTFLAGS=-DNAMD_FFTW
FFT=$(FFTINCL) $(FFTFLAGS)
```

Edit `arch/CRAY-XT.tcl` so it looks like:

```
TCLDIR=$(TCL_DIR)
TCLINCL=$(TCL_INCLUDE_OPTS)
TCLLIB=$(TCL_POST_LINK_OPTS) -ldl
TCLFLAGS=-DNAMD_TCL
TCL=$(TCLINCL) $(TCLFLAGS)
```

Edit `arch/CRAY-XT-pgcc.arch` so it looks like:

```
NAMD_ARCH = CRAY-XT
CHARMARCH = mpi-crayxt
# The GNU compilers produce significantly faster NAMD binaries than PGI.
CXX = CC
CXXOPTS = -O
CXXNOALIASOPTS = -fast -Mnodepchk -Msafeptr=arg,global,local,static -Minfo=all -Mneginfo=loop
CC = cc
COPTS = -fast
```

Compile NAMD:

```
./config CRAY-XT-pgcc.arch
cd CRAY-XT-pgcc
make -j4
```

NCBI Blast+

We currently only have instructions for compiling Blast+ with the GNU compiler.

GNU Compiler

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
```

Configure and compile Blast+:

```
./configure \
  CC=cc \
  CXX=CC \
  --prefix=/soft/ncbi-blast+/gnu/2.2.24 \
  --without-dll \
  --with-static \
  --with-static-exe \
  --with-lfs \
  --with-mt \
  --with-64 \
  --without-debug
make
make install
```

Netlib CBLAS

You can only compile CBLAS using the PGI or GNU compilers. There is no difference in how you compile Netlib CBLAS, only which Makefile you link `Makefile.in` to. We provide the PGI Makefile and the GNU Makefile. The examples below use the PGI compiler.

NOTE: for the time being we do not have the PGI compiler; it might become available in the future.

Setup your environment:

```
module load acml
```

Compile and install CBLAS:

```
rm -f Makefile.in
ln -sf cblas-Makefile.pgi Makefile.in
make all
mkdir -p /soft/cblas/pgi/3.0/lib
cp lib/libcblas.a /soft/cblas/pgi/3.0/lib
```

PCRE

GNU compiler

We compile PCRE for both shared library support (for SWIG) and for static library support (for Octave).
Because compilers default to static libraries, we compile that last.

Setup your environment:

```bash
module switch PrgEnv-cray PrgEnv-gnu
export XTPE_LINK_TYPE=dynamic
```

Configure, build and install the shared library version:

```bash
export XTPE_LINK_TYPE=native
./configure \
  CC="cc -dynamic" \
  CXX="CC -dynamic" \
  --prefix=/soft/pcre/gnu/8.12 \
  --enable-shared \
  --disable-static
make
make install
```

Configure, build and install the static library version:

```bash
./configure \
  CC=cc \
  CXX=CC \
  --prefix=/soft/pcre/gnu/8.12 \
  --disable-shared \
  --enable-static
make
make install
```

Python

It only makes sense to compile Python dynamically, since in almost all cases you will want to load Python modules dynamically. Because of this, you have to use Python in DSL mode. We also only compile Python with the GNU compiler.

Setup your environment:

```bash
module switch PrgEnv-cray PrgEnv-gnu
module unload xt-mpt psi
```

Configure and compile:

```bash
./configure \
  CC="cc -dynamic" \
  CXX="CC -dynamic" \
  F77="ftn -dynamic" \
  XTPE_LINK_TYPE=dynamic \
  --prefix=/soft/python/2.7.1 \
  --enable-shared
make
make install
```

Biopython

Setup your environment:

```bash
module load python
export XTPE_LINK_TYPE=dynamic
export BIOPYTHON_ROOT=/soft/python/modules/biopython/1.56
```

Build Biopython:

```
mkdir -p ${BIOPYTHON_ROOT}/lib/python2.7/site-packages
env LDFLAGS="-L/soft/python/2.7/2.7.1/lib" python setup.py build
PYTHONPATH=${PYTHONPATH}:${BIOPYTHON_ROOT}/lib/python2.7/site-packages python setup.py install --prefix ${BIOPYTHON_ROOT}
```

NumPy

```
module swap PrgEnv-cray PrgEnv-gnu
module load python/2.7.3-vanilla
module load acml
module load metis
module load SuiteSparse
module load swig
export XTPE_LINK_TYPE=dynamic
export NUMPY_ROOT=/soft/python/2.7/2.7.3-vanilla/modules/numpy/1.7.0
export BLAS=""
export LAPACK=""
export ATLAS=""
export LDFLAGS="-L/soft/python/2.7/2.7.3-vanilla/python/lib/"
python setup.py build --compiler=unix --fcompiler=gnu95
PYTHONPATH=${PYTHONPATH}:${NUMPY_ROOT}/lib/python2.7/site-packages python setup.py install --prefix ${NUMPY_ROOT}
```

PIL

Setup your environment:

```
module load python
export XTPE_LINK_TYPE=dynamic
export IMAGING_ROOT=/soft/python/modules/imaging/1.1.7
```

Apply the patch file and compile:

```
mkdir -p ${IMAGING_ROOT}/lib/python2.7/site-packages
env LDFLAGS="-L/soft/python/2.7/2.7.1/lib" python setup.py build
PYTHONPATH=${PYTHONPATH}:${IMAGING_ROOT}/lib/python2.7/site-packages python setup.py install --prefix ${IMAGING_ROOT}
```

SciPy

```
module load pil
export XTPE_LINK_TYPE=dynamic
export SCIPY_ROOT=/soft/python/2.7/2.7.3-vanilla/modules/scipy/0.12.0
mkdir -p ${SCIPY_ROOT}/lib/python2.7/site-packages
export BLAS="/opt/cray/libsci/12.0.00/GNU/47/mc12/lib/libsci_gnu.a"
export LAPACK="/opt/cray/libsci/12.0.00/GNU/47/mc12/lib/libsci_gnu.a"
export ATLAS="/opt/cray/libsci/12.0.00/GNU/47/mc12/lib/libsci_gnu.a"
python setup.py build --compiler=unix --fcompiler=gnu95
```

Repeat the build as needed, adding `-shared` whenever the linker is invoked incorrectly. TODO: find a way to tell the installer to use gcc to link the Fortran libraries.

```
export PYTHONPATH=${PYTHONPATH}:${SCIPY_ROOT}/lib/python2.7/site-packages
python setup.py install --prefix ${SCIPY_ROOT}
```

R

It only makes sense to compile R dynamically, since in almost all cases you will want to load R modules dynamically. Because of this, you have to use R in DSL or CCM mode.

GNU Compiler

This has been successfully tested with R up to version 2.13.1 (not all intermediate releases have been tested).

Setup your environment. If using PrgEnv-gnu/3.1.49A or earlier:

```
module switch PrgEnv-cray PrgEnv-gnu/3.1.49A
```

This might be necessary because some of the later versions have issues with shared libraries.
Otherwise:

```
module switch PrgEnv-cray PrgEnv-gnu
```

is sufficient, because cray-mpich is the current default (unless you changed it).

Configure and compile:

```bash
./configure \
  CC="cc -dynamic" \
  CXX="CC -dynamic" \
  FC="ftn -dynamic" \
  F77="ftn -dynamic" \
  XTPE_LINK_TYPE="dynamic" \
  CPICFLAGS="-fPIC" \
  CXXPICFLAGS="-fPIC" \
  FPICFLAGS="-fPIC" \
  FCPICFLAGS="-fPIC" \
  SHLIB_LDFLAGS="-dynamic" \
  SHLIB_CXXLDFLAGS="-dynamic" \
  --prefix=/soft/R/gnu/2.12.1 \
  --enable-R-shlib \
  --enable-R-static-lib \
  --enable-BLAS-shlib \
  --enable-shared \
  --with-blas="-lsci -lsci_mc12" \
  --with-lapack="-lsci -lsci_mc12" \
  --without-x
make
make install
```

Note that `--prefix=/soft/R/gnu/2.12.1` should point to a local directory (probably under ``/lustre/beagle/`whoami`/soft/R``) if you are building it for yourself.

SuiteSparse

SuiteSparse currently only builds with GNU compilers.

Download the UFconfig.mk configuration file and install it as UFconfig/UFconfig.mk.

Setup your environment:

```bash
module switch PrgEnv-cray PrgEnv-gnu
module load metis
```

Copy the configured and built source tree of METIS to the source root of SuiteSparse as `metis-4.0`, so that your SuiteSparse root looks like:

```
[jurbanski@sandbox:/soft/build/SuiteSparse]$ ls
AMD/         CSparse_to_CXSparse*      MATLAB_Tools/       SuiteSparse_install.m
BTF/         CXSparse/                 metis-4.0/          SuiteSparse_test.m
CAMD/        CXSparse_newfiles/        README.txt          UFcollection/
CCOLAMD/     CXSparse_newfiles.tar.gz  SPQR/               UFconfig/
CHOLMOD/     KLU/                      SuiteSparse_demo.m
COLAMD/      LDL/
Contents.m   LINFACTOR/
CSparse/     Makefile
```

Build and install SuiteSparse:

```bash
make
make install
```

SGA -- String Graph Assembler

GNU Compiler

```
git clone git://github.com/jts/sga.git
module load sparsehash
module load bamtools
module load hoard
./autogen.sh
CXX=CC cc=cc ./configure --with-sparsehash=${SPARSEHASH_DIR} --with-bamtools=${BAMTOOLS_HOME} --with-hoard=${HOARD_DIR}
make install DESTDIR=<your_dest_dir>
```

SKAT

```
module load R/2.15.1-vanilla
vers=0.82
wget http://cran.r-project.org/src/contrib/SKAT_${vers}.tar.gz
gunzip SKAT_${vers}.tar.gz
# We try with CC, which is the wrapper for MPI and general system builds.
# It is possible that for the package to work properly under Vanilla we
# will need to use CC=g++ cc=gcc
R CMD INSTALL --configure-args="--prefix=$R_HOME --build=X86_64 CC=CC cc=cc" SKAT_${vers}.tar
```

SWIG

GNU Compiler

Because SWIG is used with languages such as Python that are dynamically linked, we build SWIG dynamically.

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
module load pcre
module load python
export XTPE_LINK_TYPE=dynamic
```

Configure, build and install SWIG:

```
./configure \
  CC="cc -dynamic" \
  CXX="CC -dynamic" \
  --prefix=/soft/swig/gnu/2.0.2
make
make install
```

Valgrind

GNU Compiler

Valgrind is an instrumentation framework for building dynamic analysis tools. The version that ships from Cray will not work on the compute nodes, since it requires /tmp to be writable. We will build a patched version that honors a user's $TMPDIR environment variable.

Setup your environment:

```
module switch PrgEnv-cray PrgEnv-gnu
```

Apply the patch to the Valgrind 3.6.1 source tree:

```bash
tar jxvf valgrind-3.6.1.tar.bz2
cd valgrind-3.6.1
cp /path/to/patch/file/tmpdir.patch .
patch -p1 < tmpdir.patch
```

Configure, build and install Valgrind, allowing it to find the default gcc:

```bash
./configure \
  --prefix=/soft/valgrind/gnu/3.6.1
make
make install
```

To use Valgrind on the compute nodes you will need to set `$TMPDIR` to something other than `/tmp`.

### VARIANT TOOLS

```bash
wget http://sourceforge.net/projects/varianttools/files/1.0.4/variant_tools-1.0.4a.tar.gz/download
tar -xvf variant_tools-1.0.4a.tar.gz
module load python/2.7.3-vanilla
python setup.py install --install-platlib=/soft/variant_tools/gnu/1.0.4/python_lib --install-scripts=/soft/variant_tools/gnu/1.0.4/bin
```
Symbols

We assume we have an infinite number of symbols available and write $x \, sym$ to assert that $x$ is a symbol. Symbols are sometimes called names or atoms or identifiers. For $x \, sym$ and $y \, sym$, the judgement $x \neq y$ says that $x$ and $y$ are distinct symbols.

Characters and Alphabets

An alphabet $\Sigma$ is a countable set of symbols or characters.

- ASCII or Unicode.
- Binary digits.

The judgement $c \, char$ indicates that $c$ is a character, and $\Sigma$ stands for a finite set of such judgements.

Strings

The set $\Sigma^*$ consists of all finite strings over $\Sigma$, as specified by the judgement $\Sigma \vdash s \, str$.

- Null string $\varepsilon$.
- Single-character string: $c$, where $c \in \Sigma$.
- Append/Juxtaposition: $c \cdot s$, where $c \, char$ and $s \, str$.
- Concatenation: $s_1 \cdot s_2$, where $s_1 \, str$ and $s_2 \, str$.

Inductive Definition of Strings

The judgement $\Sigma \vdash s \, str$ is inductively defined by the following rules:

$$ \frac{}{\Sigma \vdash \varepsilon \, str} \qquad \frac{\Sigma \vdash c \, char \quad \Sigma \vdash s \, str}{\Sigma \vdash c \cdot s \, str} $$

The judgement $s \, str$ is relative to the alphabet, specified via a hypothetical judgement of the form:

$$ c_1 \, char, \ldots, c_n \, char \vdash s \, str $$

which we abbreviate as $\Sigma \vdash s \, str$. Explicit mention of $\Sigma$ is often suppressed when it is clear from context.

String Induction

The principle of rule induction specializes to string induction. Specifically, we can prove that every string $s$ has a property $P$ by showing that

- $\varepsilon$ has property $P$;
- if $s$ has property $P$ and $c \, char$, then $c \cdot s$ has property $P$.

Essentially this is induction on string length, without making length explicit.
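The two rules above translate directly into a recursive data representation. The following is a minimal Python sketch (the `EPS`/`cons` tuple encoding and the `length` function are my own illustration, not part of the notes); `length` is defined by exactly the two cases of string induction:

```python
# Inductively defined strings over an alphabet: a string is either
# EPS (the null string) or cons(c, s), a character c appended to a
# string s. The tuple encoding is a hypothetical choice.
EPS = ("eps",)

def cons(c, s):
    """Build the string c . s from a character c and a string s."""
    return ("cons", c, s)

def length(s):
    """Defined by string induction: the eps case and the cons case."""
    if s[0] == "eps":
        return 0
    _, _c, rest = s
    return 1 + length(rest)

# The string a . b . c . eps:
abc = cons("a", cons("b", cons("c", EPS)))
```

Proving that `length` is total on every such string is precisely an instance of string induction: it holds for `EPS`, and it is preserved by `cons`.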
Example: String Concatenation

Consider the inductive definition of string concatenation, as a judgement $s_1 \,\hat{}\, s_2 = s$:

$$ \frac{}{\varepsilon \,\hat{}\, s = s} \qquad \frac{s_1 \,\hat{}\, s_2 = s}{(c \cdot s_1) \,\hat{}\, s_2 = c \cdot s} $$

By string induction on the first argument, we can prove that the judgement form $s_1 \,\hat{}\, s_2 = s$ has mode $(\forall, \forall, \exists)$. That is, the above rule scheme defines a total function.

String Notation

A string can be written in various ways, as is convenient:

- Juxtaposition of characters: $abcd$ for $a \cdot b \cdot c \cdot d \cdot \varepsilon$.
- Juxtaposition for concatenation: $abcd$ for $ab \cdot cd$.
- Any possible concatenation: $ab \cdot cd$ or $a \cdot bcd$ or $ab \cdot c \cdot d$ or $\ldots$.

Abstract Syntax Trees

Let $O$ be a countable set of operators. Let $\Omega : O \rightarrow \mathbb{N}$ be an assignment of arities to operators, i.e., $\mathrm{ar}(o) = k$ where $o \, sym$ and $k \, nat$. Such an $\Omega$ is called a signature.

- Arity = number of arguments.
- 0-ary = no arguments.
- If $\Omega \vdash \mathrm{ar}(o) = k$ and $\Omega \vdash \mathrm{ar}(o) = k'$ then $k = k' \, nat$.

ASTs (Terms)

Examples:

- $\mathrm{zero}$ has arity 0, $\mathrm{succ}(-)$ has arity 1.
- $\mathrm{empty}$ has arity 0, $\mathrm{add}(-, -)$ has arity 2.

So, for example, $\mathrm{add}(\mathrm{empty}, \mathrm{empty})$ is an ast.

Inductive Definition of Abstract Syntax Trees

The judgement $o(a_1, \ldots, a_k) \, ast$ is inductively defined by the following rules:

$$ \frac{\Omega \vdash \mathrm{ar}(o) = k \quad \vdash a_1 \, ast \quad \cdots \quad \vdash a_k \, ast}{\vdash o(a_1, \ldots, a_k) \, ast} $$

The base case is for operators of arity zero, in which case the rule has no premises.
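As a concrete illustration of the rule scheme above, a signature can be represented as a map from operators to arities, and well-formedness of $o(a_1, \ldots, a_k)$ checked by structural recursion. This is a hedged Python sketch of my own (the operator names follow the zero/succ/empty/add examples; the `(op, [args])` encoding is an assumption, not from the notes):

```python
# A signature: each operator is assigned its arity ar(o) = k.
SIGNATURE = {"zero": 0, "succ": 1, "empty": 0, "add": 2}

def is_ast(t, signature=SIGNATURE):
    """t is (op, [args]); well formed iff ar(op) = len(args) and
    every argument is itself a well-formed AST."""
    op, args = t
    if signature.get(op) != len(args):
        return False
    return all(is_ast(a, signature) for a in args)

# add(empty, empty) is an ast; succ() with no argument is not.
ok = is_ast(("add", [("empty", []), ("empty", [])]))
bad = is_ast(("succ", []))
```

The recursion terminates because each call is on a strictly smaller subtree, mirroring the inductive character of the rules.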
Structural Induction

The principle of rule induction specializes to structural induction when applied to abstract syntax trees over a signature $\Omega$. Specifically, we can prove that an AST has a property $P$ by showing, for each operator $o$ in $O$ with $\Omega \vdash \mathrm{ar}(o) = k$, that:

- If $a_1 \, ast$ has property $P$ and ... and $a_k \, ast$ has property $P$, then $o(a_1, \ldots, a_k) \, ast$ has property $P$.

When $k$ is zero, this reduces to showing that $o$ has property $P$.

Example: AST Height

Consider the inductive definition of the height of an abstract syntax tree:

$$ \frac{\mathrm{hgt}(a_1) = n_1 \quad \cdots \quad \mathrm{hgt}(a_k) = n_k \quad n = \max(n_1, \ldots, n_k)}{\mathrm{hgt}(o(a_1, \ldots, a_k)) = \mathrm{succ}(n)} $$

By structural induction we can prove that the judgement has mode $(\forall, \exists)$. That is, the above rule scheme defines a total function.

Variables and Substitution

We often wish to have variables in abstract syntax trees that will stand for other abstract syntax trees. A variable will be instantiated by substituting an AST for instances of that variable in another AST.

Use of variables in an AST over signature $\Omega$ can be described using a hypothetical judgement such as:

$$ x_1 \, ast, \ldots, x_k \, ast \vdash a \, ast $$

where the $x_1, \ldots, x_k$ are pairwise distinct symbols.
Inductive Definition of Substitution

We define the judgement $[a/x]\, b = c$, meaning that $c$ is the result of substituting $a$ for $x$ in $b$, by rules (one for each operator declared in $\Omega$) of the following form:

$$ \frac{\Omega \vdash \mathrm{ar}(o) = k \quad [a/x]b_1 = c_1 \quad \cdots \quad [a/x]b_k = c_k}{[a/x]\, o(b_1, \ldots, b_k) = o(c_1, \ldots, c_k)} $$

Lemma 1 (Substitution) If $x_1 \, ast, \ldots, x_k \, ast, x \, ast \vdash b \, ast$ and $x_1 \, ast, \ldots, x_k \, ast \vdash a \, ast$, then there exists a unique $c$ such that

$$ [a/x]x_1 = x_1, \ldots, [a/x]x_k = x_k, [a/x]x = a \vdash [a/x]\, b = c $$

Proof: The proof proceeds by structural induction.

Since the result of substitution is unique, we let $[a/x]\, b$ stand for the unique $c$ such that $[a/x]\, b = c$. Simultaneous substitution, written $[a_1, \ldots, a_k / x_1, \ldots, x_k]\, b$, is defined similarly.

Abstract Binding Trees

Abstract binding trees enrich abstract syntax trees with the concepts of binding and scope. We add a notion of fresh or new names for use within a specified scope. We also introduce the notions of $\alpha$-equivalence and capture-avoiding substitution.

Binding and Scope in English

Pronouns and demonstratives are analogous to bound variables.

- He, she, it, this, that refer to a noun introduced elsewhere.
- Linguistic conventions establish scope and binding of pronouns and demonstratives.

Natural languages have only a small, fixed number of variables! Confusion is possible.

Binding and Scope in Arithmetic

Add to our language of arithmetic expressions the ability to

- Bind a variable to an expression in a given scope.
- Refer to (the value of) that expression.

There is an unlimited supply of variables. There will be no possibility for confusion.
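The substitution judgement can be read as a structurally recursive function: at a variable it either replaces or leaves it, and at an operator it pushes the substitution into every argument. A minimal Python sketch of my own (the `("var", name)` encoding is an assumption; note this version has no binders, which capture-avoiding substitution on ABTs must additionally handle):

```python
def subst(a, x, b):
    """Compute [a/x]b over ASTs with variables.
    Variables are ("var", name); other nodes are (op, [args])."""
    if b[0] == "var":
        return a if b[1] == x else b
    op, args = b
    return (op, [subst(a, x, arg) for arg in args])

# [zero/x] succ(x) = succ(zero)
zero = ("zero", [])
result = subst(zero, "x", ("succ", [("var", "x")]))
```

Uniqueness of the result, as in Lemma 1, corresponds to `subst` being a function: each input shape matches exactly one of the two cases.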
Abstract Binding Trees

The signature for abstract binding trees specifies the arities of the operators, which include both the number of arguments to the operator and the number of bound names, or valence, in each argument. An abstract binding tree signature $\Omega$ consists of a finite set of judgements of the form $ar(o) = (n_1, \ldots, n_k)$, where each $n_i$ nat. For example, the arity $(0, \ldots, 0)$ of length $k$ specifies an operator taking $k$ arguments that bind no variables, and hence is the analog of the arity $k$ for an abstract syntax tree operator.

Well-formed abstract binding trees over a signature $\Omega$ are specified by a parametric hypothetical judgement of the form:

$$\{x_1, \ldots, x_k\} \mid x_1 \text{ abt}, \ldots, x_k \text{ abt} \vdash a \text{ abt}_n$$

The above says that $a$ is an abstract binding tree of valence $n$ with parameters, or free names, $x_1, \ldots, x_k$. We sometimes use $a \text{ abt}$ as shorthand for $a \text{ abt}_0$.

Abstract binding trees inherently include variables. We let $X$ stand for the parameter (variable) list and $A$ stand for the corresponding finite set of assumptions of the form $x \, \text{abt}$, one for each element of $X$. Using these notational shortcuts, we can write

$$\{x_1, \ldots, x_n\} \mid x_1 \, \text{abt}, \ldots, x_n \, \text{abt} \vdash a \, \text{abt}$$

as

$$X, A \vdash a \, \text{abt}$$

which, when $X$ is clear from context, can be further abbreviated

$$A \vdash a \, \text{abt}$$

In such cases we then write $x \not\in A$ to mean $x \not\in X$, where $X$ is the set of parameters governed by $A$.

Structural Induction with Binding and Scope

The principle of structural induction for abstract syntax trees extends to abstract binding trees as follows. To show that $X, A \vdash a \, \text{abt}$ has a property $P$ it suffices to show:

- $X, A, x \, \text{abt} \vdash x \, \text{abt}$ has property $P$.
- For any $o$ with $\Omega \vdash ar(o) = (n_1, \ldots, n_k)$: if $X, A \vdash a_1 \, \text{abt}$ has property $P$ and $\ldots$ and $X, A \vdash a_k \, \text{abt}$ has property $P$, then $X, A \vdash o(a_1, \ldots, a_k) \, \text{abt}$ has property $P$.
- If $X \cup \{x'\}, A, x' \, \text{abt} \vdash [x \leftrightarrow x']a \, \text{abt}$ has property $P$ for some/any $x' \not\in X$, then $X, A \vdash x.a \, \text{abt}$ has property $P$.

Example: ABT Size

Here is the inductive definition of $S \vdash \text{sz}(a \, \text{abt}) = s$:

$$S, \text{sz}(x \, \text{abt}) = 1 \vdash \text{sz}(x \, \text{abt}) = 1$$

$$\frac{S \vdash \text{sz}(a_1 \, \text{abt}) = s_1 \quad \cdots \quad S \vdash \text{sz}(a_n \, \text{abt}) = s_n \quad s = s_1 + \cdots + s_n + 1}{S \vdash \text{sz}(o(a_1, \ldots, a_n) \, \text{abt}) = s}$$

$$\frac{S, \text{sz}(x' \, \text{abt}) = 1 \vdash \text{sz}([x \leftrightarrow x']a \, \text{abt}) = s}{S \vdash \text{sz}(x.a \, \text{abt}) = s + 1}$$

The size of an ABT counts each variable as 1 and adds 1 for each operator and abstractor in the ABT.
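The size judgement can likewise be sketched in Python, with abstractors represented explicitly (a hypothetical tuple encoding, not from the source):

```python
# Variables ("var", x); abstractors ("abs", x, body); operators (op, [args]).

def size(abt):
    """Counts 1 per variable and adds 1 per operator and per abstractor."""
    tag = abt[0]
    if tag == "var":
        return 1
    if tag == "abs":                   # x.a contributes s + 1
        return 1 + size(abt[2])
    _, args = abt
    return 1 + sum(size(a) for a in args)

# let(num3; x. plus(x, x)), i.e. "let x be 3 in x + x"
term = ("let", [("num3", []),
                ("abs", "x", ("plus", [("var", "x"), ("var", "x")]))])
assert size(term) == 6
```

The three branches correspond one-to-one to the three rules of the inductive definition.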
Inductive Definition of Abstract Binding Trees

The judgement $X, A \vdash a \, \text{abt}$ is inductively defined by the following rules:

$$X, A, x \, \text{abt} \vdash x \, \text{abt}$$

$$\frac{\Omega \vdash ar(o) = (n_1, \ldots, n_k) \quad X, A \vdash a_1 \, \text{abt}_{n_1} \quad \cdots \quad X, A \vdash a_k \, \text{abt}_{n_k}}{X, A \vdash o(a_1, \ldots, a_k) \, \text{abt}}$$

$$\frac{X \cup \{x'\}, A, x' \, \text{abt} \vdash [x \leftrightarrow x']a \, \text{abt}_n \quad x' \not\in X}{X, A \vdash x.a \, \text{abt}_{n+1}}$$

Example: ABT Size

Consider the inductive definition of the size $s$ of an abstract binding tree $a$ by a judgement of the form $\text{sz}(a \, \text{abt}) = s$, or more generally a parametric hypothetical judgement in which each parameter is assumed to have size 1. Taking the parameter list to be implicit and letting $S$ stand for the corresponding finite set of assumptions of the form $\text{sz}(x \, \text{abt}) = 1$, one for each element of the parameter list, we abbreviate this as:

$$S \vdash \text{sz}(a \, \text{abt}) = s$$

The defining rules are those given above.

Theorem 2 (ABT Size)
Every well-formed ABT has a unique size: if $A \vdash a \, \text{abt}$, then there exists a unique $s$ such that $S \vdash \text{sz}(a \, \text{abt}) = s$.

Proof: The proof proceeds by structural induction on the derivation of the premise. Thus there are three cases, one for each of the rules in the inductive definition of ABTs.

Apartness

The relation of a name $x$ being apart from an abstract binding tree $a$, written $x \# a$, says that $x$ does not occur as a free name in $a$.
The apartness judgement $A \vdash x \# a$, where $A \vdash a \, \text{abt}$, is inductively defined by the following rules:

$$\frac{x \neq y}{A \vdash x \# y}$$

$$\frac{A \vdash x \# a_1 \quad \cdots \quad A \vdash x \# a_n}{A \vdash x \# o(a_1, \ldots, a_n)}$$

$$\frac{A, y \, \text{abt} \vdash x \# a \quad x \neq y}{A \vdash x \# y.a}$$

Capture-Avoiding Substitution

Substitution is replacing all occurrences (if any) of a free name in an abstract binding tree by another ABT without violating the scopes of any names. The judgement $A \vdash [a/x]b = c$ is inductively defined by the following rules:

$$A \vdash [a/x]x = a$$

$$\frac{x \neq y}{A \vdash [a/x]y = y}$$

$$\frac{A \vdash [a/x]b_1 = c_1 \quad \cdots \quad A \vdash [a/x]b_n = c_n}{A \vdash [a/x]o(b_1, \ldots, b_n) = o(c_1, \ldots, c_n)}$$

$$\frac{A, y \, \text{abt} \vdash [a/x]b = c \quad y \# a \quad y \neq x}{A \vdash [a/x]y.b = y.c}$$

In the last rule the side conditions ensure that the bound name $y$ cannot capture a free name of $a$.

Renaming Bound Variables

We'll make use of identification up to renaming, or $\alpha$-conversion.

- The name of a bound variable does not matter.
- Choose a different name to avoid ambiguity.

Scope Resolution

Where is a variable bound?

- **Lexical scope rule:** a variable is bound by the nearest enclosing binding.
- Proceed upwards through the abstract syntax tree.
- Find the nearest enclosing `let` that binds that variable.

Examples:

- "Parallel" scopes: (let x be 3 in x + x) · (let x be 4 in x + x + x)
- "Nested" scopes: let x be 10 in (let x be 11 in x + x) + x
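A minimal Python sketch of capture-avoiding substitution, which renames the bound variable whenever the side conditions of the last rule would otherwise fail (the tuple encoding and the `fresh` naming scheme are assumptions of this sketch, not part of the source):

```python
import itertools

# Variables ("var", x); abstractors ("abs", x, body); operators (op, [args]).

def free_names(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "abs":
        return free_names(t[2]) - {t[1]}
    names = set()
    for arg in t[1]:
        names |= free_names(arg)
    return names

def fresh(avoid):
    """First name v0, v1, ... apart from everything in `avoid`."""
    return next(n for i in itertools.count() if (n := f"v{i}") not in avoid)

def subst(a, x, b):
    tag = b[0]
    if tag == "var":
        return a if b[1] == x else b
    if tag == "abs":
        y, body = b[1], b[2]
        if y == x:                      # x is shadowed: nothing to substitute
            return b
        if y in free_names(a):          # rename y so it cannot capture
            z = fresh(free_names(a) | free_names(body) | {x})
            body, y = subst(("var", z), y, body), z
        return ("abs", y, subst(a, x, body))
    op, args = b
    return (op, [subst(a, x, c) for c in args])
```

For example, substituting $y$ for $x$ under the binder $y.\,(x + y)$ forces a renaming of the bound $y$, matching the $y \# a$ side condition of the abstractor rule.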
We may abbreviate $A \vdash a =_\alpha b \, \text{abt}$ by $A \vdash a =_\alpha b$, or just $a =_\alpha b$ when $A$ is clear from context.

Names of Bound Variables

But watch out:

- let $x$ be 10 in $(\text{let } x \text{ be 11 in } x+x) + x$ is the same as let $y$ be 10 in $(\text{let } x \text{ be 11 in } x+x) + y$,
- but is different from let $y$ be 10 in $(\text{let } x \text{ be 11 in } y+y) + y$.

When renaming we must avoid clashes with other variables in the same scope.
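Identification up to renaming can be checked mechanically. A sketch in a de Bruijn style, where each binder is mapped to its nesting depth so that the particular bound names drop out of the comparison (the encoding is my own, not from the slides):

```python
# Variables ("var", x); abstractors ("abs", x, body); operators (op, [args]).

def alpha_eq(s, t, env_s=None, env_t=None, depth=0):
    """True iff s and t are equal up to renaming of bound variables."""
    env_s, env_t = env_s or {}, env_t or {}
    if s[0] != t[0]:                    # tag / operator mismatch
        return False
    if s[0] == "var":                   # bound vars compare by binder depth,
        return env_s.get(s[1], s[1]) == env_t.get(t[1], t[1])  # free by name
    if s[0] == "abs":                   # map both binders to the same depth
        return alpha_eq(s[2], t[2],
                        {**env_s, s[1]: depth}, {**env_t, t[1]: depth},
                        depth + 1)
    return len(s[1]) == len(t[1]) and all(
        alpha_eq(a, b, env_s, env_t, depth) for a, b in zip(s[1], t[1]))

assert alpha_eq(("abs", "x", ("var", "x")), ("abs", "y", ("var", "y")))
assert not alpha_eq(("abs", "x", ("var", "x")), ("abs", "x", ("var", "z")))
```

The shadowing example above is handled correctly: an inner rebinding overwrites the outer entry at a deeper depth, so only the nearest enclosing binder matters.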
An Approach to Prioritize Classes in a Multi-objective Software Maintenance Framework Michael Mohan and Des Greer Department of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Northern Ireland, U.K. Keywords: Search based Software Engineering, Maintenance, Automated Refactoring, Refactoring Tools, Software Quality, Multi-objective Optimization, Genetic Algorithms. Abstract: Genetic algorithms have become popular in automating software refactoring and an increasing level of attention is being given to the use of multi-objective approaches. This paper investigated the use of a multi-objective genetic algorithm to automate software refactoring using a purpose built tool, MultiRefactor. The tool used a metric function to measure quality in a software system and tested a second objective to measure the importance of the classes being refactored. This priority objective takes as input a set of classes to favor and, optionally, a set of classes to disfavor as well. The multi-objective setup refactors the input program to improve its quality using the quality objective, while also focusing on the classes specified by the user. An experiment was constructed to measure the multi-objective approach against the alternative mono-objective approach that does not use an objective to measure priority of classes. The two approaches were tested on six different open source Java programs. The multi-objective approach was found to give significantly better priority scores across all inputs in a similar time, while also generating improvements in the quality scores. 1 INTRODUCTION Search-Based Software Engineering (SBSE) has been used to automate various aspects of the software development cycle. Used successfully, SBSE can help to improve decision making throughout the development process and assist in enhancing resources and reducing cost and time, making the process more streamlined and efficient. 
Search-Based Software Maintenance (SBSM) is usually directed at minimizing the effort of maintaining a software product. An increasing proportion of SBSM research is making use of multi-objective optimization techniques. Many multi-objective search algorithms are built using genetic algorithms (GAs), due to their ability to generate multiple possible solutions. Instead of focusing on only one property, the multi-objective algorithm is concerned with a number of different objectives. This is handled through a fitness calculation and sorting of the solutions after they have been modified or added to. The main approach used to organize solutions in a multi-objective approach is Pareto. Pareto dominance organizes the possible solutions into different non-domination levels and further discerns between them by finding the objective distances between them in Euclidean space. In this paper, a multi-objective approach is created to improve software that combines a quality objective with one that incorporates the priority of the classes in the solution. There are a few situations in which this may be useful. Suppose a developer on a project is part of a team, where each member of the team is concerned with certain aspects of the functionality of the program. This will likely involve looking at a subset of specific classes in the program. The developer may only have involvement in modifying their selected set of classes. They may not even be aware of the functionality of the other classes in the project. Likewise, even if the person is the sole developer of the project, there may be certain classes which are more risky or more recent or in some other way more worthy of attention. Additionally, there may be certain parts of the code considered less well-structured and therefore most in need of refactoring. Given this prioritization of some classes for refactoring, tool support is better employed with refactoring directed towards those classes.
Another situation is that there may be some classes considered less suitable for refactoring. Suppose a developer has only worked on a subset of the classes and is unsure about other areas of the code, they may prefer not to modify that section of the code. Similarly, older established code might be considered already very stable, possibly having been refactored extensively in the past, where refactoring might be considered an unnecessary risk. Changing code also necessitates redoing integration and tests and this could be another reason for leaving parts of the code as they were. There may also be cases where "poor quality" has been accepted as a necessary evil. For example, a project may have a class for logging that is referenced by many other classes. Generally, highly coupled classes are seen as negative coding practices, but for the purposes of the project it may be deemed unavoidable. In cases like this where the more unorthodox structure of the class is desired by the developer, these classes can be specified in order to avoid refactoring them to appease the software metrics used. However, we do not want to exclude less favoured classes from the refactoring process since an overall higher quality codebase may be achieved if some of those are included in the refactorings. We propose that it would be helpful to classify classes into a list of "priority" classes and "non-priority" classes in order to focus on the refactoring solutions that have refactored the priority classes and given less attention to the non-priority classes. The priority objective proposed takes count of the classes used in the refactorings of a solution and uses that measurement to derive how successful the solution is at focusing on priority classes and evading non-priority classes. The refactorings themselves are not restricted so during the refactoring process the search is free to apply any refactoring available, regardless of the class being refactored. 
The priority objective measures the solutions after the refactorings have been applied to aid in choosing between the options available. This will then allow the objective to discern between the available refactoring solutions. In order to test the effectiveness of such an objective, an experiment has been constructed to test a GA that uses it against one that does not. In order to judge the outcome of the experiment, the following research questions have been derived: **RQ1**: Does a multi-objective solution using a priority objective and a quality objective give an improvement in quality? **RQ2**: Does a multi-objective solution using a priority objective and a quality objective prioritize classes better than a solution that does not use the priority objective? In order to address the research questions, the experiment will run a set of tasks to compare a default mono-objective set up to refactor a solution towards quality with a multi-objective approach that uses a quality objective and the newly proposed priority objective. The following hypotheses have been constructed to measure success in the experiment: **H1**: The multi-objective solution gives an improvement in the quality objective value. **H1a**: The multi-objective solution gives no improvement in the quality objective value. **H2**: The multi-objective solution gives significantly higher priority objective values than the corresponding mono-objective solution. **H2a**: The multi-objective solution does not give significantly higher priority objective values than the corresponding mono-objective solution. The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 describes the MultiRefactor tool used to conduct the experimentation. Section 4 explains the setup of the experiment used to test the priority objective, as well as the outcome of previous experimentation done to derive the quality objective and the GA parameters used. 
Section 5 discusses the results of the experiment by looking at the objective values and the times taken to run the tasks, and Section 6 concludes the paper. ## 2 RELATED WORK Several recent studies in SBSM have explored the use of multi-objective techniques. Ouni, Kessentini, Sahraoui and Hamdi (Ouni et al. 2012) created an approach to measure semantics preservation in a software program when searching for refactoring options to improve the structure, by using the NSGA-II search. Ouni, Kessentini, Sahraoui and Boukadoum (Ouni et al. 2013) expanded upon the code smells correction approach of Kessentini, Kessentini and Erradi (Kessentini et al. 2011) by replacing the GA used with NSGA-II. Wang, Kessentini, Grosky and Meddeb (Wang et al. 2015) also expanded on the approach of Kessentini, Kessentini and Erradi by combining the detection and removal of software defects with an estimation of the number of future code smells generated in the software by the refactorings. Mkaouer et al. (Mkaouer et al. 2014; M. W. Mkaouer et al. 2015; W. Mkaouer et al. 2015) used NSGA-III to experiment with automated maintenance. 3 MultiRefactor The MultiRefactor approach\(^1\) uses the RECODER framework\(^2\) to modify source code in Java programs. RECODER extracts a model of the code that can be used to analyze and modify the code before the changes are applied. MultiRefactor makes available various different approaches to automated software maintenance in Java programs. It takes Java source code as input and will output the modified source code to a specified folder. The input must be fully compilable and must be accompanied by any necessary library files as compressed jar files. The numerous searches available in the tool have various input configurations that can affect the execution of the search. The refactorings and metrics used can also be specified. As such, the tool can be configured in a number of different ways to specify the particular task that you want to run. 
If desired, multiple tasks can be set to run one after the other. A previous study (Mohan et al. 2016) used the A-CMA (Koc et al. 2012) tool to experiment with different metric functions but that work was not extended to produce source code as an output (likewise, TrueRefactor (Griffith et al. 2011) only modifies UML and Ouni, Kessentini, Sahraoui and Boukadoum’s (Ouni et al. 2013) approach only generates proposed lists of refactorings). MultiRefactor (Mohan and Greer 2017) was developed in order to be a fully-automated search-based refactoring tool that produces compilable, usable source code. As well as the Java code artifacts, the tool will produce an output file that gives information on the execution of the task including data about the parameters of the search executed, the metric values at the beginning and end of the search, and details about each refactoring applied. The metric configurations can be modified to include different weights and the direction of improvement of the metrics can be changed depending on the desired outcome. MultiRefactor contains seven different search options for automated maintenance, with three distinct metaheuristic search techniques available. For each search type there is a selection of configurable properties to determine how the search will run. The refactorings used in the tool are mostly based on Fowler’s list (Fowler 1999), consisting of 26 field-level, method-level and class-level refactorings, and are listed below. **Field Level Refactorings:** Increase/Decrease Field Visibility, Make Field Final/Non Final, Make Field Static/Non Static, Move Field Down/Up, Remove Field. **Method Level Refactorings:** Increase/Decrease Method Visibility, Make Method Final/Non Final, Make Method Static/Non Static, Remove Method. **Class Level Refactorings:** Make Class Final/Non Final, Make Class Abstract/Concrete, Extract Subclass/Collapse Hierarchy, Remove Class/Interface. 
The refactorings used will be checked for semantic coherence as part of the search, and will be applied automatically, ensuring the process is fully automated. A number of the metrics available in the tool are adapted from the list of metrics in the QMOOD (Bansiya and Davis 2002) and CK/MOOSE (Chidamber and Kemerer 1994) metrics suites. The 23 metrics currently available in the tool are listed below. **QMOOD Based:** Class Design Size, Number Of Hierarchies, Average Number Of Ancestors, Data Access Metric, Direct Class Coupling, Cohesion Among Methods, Aggregation, Functional Abstraction, Number Of Polymorphic Methods, Class Interface Size, Number Of Methods. **CK Based:** Weighted Methods Per Class, Number Of Children. **Others:** Abstractness, Abstract Ratio, Static Ratio, Final Ratio, Constant Ratio, Inner Class Ratio, Referenced Methods Ratio, Visibility Ratio, Lines Of Code, Number Of Files. In order to implement the priority objective, the important classes need to be specified in the refactoring tool (preferably by the developer(s) to express the classes they are most concerned about). With the list of priority classes and, optionally, non-priority classes and the list of affected classes in each refactoring solution, the priority objective score can be calculated for each solution. To calculate the score, the list of affected classes for each refactoring is inspected, and each time a priority class is affected, the score increases by one. This is done for every refactoring in the solution. Then, if a list of non-priority classes is also included, the affected classes are inspected again. This time, if a non-priority class is affected, the score decreases by 1. The higher the overall score for a solution, the more successful it is at refactoring priority classes and disfavoring non-priority classes. 
It is important to note that non-priority classes are not necessarily excluded completely, but solutions that do not involve those classes will be given priority. In this way the refactoring solution is still given the ability to apply structural refactorings that have a larger effect on quality even if they are in undesirable classes, whereas the priority objective will favor the solutions that have applied refactorings to the more desirable classes.

4 EXPERIMENTAL DESIGN

In order to evaluate the effectiveness of the priority objective, a set of tasks were created that used the priority objective to be compared against a set of tasks that didn't. The control group is made up of a mono-objective approach that uses a function to represent quality in the software. The corresponding tasks use the multi-objective algorithm and have two objectives. The first objective is the same function for software quality as used for the mono-objective tasks. The second objective is the priority objective. The metrics used to construct the quality function and the configuration parameters used in the GAs are taken from previous experimentation on software quality. Each metric available in the tool was tested separately in a GA to deduce which were more successful, and the most successful were chosen for the quality function. The metrics used in the quality function are given in Table 1. No weighting is applied for any of the metrics. The configuration parameters used for the mono-objective and multi-objective tasks were derived through trial and error and are outlined in Table 2. The hardware used to run the experiment is outlined in Table 3. For the tasks, six different open source programs are used as inputs to ensure a variety of different domains are tested. The programs range in size from relatively small to medium sized.

---
\(^1\) https://github.com/mmohan01/MultiRefactor
\(^2\) http://sourceforge.net/projects/recoder
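The priority scoring described in Section 3 is straightforward to state in code. A sketch, assuming each refactoring in a solution is summarized by its set of affected classes (the class names below are invented for illustration and are not from MultiRefactor):

```python
def priority_score(solution, priority, non_priority=frozenset()):
    """+1 per affected priority class, -1 per affected non-priority class,
    summed over every refactoring in the solution."""
    score = 0
    for affected in solution:           # one set of classes per refactoring
        score += sum(1 for c in affected if c in priority)
        score -= sum(1 for c in affected if c in non_priority)
    return score

solution = [{"Order", "Invoice"}, {"Logger"}, {"Order"}]
assert priority_score(solution, {"Order", "Invoice"}, {"Logger"}) == 2
```

A higher score marks a solution that concentrates its refactorings on the favored classes while avoiding the disfavored ones, without forbidding any refactoring outright.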
These programs were chosen as they have all been used in previous SBSM studies and so comparison of results is possible. The source code and necessary libraries for all of the programs are available to download in the GitHub repository for the MultiRefactor tool. Each one is run five times for the mono-objective approach and five times for the multi-objective approach, resulting in 60 tasks overall. The inputs used in the experiment as well as the number of classes and lines of code they contain are given in Table 4. For the multi-objective tasks used in the experiment, both priority classes and non-priority classes are specified for the relevant inputs. The number of classes in the input program is used to identify the number of priority and non-priority classes to specify, so that 5% of the overall number of classes in the input are specified as priority classes and 5% are specified as non-priority classes. In order to choose which classes to specify, the number of methods in each class of the input was found and ranked. The top 5% of classes that contain the most methods are the priority classes and the bottom 5% that contain the least methods are the non-priority classes for that input. Using the top and bottom 5% of classes means that the same proportion of classes will be used in the priority objective for each input program, minimizing the effect of the number of classes chosen in the experiment. In lieu of a way to determine the priority of the classes, their complexity as derived from the number of methods present, is taken to represent priority. Using this process, the configurations of the priority objective for each input were constructed and used in the experiment. Table 1: Metrics used in the software quality objective, with corresponding directions of improvement for each. 
<table>
<thead>
<tr> <th>Metrics</th> <th>Direction</th> </tr>
</thead>
<tbody>
<tr> <td>Data Access Metric</td> <td>+</td> </tr>
<tr> <td>Direct Class Coupling</td> <td>-</td> </tr>
<tr> <td>Cohesion Among Methods</td> <td>+</td> </tr>
<tr> <td>Aggregation</td> <td>+</td> </tr>
<tr> <td>Functional Abstraction</td> <td>+</td> </tr>
<tr> <td>Number Of Polymorphic Methods</td> <td>+</td> </tr>
<tr> <td>Class Interface Size</td> <td>+</td> </tr>
<tr> <td>Number Of Methods</td> <td>-</td> </tr>
<tr> <td>Weighted Methods Per Class</td> <td>-</td> </tr>
<tr> <td>Abstractness</td> <td>+</td> </tr>
<tr> <td>Abstract Ratio</td> <td>+</td> </tr>
<tr> <td>Static Ratio</td> <td>+</td> </tr>
<tr> <td>Final Ratio</td> <td>+</td> </tr>
<tr> <td>Constant Ratio</td> <td>+</td> </tr>
<tr> <td>Inner Class Ratio</td> <td>+</td> </tr>
<tr> <td>Referenced Methods Ratio</td> <td>+</td> </tr>
<tr> <td>Visibility Ratio</td> <td>-</td> </tr>
<tr> <td>Lines Of Code</td> <td>-</td> </tr>
</tbody>
</table>

Table 2: GA configuration settings.

<table>
<thead>
<tr> <th>Configuration Parameter</th> <th>Value</th> </tr>
</thead>
<tbody>
<tr> <td>Crossover Probability</td> <td>0.2</td> </tr>
<tr> <td>Mutation Probability</td> <td>0.8</td> </tr>
<tr> <td>Generations</td> <td>100</td> </tr>
<tr> <td>Refactoring Range</td> <td>50</td> </tr>
<tr> <td>Population Size</td> <td>50</td> </tr>
</tbody>
</table>

The tool has been updated in order to use a heuristic to choose a suitable solution out of the final population with the multi-objective algorithm to inspect. The heuristic used is similar to the method used by Deb and Jain (Deb and Jain 2013) to construct a linear hyper-plane in the NSGA-III algorithm. For the quality function the metric changes are calculated using a normalization function.
This function causes any greater influence of an individual metric in the objective to be minimized, as the impact of a change in the metric depends on how far it is from its initial value. The function finds the amount that a particular metric has changed in relation to its initial value at the beginning of the task. These values can then be accumulated depending on the direction of improvement of the metric (i.e. whether an increase or a decrease denotes an improvement in that metric) and the weights given, to provide an overall value for the metric function or objective. A negative change in the metric will be reflected by a decrease in the overall function/objective value. In the case that an increase in the metric denotes a negative change, the overall value will still decrease, ensuring that a larger value represents a better metric value regardless of the direction of improvement. The directions of improvement used for the metrics in the experiment are given in Table 1. In the case that the initial value of a metric is 0, the initial value used is changed to 0.01 in order to avoid issues with dividing by 0. This way, the normalization function can still be used on the metric and its value is still low at the start.

Equation (1) defines the normalization function, where \( m \) represents the selected metric, \( c_m \) is the current metric value and \( l_m \) is the initial metric value. \( w_m \) is the applied weighting for the metric (where 1 represents no weighting) and \( D \) is a binary constant (0 or 1) that represents the direction of improvement of the metric. \( s \) represents the number of metrics used in the function. For the priority objective, this normalization function is not needed. The objective score depends on the number of priority and non-priority classes addressed in a refactoring solution and will reflect that.
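A sketch of the normalization just described (the direction flag is encoded as ±1 here for readability rather than the paper's binary $D$, and the metric names are invented for illustration):

```python
def quality(current, initial, direction, weights=None):
    """Sum of weighted, direction-signed relative changes per metric."""
    total = 0.0
    for m, c in current.items():
        i = initial[m] if initial[m] != 0 else 0.01  # avoid dividing by 0
        w = (weights or {}).get(m, 1.0)              # 1 = no weighting
        d = 1.0 if direction[m] == "+" else -1.0     # '+': increase is better
        total += d * w * (c / i - 1)
    return total

cur = {"cohesion": 0.6, "coupling": 8.0}
init = {"cohesion": 0.5, "coupling": 10.0}
# cohesion rises by 20% and coupling falls by 20%: both count positively
assert abs(quality(cur, init, {"cohesion": "+", "coupling": "-"}) - 0.4) < 1e-9
```

Either kind of improvement raises the total, so a larger value always means better quality, as the text requires.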
\[ \sum_{m=1}^{s} D \cdot w_m \cdot \left( \frac{c_m}{l_m} - 1 \right) \] (1)

## 5 RESULTS

Figure 1 gives the average quality gain values for each input program used in the experiment with the mono-objective and multi-objective approaches. For most of the inputs, the mono-objective approach gives a better quality improvement than the multi-objective approach, although for Mango the multi-objective approach was better. For the multi-objective approach, all the runs of each input were able to give an improvement for the quality objective as well as look at the priority objective. For both approaches, the smallest improvement was given with GanttProject. The inputs with the largest improvements were different for each approach. For the mono-objective approach it was Beaver, whereas for the multi-objective approach, it was Apache XML-RPC.

Table 4: Java programs used in the experimentation.

<table>
<thead>
<tr> <th>Name</th> <th>LOC</th> <th>Classes</th> </tr>
</thead>
<tbody>
<tr> <td>Mango</td> <td>3,470</td> <td>78</td> </tr>
<tr> <td>Beaver 0.9.11</td> <td>6,493</td> <td>70</td> </tr>
<tr> <td>Apache XML-RPC 2.0</td> <td>11,616</td> <td>79</td> </tr>
<tr> <td>JHotDraw 5.3</td> <td>27,824</td> <td>241</td> </tr>
<tr> <td>GanttProject 1.11.1</td> <td>39,527</td> <td>437</td> </tr>
<tr> <td>XOM 1.2.1</td> <td>45,136</td> <td>224</td> </tr>
</tbody>
</table>

Figure 1: Mean quality gain values for each input.

Figure 2 shows the average priority scores for each input with the mono-objective and multi-objective approaches. For all of the inputs, the multi-objective approach, coupled with the priority objective, was able to yield better scores. The values were compared for significance using a one-tailed Wilcoxon rank-sum test (for unpaired data sets) with a 95% confidence level ($\alpha = 5\%$). The priority scores for the multi-objective approach were found to be significantly higher than the mono-objective approach.
For two of the inputs, Beaver and Apache XML-RPC, the mono-objective approach had priority scores that were less than zero. With the Beaver input, one of the runs gave a score of -6 and another gave a score of -10. Likewise, one run of the Apache XML-RPC input gave a priority score of -37. This implies that, without the priority objective to direct them, the mono-objective runs are less likely to focus on the more important classes (i.e. the classes with more methods), and may significantly alter the classes that should be disfavored (leading to the negative values for the three mono-objective runs across the two input programs). Figure 3 gives the average execution times for each input with the mono-objective and multi-objective searches. For most of the input programs, the multi-objective approach took less time than the mono-objective approach but, for GanttProject, the multi-objective approach took longer. The Wilcoxon rank-sum test (two-tailed) was used again, and the values were found not to be significantly different. The times for both approaches understandably increase as the input programs get bigger, and the GanttProject input stands out as taking longer than the rest, although the largest input, XOM, is unexpectedly quicker. The execution times for the XOM input are smaller than those of both JHotDraw and GanttProject, despite it having more lines of code. However, both of these inputs do contain more classes. Considering the relevance of the list of classes in an input program to the calculation of the priority score, it makes sense that this would have an effect on the execution times. Indeed, GanttProject has by far the largest number of classes, at 437, which is almost double the number that XOM contains. Likewise, the execution times for GanttProject are around twice as large as those of XOM for the two approaches. The longest task to run was the multi-objective run of the GanttProject input, at over an hour.
The average time taken for those tasks was 53 minutes and 6 seconds. ## 6 CONCLUSIONS In this paper, an experiment was conducted to test a new fitness objective using the MultiRefactor tool. The priority objective measures the classes modified in a refactoring solution and gives an ordinal score that indicates the number of refactorings that relate to the important classes in the input program. These “priority classes” are specified as an extra input so that the program can determine when the important classes are inspected. There is also an option to include a list of “non-priority classes” which, if refactored, will have a negative effect on the priority score. This objective helps the search to generate refactoring solutions that focus on what a software developer considers to be the more important areas of the software code, and that stay away from other areas that should be avoided. The priority objective was tested in conjunction with a quality objective (derived from previous experimentation) in a multi-objective setup. To measure the effectiveness of the priority objective, the multi-objective approach was compared with a mono-objective approach using just the quality objective. The quality objective values were inspected to deduce whether improvements in quality can still be derived in this multi-objective approach, and the priority scores were compared to measure whether the developed priority function can succeed in improving the focus of the refactoring approach. The average quality improvement scores were compared across six different open source inputs and, for the most part, the mono-objective approach gave better improvements. The likely reason for the better quality score in the mono-objective approach is the opportunity for the mono-objective GA to focus on that single objective, without having to balance the possibly contrasting aim of favoring priority classes and disfavoring non-priority classes.
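The description of the priority score suggests a simple counting scheme, which can be sketched as follows. This is a hedged reconstruction (the exact scoring in MultiRefactor may differ, and all names below are hypothetical), but it matches the text: refactorings touching priority classes raise the score, and refactorings touching non-priority classes lower it, which also accounts for the negative scores reported earlier.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the priority objective: +1 per refactoring that
// touches a priority class, -1 per refactoring that touches a
// non-priority class. Not the actual MultiRefactor implementation.
public class PriorityScore {
    public static int score(List<String> refactoredClasses,
                            Set<String> priority,
                            Set<String> nonPriority) {
        int s = 0;
        for (String cls : refactoredClasses) {
            if (priority.contains(cls)) s++;          // favored class refactored
            else if (nonPriority.contains(cls)) s--;  // disfavored class refactored
        }
        return s;
    }
}
```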
The multi-objective approach was able to yield improvements in quality across all the inputs. In one case, with the Beaver input, the multi-objective approach was able not only to yield an improvement in quality, but also to generate a better improvement on average than the mono-objective approach. This may be due to the smaller size of the Beaver input, which could mean a restricted number of potential refactorings in the mono-objective approach. It could also be influenced by the larger range of results gained by the multi-objective approach for that input. The average priority scores were compared across the six inputs and, even for the mono-objective approach, showed some improvement. However, in some specific runs, the priority scores were negative. This corresponds to a solution refactoring more non-priority classes than priority classes, which, for the mono-objective approach, is unsurprising. The average priority scores for the multi-objective approach were better in each case. It is presumed that, as the mono-objective approach has no measures in place to improve the priority score of its refactorings, the solutions are more likely to contain non-priority classes and less likely to contain priority classes than the solutions generated with the multi-objective approach. The average execution times for each input were inspected and compared for each approach. For most inputs, the multi-objective approach was slightly quicker than its mono-objective counterpart. The times for each input program increased depending on the size of the program and the number of classes available. The average times ranged from two minutes for the Mango program to 53 minutes for GanttProject. While the increased times to complete the tasks for larger programs make sense due to the larger amount of computation required to inspect them, XOM took less time than GanttProject and JHotDraw.
Although XOM has more lines of code than these inputs, the reason is likely the number of classes available in each program, which is more reflective of the time taken to run the tasks. Therefore, it seems that the number of classes available in a project has a greater effect on the time taken to execute the refactoring tasks on that project than the amount of code. It was expected that, due to the higher complexity of the multi-objective GA in comparison to the basic GA, the execution times for the multi-objective tasks would also be higher. In practice, the times taken were similar for each approach and were affected more by the project used, so this expectation did not hold for all of the inputs. This may have been due to the stochastic nature of the search. Depending on the iteration of the task run, there may be any number of refactorings applied in a solution. If one solution applied a large number of refactorings, this could have a noticeable effect on the time taken to run the task. The counterintuitive execution times between the mono-objective and multi-objective tasks may be a result of this property of the GA. In order to test the aims of the experiment and derive conclusions from the results, a set of research questions was constructed. Each research question and its corresponding set of hypotheses looked at one of two aspects of the experiment. RQ1 was concerned with the effectiveness of the quality objective in the multi-objective setup. To address it, the quality improvement results were inspected to ensure that each run of the search yielded an improvement in quality. In all 30 of the different runs of the multi-objective approach, there was an improvement in the quality objective score, therefore rejecting the null hypothesis $H1_0$ and supporting $H1$. RQ2 looked at the effectiveness of the priority objective in comparison with a setup that did not use a function to measure priority.
To address this, a non-parametric statistical test was used to decide whether the mono-objective and multi-objective data sets were significantly different. The priority scores were compared for the multi-objective priority approach against the basic approach, and the multi-objective priority scores were found to be significantly higher than the mono-objective scores, supporting the hypothesis $H2$ and rejecting the null hypothesis $H2_0$. Thus, the research questions addressed in this paper help to support the validity of the priority objective in improving the focus of a refactoring solution in the MultiRefactor tool when used in conjunction with another objective. For future work, further experimentation could be conducted to test the effectiveness of the priority objective. The authors also plan to investigate other properties in order to create a better supported framework that allows developers to maintain software based on their preferences and their opinions of what factors are most important. It would also be useful to gauge the opinion of developers in industry on the effectiveness of the MultiRefactor approach, and of the priority objective, in an industrial setting. ACKNOWLEDGEMENTS The research for this paper contributes to a PhD project funded by the EPSRC grant EP/M506400/1. REFERENCES
Advanced Software Testing and Debugging (CS598) Spec-based Testing Spring 2022 Lingming Zhang Reminder • Check your presentation schedule on course website • Form your course project team (you can work individually!) • Post on Campuswire (#find-teammates) if you need to find a teammate • Or let me know if you need help • Proposal presentation (Feb 17, in class) • Proposal submission (Feb 21, 11:59pm) • Join your group on Campuswire before submission as this is a group assignment • If you are working by yourself, join any empty group on Campuswire ("People"->"Groups") • If you have a teammate, make sure you join the same group as your teammate Spec-based test generation // specification for removing from binary tree /*@ public normal_behavior @ requires has(n); // precondition @ ensures !has(n); // postcondition @*/ This class - Korat: Automated Testing Based on Java Predicates (ISSTA'02) - TestEra: A Novel Framework for Automated Testing of Java Programs (ASE'01) What specifications to use? 
• Formal specifications in specifically designed languages (e.g., Z and Alloy) • Precise and concise • Hard to write (steep learning curve) • Korat directly utilizes **Java predicates** for encoding the specifications • Some existing formal specifications (e.g., JML) can be automatically transformed to Java • **Programmers can also use the full expressiveness of the Java language to write specifications!**

```java
public boolean repOk() {
    if (root == null) return true; // empty tree
    Set visited = new HashSet();
    visited.add(root);
    LinkedList workList = new LinkedList();
    workList.add(root);
    while (!workList.isEmpty()) {
        Node current = (Node) workList.removeFirst();
        if (current.left != null) {
            if (!visited.add(current.left)) return false; // tree has no cycle
            workList.add(current.left);
        }
        if (current.right != null) {
            if (!visited.add(current.right)) return false; // tree has no cycle
            workList.add(current.right);
        }
    }
    return true; // valid non-empty tree
}
```

Finitization - **Finitization**: a set of bounds that limits the size of the inputs - Specifies the number of objects for each used class - A set of objects of one class forms a **class domain** - Specifies the set of classes whose objects each field can point to - The set of values a field can take forms its **field domain** - Note that a field domain is a union of some class domains

```java
public static Finitization finBinaryTree(int NUM_Node) {
    Finitization f = new Finitization(BinaryTree.class);
    ObjSet nodes = f.createObject("Node", NUM_Node);
    nodes.add(null);
    f.set("root", nodes);
    f.set("Node.left", nodes);
    f.set("Node.right", nodes);
    return f;
}
```

Generated finitization description for BinaryTree ### State space - **finBinaryTree** with **NUM_Node=3** <table> <thead> <tr> <th>BinaryTree</th> <th>N0</th> <th>N1</th> <th>N2</th> </tr> </thead> <tbody> <tr> <td>root</td> <td>?</td> <td>?</td> <td>?</td> </tr> <tr> <td>left</td> <td>?</td> <td>?</td> <td>?</td> </tr> <tr> <td>right</td> <td>?</td> <td>?</td> <td>?</td>
</tr> </tbody> </table> - Each field with type Node includes 4 possible choices: - \{null, N0, N1, N2\} - Total number of possible tests for a tree with 3 nodes: - \(4 \times (4 \times 4)^3 = 2^{14} = 16,384\) - Total number of possible tests for a tree with \(n\) nodes: - \((n+1) \times ((n+1) \times (n+1))^n = (n+1)^{2n+1}\) State space: more examples • The number of “trees” explodes rapidly! • n=3: over 16,000 “tests” • n=4: over 1,900,000 “tests” • n=5: over 360,000,000 “tests” • This limits us to only very small input sizes Are they all valid tests? State space: examples • finBinaryTree with NUM_Node=3 (Figure on slide: two candidate structures for NUM_Node=3; one is a valid binary tree with root N0 and children N1 and N2, while the other is an invalid candidate in which N1 is reachable as both the left and the right child of N0.) State space: vector representation • To systematically explore the state space, Korat orders all the elements in every class domain and every field domain • The ordering in each field domain is consistent with the orderings in the class domains • Each candidate input is then a vector of field domain indices!
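The counts above are easy to check mechanically. The following small sketch (class and method names are illustrative, not part of Korat) computes \((n+1)^{2n+1}\) for the BinaryTree finitization:

```java
// State-space size of Korat's BinaryTree finitization with n Node objects:
// root plus left/right for each node gives 2n+1 fields, and each field
// ranges over {null, N0, ..., N(n-1)}, i.e. n+1 values.
public class StateSpace {
    public static long size(int n) {
        long choices = n + 1;      // null plus n node objects
        long fields = 2L * n + 1;  // root + (left, right) per node
        long total = 1L;
        for (long i = 0; i < fields; i++) {
            total *= choices;      // (n+1)^(2n+1)
        }
        return total;
    }
}
```

For n = 3 this gives 16,384 as above, and n = 4 and n = 5 reproduce the “over 1,900,000” and “over 360,000,000” figures.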
(Figure on slide: the candidate with root = N0, N0.left = N1, N0.right = N2 and all remaining fields null corresponds to Test: [ 1, 2, 3, 0, 0, 0, 0 ], with class domain [N0, N1, N2] and field domain [null, N0, N1, N2].) Search - The search starts with the candidate vector set to all zeros - Then, iterate through the following steps: - Construct the actual test based on the current vector - Invoke `repOk()` to check the test validity and record the accessed field ordering - Increment the field domain index for the last field in the **recorded field ordering** - If the index exceeds the limit, reset it to 0 and increment the previous field in the field ordering <table> <thead> <tr> <th>BinaryTree root</th> <th>N0 left</th> <th>N0 right</th> <th>N1 left</th> <th>N1 right</th> <th>N2 left</th> <th>N2 right</th> </tr> </thead> <tbody> <tr> <td>N0</td> <td>N1</td> <td>N1</td> <td>null</td> <td>null</td> <td>null</td> <td>null</td> </tr> </tbody> </table> Current: [ 1, 2, 2, ?, ?, ?, ? ] Next: [ 1, 2, 3, ?, ?, ?, ? ] Search: why the accessed field ordering matters - `repOk()` rejects the candidate above after accessing only root, N0.left and N0.right, so any test vectors of the form [ 1, 2, 2, ?, ?, ?, ? ] are invalid! - Keeping the accessed field ordering enables us to prune all such tests - $4^4$ tests pruned for this single step! • For the all-zero candidate, only the root is accessed since it is null • Any test vectors of the form [ 0, ?, ?, ?, ?, ?, ? ] do not need to be repeated!
• Keeping the accessed field ordering enables us to prune all such tests • 25% of all tests pruned by this single test input! Can we further prune the state space? Isomorphism - \( \mathbf{O} \): \( O_1 \cup ... \cup O_n \), the sets of objects from \( n \) classes - \( \mathbf{P} \): the set consisting of \text{null} and all values of primitive types that objects in \( \mathbf{O} \) can reach - Two candidates, \( \mathbf{C} \) and \( \mathbf{C}' \), are \textit{isomorphic} iff there is a permutation \( \pi \) on \( \mathbf{O} \), mapping objects from \( O_i \) to objects from \( O_i \) for all \( 1 \leq i \leq n \), such that: \( \forall o, o' \in \mathbf{O}. \forall f \in \text{fields}(o). \forall p \in \mathbf{P}. \) - \( o.f==o' \) in \( \mathbf{C} \) iff \( \pi(o).f==\pi(o') \) in \( \mathbf{C}' \) AND - \( o.f==p \) in \( \mathbf{C} \) iff \( \pi(o).f==p \) in \( \mathbf{C}' \) \textbf{Two data structures are isomorphic if a permutation exists between the two that preserves structure} Isomorphism: examples <table> <thead> <tr> <th>BinaryTree root</th> <th>N0 left</th> <th>N0 right</th> <th>N1 left</th> <th>N1 right</th> <th>N2 left</th> <th>N2 right</th> </tr> </thead> <tbody> <tr> <td>N0</td> <td>N1</td> <td>N2</td> <td>null</td> <td>null</td> <td>null</td> <td>null</td> </tr> </tbody> </table> The same shape (a root with two children) can be written as three different vectors: Test1: [ 1, 2, 3, 0, 0, 0, 0 ] Test2: [ 1, 3, 2, 0, 0, 0, 0 ] Test3: [ 2, 0, 0, 1, 3, 0, 0 ] They are isomorphic! We just need one of them...
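Korat keeps exactly one candidate per isomorphism class by only allowing an index into a class domain to exceed the largest index used so far in that domain by 1 (the rule described on the next slide). A small hypothetical sketch of that check, for a vector over a single class domain:

```java
// Hypothetical sketch of Korat's non-isomorphism check for a candidate
// vector over a single class domain: index 0 stands for null and object
// indices start at 1. An index may exceed the largest object index seen
// so far by at most 1; otherwise an isomorphic candidate with the
// objects relabeled has already been generated.
public class Canonical {
    public static boolean isCanonical(int[] vector) {
        int maxSeen = 0; // largest object index used so far
        for (int v : vector) {
            if (v > maxSeen + 1) return false; // skips an object: redundant
            if (v > maxSeen) maxSeen = v;
        }
        return true;
    }
}
```

Of the three isomorphic vectors above, only Test1 passes this check; Test2 and Test3 are rejected.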
Nonisomorphism - **Algorithm**: only allow an index into a given class domain to exceed the previous indices into that domain by 1 - Initial prior index: -1 - **Example**: assume we are generating tests with three fields from the same class domain Nonisomorphism: more examples <table> <thead> <tr> <th>BinaryTree root</th> <th>N0 left</th> <th>N0 right</th> <th>N1 left</th> <th>N1 right</th> <th>N2 left</th> <th>N2 right</th> </tr> </thead> <tbody> <tr> <td>N0</td> <td>N1</td> <td>N2</td> <td>null</td> <td>null</td> <td>null</td> <td>null</td> </tr> </tbody> </table> Test1: [ 1, 2, 3, 0, 0, 0, 0 ] (generated) Test2: [ 1, 3, 2, 0, 0, 0, 0 ] (skipped: index 3 exceeds the largest index used so far, 1, by more than 1) Test3: [ 2, 0, 0, 1, 3, 0, 0 ] (skipped: the first object used must be N0, i.e. field index 1) Korat results for BinaryTree with up to 3 nodes - Only 9 valid tests out of $2^{14}$ possibilities! Test generation • Valid test cases for a method must satisfy its precondition • Korat uses a class that represents the method's inputs: • One field for each parameter of the method (including the implicit this) • A repOk predicate that uses the precondition to check the validity of the method's inputs • Given a finitization, Korat then generates all inputs with repOk=true

```java
class BinaryTree_remove {
    BinaryTree This;    // the implicit "this"
    BinaryTree.Node n;  // the Node parameter

    //@ invariant repOk();
    public boolean repOk() { return This.has(n); }
}

public static Finitization finBinaryTree_remove(int NUM_Node) {
    Finitization f = new Finitization(BinaryTree_remove.class);
    Finitization g = BinaryTree.finBinaryTree(NUM_Node);
    f.includeFinitization(g);
    f.set("This", g.getObjects(BinaryTree.class));
    f.set("n", /***/);
    return f;
}
```

Test generation for remove(Node n) Test oracle - To check partial correctness of a method, a simple test oracle could just invoke `repOk` in the post-state to check the class invariant. - The current Korat implementation uses the JML tool-set to automatically generate test oracles from method postconditions.
```java
//@ public invariant repOk(); // class invariant

/*@ public normal_behavior
  @ requires has(n); // precondition
  @ ensures !has(n); // postcondition
  @*/
public void remove(Node n) { ... }
```

JML specification for removing from binary tree <table> <thead> <tr> <th>Testing activity</th> <th>JUnit</th> <th>JML+JUnit</th> <th>Korat</th> </tr> </thead> <tbody> <tr> <td>Generating tests</td> <td></td> <td></td> <td>X</td> </tr> <tr> <td>Generating oracle</td> <td></td> <td>X</td> <td>X</td> </tr> <tr> <td>Running tests</td> <td>X</td> <td>X</td> <td>X</td> </tr> </tbody> </table> Benchmark subjects <table> <thead> <tr> <th>benchmark</th> <th>package</th> <th>finitization parameters</th> </tr> </thead> <tbody> <tr> <td>BinaryTree</td> <td>korat.examples</td> <td>NUM_Node</td> </tr> <tr> <td>HeapArray</td> <td>korat.examples</td> <td>MAX_size, MAX_length, MAX_elem</td> </tr> <tr> <td>LinkedList</td> <td>java.util</td> <td>MIN_size, MAX_size, NUM_Entry, NUM_Object</td> </tr> <tr> <td>TreeMap</td> <td>java.util</td> <td>MIN_size, NUM_Entry, MAX_key, MAX_value</td> </tr> <tr> <td>HashSet</td> <td>java.util</td> <td>MAX_capacity, MAX_count, MAX_hash, loadFactor</td> </tr> <tr> <td>AVTree</td> <td>ins.namespace</td> <td>NUM_AVPair, MAX_child, NUM_String</td> </tr> </tbody> </table> Overall results <table> <thead> <tr> <th>benchmark</th> <th>size</th> <th>time (sec)</th> <th>structures generated</th> <th>candidates considered</th> <th>state space</th> </tr> </thead> <tbody> <tr> <td>BinaryTree</td> <td>8</td> <td>1.53</td> <td>1430</td> <td>54418</td> <td>$2^{53}$</td> </tr> <tr> <td></td> <td>9</td> <td>3.97</td> <td>4862</td> <td>210444</td> <td>$2^{63}$</td> </tr> <tr> <td></td> <td>10</td> <td>14.41</td> <td>16796</td> <td>815100</td> <td>$2^{72}$</td> </tr> <tr> <td></td> <td>11</td> <td>56.21</td> <td>58786</td> <td>3162018</td> <td>$2^{82}$</td> </tr> <tr> <td></td> <td>12</td> <td>233.59</td> <td>208012</td> <td>12284830</td> <td>$2^{92}$</td> </tr> <tr>
<td>HeapArray</td> <td>6</td> <td>1.21</td> <td>13139</td> <td>64533</td> <td>$2^{20}$</td> </tr> <tr> <td></td> <td>7</td> <td>5.21</td> <td>117562</td> <td>519968</td> <td>$2^{25}$</td> </tr> <tr> <td></td> <td>8</td> <td>42.61</td> <td>1005075</td> <td>5231385</td> <td>$2^{29}$</td> </tr> <tr> <td>LinkedList</td> <td>8</td> <td>1.32</td> <td>4140</td> <td>5455</td> <td>$2^{91}$</td> </tr> <tr> <td></td> <td>9</td> <td>3.58</td> <td>21147</td> <td>26635</td> <td>$2^{105}$</td> </tr> <tr> <td></td> <td>10</td> <td>16.73</td> <td>115975</td> <td>142646</td> <td>$2^{120}$</td> </tr> <tr> <td></td> <td>11</td> <td>101.75</td> <td>678570</td> <td>821255</td> <td>$2^{135}$</td> </tr> <tr> <td></td> <td>12</td> <td>690.00</td> <td>4213597</td> <td>5034894</td> <td>$2^{150}$</td> </tr> <tr> <td>TreeMap</td> <td>7</td> <td>8.81</td> <td>35</td> <td>256763</td> <td>$2^{92}$</td> </tr> <tr> <td></td> <td>8</td> <td>90.93</td> <td>64</td> <td>2479398</td> <td>$2^{111}$</td> </tr> <tr> <td></td> <td>9</td> <td>2148.50</td> <td>122</td> <td>50209400</td> <td>$2^{130}$</td> </tr> <tr> <td>HashSet</td> <td>7</td> <td>3.71</td> <td>2386</td> <td>193200</td> <td>$2^{119}$</td> </tr> <tr> <td></td> <td>8</td> <td>16.68</td> <td>9355</td> <td>908568</td> <td>$2^{142}$</td> </tr> <tr> <td></td> <td>9</td> <td>56.71</td> <td>26687</td> <td>3004597</td> <td>$2^{166}$</td> </tr> <tr> <td></td> <td>10</td> <td>208.86</td> <td>79451</td> <td>10029045</td> <td>$2^{190}$</td> </tr> <tr> <td></td> <td>11</td> <td>926.71</td> <td>277387</td> <td>39075006</td> <td>$2^{215}$</td> </tr> <tr> <td>AVTree</td> <td>5</td> <td>62.05</td> <td>598358</td> <td>1330628</td> <td>$2^{50}$</td> </tr> </tbody> </table> Korat vs. TestEra Table 4: Performance comparison. For each benchmark, per- <table> <thead> <tr> <th>benchmark</th> <th>size</th> <th>Korat</th> <th>Alloy Analyzer</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>struc. 
gen.</td> <td>total time</td> </tr> <tr> <td>BinaryTree</td> <td>3</td> <td>5</td> <td>0.56</td> </tr> <tr> <td></td> <td>4</td> <td>14</td> <td>0.58</td> </tr> <tr> <td></td> <td>5</td> <td>42</td> <td>0.69</td> </tr> <tr> <td></td> <td>6</td> <td>132</td> <td>0.79</td> </tr> <tr> <td></td> <td>7</td> <td>429</td> <td>0.97</td> </tr> <tr> <td>HeapArray</td> <td>3</td> <td>66</td> <td>0.53</td> </tr> <tr> <td></td> <td>4</td> <td>320</td> <td>0.57</td> </tr> <tr> <td></td> <td>5</td> <td>1919</td> <td>0.73</td> </tr> <tr> <td>LinkedList</td> <td>3</td> <td>5</td> <td>0.58</td> </tr> <tr> <td></td> <td>4</td> <td>15</td> <td>0.55</td> </tr> <tr> <td></td> <td>5</td> <td>52</td> <td>0.57</td> </tr> <tr> <td></td> <td>6</td> <td>203</td> <td>0.73</td> </tr> <tr> <td></td> <td>7</td> <td>877</td> <td>0.87</td> </tr> <tr> <td>TreeMap</td> <td>4</td> <td>8</td> <td>0.75</td> </tr> <tr> <td></td> <td>5</td> <td>14</td> <td>0.87</td> </tr> <tr> <td></td> <td>6</td> <td>20</td> <td>1.49</td> </tr> <tr> <td>AVTree</td> <td>2</td> <td>2</td> <td>0.55</td> </tr> <tr> <td></td> <td>3</td> <td>84</td> <td>0.65</td> </tr> <tr> <td></td> <td>4</td> <td>5923</td> <td>1.41</td> </tr> </tbody> </table> Korat for methods <table> <thead> <tr> <th>benchmark</th> <th>method</th> <th>max. size</th> <th>test cases generated</th> <th>gen. 
time</th> <th>test time</th> </tr> </thead> <tbody> <tr> <td>BinaryTree</td> <td>remove</td> <td>3</td> <td>15</td> <td>0.64</td> <td>0.73</td> </tr> <tr> <td>HeapArray</td> <td>extractMax</td> <td>6</td> <td>13139</td> <td>0.87</td> <td>1.39</td> </tr> <tr> <td>LinkedList</td> <td>reverse</td> <td>2</td> <td>8</td> <td>0.67</td> <td>0.76</td> </tr> <tr> <td>TreeMap</td> <td>put</td> <td>8</td> <td>19912</td> <td>136.19</td> <td>2.70</td> </tr> <tr> <td>HashSet</td> <td>add</td> <td>7</td> <td>13106</td> <td>3.90</td> <td>1.72</td> </tr> <tr> <td>AVTree</td> <td>lookup</td> <td>4</td> <td>27734</td> <td>4.33</td> <td>14.63</td> </tr> </tbody> </table> Discussion • Strengths • Limitations • Future work This class • Korat: Automated Testing Based on Java Predicates (ISSTA'02) • TestEra: A Novel Framework for Automated Testing of Java Programs (ASE'01) TestEra vs Korat • Similarities: • Both target structurally complex test input generation based on specifications • Both automatically generate all non-isomorphic tests within a given input size • Differences • TestEra uses Alloy\(^1\) as the specification language • Alloy is a simple declarative language based on first-order logic • TestEra uses Alloy and the Alloy Analyzer to generate the tests and to evaluate the correctness criteria • TestEra produces concrete Java inputs as counterexamples to violated correctness criteria \(^1\) https://www.csail.mit.edu/research/alloy TestEra components - A specification of inputs to a Java program written in Alloy - Class invariant and precondition - A correctness criterion written in Alloy - Class invariant and postcondition - A concretization function - Which maps instances of Alloy specifications to concrete Java objects - An abstraction function - Which maps the concrete Java objects to instances of Alloy specifications TestEra big picture TestEra: example - A recursive method for performing merge sort on acyclic singly linked lists ```java class List { int elem; List next;
static List mergeSort(List l) { ... } }
```

```alloy
module list
import integer

sig List {
  elem: Integer,
  next: lone List
}
```

- Signature declaration introduces the List type with functions: - `elem: List → Integer` - `next: List → List`

Input specification

```alloy
module list
import integer

sig List {
  elem: Integer,
  next: lone List
}

fun Acyclic(l: List) {
  all n: l.*next | lone n.~next  // at most one parent
  no l.~next                     // head has no parent
}

one sig Input in List {}

fact GenerateInputs {
  Acyclic(Input)
}
```

• ~: transpose (converse relation) • *: reflexive transitive closure • Subsignature Input is a subset of List, and it has exactly one atom, which is indicated by the keyword one

Correctness specification

```alloy
fun Sorted(l: List) {
  all n: l.*next | some n.next => n.elem <= n.next.elem
}

fun Perm(l1: List, l2: List) {
  all e: Integer | #(e.~elem & l1.*next) = #(e.~elem & l2.*next)
}

fun MergeSortOK(i: List, o: List) {
  Acyclic(o)
  Sorted(o)
  Perm(i, o)
}

one sig Output in List {}

fact OutputOK {
  MergeSortOK(Input, Output)
}
```

• `Perm` counts, for every integer e, the occurrences of e in both lists, so `Sorted` and `Perm` together state that the output is a sorted permutation of the input Counter-examples • If an error is inserted in the method for merging, changing \((l_1.\text{elem} \leq l_2.\text{elem})\) to \((l_1.\text{elem} \geq l_2.\text{elem})\) • Then TestEra generates a counterexample: Counterexample found: Input List: 1 -> 1 -> 3 -> 2 Output List: 3 -> 2 -> 1 -> 1 TestEra: case studies • Red-Black trees • Tested the implementation of Red-Black trees in java.util.TreeMap • Introduced some bugs and showed that they can be caught with the TestEra framework • Intentional Naming System • A naming architecture for resource discovery and service location in dynamic networks • Found some bugs • Alloy Analyzer • Found some bugs in the Alloy Analyzer using the TestEra framework Discussion • Strengths • Limitations • Future work Thanks and stay safe!
SecureMDD: A Model-Driven Development Method for Secure Smartcard Applications

Nina Moebius, Holger Grandy, Wolfgang Reif, Kurt Stenzel
Lehrstuhl für Softwaretechnik und Programmiersprachen, Universität Augsburg, 86135 Augsburg, Germany

Report 10, 2008, Institut für Informatik, D-86135 Augsburg

Abstract

In this paper we introduce a method for applying model-driven ideas to the development of secure systems. Using MDD techniques, our approach, called SecureMDD, makes it possible to verify the correctness of a system at the modelling stage. To do so, we generate different platform-specific models from one common platform-independent UML model. The considered platforms are JavaCard and a formal model. The formal model is used for the verification of security properties. For the verification results to carry over to the Java(Card) code, these models have to be equivalent with respect to security aspects. This requires complete code generation, without the possibility to manually complete the Java(Card) code. To devise such sophisticated models, we extend the action elements of activity diagrams. In this paper we focus on the part of our approach that is used to generate secure smartcard code.

Chapter 1 Introduction

Model-driven development has become one of the most promising approaches to handle the complexity of computer systems. This is achieved by rigorous application of domain modeling techniques, accomplished by using transformation functions, e.g. for the generation of executable code. The domain of security-critical distributed systems in particular is a promising field for the application of model-driven development, because the required quality of the applications and their correctness with respect to their specifications is crucial.
Applications in this domain are often E-Commerce applications, for example electronic payment systems or electronic ticket systems. But applications such as the German electronic health card, which is realized as a service-oriented architecture and has to cope with security aspects such as secrecy and role-based access control, also fit into this domain. These applications have in common that they are based on cryptographic protocols. These protocols are very difficult to design. In this paper, we present SecureMDD, a model-driven technique for developing security-critical applications. In our opinion, only by integrating security considerations from the very beginning of application development can the resulting product be really secure. On the other hand, only by using established modeling techniques already elaborated in Software Engineering research can the inherent complexity of such applications be handled with more ease. Then, design errors, e.g. in the design of the security protocols, are less likely to occur. Besides treating the generation of secure executable code, we furthermore extend our approach to the model-driven generation of formal specifications from the same platform-independent model. Using this formal model we verify certain security properties of the modeled application. For the verification techniques, our group has already developed a formal approach [11]. On the other hand, model-driven development also allows for the generation of executable code. Our goal is to generate code that is, in terms of refinement, correct with respect to the formal specification. We have already developed a refinement methodology for hand-written code and models [9]. By extending this methodology to generated code and models, we bring together formal methods and model-driven Software Engineering in one integrated approach.
To the best of our knowledge, this is the first approach dealing with the integration of formal verification of security aspects and the generation of executable code. To be able to prove the security of the modeled application and generate an implementation of it at the same time, we have to be able to completely model an application, including method bodies. For this reason we extend UML activity diagrams with a simple language which allows the expression of state changes. Note that our aim is to automatically generate an implementation of the security-critical parts of the application; we do not consider user interfaces, database access and so on. Significant research has already been done on the application of model-driven development to security-critical applications, e.g. [17] [20] [2] [19], but none of those approaches integrates the application of formal techniques on the one hand and the generation of correct code on the other hand. We give a detailed comparison to those approaches in Section 6. We currently focus our work on smartcard scenarios. Smartcards are inherently security-critical, but also relatively small, and therefore easier to cope with. In a second step, we plan to extend our method to other areas of security-critical applications, e.g. those using Web Services. All considered applications have in common that they are built on application-specific security protocols. In this paper we introduce a method to generate executable JavaCard code from a platform-specific model. This PSM is generated from a platform-independent model that also serves as the source model for a formal specification. The platform-specific model can be completed with platform-specific information, but this does not cause any inconsistencies with the platform-independent model. The introduced approach can be applied to all (smartcard) applications that are based on cryptographic protocols.
Section 1.1 presents our integrated model-driven approach and introduces the different modeling levels. Section 2 gives an overview of the specifics of programming Java smartcards. We illustrate our approach with a case study called Mondex, an electronic payment system, which is briefly introduced in Section 3. Section 4 presents our modeling of security-critical systems with UML, which we extend to be able to completely model the system. In Section 5 we describe the generation of executable JavaCard code from these models. Section 6 introduces work that is related to ours, and Section 7 concludes.

1.1 From Abstract Models to Secure Executable Code

A lot of research work, for example [22] and [23], addresses the abstract specification of security protocols as well as proofs that the specified systems are secure. Most of these approaches deal with proofs of security properties only at the level of abstract specifications, but do not consider the implementation of the system. In practice, this does not suffice, since additional weaknesses may be introduced at the code level. Initially, we tried to bridge the gap between abstract specification and code by adopting a refinement technique and using interactive verification. This approach guarantees that security properties of the abstract specification are also valid at the code level [12] [10]. This work turned out to be successful but also very time-consuming. Therefore, our new approach aims to generate both a formal model for verification and executable code from a common platform-independent UML model. We define the formal specification as well as the smartcard application to be two different platforms. We generate platform-specific models for each platform from the platform-independent model and, in a next step, the formal model resp. the code. Figure 1.1 gives an overview of our approach.
The platform-independent level defines an abstract view of the application under development including dynamic and static aspects of all involved components. This model is transformed into a platform-specific model that contains information needed for the generation of JavaCard code for the smartcard as well as Java Code for the component communicating with the card. Thus, we generate platform-specific models defining the behavior as well as data types for each component of the application. In this paper we focus on the generation of the smartcard part of an application but our approach is extendable to generate the Java code runnable on a smartcard terminal as well. Then, the JavaCard code, resp. Java code for the terminal, is created by model-to-text transformation from the platform-specific Card PSM resp. Terminal PSM. Furthermore, the platform-independent models are translated into a platform-specific model, Formal PSM, containing the required information to generate a formal model based on abstract state machines [13] [4]. In a next step, after generating the formal specification from the Formal PSM by model-to-text transformation, security properties can be specified and verified using the formal model in the interactive verification system KIV [1]. For the formal specification as well as the verification we adapt the specification methods and techniques developed in the Prosecco approach. Here, a formal specification of a security-critical application is given as an algebraic specification in combination with abstract state machines. An overview of the approach can be found in [11]. In the following we focus on the code generation for smartcards and the corresponding platform-specific modeling. The transformations are not yet implemented but the concepts introduced here are going to be used as templates for their realization. 
Figure 1.1: Overview of SecureMDD

Chapter 2 Smartcards using JavaCard in a Nutshell

The JavaCard [25] technology facilitates the use of Java on resource-constrained devices such as smartcards. Since these small devices are not very powerful, only a subset of the functionality of Java is supported. Unsupported features are, for example, garbage collection, dynamic class loading and threads. Even the use of large primitive types such as integers is unsupported. Another difference between JavaCard programs and Java programs is the limited storage space on the smartcard. To avoid memory leaks, best practice is to allocate all memory during the initialization phase. Smartcards communicate with a terminal by receiving and answering commands using Application Protocol Data Units (APDUs). The card does not initiate the communication; instead, it waits until it receives a command APDU from the terminal. Then, the card processes the command APDU and returns a response APDU. Typically, JavaCard programs are not written using object-oriented paradigms. Instead, byte arrays and primitive data types such as shorts are manipulated directly.

Chapter 3 An Electronic Payment System: Mondex

We illustrate our approach with an example, namely an electronic purse system called Mondex [6]. Mondex is a product of Mastercard International [15] and is used to replace coins by electronic cash. The Mondex case study recently received a lot of attention because its formal verification has been set up as a challenge for verification tools [26] that several groups [16], as well as our group [24] [14], worked on. To pay with his Mondex card, the customer of a shop inserts his card into a card reader which is also connected to the Mondex card of the shop owner. Then, the amount payable is withdrawn from the card of the customer and credited to the card of the shop owner. To ensure the practicability of the purse system, several security properties have to be considered.
First, it must not be possible to "create" money, i.e. to add money to one card without reducing the amount on another card. Furthermore, it has to be ensured that no money is lost. If, for example, a card is removed from a card reader too early, a recovery mechanism has to guarantee that the lost amount can be recovered. Another point is the requirement that a Mondex card, after being released to the customer, is on its own, i.e. it has to ensure the security of all transactions without the help of a central server.

Chapter 4 Modeling of security-critical applications with UML

The modeling methodology of our approach differs from the work of other groups (which is discussed in Section 6) by providing a way to completely model an application. This includes static aspects as well as dynamic views. UML facilitates the description of a system from different views. In our approach the following types of diagrams are used: Use case diagrams and descriptions are used to give an overview of the functionality of the system under development. Class diagrams are used to model the static view of an application. Sequence diagrams and activity diagrams are used to model the dynamic aspects of the system. Deployment diagrams serve for the definition of the structure of the (distributed) system under development and to model the attacker abilities that are needed to prove that the system is secure. The aim of our modeling approach is to extend the Unified Modeling Language to be able to model security-critical applications that are based on cryptographic protocols. Static aspects of a (distributed) application, modeled in class diagrams, are the components of the system, i.e. smartcards and terminals, that communicate to run a protocol, the message types that are used for communication, as well as the data types. Specifics regarding the modeling of a security-critical application are the representation of encrypted data, digital signatures and hashed values.
Furthermore, we need a facility to represent nonces (random numbers) as well as private, public and symmetric keys. We solve this by defining UML stereotypes to model the encryption, signing and hashing of data, as well as appropriate data types to represent keys, secrets and nonces. To be able to model the processing of a received message (resp. to automatically generate method bodies), we define a language to extend UML activity diagrams such that it is possible to express state changes like variable assignments. The language is tailored to security-critical applications. For example, primitives exist to express the encryption and decryption of data. Furthermore, we use UML stereotypes and tagged values to add security-specific information to UML model elements. In this section we illustrate our methodology for modeling a security-critical application with UML. After presenting the platform-independent modeling, we discuss the addition of information on the platform-specific level. In this paper we concentrate on the smartcard part of the UML model. Since JavaCard code in our approach is mainly generated from class and activity diagrams, the use of these diagrams is introduced in detail in the following.

4.1 Platform-independent Models

Figure 4.1 shows the part of the class diagram representing the smartcard and associated classes for Mondex.

Figure 4.1: Part of the platform-independent class diagram consisting of the purse and its associated data types

The class representing the smartcard component, called Purse in Figure 4.1, is denoted by the stereotype ≪smartcard≫. Furthermore, the associated data types are denoted by the stereotype ≪data≫. A purse stores its name and a continuous transaction number in an object called PurseData. Each component of the system that participates in the protocol run requires a status that indicates the state the component is in.
This state is given as an enumeration of all possible states, and the association is annotated with the stereotype ≪status≫. Moreover, the current transaction details are recorded in a data type called PayDetails. If a transaction fails, the current transaction details are stored in a list of PayDetails (denoted by the stereotype ≪list≫) where at most 10 failed transactions can be logged. Furthermore, a purse stores its balance, a symmetric key which is the same on all cards, as well as a counter that counts the number of failed transactions. On the platform-independent level all types of numerical values are defined by an abstract data type Number. For strings we use the abstract type String. These values have to be refined on the platform-specific level.

Figure 4.2: Message type Req with associated classes

During a protocol run, messages are exchanged between the protocol participants. On the platform-independent level each message type is modeled by a class derived from an abstract class Message. Figure 4.2 shows one message type of the Mondex application, called Req (= request). A Req message is sent from the TO purse to the FROM purse to initialize a transfer of money. A Req message contains encrypted data. Data that is going to be encrypted is labeled as plaintext in the diagram using the stereotype ≪plaindata≫. In our case the data to be encrypted is of type Plainreq. An object of this type contains the PayDetails of the current transaction as well as a constant msgtype. During a protocol run an object of type Plainreq is going to be encrypted. The msgtype information is needed to prevent replay attacks, since there are other messages in the protocol that contain the encrypted pay details of the transaction but use another msgtype. To model the exchange of messages between the protocol participants we use UML sequence diagrams.
This part of the dynamic modeling, which only shows the message flow, is then used to generate the skeleton of an activity diagram for each protocol by applying a model-to-model transformation. Then, the generated diagrams are completed by the developer. Note that we use activity diagrams instead of state machines because we emphasize the protocol flow between the participants, which can easily be visualized when using activity diagrams. Furthermore, since we define the state changes of every participant in detail, activity diagrams are suitable to refine the modeled message-based view. Here, we support the use of activity partitions to have a partition for each component participating in the protocol, and sendSignal as well as acceptEvent actions to model the sending and receiving of messages. Furthermore, we provide support for actions to express assignments and object creation, decision nodes, as well as initial and final nodes to model the beginning and end of an activity. The use of parallel control flows is not allowed. As noted earlier, we extend activity diagrams with the ability to express state changes. We exploit the fact that the content of some model elements, for example the action elements, has no predefined syntax and semantics; instead, the OMG allows the use of arbitrary strings. We use this freedom to add a simple abstract programming language that is used to specify the processing of a message. The syntax of the language is similar to Java syntax but limited to simple expressions and statements. When generating a formal model from the modeled application, the UML diagrams, resp. the activity diagrams extended with the self-defined language, are translated into abstract state machines [4]. ASMs have a well-defined and relatively simple semantics. The semantics of our models is defined by giving a mapping from the semi-formal UML descriptions into a formal presentation using abstract state machines.
The language facilitates the extension of sendSignal and acceptEvent elements by denoting the type of message that is sent resp. received, and the assignment of message fields to local variables after receiving a message. For action elements, assignments, the application of predefined arithmetical operations, the creation of objects, as well as the invocation of predefined and self-defined methods are expressible. Complex state changes can be modeled using subdiagrams to improve readability. Furthermore, the language allows the definition of boolean expressions that are used as conditions for decision nodes. For example, updates of the state of a component, checks of preconditions that have to be satisfied, as well as the decryption of data are expressible. If the modeled protocol contains loops, these have to be defined in a separate activity (sub-)diagram. Figure 4.3 illustrates the use of activity diagrams to describe a protocol run. For each component participating in the protocol we use a partition (which is generated from the sequence diagram). The figure shows one detail of the partition of the purse component. The purse increases its field sequenceNo, which is part of the PurseData data (see Figure 4.1). The sequence number is used to have a unique identifier for each transaction and to avoid replay attacks. Afterwards, the state of the purse is set to EPV, which stands for "expecting value" and denotes that the purse is ready to receive a Val message. Then, an object plain of type Plainreq is created with msgtype REQ (which is defined as a constant in the class diagram) and the current pay details pdAuth of the purse as parameters. Afterwards, the Plainreq object is encrypted with the symmetric key sessionkey (which is a field of class Purse). The encrypted data is sent as a Req message. The receiving and sending of messages is also generated on the basis of the sequence diagrams.
We use certain predefined methods that can be used in the activity diagram, such as encrypt or decrypt. It may happen that complex algorithms are used that are difficult to model with activity diagrams. For this reason, the developer is allowed to add self-defined methods that are not specified resp. modeled on the platform-specific level. These have to be annotated with the stereotype ≪selfdefined≫ in the activity diagram. For each self-defined method the model-to-text transformation generates a method signature. The code of this method has to be added by the developer on the platform-specific implementation level. To define the number of components in the system, the communication links, as well as the attacker abilities, we use UML deployment diagrams. Since the deployment diagram is more important for the generation of the formal model than for code generation, we omit the details and refer to [21].

4.2 Platform-specific Models

The platform-independent models are transformed into different platform-specific models. For each component of the distributed system we generate one platform-specific model that is later used to automatically generate code. This platform-specific view contains the parts of the system under development that are relevant for the component. Furthermore, some platform-specific details are added by the transformation function. For example, associations annotated with the stereotype ≪list≫ in the platform-independent model are realized by arrays. Also, constructor signatures are added in the class diagram. The activity diagram models are refined to models containing JavaCard code. Another aspect that has to be handled is the refinement of the abstract primitive type Number used in the platform-independent model to platform-specific types. JavaCard only supports the primitive types boolean, short and byte. The use of, e.g., integers, characters as well as strings is not possible.
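As a rough illustration of what this restriction means for the data types on the card, the following is a hedged plain-Java sketch of a platform-specific PurseData along the lines described in the text; the concrete generated class is an assumption, not actual SecureMDD output.

```java
// Hedged sketch of a platform-specific PurseData: the abstract type String
// becomes a byte array (e.g. an ASCII representation) and the abstract type
// Number becomes the primitive type short, the only card-supported types.
// The class shape and constructor are assumptions for illustration only.
class PurseData {
    byte[] name;      // refined from the abstract type String
    short sequenceNo; // refined from the abstract type Number

    PurseData(byte[] name, short sequenceNo) {
        this.name = name;
        this.sequenceNo = sequenceNo;
    }
}
```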
By transforming the platform-independent model into a platform-specific model for the smartcard, all fields of abstract type String are replaced by byte arrays, e.g. an ASCII representation. All fields of abstract type Number are replaced by the primitive Java type short. If the developer prefers to use bytes for some fields, he is allowed to change this default type to byte in the platform-specific model. For example, the sequenceNo of a Purse, stored in the PurseData object, is of type Number in the platform-independent class diagram. In the platform-specific model this field is transformed into a field of type short as the default; the developer may change it to byte. During a protocol run different arithmetical operations may be performed on these primitive fields. Since the JavaCard types are bounded, an over- or underflow may occur. To prevent this, we add checks for over- and underflow for each arithmetical operation. For example, if two values are added, we check if the result is within the valid range. If an over- or underflow is caused, an exception is thrown. If an expression within the program consists of more than one arithmetical operation, a range check is added for each operation. The checking methods for addition, subtraction, multiplication, division and remainder are application-independent and have to be implemented only once.

Figure 4.3: Part of the platform-independent activity diagram of the Mondex application

To avoid replay attacks, a different sequenceNo is used for every transaction of money. Thus, for each transfer of money the sequenceNo field is incremented on both purses. In the platform-specific model, we add a call of the method rangeCheckAdd(short x, short y) that checks if the addition of 1 to the sequenceNo causes an over- or underflow.

```java
public static void rangeCheckAdd(short x, short y) {
    if (x > 0 && y > 0 && (short) (x + y) < 0) {
        ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
    }
    if (x < 0 && y < 0 && (short) (x + y) >= 0) {
        ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
    }
}
```

Since JavaCard does not support integers, it is not possible to check whether the result of the addition is greater than the maximal short value, i.e. 32767. If the addition causes an overflow, the resulting short value will be less than zero. To be consistent with the formal model, we have to ensure that the JavaCard program behaves in the same way as the formal specification given as an abstract state machine. For this reason, when adding a range check in the JavaCard program, we do the same in the ASM model, although the formal model uses unbounded integers. The checks in the formal model are realized by testing if the arithmetic operation produces an output that is within the valid range. Another platform-specific detail that is added by the developer is the specification of the used encryption algorithm, padding scheme, etc. This information is added in the activity diagram models. Since the cryptographic operations are modeled as abstract operations in the formal model, the addition of this information does not cause any inconsistencies with either the formal model or the platform-independent model. To model our applications with UML and to define our UML profile we use the modeling tool MagicDraw. To implement our model-to-model transformations we use Operational QVT; for model-to-text transformations we make use of the language Xpand. Both transformation languages are part of the Eclipse Modeling Project [8].

Chapter 5 Automated Generation of Code

Programming smartcards (with JavaCard) is different from programming Java with J2SE. Although JavaCard is a subset of Java, the coding techniques differ a lot. JavaCard programs are usually not written in an object-oriented manner.
Typically, Java syntax is used to manipulate byte arrays directly, omitting object-oriented paradigms like modularization and encapsulation. In our opinion, one challenge of model-driven code generation approaches is to reduce the gap between input and target platforms. For this reason, we decided to make further use of the classes defined in the platform-independent models (and later transformed to platform-specific classes) instead of flattening out the object-oriented view of the application into a program consisting of byte array representations for each object resp. class. Thus, in our approach the class implementing the protocol steps of the cryptographic protocol operates on the data types defined in the platform-independent model by the developer. To generate the JavaCard code of the protocol implementations we use the protocol specification given as an activity diagram. Since every protocol of the application is modeled as an activity diagram and completely specified using our protocol definition language, it is possible to generate executable code. The JavaCard API provides several facilities to encrypt data, create digital signatures or generate hash values. The cryptographic operations are implemented on byte arrays. That is, the data to be encrypted resp. signed or hashed has to be converted into a byte array representation before applying the cryptographic operation. In particular, all data types annotated with the stereotype ≪plaindata≫ in the class diagram have to be converted into a byte array representation before applying the encryption operation. To accomplish this, we add methods for encoding and decoding of plain data objects that follow the same rules as the en- and decoding of message objects. Then, we define a decrypt operation that returns an object of type plain data. If the decryption fails, or the decrypted byte array is not a valid representation of an object of type plain data, the return value is null.
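To make the encoding idea concrete, here is a hedged plain-Java sketch of a fixed-layout en-/decoder for a hypothetical data type with two short fields; the class name, layout, and field choice are assumptions for illustration, since the generated codecs are application-specific.

```java
// Hedged sketch of a fixed-layout byte-array codec, as a cryptographic
// operation on the card expects a byte array. The class and its two-short
// layout are illustrative assumptions, not generated SecureMDD code.
class PayDetailsCodec {
    // Encode two short fields into a fixed four-byte big-endian layout.
    static byte[] encode(short value, short sequenceNo) {
        return new byte[] {
            (byte) (value >> 8), (byte) value,
            (byte) (sequenceNo >> 8), (byte) sequenceNo
        };
    }

    // Decode is the inverse; returns {value, sequenceNo}. A real decoder
    // would return null for buffers that are not a valid representation.
    static short[] decode(byte[] buf) {
        if (buf == null || buf.length != 4) {
            return null;
        }
        short value = (short) (((buf[0] & 0xFF) << 8) | (buf[1] & 0xFF));
        short seq   = (short) (((buf[2] & 0xFF) << 8) | (buf[3] & 0xFF));
        return new short[] { value, seq };
    }
}
```

A decode that returns null for malformed input mirrors the decrypt operation described above, which also signals failure by returning null.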
Referring to the class diagram (see Figure 4.2) and the activity diagram (see Figure 4.3), a Plainreq object is encrypted by transforming it into a byte array and applying a JavaCard method that returns the encrypted value as a byte array. If the application stores an encrypted value, this is done using a special data type EncData that just contains the encrypted byte array. Listing 5.1 shows the JavaCard code (of class Purse) that is generated from the part of the activity diagram shown in Figure 4.3.

```java
 1  Checker.rangeCheckAdd(data.sequenceNo, (short) 1);
 2  data.sequenceNo = (short) (data.sequenceNo + 1);
 3  state = EPV;
 4
 5  Plainreq plain = ObjectStore.
 6      newPlainreq(Constants.PLAINREQ, pdAuth);
 7  EncData encmess = (EncData) Crypto.
 8      encrypt(sessionkey, plain);
 9
10  Message outmsg = ObjectStore.newReq(encmess);
11  comm.sendMsg(outmsg);
12  ObjectStore.returnEncData(encmess);
13  encmess = null;
14  ObjectStore.returnPlainData(plain);
15  plain = null;
16  ObjectStore.returnMessage(outmsg);
17  outmsg = null;
```
Listing 5.1: JavaCard code generated from the activity diagram

In line 2 the sequenceNo of the PurseData with name data is incremented. Before that, it is checked whether the addition of 1 causes an overflow, by calling the method rangeCheckAdd(short x, short y). Then, the state of the purse is set to EPV. In a next step, the sending of a Req message is prepared (line 5). An object plain of type Plainreq is requested from the ObjectStore. The ObjectStore preallocates all required messages needed for protocol execution at the initialization phase. Then, the plain object is encrypted with the sessionkey, which is a field of the Purse class. The returned object is of type EncData, which has one field of type byte array that contains the encrypted result of the operation. The EncData object is requested from the ObjectStore within the encrypt method. Afterwards, the ObjectStore is asked for a Req message that contains the encrypted value (line 10).
The returned message is sent using the sendMsg(Message msg) method of the communication interface comm, which encodes the outmsg into a byte array representation. This byte array is written into the output buffer, whose content is sent to the terminal at the end of the protocol step. Then, the requested objects are returned to the ObjectStore (lines 12, 14 and 16) and the references pointing at these objects are set to null. The encrypt method as well as the sendMsg method do not change the values of the fields of the objects plain and encmess, and do not store these objects or their field values without copying them. Otherwise, there would exist references to objects that were returned to the store and are therefore "free". Furthermore, when receiving a new Plainreq object from the ObjectStore (line 5), the passed PayDetails pdAuth has to be copied to the Plainreq field pd (see Figure 4.2), because otherwise a reference to the pdAuth object of the Purse class would remain. If the Plainreq object were later returned to the ObjectStore, the store would manage this object with a reference to the PayDetails pdAuth and might issue the Plainreq object again, which may cause side effects.

Chapter 6 Related Work

The most closely related work is UMLsec, developed by Jan Jürjens [17]. To model security-critical systems with UML and to prove that several (predefined) security properties hold for the modeled system, Jürjens defines a UML profile. Using the profile, properties such as secrecy and integrity as well as role-based access control are expressible. Jürjens provides tool support for verifying properties by linking the UML tool to a model checker or automated theorem prover. Moreover, model-based testing and non-interference analysis are part of the approach. In [3], the employment of the UMLsec approach in an industrial context is presented. The security properties mainly addressed in UMLsec are standard properties that are expressible using the predefined stereotypes.
The generated formal model reflects an abstract view of the parts of the application that are required for verification. In our approach, we generate a formal model of the entire application, which can be used to express and verify application-dependent properties such as "No money can be created within the Mondex application". Another difference is the integrative aspect of our approach: we aim to generate secure code as well as a formal model for verification, whereas Jürjens mainly focuses on the generation of a formal model and, based on this model, the verification of security aspects. In [18], another approach by the same author is presented. Here, an implementation of an application is written in Java (by hand). Then, the code is translated into an abstract model of the application, which is used to prove security properties using an automated theorem prover. The approach is evaluated by two case studies, an electronic purse system and an implementation of the TLS protocol. Basin et al. [2, 20] present a model-driven methodology for developing secure systems which is tailored to the domain of role-based access control. The aim is to model a component-based system, including its security requirements, using UML extension mechanisms. To support the modeling of security aspects and of distributed systems, several UML profiles are defined. Furthermore, transformation functions are defined that translate the modeled application into access control infrastructures. The platforms for which infrastructures are generated are Enterprise JavaBeans, Enterprise Services for .NET, and Java Servlets. In [19], Kuhlmann et al. model the Mondex system with UML. Only static aspects of the application, including method signatures, are defined using UML class diagrams. To specify the security properties that have to hold, the approach uses the Object Constraint Language (OCL).
The defined constraints are checked using the tool USE (UML-based Specification Environment). USE validates a model by testing it, i.e., it generates object diagrams as well as sequence diagrams of possible protocol runs. The approach considers neither the generation of code nor the use of formal methods to prove the security of the modeled application; the models are only validated by testing. Deubler et al. present a method to develop security-critical service-based systems [7]. For modeling and verification, the tool AutoFocus [5] is used. AutoFocus is similar to UML and facilitates the modeling of an application from different views. Moreover, the tool can be linked to the model checker SMV. The approach focuses on the specification of an application with AutoFocus and, in a next step, the generation of SMV input files and formal verification using SMV. The generation of secure code is not part of the approach.

Chapter 7 Conclusion and Outlook

We presented a model-driven approach for the development of security-critical distributed applications. Our goal is to generate both an implementation and a formal model from the same platform-independent model, to assure correctness and security of the code with respect to the formal model simply by construction. Security properties can be verified on the formal model, for which we have already developed a suitable verification approach. This paper presented the transformation of platform-independent models into executable code using additional platform-specific models. The current methodology focuses on JavaCard applications and can be applied to all security-critical smartcard applications. Our goal is to further extend the approach to cope with more complex distributed systems, e.g., systems using Web Services. Our long-term goal is to develop a model-driven approach in which we are able to carry over the security properties to the application code in terms of formal refinement.

Bibliography
The Role of Requirements in the Success or Failure of Software Projects

Azham Hussain¹*, Emmanuel O. C. Mkpojiogu², Fazillah Mohmad Kamal³

¹School of Computing, Universiti Utara Malaysia, Sintok 06010, Malaysia; ²School of Computing, Universiti Utara Malaysia, Sintok 06010, Malaysia; ³School of Quantitative Sciences, Universiti Utara Malaysia, Sintok 06010, Malaysia. *Email: azham.h@uum.edu.my

ABSTRACT

Requirements engineering (RE) is central to every successful software development project. There are several reasons why software projects fail; however, poorly elicited, documented, validated and managed requirements contribute grossly to software project failure. Software project failures are normally very costly and risky, and can at times even be life-threatening. Projects that overlook RE processes often suffer, or are most likely to suffer, from failures, challenges and other consequent risks. The estimated cost of project failures and overruns is great and grave. In addition, software project failures or overruns pose a challenge in today's competitive market environment. They negatively affect the image, goodwill, profitability and revenue drive of companies and decrease the marketability of their products, as well as the perceived satisfaction of their customers and clients (which also leads to poor loyalty). In this paper, RE was discussed. Its role in software project success was elaborated. The place of the software requirements process in relation to software project failure was explored and examined. Furthermore, project success, challenge and failure factors were also discussed, with emphasis placed on requirements factors, as they play a major role in software projects' successes, challenges and failures.
The paper relied on secondary statistics to explore and examine factors responsible for the successes, challenges and failures of software projects in large, medium and small-scaled software companies.

Keywords: Requirements Engineering Process, Software Projects, Failure, Success
JEL Classifications: L86, M15

1. BACKGROUND

A requirement is a statement about a proposed system that all stakeholders agree must be made true in order for the customers' problems to be truly solved. It is an expression of the ideas to be embodied in a system or an application under development. A requirement is a statement of a system service or constraint describing the user-level properties, general systems, specific constraints and needs of clients; it may also describe the attributes and behaviour of a system (Inam, 2015). Furthermore, Gupta and Wadhwa (2013) stated that requirements form the basis for the original assessment and ideas for developing and validating any product. Krauss (2012) further stated that requirements are critical to defining the purpose and process of a project and help to analyse and manage it. More so, requirements capture the objectives and purpose of a system: the conditions or capabilities needed by users to solve problems or meet their objectives. The accuracy and quality of requirements contribute immensely to the success of project/system development (Krauss, 2012). Furthermore, quality requirements are pivotal to customer/user product satisfaction (Hussain et al., 2015; Mkpojiogu and Hashim, 2015; 2016; Hussain et al., 2016a; 2016b; 2016c; Hussain and Mkpojiogu, 2016a; 2016b; 2016c). Every project has some basic requirements that define what the end users, customers, clients, developers, suppliers or business (i.e., stakeholders) require from it, coupled with some needs of the system for efficient functioning.
A requirement is a key factor during every software development effort, as it describes what different stakeholders need and how the system will satisfy those needs. It is generally expressed in natural language so that everyone can understand it well. It helps the analyst to better understand which elements and functions are necessary in the development of a particular software project. More so, requirements are an input to the design, implementation and validation phases of software product development. Thus, a software project succeeds or fails during software development in large part because of the quality of its requirements elicitation and requirements management processes (Pfleeger and Atlee, 2006). RE is one of the branches of software engineering. It comprises the systematic processes and techniques for requirements elicitation, requirements analysis, specification, verification and management of requirements. It is the initial phase of the software engineering process, in which user requirements are collected, understood and specified for developing quality software products. In other words, it is a practical and systematic approach through which the software or system engineer collects functional and non-functional requirements from different customers/clients for the design and development of quality software products (Swarnalatha et al., 2014). RE is an incremental and iterative process, performed in parallel with other software development activities such as design, implementation, testing and documentation. The RE process is divided into two main sets of activities, namely requirements development and requirements management (Hussain et al., 2016).
Software requirements development mainly covers the activities of discovering, analysing, documenting, verifying and validating requirements, whereas software requirements management commonly includes activities related to traceability and dynamic change management of software requirements (Pandey and Suman, 2012; Swarnalatha et al., 2014). Research reveals that software project failures are mainly due to inadequate requirements, changing requirements, poor requirements, impracticable expectations, etc. Nonetheless, the application of a systematic approach will reduce the challenges of the RE process and the chances of any project failing. It is also very crucial to gather accurate information about the proposed system/product, analyse the organizational needs and practices, document the requirements acquisition and ensure completeness and consistency with stakeholder requirements, whilst effectively managing conflicting requirements (Hussain et al., 2015; Mkpojiogu and Hashim, 2015; 2016; Hussain et al., 2016a; Hussain and Mkpojiogu, 2016a). Software requirements are captured through RE, which is the process of determining requirements (Cheng and Atlee, 2009). Cheng and Atlee (2009) mentioned that successful RE involves discovering the stakeholders' needs; understanding the requirements contexts; modelling, analysing, negotiating, validating and assessing documented requirements; and managing the requirements (Shah and Patel, 2014). Many studies identify the need for the development of quality software that meets the needs and objectives of the customers and gives value to stakeholders (Wiegers, 2013; Inam, 2015). Asghar and Umar (2010) pointed out that RE is acknowledged as the first phase of the software engineering process and is considered one of the main phases in software development. Furthermore, Khan et al.
(2014) and Shah and Patel (2014) asserted that unclear requirements are the main reason for software project failures. Khan et al. (2014) said that the "RE phase is difficult and crucial." Also, Young (2004) stated that the neglect of RE contributes to project failures. RE impacts productivity as well as product quality. Thus, it can be stated that RE is an essential phase of software development (Sankhwar et al., 2014), and therefore RE practices should be taken into consideration in every software development project. In this paper, the RE process is defined based on Wiegers (2003), who maintained that RE is composed of two main activities: requirements development and requirements management. According to Kavitha and Thomas (2011), proper comprehension and management of requirements are the main determinants of success in the process of software development. In this paper, secondary statistics from previous studies were closely examined and used to assess and succinctly understand why software projects succeed or fail. In summary, there are many reasons for software project failures; however, a poorly engineered requirements process contributes immensely to why software projects fail (Inam, 2015). Software project failures are usually costly and risky and can also be life-threatening. Projects that undermine RE suffer, or are likely to suffer, from failures, challenges and other attendant risks. The estimated cost of project failures and overruns is very high. Furthermore, software project failures or overruns pose a challenge in today's competitive market environment. They affect the company's image, goodwill, profitability and revenue drive, and decrease the marketability of its products and the perceived satisfaction of customers and clients (which leads to their poor loyalty to the company and its products) (Hussain and Mkpojiogu, 2016a; 2016b; 2016c; Hussain et al., 2016b).
The remaining part of this paper is presented as follows: Section 2: Why software projects succeed or fail; Section 3: The role of requirements in software project success; and lastly, Section 4: Conclusion.

2. WHY SOFTWARE PROJECTS SUCCEED OR FAIL

A software project, as categorized by the Standish Group, can be successful, challenged, or failed. A successful software project is one that is completed on time and within the allocated budget, and which has all the originally specified features and functions. A challenged project is one that is completed but with time and budget overruns and with fewer features and functions than originally specified. A failed project is one that is aborted or cancelled before its completion, or one that is completed but never implemented (Kamuni, 2015).

2.1. Software Projects' Success, Challenge, and Failure Factors

The Standish Group study of 2009 reported that only 34% of software projects succeeded, 44% were challenged and 22% failed (Kamuni, 2015). The 1995 Chaos report established that RE practices contributed more than 42% of overall project success; likewise, inappropriate RE practices represent more than 43% of the reasons for software project failure. In addition, many previous researchers have identified that 70% of requirements were difficult to identify and 54% were not clear and well organized (Gause and Weinberg, 1989; Asghar and Umar, 2010; Khan and Mahrin, 2014; Young, 2004; Sankhwar et al., 2014). Wiegers (2003) lists "incomplete requirements" as the leading cause of software project failure. The Standish Group reports a low point in 1994, in which only 16% of projects were successful (Wiklund and Pucciarelli, 2009).
Gause and Weinberg (1989) also pointed out that: (i) requirements are difficult and challenging to describe in natural language; (ii) requirements have many different types and levels of detail; (iii) requirements are difficult to manage if they are not kept under control; (iv) most requirements change during software development. Taimour (2005) identified the following reasons why software projects fail: poor planning, including missing dependencies; requirements that changed or were never finalized; missed key requirements; and high turnover of top IT managers. The Standish Group Chaos report (1994) shows that 29% of all projects succeeded (i.e., delivered on time, on budget, with required features and functions); 53% were challenged (i.e., delivered late, over budget and/or with fewer than the required features and functions); and 18% failed (cancelled prior to completion, or delivered but never used). Figures 1-3 display the project success, challenge, and failure factors of the Chaos report as republished by Project Smart. In Figure 1, user involvement (15.90%), executive management support (13.90%) and a clear statement of requirements (13%) are the top three factors responsible for project success. In Figure 2, lack of user input (12.80%), incomplete requirements and specifications (12.30%), and changing requirements (11.80%) are the top three factors responsible for challenged projects. In Figure 3, incomplete requirements (13.10%), lack of user involvement (12.40%), and lack of resources (10.60%) are the top three factors causing impaired or failed projects. From these figures, it is very clear that requirements-related issues are among the top factors affecting software project success, challenge, and failure (impairment). Furthermore, the 1995 Standish Group Chaos report (Project Smart, 2014) identified user involvement, executive management support, a clear statement of requirements, proper planning, realistic expectations, and smaller project milestones, etc.
as success factors, and reports incomplete requirements, lack of user involvement, lack of resources, unrealistic expectations, lack of executive support, changing requirements and specifications, lack of planning, etc. as problem causes. Wiklund and Pucciarelli (2009) revealed in their study that 25% of projects fail outright, 20-25% do not meet return-on-investment targets and up to 50% require material rework. In addition, from the 1995 Chaos report, the figures for failure were equally disheartening in companies of all sizes. Only 9% of projects in large companies were successful; at 16.2% and 28% respectively, medium and small companies were somewhat more successful. A whopping 61.5% of all large-company projects were challenged, compared to 46.7% for medium companies and 50.4% for small companies. 37.1% of projects were impaired and subsequently cancelled (failed) in medium companies, compared to 29.5% in large companies and 21.6% in small companies (Project Smart, 2014). The Standish Group categorized software companies into large, medium and small based on their annual revenue: a large company has more than $500 million in annual revenue; a medium company has between $200 million and $500 million in yearly revenue; and a small company has between $100 million and $200 million in revenue per year. The Standish Group observed that only 9% of the projects in large companies, 16.2% of projects in medium companies and 28% of projects in small companies were successful. Furthermore, 61% of all large-company projects were challenged. Most of the failed projects were within the medium-scale company category (37.1%), in comparison to large companies (29.5%) and small companies (21.6%) (Kamuni, 2015). In a related survey by the Standish Group, the success rate was 24% in large software companies, 37.2% in medium-scale companies and 48% in small-scale software companies.
In addition, 69.5% of large software company projects were challenged, in comparison to 52.7% and 60.4% in medium and small software companies, respectively. Also, 39.5% of projects in large software companies were cancelled, in comparison to 45.1% and 31.6% in medium and small-scale software companies, respectively (Swarnalatha et al., 2015). As can be consistently observed, poor requirements processes are responsible for software project challenges and failures, while a good requirements collection process contributes to software project success. One of the major causes of both cost and time overruns is restarts. For every 100 projects that start, there are 94 restarts. This does not mean that 94 of 100 will have exactly one restart; some projects have several restarts (Project Smart, 2014). The most important aspect of the research is discovering why projects fail. To do this, the Standish Group surveyed IT executive managers for their opinions about why projects succeed. The three major reasons why a project will succeed are user involvement, executive management support, and a clear statement of requirements. There are other success criteria, but with these three elements in place, the chances of success are much greater; without them, the chance of failure increases dramatically. The survey participants were also asked about the factors that cause projects to be challenged. Opinions about why projects are impaired and ultimately cancelled ranked incomplete requirements and lack of user involvement at the top of the list (Project Smart, 2014) (Figures 1-3). Another key finding of the survey is that a high percentage of executive managers believe that there are more project failures now than 5 and 10 years ago, in spite of the fact that technology has had time to mature (Project Smart, 2014) (Figure 4).

3. THE ROLE OF THE REQUIREMENTS PROCESS IN SOFTWARE PROJECT SUCCESS

RE is an important phase of the software development process.
It basically aims at collecting meaningful and well-defined requirements from clients in the proper way. It is important for developing quality software that satisfies users' needs without errors, and it is mandatory to apply RE practices at every stage of the software development process (Swarnalatha et al., 2014). RE is commonly accepted to be the most important, critical and complex process in software development. A well-defined requirement is software functionality that satisfies the client's needs (Inam, 2015). The RE process has the highest impact on the capabilities of the emerging software product (Swarnalatha et al., 2014). RE is important because it helps to define the purpose of any project by defining the constraints, specifying the process involved and documenting it. It also ensures incremental improvement by matching the most effective measures with the crucial problems. Furthermore, it critically identifies what the stakeholders need and helps to make decisions more efficiently, hence providing effective results. The success or failure of a project depends on the accuracy and effective management of requirements. It is crucial to determine the mix of effective techniques to use for requirements acquisition, and to properly document the process and the requirements, in order to reduce the challenges and chances of failure. RE should therefore be the starting point and backbone of any project or decision, because it helps to determine and focus on the objective and to match the needs of stakeholders to the product development process, thereby increasing the chances of achieving the best result. However, requirements must be managed throughout the entire system or product development life cycle for project success and the mitigation of failures.

4. CONCLUSIONS

RE is at the foundation of every successful software project. There are many reasons for software project failures; however, a poorly engineered requirements process contributes immensely to why software projects fail.
Software project failure is usually costly and risky and can also be life-threatening. Projects that undermine RE suffer, or are likely to suffer, from failures, challenges and other attendant risks. The estimated cost of project failures and overruns is very high. Furthermore, software project failures or overruns pose a challenge in today's competitive market environment: they affect the company's image, goodwill and revenue drive and decrease the perceived satisfaction of customers and clients. In this paper, RE was discussed and its role in software project success was elaborated. The place of the software requirements process in relation to software project failure was explored and examined. Also, project success and failure factors were discussed, with emphasis placed on requirements factors, as they play a major role in software projects' challenges, successes and failures. The paper relied on secondary statistics to explore and examine the factors responsible for the successes, challenges and failures of software projects in large, medium and small-scaled software companies. In conclusion, the success or failure of any given software development project hinges on how the software requirements process is carried out. The costs and risks involved in a poorly engineered requirements process are great and sometimes irreparable. RE stands as a bedrock upon which the success of software projects rests. Colossal waste can be avoided if adequate attention is given to proper RE in all software development projects. In this paper, the connection of requirements to project success or failure is established and emphasized using secondary data analysis from previous studies. As can be consistently observed, poor requirements processes are responsible for software project challenges and failures, while a good requirements collection process contributes to software project success.
Thus, it behoves software project planners, analysts, engineers and managers to incorporate an adequate RE process in every software development project to achieve project success and eliminate project failures and challenges.

5. ACKNOWLEDGMENT

This study was funded by the Ministry of Higher Education under a RAGS grant.

REFERENCES

Wiegers, K.E. (2003), Software Requirements: Practical Techniques for Gathering and Managing Requirements throughout the Product Development Cycle. Redmond, WA: Microsoft Press.
Abstract Multi-party multi-turn dialogue comprehension brings unprecedented challenges in handling the complicated scenarios arising from multiple speakers and criss-crossed discourse relationships among speaker-aware utterances. Most existing methods treat dialogue contexts as plain text and pay insufficient attention to the crucial speaker-aware clues. In this work, we propose an enhanced speaker-aware model with masking attention and heterogeneous graph networks to comprehensively capture discourse clues from both speaker properties and speaker-aware relationships. With such comprehensive speaker-aware modeling, experimental results show that our speaker-aware model achieves state-of-the-art performance on the benchmark dataset Molweni. Case analysis shows that our model enhances the connections between utterances and their own speakers and captures the speaker-aware discourse relations, which are critical for dialogue modeling. 1 Introduction Training models to understand dialogue contexts and answer questions has been shown to be even more challenging than common machine reading comprehension (MRC) tasks on plain text (Reddy, Chen, and Manning 2019; Choi et al. 2018). In this paper, we focus on the challenging multi-party multi-turn dialogue MRC, where the given passage consists of multiple utterances announced by three or more speaker roles (Li et al. 2020). Compared to two-party dialogues (Lowe et al. 2015; Wu et al. 2016; Zhang et al. 2018), multi-party multi-turn dialogues involve much more complex scenarios. First, every speaker role has its own manner and purposes of speaking, which leads to a unique speaking style for each speaker role; the speaker property of each utterance therefore provides unique clues (Liu et al. 2021; Gu et al. 2020).
Second, instead of the speakers alternating as in two-party dialogues, speaker transitions in multi-party dialogues occur in a random order, breaking the continuity found in common non-dialogue texts due to the crossing dependencies that are commonplace in a multi-party chat. Third, multiple dialogue threads may tangle within one dialogue history among any two or more speakers, making interrelations between utterances much more flexible, rather than existing only between adjacent utterances. Multi-party dialogue thus exhibits discourse dependency relations between non-adjacent utterances, which leads to a graphical discourse structure (Shi and Huang 2019; Li et al. 2020). To demonstrate the challenge of multi-party multi-turn dialogue MRC, we present an example in Table 1, drawn from the multi-party multi-turn dialogue benchmark dataset Molweni (Li et al. 2020). Figure 1 depicts the corresponding speaker-aware discourse structure of the example dialogue, with different colors indicating different speakers. In this dialogue there are four speakers, whose conversation develops as Dr. Willis, NickGarvey, and smo help benkong2 with a system error. Along with the context, two relevant questions are expected to be answered. An extractive span is given as the answer to question 1, while question 2 is unanswerable based on this dialogue alone. The mainstream work on machine reading comprehension over multi-party multi-turn dialogues commonly adopts the... 2 Background and related work 2.1 Dialogue Reading Comprehension Research on dialogue MRC aims to teach machines to read dialogue contexts and make responses (Reddy, Chen, and Manning 2019; Choi et al. 2018; Sun et al. 2019; Cui et al. 2020); a common application is building intelligent human-computer interactive systems (Chen et al. 2017a; Shum, He, and Li 2018; Li et al. 2017; Zhu et al. 2018).
Training machines to understand dialogue has been shown to be much more challenging than common MRC, as every utterance in a dialogue carries the additional property of a speaker role, which breaks the continuity found in common non-dialogue texts through the complex discourse dependencies caused by speaker role transitions (Afantenos et al. 2015; Shi and Huang 2019; Li et al. 2020). Early studies mainly focus on matching between the dialogue context and the question (Huang, Choi, and Yih 2018; Zhu, Zeng, and Huang 2018). As PrLMs have proved useful as contextualized encoders with impressive performance, a general approach is to employ a PrLM over the whole input text of a dialogue context and a question as one linear sequence of successive tokens, where contextualized information is captured through self-attention (Qu et al. 2019; Liu et al. 2020; Li et al. 2020). Such modeling is suboptimal for capturing the high-level relationships between utterances in the dialogue history. To leverage speaker-aware information for better performance, Gu et al. (2020) proposed Speaker-Aware BERT for two-party dialogue tasks, reorganizing utterances according to the spoken-from and spoken-to speakers and adding a speaker embedding at the token representation stage. Liu et al. (2021) went further with the speaker property, designing a decoupling-and-fusing network to enhance the turn order and speaker of each utterance. Both show that the speaker property is helpful for dialogue MRC. However, existing studies mostly address the retrieval-based response selection task, on two-party datasets or datasets without speaker annotations, which drives us to extend speaker-aware modeling to the QA task in the multi-party scenario. In this work, we focus on the QA task of multi-party multi-turn dialogue MRC, which involves more than two speakers in a given dialogue passage (Li et al. 2020) and expects an answer for each relevant question.
Different from existing speaker-aware contributions, we regard discourse relations as a reflection of speaker transition information, and thus leverage these complex relations to model speaker-aware information comprehensively. 2.2 Discourse Structure Modeling Discourse parsing focuses on the discourse structure and relationships of texts: its aim is to predict the relations between discourse units and to discover the discourse structure among those units. Discourse structure has shown benefits for a wide range of NLP tasks, including MRC on multi-party multi-turn dialogue (Asher et al. 2016; Xu et al. 2021; Ouyang, Zhang, and Zhao 2021; Takanobu et al. 2018; Gao et al. 2020; Jia et al. 2020). Beyond the application-motivated discourse parsing used in dialogue-related NLP tasks, most existing studies on linguistics-motivated discourse parsing are based on two annotated datasets, the Penn Discourse TreeBank (PDTB) (Prasad et al. 2008) and the Rhetorical Structure Theory Discourse TreeBank (RST-DT) (Mann and Thompson 1988). PDTB focuses on shallow discourse relations but ignores the overall discourse structure (Qin et al. 2017; Cai and Zhao 2017; Bai and Zhao 2018; Yang and Li 2018). In contrast, RST is constituency-based, where related adjacent discourse units are merged recursively to form larger units (Braud, Coavoux, and Søgaard 2017; Wang, Li, and Wang 2017; Yu, Zhang, and Fu 2018; Joty, Carenini, and Ng 2015; Li, Li, and Chang 2016; Liu and Lapata 2017). However, RST only discovers relations between neighboring discourse units, which is unsuitable for our concerned multi-party dialogues. In this work, we use discourse parsing as an application-motivated technique for the dialogue MRC task. Our task relies on dependency-based structures, where dependency relations may hold between any two adjacent or non-adjacent utterances, possibly presented by the same speaker (Shi and Huang 2019; Li et al. 2020).
Compared to the existing works mentioned above, our work is distinguished in that: 1) we leverage speaker-aware information comprehensively for better performance; 2) we are among the pioneers in modeling the speaker-aware discourse structure as graphs in dialogue MRC, to tackle the discourse tangle caused by speaker role transitions; 3) we are the first to study the general MRC task in the multi-party multi-turn dialogue scenario with enhanced speaker-aware clues. ### 3 Methodology Here we present our enhanced speaker-aware model, shown in Figure 2, which enhances speaker-aware information through three extended modules. Our model contains a PrLM for encoding; three modules for disentangling the complicated speaker-aware information, namely **Speaker Masking**, **Speaker Graph**, and **Discourse Graph**; and a span extraction layer that generates the final answer from the fused representations. In this section, we formulate the task and introduce each part of our model in detail. 3.1 Task Formulation Suppose we conduct MRC on a multi-party multi-turn dialogue context $C$, which consists of $n$ utterances and can be represented as $C = \{U_1, U_2, \ldots, U_n\}$. Each utterance $U_i$ consists of the name identity of its speaker and a sentence by that speaker, denoted $U_i = \{S_i, W_i\}$, where $W_i$ is an $l_i$-length sequence of words, $W_i = \{w_{i1}, w_{i2}, \ldots, w_{il_i}\}$. For this multi-party multi-turn dialogue context, a question $Q$ is put forward; the model is expected either to find a span from the dialogue context as the correct answer, or to decide that the question cannot be answered based on the provided dialogue context alone. 3.2 Encoding In order to use a PrLM such as BERT as an encoder to obtain contextualized representations, we first concatenate the question and the dialogue context in the form [CLS] question [SEP] context [SEP].
For the convenience of dividing utterances, we insert a [SEP] token between each pair of adjacent utterances. The concatenated sequence is fed into the PrLM, whose output gives the initial contextualized representation of each token, denoted $H \in \mathbb{R}^{L \times D}$, where $L$ denotes the input sequence length in tokens and $D$ the dimension of the hidden states. 3.3 Speaker Masking Having obtained the contextualized representations from the PrLM, we design a decoupling module to capture the speaker property of each utterance and represent the speaker transition information of the dialogue passage. We modify the mask-based Multi-Head Self-Attention (MHSA) mechanism proposed by Liu et al. (2021), adapting it to multi-party dialogues. The mask-based MHSA is formulated as follows: $$A(Q, K, V, M) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}} + M\right)V,$$ $$\text{head}_i = A(HW_i^Q, HW_i^K, HW_i^V, M),$$ $$\text{MHSA}(H, M) = [\text{head}_1, \text{head}_2, \ldots, \text{head}_N]W^O,$$ where $A$, $\text{head}_i$, $Q$, $K$, $V$, $M$ denote the attention, the $i$-th head, query, key, value, and mask, $H$ denotes the original representations from the PrLM, and $W_i^Q, W_i^K, W_i^V, W^O$ are parameters. The operator $[\cdot\,,\cdot]$ denotes concatenation. Since speakers in a multi-party dialogue do not simply take turns as in two-party dialogue, we have to explicitly identify the speaker of each utterance. In the implementation, we build a vector labeling the speaker identity of each utterance, according to which we mask attention between utterances from the same speaker and between utterances from different speakers. This step is denoted as: $$M_1[i, j] = \begin{cases} 0, & S_i = S_j \\ -\infty, & \text{otherwise,} \end{cases}$$ $$M_2[i, j] = \begin{cases} 0, & S_i \neq S_j \\ -\infty, & \text{otherwise,} \end{cases}$$ $$\text{Channel}_1 = \text{MHSA}(H, M_1),$$ $$\text{Channel}_2 = \text{MHSA}(H, M_2),$$ where $S$ denotes the speaker identity; thus $M_1$ and $M_2$ are the masks for the same speaker and for different speakers.
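As a concrete illustration, the speaker masks and the masked attention above can be sketched in NumPy. This is a minimal single-head version with $Q = K = V = H$ and no learned projections; the speaker names are taken from the example dialogue, and `-1e9` stands in for $-\infty$:

```python
import numpy as np

def build_speaker_masks(speakers):
    """M1 admits only same-speaker pairs; M2 only different-speaker pairs."""
    n = len(speakers)
    same = np.array([[speakers[i] == speakers[j] for j in range(n)]
                     for i in range(n)])
    neg = -1e9  # finite stand-in for -infinity
    m1 = np.where(same, 0.0, neg)   # Channel 1: within-speaker attention
    m2 = np.where(same, neg, 0.0)   # Channel 2: cross-speaker attention
    return m1, m2

def masked_attention(h, mask):
    """Single-head A(Q, K, V, M) with Q = K = V = h."""
    d_k = h.shape[-1]
    scores = h @ h.T / np.sqrt(d_k) + mask
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ h

speakers = ["benkong2", "Dr.Willis", "benkong2", "smo"]
m1, m2 = build_speaker_masks(speakers)
h = np.random.default_rng(0).random((4, 8))   # toy utterance representations
channel_1 = masked_attention(h, m1)           # same-speaker information
channel_2 = masked_attention(h, m2)           # different-speaker information
```

In the full model the masks are built at the token level (every token inherits its utterance's speaker), and the MHSA uses learned per-head projections.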
$\text{Channel}_1$ contains the decoupled information of the same speaker, while $\text{Channel}_2$ contains the decoupled information of different speakers, as shown in Figure 3. Figure 3: Speaker-aware masking for the example shown in Table 1. Finally, we fuse the information from $\text{Channel}_1$, $\text{Channel}_2$, and the original contextualized representation $H$ using the gate-based fusing method of Liu et al. (2021): $$E_1 = \text{ReLU}(\text{FC}([H, C_1, H \odot C_1])),$$ $$E_2 = \text{ReLU}(\text{FC}([H, C_2, H \odot C_2])),$$ $$G = \text{Sigmoid}(\text{FC}([E_1, E_2])),$$ $$H_C = G \odot C_1 + (1 - G) \odot C_2,$$ where $C_1$ and $C_2$ are shorthand for the two channels, and FC denotes a fully-connected layer. We thus obtain the speaker-aware representations $H_C$, which have the same size as the original contextualized representation $H$. 3.4 Graph Modeling Complicated transitions of speaker roles segment the text into separate utterances and break the consistency of the passage, resulting in intricate interrelations among utterances. We assume that these relations are a reflection of the speaker property and provide passage-level clues for MRC. We use graph neural networks to construct two heterogeneous graphs, a speaker graph and a discourse graph, both in the form of a relational graph convolutional network (R-GCN) following Schlichtkrull et al. (2018). The speaker graph models relations based on the speaker property of each utterance. The discourse graph is built on the speaker-aware discourse parsing relations, which result from the complex non-adjacent dependencies caused by speaker transitions and thus capture latent speaker-aware information. **Speaker Graph** Since the speaker property of each utterance heavily impacts the development of the dialogue, we build a speaker graph to model relations between utterances based on the speaker property.
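Before the graphs, the gate-based fusing at the end of Section 3.3 can be sketched in NumPy. The FC weights here are random stand-ins for learned parameters, and using a single shared FC for $E_1$ and $E_2$ is our simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 6, 8                      # toy sequence length and hidden size
H  = rng.random((L, D))          # original PrLM representations
C1 = rng.random((L, D))          # same-speaker channel
C2 = rng.random((L, D))          # different-speaker channel

W_e = rng.standard_normal((3 * D, D)) * 0.1  # stand-in FC weights (learned in practice)
W_g = rng.standard_normal((2 * D, D)) * 0.1

relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

E1 = relu(np.concatenate([H, C1, H * C1], axis=-1) @ W_e)
E2 = relu(np.concatenate([H, C2, H * C2], axis=-1) @ W_e)
G  = sigmoid(np.concatenate([E1, E2], axis=-1) @ W_g)
H_C = G * C1 + (1.0 - G) * C2    # gated mixture, same shape as H
```

Because $G \in (0, 1)$ elementwise, $H_C$ is an elementwise convex combination of the two channels, so neither channel can dominate everywhere.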
Specifically, we build an R-GCN that connects utterances from the same speaker, letting information be exchanged among the statements of one speaker so as to capture speaker manner. We denote the graph as $G_s = (V_s, E_s)$, where $V_s$ is the set of vertices and $E_s$ the set of edges. First we add vertices $v_1^s, v_2^s, \ldots, v_n^s$ to represent each utterance and a special global vertex $v_{n+1}^s$ for context-level information: $$V_s = \{v_1^s, \ldots, v_n^s, v_{n+1}^s\},$$ where $n$ is the number of utterances. For each pair of utterances sharing the same speaker, we construct an edge and its reverse, denoted $v_i^s \leftrightarrow v_j^s$ for $S_i = S_j$. Finally, we construct a self-loop $v_i^s \rightarrow v_i^s$ at each vertex and connect the global vertex to every other vertex, $v_{n+1}^s \rightarrow v_i^s$, $i \neq n + 1$. Figure 4 illustrates the graph structure of the example dialogue in Table 1, with different colors for different kinds of edges. The initial representations of the utterance vertices are the contextualized representations of the [SEP] tokens extracted from $H$, and the initial representation of the global vertex is formed by embedding. The information exchange process is formulated as: $$h_i^{(l+1)} = \sigma\left(\sum_{r \in \mathcal{R}} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)}\right),$$ where $\mathcal{R}$ denotes the set of relations, $N_i^r$ the set of neighbours of vertex $v_i$ connected to $v_i$ through relation $r$, and $c_{i,r} = |N_i^r|$ a normalization constant. $W_r^{(l)}$ and $W_0^{(l)}$ are the parameter matrices of layer $l$, and $\sigma$ is the activation function, ReLU in our implementation (Glorot, Bordes, and Bengio 2011; Agarap 2018). After information exchange with neighbouring vertices, we obtain for each utterance a vector containing speaker-aware interrelation information.
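The speaker-graph construction and one layer of R-GCN-style message passing can be sketched as follows. This is a simplified loop-based version with toy dimensions; a real implementation would use a graph library such as DGL or PyTorch Geometric:

```python
import numpy as np

def speaker_graph_edges(speakers):
    """Edges of the speaker graph: bidirectional same-speaker edges,
    self-loops, and edges from the global vertex (index n) to each utterance."""
    n = len(speakers)
    edges = {"same_speaker": [],
             "self": [(i, i) for i in range(n + 1)],
             "global": [(n, i) for i in range(n)]}
    for i in range(n):
        for j in range(i + 1, n):
            if speakers[i] == speakers[j]:
                edges["same_speaker"] += [(i, j), (j, i)]
    return edges

def rgcn_layer(h, edges, W_rel, W_self):
    """h'_i = ReLU(sum_r sum_{j in N_i^r} (1 / c_{i,r}) h_j W_r + h_i W_0)."""
    out = h @ W_self
    for rel, pairs in edges.items():
        msg = np.zeros_like(out)
        indeg = np.zeros(h.shape[0])       # c_{i,r} per destination vertex
        for src, dst in pairs:
            msg[dst] += h[src] @ W_rel[rel]
            indeg[dst] += 1
        out += msg / np.maximum(indeg, 1)[:, None]
    return np.maximum(out, 0.0)            # ReLU

speakers = ["benkong2", "Dr.Willis", "benkong2", "smo"]
edges = speaker_graph_edges(speakers)      # benkong2's utterances 0 and 2 linked
rng = np.random.default_rng(0)
D = 8
h = rng.random((len(speakers) + 1, D))     # 4 utterance vertices + 1 global
W_rel = {r: rng.standard_normal((D, D)) * 0.1 for r in edges}
h_next = rgcn_layer(h, edges, W_rel, rng.standard_normal((D, D)) * 0.1)
```

The relation-specific weight matrices `W_rel` correspond to $W_r^{(l)}$ above, one per edge type.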
After $L$ layers, we obtain $H_S^L \in \mathbb{R}^{(n+1) \times D}$ as the last-layer output of the graph. Based on the intuition that every token inside an utterance shares the same speaker information, we expand $H_S^L$ to the same dimensions as $H$ for later fusion, denoted $H_S \in \mathbb{R}^{L \times D}$. The extension is illustrated in Figure 5. **Discourse Graph** Discourse relations contain latent speaker-aware information. In parallel to the speaker graph, we build a graph according to the annotated discourse relations to connect the relevant utterance pairs. The preprocessing includes two steps: first, we assign a label to every considered relation; second, we simplify each relation into the form (first utterance, second utterance, relation label). We then construct a graph from these simplified relations, denoted $G_d = (V_d, E_d)$, where $V_d$ is the set of vertices and $E_d$ the set of edges. The following kinds of vertices are added to the graph: an utterance vertex for each utterance, a relation vertex for each relation instance, and a global vertex representing dialogue-level information: $$V_d = \{v_1^d, \ldots, v_n^d, v_{n+1}^d, \ldots, v_{n+n_r}^d, v_{n+n_r+1}^d\},$$ where $n$ is the number of utterances and $n_r$ the number of corresponding relations. In terms of $E_d$, for each relation $(v_i^d, v_j^d, r_m)$ we construct oriented edges $v_i^d \rightarrow r_m$ and $r_m \rightarrow v_j^d$, together with the reverse oriented edges $v_j^d \rightarrow r_m$ and $r_m \rightarrow v_i^d$. As in the speaker graph, we add a self-loop $v_i^d \rightarrow v_i^d$ at every vertex, and for each vertex except the global one, an edge from the global vertex, $v_{n+n_r+1}^d \rightarrow v_i^d$, $i \neq n + n_r + 1$. An example is shown in Figure 6. Figure 6: Discourse graph of the example dialogue in Table 1.
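The discourse-graph construction from the simplified relation triples can be sketched as below. The edge orientations follow our reading of the construction above (utterance → relation vertex → utterance, plus reverse edges), and the relation labels are taken from the ones appearing in this paper's case analysis:

```python
def build_discourse_graph(n_utts, relations):
    """Vertices 0..n-1: utterances; n..n+n_r-1: one vertex per relation
    instance; n+n_r: global vertex. Returns labeled, oriented edges."""
    n_r = len(relations)
    global_v = n_utts + n_r
    edges = []
    for m, (i, j, label) in enumerate(relations):
        r = n_utts + m                      # vertex for this relation instance
        edges += [(i, r, label), (r, j, label),                     # u_i -> r_m -> u_j
                  (j, r, label + "-rev"), (r, i, label + "-rev")]   # reverse path
    edges += [(v, v, "self") for v in range(global_v + 1)]          # self-loops
    edges += [(global_v, v, "global") for v in range(global_v)]     # global -> all
    return edges

# Two relations as in the example dialogue: a QAP and a Clarification-question.
edges = build_discourse_graph(10, [(7, 8, "QAP"),
                                   (7, 9, "Clarification-question")])
```

Message passing over this graph is then the same R-GCN update as for the speaker graph, with one weight matrix per edge label.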
Similar to the speaker graph, the initial representations of the utterance vertices are the contextualized representations of the [SEP] tokens, while the initial representations of the relation vertices and the global vertex are formed by embedding. After fusing information from related vertices, we obtain for each utterance a vector containing speaker-aware discourse structure information. The message-passing formulation is the same as for the speaker graph, where the set of relations $\mathcal{R}$ contains more kinds of relations, as shown in Figure 6. The output of the last layer of the discourse graph is denoted $H_G^L \in \mathbb{R}^{(n+n_r+1) \times D}$; we keep the vectors for the utterances, $H_G^L[0:n]$, and conduct the same extension as shown in Figure 5, obtaining $H_G \in \mathbb{R}^{L \times D}$.

Table 2: Experimental results on the test set of Molweni and FriendsQA. All results are from our implementations except the public baselines.

<table>
<thead>
<tr><th>Model</th><th>Molweni EM</th><th>Molweni F1</th><th>FriendsQA EM</th><th>FriendsQA F1</th></tr>
</thead>
<tbody>
<tr><td>BERT_base</td><td>45.3</td><td>58.0</td><td>51.8</td><td>65.5</td></tr>
<tr><td>+Speaker Masking</td><td>49.6</td><td>63.4</td><td>52.7</td><td>65.8</td></tr>
<tr><td>+Speaker Graph</td><td>49.0</td><td>63.3</td><td>52.7</td><td>66.0</td></tr>
<tr><td>+Discourse Graph</td><td>49.0</td><td>63.0</td><td>52.1</td><td>65.5</td></tr>
<tr><td>+Our architecture</td><td>49.7</td><td>64.4</td><td>52.9</td><td>66.9</td></tr>
<tr><td>BERTwwm</td><td>53.9</td><td>67.5</td><td>53.9</td><td>67.5</td></tr>
<tr><td>+Speaker Masking</td><td>55.8</td><td>68.7</td><td>55.8</td><td>68.7</td></tr>
<tr><td>+Speaker Graph</td><td>54.9</td><td>68.9</td><td>54.9</td><td>68.9</td></tr>
<tr><td>+Discourse Graph</td><td>55.2</td><td>68.3</td><td>55.2</td><td>68.3</td></tr>
<tr><td>+Our architecture</td><td>56.0</td><td>69.1</td><td>56.0</td><td>69.1</td></tr>
<tr><td>ELECTRA</td><td>57.3</td><td>70.4</td><td>57.3</td><td>70.4</td></tr>
<tr><td>+Speaker Masking</td><td>57.9</td><td>71.0</td><td>57.9</td><td>71.0</td></tr>
<tr><td>+Speaker Graph</td><td>57.6</td><td>72.1</td><td>57.6</td><td>72.1</td></tr>
<tr><td>+Discourse Graph</td><td>58.4</td><td>71.8</td><td>58.4</td><td>71.8</td></tr>
<tr><td>+Our architecture</td><td>58.6</td><td>72.2</td><td>58.6</td><td>72.2</td></tr>
</tbody>
</table>

Table 3: Ablation study.

<table>
<thead>
<tr><th>Model</th><th>EM</th><th>F1</th></tr>
</thead>
<tbody>
<tr><td>BERT_base</td><td>45.3</td><td>58.0</td></tr>
<tr><td>+Speaker Masking</td><td>49.6</td><td>63.4</td></tr>
<tr><td>+Speaker Graph</td><td>49.0</td><td>63.3</td></tr>
<tr><td>+Discourse Graph</td><td>49.0</td><td>63.0</td></tr>
<tr><td>+Our architecture</td><td>49.7</td><td>64.4</td></tr>
<tr><td>BERT_large</td><td>51.8</td><td>65.5</td></tr>
<tr><td>+Speaker Masking</td><td>52.7</td><td>65.8</td></tr>
<tr><td>+Speaker Graph</td><td>52.7</td><td>66.0</td></tr>
<tr><td>+Discourse Graph</td><td>52.1</td><td>65.5</td></tr>
<tr><td>+Our architecture</td><td>52.9</td><td>66.9</td></tr>
<tr><td>BERTwwm</td><td>53.9</td><td>67.5</td></tr>
<tr><td>+Speaker Masking</td><td>55.8</td><td>68.7</td></tr>
<tr><td>+Speaker Graph</td><td>54.9</td><td>68.9</td></tr>
<tr><td>+Discourse Graph</td><td>55.2</td><td>68.3</td></tr>
<tr><td>+Our architecture</td><td>56.0</td><td>69.1</td></tr>
<tr><td>ELECTRA</td><td>57.3</td><td>70.4</td></tr>
<tr><td>+Speaker Masking</td><td>57.9</td><td>71.0</td></tr>
<tr><td>+Speaker Graph</td><td>57.6</td><td>72.1</td></tr>
<tr><td>+Discourse Graph</td><td>58.4</td><td>71.8</td></tr>
<tr><td>+Our architecture</td><td>58.6</td><td>72.2</td></tr>
</tbody>
</table>

### 3.5 Fusing Decoupled information from the three aforementioned modules is fused to predict the answer. We concatenate $H_C$, $H_S$, $H_G$, and $H$ to obtain the final speaker-enhanced contextualized representations: $$P = \left[ H_C, H_S, H_G, H \right].$$ Following the standard process for span-based MRC (Devlin et al. 2019; Glass et al. 2020; Zhang, Yang, and Zhao 2021), the representations are fed to a fully connected layer to calculate the probability distributions of the start and end positions of the answer span, and the cross-entropy function is used as the training objective to be minimized. ### 4 Experiments Our method is evaluated on the multi-party multi-turn dialogue MRC benchmarks Molweni (Li et al. 2020) and FriendsQA (Yang and Choi 2019). #### 4.1 Datasets **Molweni** Molweni (Li et al. 2020) is a multi-party multi-turn dialogue dataset derived from the Ubuntu Chat Corpus (Lowe et al. 2015), consisting of 10,000 multi-party multi-turn dialogue contexts. On average, each dialogue context contains 8.82 utterances from 3.51 speaker roles. The following annotations are made on the raw data, making Molweni an ideal evaluation dataset for our research: 1) answerable and unanswerable extractive questions for each dialogue; 2) elementary discourse units (EDUs) at the utterance level, each including the utterance and a speaker name; 3) discourse relations for each dialogue passage, reflecting the interrelations between utterances. **FriendsQA** To verify generality, we also evaluate our model on FriendsQA (Yang and Choi 2019), a challenging multi-party multi-turn dialogue dataset including 1,222 human-to-human conversations from the TV show Friends, with 10,610 answerable extractive questions annotated. Discourse relations are annotated using the tool of Shi and Huang (2019). #### 4.2 Baseline Following Li et al.
(2020), we use BERT as a naive baseline, where the contextualized output is used directly for span extraction. In addition, we compare our model with existing speaker-aware work (Liu et al. 2021; Gu et al. 2020). Since these works target the response selection task in two-party scenarios or on datasets without explicit speaker annotations, we adapt and re-implement their ideas for the QA task in the multi-party scenario. We also apply BERT_large and BERTwwm (whole word masking) as baselines, to see whether the advantage of our method still holds on top of stronger PrLMs. 4.3 Setup Our implementations are based on the Transformers library (Wolf et al. 2020). Exact match (EM) and F1 score are the two metrics used to measure performance. We fine-tune our model using AdamW (Loshchilov and Hutter 2019) as the optimizer. The learning rate is set to 3e-5, 5e-5, or 4e-6. The input sequence length is set to 348 tokens, to which our inputs are truncated or padded. 4.4 Results Table 2 shows the results of our experiments. Our model outperforms all baselines and achieves state-of-the-art performance on the Molweni benchmark. The results indicate that our model effectively captures speaker role information and speaker-aware discourse structure information, strengthening multi-party multi-turn MRC. 5 Analysis 5.1 Ablation study Since our speaker-aware enhancing method comprises three separate modules, we perform an ablation study to verify their individual contributions. We ablate each module in turn and train under the same hyper-parameters. As shown in Table 3, each module plays an effective part in the whole model, and the Speaker Masking module contributes the most.
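The EM and F1 metrics used above can be illustrated with simplified versions of the standard SQuAD-style scoring (the official evaluation scripts additionally strip articles and handle unanswerable questions; this is only a sketch):

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def exact_match(prediction, gold):
    """1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-overlap F1 between the predicted and gold answer spans."""
    pred, ref = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Linux", "linux"))                    # 1.0
print(round(f1_score("the linux kernel", "linux"), 2))  # 0.5
```

F1 rewards partial overlap, which is why it is consistently higher than EM in Tables 2 and 3.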
5.2 Case Analysis To intuitively show how our model improves MRC on multi-party multi-turn dialogues, we analyze predictions from the baseline (BERT_base) and from our model, showing how our speaker-aware enhancement strategies fix cases the baseline gets wrong. We select examples of different question types and compare the predictions, as shown in Figure 7. In the first, Who-type case, the answer given by the baseline model is gnomefreak, the speaker name nearest to the phrase opened the repositories. In contrast, lightbright, the answer given by our model, is the gold answer: it is the speaker of the utterance containing the phrase opened the repositories. Our model fixes this case because we regard each utterance as an EDU and effectively model the speaker information. For the Why-type question in case 2, the baseline model fails to find a plausible answer. However, the Clarification-question relation and QAP relation among u7 (from fyrestrtr), u8 (from alexxOrsova), and u9 (from alexxOrsova) are obvious, and they are captured by our model. In the third, What-type case, the answer ubuntu given by the baseline model is already reasonable, based on u2, which contains the key word use. But our model gives the gold answer linux, a more precise span from u0, which is from noone. As these cases show, our model enhances the connections between utterances and their own speakers and captures the speaker-aware discourse relations, which helps fix some wrong cases. 6 Conclusion In this work, we study machine reading comprehension on multi-party multi-turn dialogues and propose an enhanced speaker-aware model that models speaker information comprehensively and is, to our knowledge, the first to leverage discourse relations in dialogue MRC. Our model is evaluated on two multi-party multi-turn dialogue benchmarks, Molweni and FriendsQA. Experimental results show the superiority of our method compared to previous work.
In addition, we analyze the contribution of each module through an ablation study and present examples for intuitive illustration. Our work verifies that speaker roles and their interrelations are significant characteristics of dialogue contexts; our model benefits from enhancing the connections between utterances and their speakers and from capturing the speaker-aware discourse relations. References
The Role of Computer Security Incident Response Teams in the Software Development Life Cycle

ABSTRACT: This article describes one type of organizational entity that can be involved in the incident management process, a Computer Security Incident Response Team (CSIRT), and discusses what input such a team can provide to the software development process and what role it can play in the SDLC. CSIRTs in organizations performing software development and in related customer organizations may have valuable information to contribute to the life cycle. They may also be able to learn valuable information from developers concerning the criticality, operation, and architecture of software and system components that will help them identify, diagnose, and resolve computer security incidents in a more timely manner.

INTRODUCTION

Incident management activities, while not specifically called out in the software development life cycle (SDLC), are an important part of the maintenance, operations, and sustainment of any software or hardware product. Decisions made during the SDLC, from user interface design to providing facilities for patch management, can significantly change the likelihood of incidents and the success of any response to them. Knowledge gained from detecting and responding to computer security incidents provides insight into real risks and threats to the integrity, confidentiality, and availability of software and hardware products. This information can be used at the beginning of the SDLC to help define better security requirements in products and provide a better understanding of the threat environment within which these products must operate. Knowledge gained from containing and mitigating computer security risks and threats can also help identify auditing and recovery requirements for systems and software.
Such requirements can include building in alerting capabilities when files and components that should not be changing are modified, establishing policy and configuration setting capabilities to identify and control specific software and hardware components that should not be changed during normal operations, or providing functionality for logging unauthorized changes or malicious attacks in a way that preserves evidence in a forensically sound manner. Although computer security incident management may seem to come at the end of the SDLC, the more that knowledge from such activities is applied throughout the design, development, and implementation of operating systems, applications, enterprise systems, and networks, the more effective both the SDLC process and the incident management process will be. In the long run, feedback from incident management activities can be used to produce systems that are easier to manage, have reduced operational risk, are less impacted by cyber attacks, and have improved networked systems security and survivability. Knowledge of the intricacies of how specific software and hardware components function and interface with each other is critical to understanding how those systems are at risk, how they can be exploited, and how incidents can be successfully mitigated. This type of information is usually known by the systems' developers, administrators, and owners. Incident management staff, especially in a large software-diverse organization, may not have this knowledge. Being able to rely on the expertise of developers for assistance in analyzing not only the risks but also the best resolution strategies for systems can decrease the time it takes to contain and recover from malicious activity.
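At its core, the "alert when files that should not be changing are modified" requirement above is baseline integrity checking: record a cryptographic fingerprint of each protected component, then compare later. A minimal sketch (the file paths and contents are invented for illustration):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest used as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def detect_changes(baseline: dict, current: dict) -> list:
    """Compare a recorded baseline of {path: digest} against current
    digests and report any protected file that changed or disappeared."""
    alerts = []
    for path, digest in baseline.items():
        if path not in current:
            alerts.append((path, "missing"))
        elif current[path] != digest:
            alerts.append((path, "modified"))
    return alerts

# Example: one protected configuration file is tampered with.
baseline = {"/etc/app.conf": fingerprint(b"max_conn=10")}
current = {"/etc/app.conf": fingerprint(b"max_conn=10000")}
```

A real deployment would read digests from disk on a schedule and feed the alerts into the incident management process; the comparison logic stays this simple.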
How successfully information is collected and shared between product developers and incident management staff will depend not only on the group responsible for incident management activities but also on the structured relationships between that group and the system developers. Best practices in CSIRT development and implementation call for the CSIRT to identify staff and departments throughout the enterprise with whom they must coordinate and work. Although this list includes many throughout the IT, security, and management structure, it very rarely if ever includes system developers. This article will build a case for why such an interface is important and will explore some methods for encouraging such interaction. Before proceeding, some basic introductory information on what a CSIRT is and what it does may be required. For details, see the BSI article Defining Computer Security Incident Response Teams.

CSIRTS AND THE SOFTWARE DEVELOPMENT LIFE CYCLE

Computer security incidents can often be the first place where symptoms of wider, ongoing problems are noticed. Such information—if properly captured and disseminated—can help to identify root causes of vulnerabilities, recurring security problems and incidents, and mitigation and resolution strategies related to such incidents and vulnerabilities. Historically, this information has not been passed to developers or introduced into the SDLC. Incident response personnel or CSIRT staff have not generally had an established interface with developers. This is beginning to change as more people in both fields (security and software engineering) have begun to understand the benefits of such information sharing. There are many points in the SDLC where information exchange between these two communities can provide useful information for designing better systems as well as handling incidents on those systems.
Because CSIRT staff know what type of information they need in order to understand and resolve incidents, they will have their own set of requirements for how detection, response, and remediation processes should be built into or at least supported by software systems and applications. These types of requirements need to be incorporated early in the design phase. Such requirements might relate to what type of event logging is required at an application or host level, what type of notification is required for alerting staff to unauthorized system changes, or what type of functionality would allow for rapid containment. A good example of incident response functional requirements might be designing software that is forensically friendly. This means designing software that has been engineered to capture information and evidence in a forensically sound manner, providing a record of where information came from and how it was collected in a way that cannot be repudiated. When collecting incident data or evidence, developers could work with incident handlers to understand how the data is to be analyzed, used, and archived as part of the response process. There may be specific information that can be collected and structured for automatic input into a tracking system or repository, or collected in a way that increases the completeness of, and confidence in, the information. Many of the topic and content areas within the BSI Web site describe components of the SDLC where knowledge between developers and CSIRT staff can be exchanged. Many topics provide best practices and guidance, outlining both requirements for better software and systems and ways to improve the development process. Reviewing some of these requirements and methods identifies multiple areas where information on software and hardware vulnerabilities and malicious attacks and events is used to identify potential risks and threats to the security and quality of products.
Understanding the cause, history, and mitigation of these risks and threats can provide information that can be used to design better software requirements, model threat environments, test and evaluate software security and recovery components, and measure software survivability. A CSIRT, by virtue of its mission and function, is a repository of incident and vulnerability information affecting its parent organization as well as its constituency. This information can be used to provide real-life risk and threat information. It can also be used to provide guidance for reviewing and testing the security components and requirements of developed or acquired hardware and software. The following sections support this assertion by presenting various topics from the BSI collection and highlighting where they call out the need for information that could be provided by a CSIRT. The following sections also discuss some potential interfaces where CSIRT information and bi-directional interaction can be introduced into the SDLC.

**Software Assurance**

The draft document Security in the Software Life Cycle states that "Software assurance … includes the disciplines of software reliability…, software safety, and software security." Software security is defined in the same document as "the ability of software to resist, tolerate, and recover from events that intentionally threaten its dependability." The document goes on to say that

The objective of software security is to design, implement, configure, and support software systems in ways that enable them to

1. "Continue operation correctly in the presence of most attacks by either restricting the exploitation of faults or other weakness in the software by the attacker, or tolerating the errors and failures that result from such exploits;

2. "Isolate, contain, and limit the damage resulting from any failures caused by attack-triggered faults that the software was unable to resist or tolerate, and recover as quickly as possible from those failures" [Goertzel 2006].

Because of the nature of CSIRT activities, information is continually collected on how various exploits and vulnerabilities work, what new and emerging trends and threats are developing, and what new mitigation tactics and strategies work best to isolate, contain, limit, or eradicate any damage. CSIRTs often perform these analysis and response functions or coordinate with other groups in the organization that perform these functions. Knowledge from such activities can be used to develop abuse and business cases that can be used in the development of system requirements as well as operational exercises that can be used to test the security of system components. This information can also be used to understand the root cause of vulnerabilities so that preventive measures and configurations can be built into software and hardware. Information on how vulnerabilities are exploited can provide guidance to software developers on how to eliminate those vulnerabilities by building defenses into systems, software, and hardware. Knowing what intruders are trying to obtain or compromise identifies what needs to be protected and gives software developers information on what components need to crash or recover gracefully without exposure to exploitation. As active participants in the handling of computer security incidents and vulnerabilities, CSIRT staff understand the type of tools required to perform such work efficiently and effectively. CSIRTs can provide information to system developers about what software functions need to be built into products to support incident detection, analysis, and mitigation activities.
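One concrete way to make software "forensically friendly" in the sense discussed earlier is to record security events in an append-only, hash-chained log: each record's digest covers the previous record, so any later modification breaks the chain and is detectable. A minimal sketch, not tied to any particular product or standard:

```python
import hashlib
import json

class TamperEvidentLog:
    """Append-only log in which each record carries a hash chained to
    the previous record, so after-the-fact edits are detectable."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value for the chain

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self._prev = digest

    def verify(self) -> bool:
        """Recompute the chain; any altered record invalidates it."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = TamperEvidentLog()
log.append({"user": "alice", "action": "login"})
log.append({"user": "alice", "action": "delete_record"})
```

A production design would add signing and secure storage, but even this simple chaining gives incident handlers a record whose integrity can be independently checked.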
**Security Requirements Engineering**

Of course it makes sense that CSIRTs and software developers would work together during the requirements elicitation phase of the SDLC, specifically in defining security requirements. The BSI site describes requirements elicitation as "best practices for security requirements engineering, including processes that are specific to eliciting, specifying, analyzing, and validating security requirements." In Security Requirements Engineering, Mead talks about various methods that can be used to help define security requirements specific to particular applications and then test that those requirements are being met [Mead 2006b]. With their understanding of attackers, motives, targets, and techniques, CSIRTs should be involved in any security requirements elicitation activity to provide different views and probabilities for what type of attacks might be realistically executed. In Requirements Elicitation Introduction, Mead reviews numerous elicitation methods ranging from controlled requirements expression (CORE) to misuse cases to Joint Application Development (JAD). CSIRTs would be valid participants in any of these methods as they can help identify security problems based on historical incident data and trends as well as predictions of future intruder activity.

**Attack Patterns**

Another strategy for understanding software risks and corresponding mitigation strategies is the area of attack patterns. Attack patterns, as described by Barnum and Sethi, provide insight into methods used to exploit, compromise, and basically "break" software. According to the authors, attack patterns are "an abstraction mechanism for describing how a type of observed attack is executed" [Barnum 2006a]. The attack is described from the perspective of the intruder or attacker.
The BSI content area related to attack patterns lists common fields of information that are captured to describe the details of an attack [Attack Patterns 2006]:

- Pattern name and classification
- Attack prerequisites
- Description
- Targeted vulnerabilities or weaknesses
- Method of attack
- Attacker goal
- Attacker skill level required
- Resources required
- Blocking solutions
- References

Such information can be very useful in the development of software security requirements. In Introduction to Attack Patterns, Barnum and Sethi explain how understanding attack patterns can provide software developers with insight into the real environment in which software and hardware must exist. Software developers can then use this information to develop software requirements and resulting products that are not only more resistant to such attacks but also able to recover more quickly when attacked.

The incentive behind using attack patterns is that software developers must think like attackers to anticipate threats and thereby effectively secure their software. Due to the absence of information about software security in many curricula and the traditional shroud of secrecy surrounding exploits, software developers are often ill-informed in the field of software security and especially software exploitation. The concept of attack patterns can be used to teach the software development community how software is exploited in reality and to implement proper ways to avoid the attacks [Barnum 2006a].

CSIRT staff and their incident and vulnerability repositories are valuable sources of information about current and new attack patterns and trends. CSIRT staff can also serve as subject matter experts to provide an in-depth perspective on how these attacks and corresponding mitigation strategies work and how they can be translated into software development requirements. Through their day-to-day work, CSIRT staff can help distill the patterns that are evident across various attacks.
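The common fields listed above lend themselves to a simple structured record, which is one way a CSIRT and a development team could exchange attack patterns in machine-readable form. A sketch (the field names and example values are illustrative, not a formal schema):

```python
from dataclasses import dataclass, field

@dataclass
class AttackPattern:
    """Structured record mirroring the common attack-pattern fields;
    an illustrative shape, not a standard exchange format."""
    name: str
    classification: str
    description: str
    prerequisites: list = field(default_factory=list)
    targeted_weaknesses: list = field(default_factory=list)
    method_of_attack: str = ""
    attacker_goal: str = ""
    skill_level: str = "unknown"
    resources_required: list = field(default_factory=list)
    blocking_solutions: list = field(default_factory=list)
    references: list = field(default_factory=list)

# Example entry a CSIRT might contribute to a shared pattern catalog.
sql_injection = AttackPattern(
    name="SQL Injection",
    classification="injection",
    description="Attacker-supplied input is interpreted as SQL.",
    prerequisites=["user input reaches a query unsanitized"],
    blocking_solutions=["parameterized queries", "input validation"],
)
```

Keeping patterns in a structure like this lets requirements work query them directly (for example, pulling every blocking solution into a security requirements checklist).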
Organizing joint sessions to brainstorm new attack patterns or review existing patterns could be one way that managers stimulate interaction between software developers and incident management staff. The output of this interaction can provide input into software requirements for products to be able to defend against such attacks. It is not enough to just understand the specifics of one attack. Looking at the higher level problem or the pattern across the attacks is what will help developers build more secure and resilient software. In Attack Pattern Usage, Barnum and Sethi state that Attack patterns can be an invaluable resource for helping to identify both positive and negative security requirements...Many vulnerabilities result from vague specifications and requirements...Requirements should specifically address these ambiguities to avoid opening up multiple security holes [Barnum 2006b]. Attack patterns can provide information on common security flaws and problems. These problems can be addressed in various parts of the SDLC. Initially, they can be used to identify those security requirements that need to be met through the development and design phases. They can then be used to help generate sample attacks that can be used to test that the software security requirements have been met and that the software reacts to the attacks in the desired manner. In general, attack patterns allow the requirements gatherer to ask “what if” questions to make the requirements more specific. If an attack pattern states “Condition X can be leveraged by an attacker to cause Y,” a valid question may be “What should the application do if it encounters condition X?” [Barnum 2006b]. 
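The "what if" questions above translate naturally into executable security tests: if a pattern says "condition X can be leveraged by an attacker to cause Y," a test feeds condition X to the component and checks that the reaction is the one the requirements specify. A hypothetical sketch (the component, its requirement, and the inputs are invented for illustration):

```python
# Hypothetical component under test: a field parser whose security
# requirement says over-long or non-numeric account IDs must be
# rejected with a controlled error, never passed through.
def parse_account_id(raw: str) -> int:
    if len(raw) > 12 or not raw.isdigit():
        raise ValueError("rejected by input validation")
    return int(raw)

def reacts_safely(condition: str) -> bool:
    """Security check derived from an attack pattern: feed 'condition X'
    to the component and confirm the specified reaction (a controlled
    rejection) occurs rather than acceptance of the malicious input."""
    try:
        parse_account_id(condition)
        return False  # malicious input was accepted: requirement not met
    except ValueError:
        return True   # rejected as the requirement specifies

# Sample "condition X" inputs a CSIRT might supply from real incidents.
malicious_inputs = ["1 OR 1=1", "9" * 64, "DROP TABLE users"]
```

CSIRT staff can supply the malicious inputs from real incident data, while developers wire the checks into the regular test suite.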
During the testing phase, the same attack should be executed or simulated and then the question becomes "Does the application react to condition X according to the identified security requirements?" This should be followed by the question "Does the reaction prevent the attack from succeeding (or at least mitigate its impact)?" CSIRT and other incident management staff can help design and execute these tests and corroborate that the reaction is the desired one. Other questions might include "Are there any other conditions that would cause Y to happen?" or "Can condition X be leveraged by an attacker to cause a different result?" Again, CSIRT experience in the areas of incident and vulnerability handling and mitigation, along with collected historical attack information, can be used to help create and understand attack patterns and corresponding mitigation strategies. This information, as previously said, can then be used to develop software requirements related to preventing, deterring, or mitigating such attacks.

**Threat Modeling**

Threat modeling, in a general sense, addresses who would want to attack you, how they might do it, and what resources they might use to do it. CSIRTs should be actively participating in such threat modeling to help software developers understand who might want to attack their applications, what these attackers are looking for (i.e., what's valuable to them), and what type of techniques they might use to perform the attacks. CSIRT staff can also use their experience and knowledge of the intruder community to help identify how realistic different attack scenarios will be based on their understanding of the likely threat. CSIRT staff can contribute case studies for threat modeling and also explain new trends, emerging attack techniques, and intruder behavior and motivations. Threat modeling has also been defined as "a structured approach for identifying, evaluating, and mitigating risks to system security" [Goertzel 2006].
This is another area where input from CSIRTs' real-life experiences, expertise, and research can be used to help determine new and emerging threats and risks. Goertzel and colleagues recommend building risk analysis throughout the development process. Risk assessments can be used to help verify that identified risks and threats are being adequately addressed or handled through either safe and secure software failure modes, secure configurations, implemented auditing and alerting mechanisms, or planned incident response procedures. CSIRT staff, although not usually trained in risk assessment methodologies, can provide input about how their organization's critical systems and data may be at risk. They may participate on the assessment team or be interviewed by the assessment team as subject matter experts.

**Architectural Risk Analysis**

The Architectural Risk Analysis section of the BSI Web site discusses the process for conducting architectural risk assessments. The Architectural Risk Analysis document defines this process as "a risk management process that identifies flaws in software architecture and determines risks to business information assets that result from those flaws" [Hope 2005]. The document outlines the process and lists the key steps as

- Asset identification
- Architectural risk analysis (threats and vulnerabilities)
- Risk mitigation
- Risk management and measurement

Here again, CSIRT case studies, situational awareness, and expertise can provide information into the SDLC. By virtue of their expertise and day-to-day operational tasks, CSIRTs collect, track, record, and analyze real-life information on threats and vulnerabilities that may impact their parent organization or constituency's enterprise. As previously discussed, they can also provide input about attack patterns and identified historical risks. All of this information is useful in performing architectural risk analysis.
Architectural Risk Analysis is described in the IEEE article "Bridging the Gap between Software Development and Information Security" by Gary McGraw and Kenneth van Wyk, as one of a series of touchpoints where coordination and data sharing between software developers, risk analysts, and CSIRT staff (along with other information security experts) can provide real-life information on the types of risks and threats that software developers must take into account when designing and implementing software and systems [McGraw 2005].

**Touchpoints in the SDLC**

McGraw and van Wyk go on to describe other touchpoints that, like Architectural Risk Analysis, illustrate how the expertise and lessons learned from information security staff and specialists (e.g., CSIRTs) can enhance secure software development efforts. These touchpoints, which are presented as best practices that can be implemented in the SDLC in an effort to allow for collaboration and coordination between the normally isolated areas of information security and software development, include

- "developing abuse cases that can be used to help refine requirements and build business cases"
- performing business risk analysis
- implementing test planning such as security functionality and risk-driven testing
- performing code review
- performing penetration testing
- deploying and operating applications in a secure …environment" [McGraw 2005]

**Abuse Cases**

It's easy to see how CSIRTs can provide real examples for developing abuse cases. They know the real abuse that goes on within their own environment and study abuses that happen to others. They share information on attacks with other CSIRTs, read articles on such cases, and attend conferences to learn about this type of information. They work to understand the technical details of attacks and vulnerability exploits and can use this knowledge to develop abuse scenarios.
**Business Risk Analysis**

Working with auditors and risk analysts, infrastructure groups, and software developers, CSIRTs—whether internal to the software development organization or coming from the organization's customers—can provide input into the types of risks and threats that may impact business operations. By itself, the CSIRT is not generally the group that calculates the business impact, but it is important that they understand the business impact so that they can prioritize response actions. The CSIRT, though, through their experiences, will be able to explain how various attacks and compromises can occur, providing the true details of the risk. Auditors, along with financial and risk analysts working with the groups that maintain and support enterprise software and systems, can use this information to determine what impacts such malicious activity will have on the infrastructure and thereby the business. CSIRTs within customer organizations can provide more in-depth information about their own environment and the risks and threats they face as well as the general remedial assistance they require in handling computer security incidents. This group may be a good reference for software developers to understand customers' day-to-day incident handling requirements. Engaging customer CSIRTs through focus groups, as part of a needs analysis or software design process, may alert developers to common problems that can be remediated in the initial software development activity.

**Software Testing**

Although the CSIRT is not the group that will normally perform any software testing, they can provide test scenarios based on their real-life experiences or research. Real-life scenarios will provide a good method of testing how well software stands up to and handles current risks as well as what type of security functions have been configured and implemented.
Although the CSIRT is not generally part of the testing group, including CSIRT members in the testing effort could be another approach to ensuring interaction between developers, testers, and incident management specialists. CSIRT staff could focus any testing they might do on looking for common secure coding errors and vulnerabilities.

**Code Reviews**

McGraw and van Wyk state that it is generally not the information security staff (CSIRTs, in our case) that reviews code to ensure that security bugs and weaknesses have been removed or mitigated [McGraw 2005]. However, some members of CSIRTs may actually have the expertise to do this type of review. There is a small cadre of CSIRT staff and security experts who are well versed in certain malicious code and artifact analysis techniques—such as surface analysis (including source code review) and reverse engineering techniques (disassembly and decompilation of binary code)—who have the skills to do a security code review. Many teams, however, may not have the resources and time to allow such proactive activities to occur. It may be possible, though, to take some general lessons learned from CSIRT artifact analysis work and synthesize these into best practices that can then be incorporated into secure programming methods, techniques, and training. Such information may also be used to implement security quality testing as part of a code review. A few books on secure coding released over the past few years present these types of best practices.

**Penetration Testing**

Depending on the services provided, a CSIRT may perform penetration testing as part of its operational activities (either by request or on a periodic basis with management approval).
If the CSIRT is performing this function, it is important for any weaknesses and vulnerabilities discovered in systems to be relayed back to developers so that they can fix current problems and also incorporate these lessons into any future software development activities. If the CSIRT does not perform penetration testing, they can still provide real-life exploit scripts and malicious code that might be incorporated into a penetration testing toolkit that can be used by other groups in the enterprise that are responsible for performing this activity. Use of such malicious code, though, should always be done cautiously. CSIRTs often keep a database of such exploit and malicious code in their incident tracking system or in a separate secure repository. Such code can be analyzed for comparative purposes or to identify patterns and trends in exploitation across protocols, applications, or operating systems. It can also be used to help build anti-virus and intrusion detection signatures. Another significant area where penetration testing can play a part in the SDLC is discussed by van Wyk in his BSI document Adapting Penetration Testing for Software Development Purposes. He states that penetration testing is usually not incorporated early enough into the SDLC, being done after a product is deployed rather than as part of the testing phase. He recommends involving security and incident management staff earlier to perform penetration testing to help determine significant security problems before products are placed into production. This will allow for remediation of risks that, if left until deployment, could open software and hardware components to significant malicious impact [van Wyk 2006]. He also discusses the benefits of black box and white box penetration testing approaches.
"In a black box test, the tester starts with no knowledge whatsoever of the target system, whereas in a white box test, the tester gets details about the system and can assess it from a knowledgeable insider's point of view" [van Wyk 2006]. White box testing is one activity where CSIRT or other incident management staff and software developers could collaborate. This is one way to develop an interface or communications channel between these diverse groups. The software developers have the detailed view of how the components operate, and the CSIRT or incident management staff can apply this knowledge during penetration testing exercises.

**Deployment and Operations**

McGraw and van Wyk argue that deployment and operations should be viewed as part of the SDLC [McGraw 2005]. If one accepts this argument, then deployment and operations provide another touchpoint where CSIRT staff can have input into the SDLC by providing information that can be used during deployment and operations activities to prevent incidents and protect critical assets. Specifically, CSIRTs can provide best practice guidance for implementing secure default network and system configurations to prevent incidents. They, by virtue of their position, will also be the key players in instituting any type of incident reporting guidelines and response plans for handling computer security incidents that do occur. CSIRTs collect and analyze data to determine the cause of computer security incidents as well as the correct strategy to mitigate the resulting danger. Information resulting from this analysis can be fed back into the software design, deployment, and operations phases to help prevent similar incidents in the future. The deployment and operations phases of the SDLC should include activities to detect, analyze, and respond to computer security incidents. Performing such activities successfully is critical to the sustainment of any software and hardware product.
Software design should also then include requirements for supporting these activities. Since CSIRT or incident management staff generally perform these functions, they can be a good source of information for building such software requirements. Involving CSIRTs early in the SDLC, through joint design meetings or requirements-setting interviews, can help determine what software products require in order to be easily remediable. For example, most incident handlers will agree that software logging features are not adequate and that the currently available analysis and archival tools are not optimized or easy to use. Searching and correlating information can be difficult, and retaining incident information in a way that can be reviewed to help understand current problems and needed remediation leaves much to be desired. CSIRTs can provide information to software developers concerning what information should be automatically collected as well as how that information should be stored and structured to maximize and support analysis, trending, and response activities. Obtaining customer or product CSIRT input into the design of user interfaces and deployment features can provide a reality check that software is indeed easily remediable. An example of this could be building software in a modularized fashion so that patching can occur in an optimized way. Another can be ensuring that patches themselves are not the same size as the original program. CSIRTs can provide recommendations for the best security configurations for applications and even provide some pre-incident planning and preparation recommendations that can help developers support the CSIRT mission.
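The point about collecting and structuring incident information for analysis can be made concrete with a small sketch: events are emitted as JSON records with consistent field names, which makes searching, correlation, and trending straightforward. The schema here is invented for illustration, not a standard:

```python
import json
from collections import Counter
from datetime import datetime, timezone

def security_event(event_type: str, host: str, detail: str) -> str:
    """Emit one structured (JSON) security event carrying fields an
    incident handler needs for searching and correlation; the field
    names are illustrative, not a standard schema."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,
        "host": host,
        "detail": detail,
    })

def count_by_type(raw_events: list) -> Counter:
    """Trivial trending query: number of events per event type."""
    return Counter(json.loads(e)["type"] for e in raw_events)

# Hypothetical event stream from two hosts.
events = [
    security_event("auth_failure", "web01", "bad password for admin"),
    security_event("auth_failure", "web01", "bad password for admin"),
    security_event("config_change", "db01", "listener port changed"),
]
```

Because every record shares the same fields, a CSIRT can feed such logs directly into tracking systems or trending queries instead of parsing free-form text.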
For example, in prioritizing incidents, it is important for CSIRT staff to understand not only the criticality of a service or server but how that component fits into the total enterprise architecture, what trusted relationships it has with other components, what data is contained on the system, and what data is shared (including how it is shared and with whom). The CSIRT can obtain some of this information from the system developers and administrators. They can also work with the developers to identify ways that the recognition of these issues can be included in the initial system requirements and design phases. Understanding how the software and hardware products work and interrelate will help the CSIRT determine the severity, scope, and impact of an incident.

**Improving Development Practice**

In Essential Factors for Successful Software Security Awareness Training, van Wyk and Steven discuss methods for socializing security experts and developers. They propose a curriculum for educating developers, management, and executives about security issues, with a focus for developers on understanding attacker exploits and corresponding mitigation strategies and on methods for executing software security touchpoints within the SDLC [Steven 2006]. CSIRT staff who work in an organization developing software products or designing and implementing enterprise systems would be good candidates to help build such a curriculum within their organization. Their knowledge of incident and vulnerability activity and history within the organization, their general understanding of security concepts, risks, and threats, their expertise in attack methods and corresponding mitigation strategies, and their experience with existing organizational systems will allow them to provide real-life, relevant examples to the course attendees.
Another way to create communication channels between CSIRT and development staff is holding joint discussions to review new vulnerabilities and their long-term impacts. A third way may be to specifically assign a member of the CSIRT to participate in design reviews or development work (if time and resources permit).

**Evolutionary Systems Design**

CSIRT involvement in the SDLC does not stop with deployment and operations. Once a system is installed, the development cycle still continues. In an article on evolutionary software design, Lipson talks about the need for installed software to be able to adapt to changes in its environment, usage, or components. He introduces the concept of “perpetual design,” saying that “all SDLC activities must be perpetual if the quality attributes of a system are to be sustained over time” [Lipson 2006]. Lipson goes on to explain that

*Any significant change in system requirements can certainly affect the underlying risk management assumptions [for the system], but the effects of other changes might not be as obvious. Therefore, one of the most essential uses for risk management resources would be to support security and survivability monitoring to provide early warnings of emerging threats and increased risks to the system.*

Lipson states that “the first step in evolving to meet new threats to your system’s security is to recognize the need for change.” Information gathered as a result of recognizing this need for change can then be used to institute evolutionary design changes.
He then lists the following change factors or triggers that must be monitored as influences on the evolutionary design of secure systems:

• business and organizational environment
• threat environment
• operating environment
• economic environment and the acquisition marketplace
• political, social, legal, and regulatory environment
• relationships to other systems and infrastructures
• lessons learned and system feedback

Two of the triggers mentioned by Lipson should be familiar by now as obvious areas where CSIRTs can provide business and risk intelligence:

• threat environment
  – attack techniques – new and existing techniques used by intruders
  – malicious adversaries – changes in attackers, such as the rise in new cyber criminals
• lessons learned and system feedback, from sources including
  – system instrumentation and audits – network monitoring and alerts
  – operational experience (attacks, accidents, and failures) – real-life incidents and experiences
  – results of periodic security and survivability evaluations – operational exercises, penetration testing, and vulnerability scanning
  – technical society meetings, security courses, seminars, journals, news reports – learning from what has happened to others

All of the above sources of change information are related to activities that CSIRT staff may perform, information that they may collect and analyze, or incidents they may receive and respond to. Because CSIRT staff perform these functions on a day-to-day basis, they will have the most up-to-date information, and therefore knowledge of what evolutionary changes might be required. Such knowledge needs to be fed back into the SDLC.

Some not so obvious triggers where CSIRTs may have knowledge useful in identifying evolutionary changes include

• the operating environment. New technologies or trends related to security tools and best practices may provide new techniques for hardening system configurations and preventing or containing attacks.
• the legal or regulatory environment. New requirements for organizations to report security breaches or any unauthorized release of personal privacy information may change the monitoring and alerting requirements for software, change the reporting requirements related to security events and incidents, or mandate that response capabilities be established.
• the organizational environment and its relationships to other systems and infrastructures. Changes in business practice and user behavior may require significant changes to the threat model used in designing software. Changes can be technical or social in nature. For example, the availability of information can be improved, and its confidentiality threatened, by both the widespread availability of large-capacity personal storage devices (USB sticks, mobile phones, PDAs, and MP3 players) and the increasing practice of home or remote working. Changes such as moving connectivity from dedicated private lines to communication over the shared Internet should be reflected in software design.

Though not always directly responsible for these areas, CSIRTs are often the groups who learn about such changes first and can pass this information on to other parts of the enterprise. Such information is often gathered through public monitoring of security sites and mailing lists. This type of technology watch function performed by CSIRTs results in information that is important for software developers to know, understand, and eventually synthesize into software and hardware requirements.

**SUMMARY**

“It will be easier to produce software that is secure if risk management activities and checkpoints are integrated throughout the development life cycle” [Goertzel 2006]. CSIRTs are one source of information and expertise that can provide real insight into current and emerging computer security risks and threats. This experience can be translated into strategies for preventing and responding to computer security incidents.
At the most proactive level, such information and corresponding analysis can be fed into the SDLC and used to better define security, alerting, and recovery requirements in systems, hardware, and software. It is important for CSIRTs—as well as product developers and managers—to understand the role that CSIRTs can play in the SDLC. An effective organization will look to establish methods and communication channels that encourage and support interaction between these two communities. It is also important for these groups to understand that this interaction should be bidirectional: the CSIRT providing risk and threat information and attack explanations, and the product developers helping the CSIRT to understand how software and hardware components and processes work and are intended to be used. When CSIRT staff and software developers work together, the result is the design and implementation of better software requirements, which in turn leads to more effective analysis and mitigation of computer security attacks and threats, allowing critical business functions to be resilient and successful.
<table> <thead> <tr> <th>Reference</th> <th>Title</th> </tr> </thead> <tbody> <tr> <td>Hope, Paco; Lavenhar, Steven; &amp; Peterson, Gunnar (2005)</td> <td>&quot;Architectural Risk Analysis.&quot; Build Security In.</td> </tr> <tr> <td>Killcrece, Georgia; Kossakowski, Klaus-Peter; Ruefle, Robin; &amp; Zajicek, Mark (2002)</td> <td>CSIRT Services.</td> </tr> <tr> <td>Killcrece, Georgia (2005)</td> <td>&quot;Incident Management.&quot; Build Security In.</td> </tr> <tr> <td>Lipson, Howard (2006)</td> <td>&quot;Evolutionary Design of Secure Systems – The First Step Is Recognizing the Need for Change.&quot; Build Security In.</td> </tr> <tr> <td>McGraw &amp; van Wyk, Kenneth (2005)</td> <td>&quot;Bridging the Gap between Software Development and Information Security.&quot;</td> </tr> <tr> <td>Steven, John; &amp; van Wyk, Kenneth (2006)</td> <td>&quot;Essential Factors for Successful Software Security Awareness Training.&quot;</td> </tr> <tr> <td>van Wyk, Kenneth (2006)</td> <td>&quot;Adapting Penetration Testing for Software Development Purposes.&quot; Build Security In.</td> </tr> </tbody> </table>
1 Review

Recall the definitions of the classes $\text{NP}$ and $\text{coNP}$.

**Definition.** (The class $\text{NP}$) A language $L \subseteq \{0,1\}^*$ is in $\text{NP}$ if there exist a polynomial $p : \mathbb{N} \to \mathbb{N}$ and a polynomial-time TM $M$ such that for any $x \in \{0,1\}^*$,

$$x \in L \iff \exists y \in \{0,1\}^{p(|x|)} \ (M(x,y) = 1).$$

For example, $\text{SAT}$ is in $\text{NP}$.

**Definition.** (The class $\text{coNP}$) A language $L \subseteq \{0,1\}^*$ is in $\text{coNP}$ if there exist a polynomial $p : \mathbb{N} \to \mathbb{N}$ and a polynomial-time TM $M$ such that for any $x \in \{0,1\}^*$,

$$x \in L \iff \forall y \in \{0,1\}^{p(|x|)} \ (M(x,y) = 1).$$

For example, $\text{TAUTOLOGY}$ is in $\text{coNP}$. Note that the definition of the class $\text{coNP}$ uses $\forall$ as the quantifier rather than the $\exists$ used in $\text{NP}$.

2 Polynomial hierarchy

Today we are going to talk about the polynomial hierarchy. First we define two new complexity classes: the class $\Sigma^P_2$ and the class $\Pi^P_2$.

**Definition.** (The class $\Sigma^P_2$) The class $\Sigma^P_2$ is the set of all languages $L$ for which there exist a polynomial-time TM $M$ and a polynomial $q$ such that for any $x \in \{0,1\}^*$,

$$x \in L \iff \exists y_1 \in \{0,1\}^{q(|x|)} \ \forall y_2 \in \{0,1\}^{q(|x|)} \ (M(x,y_1,y_2) = 1).$$

**Definition.** (The class $\Pi^P_2$) The class $\Pi^P_2$ is the set of all languages $L$ for which there exist a polynomial-time TM $M$ and a polynomial $q$ such that for any $x \in \{0,1\}^*$,

$$x \in L \iff \forall y_1 \in \{0,1\}^{q(|x|)} \ \exists y_2 \in \{0,1\}^{q(|x|)} \ (M(x,y_1,y_2) = 1).$$

Note that the only difference between the two definitions is the order of the quantifiers: in the class $\Sigma^P_2$ they appear in the order $\exists, \forall$, whereas in the class $\Pi^P_2$ the order is $\forall, \exists$.
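The two definitions above differ only in the quantifier over the certificate $y$; the verifier $M$ is the same kind of object in both. As an illustration (not part of the notes), here is a brute-force sketch in Python. The clause encoding (signed variable indices, DIMACS-style) is an assumption for the sketch, and enumerating all certificates is of course exponential; only the $\exists$/$\forall$ pattern is the point.

```python
from itertools import product

# Assumed encoding (illustration only): a CNF formula is a list of clauses,
# a clause is a list of signed variable indices (-1 means "NOT x1").
def M(clauses, assignment):
    """The verifier: does `assignment` satisfy every clause?"""
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def in_SAT(clauses, n_vars):
    # x in SAT  <=>  EXISTS y : M(x, y) = 1   (the NP pattern)
    return any(M(clauses, y) for y in product([False, True], repeat=n_vars))

def in_TAUTOLOGY(clauses, n_vars):
    # x in TAUTOLOGY  <=>  FORALL y : M(x, y) = 1   (the coNP pattern)
    return all(M(clauses, y) for y in product([False, True], repeat=n_vars))

# (x1 OR x2) AND (NOT x1 OR x2): satisfiable, but not a tautology.
phi = [[1, 2], [-1, 2]]
print(in_SAT(phi, 2))        # True
print(in_TAUTOLOGY(phi, 2))  # False
```

Swapping `any` for `all` is the entire difference between the two membership tests, mirroring how only the quantifier changes between the definitions.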
The definition of the polynomial hierarchy generalizes the definitions of $\text{NP}$, $\text{coNP}$, $\Sigma^P_2$ and $\Pi^P_2$. It consists of every language that can be defined by a combination of a polynomial-time computable predicate and a fixed number of quantifiers.

**Definition.** (The class $\Sigma^P_i$) For $i \geq 1$, a language $L$ is in $\Sigma^P_i$ if there exist a polynomial-time TM $M$ and a polynomial $q$ such that

$$x \in L \iff \exists y_1 \in \{0,1\}^{q(|x|)} \ \forall y_2 \in \{0,1\}^{q(|x|)} \ \ldots \ Q_i y_i \in \{0,1\}^{q(|x|)} \ (M(x,y_1,\ldots,y_i) = 1),$$

where $Q_i$ is $\forall$ if $i$ is even and $\exists$ if $i$ is odd.

**Definition.** (The class $\Pi^P_i$) For $i \geq 1$, a language $L$ is in $\Pi^P_i$ if there exist a polynomial-time TM $M$ and a polynomial $q$ such that

$$x \in L \iff \forall y_1 \in \{0,1\}^{q(|x|)} \ \exists y_2 \in \{0,1\}^{q(|x|)} \ \ldots \ Q_i y_i \in \{0,1\}^{q(|x|)} \ (M(x,y_1,\ldots,y_i) = 1),$$

where $Q_i$ is $\forall$ if $i$ is odd and $\exists$ if $i$ is even.

In the class $\Sigma^P_i$ the quantifiers appear in the order $\exists, \forall, \exists, \forall, \ldots$, whereas in the class $\Pi^P_i$ the order is $\forall, \exists, \forall, \exists, \ldots$; so, again, the difference is in the order of the quantifiers.

**Definition.** (The polynomial hierarchy) The polynomial hierarchy is the set $\text{PH} = \bigcup_i \Sigma^P_i$.

Note that $\Sigma^P_1 = \text{NP}$ and $\Pi^P_1 = \text{coNP}$. Also, for every $i$, we have $\Sigma^P_i \subseteq \Pi^P_{i+1} \subseteq \Sigma^P_{i+2} \subseteq \cdots$, since we can always pad a formula with a dummy quantifier in front. Hence, $\text{PH} = \bigcup_i \Pi^P_i$. So we have $\text{PH} = \bigcup_i \Sigma^P_i = \bigcup_i \Pi^P_i$, $\text{NP} \subseteq \text{PH}$ and $\text{P} \subseteq \text{PH}$.
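To make the alternation pattern of $\Sigma^P_i$ concrete, here is a brute-force sketch (exponential, intuition only; not from the notes): odd quantifier levels use `any` ($\exists$) and even levels use `all` ($\forall$). The predicate `M` and the certificate length below are toy assumptions.

```python
from itertools import product

def sigma_i_holds(i, M, x, cert_len):
    """Brute-force check of the Sigma_i^P pattern:
    EXISTS y1 FORALL y2 ... Q_i y_i : M(x, y1, ..., yi) = 1."""
    def level(j, certs):
        if j > i:
            return M(x, *certs)
        branch = any if j % 2 == 1 else all  # odd level: EXISTS, even: FORALL
        return branch(level(j + 1, certs + (y,))
                      for y in product([0, 1], repeat=cert_len))
    return level(1, ())

# Toy predicate (an assumption for the sketch): y1 dominates y2 bitwise.
# "EXISTS y1 FORALL y2 : y1 >= y2" holds, witnessed by y1 = (1, 1).
M = lambda x, y1, y2: all(a >= b for a, b in zip(y1, y2))
print(sigma_i_holds(2, M, "x", 2))  # True
```

With `any` and `all` swapped at each level one gets the $\Pi^P_i$ pattern instead; for $i = 1$ the function degenerates to the NP membership test.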
**Complete problems for PH.** A quantified Boolean formula (QBF) is a formula of the form

$$Q_1 x_1 \ Q_2 x_2 \ \ldots \ Q_n x_n \ \phi(x_1,x_2,\ldots,x_n),$$

where each $Q_i$ is one of the two quantifiers $\forall$ or $\exists$, $x_i \in \{0,1\}$ for each $i \in \{1,\ldots,n\}$, and $\phi$ is an unquantified Boolean formula. We define the language $\text{TQBF}$ to be the set of all QBFs that evaluate to true.

For any $i \geq 1$, the class $\Sigma^P_i$ has the following complete problem, which is a special case of $\text{TQBF}$ with a fixed number of quantifier alternations (each $\vec{x}_j$ is a block of Boolean variables):

$$\Sigma_i\text{SAT} = \{\phi : \exists \vec{x}_1 \ \forall \vec{x}_2 \ \ldots \ Q_i \vec{x}_i \ \phi(\vec{x}_1,\ldots,\vec{x}_i) = 1\},$$

where $Q_i$ is $\forall$ if $i$ is even and $\exists$ otherwise. Similarly, for any $i \geq 1$, the class $\Pi^P_i$ has the following complete problem with a fixed number of alternations:

$$\Pi_i\text{SAT} = \{\phi : \forall \vec{x}_1 \ \exists \vec{x}_2 \ \ldots \ Q_i \vec{x}_i \ \phi(\vec{x}_1,\ldots,\vec{x}_i) = 1\},$$

where $Q_i$ is $\exists$ if $i$ is even and $\forall$ otherwise.

### 3 Space-bounded computation

Consider a Turing machine with the following configuration:

- Tapes:
  - Input tape (read only)
  - Work tape (read/write)
- Finite set $Q$ of possible states containing:
  - $q_{\text{start}}$: This is the special starting state of a Turing machine, which we describe as follows. The input tape of a Turing machine initially contains the start symbol $\triangleright$, a finite nonblank string $x$, and the blank symbol $\square$ in the rest of its cells. All heads start at the left ends of the input tape and work tape. The machine is in the special starting state, which we denote by $q_{\text{start}}$. The configuration of the Turing machine associated with this state is called the start configuration on input $x$.
  - $q_{\text{halt}}$: This is a special halting state of a Turing machine, which has the property that once the machine is in $q_{\text{halt}}$, the transition function does not allow the machine to further modify the contents of the tapes or change state. So the Turing machine halts once it enters $q_{\text{halt}}$.
  - $q_{\text{accept}}$: This state exists only for a nondeterministic Turing machine. Recall that the only difference between a deterministic and a nondeterministic Turing machine is that the latter has two transition functions, denoted by $\delta_0$ and $\delta_1$, and the special state $q_{\text{accept}}$. At each step of computation, a nondeterministic Turing machine makes an arbitrary choice as to which one of $\delta_0$ and $\delta_1$ to apply. If for an input $x$ there exists some sequence of nondeterministic choices that would cause the machine to reach the state $q_{\text{accept}}$, then the machine outputs 1 on that input. On the other hand, if every sequence of nondeterministic choices causes the machine to reach $q_{\text{halt}}$ before reaching $q_{\text{accept}}$, then the machine outputs 0 on that input. In this regard, both $q_{\text{accept}}$ and $q_{\text{halt}}$ can be considered halting states of a nondeterministic Turing machine.

Now we define space-bounded computation for a TM, where the space bound applies only to the work tape. As a result we restrict the definition of space-bounded computation to languages (decision problems, whose answers are yes or no) only, and exclude function problems. Recall that function problems require an answer more elaborate than that of decision problems, so for them we would have to add a third tape to the TM for writing the output.

**Definition. (Space-bounded computation)** Let $S : \mathbb{N} \rightarrow \mathbb{N}$ and $L \subseteq \{0,1\}^*$.
We have $L \in \text{SPACE}(S(n))$ if there exist a constant $c$ and a TM $M$ deciding $L$ such that at most $c \cdot S(n)$ locations of $M$'s work tape (excluding the input tape) are visited by $M$'s head during its computation on every input of length $n$. Similarly, we have $L \in \text{NSPACE}(S(n))$ if there exists an NDTM $M$ deciding $L$ such that, regardless of its nondeterministic choices, it never uses more than $c \cdot S(n)$ nonblank work-tape locations on inputs of length $n$.

**Some space complexity classes.**

- $L = \text{SPACE}(\log n)$
- $NL = \text{NSPACE}(\log n)$
- $\text{PSPACE} = \bigcup_{c>0} \text{SPACE}(n^c)$
- $\text{NPSPACE} = \bigcup_{c>0} \text{NSPACE}(n^c)$

Later we will show that $\text{PSPACE} = \text{NPSPACE}$. First we show that $\text{PSPACE} \subseteq \text{EXP}$.

$\text{PSPACE} \subseteq \text{EXP}$. We can bound the time complexity of a TM in terms of its space complexity. Suppose a language $L \in \text{PSPACE}$. Then there exist a polynomial $S$ and a deterministic TM $M$ such that $M$ decides $L$ and halts after using at most $S(n)$ tape squares on any input $x$ of length $n$. The configuration of $M$ consists of its state, the position of the head, and the contents of the tape. As $M$ halts, it can never enter the same configuration twice, as otherwise it would enter an infinite loop and never halt. The number of symbols in the tape alphabet is $|\Gamma|$, and the number of states is $|Q|$. The position of the read-write head can be in one of $S(n)$ positions. Each cell of the tape contains one symbol from the alphabet, and the contents of a cell cannot be modified unless it is visited by the read-write head. So there are $|\Gamma|^{S(n)}$ possibilities for the contents of the tape. Hence, in total there are $|Q| \cdot S(n) \cdot |\Gamma|^{S(n)}$ possible configurations of $M$ during its computation on input $x$.
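The configuration count $|Q| \cdot S(n) \cdot |\Gamma|^{S(n)}$ can be checked numerically for toy parameters (the machine sizes below are made-up assumptions, not from the text); taking logarithms shows the count is at most $2^{c \cdot S(n)}$ for a constant $c$:

```python
from math import ceil, log2

# Toy machine sizes (assumptions): |Q| = 10 states, |Gamma| = 4 tape
# symbols, space bound S(n) = n^2.
Q_SIZE, GAMMA_SIZE = 10, 4

def S(n):
    return n * n

def num_configs(n):
    # |Q| * S(n) * |Gamma|^S(n): choices of state, head position, tape contents
    return Q_SIZE * S(n) * GAMMA_SIZE ** S(n)

# log2 of the count is log|Q| + log S(n) + S(n) log|Gamma| = O(S(n)),
# so the number of configurations is at most 2^(c * S(n)).
n = 8
bits = log2(Q_SIZE) + log2(S(n)) + S(n) * log2(GAMMA_SIZE)
assert num_configs(n) <= 2 ** ceil(bits)
print(num_configs(2))  # 10 * 4 * 4^4 = 10240
```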
Let us consider a polynomial $V$ such that $V(n) \geq \log |Q| + \log S(n) + S(n) \log |\Gamma|$; then clearly $L$ can be decided by $M$ in time $2^{V(n)}$. We can also write $V(n) = c \cdot S(n)$, where $c$ is a constant depending on $|\Gamma|$, $|Q|$ and the number of tapes. Recall that $\text{EXP} = \bigcup_k \text{DTIME}(2^{n^k})$. So $L \in \text{EXP}$, and thus $\text{PSPACE} \subseteq \text{EXP}$.

**Space-bounded computation and the $s$-$t$ connectivity problem.** There is a strong link between space-bounded computation and the directed graph $s$-$t$ connectivity problem. The $s$-$t$ connectivity problem is described as follows. Consider a graph $G = (V,E)$, where $V$ is the set of vertices and $E$ is the set of edges, and consider two vertices of the graph, denoted by $s$ (source) and $t$ (sink). We want to determine whether there is a directed path from $s$ to $t$.

Now we describe the notion of a configuration graph for a TM $M$. A configuration of a TM $M$ contains the contents of all nonblank entries of $M$'s tapes, along with its state and the positions of its heads. For every space-$S(n)$ TM $M$ and input $x \in \{0,1\}^*$, the configuration graph of $M$ on input $x$, denoted by $G_{M,x}$, is a directed graph. The nodes of $G_{M,x}$ are associated with all possible configurations of $M$ in which the input tape contains the value of $x$ and the work tape has at most $S(|x|)$ nonblank cells. The edges of $G_{M,x}$ are constructed as follows. Consider two nodes of $G_{M,x}$ associated with configurations $C$ and $C'$ of $M$. There exists an edge from $C$ to $C'$ if $C'$ can be reached from $C$ in one step according to $M$'s transition function. If $M$ is deterministic, then the out-degree of every node is one; if $M$ is nondeterministic, then the out-degree is at most two, as an NDTM has two transition functions. The start configuration of $M$ is unique and depends on the input $x$.
The node of $G_{M,x}$ associated with this unique start configuration of $M$ is denoted by $\text{start-config}(x)$. We modify $M$ to erase the contents of its work tape before it halts, so we can assume that there is a unique single configuration, denoted by $C_{\text{accept}}$, in which $M$ halts and outputs 1. The input $x$ is accepted by $M$ if and only if there exists a directed path from $\text{start-config}(x)$ to $C_{\text{accept}}$ in $G_{M,x}$.

First, note that every vertex in $G_{M,x}$ can be described using $c \cdot S(n)$ bits, where $c$ is a positive constant, so $G_{M,x}$ has at most $2^{c \cdot S(n)}$ nodes. We can justify this claim by the same argument as in the proof of $\text{PSPACE} \subseteq \text{EXP}$ above. Now, deciding whether two nodes of $G_{M,x}$ are neighbors can be expressed as a conjunction of many local checks, each depending on a constant number of bits. Recall that for every Boolean function $f : \{0,1\}^l \rightarrow \{0,1\}$ there exists an $l$-variable CNF formula $\phi$ of size at most $l \cdot 2^l$ such that $\phi(u) = f(u)$ for every $u \in \{0,1\}^l$, where the size of a CNF formula is defined to be the number of $\wedge/\vee$ symbols it contains. So each check can be expressed by a constant-sized CNF formula, and the number of checks is proportional to the size of the workspace of $M$. Hence there exists an $O(S(n))$-size CNF formula $\phi_{M,x}$ such that for every two nodes $C, C'$ of $G_{M,x}$, we have $\phi_{M,x}(C, C') = 1$ if and only if $C$ and $C'$ encode two neighboring configurations in $G_{M,x}$.

Recall that a language $L$ is in $\text{DTIME}(S(n))$ if there is a TM that runs in time $c \cdot S(n)$ for some constant $c > 0$ and decides $L$. So $\text{DTIME}(S(n)) \subseteq \text{SPACE}(S(n)) \subseteq \text{NSPACE}(S(n))$. Now we show that $\text{NSPACE}(S(n)) \subseteq \text{DTIME}(2^{O(S(n))})$.
We can enumerate all possible configurations and construct the graph \( G_{M,x} \) in time \( 2^{O(S(n))} \), and then check whether \( \text{start-config}(x) \) is connected to \( C_{\text{accept}} \) in \( G_{M,x} \). The check can be done with the breadth-first search algorithm for \( s \)-\( t \) connectivity, which runs in time linear in the size of the graph. Thus we have arrived at the following: for every space-constructible \( S : \mathbb{N} \rightarrow \mathbb{N} \),

\[ \text{DTIME}(S(n)) \subseteq \text{SPACE}(S(n)) \subseteq \text{NSPACE}(S(n)) \subseteq \text{DTIME}(2^{O(S(n))}). \]

So we have \( L \subseteq \text{NL} \subseteq \text{P} \subseteq \text{NP} \subseteq \text{PH} \subseteq \text{PSPACE} \subseteq \text{EXP} \), as shown in Figure 1.

![Figure 1: Different complexity classes](image)

### 4 Complete problems for space classes

We first define \( \text{PSPACE} \)-hard and \( \text{PSPACE} \)-complete languages.

**Definition.** (\( \text{PSPACE} \)-hard) A language \( L' \) is \( \text{PSPACE} \)-hard if for every \( L \in \text{PSPACE} \), \( L \leq_p L' \).

**Definition.** (\( \text{PSPACE} \)-complete) A language \( L' \) is \( \text{PSPACE} \)-complete if \( L' \) is \( \text{PSPACE} \)-hard and \( L' \in \text{PSPACE} \).

Now we show that \( \text{TQBF} \) is \( \text{PSPACE} \)-complete. To that end, we first show that \( \text{TQBF} \in \text{PSPACE} \).

Theorem 1. \( \text{TQBF} \) is in \( \text{PSPACE} \).

Proof. (Arora, pages 84-85) Consider a QBF \( \psi \) with \( n \) variables,

\[ \psi = Q_1 x_1 \ Q_2 x_2 \ldots \ Q_n x_n \ \phi(x_1, x_2, \ldots, x_n), \]

where each \( Q_i \) is one of the two quantifiers \( \forall \) or \( \exists \), \( x_i \in \{0, 1\} \) for each \( i \in \{1, \ldots, n\} \), and \( \phi \) is an unquantified Boolean formula of size \( m \). All possible truth assignments of the variables can be arranged as the leaves of a full binary tree of depth \( n \).
Here, the left subtree of the root contains all the truth assignments with \( x_1 = 0 \), and the right subtree of the root contains all the assignments with \( x_1 = 1 \). Then we branch on \( x_2, x_3 \), and so on (see Figure 2).

![Binary tree for a TQBF with 4 variables](image)

Figure 2: A binary tree for a TQBF with 4 variables

Now we construct a simple recursive algorithm \( A \). Let \( s_{n,m} \) be the space used by \( A \) on a QBF with \( n \) variables whose unquantified part has size \( m \). For convenience we assume that the unquantified Boolean formula \( \phi \) contains variables (of the form \( x_i \)), their negations (of the form \( \neg x_i \)), and the constants 0 (corresponding to \( \text{false} \)) and 1 (corresponding to \( \text{true} \)). If there are no variables (\( n = 0 \)), then the formula contains only constants, and it can be evaluated in \( O(m) \) time and space. So we assume there is at least one variable (\( n > 0 \)). Define

\[ \psi|_{x_1=b} = Q_2 x_2 \ldots Q_n x_n \ \phi(b, x_2, \ldots, x_n), \]

where \( b \in \{0, 1\} \). So \( \psi|_{x_1=b} \) is \( \psi 
\) with the first quantifier \( Q_1 \) dropped and all occurrences of \( x_1 \) in \( \phi \) replaced by \( b \). Algorithm \( A \) works as follows:

If \( Q_1 = \exists \), then output 1 if and only if \( A(\psi|_{x_1=0}) = 1 \) or \( A(\psi|_{x_1=1}) = 1 \).
Else (\( Q_1 = \forall \)), output 1 if and only if \( A(\psi|_{x_1=0}) = 1 \) and \( A(\psi|_{x_1=1}) = 1 \).

So \( A \) returns the correct answer on any QBF \( \psi \). Note that space can be reused: after \( A(\psi|_{x_1=0}) \) is computed, \( A \) retains only the single bit of output from that computation, and reuses the space for the computation of \( A(\psi|_{x_1=1}) \). So both \( A(\psi|_{x_1=0}) \) and \( A(\psi|_{x_1=1}) \) can be run in the same space. Since \( A \) uses \( O(m) \) space to write \( \psi|_{x_1=b} \), we get \( s_{n,m} = s_{n-1,m} + O(m) \). Now we can apply the same line of reasoning to \( x_2, x_3 \), and so on.
Thus, \( s_{n,m} = O(n \cdot m) \). So, TQBF is in \textbf{PSPACE}. \( \square \)
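The recursion structure of algorithm \( A \) can be sketched in Python (an illustration, not part of the notes). Here \( \phi \) is an arbitrary Python predicate evaluated once a full assignment has been built, rather than a formula that is syntactically substituted into, so only the control and space-reuse structure of \( A \) is modeled: after the \( x = 0 \) branch returns, only its single output bit survives before the \( x = 1 \) branch runs, and the recursion depth is \( n \), mirroring the recurrence \( s_{n,m} = s_{n-1,m} + O(m) \).

```python
# quants is a list of 'E' (EXISTS) / 'A' (FORALL), one per variable;
# phi maps a full 0/1 assignment tuple to a bool.
def A(quants, phi, assignment=()):
    if not quants:                        # n = 0: just evaluate phi
        return phi(assignment)
    q, rest = quants[0], quants[1:]
    r0 = A(rest, phi, assignment + (0,))  # recurse on psi | x = 0
    r1 = A(rest, phi, assignment + (1,))  # recurse on psi | x = 1 (space reused)
    return (r0 or r1) if q == 'E' else (r0 and r1)

# FORALL x1 EXISTS x2 (x1 != x2) is true; EXISTS x1 FORALL x2 (x1 != x2) is not.
neq = lambda a: a[0] != a[1]
print(A(['A', 'E'], neq))  # True
print(A(['E', 'A'], neq))  # False
```

The two example formulas also illustrate why the order of quantifiers matters: swapping \( \forall \) and \( \exists \) flips the truth value.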
null], [25356, 28411, null], [28411, 28963, null], [28963, 31867, null], [31867, 34858, null], [34858, 37610, null], [37610, 40731, null], [40731, 40731, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1961, true], [1961, 4977, null], [4977, 7704, null], [7704, 10734, null], [10734, 13542, null], [13542, 16280, null], [16280, 19570, null], [19570, 20891, null], [20891, 23964, null], [23964, 25356, null], [25356, 28411, null], [28411, 28963, null], [28963, 31867, null], [31867, 34858, null], [34858, 37610, null], [37610, 40731, null], [40731, 40731, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40731, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40731, null]], "pdf_page_numbers": [[0, 1961, 1], [1961, 4977, 2], [4977, 7704, 3], [7704, 10734, 4], [10734, 13542, 5], [13542, 16280, 6], [16280, 19570, 7], [19570, 20891, 8], [20891, 23964, 9], [23964, 25356, 10], [25356, 28411, 11], [28411, 28963, 12], [28963, 31867, 13], [31867, 34858, 14], [34858, 37610, 15], [37610, 40731, 16], [40731, 40731, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40731, 0.13077]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
108d7be386513d4e81a4cb30ddf835928fbc39c9
A fast general purpose procedure to calculate **Inter-rater Agreement** for coreference annotation

Erwin R. Komen
Radboud University, Nijmegen, Netherlands
June 2009

1. Introduction

Measuring the inter-rater agreement between two persons who have annotated a text with coreference information requires a procedure that can work with differing categories. The raters may have disagreed on the coreference target points; when inter-rater agreement with regard to these target points is measured, with the Id's of the target points within a text as the categories, the two raters will therefore not have identical category sets. Inter-rater agreement in general can be measured using Cohen's kappa (Cohen, 1960). Formula (1) gives a definition of Cohen's kappa.

\[ \text{(1) } \kappa = \frac{P_0 - P_c}{1 - P_c} \]

\[ P_0 = \text{proportion agreement observed: } \frac{\#\text{agreements}}{\#\text{possibilities}} \]

\[ P_c = \text{proportion agreement expected by chance} \]

A program like SPSS fails to calculate Cohen's kappa when the categories chosen by two raters do not completely overlap. In that situation one could either resort to calculation by hand, or use an internet utility called RecalFront (Freelon, 2008). The URL provided in the references (Section 4) points to the program RecalFront, where one can upload the information in the form of a comma-separated file (CSV) and receive a report on the inter-rater agreement. The author of RecalFront has not provided the mathematical basis behind his calculation of Cohen's kappa, but his implementation does allow raters to have no complete overlap between the categories chosen. This paper reports on a procedure and an implementation to calculate Cohen's kappa that does not require raters to have identical categories.

2. Calculating Kappa

2.1. Symmetrical calculations

Suppose someone wants to determine whether sentences from a text contain a topic (1) or not (0). In order to test the inter-rater agreement, two people, say John and Mary, rate the same 10 sentences of a text. Their judgments are listed in (2). They agree on their judgments 6 times—in sentences 1, 2, 4, 5, 7, and 10. So their agreement is 60%. This is \(P_0\) from formula (1).

(2) Sentence \( x \) contains a topic (1) or not (0).

\[
\begin{array}{l|cccccccccc}
\text{Sentence} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
\text{John} & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
\text{Mary} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{array}
\]

In order to calculate \( P_c \), which is the "chance agreement", we make the symmetrical matrix in (3), where the rows are formed by evaluating how far John agreed with Mary, and the columns by evaluating how far Mary agreed with John. The cell in column "John: 0" and row "Mary: 0" contains the number of times John agreed with Mary that a sentence did not contain a topic (i.e. 5 times they both chose "0"). The cell to the right of it contains the number of times John did not agree with Mary's judgment that the sentence contained no topic (3 times). Had John and Mary been in total agreement, the matrix would have held zero values except for those on the diagonal.

(3) Symmetrical agreement matrix for John and Mary's judgments

\[
\begin{array}{l|cc|c}
 & \text{John: 0} & \text{John: 1} & \\
\hline
\text{Mary: 0} & 5 & 3 & 8 \\
\text{Mary: 1} & 1 & 1 & 2 \\
\hline
 & 6 & 4 & 10 \\
\end{array}
\]

The chance agreement \( P_c \) is now calculated as the sum, along the diagonal, of the products of the row and column totals, divided by the square of the total number of judgments. The formula for calculating \( P_c \) in a symmetrical matrix is given in (4).
For every point \( M_{ii} \) on the diagonal of matrix \( M \), the total of the values in row \( i \), divided by the number of judgments \( n \), is multiplied by the total of the values in column \( i \), divided by \( n \). The value for \( P_c \) thus becomes \((8 \cdot 6 + 2 \cdot 4)/100 = 0.56\). Since the agreement was 0.6, the value for kappa now becomes \((0.6-0.56)/(1-0.56) = 0.09\).

\[ \text{(4) } P_c = \sum_{i=1}^{k} \left( \frac{\sum_{j=1}^{k} M_{ij}}{n} \cdot \frac{\sum_{j=1}^{k} M_{ji}}{n} \right) \]

where \( k \) is the number of rows (and columns) of \( M \), and \( n \) is the total number of judgments.

Crucial for the above calculation is the fact that both John and Mary used the same judgments—the values "0" and "1". If John had used "0", "1", and "2" (where "2" could have meant "there may be a topic, but I don't know"), while Mary had stuck to "0" and "1", then the chance agreement matrix \( M \) would have been asymmetrical, as illustrated in (5). Such a matrix contains 3 columns for John, but only 2 rows for Mary. The formula given in (4) now becomes useless, since it assumes that the number of rows equals the number of columns.

(5) Asymmetrical agreement matrix for John and Mary's judgments

\[
\begin{array}{l|ccc|c}
 & \text{John: 0} & \text{John: 1} & \text{John: 2} & \\
\hline
\text{Mary: 0} & 5 & 1 & 1 & 7 \\
\text{Mary: 1} & 1 & 1 & 1 & 3 \\
\hline
 & 6 & 2 & 2 & 10 \\
\end{array}
\]

2.2. Restoring symmetry

For a more general calculation we at least have to turn back to the original list of judgments shown in table (2). Let us assume again that John uses judgment "2" twice, as in (6).

(6) Sentence \( x \) contains no topic (0), a topic (1), or undeterminable (2).
<table> <thead> <tr> <th>Sentence</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>0</td> <td>1</td> <td>2</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>2</td> <td>0</td> <td>0</td> </tr> <tr> <td>Mary</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> </tr> </tbody> </table>

Calculating \( P_0 \), the percentage agreement, remains unchanged, since we only need to count the number of times John agrees with Mary, and that has not changed. The chance agreement value \( P_c \), however, would need a symmetrical matrix \( M \). One might get the idea to supply zeros for Mary judging "2". We would then get the restored symmetrical matrix shown in (7). Calculation of \( P_c \) would then yield \((8 \cdot 6 + 2 \cdot 2 + 0 \cdot 2)/100 = 0.52\). The value for kappa would then become \((0.6-0.52)/(1-0.52) = 0.17\). This value suggests that the inter-rater agreement between John and Mary is better, while our intuition says that this is not the case. This, however, is intrinsic to the kappa measure; the calculation itself is correct.

(7) Restored symmetrical agreement matrix for John and Mary's judgments

<table> <thead> <tr> <th></th> <th>John: 0</th> <th>John: 1</th> <th>John: 2</th> </tr> </thead> <tbody> <tr> <td>Mary: 0</td> <td>5</td> <td>1</td> <td>2</td> </tr> <tr> <td>Mary: 1</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <td>Mary: 2</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td></td> <td>6</td> <td>2</td> <td>2</td> </tr> </tbody> </table>

A drawback of the method above is that we would be forced to always come up with symmetrical matrices for our judgments. That is to say, for a program like SPSS, which is able to calculate the kappa value, we would have to specify that Mary has used the value "2" zero times.
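The restored-matrix calculation of (7) can be sketched in a few lines of Python (an illustration only; the paper's own implementation, given in the appendix, is in Visual Basic):

```python
# Kappa from a square agreement matrix, following formula (4).
# m[i][j] = number of items where Mary chose category i and John chose category j.
def kappa_from_matrix(m):
    n = sum(sum(row) for row in m)                    # total number of judgments
    p0 = sum(m[i][i] for i in range(len(m))) / n      # observed agreement (diagonal)
    pc = sum(sum(m[i]) * sum(row[i] for row in m)     # row total * column total
             for i in range(len(m))) / (n * n)        # chance agreement
    return p0, pc, (p0 - pc) / (1 - pc)

# Restored symmetrical matrix (7)
p0, pc, k = kappa_from_matrix([[5, 1, 2],
                               [1, 1, 0],
                               [0, 0, 0]])
# p0 = 0.6, pc = 0.52, k ≈ 0.17
```

Note that the observed agreement \( P_0 \) is simply the diagonal sum divided by \( n \), so nothing beyond the matrix is needed.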
This can be circumvented by building our own procedure, as will be explained in the next section.

2.3. A general approach

A more general approach to calculating \( P_c \) is to use a method that does not work with a matrix at all. Taking the values in (6) as a starting point, the following procedure could be followed:

a) Take the percentage of sentences where John chose "0" and multiply this with the percentage of sentences where Mary chose "0".
b) Do the same for the sentences where each chose "1".
c) Take the percentage of sentences where John chose "2" (this is 20%) and multiply this with the percentage of sentences where Mary chose "2" (this is 0%). This yields 0.
d) \( P_c \) is the sum of the products in a, b and c: \((6 \cdot 8 + 2 \cdot 2 + 2 \cdot 0)/100 = 0.52\).

Now suppose John and Mary are not judging sentences as to their topicality, but are actually providing coreference information—they are laying relations from an NP to a preceding IP or NP. They use labels like "Inferred", "Identity", "CrossSpeech" and "Assumed" for their coreference relations. Ten of their results are shown in (8).

<table> <thead> <tr> <th>NP Id</th> <th>21</th> <th>23</th> <th>27</th> <th>34</th> <th>50</th> <th>62</th> <th>78</th> <th>82</th> <th>84</th> <th>90</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>0</td> <td>1</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>4</td> <td>0</td> <td>0</td> </tr> <tr> <td>Mary</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> </tr> </tbody> </table>

The agreement value is again 0.6, and our procedure yields a \( P_c \) of \((6 \cdot 8 + 2 \cdot 2 + 1 \cdot 0 + 1 \cdot 0)/100 = 0.52\), which is the same as the one calculated previously. John and Mary have not only indicated what the types of the coreference relations are, but they have also connected the NPs with antecedents. The Id's of the antecedents are shown in table (9).
Note that if we were to use the matrix method, we would have to build a sparsely populated 62×62 matrix, which would not be very effective. Instead, our procedure again yields a \( P_c \) of 0.52, since we only need to count how often the actually occurring values 0, 13, 23, and 62 are used by John and Mary.

(9) Id's of the antecedents to which the NPs are connected.

<table> <tbody> <tr> <td>NP Id</td> </tr> <tr> <td>John</td> </tr> <tr> <td>Mary</td> </tr> </tbody> </table>

It seems we have now found a robust procedure. But how can this procedure be described in a formula, and, perhaps more importantly, how can it be calculated? The method I use consists of two cycles. The first cycle visits all \( n \) sentences (or measuring points) and derives the \( m \) different category values used by the two raters (at most \( 2n \), since each point contributes at most two values). The second cycle visits all \( m \) different values and sums the frequency of occurrence for rater 1 multiplied by the frequency of occurrence for rater 2. The formula for this method is shown in (10). The arrays \( R_1 \) and \( R_2 \) coincide with the rows for John and Mary in (9), while the array \( \text{Value} \) contains all \( m \) different judgments (e.g. target Id's or coreference types in the examples above) used by both raters.

\[ \text{(10) } P_c = \sum_{i=1}^{m} \left( \frac{\sum_{j=1}^{n} [\, R_{1,j} = \text{Value}_i \,]}{n} \cdot \frac{\sum_{j=1}^{n} [\, R_{2,j} = \text{Value}_i \,]}{n} \right) \]

Here \([\,\cdot\,]\) equals 1 when the enclosed equality holds and 0 otherwise, so each inner sum counts how often a rater used \(\text{Value}_i\).

2.4. Implementation

The implementation of the two-step method introduced in 2.3 follows the procedure laid out in the previous paragraph, including the formula provided in (10). The function `CohensKappa()` in Section 5, the appendix, provides a Visual Basic realization of the implementation.
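As a cross-check, the two-cycle method of (10) can be sketched in Python (an illustration only; the paper's implementation is the Visual Basic module in the appendix). Applied to the coreference types of (8), it reproduces the worked values:

```python
# Two-cycle kappa, per formula (10): no matrix needed, only the m category
# values actually used by either rater.
def cohens_kappa(r1, r2):
    n = len(r1)
    # Cycle 1: observed agreement P0
    p0 = sum(a == b for a, b in zip(r1, r2)) / n
    values = set(r1) | set(r2)            # the m distinct category values
    # Cycle 2: sum of per-value frequency products, divided by n^2
    pc = sum(r1.count(v) * r2.count(v) for v in values) / (n * n)
    return p0, pc, (p0 - pc) / (1 - pc)

# John's and Mary's coreference types from (8)
john = [0, 1, 3, 0, 0, 1, 0, 4, 0, 0]
mary = [0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
p0, pc, k = cohens_kappa(john, mary)
# p0 = 0.6, pc = 0.52, k ≈ 0.17
```

Because only values that actually occur are visited, the sparsity problem of a large matrix never arises.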
The first cycle, as shown in (11), fills the array \( V \) with the values used by rater 1 and/or rater 2 (these values are in column \( \text{intCol1} \) and \( \text{intCol2} \) of the string array \( \text{arData} \)). Each element in array \( V \) has a field with the actual value, a field with the number of times this value is used by rater #1, and a field with the frequency of occurrence for rater #2. The function `IncrVal()` finds the category value \( \text{intVal} \) in the array and increments the frequency of occurrence for the rater in question.

(11) First cycle of the kappa calculation

```
' Visit all points
For intI = 0 To intN
    ' Get the data for this line
    arLine = Split(arData(intI), ";")
    ' Get the values for rater #1 and rater #2
    intVal1 = CInt(arLine(intCol1)) : intVal2 = CInt(arLine(intCol2))
    ' Put these values in one array
    IncrVal(arVal, intVnum, intVal1, 1)
    IncrVal(arVal, intVnum, intVal2, 2)
    ' Keep track of agreement
    If (intVal1 = intVal2) Then intM += 1
Next intI
' Calculate P0
dblP_0 = intM / (intN + 1)
```

The first cycle furthermore calculates the agreement percentage by keeping track of how many times there is full agreement between rater 1 and rater 2. The second cycle, as shown in (12), visits all the values used by rater 1 and/or 2 that are stored in the array \( V \). It keeps track of the sum of the frequency of occurrence of each value for rater #1 multiplied by that of rater #2. This sum is then divided by the square of the total number of measuring points \( n \), in accordance with formula (10), yielding \( P_c \).
(12) Second cycle of the kappa calculation

```
' Visit all actual values stored in array [arVal]
For intI = 1 To intVnum
    ' Keep track of the sum
    intM += arVal(intI).Freq1 * arVal(intI).Freq2
Next intI
' Calculate total Pc
dblP_c = intM / ((intN + 1) * (intN + 1))
```

The final step is to calculate the actual kappa using formula (1). The code behind this step is shown in (13).

(13) Calculation of the kappa value on the basis of \( P_0 \) and \( P_c \)

```
' Pass back Kappa and Agreement
dblKappa = (dblP_0 - dblP_c) / (1 - dblP_c)
dblAgr = dblP_0
```

3. Conclusions

Cohen's kappa provides a measure that can be used to determine inter-rater agreement between different persons who have annotated texts with coreference information. The set of target Id's to which the anaphoric references provided by rater #1 point might not completely overlap with the set of target Id's provided by rater #2. Such rating results cannot be processed with a program like SPSS. An internet tool like RecalFront allows the researcher to determine Cohen's kappa for ratings with non-identical category sets, but the procedure followed by such a program is not available. The present paper proposes a fast two-stage algorithm to calculate Cohen's kappa for category sets that do not completely overlap. An implementation of this algorithm in Visual Basic is provided.

4. References

5. Appendix: the code

This appendix provides a module of Visual Basic code that can be used to calculate Cohen's kappa. The main function is called `CohensKappa`, and it should be called with the parameters indicated. (Note that the result parameters `dblKappa` and `dblAgr` are passed `ByRef`, so the caller receives the computed values.)

```vbnet
Module modStat
    ' =======================================================================
    ' Name   : modStat
    ' Goal   : Statistical functions. In particular: asymmetric Cohen's Kappa
    ' History:
    ' 24-06-2009 ERK Created
    ' 17-11-2009 ERK Changed method slightly
    ' =======================================================================
    Private Structure ValFreq
        Dim Value As Integer   ' The value
        Dim Freq1 As Integer   ' The frequency of rater #1 for this value
        Dim Freq2 As Integer   ' The frequency of rater #2 for this value
    End Structure
    ' Local types
    Dim arVal() As ValFreq     ' Values for rater #1 and rater #2
    Dim intVnum As Integer     ' Number of values in [arVal]

    ' -----------------------------------------------------------------------
    ' Name:   CohensKappa
    ' Goal:   Calculate Cohen's Kappa for non-symmetric matrices
    ' Input:  arData() is a string array where each line contains integer
    '         data separated by ';' signs (i.e. the content of a CSV file)
    '         intCol1 and intCol2 define which columns in [arData] belong to
    '         rater #1 and rater #2
    ' Return: dblKappa is Cohen's kappa
    '         dblAgr is the agreement (0 ... 1)
    ' History:
    ' 24-06-2009 ERK Created using two arrays [arV1] and [arV2]
    ' 17-11-2009 ERK Adapted for faster method using only one [arVal]
    ' -----------------------------------------------------------------------
    Public Function CohensKappa(ByRef arData() As String, ByVal intCol1 As Integer, _
            ByVal intCol2 As Integer, ByRef dblKappa As Double, _
            ByRef dblAgr As Double) As Boolean
        Dim arLine() As String   ' The values of one line
        Dim intI As Integer      ' Counter
        Dim intN As Integer      ' Number of data points
        Dim intM As Integer      ' Intermediate number
        Dim intVal1 As Integer   ' Value for rater #1
        Dim intVal2 As Integer   ' Value for rater #2
        Dim dblP_0 As Double     ' P0 from the Kappa formula = % agreement
        Dim dblP_c As Double     ' Pc from the Kappa formula: k=(P0-Pc)/(1-Pc)

        intN = UBound(arData)
        ' Adapt [intN] for empty elements
        While (arData(intN) = "") OrElse (InStr(arData(intN), ";") = 0)
            intN -= 1
        End While
        ' Initialise valFreq sets
        ValFreqClear(intN)
        intM = 0
        ' Visit all points
        For intI = 0 To intN
            ' Get the data for this line
            arLine = Split(arData(intI), ";")
            ' Get the values for rater #1 and rater #2
            intVal1 = CInt(arLine(intCol1)) : intVal2 = CInt(arLine(intCol2))
            ' Put these values in one array
            IncrVal(arVal, intVnum, intVal1, 1)
            IncrVal(arVal, intVnum, intVal2, 2)
            ' Keep track of agreement
            If (intVal1 = intVal2) Then intM += 1
        Next intI
        ' Calculate P0
        dblP_0 = intM / (intN + 1)
        ' Calculate Sum for Pc
        intM = 0
        ' Visit all actual values stored in array [arVal]
        For intI = 1 To intVnum
            ' Keep track of the sum
            intM += arVal(intI).Freq1 * arVal(intI).Freq2
        Next intI
        ' Calculate total Pc
        dblP_c = intM / ((intN + 1) * (intN + 1))
        ' Pass back Kappa and Agreement
        dblKappa = (dblP_0 - dblP_c) / (1 - dblP_c)
        dblAgr = dblP_0
        ' Return success
        CohensKappa = True
    End Function

    ' -----------------------------------------------------------------------
    ' Name: ValFreqClear
    ' Goal: Clear and initialise the ValFreq arrays
    ' History:
    ' 24-06-2009 ERK Created
    ' -----------------------------------------------------------------------
    Private Sub ValFreqClear(ByVal intN As Integer)
        Dim intI As Integer   ' Counter

        ' Set size; in the worst case every rating introduces a new value,
        ' so allow two entries per measuring point
        ReDim arVal(0 To 2 * intN + 2)
        ' Visit all potential members
        For intI = 0 To UBound(arVal)
            ' Access this member
            With arVal(intI)
                ' Clear the members
                .Freq1 = 0    ' Default frequency is zero
                .Freq2 = 0    ' Default frequency is zero
                .Value = -1   ' This value should be overridden by one of 0 or higher
            End With
        Next intI
        ' Reset the size to ZERO
        intVnum = 0
    End Sub

    ' -----------------------------------------------------------------------
    ' Name: IncrVal
    ' Goal: Increment the frequency of value [intVal] for rater [intRater],
    '       adding the value to [arV] if it has not been encountered yet
    ' -----------------------------------------------------------------------
    Private Sub IncrVal(ByRef arV() As ValFreq, ByRef intVnum As Integer, _
            ByVal intVal As Integer, ByVal intRater As Integer)
        Dim intI As Integer   ' Counter

        ' Check all members of [arV]
        For intI = 0 To intVnum
            ' Does this member contain the value?
            If (arV(intI).Value = intVal) Then
                ' Which rater?
                If (intRater = 1) Then
                    ' Increment the frequency of it
                    arV(intI).Freq1 += 1
                Else
                    ' Increment the frequency of it
                    arV(intI).Freq2 += 1
                End If
                ' Exit
                Exit Sub
            End If
        Next intI
        ' Member was not found, so add to [arV]
        intVnum += 1
        With arV(intVnum)
            .Value = intVal
            ' Which rater?
            If (intRater = 1) Then
                .Freq1 = 1
            Else
                .Freq2 = 1
            End If
        End With
    End Sub
End Module
```
{"Source-Url": "http://repository.ubn.ru.nl/bitstream/handle/2066/79408/79408.pdf?sequence=1", "len_cl100k_base": 5684, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 21372, "total-output-tokens": 6275, "length": "2e12", "weborganizer": {"__label__adult": 0.0003294944763183594, "__label__art_design": 0.0003161430358886719, "__label__crime_law": 0.0004107952117919922, "__label__education_jobs": 0.002506256103515625, "__label__entertainment": 8.183717727661133e-05, "__label__fashion_beauty": 0.00016117095947265625, "__label__finance_business": 0.000255584716796875, "__label__food_dining": 0.00045943260192871094, "__label__games": 0.0004830360412597656, "__label__hardware": 0.0008788108825683594, "__label__health": 0.00128173828125, "__label__history": 0.00024068355560302737, "__label__home_hobbies": 0.00011527538299560548, "__label__industrial": 0.0004010200500488281, "__label__literature": 0.00046324729919433594, "__label__politics": 0.0002911090850830078, "__label__religion": 0.000476837158203125, "__label__science_tech": 0.07086181640625, "__label__social_life": 0.00016224384307861328, "__label__software": 0.014617919921875, "__label__software_dev": 0.904296875, "__label__sports_fitness": 0.00029659271240234375, "__label__transportation": 0.00037217140197753906, "__label__travel": 0.0001939535140991211}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19304, 0.03767]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19304, 0.6463]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19304, 0.85101]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 2301, false], [2301, 5275, null], [5275, 8053, null], [8053, 11398, null], [11398, 13806, null], [13806, 16488, null], [16488, 18396, null], [18396, 19304, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 2301, 
true], [2301, 5275, null], [5275, 8053, null], [8053, 11398, null], [11398, 13806, null], [13806, 16488, null], [16488, 18396, null], [18396, 19304, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19304, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19304, null]], "pdf_page_numbers": [[0, 0, 1], [0, 2301, 2], [2301, 5275, 3], [5275, 8053, 4], [8053, 11398, 5], [11398, 13806, 6], [13806, 16488, 7], [16488, 18396, 8], [18396, 19304, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19304, 0.07018]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
c002df391c2b4efc7d3b92f6416fb04d256852c0
Web Services Reliable Messaging Policy Assertion (WS-RM Policy)

Working Draft 01, July 6th 2005

Abstract:

This specification describes a domain-specific policy assertion for WS-ReliableMessaging [WS-RM] that can be specified within a policy alternative as defined in the WS-Policy Framework [WS-Policy].

Composable Architecture:

By using the XML [XML], SOAP [SOAP], and WSDL [WSDL 1.1] extensibility models, the WS-* specifications are designed to be composed with each other to provide a rich Web services environment. By itself this does not provide a negotiation solution for Web services; it is a building block that is used in conjunction with other Web service and application-specific protocols to accommodate a wide variety of policy exchange models.

Status: TBD

Table of Contents

1 Introduction
  1.1 Goals and Requirements
    1.1.1 Requirements
  1.2 Notational Conventions
  1.3 Namespace
  1.4 Compliance
2 RM Policy Assertions
  2.1 Assertion Model
  2.2 Normative Outline
  2.3 Assertion Attachment
  2.4 Assertion Example
3 Security Considerations
4 References
  4.1 Normative
  4.2 Non-Normative
Appendix A. Acknowledgments
Appendix B. XML Schema
Appendix C. Revision History
Appendix D. Notices

1 Introduction

This specification defines a domain-specific policy assertion for reliable messaging for use with WS-Policy [WS-Policy] and WS-ReliableMessaging [WS-RM].

1.1 Goals and Requirements

1.1.1 Requirements

1.2 Notational Conventions

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [KEYWORDS].

This specification uses the following syntax to define normative outlines for messages:

• The syntax appears as an XML instance, but values in italics indicate data types instead of values.
• Characters are appended to elements and attributes to indicate cardinality:
  o "?" (0 or 1)
  o "*" (0 or more)
  o "+" (1 or more)
• The character "|" is used to indicate a choice between alternatives.
• The characters "[" and "]" are used to indicate that contained items are to be treated as a group with respect to cardinality or choice.
• An ellipsis (i.e. "...") indicates a point of extensibility that allows other child, or attribute, content. Additional children and/or attributes MAY be added at the indicated extension points but MUST NOT contradict the semantics of the parent and/or owner, respectively. If an extension is not recognized it SHOULD be ignored.
• XML namespace prefixes (see Section 1.3) are used to indicate the namespace of the element being defined.

1.3 Namespace

The XML namespace [XML-ns] URI that MUST be used by implementations of this specification is:

http://schemas.xmlsoap.org/ws/2005/02/rm/policy

Table 1 lists XML namespaces that are used in this specification. The choice of any namespace prefix is arbitrary and not semantically significant.
Table 1: Prefixes and XML namespaces used in this specification

<table> <thead> <tr> <th>Prefix</th> <th>Namespace</th> <th>Specification</th> </tr> </thead> <tbody> <tr> <td>wsrm</td> <td><a href="http://schemas.xmlsoap.org/ws/2005/02/rm/policy">http://schemas.xmlsoap.org/ws/2005/02/rm/policy</a></td> <td>This specification</td> </tr> </tbody> </table>

1.4 Compliance

An implementation is not compliant with this specification if it fails to satisfy one or more of the MUST or REQUIRED level requirements defined herein. A SOAP Node MUST NOT use the XML namespace identifier for this specification (listed in Section 1.3) within SOAP Envelopes unless it is compliant with this specification. Normative text within this specification takes precedence over normative outlines, which in turn take precedence over the XML Schema [XML Schema Part 1, Part 2] descriptions.

2 RM Policy Assertions

WS-Policy Framework [WS-Policy] and WS-Policy Attachment [WS-PolicyAttachment] collectively define a framework, model and grammar for expressing the requirements and general characteristics of entities in an XML Web services-based system. To enable an RM Destination and an RM Source to describe their requirements for a given Sequence, this specification defines a single RM policy assertion that leverages the WS-Policy framework.

2.1 Assertion Model

The RM policy assertion indicates that the RM Source and RM Destination MUST use WS-ReliableMessaging [WS-RM] to ensure reliable message delivery. Specifically, the WS-ReliableMessaging protocol determines invariants maintained by the reliable messaging endpoints and the directives used to track and manage the delivery of a Sequence of messages.

The assertion defines an inactivity timeout parameter that either the RM Source or RM Destination MAY include. If, during this duration, an endpoint has received no application or control messages, the endpoint MAY consider the RM Sequence to have been terminated due to inactivity.
This assertion also defines a base retransmission interval parameter that the RM Source MAY include. If no acknowledgement has been received for a given message within the interval, the RM Source will retransmit the message. The retransmission interval MAY be modified at the Source's discretion during the lifetime of the Sequence. This parameter does not alter the formulation of messages as transmitted, only the timing of their transmission.

Similarly, this assertion defines a backoff parameter that the RM Source MAY include to indicate that the retransmission interval will be adjusted using the commonly known exponential backoff algorithm [Tanenbaum].

Finally, this assertion defines an acknowledgement interval parameter that the RM Destination MAY include. Per WS-ReliableMessaging [WS-RM], acknowledgements are sent on return messages or sent stand-alone. If a return message is not available to send an acknowledgement, an RM Destination MAY wait for up to the acknowledgement interval before sending a stand-alone acknowledgement. If there are no unacknowledged messages, the RM Destination MAY choose not to send an acknowledgement. This parameter does not alter the formulation of messages or acknowledgements as transmitted, nor does it alter the meaning of the wsrm:AckRequested directive. Its purpose is to communicate the timing of acknowledgements so that the RM Source may tune appropriately.

2.2 Normative Outline

The normative outline for the RM policy assertion is:

```xml
<wsrm:RMAssertion [wsp:Optional="true"]? ... >
  <wsrm:InactivityTimeout Milliseconds="xs:unsignedLong" ... /> ?
  <wsrm:BaseRetransmissionInterval Milliseconds="xs:unsignedLong" ... /> ?
  <wsrm:ExponentialBackoff ... /> ?
  <wsrm:AcknowledgementInterval Milliseconds="xs:unsignedLong" ... /> ?
  ...
</wsrm:RMAssertion> ``` The following describes additional, normative constraints on the outline listed above: /wsrm:RMAssertion A policy assertion that specifies that WS-ReliableMessaging [WS-RM] protocol MUST be used for a Sequence. /wsrm:RMAssertion/@wsp:Optional="true" Per WS-Policy [WS-Policy], this is compact notation for two policy alternatives, one with and one without the assertion. The intuition is that the behavior indicated by the assertion is optional, or in this case, that WS-ReliableMessaging MAY be used. /wsrm:RMAssertion/wsrm:InactivityTimeout A parameter that specifies a period of inactivity for a Sequence. If omitted, there is no implied value. /wsrm:RMAssertion/wsrm:InactivityTimeout/@Milliseconds The inactivity timeout duration, specified in milliseconds. /wsrm:RMAssertion/wsrm:BaseRetransmissionInterval A parameter that specifies how long the RM Source will wait after transmitting a message and before retransmitting the message. If omitted, there is no implied value. /wsrm:RMAssertion/wsrm:BaseRetransmissionInterval/@Milliseconds The base retransmission interval, specified in milliseconds. /wsrm:RMAssertion/wsrm:ExponentialBackoff A parameter that specifies that the retransmission interval will be adjusted using the exponential backoff algorithm [Tanenbaum]. If omitted, there is no implied value. /wsrm:RMAssertion/wsrm:AcknowledgementInterval A parameter that specifies the duration after which the RM Destination will transmit an acknowledgement. If omitted, there is no implied value. /wsrm:RMAssertion/wsrm:AcknowledgementInterval/@Milliseconds The acknowledgement interval, specified in milliseconds. 2.3 Assertion Attachment Because the RM policy assertion indicates endpoint behavior over an RM Sequence, the assertion has Endpoint Policy Subject [WS-PolicyAttachment]. 
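The normative outline of Section 2.2 can be exercised with a short, non-normative Python sketch (the function name and sample values are ours; the sample values follow the prose description of the example in Section 2.4):

```python
import xml.etree.ElementTree as ET

WSRM_POLICY_NS = "http://schemas.xmlsoap.org/ws/2005/02/rm/policy"

# Sample assertion; element names follow the normative outline in Section 2.2.
SAMPLE = """
<wsrm:RMAssertion xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm/policy">
  <wsrm:InactivityTimeout Milliseconds="600000"/>
  <wsrm:BaseRetransmissionInterval Milliseconds="3000"/>
  <wsrm:ExponentialBackoff/>
  <wsrm:AcknowledgementInterval Milliseconds="200"/>
</wsrm:RMAssertion>
"""

def parse_rm_assertion(xml_text):
    """Collect the optional RMAssertion parameters; a missing parameter maps
    to None, mirroring the spec's 'If omitted, there is no implied value'."""
    root = ET.fromstring(xml_text)

    def millis(local_name):
        el = root.find("{%s}%s" % (WSRM_POLICY_NS, local_name))
        return int(el.get("Milliseconds")) if el is not None else None

    return {
        "inactivity_timeout_ms": millis("InactivityTimeout"),
        "base_retransmission_interval_ms": millis("BaseRetransmissionInterval"),
        "exponential_backoff": root.find("{%s}ExponentialBackoff" % WSRM_POLICY_NS) is not None,
        "acknowledgement_interval_ms": millis("AcknowledgementInterval"),
    }

params = parse_rm_assertion(SAMPLE)
```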
WS-PolicyAttachment defines three WSDL [WSDL 1.1] policy attachment points with Endpoint Policy Subject: - **wsdl:portType** – A policy expression containing the RM policy assertion MUST NOT be attached to a wsdl:portType; the RM policy assertion specifies a concrete behavior whereas the wsdl:portType is an abstract construct. - **wsdl:binding** – A policy expression containing the RM policy assertion SHOULD be attached to a wsdl:binding. - **wsdl:port** – A policy expression containing the RM policy assertion MAY be attached to a wsdl:port. If the RM policy assertion appears in a policy expression attached to both a wsdl:port and its corresponding wsdl:binding, the parameters in the former MUST be used and the latter ignored. Per WS-ReliableMessaging [WS-RM], a wsrm:CreateSequence request MAY contain an offer to create an “inbound” Sequence. If the RM policy assertion is attached to an endpoint declaring only input messages, the endpoint MUST reject a wsrm:CreateSequence request containing a wsrm:Offer to create a corresponding Sequence. If the assertion is attached to an endpoint declaring both input and output messages, the endpoint MUST reject a wsrm:CreateSequence request that does not contain a wsrm:Offer to create a corresponding Sequence. 2.4 Assertion Example Table 2 lists an example use of the RM policy assertion. Table 2: Example policy with RM policy assertion ```xml <wsp:UsingPolicy wsdl:required="true" /> <wsp:Policy wsu:Id="MyPolicy" /> ``` Line (09) in Table 2 indicates that WS-Policy [WS-Policy] is in use as a required extension. Lines (11-19) are a policy expression that includes an RM policy assertion (Lines 12-17) to indicate that WS-ReliableMessaging [WS-RM] must be used. Line (13) indicates the endpoint will consider the Sequence terminated if there is no activity after ten minutes. 
Line (14) indicates the RM Source will retransmit unacknowledged messages after three seconds, and Line (15) indicates that the exponential backoff algorithm will be used for timing of successive retransmissions should the message continue to go unacknowledged. Line (16) indicates the RM Destination may buffer acknowledgements for up to two-tenths of a second. Lines (23-26) are a WSDL [WSDL 1.1] binding. Line (24) indicates that the policy in Lines (11-19) applies to this binding, specifically indicating that WS-ReliableMessaging must be used over all the messages in the binding. 3 Security Considerations It is strongly RECOMMENDED that policies and assertions be signed to prevent tampering. It is RECOMMENDED that policies SHOULD NOT be accepted unless they are signed and have an associated security token to specify that the signer has proper claims for the given policy. That is, a relying party shouldn't rely on a policy unless the policy is signed and presented with sufficient claims to pass the relying party's acceptance criteria. It should be noted that the mechanisms described in this document could be secured as part of a SOAP message using WS-Security [WSS] or embedded within other objects using object-specific security mechanisms. 4 References 4.1 Normative 4.2 Non-Normative Appendix A. 
Acknowledgments This document is based on initial contribution to OASIS WS-RX Technical Committee by the following authors: Stefan Batres, Microsoft (Editor), Ruslan Bilorusets, BEA, Don Box, Microsoft, Luis Felipe Cabrera, Microsoft, Derek Collison, TIBCO Software, Donald Ferguson, IBM, Christopher Ferris, IBM (Editor), Tom Freund, IBM, Mary Ann Hondo, IBM, John Ibbotson, IBM, Lei Jin, BEA, Chris Kaler, Microsoft, David Langworthy, Microsoft, Amelia Lewis, TIBCO Software, Rodney Limprecht, Microsoft, Steve Lucco, Microsoft, Don Mullen, TIBCO Software, Anthony Nadalin, IBM, Mark Nottingham, BEA, David Orchard, BEA, Shivajee Samdarshi, TIBCO Software, John Shewchuk, Microsoft, Tony Storey, IBM. The following individuals have provided invaluable input into the initial contribution: Keith Ballinger, Microsoft, Allen Brown, Microsoft, Michael Conner, IBM, Francisco Curbera, IBM, Steve Graham, IBM, Pat Helland, Microsoft, Rick Hill, Microsoft, Scott Hinkelman, IBM, Tim Holloway, IBM, Efim Hudis, Microsoft, Johannes Klein, Microsoft, Frank Leymann, IBM, Martin Nally, IBM, Peter Niblett, IBM, Jeffrey Schlimmer, Microsoft, Chris Sharp, IBM, James Snell, IBM, Keith Stobie, Microsoft, Satish Thatte, Microsoft, Stephen Todd, IBM, Sanjiva Weerawarana, IBM, Roger Wolter, Microsoft. The following individuals were members of the committee during the development of this specification: TBD Appendix B. 
XML Schema A normative copy of the XML Schema [XML Schema Part 1, Part 2] description for this specification may be retrieved from the following address: http://schemas.xmlsoap.org/ws/2005/02/rm/wsrm-policy.xsd <?xml version="1.0" encoding="UTF-8"?> <!-- OASIS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on OASIS’s procedures with respect to rights in OASIS specifications can be found at the OASIS website. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementors or users of this specification, can be obtained from the OASIS Executive Director. OASIS invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to implement this specification. Please address the information to the OASIS Executive Director. Copyright (C) OASIS Open (2005). All Rights Reserved. This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. 
However, this document itself may not be modified in any way, such as by removing the copyright notice or references to OASIS, except as needed for the purpose of developing OASIS specifications, in which case the procedures for copyrights defined in the OASIS Intellectual Property Rights document must be followed, or as required to translate it into languages other than English. The limited permissions granted above are perpetual and will not be revoked by OASIS or its successors or assigns. This document and the information contained herein is provided on an "AS IS" basis and OASIS DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. --> ```xml <xs:schema targetNamespace="http://schemas.xmlsoap.org/ws/2005/02/rm/policy" xmlns:tns="http://schemas.xmlsoap.org/ws/2005/02/rm/policy" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified"> <xs:element name="RMAssertion"> <xs:complexType> <xs:sequence> <xs:element name="InactivityTimeout" minOccurs="0"> <xs:complexType> <xs:attribute name="Milliseconds" type="xs:unsignedLong" use="required" /> <xs:anyAttribute namespace="##any" processContents="lax" /> </xs:complexType> </xs:element> <xs:element name="BaseRetransmissionInterval" minOccurs="0"> <xs:complexType> <xs:attribute name="Milliseconds" type="xs:unsignedLong" use="required" /> <xs:anyAttribute namespace="##any" processContents="lax" /> </xs:complexType> </xs:element> <xs:element name="ExponentialBackoff" minOccurs="0"> <xs:complexType> <xs:anyAttribute namespace="##any" processContents="lax" /> </xs:complexType> </xs:element> <xs:element name="AcknowledgementInterval" minOccurs="0"> <xs:complexType> <xs:attribute name="Milliseconds" type="xs:unsignedLong" use="required" /> <xs:anyAttribute namespace="##any" 
processContents="lax" />
          </xs:complexType>
        </xs:element>
        <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded" />
      </xs:sequence>
      <xs:anyAttribute namespace="##any" processContents="lax" />
    </xs:complexType>
  </xs:element>
</xs:schema>
``` Appendix C. Revision History <table> <thead> <tr> <th>Revision</th> <th>Date</th> <th>By Whom</th> <th>What</th> </tr> </thead> <tbody> <tr> <td>wd-01.doc</td> <td>2005-07-06</td> <td>Ümit Yalçınalp</td> <td>Initial version created based on submission by the authors.</td> </tr> <tr> <td>ws-01.swx</td> <td>2005-09-01</td> <td>Ümit Yalçınalp</td> <td>Reformatted using Open Office</td> </tr> </tbody> </table> Appendix D. Notices OASIS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on OASIS’s procedures with respect to rights in OASIS specifications can be found at the OASIS website. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementors or users of this specification, can be obtained from the OASIS Executive Director. OASIS invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to implement this specification. Please address the information to the OASIS Executive Director. Copyright (C) OASIS Open (2005). All Rights Reserved. 
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to OASIS, except as needed for the purpose of developing OASIS specifications, in which case the procedures for copyrights defined in the OASIS Intellectual Property Rights document must be followed, or as required to translate it into languages other than English. The limited permissions granted above are perpetual and will not be revoked by OASIS or its successors or assigns. This document and the information contained herein is provided on an "AS IS" basis and OASIS DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Using Component-oriented Process Models for Multi-Metamodel Applications Fahad R. Golra Université Européenne de Bretagne Institut Télécom / Télécom Bretagne Brest, France Email: fahad.golra@telecom-bretagne.eu Fabien Dagnat Université Européenne de Bretagne Institut Télécom / Télécom Bretagne Brest, France Email: fabien.dagnat@telecom-bretagne.eu Abstract—Recent advancements in Model Driven Engineering (MDE) call for corresponding software development processes. These processes need to be able to assist multi-metamodel development by extending support for the usage of models and transformations. Business processes are sequenced on the basis of their sequential contingencies, whereas we argue that software development processes are meant to be sequenced along their intrinsic factors. In this paper, we present a process metamodel from our framework, inspired by the component-based paradigm, to automate software development processes. This approach presents the concept of structured artifacts and exploits activity sequencing based on events and constraints. I. INTRODUCTION The recent progress of Model Driven Engineering (MDE) has shaped the software industry to be model-centric. The aim of MDE is to develop software through the evolution of one model from requirements to deployment, passing through a series of transformations [1]. Progress towards this goal needs corresponding process modeling support, which has been largely overlooked [2]. The factors restraining the software industry from unleashing the full potential of Model Driven Engineering have been widely explored [3]. One of these important factors is the lack of coherence between software process modeling and the software development paradigm. To achieve coherence amongst the model-driven development paradigm, software development lifecycles and software development processes, we argue that software process modeling should also be model-centric. 
Besides this, a process modeling approach should allow flexibility for process improvement, not only at the organizational level but also within a project. This can be achieved if processes can be replaced or updated without affecting their context. The process modeling domain is dominated by business process models that focus on the workflow and sequence of processes [4]. This results in a lack of appropriate mechanisms for the validation, verification and state maintenance of work products. Software process modeling, being targeted at the software industry, should match the perspectives of its domain experts. The software jargon is more familiar with execution, initiation, implementation, and typing. We argue that current software process models are closer to business process modeling, which creates a gap between process modeling and architecture modeling. In order to reduce this gap, our software process modeling approach is tailored to a software development paradigm (component-based software engineering) that is more comprehensible to software domain experts. The choice of the component-based paradigm helps in developing process components with specified interfaces. This approach favors both the execution mechanisms and process improvement. This paper is structured as follows. Section 2 describes recent endeavors in the field of software process modeling. Section 3 describes the General Process Metamodel. Section 4 presents the complete approach as a multi-metamodel framework. Finally, Section 5 outlines the conclusions. II. SOFTWARE PROCESS MODELING Various approaches have been proposed to model and execute processes. This section describes the most notable ones. SPEM2.0 is presented by OMG as a standard with the vision of separating usage from content for software development processes [5]. Different UML diagrams are used to model different views of the process model. 
The usage of these diagrams adds expressiveness to the process model but, as a side effect, the model faces semantic limitations. These semantic limitations range from inter-process communication, artifact definitions and event descriptions to execution. The lack of a proper treatment of exception handling leads to the absence of reactive control for the processes. In terms of execution, process enactment is nothing more than a mapping to a project plan, which offers no support even for process simulation. The process components introduced by SPEM2.0 lack proper semantics as well. These process components take WorkProducts as ports of a component, thus linking process components together on the basis of these WorkProducts. Taking SPEM as a standard, various other techniques extend it in order to overcome its shortcomings. OPSS is a workflow management system that is implemented on top of the Java Event-based Distributed Infrastructure (JEDI) to use its event-based approach [6]. OPSS presents a translator that automatically translates UML process models to Java code. OPSS then enacts the process by executing this Java code. The architecture of OPSS revolves around two main components: agents and the state server. The state server is responsible for managing the state of the processes, which helps in the coordination of agents. An event notification system defines a standard inter-operation mechanism between agents. Besides the proactive control, the use of events in state transitions offers reactive control in activity sequencing. Since events are used to trigger transitions, it is also possible to define error states, leading to exception handling. This approach supplements the semantics of UML at a lower level by translating it to Java code. **Chou’s method** is a process modeling language that consists of two layers: high-level UML-based diagrams and a corresponding low-level process language [7]. 
The high-level UML-based diagrams use a variation of activity and class diagrams, whereas the low-level process language is an object-oriented language that models a process as a set of classes (the *process program*). The high-level diagrams hide the complexity of the rich semantics at the lower level. These process classes exploit exception handling and event signaling to offer reactive control. No tool provides an automatic mapping between the two levels; however, their correspondence is well documented. This approach shares a common drawback with OPSS: complexity. The software industry did not welcome the complex modeling approaches (model programs) presented in the past decade. **MODAL** is a more recent work, enriching the original metamodel of SPEM to exploit the potential of Model Driven Engineering [8]. A concept of *intention* is introduced in MODAL to keep track of the methodological objectives set by different stakeholders for an activity. Contrary to SPEM, it focuses more on the execution of the process models; thus the ports of the process components are not taken up as work products, rather they act as services, much like the service ports in the component-based programming paradigm. These process components are set up as hierarchical abstraction levels: Abstract Modeling Level, Execution Modeling Level and Detailed Modeling Level, which describe the process components from coarse-grained to fine-grained analysis. SPEM does not provide reactive control over the process sequence, so the flexible constraint application offered by MODAL can help in developing more stable process models. **UML4SPM** is a recent endeavor for achieving executability for process models [9]. It is presented using two packages: the Process Structure package, which defines the primary process elements, and the Foundation package, which extends a subset of the concepts from UML, specifically tailored for process modeling. 
Sequence control is offered at two levels (activity sequencing and action sequencing) through control flows and object flows. Actions serve as basic building blocks for the activities. This control is defined in terms of control nodes. Strong proactive control in the language is ensured by the use of these control nodes along with action and activity sequencing. An activity has the ability to invoke an action that can alter the control sequence of activities, thus offering reactive control. Though the authors take full advantage of MDE for the specification of their own approach, it is not targeted towards the support of model-driven software development. **xSPEM** stands out amongst all the process modeling languages discussed earlier in terms of execution semantics [10]. This approach extends SPEM with project management capabilities, better process observation and event descriptions. This gives a concrete basis for process execution. Solid tool support is provided by a graphical editor and a model simulator. xSPEM also offers model validation by translating the properties on SPEM into LTL properties on the corresponding Petri net model. Project monitoring is handled by a mapping to BPEL. All these approaches tend to support enactment, but they do not exploit the full potential of MDE through the use of models. An approach where models are the basis of communication between the processes is still missing. Not much work has been done on process modeling approaches where models serve as the input and output artifacts of a process. Though many current approaches offer processes as components, they fail to provide execution semantics for these process components. Most of the process modeling approaches use SPEM as their basis and extend its expressiveness by translating the model into a process program or complementing the model with one. 
A concrete approach for a semantically rich process modeling language is still missing. III. GENERAL PROCESS METAMODEL The General Process Metamodel defines the basic structure of our framework. It is organized into three packages, where the activity implementation package and the contract package merge into the core package. The core package defines the five core entities of our process framework, as illustrated in Figure 1. A process is defined as a collection of activities that can be arranged in a sequence through dependencies based on their contracts. Instead of focusing on proactive control for the process, more focus is given to the activity as the basic building block of the process. A process, having both activities and their associated dependencies, represents an architecture using activities as basic entities and dependencies to define the flow between them. An activity is decomposable, thus presenting the process model as a hierarchical structure. Activities can be shared amongst different processes and super-activities. An activity behaves as a black-box component, where the interface to its context and content (in the case of a composite activity) is through its contract. Inspired by the component-based paradigm, all interactions between two activities are handled through this contract. Each activity is performed by role(s). No explicit control flow is defined for the activities. The contracts of the activities and the dependencies between them together allow the dynamic sequencing of flow for the processes. This dynamic sequencing gives reactive control to process models, which gain the capability of restructuring the control flow at runtime. Another package, called the activity implementation package, defines the content of the activity, as shown in Figure 2. As recently adopted by SPEM2.0, the current approach is to separate usage from content. We have also used this approach for the definition of an activity. 
Each activity is implemented by one or more activity implementations. These activity implementations present a set of alternative usages; one of them can be chosen later on to determine the process instance. A contract that serves as an interface to the context or content is associated with each activity implementation. An activity implementation has to conform to the contract of its activity (type). This contract is used by the dependencies that serve for the transition of control flow at execution time. An activity implementation can be either primitive or composite. In the case of a composite activity implementation, it contains a process. A composite activity containing a process encapsulates the activities and dependencies contained by that process, which serve as its content. In order to interact with the content, this composite activity implementation uses internal contracts. All interactions between the contents of this composite activity and its context have to pass through this activity. In the case of a primitive activity, its implementation defines the procedure for its realization. A primitive activity can be automatic, semi-automatic or manual. All activities are performed by roles. If no performer is amongst the associated roles, then the activity is automatic. If no tool is amongst the associated roles, then it is a manual activity. In the case of a semi-automatic activity, a performer performs the activity by using a tool. Such a categorization of activities helps manage different types of activities, with special focus given to the automation of activities in order to use model transformations. The third package of the general process metamodel is the contract package, as shown in Figure 3. Activities can be connected together for the purpose of control flow only through these contracts. A contract may be internal or external. 
In the case of composite activities, for each external contract there is a corresponding internal contract of the opposite nature. The nature of a contract is either required or provided. Thus, for a required external contract, there exists a provided internal contract and vice versa. Each contract defines the events, properties and artifacts, so that a dependency can link two activities together. These events, properties and artifacts can be optional or mandatory, based on the behavior of the activity. An event in our framework can be a triggering event, a consumable event or a producible event. A triggering event is responsible for triggering an activity. Every required contract defines the triggering events for the activity, which are responsible for triggering its initiation. Provided contracts do not have any triggering event. A consumable event is an event that may be consumed by an activity for its realization. If the consumable event is mandatory, the activity cannot be performed without this event; an optional event, however, can be neglected. Consumable events are defined in the required contract of the activities. A producible event is defined in the provided contract and may or may not be mandatory. This event is produced as a target or side effect of the realization of the current activity. This provided event may be used by a successor activity as a required event. An artifact defines a resource needed or produced by an activity. An artifact in our process framework can either be a model or any other resource. In the case of a model, the artifact defines the metamodel for the contract. This adds the flexibility to use models as a basis of communication between the activities. Such models can be used by automatic or semi-automatic activities in order to carry out model transformations. A needed resource is a requisite, which is defined in the required contract, whereas a produced resource is defined in the provided contract as a deliverable. 
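As an illustrative, non-normative reading of the contract package, the event placement rules above can be captured in a few data classes; all names and the encoding of the validation rule are ours, not the authors' tooling:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class EventKind(Enum):
    TRIGGERING = "triggering"    # defined only in required contracts
    CONSUMABLE = "consumable"    # defined only in required contracts
    PRODUCIBLE = "producible"    # defined only in provided contracts

@dataclass
class Event:
    name: str
    kind: EventKind
    mandatory: bool = True

@dataclass
class Artifact:
    name: str
    metamodel: Optional[str] = None  # set when the artifact is a model
    mandatory: bool = True

@dataclass
class Contract:
    nature: str                      # "required" or "provided"
    internal: bool = False           # internal vs. external contract
    events: List[Event] = field(default_factory=list)
    artifacts: List[Artifact] = field(default_factory=list)
    properties: List[str] = field(default_factory=list)  # invariants, pre-/post-conditions

    def well_formed(self):
        """Check the event placement rules stated in the contract package."""
        for e in self.events:
            if self.nature == "provided" and e.kind is not EventKind.PRODUCIBLE:
                return False
            if self.nature == "required" and e.kind is EventKind.PRODUCIBLE:
                return False
        return True

# A triggering event belongs in a required contract, not a provided one.
required = Contract("required", events=[Event("design_ready", EventKind.TRIGGERING)])
provided = Contract("provided", events=[Event("design_ready", EventKind.TRIGGERING)])
```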
An artifact can be optional or mandatory, depending on the nature of the activity. The contracts of an activity also define the properties that help link two activities together through dependencies. There are three types of properties: invariants, pre-conditions and post-conditions. Pre-conditions are defined in the required contract of an activity; they evaluate the properties that need to hold in order to start the realization of the activity. Invariants are properties that need to hold throughout the execution of an activity and are therefore also defined in the required contract. Post-conditions, on the other hand, are defined in the provided contract of an activity and record the conditions created by its realization. The contracts of the sub-activities conform to the contracts of the super-activities. One of the major benefits of using a component-based approach for process modeling is to restrict interaction to the specified contracts. Having defined contracts for activities and activity implementations, we have the flexibility to choose any of the alternative activity implementations at runtime. Moreover, any activity implementation of an activity (type) can be replaced by any other alternative and still work in the same context. This holds for both the content and the context of an activity: replacing one of the sub-activity implementations of an activity does not affect its interaction with its context, and likewise a modification in the context does not affect the interaction with the content of an activity.

IV. MULTI-METAMODEL PROCESSES

The hallmark of MDE is the usage of multiple models along with defined transformations amongst them. Models are created, modified, merged or split as the software development project advances. We argue that a unique process model cannot capture all the semantics of the processes at different development stages.
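The substitutability argument above can be stated operationally: an implementation is replaceable by any alternative that exposes the same contracts. A minimal sketch (our own formulation, with hypothetical contract names; the paper does not prescribe a concrete conformance check):

```python
from typing import FrozenSet, Tuple

# A contract is reduced here to a (name, nature) pair, e.g. ("In", "required").
ContractSig = Tuple[str, str]


def conforms(impl: FrozenSet[ContractSig],
             activity_type: FrozenSet[ContractSig]) -> bool:
    """An implementation conforms to its activity (type) when it exposes
    exactly the contracts the type declares; any two conforming
    implementations are then interchangeable in the same context."""
    return impl == activity_type


# A hypothetical activity type and two alternative implementations.
activity_a = frozenset({("In", "required"), ("Out", "provided")})
impl_1 = frozenset({("In", "required"), ("Out", "provided")})  # conforms
impl_2 = frozenset({("In", "required")})                       # misses a contract
```

Under this check, `impl_1` can replace any other conforming implementation of `activity_a` without affecting the context, which is the substitutability property argued above.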
For this reason, our approach presents three metamodels: the General Process Metamodel, the Application Process Metamodel and the Instance Process Metamodel. The General Process Metamodel is used to document process best practices; it is not specific to a particular organization or project. When the General Process Metamodel is applied to a specific project by some organization, it is refined to guide the development process. This project-specific metamodel is called the Application Process Metamodel. The Instance Process Metamodel is responsible for the execution of the processes of a project, and thus takes into account the time plan and status of the project. Model transformations are used between the models conforming to these metamodels, and we intend to provide tool support for carrying out these transformations. This global picture is illustrated in Figure 4. The development of these three process modeling metamodels, along with the defined transformation definitions, would allow us to create a complete framework. We have presented only the first metamodel, i.e. the General Process Metamodel, in this paper; the Application Process Metamodel and Instance Process Metamodel are under development. The Application Process Metamodel refines the General Process Metamodel with additional features so as to make it application-specific. A categorization of activities is added to group them into disciplines; activities can also be categorized along other aspects such as tool collections and role collections. Guidance is attached to each activity. In order to add planning, processes are then refined into phases and sub-phases, each with a milestone. Finally, the Instance Process Metamodel is used to guide one specific process model instance. This metamodel refines the Application Process Metamodel to add time-management capabilities: actual project data (time-line, resource utilization, etc.) can be compared and analyzed against the expected data.
This gives the ability to use reactive control for process sequencing and to reschedule the processes accordingly. In our approach, we use model transformations between these metamodels. The vision of this software process modeling framework is that an organization can keep its general software development process models in the form of best practices acquired from experience. These models can then be transformed into application-specific models when a specific application is under development. Finally, an application process model can be transformed into an instance process model that supports the execution of process models. The tool for this software process modeling framework is envisioned to be developed on top of METADONE, a METACASE tool providing a graphical environment for metamodeling [11]. The instantiation semantics for these processes is being worked out, which would allow us to have an executable software development process modeling approach that can support model driven engineering. Furthermore, we aim to define a formal semantics for these processes; such formal semantics would allow us to verify properties of the processes all the way from the requirements phase to deployment.

V. PROCESS EXAMPLE

In order to facilitate the understanding of our approach, we model an example process that is responsible for developing a transformation definition and executing it to obtain the desired output model. This process is named the Transformation Definition Process. For the purpose of clarity we present here a graphical notation, which is intended only for explanatory purposes. It should be noted that our approach currently does not provide a graphical modeling language; its entire scope remains the semantic representation of process models. The example depicted in Figure 5 shows the nature of the building blocks of the process model.
The Transformation Definition process in our example has only one activity, named Transformation activity. The Transformation activity defines contracts for all its interactions with its content and context. Each contract of the Transformation activity is represented in the figure by a black and a white block: the black block represents the external contract, whereas the white block represents the internal contract. For external contracts, the figure draws required contracts on the left-hand side of the activity components and provided contracts on the right-hand side; the placement of internal contracts is the opposite. Thus the Transformation activity defines four external contracts (marked as black blocks), where Input Metamodel, Output Metamodel and Input Model are required contracts and Output Model is a provided contract. Each of these external contracts has a corresponding internal contract (marked as a white block) of the opposite nature; thus the Transformation activity also has four internal contracts, where Input Metamodel, Output Metamodel and Input Model are provided contracts and Output Model is a required contract. The Transformation activity is composed of the Transformation process, which in turn has four activities: Pattern Identification, Technology Decision, Rule Generation and Transformation Execution. A dependency can be seen between the Pattern Identification and Transformation activities for Input Metamodel and Output Metamodel: a straight black line between the internal provided contract Input Metamodel of the Transformation activity and the external required contract Input Metamodel of the Pattern Identification activity represents the dependency between them.
In the General Process Metamodel, a contract specifies the artifact that an activity requires or produces; thus each contract shown in this example has an associated artifact, which is evident from the naming of the contracts. As discussed earlier, Figure 5 is a representational model, so it does not show the roles assigned to each activity. In the case of the Transformation activity, a system analyst is the performer, assisted by tools such as a validation tool and a transformation tool. The Pattern Identification activity is responsible for identifying the matching patterns between the input metamodel and the output metamodel. This activity produces a pattern list that is made available to its context through the Matching Pattern List contract. The Technology Decision activity depends on Pattern Identification for the Matching Pattern List and on the Transformation activity for the Input Metamodel and Output Metamodel. Once the technology is chosen for the transformation, the rules are generated through the Rule Generation activity, which produces a Transformation Definition for carrying out the transformation of the input model into the output model. The Transformation Execution activity takes the Transformation Definition from Rule Generation, and the Input Metamodel, Output Metamodel and Input Model from the Transformation activity, through its specified required contracts. This in turn produces the Output Model, for which the Transformation activity is dependent on Transformation Execution. Finally, the Transformation activity specifies the provided external contract Output Model for its context. Of the activities defined in the Transformation Definition process, all are semi-automatic except Technology Decision, which is a manual activity that does not need any tool intervention. The Transformation activity is a composite activity that relies on the Transformation process for its implementation.
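The artifact flow of this example induces an execution order among the activities. The sketch below derives activity-to-activity dependencies from the required and produced artifacts as we read them off Figure 5 and computes one admissible order; the "Technology Choice" artifact is our own placeholder for the output of Technology Decision, which the paper does not name.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Artifact flow of the Transformation Definition example.
produces = {
    "Transformation (context)": ["Input Metamodel", "Output Metamodel", "Input Model"],
    "Pattern Identification": ["Matching Pattern List"],
    "Technology Decision": ["Technology Choice"],  # hypothetical artifact name
    "Rule Generation": ["Transformation Definition"],
    "Transformation Execution": ["Output Model"],
}
requires = {
    "Pattern Identification": ["Input Metamodel", "Output Metamodel"],
    "Technology Decision": ["Matching Pattern List", "Input Metamodel", "Output Metamodel"],
    "Rule Generation": ["Technology Choice"],
    "Transformation Execution": ["Transformation Definition", "Input Metamodel",
                                 "Output Metamodel", "Input Model"],
}

# Derive predecessor activities from artifact producers, then order them.
producer_of = {art: act for act, arts in produces.items() for art in arts}
graph = {act: {producer_of[a] for a in arts} for act, arts in requires.items()}
order = list(TopologicalSorter(graph).static_order())
```

Any valid order places Pattern Identification before Technology Decision, Technology Decision before Rule Generation, and Rule Generation before Transformation Execution, matching the dependencies described above.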
All the other activities in the example are primitive activities, as they do not contain a process and have a procedural implementation definition. This representational figure does not show the pre-conditions on the artifacts, which are added in the implementation details of these activities. The dependencies between the activities are realized by an event-based mechanism in our framework.

VI. CONCLUSION

We have presented a process modeling approach that models activities as components with defined contracts. The overall hierarchical structure of the process metamodel allows processes to be modeled at different abstraction levels. Moreover, the complete framework is intended to be defined using three metamodels, i.e. the General, Application and Instance metamodels, which helps capture the semantics of processes at all development stages. By accepting models as required and provided artifacts and extending support for model transformations, processes gain the capabilities to support Model Driven Engineering. Moreover, the inspiration from component-based architecture adds to the expressiveness of software process modeling by using a jargon familiar to domain experts.

REFERENCES
On the Development of a Theoretical Model of the Impact of Trust in the Performance of Distributed Software Projects

Sabrina Marczak, Vanessa Gomes
Faculdade de Informática, Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Porto Alegre, RS, Brazil
sabrina.marczak@pucrs.br, vanessa.gomes@acad.pucrs.br

Abstract—Trust is often defined as the belief that the trustee will meet the positive expectations of the trustor. Although several studies have discussed the topic, little is still known about the impact of trust (or lack of it) on the performance of distributed software projects. In this paper we present initial findings of an empirically informed study that aimed to identify which factors influence positively or negatively one's perceived trustworthiness of others in the project, and the impact of such factors on specific project performance measures. Availability, competence, expertise, face-to-face communication, and leadership are among the factors considered to positively influence the development of trust and the consequent achievement of performance metrics. This is a first step in a larger investigation aiming to develop a theoretical model of the impact of trust on the performance of distributed software projects. Such a model can be used by researchers as a reference framework to further investigate the topic, and by practitioners to better manage and organize distributed software teams.

Index Terms—Distributed software development, trust, trust influential factors, project performance, empirical study, theoretical model

I. INTRODUCTION

Trust is a topic of interest in several disciplines, such as psychology, sociology, and computer science, and as such can be examined at different levels (individual, team, and institutional, for example) and defined accordingly. One of the most common definitions of trust is the belief that the trustee will meet the positive expectations of the trustor [1].
These expectations form as a result of actions and behaviors, and can influence the level of trust that one member places in another. The importance of trust in distributed software projects has been recognized by several researchers (e.g., [2][3][4]). In an empirically-based study of six software development companies, Jalali, Gencel, and Šmite [5] found that trust building is initially a static process that evolves into a dynamic one as the project develops and team members change their opinion about their colleagues' behavior towards the project activities. Al-Ani and colleagues confirm that trust development is a dynamic process in their study of 5 multinational companies, and further describe the phases that compose such a dynamic process, namely formation, dissolution, adjustment and restoration [6]. In an empirical study of a large organization, Al-Ani and Redmiles identified team size, project type, diversity, and leadership as factors that might affect the development of trust in distributed teams [1]. Trainer and Redmiles [7] have expanded on these factors based on a review of the literature; expertise, years of experience, availability, and reputation are examples of suggested factors. Other studies of distributed software teams suggest the critical role trust plays in project performance (e.g., [8][9]). However, to the best of our knowledge the only study in the field exploring such a relationship is by Moe and Šmite [10]. They investigated the impact of lack of trust on the success or failure of distributed software projects in an Eastern European company and found that the lack of trust caused a decrease in productivity, quality, information exchange and feedback, and morale among the employees, as well as an increase in relationship conflicts.
We sought to expand this initial knowledge and conducted an empirical study of 4 distributed software projects from 3 distinct companies to investigate which factors influence one's perceived trustworthiness of others in the project and what the impact of such factors is on project performance measures. We present our initial findings in this paper.

II. METHOD

Our empirical study consisted of the application of a survey instrument that listed 30 trust factors obtained from a systematic literature review (refer to Table 1 for the complete list) and 7 project performance metrics obtained through interviews with experienced project managers of distributed software projects. The metrics are: cost deviation, effort deviation, productivity, requirements completion, requirements volatility, product quality, and time adherence. Appendices A and B present the definitions of the factors and metrics, respectively. The instrument was applied either in person or over the telephone, and its application lasted 30 minutes on average. To answer the instrument, each participant was instructed to indicate, based on his or her own overall experience working with distributed teams, which of the 30 previously defined trust factors are associated with each one of the 7 defined metrics. The following types of association could be indicated: (i) negative (-), if the trust factor negatively influences the performance metric; (ii) neutral (0), if the influence can be positive or negative depending on the situation; (iii) positive (+), if there is a positive influence; and (iv) N/A (NA), if the respondent believes the factor does not apply. We understand that perceptions about the factors can go either way: for example, availability might positively affect one's trust in a colleague, but lack of availability might have a negative influence. Therefore, we asked the participants to indicate the category they believe best describes their perception.
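The percentages reported in Table 1 are the share of the predominant answer for each factor/metric pair over all respondents. The helper below is our reconstruction of that tally, not the authors' script, and the sample answers are hypothetical:

```python
from collections import Counter

def predominant_share(answers):
    """Return (predominant_answer, rounded percentage over all answers).

    Each answer is one of "-", "0", "+", "NA", one per respondent.
    """
    counts = Counter(answers)
    answer, count = counts.most_common(1)[0]
    return answer, round(100 * count / len(answers))


# Hypothetical responses for one factor/metric pair:
# 20 positive, 8 neutral, 3 negative, 2 N/A out of 33 respondents.
sample = ["+"] * 20 + ["0"] * 8 + ["-"] * 3 + ["NA"] * 2
```

With this sample, `predominant_share(sample)` yields `("+", 61)`, the same style of figure reported per cell in Table 1.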
A total of 33 participants from 4 distinct projects accepted the invitation to participate. Two projects were from a large US-based IT manufacturing company (6 members and 67% response rate for project 1; 6 members and 50% for project 2), one from a large Brazilian company with customers located in several states and the US (8 members and 75% response rate), and one from a local IT company with offices in 4 states (30 members and 67% response rate). Respondents were on average between 31-40 years old, with 11.5 years of experience in software engineering and 6.5 years of experience working with distributed teams. The participants' roles were distributed as follows: 3 business analysts, 6 system analysts, 9 developers, 7 test engineers, 4 testers, 2 build engineers, and 2 managers.

III. INITIAL FINDINGS

Simple statistical analysis was conducted to describe the findings. The percentages in Table 1 indicate the relative number of predominant answers for each trust factor over the total of 33 respondents. For instance, line 1 indicates that the adoption of patterns factor was considered to influence the establishment of trust, and the participants believe that it positively affects each of the listed performance metrics, some more than others. Hence, this finding suggests that the more distributed teams adopt patterns to guide work, the more they rely on their colleagues, benefiting the cost of the project (CD), helping achieve the estimated effort (ED), increasing productivity (P), facilitating completion of the agreed requirements (RC), reducing requirements volatility (RV), increasing quality (PQ), and staying on time (TA). Figure 1 summarizes the factors that positively influence the development of trust and, as a consequence, positively affect the defined performance measures. We can see that for the collaboration, competence, expertise, leadership, and work experience factors, about three-quarters of the respondents agree in their opinion for each of the metrics.
For the other 5 factors, namely adoption of patterns, availability, F2F communication, monitoring, and prior work experience on average half of the respondents reached an agreement. This percentage is still significant considering this is a step of a larger investigation. <table> <thead> <tr> <th>Trust factors</th> <th>CD</th> <th>ED</th> <th>P</th> <th>RC</th> <th>RV</th> <th>PQ</th> <th>TA</th> </tr> </thead> <tbody> <tr> <td>Adoption of patterns</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Availability</td> <td>61%</td> <td>55%</td> <td>67%</td> <td>52%</td> <td>45%</td> <td>85%</td> <td>55%</td> </tr> <tr> <td>Betrayal</td> <td>45%</td> <td>52%</td> <td>55%</td> <td>58%</td> <td>39%</td> <td>64%</td> <td>61%</td> </tr> <tr> <td>Collaboration</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Comm. media</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Competence</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Culture</td> <td>85%</td> <td>79%</td> <td>76%</td> <td>79%</td> <td>64%</td> <td>82%</td> <td>82%</td> </tr> <tr> <td>Expertise</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>F2F comm.</td> <td>76%</td> <td>67%</td> <td>82%</td> <td>88%</td> <td>67%</td> <td>76%</td> <td>70%</td> </tr> <tr> <td>Fear of job loss</td> <td>52%</td> <td>39%</td> <td>48%</td> <td>42%</td> <td>42%</td> <td>42%</td> <td>36%</td> </tr> <tr> <td>Freq. 
of mtgs.</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Geographical distance</td> <td>48%</td> <td>36%</td> <td>39%</td> <td>42%</td> <td>36%</td> <td>39%</td> <td>42%</td> </tr> <tr> <td>Homophily</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Informal com.</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Intuition</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Language</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Leadership</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Monitoring</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Prior work experience</td> <td>61%</td> <td>61%</td> <td>58%</td> <td>52%</td> <td>45%</td> <td>48%</td> <td>52%</td> </tr> <tr> <td>Project size</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Project type</td> <td>48%</td> <td>58%</td> <td>52%</td> <td>64%</td> <td>48%</td> <td>67%</td> <td>73%</td> </tr> <tr> <td>Project changes</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Reputation</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Response time</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Role</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Shared personal info</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Team diversity</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Team size</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> 
<td>+</td> </tr> <tr> <td>Virtual comm.</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> <tr> <td>Work experience</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> <td>+</td> </tr> </tbody> </table>

TABLE 1. RESULTS

IV. DISCUSSION

We have found that about two-thirds of the factors identified from the literature influence the development of trust and interfere with its growth according to the situation the team members are in. For instance, cultural differences might facilitate or challenge the development of trust towards remote colleagues according to the participants' perceptions of culture. Three of these factors (project type, team diversity, and team size) were suggested by Al-Ani and Redmiles [1] as factors that might have a negative influence on the development of trust. Our initial findings suggest that such factors can also play a positive role in certain situations. Further qualitative exploration is necessary to gain a better understanding of which situations lead to such findings. We also found that about one-third of the factors positively affect one's perceived trustworthiness of others in the project, and that this is positive for the achievement of the project's defined metrics. Leadership, the fourth factor suggested in Al-Ani and Redmiles' work [1], is among these factors, which corroborates their findings. Only two of the factors, betrayal and project changes, are considered to have a negative impact. The act of betraying someone is somewhat expected to negatively impact a relationship, just as frequent changes to baseline versions of work products (e.g., requirements or architecture) are known to affect project outcomes (e.g., [11]).
More specifically, the results indicate that the productivity and quality metrics are positively or negatively impacted by most of the factors that lead to trust (or lack of it), corroborating to a certain extent Moe and Šmite's previous findings [10]. Our preliminary findings help confirm that trust building is a dynamic process and that it impacts project outcomes in distributed software projects. In addition, the large number of factors indicated as having neutral influence suggests that the establishment of trust is context-specific.

V. FINAL REMARKS

In this paper, we present the initial findings of our survey of 33 participants from 4 distinct projects of 3 large IT companies. The purpose of the survey is to identify which factors positively or negatively influence one's perceived trustworthiness of others in the project, and the impact of such factors on specific project performance measures. We found that our participants considered the majority of the factors neutral, meaning that they might influence the development of trust positively or negatively and, as a consequence, might impact the achievement of the project performance metrics positively or negatively. One-third of the factors are considered to have a positive influence, and only two of them a negative influence. The number of participants and their concentration in the same location (a state in the South of Brazil) are limitations of this study. However, this is a relevant first step towards developing a theoretical model of the impact of trust on the performance of distributed software projects. We expect such a model to be helpful to researchers as a reference framework to further investigate the topic, and to practitioners as a guide for improvement actions aiming to better define and manage distributed software teams. Next, we will replicate the study at a larger scale and consider global distribution to minimize the influence of cultural and time zone differences on the model.
We will then be able to apply more refined statistical methods to confirm the significance of our findings.

ACKNOWLEDGMENT

This study is part of a major research investigation under development at the Mundudos Research Group, coordinated by Professor Dr. Rafael Prikladnicki. We thank him for his support and valuable insights. This research is partially financed by Dell Computers of Brazil Ltda with funds from the Brazilian Federal Law No. 8.248/91.

REFERENCES

A. DEFINITION OF THE TRUST FACTORS

Adoption of patterns: To adopt certain standards to support the work to be done, such as CMMI, ISO, BPMI, ITIL, among others.
Availability: Being a handy person, always present and available to answer questions and to assist coworkers, in person or virtually.
Betrayal: To be false or disloyal; to reveal something against one's desire or will.
Collaboration: Cooperation between project members.
Communication media: Technological means employed to establish communication. E.g.: e-mail, chat, phone, etc.
Competence: The quality of being competent; adequacy; possession of required skill, knowledge, qualification, or capacity.
Culture: The culture of a country, which may be different from another country's, with customs that can conflict. E.g.: the culture of India is different from the culture of Brazil.
Expertise: To have in-depth knowledge about a certain topic, technology or business domain.
F2F communication: To communicate with others face to face.
Fear of job loss: One's fear of losing the job to a remote colleague; to believe that others might want to take away one's role in the project.
Frequency of meetings: The frequency at which meetings are set up.
Geographical distance: Physical distance between project teams.
Homophily: Similarity among team members. E.g.: similar age or gender.
Informal communication: Unplanned meetings. E.g.: discussing a project issue in the coffee area or while smoking a cigarette.
Intuition: Direct perception of truth or fact, independent of any reasoning process; immediate apprehension. E.g.: to sympathize with a colleague even without any personal contact.
Language: The language spoken by the project members.
Leadership: The function of a leader, a person who guides a group.
Monitoring: Constant monitoring of the progress of team members.
Prior work experience: The time one has previously worked with a colleague, or knowing a colleague from working together in past projects.
Project size: Number of people allocated to work on a project.
Project type: The classification of a project according to its main goal. E.g.: improvement, maintenance, new development, innovation.
Project changes: Modifications that occur in the project after a baseline is approved. E.g.: changes to the defined scope.
Reputation: A favorable and publicly recognized name or standing for merit, achievement, reliability.
Response time: Delay between the time something is requested and the time it is resolved.
Role: The role a person plays in the project. E.g.: project manager.
Shared personal information: Sharing personal information with coworkers in order to foster interpersonal relationships.
Team diversity: Different profiles within the same team. E.g.: having shy and outgoing members, among other characteristics, in a single team.
Team size: Number of members of a team.
Virtual communication: Type of communication characterized by not being face-to-face and supported through technological means.
Years of professional experience: The amount of professional experience a team member has.

B. DEFINITION OF THE METRICS

Cost deviation: The percentage ratio of actual cost to estimated cost.
Effort deviation: Deviation of the number of man-hours compared with the planned effort.
Productivity: Number of lines of code per developer.
Requirements completion: Number of requirements completed compared to the list of requirements agreed in the project scope.
Requirements volatility: Number of additional requirements added to the initial project scope. Product quality: Measurements of the quality of the system being delivered. Eg., the percentage of defects found in each testing phase. Time adherence: Adherence to the project schedule.
FASTER

Programming Languages and Translators (4115)

Anand Rajan (asr2171)

August 22nd, 2014

# Table of Contents

1. Introduction
2. Language Tutorial
   2.1 How to Compile and Run FASTER
3. Language Reference Manual
   3.1 Lexical Conventions
   3.2 Objects and Lvalues
   3.3 Expressions and Operators
   3.4 Statements
   3.5 Functions
   3.6 Program Structure and Scope
   3.7 Example
4. Project Plan
   4.1 Project timeline
   4.2 Software Development Environment
   4.3 Processes
5. Architecture
   5.1 Overview
   5.2 Scanning, parsing and AST
   5.3 SAST Creation
   5.4 Compiler / Byte code generation
   5.5 Generator
6. Test Plan
   6.1 Unit testing
   6.2 Regression testing
   6.3 Automation
7. Lessons Learned
8. APPENDIX

1. Introduction

FASTER is a programming language that focuses on making parallel programs based on collections easier to write. Given a collection (of either ints or structs), a user can use a concise syntax to parallelize operations over this array/collection.
The language has some similarity to C; the similarities and differences are described in the following sections.

2. Language Tutorial

Here is a quick example of a FASTER program:

```c
main() {
    int i;
    i = 0;
    print("my first FASTER program");
    print(i);
}
```

As you can see, it looks similar enough to C. Spicing things up a bit further shows some different features of the language:

```c
main() {
    int i;
    int b;
    struct test2 arraytest[40];
    struct test2 arraytest1[40];
    struct test2 x;
    int a[40];
    arraytest[1].a = 1;
    for (i = 0; i < 40; i = i + 1) {
        arraytest[i].a = i;
        arraytest[i].b = i;
    }
    arraytest: {a:
        print(a.b);
        print(" ");
    };
}
```

As one can see, at the heart of FASTER is this parallel notation for arrays. The notation is neat and concise, and it takes the strain of writing the loop away from the user.

2.1 How to Compile and Run FASTER

FASTER programs use the file extension .fs. The OCaml executable that is generated will be named faster. To interact with FASTER you have multiple options:

<table>
<thead>
<tr><th>Argument</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>-b &lt; file_name</td><td>Generates the byte code of the program</td></tr>
<tr><td>&lt; file_name</td><td>Prints the C code to standard output and also writes it to a faster_generated.c file</td></tr>
<tr><td>make rtest param=filename</td><td>Generates the C code and then executes it</td></tr>
</tbody>
</table>

One can also run gcc -fopenmp faster_generated.c to compile the generated C file oneself and check for any issues.
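For intuition, the contents of faster_generated.c for the first tutorial program plausibly look something like the sketch below. This is hand-written, not actual generator output; the function name faster_tutorial_main and the mapping of FASTER's print onto C's printf are illustrative assumptions.

```c
#include <stdio.h>

/* Hand-written sketch of the C that the hello-world FASTER program
   might generate. faster_tutorial_main is an illustrative name, and
   print is assumed to map onto printf; the real generator's output
   may differ. */
int faster_tutorial_main(void) {
    int i;
    i = 0;
    printf("my first FASTER program\n");
    printf("%d\n", i);
    return i; /* returned only so the sketch is easy to check */
}
```

Compiling a sketch like this with plain gcc is exactly what the `gcc -fopenmp faster_generated.c` step above does with the real generator output.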
3.1 Lexical Conventions

There are five kinds of tokens: identifiers, keywords, constants, expression operators, and other separators. In general, blanks, tabs, newlines, and comments as described below are ignored except that they serve as token separators. At least one of these characters is required to separate otherwise adjacent identifiers, constants, and certain operator pairs.

3.0.1 Comments

The characters /* introduce a comment, which terminates with the characters */. FASTER does not support comment nesting.

3.0.2 Identifiers

An identifier is a sequence of letters and digits; the first character must be alphabetic. The underscore "_" character counts as an alphabetic character. Upper and lower case characters are distinct, so foo and FOO are considered different identifiers.

3.0.3 Constants

3.0.3.1 Integer Constants

An integer constant is a sequence of digits, for example 23. FASTER does not allow negative integer constants.

3.0.3.2 String Constants

A string is a sequence of zero or more characters surrounded by double quotes " ". String constants in FASTER can only be used as a parameter to the print method. They play no other role in terms of declaration, assignment, or manipulation of a string.

3.0.4 Primitive Types

The only primitive type in FASTER that can be used and manipulated is the int. As mentioned previously, string constants are only used to display strings when debugging. You declare an int with the keyword int followed by the name of the identifier and a semicolon. So, for example, declaring and then assigning to an identifier x would be `int x;` followed by `x = 5;`.

3.0.5 Structures

A structure is a programmer-defined data type made up of other primitive types (only ints in this case, as the language only supports ints). Nesting arrays or other structures inside structures is not allowed in FASTER.
3.0.6 Defining Structures

You can define a structure using the struct keyword followed by the declarations of the structure's members enclosed in braces, after which a semicolon is necessary. You declare each member of the structure just as you normally declare a variable: data type, then variable name, followed by a semicolon. You should also include a name for your structure between the struct keyword and the opening brace. In the following example a struct stock is defined with two primitive members, price and quantity.

```
struct stock {
    int price;
    int quantity;
};
```

3.0.7 Declaring Struct Variables

To declare struct variables in your code, you use the struct keyword followed by the struct name you defined (stock in the example above), followed by an identifier of your choosing. An example is below.

```
int main() {
    struct stock s1;
    s1.price = 20;
}
```

In the example above a variable named s1 is defined, of type struct stock. The member expression used to reach the price member is shown above; this is discussed in greater detail in the Member Access Expressions section (see 3.2.1).

3.0.8 Arrays

An array is a data structure that lets you store one or more elements consecutively in memory. In FASTER, array elements are indexed beginning at position zero, not one.

3.0.9 Declaring Arrays

You declare an array by specifying the data type of its elements, its name, and the number of elements it can store. In FASTER, arrays work on the primitive types and also on any structs that you define.

```c
int main() {
    int test_array[10];
    struct stock stocks[100];
}
```

In the above example you declare an int array that contains 10 elements. Then, continuing the stock example from the beginning, we declare an array of stock structures.
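Since FASTER's struct and array syntax deliberately tracks C, the declarations above behave just as they would in the plain-C sketch below (first_price is an illustrative helper written for this note, not part of FASTER or its generated code):

```c
/* Plain-C mirror of the FASTER struct and array declarations above. */
struct stock {
    int price;
    int quantity;
};

/* Illustrative helper: declare an array of structs, set the first
   element's price, and read it back -- the same shape of code the
   FASTER examples use with stocks[0].price. */
int first_price(void) {
    struct stock stocks[100];
    stocks[0].price = 100;
    return stocks[0].price;
}
```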
3.0.10 Accessing Array Elements

You can access the elements of an array by specifying the array name followed by the element index in square brackets. Remember that array elements are numbered starting with zero. An example is below.

```c
int main() {
    struct stock stocks[100];
    stocks[0].price = 100;
}
```

In the stock array defined above, the first element, which is of type stock, is accessed and its price member is set to 100.

3.1 Objects and Lvalues

An object is a region of storage that can be manipulated; an lvalue is an expression referring to an object. An obvious example of an lvalue expression is an identifier. The name lvalue comes from the assignment expression "E1 = E2", in which the left operand E1 must be an lvalue expression. These terms will come up again in the following sections.

3.2 Expressions and Operators

An expression consists of at least one operand and zero or more operators. Some examples are 3 or 2+2. In this section, the different types of expressions and operators are described. One thing to note is that a parenthesized expression is a primary expression whose type and value are identical to those of the unadorned expression. The presence of parentheses does not affect whether the expression is an lvalue.

3.2.1 Member Access Expressions

You can use the member access operator . to access the members of a structure. You put the name of the structure variable on the left side of the operator and the name of the member on the right side.

```c
int main() {
    struct stock s1;
    s1.price = 20;
}
```

In the example above (continuing our stock example), the price of s1 is set to 20 via a member access expression.

3.2.2 Array Subscripts

You can access array elements by specifying the name of the array and the array subscript enclosed in brackets. The example below shows how the first element (zeroth index) of the array stocks is accessed and used.
```c
int main() {
    struct stock stocks[100];
    stocks[0].price = 100;
}
```

3.2.3 Multiplicative, Divide and Mod Operators

The multiplicative operators *, /, and the mod operator % group from left to right.

3.2.3.1 Expression * Expression

The binary operator * indicates multiplication. If both operands are of type int, the result is an int. No other combinations are allowed.

3.2.3.2 Expression / Expression

The binary operator / indicates division. The same considerations as for multiplication apply here.

3.2.4 Additive Operators

The additive operators + and - group from left to right.

3.2.4.1 Expression + Expression

The result is the sum of the expressions, so the same type-checking rules that apply to multiplication apply here too.

3.2.4.2 Expression - Expression

The binary operator - indicates subtraction. The same checking rules as for multiplication apply here.

3.2.5 Relational Operators

The relational operators < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to) group from left to right. They too apply only to ints, and the result of these relations is 0 or 1.

3.2.6 Equality Operators

The equality operators == (equal to) and != (not equal to) group from left to right and again follow the rules of the multiplicative operators, in that only int types are allowed. This may seem overly restrictive, but I wanted them to follow the same pattern as the rest of the binary operators. Also, without the notion of pointers in the language, the notion of equality may get muddled, so I went with this restriction.

3.2.7 Assignment Operators

The assignment operator = groups from right to left (right associative). Taking `lvalue = expression` as the general form, it requires an lvalue as its left operand, and the lvalue's type must match the type of the assigned expression.
The value of the expression replaces that of the object referred to by the lvalue. All types in the system can be assigned to one another, including ints, arrays, and structs.

3.2.8 Operator Precedence

The following table describes the precedence of the operators explained in detail above:

<table>
<thead>
<tr><th>Precedence</th><th>Expressions / Operators</th></tr>
</thead>
<tbody>
<tr><td>Highest (left associative)</td><td>Function call f(arg, arg, ...), dot operator (.), array subscripts ([]), and parentheses ()</td></tr>
<tr><td>(left associative)</td><td>* (multiply), / (divide), % (mod)</td></tr>
<tr><td>(left associative)</td><td>+ - (add, subtract)</td></tr>
<tr><td>(left associative)</td><td>&lt;, &gt;, &lt;=, &gt;= (less than, greater than, less than or equal to, greater than or equal to)</td></tr>
<tr><td>(left associative)</td><td>== != (l == r, l != r)</td></tr>
<tr><td>Lowest (right associative)</td><td>Assign (l = r)</td></tr>
</tbody>
</table>

3.3 Statements

3.3.1 Expression Statements

You can turn any expression into a statement by adding a semicolon to the end of the expression. Here are some examples:

```
5;
2 + 2;
10 >= 9;
```

Expression statements are only useful when they have some kind of side effect, such as storing a value, calling a function, or (this is esoteric) causing a fault in the program. Here are some more useful examples:

```
int y;
int x;
y = x + 25;
```

3.3.2 The If Statement

You can use the if statement to conditionally execute part of your program, based on the truth value of an expression. Here is the general form of the if statement:

```
if (test)
    then-statement
else
    else-statement
```

If test evaluates to true, then-statement is executed and else-statement is not. If test evaluates to false, else-statement is executed and then-statement is not. The else clause is optional.
3.3.3 For Statement

The for statement is a loop statement whose structure allows easy variable initialization, expression testing, and variable modification. It is very convenient for making counter-controlled loops. Here is the general form of the for statement:

```plaintext
for (initialize; test; step)
    statement
```

The for statement first evaluates the expression initialize. Then it evaluates the expression test. If test is false, the loop ends and the program resumes after statement. Otherwise, if test is true, statement is executed. Finally, step is evaluated, and the next iteration of the loop begins by evaluating test again. More often than not, the initialize step assigns a value to a variable that is then used as a counter.

3.3.4 Parallel Array Access

The generalized form of the parallel array access syntax is as follows:

```plaintext
array: {id:
    statements modifying id;
};
```

The syntax is an array variable followed by a colon, then a left curly brace, then an identifier that stands for each element of the array. After another colon come the statements that (potentially) modify the identifier, ending with a closing curly brace. The following program is a good example that encapsulates most of the constructs discussed in this language reference manual.

```plaintext
struct stock {
    int price;
    int quantity;
    int calc;
};
```

```c
main() {
    int i;
    struct stock stocks[100];
    for (i = 0; i < 100; i = i + 1) {
        stocks[i].price = i * 3;
        stocks[i].quantity = i * 7;
    }
    stocks: {s:
        s.calc = s.price * s.quantity * 17 * 34 * 56 - 45 + 123;
        print(s.calc);
    }
}
```

Here the stocks variable, an array of stock structs, is declared first. The elements of stocks are then initialized to different values. Then, using the parallel array syntax, the calc member of each element is computed from the expression above (s.price * s.quantity * 17 ...).
This is then printed out using the print function. So, without having to think about OpenMP or include any other libraries, the user has this array syntax available.

### 3.3.5 While

The while statement is a loop statement with an exit test at the beginning of the loop. Here is the general form of the while statement:

```
while (test)
    statement
```

The while statement first evaluates test. If test evaluates to true, statement is executed, and then test is evaluated again. The statement continues to execute repeatedly as long as test is true after each execution of statement.

### 3.4 Functions

You can write functions to separate parts of your program into distinct subprocedures. To write a function, you must at least create a function definition. Every program requires at least one function, called main; that is where the program's execution begins. There are a couple of built-in functions, print and exit(). print prints integers and strings to the standard output, whereas exit() calls the exit system call and exits the program.

### 3.4.1 Function Declaration/Definition

You declare and define a function by specifying the name of the function, a list of parameters, and the function's return type, followed by the body of the function. Here is the general form:

```
return-type function-name (parameter-list)
{
    function-body
}
```

*return-type* indicates the data type of the value returned by the function. You can declare a function that does not return anything by using the return type void. Unfortunately, as of now only the void type is implemented in FASTER, so your functions do not return anything.

*function-name* can be any valid identifier.

*parameter-list* consists of zero or more parameters, separated by commas. A typical parameter consists of a data type and an optional name for the parameter.

A *function-body* is simply a series of statements.
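Looking back at the parallel array access block of section 3.3.4, its semantics can be pictured with the hand-written C/OpenMP analogue below. This is a sketch of the kind of code the C-generating backend plausibly emits, not actual generator output; compute_calc is an illustrative name, and the multiplier chain from the report's example is simplified to a single product.

```c
struct stock { int price; int quantity; int calc; };

/* C/OpenMP analogue of the FASTER block
       stocks: {s: s.calc = s.price * s.quantity; };
   Each iteration touches only its own element, so the loop can run in
   parallel. Compiled without -fopenmp, the pragma is ignored and the
   loop simply runs sequentially, producing the same result. */
void compute_calc(struct stock *stocks, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i = i + 1) {
        stocks[i].calc = stocks[i].price * stocks[i].quantity;
    }
}
```

This is why the FASTER user never has to write the loop or the pragma: the block body becomes the loop body, and the block's identifier stands in for the indexed element.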
### 3.4.2 Calling Functions

You can call a function by using its name and supplying any needed parameters. Here is the general form of a function call:

```
function-name (parameters)
```

### 3.5 Program Structure and Scope

A FASTER program must exist solely in a single source file. Local variables are visible only in the function in which they are defined, and duplicate definitions of local variables are errors. The only exception to this scoping concerns parallel array access: local variables of the function are not available inside the parallel access block, and the parallel access loop variable is not available to the function outside it.

### 3.6 Example

Finally, here is an example program, a parallel linear search, which continues our stock example. We have an array of stocks that is searched in parallel for the element whose price is 379, without the user needing to think about parallelism at all.

```c
void main() {
    int i;
    struct stock stocks[1000];
    for (i = 0; i < 1000; i = i + 1) {
        stocks[i].price = i;
        stocks[i].quantity = i * 17;
    }
    stocks: {s:
        if (s.price == 379) {
            print(s.quantity);
            print("\n");
            exit();
        }
    };
}

struct stock {
    int price;
    int quantity;
    int calc;
};
```

Another example is parallel prime factorization, shown below:

```c
void main() {
    int b[99987147];
    int i;
    b[0] = 2;
    b[1] = 2;
    for (i = 0; i < 99987147; i = i + 1) {
        if (i > 1) {
            b[i] = i;
        }
    }
    b: {b1:
        if (99987147 % b1 == 0) {
            print(b1);
            print(" is a prime factor \n");
        }
    };
}
```

4.
Project Plan

4.1 Project timeline

<table>
<thead>
<tr><th>Date</th><th>Milestone</th></tr>
</thead>
<tbody>
<tr><td>June 11th</td><td>Initial proposal</td></tr>
<tr><td>July 2nd</td><td>LRM due</td></tr>
<tr><td>July 19th</td><td>Began to dig deeper and try to understand OCaml</td></tr>
<tr><td>July 25th</td><td>Understood MicroC</td></tr>
<tr><td>July 29th</td><td>Started to work on parser.mly, scanner.mll and ast.ml</td></tr>
<tr><td>Aug 2nd</td><td>AST, parser and scanner ready</td></tr>
<tr><td>Aug 2nd</td><td>Began to work on semantic.ml</td></tr>
<tr><td>Aug 9th</td><td>semantic.ml done; began work on compile.ml and byte code generation, while continuing to work on test cases</td></tr>
<tr><td>August 16th</td><td>Code generation working; realized some features needed revising after the professor's advice, and attempted to revise them</td></tr>
<tr><td>August 18th</td><td>Work on final report</td></tr>
</tbody>
</table>

4.2 Software Development Environment

a. Development environment: vim in a Linux VM.
b. Programming environment: OCaml 3.12.1.
c. GCC compiler on Linux to compile the generated C code.

4.3 Processes followed

1. Version control: used P4V, backed by an NEC repository at work, which is backed up to tape at different intervals of the day.
2. Coding style for OCaml:
   a. Use vim indentation when there are nested if statements in a function.
   b. Try to break down complex functions into smaller functions.
   c. Try to keep one statement per line.
   d. Each code block following let ... in should be indented.

5 Architecture

5.1 Overview

Overall, FASTER follows a traditional compiler model: a lexical analyzer and parser at the front end, followed by generation of a semantically checked and typed abstract syntax tree (SAST) from an abstract syntax tree (AST), and finally bytecode generation.
This bytecode generation is then followed by the code generator, which takes the bytecodes and converts them into C code. GCC with the -fopenmp flag is then used to compile the generated C code into a FASTER executable.

5.2 Scanning, parsing and AST

The scanner creates lexical tokens from the stream of input characters in the program file. These tokens are then interpreted by the parser according to the precedence rules of the FASTER language. The parser's main goal is to organize the tokens of the program into two record lists: function declarations and structure declarations. Each record holds the respective declarations along with the name and type information of the data structures. Specific to the function declarations record is the creation of an AST of functions from groups of statements, statements evaluating the results of expressions, and expressions formed from operations and assignments of variables, references, and constants. The code snippet below shows the expressions of the FASTER language.

```
type expr =
    Literal of int
  | Id of string
  | Binop of expr * op * expr
  | Assign of expr * expr
  | Call of string * expr list
  | Array of expr * expr
  | MemberAccess of expr * op * string
  | Noexpr
  | String of string
```

5.3 SAST Creation

For each function, the SAST component creates a series of function, structure, and local indexes which hold information about the types and names of the expressions at the leaves of the AST. Starting with the leaves, each function, reference, variable, or constant is assigned a type. Expressions using these values are then assigned a type based on the operation performed. As each node in the AST is assigned a type, a series of type checks is performed based on the operation being applied. A summary of the checks is given below.
<table>
<thead>
<tr><th>Check</th><th>Error condition</th></tr>
</thead>
<tbody>
<tr><td>If conditions</td><td>The conditional expression must be an int</td></tr>
<tr><td>Variable assignments</td><td>The left and right hand sides must match</td></tr>
<tr><td>Variable declarations</td><td>Variables cannot be declared twice</td></tr>
<tr><td>Structs/Functions</td><td>Structs cannot have the same name as functions, and neither function names nor struct names may be repeated</td></tr>
<tr><td>Binary operations</td><td>The left and right hand sides have to be integers</td></tr>
<tr><td>Function arguments</td><td>The function arguments must type-match</td></tr>
<tr><td>Function return</td><td>The return arguments must type-match</td></tr>
</tbody>
</table>

5.4 Compiler / Byte code generation

The main function of the compiler is to convert the AST into a flattened list of bytecode nodes. The advantage of the flattened bytecode list is that the code generator can work from it alone to generate code.

5.5 Generator

The main job of the generator is to generate the C code from the bytecodes, looking at each bytecode statement in turn.

6. Test Plan

6.1 Unit testing

Unit testing was done at frequent intervals; each unit was tested rigorously with multiple cases as it was built. The scanner, parser, and AST were tested first, and in the later phase the semantic checker, the compiler, and the code generator were tested.

6.2 Regression testing

While changing features, the tests were a useful barometer of regression, showing how much impact last-minute changes had.

6.3 Automation

A shell script (provided by our lecturer) was used to automate the test cases at each stage of the design. The test cases are listed below; the ones marked fail are actually expected to fail.
fail-test-duplicate-struct-name.fs
fail-test-param1.fs
test-argscrazy.fs
test-arith1.fs
test-arith2.fs
test-array-basic.fs
test-assign1.fs
test-assign2.fs
test-assign3.fs
test-basic-struct.fs
test-crap.fs
test-duplicate-var-name.fs
test-easy.fs
test-failarg1.fs
test-failarg2.fs
test-failarg3.fs
test-for1.fs
test-for2.fs
test-for3.fs
test-hello.fs
test-if1.fs
test-if2.fs
test-if3.fs
test-mod1.fs
test-multiple-structs.fs
test-multirec.fs
test-ops1.fs
test-param1.fs
test-param2.fs
test-param3.fs
test-param4.fs
test-parl1.fs
test-parl2.fs
test-parl3.fs
test-parl4.fs
test-parl5.fs
test-parl6.fs
test-parlLinearSearch.fs
test-primefactor.fs
test-prlprimefactor.fs
test-rec.fs
test-var1.fs
test-while1.fs
test-while2.fs
test-while3.fs

7. Lessons Learned

- OCaml has a steep learning curve. I started the project somewhat late, so it took me some time to grasp what was going on well enough to do more than the very basics.
- As a CVN student, I felt it was harder to do this pretty big project alone, especially when stuck biting my head over some OCaml compiler error-message oddity (funnily enough). Sometimes this could eat huge amounts of time, and it would have been nice to talk things over with someone else. It also felt hard for one person to do everything, from writing the report to learning OCaml. So, CVN students, if you read this and have a choice, get in a group. OCaml is not a language you want to get stuck on by yourself, especially with limited help available online (that is, if you can work with a person remotely).
- Try not to have as grandiose a vision as I initially had of targeting both OpenMP and OpenMPI; it is infeasible.
- Functional programming feels truly amenable to creating compilers, with its pattern matching, the tooling available around it, and the whole transformation paradigm it is associated with.
- Overall, despite somewhat of a struggle, it was well worth it to see what functional programming is about and what a compiler does, and to gain a deeper understanding of computer science.

8. APPENDIX
{"Source-Url": "http://www.cs.columbia.edu/~sedwards/classes/2014/w4115-summer-cvn/reports/FASTER.pdf", "len_cl100k_base": 6579, "olmocr-version": "0.1.49", "pdf-total-pages": 26, "total-fallback-pages": 0, "total-input-tokens": 42497, "total-output-tokens": 7525, "length": "2e12", "weborganizer": {"__label__adult": 0.00040602684020996094, "__label__art_design": 0.00034546852111816406, "__label__crime_law": 0.0001856088638305664, "__label__education_jobs": 0.0012998580932617188, "__label__entertainment": 7.081031799316406e-05, "__label__fashion_beauty": 0.00013124942779541016, "__label__finance_business": 0.00017642974853515625, "__label__food_dining": 0.0004203319549560547, "__label__games": 0.0006985664367675781, "__label__hardware": 0.0009860992431640625, "__label__health": 0.00030541419982910156, "__label__history": 0.00018787384033203125, "__label__home_hobbies": 0.00010859966278076172, "__label__industrial": 0.000324249267578125, "__label__literature": 0.00026917457580566406, "__label__politics": 0.00017321109771728516, "__label__religion": 0.0004470348358154297, "__label__science_tech": 0.0036525726318359375, "__label__social_life": 9.745359420776369e-05, "__label__software": 0.0029754638671875, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.0003826618194580078, "__label__transportation": 0.0004949569702148438, "__label__travel": 0.00020635128021240232}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30161, 0.03627]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30161, 0.63354]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30161, 0.84156]], "google_gemma-3-12b-it_contains_pii": [[0, 107, false], [107, 3605, null], [3605, 3980, null], [3980, 4546, null], [4546, 4767, null], [4767, 5844, null], [5844, 7515, null], [7515, 8938, null], [8938, 10503, null], [10503, 12056, null], [12056, 13822, null], 
[13822, 15648, null], [15648, 17316, null], [17316, 18855, null], [18855, 20587, null], [20587, 21442, null], [21442, 21653, null], [21653, 23500, null], [23500, 24914, null], [24914, 26899, null], [26899, 27027, null], [27027, 28290, null], [28290, 28735, null], [28735, 28810, null], [28810, 30150, null], [30150, 30161, null]], "google_gemma-3-12b-it_is_public_document": [[0, 107, true], [107, 3605, null], [3605, 3980, null], [3980, 4546, null], [4546, 4767, null], [4767, 5844, null], [5844, 7515, null], [7515, 8938, null], [8938, 10503, null], [10503, 12056, null], [12056, 13822, null], [13822, 15648, null], [15648, 17316, null], [17316, 18855, null], [18855, 20587, null], [20587, 21442, null], [21442, 21653, null], [21653, 23500, null], [23500, 24914, null], [24914, 26899, null], [26899, 27027, null], [27027, 28290, null], [28290, 28735, null], [28735, 28810, null], [28810, 30150, null], [30150, 30161, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30161, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30161, null]], "pdf_page_numbers": [[0, 107, 1], [107, 3605, 2], [3605, 3980, 3], [3980, 4546, 4], [4546, 4767, 5], [4767, 5844, 6], [5844, 7515, 7], [7515, 8938, 8], [8938, 10503, 9], [10503, 
12056, 10], [12056, 13822, 11], [13822, 15648, 12], [15648, 17316, 13], [17316, 18855, 14], [18855, 20587, 15], [20587, 21442, 16], [21442, 21653, 17], [21653, 23500, 18], [23500, 24914, 19], [24914, 26899, 20], [26899, 27027, 21], [27027, 28290, 22], [28290, 28735, 23], [28735, 28810, 24], [28810, 30150, 25], [30150, 30161, 26]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30161, 0.12555]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
9067ab9a16eb88368b491932d087175623500f75
[REMOVED]
{"Source-Url": "https://rd.springer.com/content/pdf/10.1007%2F978-3-540-39857-8_44.pdf", "len_cl100k_base": 6182, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 27773, "total-output-tokens": 7872, "length": "2e12", "weborganizer": {"__label__adult": 0.0004467964172363281, "__label__art_design": 0.0006833076477050781, "__label__crime_law": 0.0006175041198730469, "__label__education_jobs": 0.00643157958984375, "__label__entertainment": 0.00014662742614746094, "__label__fashion_beauty": 0.00036454200744628906, "__label__finance_business": 0.0005869865417480469, "__label__food_dining": 0.000518798828125, "__label__games": 0.0008006095886230469, "__label__hardware": 0.0015106201171875, "__label__health": 0.00125885009765625, "__label__history": 0.0005083084106445312, "__label__home_hobbies": 0.00023424625396728516, "__label__industrial": 0.00104522705078125, "__label__literature": 0.0005474090576171875, "__label__politics": 0.0005183219909667969, "__label__religion": 0.0006856918334960938, "__label__science_tech": 0.46142578125, "__label__social_life": 0.00021386146545410156, "__label__software": 0.0171356201171875, "__label__software_dev": 0.5029296875, "__label__sports_fitness": 0.00036454200744628906, "__label__transportation": 0.000652313232421875, "__label__travel": 0.00027179718017578125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29681, 0.03104]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29681, 0.70453]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29681, 0.90378]], "google_gemma-3-12b-it_contains_pii": [[0, 2681, false], [2681, 5825, null], [5825, 8274, null], [8274, 10586, null], [10586, 12809, null], [12809, 15973, null], [15973, 18575, null], [18575, 21308, null], [21308, 24347, null], [24347, 27248, null], [27248, 29681, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 2681, true], [2681, 5825, null], [5825, 8274, null], [8274, 10586, null], [10586, 12809, null], [12809, 15973, null], [15973, 18575, null], [18575, 21308, null], [21308, 24347, null], [24347, 27248, null], [27248, 29681, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29681, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29681, null]], "pdf_page_numbers": [[0, 2681, 1], [2681, 5825, 2], [5825, 8274, 3], [8274, 10586, 4], [10586, 12809, 5], [12809, 15973, 6], [15973, 18575, 7], [18575, 21308, 8], [21308, 24347, 9], [24347, 27248, 10], [27248, 29681, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29681, 0.14189]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
2489260855c2b62a8155f83e5bde75496e84d58b
Automated Support for Security Requirements Engineering in Practice Daniel Mellado Ministry of Work and Social Affairs, Information Technology Centre of the National Social Security Institute, Madrid, Spain. Daniel.Mellado@alu.uclm.es and Moisés Rodríguez, Eduardo Fernández-Medina, Mario Piattini University of Castilla-La Mancha, Alarcos Research Group, Information Systems and Technologies Department, Paseo de la Universidad 4, 13071 Ciudad Real, Spain. {Moises.Rodriguez; Eduardo.FdezMedina; Mario.Piattini}@uclm.es Abstract Security Requirements Engineering is emerging as an important branch of Software Engineering, as we become more aware that security must be dealt with early on, in the requirements phase. However, without a CARE (Computer-Aided Requirements Engineering) tool, the application of any methodology or requirements engineering process will normally fail, because it has to be performed manually. Therefore, in this paper we present a prototype of SREPTOOL, which provides automated support to facilitate the application of the security requirements engineering process SREP. SREPTOOL simplifies the management of security requirements by providing a guided, systematic and intuitive way to deal with them from the early phases of software development, simplifying the integration of the Common Criteria (ISO/IEC 15408) into the software development process, as SREP proposes, as well as the management of the security resources repository. Finally, we illustrate SREPTOOL by describing a simple real case study, as a preliminary validation of the tool. Keywords: Security Requirements, CARE, Case study, Common Criteria, Security Engineering. 1. Introduction Nowadays, it is widely accepted that building security in at the early phases of the software development process is cost-effective and generates more robust designs [17]. 
Therefore, software security is attracting more and more interest from software engineers [28]. As a result, the discipline of Security Requirements Engineering is now highly regarded as part of Security Engineering applied to the development of information systems (IS), an area which had so far not been paid the necessary attention [19]. Security Requirements Engineering has become a very important part of the software development process for achieving secure software systems, because it provides techniques, methods and standards for tackling this task in the IS development cycle. It also implies the use of repeatable and systematic procedures to ensure that the set of requirements obtained is complete, consistent, easy to understand and analyzable by the different actors involved in the development of the system [18]. In spite of all these considerations, nowadays there are still many organizations that tend to pay little attention to security requirements. One of the reasons is the lack of CARE (Computer-Aided Requirements Engineering) tools that support the application of security requirements engineering methods, methodologies or processes. This implies, as described in [2], that the implementation of this kind of process normally fails, since it has to be performed manually. In [23] we compared several proposals of tools for IS security requirements, concluding that they neither reached an adequate level of integration into IS development, nor provided intuitive, systematic and methodological support for the management of security requirements at the early phases of IS development, with the aim of developing secure IS that conform to the most relevant security standards with regard to the management of security requirements (mainly ISO/IEC 15408 [12], as well as ISO/IEC 27001 [14], ISO/IEC 17799 [13] or ISO/IEC 21827 [10]). 
With this objective, and starting from the concept of Security Requirements Engineering described above, we proposed SREP (Security Requirements Engineering Process) [24]. In this paper, we describe the prototype of a security requirements management tool called SREPTOOL that we have developed to provide automated support for the application of SREP. SREPTOOL provides a guided, systematic and intuitive way to apply the security engineering process SREP, as well as simple integration with the rest of the requirements and the different phases of the IS development lifecycle. It also facilitates the integration of the Common Criteria (CC) [12] into the software development process, as well as fulfilment of the IEEE 830:1998 standard [8]. To do so, it relies on the functionality offered by ‘IBM Rational RequisitePro’ (the CARE tool that SREPTOOL extends). Additionally, this prototype helps to develop IS that conform to the aforementioned security standards with regard to the management of security requirements, without requiring a perfect knowledge of those standards and with reduced participation of security experts; in other words, it improves SREP's efficiency. Furthermore, thanks to the Security Resources Repository included in SREPTOOL, it is easier to reuse security artifacts, thus successively improving quality. The rest of the paper is organized as follows. In section 2, we summarize some of the basic characteristics of SREP with the aim of understanding the later explanation of the tool. Then, in section 3, we offer a comparison of CARE tools and other security requirements tools. Later, in section 4, we illustrate the tool by describing a simple real case study, as a preliminary validation of SREPTOOL, and put forward the lessons learnt. Next, in section 5, we present the related work and SREPTOOL's main contributions. 
Lastly, our conclusions and future work are set out in section 6. 2. Overview of SREP The Security Requirements Engineering Process (SREP) [24] is an asset-based and risk-driven process for the establishment of security requirements in the development of secure IS, which seeks to build security concepts in at the early phases of the development lifecycle. Basically, this process describes how to integrate the Common Criteria (CC) [12] into the software lifecycle model, together with the use of a security resources repository to support the reuse of security requirements, assets, threats, tests and countermeasures. Moreover, it facilitates the different types of traceability relationships (according to the traceability concepts in [25], which are based on [6, 18]): pre-traceability and post-traceability; backward traceability and forward traceability; inter-restrictions traceability and extra-requirements traceability. Generically, we can describe this process as an add-in of activities (decomposed into tasks that consume and produce artifacts, with the participation of different roles) that are integrated into the current model of any organization, providing it with a security requirements approach. In [24], we described in more detail how SREP is integrated into the lifecycle of the Unified Process, which is divided into a sequence of phases, each of which may include many iterations. Thus, the model chosen by SREP is a spiral process model, and its associated artifacts evolve throughout the lifecycle and are dealt with at the same time as the other functional and non-functional requirements and the rest of the artifacts of the software development process. SREP proposes a Security Resources Repository (SRR), which facilitates development with requirements reuse. Moreover, reusing security requirements helps increase their quality for improved use in subsequent projects [27]. 
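The SRR's asset-driven reuse structure can be pictured as a small linked data model. The following Python sketch is illustrative only: the class names, fields and queries are assumptions made for this example, not SREPTOOL's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative-only model of a Security Resources Repository (SRR):
# assets grouped by domain, each linked to threats, each threat linked
# to ids of reusable security requirements that mitigate it.

@dataclass
class Threat:
    name: str
    mitigating_reqs: List[str] = field(default_factory=list)

@dataclass
class Asset:
    name: str
    domain: str
    threats: List[Threat] = field(default_factory=list)

class SRR:
    """Tiny repository supporting domain-driven retrieval for reuse."""
    def __init__(self) -> None:
        self.assets: List[Asset] = []

    def add(self, asset: Asset) -> None:
        self.assets.append(asset)

    def assets_in_domain(self, domain: str) -> List[Asset]:
        # Retrieve all assets known for a project domain.
        return [a for a in self.assets if a.domain == domain]

    def requirements_for_threat(self, threat_name: str) -> List[str]:
        # Retrieve reusable requirements that mitigate a given threat.
        return [r for a in self.assets for t in a.threats
                if t.name == threat_name for r in t.mitigating_reqs]

repo = SRR()
repo.add(Asset("Pensioner personal information",
               "Web Application of Social Security",
               [Threat("Unauthorised disclosure of information",
                       ["FCO_CED.1.1"])]))
```

Seen this way, the reuse claim becomes concrete: retrieving the assets of a domain, and the requirements mitigating a threat, are simple queries over linked artifacts.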
Security requirements can be obtained from security objectives or from threats, starting from the assets. Furthermore, SREP domains consist of artefacts belonging to a specific application field, such as finance or Social Security. The concept of package refers to a homogeneous set of requirements that can be applied to different domains and that are put together to satisfy the same security objectives and to mitigate the same threats, forming a bigger and more effective reuse unit. Finally, we would like to point out that, using the CC, a large number of security requirements on the system itself and on the system development can be defined. Nevertheless, the CC do not provide methodological support, nor do they contain security evaluation criteria for administrative security measures that are not directly related to the IS security measures. However, it is known that an important part of the security of an IS can often be achieved through administrative measures. Therefore, and in accordance with ISO/IEC 17799:2005, SREP suggests including the set of legal, statutory, regulatory and contractual requirements that the organization, its commercial partners, contractors, service providers and its socio-cultural environment should satisfy. After converting these requirements into software and system requirements format, they, along with the CC security requirements, form the initial subset of security requirements in the SRR for any project. 3. CARE Tools Comparison First of all, we had to decide whether to develop a new tool or to extend an existing one. 
Thus, taking into account the characteristics of the SREP process and the objectives of its application, we considered it appropriate to extend an existing tool and to focus the search on the field of CARE tools, discarding document or content management tools (CMS, Content Management Systems) as well as other types of CASE (Computer Aided Software Engineering) tools which are centred on other phases of the lifecycle. We believe that a major success factor for practical security requirements engineering will be seamless integration with existing tools, instead of requiring special-purpose tools. <table> <thead> <tr> <th>Table 1 Summary of the comparative analysis of CARE tools: comparison criteria (tools compared include RequisitePro)</th> </tr> </thead> <tbody> <tr> <td>Extensibility of the functionality</td> </tr> <tr> <td>Traceability</td> </tr> <tr> <td>Integration with other tools of the life cycle</td> </tr> <tr> <td>Reuse support</td> </tr> <tr> <td>Project repository</td> </tr> <tr> <td>Specification validation</td> </tr> <tr> <td>Specification standards</td> </tr> <tr> <td>Previous experience; ease of use and user interface</td> </tr> <tr> <td>Requirement importation</td> </tr> <tr> <td>Version control and baselines</td> </tr> <tr> <td>User- and role-based access control</td> </tr> <tr> <td>Parameterized requirements</td> </tr> </tbody> </table> Therefore, in this section, with the aim of obtaining a general view of CARE tools, we present a summary of the state of the art of these tools. Firstly, we identify the existing CARE tools and select those to be studied in depth; then, we compare them with the help of an analysis framework. For the definition of the analysis framework that would facilitate the appropriate selection of a CARE tool, we took into account the general requirements of a good CARE tool [7], as well as the concrete needs the CARE tool would have to meet to support the correct application of the SREP process. 
Based on the INCOSE survey [9], we carried out a first selection of tools that fulfil the majority of the key functions of a CARE tool and that, given their degree of market penetration, have been shown to provide effective solutions, as stated in diverse studies on the subject such as [1, 9]. Table 1 summarizes the comparative analysis performed on the selected tools, using as comparison criteria those characteristics considered necessary for the satisfactory application of the SREP process. These tools provided almost all the capabilities required of a CARE tool. However, our analysis showed that none of them covered all the needs required to provide automated support for SREP. In fact, some of their limitations were critical for a satisfactory application of SREP. Among other weaknesses found, we can highlight the fact that none of the tools made requirements instantiation through parameterization possible. Likewise, they did not provide adequate techniques for either security requirements specification (such as security use cases [3]) or threats specification (misuse cases [26]). Nor did they offer automated methodological support for security requirements management, lacking support for fundamental activities such as risk assessment, or conformance to the currently most relevant security standards with regard to the management of security requirements (such as ISO/IEC 15408, ISO/IEC 27001, ISO/IEC 17799 or ISO/IEC 21827). Finally, we decided to extend RequisitePro as the support for our prototype, mainly due to the following factors: • Extensibility: RequisitePro allows us to access the data stored in it (project, requirements, attributes, etc.), and lets us control the RequisitePro user interface and Microsoft Word documents as well. Despite being a little more limited than the other tools in this respect, it was clearer and simpler to adapt to SREPTOOL's needs. 
• Automated integration with the rest of the lifecycle activities. RequisitePro, being integrated into the “Rational Suite Analyst Studio” package, facilitated a key aspect for SREP: integration not only with the other requirements but also with other artifacts of the IS lifecycle (such as its integration with “Rational Rose” modelling elements). • Previous experience. RequisitePro has been widely used as a support tool in projects prior to SREPTOOL's development (being used, for example, in [22]), since it is the corporate tool of the Information Technology Centre of the National Social Security Institute (the organization to which the first author belongs). This makes it very convenient for carrying out real case studies of SREPTOOL in order to validate the tool. • Ease of use and multiuser support. Among its most highlighted characteristics are its integration with the Microsoft Word text processor and the possibility of accessing all its functions from a single view. Furthermore, it provides multiuser access to the project and a collaborative web interface. • Traceability. RequisitePro allows the creation of traceability relationships between different types of requirements, visualized through a traceability matrix. • Other relevant factors: RequisitePro allows a certain degree of reuse through document templates. Besides, its repository supports several well-known commercial relational databases, and it offers version control of requirements. 4. SREPTOOL: Case Study In this section, we describe how SREPTOOL can be applied in practice. First of all, the technology used in the tool is put forward. Then, the functionalities of the tool are described with the help of a simple, real case study. Due to space constraints, this case study is kept unrealistically simple, so that the functionalities of SREPTOOL can be easily illustrated in this paper. 4.1. 
Developing the Tool To create the prototype, we generated an ActiveX DLL library that is linked with RequisitePro. In this way, the objects of RequisitePro are visible from SREPTOOL and, conversely, the artifacts generated by the prototype are visible from RequisitePro. Thus, the prototype's functionality is accessible from the main window of Rational RequisitePro, through the menu Tools→SREPTOOL. For the integration with RequisitePro, SREPTOOL has been developed as an add-in, using the RequisitePro extensibility interface, which allows us to access data stored in RequisitePro, to control the RequisitePro user interface, and also to control Microsoft Word documents. In addition, the tool uses two databases. The first contains the user information and is stored in encoded form. The second is the repository, and access to it is controlled by password. 4.2. Case Study The case study presented here is representative of a security-critical IS in which security requirements have to be correctly treated in order to achieve a robust IS. We study a real e-government service available on the website of the Social Security of Spain (http://www.seg-social.es): an application (called ePension) that basically provides information about the status of the pension(s) of a given citizen. It was previously studied in [22] from a different perspective; in this case, the work was carried out in the context of a reengineering process performed on the application to adapt it to a new technical environment, so it was critical that the “new” application remained secure. SREPTOOL helped us to obtain all the security requirements of the application. 
In this first iteration, we identified the following functional and non-functional requirements, which were registered in IBM Rational RequisitePro: - **Req1**: On request-1 from a User/Citizen, the system shall display the list of current existing dossiers about his/her pension (social welfare provision). This request shall include data that identify the User/Citizen without mistake, such as the social security number. - **Req2**: On request-2 from a User/Citizen, the system shall display the details of the previously selected dossier (type of pension, disability, amount of money, bank account number, etc.). This request shall include data that identify the User/Citizen without mistake, such as the social security number and the dossier number. - **Req3**: ePension must have W3C-WAI double-‘A’ level conformance (Usability). - **Req4**: Performance according to the organizational policy for a web application. Fig. 1 Activity 1 of SREPTOOL In addition, we assume that the Organization has already introduced some elements into the Security Resources Repository (SRR), such as the legal and regulatory requirements that the organization has to comply with, including those arising from its socio-cultural environment. After converting these requirements into software and system requirements format, these requirements, along with the CC security requirements, form the initial subset of security requirements in the SRR, which, together with their associated security elements (security objectives, assets, threats, …), form the initial subset of security elements in the repository of SREPTOOL. 4.2.1 Activity 1: Definitions Agreement. As we can see in Fig. 1, SREPTOOL helped us to reach agreement upon a common set of security definitions (information security, threat, confidentiality, etc.) by providing us with the definitions of these concepts according to ISO/IEC 17799:2005 and ISO/IEC 27001. 
In addition, SREPTOOL allows us to define new standards as well as their concepts, which are registered in the repository. It also allows us to state the evaluation assurance level (EAL) of the Common Criteria, such as EAL-1 (Functionally Tested) in this case study, and to set the stakeholders that will participate in the project, together with their roles, according to the available human resources previously introduced into the SRR. Furthermore, we collected the SREP starting artifacts in the Security Vision, such as the Organizational Security Policy, Legislation, Context and Security Environment, and Previous Assumptions. Finally, SREPTOOL can automatically generate the “Security Vision Document” with all the data introduced in this activity (in CC format, as is all the documentation the tool generates automatically). 4.2.2 Activity 2: Assets Identification. After analysing the functional requirements (Req1 and Req2), and in accordance with CC assurance requirement ADV_FSP.3.1D, we identified Information as the most relevant asset type. Other assets would need to be considered in a case study without space constraints, including tangible and intangible assets such as reputation. Then we introduced the domain of the project (which is an input of SREP), and the tool showed us all the assets related to the domain. If an asset is not in the repository, it can be introduced in this tab. As we can see in Fig. 2, in this case the domain is “Web Application of Social Security” and the assets are as follows (in this first iteration of SREP): - Personal information about the pensioner: name, social security number, address. - Personal information about the pension(s): kind of pension (old-age / disability (type of disability) / widow's pension), amount of money, bank account number. 4.2.3 Activity 3: Security Objectives Identification. 
Selecting the assets one by one, the tool showed us their related security objectives available in the repository for the selected domain. In Fig. 3, we selected the following security objectives: - SO1: To prevent unauthorised disclosure of information (Confidentiality). - SO2: To prevent unauthorised alteration of information (Integrity). - SO3: To ensure availability of information to the authorised users. - SO4: To ensure authenticity of users. - SO5: To ensure accountability. Additionally, for each of the security objectives introduced into the project, SREPTOOL lets us establish a valuation. Moreover, we could create new security objectives not present in the SRR and define dependencies between them; but as this case study is only a first iteration, we only used the security objectives defined in the SRR. Finally, the “Security Objectives Document” (in CC format) could be generated automatically. 4.2.4 Activity 4: Threats Identification. SREPTOOL allowed us to automatically retrieve the threats associated with the assets of the project. As we can see in Fig. 4, we identified the following threats that could prevent reaching the previously identified security objectives: - Threat 1: Unauthorised disclosure of information. - Threat 2: Unauthorised alteration of information. - Threat 3: Unauthorised unavailability of information. - Threat 4: Spoofing of user identity. SREPTOOL can then automatically generate the “Security Problem Definition Document” with the help of the CC assurance class “ASE”. 4.2.5 Activity 5: Risk Assessment. Having identified the threats, we then carried out the risk assessment. To perform this task, SREPTOOL uses a technique proposed by the MAGERIT guide of techniques [21], which is based on tables for analysing the impact and risk of threats (and conforms to ISO/IEC 13335 [11]). Firstly, with the help of the stakeholders, we estimated the degradation that each selected threat produces on each of the security objectives. 
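Combined with the likelihood of each threat, these degradation/impact estimates drive a table lookup. A minimal Python sketch of such a table-driven risk computation follows; the qualitative scales and matrix values are invented for illustration and are not MAGERIT's actual tables.

```python
# Hypothetical qualitative scales; MAGERIT defines its own levels and tables.
RISK_TABLE = {
    # (impact, likelihood) -> qualitative risk level (illustrative values)
    ("Low",    "Rare"):     "Low",
    ("Low",    "Possible"): "Low",
    ("Low",    "Frequent"): "Medium",
    ("Medium", "Rare"):     "Low",
    ("Medium", "Possible"): "Medium",
    ("Medium", "Frequent"): "High",
    ("High",   "Rare"):     "Medium",
    ("High",   "Possible"): "High",
    ("High",   "Frequent"): "High",
}

def assess(threats):
    """Map (threat name, impact, likelihood) triples to qualitative risks."""
    return {name: RISK_TABLE[(impact, likelihood)]
            for name, impact, likelihood in threats}

risks = assess([
    ("Unauthorised disclosure of information", "High", "Possible"),
    ("Spoofing of user identity", "Medium", "Rare"),
])
```

With the risks computed this way, prioritizing requirements later reduces to sorting them by the risk level of the threats they mitigate.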
Then, we introduced the likelihood with which each threat may take place. With these data, the tool automatically calculated the risks, as we can see in Fig. 5. Finally, SREPTOOL generated the “Risk Assessment Document”. 4.2.6 Activity 6: Security Requirements Elicitation. In order to derive security requirements, each security objective was analysed for possible relevance together with the threats that imply the highest risk, so as to select the suitable security requirements, or the suitable package of security requirements, that mitigate the threats at the levels made necessary by the risk assessment. Once the threats relevant to the project had been selected, we selected the security requirements considered necessary. To do so, we had three options, as we can see in Fig. 6: - Having selected a threat, the prototype shows the requirements related to that threat in the SRR. The user only has to select those requirements he/she considers relevant. - Having selected a class and one of its families (according to the definition of both concepts in the Common Criteria), the prototype shows the security requirements associated with that family (the security and assurance requirements of the CC are inserted into the repository during SREPTOOL installation). The user can select and add the desired requirements to the project. - Selecting one of the requirements packages and, within the package, the desired requirements. In this case study we used the first option, and we selected and added to the project the security requirements we considered relevant to the previously defined threats. These security requirements were: - SR1: The security functions of ePension shall use cryptography [assignment: cryptographic algorithm and key sizes] to protect the confidentiality of pension information provided by ePension to a User. (CC requirement FCO_CED.1.1). 
- SR2: The security functions of ePension shall identify and authenticate a User by using credentials [assignment: challenge-response technique based on the exchange of random encrypted nonces, public key certificate] before a User can bind to the shell of ePension. (CC requirements FIA_UID.2.1 & FIA_UAU.1.1).
- SR3: When ePension transmits pension or pensioner's information to the User, the security functions of ePension shall provide that user with the means [assignment: digital signature] to detect [selection: modification, deletion, insertion, replay, other integrity] anomalies. (CC requirement FCO_IED.1.1).
- SR4: The security functions of ePension shall ensure the availability of the information provided by ePension to a User within [assignment: a defined availability metric] given the following conditions [assignment: conditions to ensure availability]. (CC requirement FCO_AED.1.1).
- SR5: The security functions of ePension shall require evidence that ePension has submitted pension information to a User and that he/she has received the information. (CC requirement FCO_NRE.1.1).
- SR6: The security functions of ePension shall store an audit record of the following events [selection: the request for pension information, the response of ePension], and each audit record shall record the following information: date and time of the event, [selection: success, failure] of the event, and User identity. (CC requirements FAU_GEN).
Moreover, SREPTOOL allows us to relate security requirements to the functional and non-functional requirements of the project. It also facilitates the definition of new security requirements by means of a security use case specification, with the help of a template, as well as the selection and/or creation of countermeasures and security tests. 4.2.7 Activity 7: Prioritization.
The purpose of this activity is to automate the prioritization of the security requirements according to the risk of the threats they mitigate and the dependencies on other functional and non-functional requirements. For each of the security requirements established in the project, we selected the level of priority to assign to it (Critical, Standard or Optional). Then, by pressing the "Prioritize" button, SREPTOOL sorts the security requirements list from highest to lowest priority. 4.2.8 Activity 8: Requirements Inspection. In this activity, SREPTOOL facilitates the task of verifying that the security requirements conform to IEEE 830:1998 and ISO/IEC 15408, because it makes it easier for the user to verify and validate the security requirements by checking for threats for which no security requirements have been specified in the project, together with the assurance requirements that have not been added to the project according to the assurance level defined in activity 1. As we can see in Fig. 7, SREPTOOL advised the quality assurer and the inspection team that we had forgotten to add some assurance requirements, such as: ADV_FSP.1 Basic functional specification; AGD_OPE.1 Operational user guidance; AGD_PRE.1 Preparative procedures; ALC_CMC.1 Labelling of the TOE; ALC_CMS.1 TOE CM coverage. Finally, the tool can generate the "Validation Report" and the "Security Requirements Rationale Document" (in CC format). 4.2.9 Activity 9: Repository Improvement. SREPTOOL allows us to select those security artifacts modified or generated in the iteration and considered interesting enough to be introduced into the SRR. Finally, in this activity the tool generates the Security Target Document conforming to the Common Criteria (ISO/IEC 15408), which integrates all the information related to the artifacts generated by SREPTOOL in the previous activities. 4.3 Lessons Learnt.
Among the most important lessons learnt from the case study presented above, we can highlight the following ones: • The application of this case study has allowed us to improve and refine several functionalities of SREPTOOL. Furthermore, we refined SREP with regard to the participation of some roles in some activities. • Tool support is critical for the practical application of this process to large-scale software systems, due to the number of artifacts handled and the several iterations that have to be carried out. • Integration with other tools of the lifecycle is essential to obtain appropriate traceability of the security requirements and an appropriate implementation of security requirements engineering within an organization. 5. Related Work Extensive work has been carried out on security requirements during the last few years, as presented in [23], and there are several works that deal with security requirements management tools similar to SREPTOOL. Some proposals are particularly similar to ours: SirenTool [20] is an add-in for Rational RequisitePro supporting the SIREN (Simple REuse of software requirements) method [27], a method to elicit and specify security system and software requirements that includes a repository of security requirements initially populated by using MAGERIT, which can be structured according to domains and profiles in a similar way to SREPTOOL. However, SirenTool focuses on requirements lists and only reuses requirements, which are retrieved via the MAGERIT asset hierarchy or via the aforementioned repository structure. A distinguishing property of our proposal is that we reuse specifications of requirements and threats, as well as security objectives, assets, countermeasures and tests, so that the requirements can be retrieved via assets, security objectives or threats.
ST-Tool [5] is a CASE tool developed for modelling and analysing functional and security requirements; it allows us to design and verify them. ST-Tool has been designed to support the Secure Tropos methodology [4], an agent-oriented software development methodology which manages the concepts of actor, service and social relationship. In contrast to SREPTOOL, it does not deal with the reuse of security resources, does not incorporate the CC into its steps, and does not facilitate the generation of reports. UMLsec-Tool [16] supports UMLsec [15]. There are presently several projects at the Munich University of Technology aimed at the development of automated tools for processing UMLsec models, in order to make the methodology directly applicable in the IT industry. They provide an extension to the conventional use-case-oriented development process for security-critical systems, considering security aspects both in the static domain model and in the functional specification. For the elaboration of the functional aspects they introduce a question catalogue, and for the domain model a UML extension, UMLsec. In brief, the main contributions of SREPTOOL with respect to the former proposals are as follows: • SREPTOOL is a standard-centred tool. It helps us to develop IS that conform to the most important current security standards in different activities of the requirements engineering process. It integrates the CC (ISO/IEC 15408) security functional requirements into the elicitation of security requirements and it also introduces the CC security assurance requirements into the requirements inspection. In addition, it conforms to ISO/IEC 13335 (GMITS) to carry out the risk assessment. Moreover, it facilitates conformance with the sections about security requirements of ISO/IEC 17799:2005 (sections 0.3, 0.4, 0.6 and 12.1) and ISO/IEC 27001:2005 (sections 4.2.1, 4.2.3, 4.3, 6.a, 6.b and A.12.1.1).
• It is a reuse-based tool built on a security resources repository, so that threats and requirements and their specifications, security objectives, assets, countermeasures and tests are reused, and thus their quality is successively increased. • SREPTOOL integrates the latest security requirements specification techniques (such as security use cases [3], misuse cases [26], and UMLsec in the next version). • It facilitates the automated integration of the security requirements with the rest of the security artifacts and lifecycle activities, not only with the other requirements but also with other artifacts of the IS lifecycle. • It automates report generation. • It is driven by assets or threats and by the risk assessment. 6. Conclusions and Future Work Nowadays, software security is generating growing interest, and security requirements engineering has become a very important part of the software development process for the achievement of secure software systems. Traditional requirements management tools are not able to directly support the security requirements management described above. We have shown in this paper that a seamless integration into these tools of security requirements engineering concepts and of the most relevant security standards for the management of security requirements (such as ISO/IEC 15408, ISO/IEC 27001, or ISO/IEC 17799) is possible. Thus, tools like SREPTOOL are actually a critical enabler for the industrial uptake of security requirements engineering, a fact which was shown in the real case study performed at the National Social Security Institute (Spain) [22]. Finally, there is a set of aspects planned for the future of this prototype that will allow us to increase the level of automation of the SREP application and, thus, the efficiency of an organization's requirements engineering process.
Among them, we can highlight the following: to extend the types of supported requirements specifications in order to support UMLsec [15]; to refine the integration with RequisitePro and to extend the tool so that it is supported by other CARE tools; and to automate the creation of security use cases by using misuse cases created in SREP activity 4. Acknowledgments This paper is part of the ESFINGE (TIN2006-15175-C05-05) and RETISTRUST (TIN2006-26885-E) projects of the Ministry of Education and Science (Spain), and of the MISTICO (PBC-06-0082) and DIMENSIONS (PBC-05-012-2) projects of the Consejería de Ciencia y Tecnología de la Junta de Comunidades de Castilla-La Mancha and the FEDER. References
A LOW-COST MULTICOMPUTER FOR SOLVING THE RCPSP Grzegorz Pawiński¹, Krzysztof Sapiecha¹ ¹Department of Computer Science, Kielce University of Technology KEY WORDS: RCPSP, multicomputer, distributed processing model ABSTRACT: In the paper it is shown that the time necessary to solve the NP-hard Resource-Constrained Project Scheduling Problem (RCPSP) can be considerably reduced using a low-cost multicomputer. We consider an extension of the problem where resources are only partially available and a deadline is given, but the cost of the project should be minimized. In such a case finding an acceptable solution (optimal or even semi-optimal) is computationally very hard. To reduce this complexity, a distributed processing model of a metaheuristic algorithm, previously adapted by us for working with human resources and the CCPM method, was developed. Then, a new implementation of the model on a low-cost multicomputer built from PCs connected through a local network was designed and compared with a regular implementation of the model on a cluster. Furthermore, to examine communication costs, an implementation of the model on a single multi-core PC was tested, too. The comparative studies proved that the implementation is as efficient as on a more expensive cluster. Moreover, it has balanced load and scales well. 1. INTRODUCTION Resource allocation, known as the Resource-Constrained Project Scheduling Problem (RCPSP), attempts to schedule project tasks efficiently using limited renewable resources, minimising the maximal completion time of all activities [3 - 5]. A single project consists of \( m \) tasks which are precedence-related by finish-start relationships with zero time lags. The relationship means that all predecessors have to be finished before a task can be started. To be processed, each task requires a human resource (HR). The resources are limited to one unit and therefore have to perform different tasks sequentially. RCPSP is an NP-hard problem.
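As a small illustration of these constraints (our own sketch, not code from the paper), a schedule is feasible when every task starts only after all of its finish-start predecessors have finished, and when no single-unit HR executes two tasks at the same time:

```java
import java.util.List;

public class ScheduleCheck {
    // One scheduled task: start time, duration, predecessor indices, assigned HR.
    record Task(int start, int duration, List<Integer> preds, int hr) {
        int finish() { return start + duration; }
    }

    /** Feasibility under finish-start precedence (zero time lags) and unit HR capacity. */
    static boolean feasible(List<Task> tasks) {
        for (int i = 0; i < tasks.size(); i++) {
            Task t = tasks.get(i);
            // All predecessors must be finished before the task starts.
            for (int p : t.preds()) {
                if (tasks.get(p).finish() > t.start()) return false;
            }
            // Each HR is a single unit: its tasks must not overlap in time.
            for (int j = i + 1; j < tasks.size(); j++) {
                Task u = tasks.get(j);
                if (u.hr() == t.hr()
                        && t.start() < u.finish() && u.start() < t.finish()) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Task> schedule = List.of(
                new Task(0, 3, List.of(), 0),
                new Task(3, 2, List.of(0), 0));  // starts exactly when predecessor finishes
        System.out.println(feasible(schedule));
    }
}
```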
In most cases, branch-and-bound is the only exact method which allows the generation of optimal solutions for scheduling rather small projects (usually containing fewer than 60 tasks and not highly constrained) within acceptable computational effort [1, 5]. Results of the investigation by Hartmann and Kolisch [8] showed that the best performing heuristics were the GA of Hartmann [7] and the SA procedure of Bouleimen and Lecocq [2]. Their latest research revealed that the forward-backward improvement technique applied to X-pass methods, metaheuristics or other approaches produces good results, and that the most popular metaheuristics were GAs and TS methods. In our previous works, cost-efficient project management based on a critical chain (CCPM) was investigated. CCPM is one of the newest scheduling techniques [19]. It was used to solve a variant of the RCPSP. A goal of the management was to allocate resources in order to minimise the project total cost and complete the project in a given time. A sequential metaheuristic from Deniziak [6] was adapted to take into account specific features of human resources participating in a project schedule. The research showed high efficiency of this adaptation for resource allocation [12]. An extension of the problem, where HRs are only partially available since they may be involved in many projects, was also investigated [14]. The research proved that the adaptation is efficient, but the minimization was still time consuming and would require acceleration to cope with bigger real-life problems. Our latest research showed that the algorithm has an inherent parallelism. Hence, a distributed processing model for solving the extension of the RCPSP was developed and tested on regular PCs [13]. It gave a scheduling time even 10 times smaller than sequential processing. Therefore, in this research we present a new implementation of the model on a low-cost multicomputer built from PCs connected through a local network.
Furthermore, we compare it with a regular implementation of the model on a cluster and show that it may be just as efficient while avoiding the high cost that might limit the practical value of a cluster-based solution. The next section of the paper contains a brief overview of related work. Motivation for the research is given in Section 3. An implementation of the distributed processing model for the algorithm is presented in Section 4. Evaluation of the implementation in both distributed and parallel environments is given in Section 5. The paper ends with conclusions. 2. RELATED WORK Researchers have studied the problem and suggested their own solutions, which can be divided into exact procedures and heuristics. Branch and bound methods are an example of the exact procedures (see e.g. [3], [4]). In [11] another method, a tree search algorithm, was presented. It is based on a new mathematical formulation that uses lower bounds and dominance criteria. An in-depth study of the performance of the latest RCPSP heuristics can be found in [10]. Heuristics described by the authors include X-pass methods, also known as priority rule-based heuristics, and classical metaheuristics, such as Genetic Algorithms (GAs), Tabu Search (TS), Simulated Annealing (SA), and Ant Colony Optimisation (ACO). Non-standard metaheuristics and other methods were presented as well. The former consist of local search and population-based approaches, which have been proposed to solve the RCPSP. The authors investigated a heuristic which applies forward-backward and backward-forward improvement passes. For a detailed description of the heuristic schedule generation schemes, priority rules, and representations, refer to [8]. The effectiveness of scheduling methods can be further improved using parallel processing. Some implementations of parallel TS [15–17] and SA [18] algorithms for different combinatorial problems have already been proposed.
The most common one is based on dividing (partitioning) the problem such that several partitions can be run in parallel. Parallelism in GAs can be achieved at the level of single individuals, the fitness functions or independent runs [21, 22]. All of the parallel approaches fall into three categories: the first uses a global model, the second a coarse-grained (island) model and the third a fine-grained (grid, cellular) model [20]. In the global model, a master process manages the whole population by assigning subsets of individuals to slave processes. In the island model, a population is divided into sub-populations that are evolved separately. During evolution, some individuals are exchanged periodically between them. In the grid model, a population is represented as a network of interconnected individuals where only neighbours may interact. It was observed that parallel GAs (PGAs) usually provide better efficiency than sequential ones [20]. The same parallel approaches can be applied to ACO. In [23] five strategies of parallel processing are described, which are mainly based on the well-known master/slave approach [24]. 3. MOTIVATION Sequential algorithms are time consuming, which considerably limits their usefulness. Speeding up the calculations would be desirable for project managers because it may allow managing complex projects in acceptable time. Parallel models offer the advantage of reducing the execution time and give an opportunity to solve new problems which have been unreachable with sequential models. The most popular parallel strategies are based on the master/slave approach [24] with centralized management of distributing tasks and gathering results. The master can efficiently coordinate the system, avoiding potential conflicts before they take place, and react to failures of the slaves. However, global gathering and re-broadcasting of large configurations can be time-consuming.
Costs of synchronization between slaves also have to be considered. Some slaves may have to wait for the completion of other tasks, which is necessary to retain data integrity. Moreover, the master is the weakest point of the system. The system will slow down if the master cannot handle incoming requests, and if the master crashes, the whole system will also crash. Another problem is load imbalance caused by the unpredictable processing time of each slave. Summarizing, the gain coming from parallelization of the algorithm may be significantly reduced. From our research it also follows that parallel processing can efficiently reduce the amount of time consumed by the metaheuristic algorithm [13]. Usually, such a reduction requires the use of a cluster and hence is expensive, which may limit its popularity. The key idea to overcome this inconvenience is to make use of the multi-core architecture of low-cost PCs, instead of a cluster. Such a multicomputer is cheap, easily assembled and might be very useful for practical reasons. However, it should be proven that the implementation is as efficient as on the cluster, and that it has balanced load and scales well. 4. OPTIMIZATION ALGORITHM The metaheuristic algorithm starts with an initial point and searches for the cheapest solution satisfying given time constraints. The initial schedule is generated by greedy procedures that try to find a resource for each task based on the smallest increase of the project duration or the project total cost. It is a suboptimal solution which the algorithm tries to enhance. In each pass of the iterative process, the current project schedule is modified in order to get closer to the optimum. In the first stage (add), a new HR which is not in the schedule is attached to it. Tasks of HRs which have already been engaged in the schedule are moved to the new HR, but only when a positive gain is achieved. Afterwards, if there are HRs without allocated tasks, they are removed from the schedule.
The best schedule goes to the next stage and the procedure is repeated until no more free HRs are available. In the second stage (rem), all tasks allocated to an HR are moved onto other HRs still remaining in the schedule, but only when a positive gain is achieved. Then again, HRs without allocated tasks are removed from the schedule. Finally, the best project schedule coming from all stages is chosen. The iterative process is repeated for every resource from the resource library until no improvement can be found. At the very end, project tasks may be shifted right to the latest feasible position into their forward free slack by means of an As Late As Possible (ALAP) schedule. 4.1. Distributed processing model The distributed processing model is shown in Figure 1. Figure 1: Distributed processing model. In general, there are \( R \cdot (1 + R_r) \) schedule modifications that have to be calculated, where \( R \) is the number of HRs and \( R_r \) is the number of HRs that are left after a particular add stage. However, not all of them can be performed at the same time. At the beginning, only \( R \) attempts to add a new HR to the schedule may be calculated. Each of the add stages can be performed simultaneously. Afterwards, when any of them is finished, \( R_r \) attempts in the rem stage may be started. The attempts to move all tasks from each of the HRs may also be calculated separately. Thus, the maximal number of simultaneous modifications is \( R \cdot R_r \), when all the add stages finish at the same time. The process iteration ends after finishing all of the second stages. 4.2. Implementation of the model The distributed processing model (Figure 2) was implemented in Java. One application, which is a tasks dispatcher (D), manages a pool of threads responsible for communication with other worker applications located on remote computers.
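The actual tool communicates with remote workers over RMI; the sketch below is our own local approximation, with assumed names such as StageResult. It models each add/rem stage request as a Callable executed by a reusable thread pool, and shows the iteration's synchronization point where all results are collected and the cheapest schedule is kept (with \( R \) HRs and \( R_r \) remaining HRs, up to \( R \cdot R_r \) such requests may be in flight at once).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class Dispatcher {
    // Result of one schedule modification attempt (assumed shape: HR id and schedule cost).
    record StageResult(int hrId, int cost) {}

    // Reuses previously constructed threads, like the worker pool described in the paper.
    private final ExecutorService pool = Executors.newCachedThreadPool();

    /** Run one iteration: submit a stage request per HR, wait for all, keep the cheapest. */
    StageResult runIteration(List<Callable<StageResult>> stageRequests)
            throws InterruptedException, ExecutionException {
        List<Future<StageResult>> futures = new ArrayList<>();
        for (Callable<StageResult> request : stageRequests) {
            futures.add(pool.submit(request));   // picked up by the first free thread
        }
        StageResult best = null;
        for (Future<StageResult> f : futures) {  // synchronization point of the iteration
            StageResult r = f.get();
            if (best == null || r.cost() < best.cost()) best = r;
        }
        return best;
    }

    void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        Dispatcher d = new Dispatcher();
        List<Callable<StageResult>> requests = List.of(
                () -> new StageResult(1, 120),
                () -> new StageResult(2, 95),
                () -> new StageResult(3, 110));
        System.out.println(d.runIteration(requests).hrId());  // HR 2 gives the cheapest schedule
        d.shutdown();
    }
}
```

In the real implementation each Callable would instead marshal the project data to a remote worker process via RMI and unmarshal the modified schedule.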
Figure 2: Implementation of the distributed processing model (D - tasks dispatcher, T - thread, C - remote computer, P - process, RMI - remote method invocation). At the beginning, workers notify the dispatcher about their readiness to execute tasks. The tasks dispatcher creates a new thread for each worker and joins it to the pool. The pool contains as many threads as needed, but reuses previously constructed threads when they are available. On the remote computers, workers run as independent processes, which makes them available for direct communication. Therefore, the tasks dispatcher may uniformly split the computational tasks, so that the workload can easily be balanced. Each remote computer runs as many processes as the number of processor cores, in order to use the whole computing power of multi-core machines. During the execution of an iteration of the algorithm, the tasks dispatcher sends schedule modification requests to the first free worker. To this end, it uses Remote Method Invocation (RMI) for communication. If a worker is not responding, it is removed from the pool and the request is sent to another free worker. Workers receive the project data and the searching parameters in order to invoke a method performing the add or the rem stage. Afterwards, the results of the modifications are sent back to the dispatcher and the thread can be reused. Synchronization occurs at the end of each iteration because all the rem stages have to be finished in order to choose the best schedule. 5. COMPARATIVE STUDIES The efficiency of the algorithm described in the paper was estimated on 100 randomly generated project plans containing from 30 to 60 tasks, and from 8 to 16 HRs with random data. Each project plan was scheduled several times and the results were averaged. Tasks in the project plan may have at most 4 precedence relationships, each inserted with probability 0.35.
Such tasks can be easily scheduled because they have few predecessors or none. If the probability of inserting the precedence relationships were lower, the project plan would contain mostly unconnected tasks. On the other hand, tasks with two or more predecessors significantly decrease the search space. In each project, resource availability was reduced by allocating 30 tasks from PSPLIB, developed by Kolisch and Sprecher [9]. The set with 30 non-dummy activities is currently the hardest standard set of RCPSP instances for which all optimal solutions are known [4]. However, we considered an extension of the RCPSP where resources have already got their own schedule and the cost of the project, not the project duration, should be minimized. So even though we take the project instances from PSPLIB, the results cannot be compared. The initial schedule was generated by the two greedy procedures mentioned at the beginning of Section 4. The implementation of the distributed model was run on two distributed systems:
- a multicomputer built from PCs (ClusterPCs) that comprises 10 multi-core computers with an Intel Core i5-760 processor (8M cache, 2.80 GHz) and 2 GB of RAM, connected via a Gigabit Ethernet TCP/IP local network,
- a regular cluster that comprises 1 head node with an Intel Xeon E5410@2.33GHz and 16 GB of RAM, and 10 processing nodes with an Intel Xeon E5205@1.86GHz and 6 GB of RAM, connected via a Gigabit Ethernet TCP/IP local network.
Furthermore, to examine communication costs, an implementation of the model on a single multi-core PC was tested, too. 5.1. Tests which examine the implementation of the model in distributed environments The algorithm's scalability depends on the number of HRs because it is related to the number of schedule modifications. The number of independent requests, and consequently the need for workers, increases along with the number of HRs.
The influence of changing the number of workers on the computation time, with respect to the number of tasks, is shown in Figure 3. In both distributed environments, the computation time falls significantly as the number of workers grows. The decline is particularly visible when only a few workers are used. Finally, the computation time levels off at its minimum, no matter how many more workers are used. In both environments, the increase of the number of tasks also influences the scheduling time. However, the cluster, despite slower CPUs, copes better with the increase of the number of tasks. In the cluster, the growth of the scheduling time in more complex projects is slower, especially when only a few workers are used. In general, the reduction of the computation time looks similar in both environments. It is worth noticing that the computation time was reduced even to 6% of the sequential computation time for the project with 60 tasks and 12 HRs (Figure 3b, left column). Figure 3: Computation times compared with the number of workers for a constant number of HRs (left column - ClusterPCs, right column - the cluster). The CPU usage in ClusterPCs during the scheduling of a project with 35 tasks and 16 HRs was examined (Figure 4). The CPU usage was monitored every 50 ms and the reads were averaged at the end of the calculations. More frequent reads could influence the processor load. The number of HRs was chosen so that enough simultaneous attempts were provided to keep the workers busy. Each PC was running 4 workers (one worker was assigned to every core). Figure 4 illustrates how the schedule modification requests spread over the available PCs. The CPU usage on PC #1 is almost 100%, but only when 4 workers are used. If the number of workers increases, the load is balanced by the use of the other PCs. The distributed algorithm scales well because the computational tasks may be uniformly split among workers.
Summing the core usage (expressed in units of 100%), it grows from 3.7 cores for 4 workers to 9.48 cores for 36 workers. Together with the tasks dispatcher, the total core usage was 10.02. Hence, the scheduling time was reduced 10-fold by the use of 40 cores on 10 PCs.

5.2. Tests which examine the influence of the communication cost on algorithm performance

Distributed tests were executed in order to examine how the network latency influences the algorithm performance. To that end, 4 workers run on the ClusterPCs (comprising 2 multi-core PCs) were compared with 4 workers on 2 processing nodes in the cluster and with 4 workers on a single PC (the so-called LocalPC). All workers used RMI for communication. At first, the number of modification requests was counted with respect to the number of resources and the number of tasks (Table 1).

Table 1 Number of schedule modification requests for different numbers of resources and tasks

| No. resources | 30 tasks | 35 tasks | 40 tasks |
|---------------|----------|----------|----------|
| 10            | 634      | 755      | 480      |
| 12            | 765      | 930      | 869      |
| 14            | 1009     | 694      | 1492     |
| 16            | 1412     | 1412     | 1564     |

The number of requests increases with the number of resources and varies with the number of tasks. The more requests are sent, the greater the impact of the communication cost on the performance. The average scheduling time for a project with 30 tasks is shown in Table 2.

Table 2 Average scheduling time for a project with 30 tasks [ms], for 2, 3 and 4 workers in the cluster, the ClusterPCs, the LocalPC and the Threads configurations (Resnum – No. resources; rows for 10, 12, 14 and 16 resources). [The numeric values of this table could not be recovered from the source.]

It is clear that the scheduling time decreases as the number of workers grows. Yet, the decline is very small between 3 and 4 workers in the LocalPC, because the computer resources start to be overloaded when 4 workers and the tasks dispatcher run on the same machine. On average, the LocalPC is about 13% faster than the corresponding ClusterPCs (for fewer than 4 workers), due to low communication costs. On the other hand, the ClusterPCs is better when the number of workers exceeds the number of processor cores, and it is not limited in the number of workers it can host. Even the usage of 4 workers reduced the scheduling time by 54% in the ClusterPCs and by 48% in the cluster, in the project with 30 tasks and 10 HRs. However, the reduction ratio in the former decreases with the increasing number of resources, while it does not change in the latter. This means that the cluster also copes better than the PCs with the increase of the number of resources. The average time of transferring data between the tasks dispatcher and workers is shown in Table 3.

Table 3 Average time of transferring data between the tasks dispatcher and workers for a project with 30 tasks [ms] (Remote – workers located on 2 remote computers, Local – workers located on the same machine, Resnum – No. resources; rows for 10, 12, 14 and 16 resources; columns for 30, 35 and 40 tasks in the cluster, the ClusterPCs, the LocalPC and the Threads configurations). [The numeric values of this table could not be recovered from the source.]

Yet, the increase of the transfer time is much faster in the ClusterPCs than in the cluster. Consequently, the data transfer in the ClusterPCs gets slower in the projects with more than 35 tasks and 10 HRs. On average, the data transfer is about 2.2 times slower in the ClusterPCs than within a single multi-core PC. On a single machine, it may be further reduced to less than 0.5 ms by the use of threads instead of processes in the LocalPC (the so-called Threads configuration). Threads are much lighter than processes and share the process resources. Thus, even if only one multi-core machine is available, the scheduling time with the use of 4 workers may be reduced by about 47%. The scheduling time on a single machine with the use of 4 threads is comparable to the scheduling in the ClusterPCs on 2 multi-core PCs with 4 workers on each. Still, if the need for workers is greater, the ClusterPCs is better. Moreover, running more than 5 threads on a 4-core processor is not efficient. Comparison of the time needed to transfer data between the tasks dispatcher and 3 workers, averaged over all attempts, is shown in Figure 5.

Figure 5 Comparison of the time needed to transfer data between the tasks dispatcher and 3 workers, averaged over all attempts [ms]

6.
CONCLUSIONS

In this research, a distributed model was used in order to reduce the computation time of solving the RCPSP when resources are only partially available. An implementation of the model on a multicomputer built from PCs was tested and compared with an implementation of the model on a cluster. The tasks dispatcher and the workers were connected through a local network and used RMI for communication. The tasks dispatcher used multithreading for spreading and gathering data while, at the same time, the workers were calculating different schedule modifications and sending back the results. The workers ran on remote computers as independent processes and hence did not have to be synchronized. Workers were gathered in a pool managed by the tasks dispatcher and were available for direct use. The best efficiency was obtained when the number of running processes equaled the number of computer cores. Hence, the more cores inside a computer, the more workers can run on it and the fewer PCs are needed. Consequently, the more workers, the shorter the computation time, but only when there is enough work for the workers to do. Too few workers cannot handle the rapidly growing number of calculation requests after the first stage of the algorithm. The maximum useful number of workers depends on the number of HRs, because it is related to the number of schedule modifications. Thus, the project scheduling cannot be sped up if there are a lot of resources and not enough workers, and vice versa. The research showed that a multicomputer built from multi-core PCs may be successfully used to reduce the scheduling time. The results obtained are comparable with those of the cluster: in both environments the reduction of time looks similar. However, the cluster copes better with the increase of the number of tasks and the number of resources. In the cluster, the communication cost is lower than in the ClusterPCs for the projects with more than 35 tasks and 10 HRs.
On a single machine, the scheduling time is about 13% faster than through a local network (for fewer than 4 workers), due to the lack of network latency. It can be further reduced by about 47% by the use of threads instead of processes. However, the computer resources start to be overloaded when the tasks dispatcher and more than 3 processes, or more than 5 threads, run on the same 4-core processor. Therefore, the ClusterPCs outperforms the LocalPC when more than 3 workers are used, and outperforms the threaded configuration when more than 7 workers are used. The experimental results showed that the distributed model is well balanced: the computational tasks are uniformly split among the workers, and if the number of workers increases, the load spreads over the available PCs. The distributed algorithm scales well, adjusting to the number of workers. Moreover, if any of the workers crashes, its task is taken over by another worker and the processing continues. Projects of various complexities were tested, and in each of them the scheduling time was significantly reduced by the distributed calculations, even down to 6% of the sequential time. In comparison to sequential computing, the number of used cores (counted in units of 100%) was 10 times higher during the scheduling of a project with 30 tasks and 16 HRs by 36 workers.

LITERATURE
Last time we designed finite state automata for a few more languages. After that, we started discussing regular expressions and languages that they describe. Today we show that every language that can be described by a regular expression can be recognized by some finite state automaton. 23.1 Connection between Regular Expressions and Finite Automata Regular expressions and finite automata define the same class of languages. We now formalize this equivalence of the expressive powers of regular expressions and finite automata. **Definition 23.1.** A language $L$ is regular if and only if it can be defined by a regular expression, i.e., it can be written as $L(R)$ for some regular expression $R$. **Theorem 23.2.** A language $L$ is regular if and only if it is accepted by some finite automaton, i.e., there exists a finite automaton $M$ such that $L = L(M)$. The proof of Theorem 23.2 consists of proving two implications, which we state as lemmas. We only prove the first implication today. **Lemma 23.3.** Every regular language can be decided by some finite automaton. That is, for every regular expression $R$, there is a finite automaton $M$ such that $L(R) = L(M)$. **Lemma 23.4.** For every language $L$ decidable by some finite state automaton, there is a regular expression $R$ such that $L = L(R)$. ### 23.1.1 A Note on Accepting Let’s conclude this section with a different way of describing what it means for a finite state automaton to accept. This will be useful later in this lecture. Consider a graph representation of a finite state automaton $M$. We say a path $e_1, e_2, \ldots, e_n$ is labeled by a string $x = x_1x_2 \ldots x_n$ if edge $e_i$ is labeled by $x_i$ for $i \in \{1, \ldots, n\}$. The machine $M$ accepts $x$ if there is a path from the start state to an accepting state that is labeled by $x$, and rejects otherwise. **Example 23.1:** Consider the machine $M_1$ from Figure 23.2a. 
The only path labeled with the string 010 starts at $\alpha$ and the next vertices on this path are $\alpha, \beta, \alpha$. This path does not lead to an accepting state, so $M_1$ rejects $x = 010$. On the other hand, $M_1$ accepts $x = 001$ because the path labeled by this string starts in state $\alpha$ and the next three states it visits are $\alpha, \alpha, \beta$. The last state on the path is accepting.

### 23.2 Finite Automata from Regular Expressions

We start with the proof of Lemma 23.3. Regular expressions are defined inductively, and we exploit this definition in a proof by structural induction. Given a regular expression $R$, we construct a finite automaton $M$ such that $L(R) = L(M)$. All regular expressions we discuss will be over an alphabet $\Sigma$.

### 23.2.1 Base Cases

Recall that there are three elementary regular expressions, namely $\emptyset$, $\epsilon$, and $a$ for $a \in \Sigma$. Their corresponding languages are $\emptyset$, $\{\epsilon\}$, and $\{a\}$ for $a \in \Sigma$, respectively. The empty regular expression corresponds to the empty language. One automaton that decides this language is the automaton that starts in some non-accepting state and stays in that state for the entirety of its computation. We show it in Figure 23.1a. The regular expression $\epsilon$ corresponds to the language consisting only of the empty string. This is accepted by the automaton that starts in an accepting state and moves to a rejecting state on any input. It stays in that rejecting state until it processes the entire input. We show this automaton in Figure 23.1b. Finally, see the automaton for the language of the regular expression $a$ in Figure 23.1c. It starts in a rejecting state since $\epsilon$ is not in the language. From there, it goes to an accepting state if the input is $a$, and goes to a rejecting state otherwise. For any additional input, the automaton goes to a rejecting state and stays there.
Thus, the only way for the automaton to get to an accepting state is if the first and only symbol in the input is $a$.

Figure 23.1: Automata for the three simple regular expressions.

For the induction step, we need to show that if we have automata $M_1$ and $M_2$ for languages $L(R_1)$ and $L(R_2)$, respectively, we can construct finite state automata $N_1$, $N_2$ and $N_3$ such that $L(R_1 \cup R_2) = L(N_1)$, $L(R_1 R_2) = L(N_2)$, and $L(R_1^*) = L(N_3)$. We do so in the rest of this lecture. We remark that you designed automata for some languages obtained by concatenation on the last homework. Constructing automata for those languages required some insights. By getting a deeper understanding of the structure of a language, we can often construct very small automata that recognize it. For the inductive step of the proof of Lemma 23.3, we present general constructions one can use to construct automata for any union, concatenation, and Kleene closure of languages.

### 23.2.2 Union

Let $R_1$ and $R_2$ be regular expressions over the alphabet $\Sigma$. It is possible to handle the case where the two regular expressions are over different alphabets, but for now let's agree that both are over the same alphabet. Suppose $M_1 = (S_1, \Sigma, \nu_1, s_1, A_1)$ and $M_2 = (S_2, \Sigma, \nu_2, s_2, A_2)$ are such that $L(R_1) = L(M_1)$ and $L(R_2) = L(M_2)$. We combine $M_1$ and $M_2$ into a finite state automaton $N_1 = (S, \Sigma, \nu, s_0, A)$ that recognizes $L(R_1 \cup R_2) = L(R_1) \cup L(R_2)$. The finite automaton $N_1$ should accept a string $x$ if and only if at least one of $M_1$ and $M_2$ accepts $x$.

Let's start with a construction that does not work. To see if $x \in L(R_1 \cup R_2)$, we could run the machine $M_1$ on input $x$, and then run $M_2$ on input $x$. We accept if at least one of the machines accepts. Unfortunately, we cannot model this procedure using a finite state automaton as it requires us to read $x$ twice.
Finite automata cannot “rewind” the input halfway through and start reading it again from the beginning. Instead of running \( M_1 \) and \( M_2 \) in a sequence, we can run them in parallel. For that to work, we need to keep track of the state of both machines at the same time, and update the state of both machines after reading a symbol from the alphabet. To keep track of the states of \( M_1 \) and \( M_2 \) at the same time, we create a state for each pair \((s_1, s_2) \in S_1 \times S_2\). Since \( S_1 \) and \( S_2 \) are finite, \( S = S_1 \times S_2 \) is also finite (more specifically, \(|S| = |S_1| \cdot |S_2|\)). Now let’s design the transition function. Say \( N_1 \) is in state \((t_1, t_2)\) and reads the input symbol \( a \). The first component of the new state should correspond to the state \( M_1 \) goes to on input \( a \) from state \( t_1 \), and the second component of the new state should correspond to the state \( M_2 \) goes to on input \( a \) from state \( t_2 \). Thus, we have \( \nu((t_1, t_2), a) = (\nu_1(t_1, a), \nu_2(t_2, a)) \). We start running \( M_1 \) in state \( s_1 \), and \( M_2 \) in state \( s_2 \), so the start state is \( s_0 = (s_1, s_2) \). Finally, \( N_1 \) should accept if at least one of \( M_1 \) or \( M_2 \) accepts. If \( M_1 \) is in an accepting state \( t_1 \), \( M_2 \) can be in any state. This means that if \( t_1 \in A_1 \), \( (t_1, t_2) \in A \) for any \( t_2 \in S_2 \). Similarly, if the state of \( M_2 \) is \( t_2 \in A_2 \), it doesn’t matter what the state of \( M_1 \) is, so any state of the form \((t_1, t_2)\) with \( t_1 \in S_1 \) and \( t_2 \in A_2 \) should be accepting. In other words, \( A = A_1 \times S_2 \cup S_1 \times A_2 \). The first part of the union takes care of the states where \( M_1 \) accepts, and the second part takes care of the states where \( M_2 \) accepts. 
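The product construction just described can be written out as a short program. This is an illustrative sketch: the machine encoding and helper names are ours, and the second machine here is a simple even-length DFA rather than the $M_2$ from the lecture.

```python
def union_dfa(M1, M2):
    # Product construction for L(M1) ∪ L(M2).  Each machine is a tuple
    # (states, transitions, start, accepting), where transitions maps
    # (state, symbol) -> state; both machines share the same alphabet.
    S1, v1, s1, A1 = M1
    S2, v2, s2, A2 = M2
    alphabet = {sym for (_, sym) in v1}
    states = {(t1, t2) for t1 in S1 for t2 in S2}
    # Run both machines in parallel: each component follows its own rules.
    trans = {((t1, t2), a): (v1[(t1, a)], v2[(t2, a)])
             for (t1, t2) in states for a in alphabet}
    # Accept when at least one component is accepting: A1×S2 ∪ S1×A2.
    accept = {(t1, t2) for (t1, t2) in states if t1 in A1 or t2 in A2}
    return states, trans, (s1, s2), accept

def run_dfa(M, x):
    # Deterministic simulation: follow the unique transition per symbol.
    _, v, s, A = M
    for a in x:
        s = v[(s, a)]
    return s in A

# M1: binary strings with an odd number of ones.
M1 = ({'E', 'O'},
      {('E', '0'): 'E', ('E', '1'): 'O',
       ('O', '0'): 'O', ('O', '1'): 'E'},
      'E', {'O'})
# M2: binary strings of even length (a stand-in second machine).
M2 = ({'ev', 'od'},
      {('ev', '0'): 'od', ('ev', '1'): 'od',
       ('od', '0'): 'ev', ('od', '1'): 'ev'},
      'ev', {'ev'})
N1 = union_dfa(M1, M2)
print(run_dfa(N1, '10'))  # True: odd number of ones
print(run_dfa(N1, '0'))   # False: even number of ones and odd length
```

Note that the state count multiplies, $|S| = |S_1| \cdot |S_2|$, exactly as in the text; unreachable product states can be pruned afterwards, as Example 23.2 illustrates.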
**Example 23.2:** Let $M_1$ and $M_2$ be the machines from Lecture 21 that accept all binary strings with an odd number of ones and all binary strings that start and end with the same symbol, respectively. Let's use the procedure we described to construct the automaton for $L(M_1) \cup L(M_2)$. The result is in Figure 23.2. Note that $|S_1 \times S_2| = 10$, but we only have 9 states in Figure 23.2c. That is because the state $(\beta, F)$ is not reachable from the start state $(\alpha, F)$. Machine $M_1$ can only get to state $\beta$ after reading at least one input symbol, whereas $M_2$ leaves state $F$ right after reading the first symbol and never returns to that state.

### 23.2.3 Nondeterministic Finite Automata

Assuming the same notation as in the previous section, we now describe how to construct a finite state automaton $N_2$ that accepts the concatenation $L(R_1)L(R_2)$, assuming that we have automata $M_1$ and $M_2$ that accept $L(R_1)$ and $L(R_2)$, respectively. We outline an initial attempt that runs the two machines in a sequence. Combine the two machines $M_1$ and $M_2$ into $N_2$ with the set of states $S = S_1 \cup S_2$. Start in the start state of $M_1$, and eventually move to some state of $M_2$. If the empty string does not belong to $L(M_2)$, it is undesirable for $N_2$ to accept while it is still in one of $M_1$'s states, because at that point it has not verified that the input ends with a string in $L(R_2)$. Hence, if $L(M_2)$ doesn't contain the empty string, only the states in $A_2$ are accepting. Otherwise the set of accepting states is $A_1 \cup A_2$. Note that $M_1$ and $M_2$ now run on separate parts of the input, so it is indeed possible to run them in a series.
On the other hand, a different problem arises: When do we stop the computation of $M_1$ and start running $M_2$? We can start $M_2$'s computation after $M_1$ reaches an accepting state. The transition on input $a$ from that accepting state is to the state $M_2$ goes to on input $a$ from its start state $s_2$. When we make this transition, we are indicating that we think the part of the input that belongs to $L(M_1)$ has ended, and that $a$ is the first symbol of the part of the input that belongs to $L(M_2)$. As we will now see, going to $M_2$ right after reaching an accepting state of $M_1$ for the first time may be a mistake.

**Example 23.3:** Let's design a finite automaton for the language $L(M_2)L(M_2)$ using the strategy we outlined in the previous paragraph. Recall from Lecture 22 that $L(M_2)L(M_2) = \Sigma^*$, so the automaton we create should accept every string. We make two copies of $M_2$. Let's call the second copy $M'_2$. Consider running the machine on input 00011. The machine $M_2$ starts in an accepting state. Thus, if we decide to move to a state of $M'_2$ right after reaching an accepting state of $M_2$, we go to the state $B'$ of $M'_2$ right when we read the first symbol of the input. But now $M'_2$ thinks the string starts with a zero, and it will end in the rejecting state $D'$ when it's done processing the input. We see that transitioning to $M'_2$ as soon as possible is not the right move. We should transition to a state of $M'_2$ when we read the first 1 in this input, in which case we end in the state $C'$. We stay in that state for the rest of the computation, and accept. Unfortunately, moving to $M'_2$ after seeing the third zero also doesn't always work. For example, consider the input 000011. The previous example should convince you that deciding when to transition to $M_2$'s states is not a trivial problem.
In fact, for any strategy we choose in the previous example, there is a string that causes incorrect behavior. To remedy this, we allow multiple transitions on the same input. On input $a$ in an accepting state of $M_1$, we allow both a transition to a state in $M_1$ on input $a$ according to $\nu_1$, and we also allow a transition to the state $\nu_2(s_2,a)$. Unfortunately, now the transitions are not defined by a function, so we don't have a finite automaton anymore. The transitions are now defined by a relation, which gives rise to a computational model known as the nondeterministic finite automaton. We show the nondeterministic finite automaton for the language $L(M_2)L(M_2)$ from Example 23.3 in Figure 23.3. Note that since $\epsilon \in L(M_2)$, the states corresponding to the first copy of $M_2$ remain accepting.

Figure 23.3: A nondeterministic finite automaton for the language $L(M_2)L(M_2)$.

**Definition 23.5.** A nondeterministic finite automaton is a 5-tuple $N = (S, \Sigma, \nu, s_0, A)$ where

- $S$ is a finite set of states.
- $\Sigma$ is a finite set of symbols called the alphabet.
- The transition relation $\nu$ is a relation from $S \times \Sigma$ to $S$, and $((s, a), t) \in \nu$ means that $N$ can go from state $s$ to state $t$ if it reads symbol $a$. If there is no tuple $((s, a), t) \in \nu$ for some $s$ and $a$, $N$ rejects immediately without reading the rest of the input.
- The automaton starts in the start state $s_0 \in S$.
- The states in $A \subseteq S$ are the accepting states.

The machine $N$ accepts $x$ if there is a path from $s_0$ to some state $t \in A$ that is labeled by $x$. The machine $N$ rejects $x$ otherwise. With the exception of the transition relation, Definition 23.5 is exactly the same as the definition of a finite state automaton.
Also note that nothing really changes with the graph representation, except now multiple edges leaving a vertex can have the same label. This justifies why we defined acceptance using paths in a graph in Section 23.1.1. There could now be multiple paths from the start state that are labeled with a string $x$. If any one of them leads to an accept state, the machine accepts. Otherwise, the machine rejects. It is also possible that there is no path labeled with $x$, in which case the nondeterministic finite automaton rejects.

**Example 23.4:** Consider the nondeterministic finite automaton in Figure 23.4 that operates on the alphabet $\{0, 1\}$. Observe that there is no transition from state $s_1$ on input 1, so if the machine gets to state $s_1$ and reads a 1, it rejects. On the other hand, on any input $x$ with a 1 in it, there is a path to $s_1$ that is labeled by $x$: stay in $s_0$, and transition to $s_1$ on the last occurrence of 1 in $x$. Thus, the automaton accepts all strings that contain a 1, and rejects all strings without ones.

Figure 23.4: A nondeterministic finite automaton for Example 23.4.

A nondeterministic automaton is no longer a realistic model of computation because it has the capability of arbitrarily choosing which path to take, which is something a computer cannot do. One way of thinking about running a nondeterministic finite automaton is that it's a process which spawns off a copy of itself for each valid transition, and each copy of the process makes a different transition and continues computing on its own. It suffices that one copy of this process accepts. Another way to think about it is in terms of graphs. The existence of a path from a start state labeled $x$ and ending in an accept state implies that $x$ is in the language. Yet another way to think about it is that the machine somehow magically knows which path to follow in each case.
If there is an accepting path, the machine picks a transition that follows that path whenever it has multiple transitions to choose from.

### 23.2.4 Concatenation

Let's now go back to designing a finite state automaton for the language $L(R_1R_2)$ out of $M_1$ and $M_2$. The strategy we described in the previous section gives us a generic way of constructing an automaton for such a language. In fact, the strategy we described works even if we have two nondeterministic finite automata $N_a$ and $N_b$ and we want to construct a nondeterministic finite automaton for the language $L(N_a)L(N_b)$. We present this more general construction. Let $M_1 = (S_1, \Sigma, \nu_1, s_1, A_1)$ and $M_2 = (S_2, \Sigma, \nu_2, s_2, A_2)$ be nondeterministic finite automata. The nondeterministic finite automaton $N_2 = (S, \Sigma, \nu, s_0, A)$ for the language $L(M_1)L(M_2)$ is defined as follows.

- $S = S_1 \cup S_2$
- $\nu = \nu_1 \cup \nu_2 \cup \{((s, a), t) \mid s \in A_1,\ ((s_2, a), t) \in \nu_2\}$
- $s_0 = s_1$
- $A = \begin{cases} A_2 & \text{if } \epsilon \notin L(M_2) \\ A_1 \cup A_2 & \text{otherwise} \end{cases}$

We are not done yet because we want a deterministic finite automaton for $L(R_1R_2)$. The following theorem, which we will prove next time, completes the construction.

**Theorem 23.6.** Let $N$ be a nondeterministic finite automaton. Then there exists a finite state automaton $M$ such that $L(N) = L(M)$.

### 23.2.5 Kleene Closure

Finally, let $R_1$ be a regular expression and $M_1 = (S_1, \Sigma, \nu_1, s_1, A_1)$ a finite state automaton such that $L(M_1) = L(R_1)$. We describe how to use $M_1$ in the construction of an automaton $N_3$ that accepts the language of the regular expression $R_1^*$. We only construct a nondeterministic automaton and then appeal to Theorem 23.6. We use some ideas from the construction of a nondeterministic finite automaton that recognizes the concatenation of two languages.
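Before continuing, the $N_2$ construction for concatenation can be sketched in code, together with the standard way of simulating a nondeterministic automaton by tracking the set of all states reachable on the input read so far. The two toy machines below (for the languages $\{a\}$ and $\{b\}$) are ours, chosen only to keep the example small; the sketch assumes machines without $\epsilon$-transitions and with disjoint state names.

```python
def concat_nfa(M1, M2):
    # NFA for L(M1)L(M2), following the construction above.  Each machine
    # is (transitions, start, accepting), where transitions is a set of
    # ((state, symbol), target) triples.  Without ε-transitions,
    # ε ∈ L(M2) exactly when M2's start state is accepting.
    v1, s1, A1 = M1
    v2, s2, A2 = M2
    # From every accepting state of M1, copy the moves available from
    # M2's start state: the machine guesses the L(M1)-part just ended.
    bridge = {((s, a), t) for s in A1 for ((q, a), t) in v2 if q == s2}
    accepting = A2 | (A1 if s2 in A2 else set())
    return v1 | v2 | bridge, s1, accepting

def nfa_accepts(N, x):
    # Subset simulation: track every state reachable on the input so far.
    v, s0, A = N
    current = {s0}
    for a in x:
        current = {t for ((q, b), t) in v if q in current and b == a}
    return bool(current & A)

Ma = ({(('p', 'a'), 'q')}, 'p', {'q'})   # accepts exactly "a"
Mb = ({(('r', 'b'), 't')}, 'r', {'t'})   # accepts exactly "b"
N2 = concat_nfa(Ma, Mb)
print(nfa_accepts(N2, 'ab'))  # True
print(nfa_accepts(N2, 'a'))   # False
```

The subset simulation is, in effect, the determinization behind Theorem 23.6: a deterministic machine whose states are sets of $N$'s states.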
Unfortunately, we cannot just repeat the construction multiple times because this would require an infinite number of states. Instead, we show that one copy of the automaton $M_1$ that recognizes $L(R_1)$ is sufficient. We construct $N_3$ as a copy of $M_1$ with some additional transitions. When $M_1$ is in an accepting state and receives input $a$, we allow it to transition to any state $t$ that satisfies $((s_1, a), t) \in \nu_1$ (note that with this notation, $M_1$ may itself be nondeterministic and the construction still works). This allows $N_3$ to decide that the last symbol it read from an input $z$ was the first symbol of the next string in $L(R_1)$ that is used as part of $z$. Notice that the empty string belongs to $L(R^*)$ no matter what $R$ is, so if the start state of $M_1$ is rejecting, our construction so far fails to accept the empty string. We can remedy that by adding a start state $s_{\text{init}}$ that is accepting and to which $N_3$ never returns. Then on input $a$, we add a transition from $s_{\text{init}}$ to state $t$ if $((s_1, a), t) \in \nu_1$. This completes the construction of $N_3$.

**Example 23.5:** Since the construction we described applies to nondeterministic finite automata, let's illustrate it on the automaton $N$ from Figure 23.4. First, we disregard the empty string and only add transitions from the accepting state $s_1$ of machine $N$. In particular, we add the transitions $((s_1, 0), s_0)$, $((s_1, 1), s_0)$ and $((s_1, 1), s_1)$ to the transition relation. This is shown in Figure 23.5a. We complete the construction by adding a state $s_{\text{init}}$, which lets the automaton accept the empty string in addition to all other strings in $L(N)^*$. We add the transitions $((s_{\text{init}}, 0), s_0)$, $((s_{\text{init}}, 1), s_0)$ and $((s_{\text{init}}, 1), s_1)$ to the transition relation.
The result is in Figure 23.5b. Here is a formal description of the automaton $N_3 = (S, \Sigma, \nu, s_0, A)$ for $L(R_1^*)$, where the nondeterministic finite automaton $M_1 = (S_1, \Sigma, \nu_1, s_1, A_1)$ satisfies $L(M_1) = L(R_1)$.

Figure 23.5: Turning the automaton $N$ from Figure 23.4 into an automaton that recognizes the language $L(N)^*$.

- $S = S_1 \cup \{s_{\text{init}}\}$
- $\nu = \nu_1 \cup \{((s, a), t) \mid s \in A_1 \land ((s_1, a), t) \in \nu_1\} \cup \{((s_{\text{init}}, a), t) \mid ((s_1, a), t) \in \nu_1\}$
- $s_0 = s_{\text{init}}$
- $A = A_1 \cup \{s_{\text{init}}\}$

Let's argue that $L(R_1^*) \subseteq L(N_3)$. Since $L(R_1^*) = L(R_1)^* = \bigcup_{k=0}^{\infty} L(R_1)^k$, it suffices to show that $L(R_1)^k \subseteq L(N_3)$ for all $k \in \mathbb{N}$. We argue by induction. For the base case $L(R_1)^0 = \{\epsilon\}$, note that $N_3$ starts in an accepting state, so it accepts the empty string. Therefore, $L(R_1)^0 \subseteq L(N_3)$. Now assume that $L(R_1)^k \subseteq L(N_3)$, and consider a string $x \in L(R_1)^{k+1}$. We can write $x$ as the concatenation $x = x_1x_2 \ldots x_kx_{k+1}$ where $x_i \in L(R_1)$ for $i \in \{1, \ldots, k+1\}$. Let $x' = x_1x_2 \ldots x_k$. Then we have $x = x'x_{k+1}$ where $x' \in L(R_1)^k$ and $x_{k+1} \in L(R_1)$. Since $L(R_1)^k \subseteq L(N_3)$ by the induction hypothesis, there is a path labeled by $x'$ that starts in the start state of $N_3$ and ends in an accepting state $t$ of $N_3$. The automaton $N_3$ is in this state when it starts processing $x_{k+1}$. Since the state is accepting, $N_3$ correctly accepts if $x_{k+1} = \epsilon$ and $\epsilon \in L(R_1)$. When $x_{k+1} \neq \epsilon$, there is a path labeled by $x_{k+1}$ that starts in $M_1$'s start state $s_1$ and ends in some accepting state $t'$.
We constructed $N_3$ so that for any input symbol $a$, it can get from any of its accepting states to any state of $M_1$ that is reachable from $s_1$ if the next input symbol is $a$. Furthermore, since $N_3$ contains a copy of $M_1$ in it, it can follow the same path as $M_1$ after processing the first symbol $a$, and thus can reach an accepting state if $M_1$ can. This means that $x \in L(N_3)$. Now the proof that $L(R_1^*) \subseteq L(N_3)$ is complete. Now we argue by induction on the length of a string in $L(N_3)$ that $L(N_3) \subseteq L(R_1)^*$. For the base case, note that $\epsilon \in L(N_3)$ because $N_3$ starts in an accepting state. We also have $\epsilon \in L(R_1)^0$, so $\epsilon \in L(R_1)^*$, and the base case is proved. Now assume that every string $y$ of length at most $n$ that belongs to $L(N_3)$ also belongs to $L(R_1)^*$. In other words, for every such $y$, there is an integer $k \in \mathbb{N}$ such that $y \in L(R_1)^k$. Consider $x \in L(N_3)$ such that $|x| = n + 1$, and look at a path labeled by $x$ that leads from the state $s_{\text{init}}$ to an accepting state of $N_3$. Consider the last time on the path when $N_3$ goes from an accepting state on input symbol $a$ to a state that is reachable from $M_1$'s start state $s_1$ on input symbol $a$. If this last time occurs on the initial symbol, then $N_3$ only uses transitions that are present in $M_1$ after reading the initial symbol, and $M_1$ can get to the same state as $N_3$ on the initial input symbol by construction. Thus, $M_1$ accepts the input, so $x \in L(M_1)$, which means $x \in L(R_1)$, and, therefore, $x \in L(R_1)^*$. Otherwise the last time happens after reading some symbol $x_i$ for $i > 1$, and $N_3$ moves to state $t$ after reading $x_i$. By construction, $N_3$ can go to state $t$ from its start state $s_{\text{init}}$ on input $x_i$.
But then there is a path labeled by the string $x_i x_{i+1} \ldots x_{n+1}$ of length at most $n$ from $s_{\text{init}}$ to an accepting state of $N_3$, which means $x_i x_{i+1} \ldots x_{n+1} \in L(N_3)$, and this string also belongs to $L(R_1)^k$ for some $k$ by the induction hypothesis. Furthermore, the string $x_1 \ldots x_{i-1}$ of length at most $n$ is also a string that labels a path from $s_{\text{init}}$ to an accepting state of $N_3$, which means, again by the induction hypothesis, that $x_1 \ldots x_{i-1} \in L(R_1)^\ell$ for some $\ell$. But then $x \in L(R_1)^{\ell+k}$, and we are done.
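The construction of $N_3$ reads off almost directly from the definition of $\nu$ above. The following Python sketch (function names and the state encoding are mine, not from the lecture) builds $N_3$ from $M_1$ and checks acceptance by tracking the set of states the nondeterministic automaton could be in:

```python
def star_nfa(states, delta, start, accepting):
    """Build N_3 for L(M_1)^* from M_1 = (states, delta, start, accepting).
    delta is a set of ((state, symbol), target) pairs."""
    s_init = "s_init"  # fresh start state; assumed not already in `states`
    new_delta = set(delta)
    for (s, a), t in delta:
        if s == start:
            # copy M_1's start transitions to every accepting state and to s_init
            new_delta |= {((acc, a), t) for acc in accepting}
            new_delta.add(((s_init, a), t))
    return states | {s_init}, new_delta, s_init, accepting | {s_init}

def accepts(states, delta, start, accepting, word):
    """Nondeterministic acceptance by tracking the set of reachable states."""
    current = {start}
    for a in word:
        current = {t for (s, b), t in delta if s in current and b == a}
    return bool(current & accepting)
```

For instance, if $M_1$ accepts exactly the string "ab", the resulting $N_3$ accepts $\epsilon$, "ab", "abab", and so on, but rejects "a".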
Comparative Study Parallel Join Algorithms for MapReduce environment A. Pigul m05pay@math.spbu.ru Saint Petersburg State University Abstract. The following techniques are used to analyze massive amounts of data: the MapReduce paradigm, parallel DBMSs, column-wise stores, and various combinations of these approaches. We focus on the MapReduce environment. Unfortunately, join algorithms are not directly supported in MapReduce. The aim of this work is to generalize and compare existing equi-join algorithms together with some optimization techniques. Key Words: parallel join algorithms, MapReduce, optimization. 1. Introduction Data-intensive applications include large-scale data warehouse systems, cloud computing, and data-intensive analysis. These applications have their own specific computational workloads. For example, analytic systems produce relatively rare updates but heavy select operations, with millions of records to be processed, often with aggregations. Applications for large-scale data analysis use such techniques as parallel DBMSs, the MapReduce (MR) paradigm, and columnar storage. Applications of this type process multiple data sets, which implies the need to perform several join operations. The join is known to be one of the most expensive operations in terms of both I/O and CPU costs. Unfortunately, join algorithms are not directly supported in MapReduce. There are several approaches to this problem: using a high-level language such as PigLatin or HiveQL for SQL-like queries, or implementing algorithms from research papers. The aim of this work is to generalize and compare existing equi-join algorithms with some optimization techniques. This paper is organized as follows: Section 2 describes the state of the art. Join algorithms and some optimization techniques are introduced in Section 3. The performance evaluation is described in Section 4. Finally, future directions and a discussion of the experiments are given. 2. Related Work 2.1.
Architectural Approaches Column storage is an architectural approach in which data are stored by column, so that the values of one field are stored physically together in a compact storage area. The column storage strategy improves performance by reducing the amount of unnecessary data read from disk, excluding the columns that are not needed. Additional gains may be obtained using data compression. Column-wise storage outperforms row-based storage for workloads typical of analytical applications, which are characterized by heavy selection operations over millions of records, often with aggregation, and by infrequent update operations. For this class of workloads, I/O is the major factor limiting performance. A comparison of the column-wise and row-wise storage approaches is presented in [1]. Another architectural approach is the software framework MapReduce. The MapReduce paradigm was introduced in [11] to process massive amounts of unstructured data. Originally, this approach was contrasted with parallel DBMSs. A deep analysis of the advantages and disadvantages of these two architectures is presented in [25, 10]. Later, hybrid systems appeared [9, 2]. There are three ways to combine the MapReduce and parallel DBMS approaches. - MapReduce inside a parallel DBMS. The main intention is to move computation closer to the data. This architecture is exemplified by the hybrid database Greenplum with its MAD approach [9]. - DBMS inside MapReduce. The basic idea is to connect multiple single-node database systems, using MapReduce as the task coordinator and network communication layer. An example is the hybrid database HadoopDB [2]. - MapReduce aside of the parallel DBMS. MapReduce is used to implement an ETL process whose output is stored in the parallel DBMS. This approach is discussed in [28] for Vertica, which also supports column-wise storage. Another group of hybrid systems combines MapReduce with column-wise storage; both techniques are effective in data-intensive applications.
Hybrid systems based on these two techniques may be found in [20, 13]. 2.2. Algorithms for the Join Operation A detailed comparison of relational join algorithms is presented in [26]. In our paper, the consideration is restricted to a comparison of joins in the context of the MapReduce paradigm. Papers which discuss equi-join algorithms can be divided into two categories: those describing join algorithms and those describing multi-join execution plans. The former category deals with the design and analysis of join algorithms for two data sets. A comparative analysis of two-way join techniques is presented in [6, 4, 21]. Cost models for two-way join algorithms in terms of I/O cost are presented in [7, 17]. The basic idea of the multi-way join is to find strategies to combine the natural join of several relations. Different join algorithms from relational algebra are presented in [30]; the authors introduce an extension of MapReduce to facilitate implementing relational operations. Several optimizations for multi-way joins are described in [3, 18], whose authors introduced a one-to-many shuffling strategy. Multi-way join optimization for column-wise stores is considered in [20, 32]. Theta-joins and set-similarity joins using MapReduce are addressed in [23] and [27], respectively. 2.3. Optimization techniques and cost models In contrast to SQL queries in a parallel database, a MapReduce program contains user-defined map and reduce functions.
The map and reduce functions can be considered as black boxes, when nothing is known about them; they can be written in SQL-like languages such as HiveQL, PigLatin, or MRQL; or SQL operations can be extracted from the functions on a semantic basis. Automatically finding good configuration settings for an arbitrary program is offered in [16]. A theoretical design of cost models for an arbitrary MR program, for each phase separately, is presented in [15]. If the MR program is close to SQL semantics, a more accurate cost model can be constructed, and some optimization techniques can be adapted from relational databases. HadoopToSQL [22] takes advantage of two different data storages, an SQL database and the text format in MapReduce storage, and uses an index at the right time by transforming the MR program to SQL. The Manimal system [17] uses static analysis to detect and exploit selection, projection, and data compression in MR programs and, if needed, to employ a B+ tree index. A new SQL-like query language and algebra is presented in [12], but it needs a cost model based on statistics. A detailed construction of a model estimating the I/O cost of each phase separately is given in [24]. Simple theoretical considerations for selecting a particular join algorithm are presented in [21]. Another approach [7] to selecting a join algorithm is to measure the correlation between the input size and the join algorithm's execution time with fixed cluster configuration settings. 3. Join algorithms and optimization techniques In this section we consider various techniques for two-way joins in the MapReduce framework. Join algorithms can be divided into two groups: Reduce-side joins and Map-side joins. The pseudocode is presented in the listings, where R is the right dataset, L the left dataset, V a line from the file, and Key the join key parsed from a tuple (in this context, the tuple is V). 3.1.
Reduce-Side join The Reduce-side join is an algorithm which performs data pre-processing in the Map phase, while the join itself is done during the Reduce phase. This type of join is the most general, without any restrictions on the data. It is also the most time-consuming, because it contains an additional phase and transmits data over the network from one phase to another. In addition, the algorithm has to pass information about the source of the data through the network. The main objective of the improvements is to reduce the data transmission over the network from the Map task to the Reduce task by filtering the original data with semi-joins. Another disadvantage of this class of algorithms is its sensitivity to data skew, which can be addressed by replacing the default hash partitioner with a range partitioner. There are three algorithms in this group: - General reducer-side join, - Optimized reducer-side join, - the Hybrid Hadoop join. The General reducer-side join is the simplest one. The same algorithm is called Standard Repartition Join in [6]. The abbreviation is GRSJ, and the pseudocode is presented in Listing 1. This algorithm has both Map and Reduce phases. In the Map phase, data are read from two sources and tags are attached to the values to identify the source of each key/value pair. As the key is not affected by this tagging, the standard hash partitioner can be used. In the Reduce phase, data with the same key and different tags are joined with a nested-loop algorithm. The problems of this approach are that the reducer must have sufficient memory for all records with the same key, and that the algorithm is sensitive to data skew.

```plaintext
Map (K: null, V from R or L)
  Tag = bit from name of R or L;
  emit (Key, pair(V, Tag));
Reduce (K': join key, LV: list of V with key K')
  create buffers Br and Bl for R and L;
  for t in LV do
    add t.V to Br or Bl by t.Tag;
  for r in Br do
    for l in Bl do
      emit (null, tuple(r.V, l.V));
```

Listing 1: GRSJ.
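Outside Hadoop, the GRSJ dataflow of Listing 1 can be simulated in a few lines. The sketch below (plain Python; names such as `general_reducer_side_join` are mine, not from the paper) tags each record with its source, groups records by key as the shuffle would, and performs the per-key nested-loop join of the Reduce phase:

```python
from collections import defaultdict

def general_reducer_side_join(R, L):
    """Simulate GRSJ; R and L are lists of (key, value) pairs."""
    # Map phase: tag each record with its source; the join key stays the key.
    shuffled = defaultdict(list)          # stands in for the shuffle/grouping
    for key, value in R:
        shuffled[key].append(("R", value))
    for key, value in L:
        shuffled[key].append(("L", value))
    # Reduce phase: per key, split the tagged values into the two buffers
    # and nested-loop join them.
    joined = []
    for tagged in shuffled.values():
        br = [v for tag, v in tagged if tag == "R"]
        bl = [v for tag, v in tagged if tag == "L"]
        joined.extend((r, l) for r in br for l in bl)
    return joined
```

The per-key buffers `br` and `bl` make the memory problem noted above visible: all values sharing one key are materialized at once.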
The Optimized reducer-side join enhances the previous algorithm by overriding sorting and grouping by the key, as well as the tagging of the data source. It is also known as Improved Repartition Join in [6] and Default join in [14]. The abbreviation is ORSJ; the pseudocode is shown in Listing 2. In this algorithm, all the values with the first tag are followed by the values with the second one. In contrast with the General reducer-side join, the tag is attached to both the key and the value. Because the tag is attached to the key, the partitioner must be overridden so that tuples are distributed over the nodes by the key only. In this case only one of the input sets needs to be buffered. The Optimized reducer-side join inherits the major disadvantages of the General reducer-side join, namely transferring additional information about the source through the network and sensitivity to data skew. The Hybrid join [4] combines the Map-side and Reduce-side joins. The abbreviation is HYB, and Listing 3 describes the pseudocode. In the Map phase only one set is processed; the second set is partitioned in advance. The pre-partitioned set is pulled block by block from the distributed file system in the Reduce phase, where it is joined with the data set that came from the Map phase. The similarity with the Map-side join is the restriction that one of the sets has to be split in advance with the same partitioner that will split the second set. Unlike the Map-side join, only one set needs to be split in advance. The similarity with the Reduce-side join is that the algorithm requires two phases, one for pre-processing of the data and one for the join itself. In contrast with the Reduce-side join, no additional information about the source of the data is needed, as the two inputs reach the Reducer separately. 3.2. Map-Side join The Map-side join is an algorithm without a Reduce phase. This kind of join can be divided into two groups.
The first of them is the partition join, where the data are previously partitioned into the same number of parts with the same partitioner; the corresponding parts are joined during the Map phase. This map-side join is sensitive to data skew. The second is the in-memory join, where the smaller dataset is sent whole to all mappers and the bigger dataset is partitioned over the mappers. The problem with this type of join occurs when the smaller of the sets cannot fit in memory. There are three methods to avoid this problem: - JDBM-based map join, - Multi-phase map join, - Reversed map join. The Map-side partition join assumes that the two data sets are pre-partitioned into the same number of splits by the same partitioner. It is also known as the default map join. The abbreviation is MSPJ, and Listing 4 describes the pseudocode. In the Map phase one of the sets is read and loaded into a hash table, and the two sets are then joined via the hash table. This algorithm buffers all records with the same key in memory, so with skewed data it may fail due to lack of memory.
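The in-memory variant described above boils down to a hash join against the broadcast set. A minimal sketch, assuming the smaller input fits in memory (plain Python; the function name is mine):

```python
def in_memory_join(small, big):
    """Hash join with the smaller input held entirely in memory.
    Both inputs are iterables of (key, value) pairs; yields joined pairs."""
    table = {}
    for key, value in small:      # the whole smaller set must fit in memory
        table.setdefault(key, []).append(value)
    for key, value in big:        # the bigger set is streamed tuple by tuple
        for match in table.get(key, []):
            yield (match, value)
```

Because every mapper scans its split of the bigger set sequentially and only probes the hash table, this scheme is insensitive to skew in the streamed set, which matches the behavior reported for the in-memory joins in the experiments below.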
**Listing 2: ORSJ.**

```java
Map (K: null, V from R or L)
  Tag = bit from name of R or L;
  emit (pair(Key, Tag), pair(V, Tag));
Partitioner (K: key, V: value, P: the number of reducers)
  return hash_f(K.Key) mod P;
Reduce (K': join key, LV: list of V' with key K')
  create buffer B for R;
  for t in LV with t.Tag corresponding to R do
    add t.V to B;
  for l in LV with l.Tag corresponding to L do
    for r in B do
      emit (null, tuple(r.V, l.V));
```

**Listing 3: HYB.**

```java
Job 1: partition the smaller file S
  Map (K: null, V from S)
    emit (Key, V);
  Reduce (K': join key, LV: list of V' with key K')
    for t in LV do emit (null, t);
Job 2: join two datasets
  Map (K: null, V from B)
    emit (Key, V);
  Reduce (K': join key, LV: list of V' with key K')
    init() // for the Reduce phase
      read needed partition of output file from Job 1;
      add it to hashMap(Key, list(V)) H;
    for r in LV do
      for l in H.get(K') do
        emit (null, tuple(r, l));
```

**Listing 4: MSPJ.**

```java
// S and B are pre-partitioned by the same partitioner
init() // for the Map phase
  read the corresponding partition of S;
  add it to hashMap(Key, list(V)) H;
Map (K: null, V from B)
  if (K in H) then
    for l in H.get(K) do
      emit (null, tuple(V, l));
```

The Map-side partition merge join is an improvement of the previous join. The abbreviation is MSPMJ, and the pseudocode is presented in Listing 5. If the data sets, in addition to being partitioned, are sorted in the same order, we can apply a merge join. The advantage of this approach is that the second set is read on demand rather than completely, so memory overflow can be avoided. As in the previous cases, semi-join filtering and a range partitioner can be used for optimization. The In-Memory join does not require distributing the original data in advance, unlike the versions of map joins discussed above. The same algorithm is called Map-side replication join in [7], Broadcast Join in [6], Memory-backed join in [4], and Fragment-Replicate join in [14]. The abbreviation is IMMJ. Nevertheless, this algorithm has a strong restriction on the size of one of the sets: it must fit completely in memory. The advantage of this approach is its resistance to data skew, because each node sequentially reads the same number of tuples. There are two options for transferring the smaller of the sets: - using a distributed cache, - reading from the distributed file system.

**Listing 5: MSPMJ.**

```java
Job 1: partition S dataset as in HYB
Job 2: partition B dataset as in HYB
Job 3: join two datasets
init() // for the Map phase
  find needed partition SP of output file from Job 1;
  read first lines with the same key K2 from SP and add them to buffer Bu;
Map (K: null, V from B)
  while (K > K2) do
    read T from SP with key K2;
  while (K == K2) do
    add T to Bu;
    read T from SP with key K2;
  if (K == K2) then
    for r in Bu do
      emit (null, tuple(r, V));
```

**Listing 6: IMMJ.**

```java
init() // for the Map phase
  read S from HDFS;
  add it to hashMap(Key, list(V)) H;
Map (K: null, V from B)
  if (K in H) then
    for T in H.get(K) do
      emit (null, tuple(V, T));
```

**Listing 7: REV.**

```java
Map (K: null, V from B)
  add V to hashMap(Key, list(V)) H;
close() // after the Map phase
  read S from HDFS line by line;
  for (K, T) in S do
    if (K in H) then
      for l in H.get(K) do
        emit (null, tuple(l, T));
```

**Listing 8: JDBM.** The same as IMMJ, but H is implemented by an HTree instead of a hashMap.

**Listing 9: Multi-phase map join.**

```java
for each part P of S that fits into memory do
  IMMJ(P, B);
```

The next three algorithms optimize the In-Memory join for the case when both sets are large and neither fits into memory. The JDBM-based map join is presented in [21]; the JDBM library automatically swaps the hash table from memory to disk. The Multi-phase map join [21] partitions the smaller set into parts that fit into memory and runs an In-Memory join for each part. The problem with this approach is its poor performance: if the size of the set to be put into memory is doubled, the execution time of the join is also doubled. Note that the set which is not loaded into memory is read from disk many times. The idea of the Reversed map join [21] is that the bigger set, which is partitioned during the Map phase, is loaded into the hash table; the second dataset is then read from a file line by line and joined using the hash table. It is also known as Broadcast Join in [6]. The abbreviation is REV. 3.3. Semi-Join Sometimes a large portion of a data set does not take part in the join. Deleting tuples that will not be used in the join significantly reduces the amount of data transferred over the network and the size of the dataset for the join. This pre-processing can be carried out using semi-joins, by selection or by a bitwise filter. However, these filtering techniques introduce some cost (an additional MR job), so a semi-join can improve the performance of the system only if the join key has low selectivity. There are three ways to implement the semi-join operation: - a semi-join using a Bloom filter, - a semi-join using selection, - an adaptive semi-join.
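As a concrete reference point for the first variant, a Bloom-filter semi-join can be sketched as follows (plain Python; the filter parameters `m` and `k` and all names are illustrative, not from the paper). The first stage corresponds to the MR job that builds the filter from one set's keys; the second to the map-only job that filters the other set:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed by k hash functions."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, key):
        # Derive k positions by salting a cryptographic hash (illustrative choice).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

def bloom_semi_join(R_keys, L):
    """Stage 1: build the filter from R's join keys.
    Stage 2: keep only the L tuples whose key may occur in R.
    False positives may let extra tuples through; none are wrongly dropped."""
    bf = BloomFilter(m=8192, k=4)
    for key in R_keys:
        bf.add(key)
    return [(k, v) for k, v in L if bf.might_contain(k)]
```

The no-false-negatives property is what makes the filter safe as a pre-join step: the subsequent join still sees every matching tuple.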
A Bloom filter is a bit array that represents membership of elements in a set. False positive answers are possible, but there are no false negatives when solving the containment problem. The accuracy of the containment answer depends on the size of the bitmap and on the number of elements in the set; these parameters are set by the user. It is known that for a bitmap of fixed size $m$ and a data set of $n$ tuples, the optimal number of hash functions is $k = 0.6931 \times m/n$. In the context of MapReduce, this semi-join is performed in two jobs. The first job consists of a Map phase, in which keys from one set are selected and added to the Bloom filter, and a Reduce phase, which combines the several Bloom filters from the Map phase into one. The second job consists only of a Map phase, which filters the second data set with the Bloom filter constructed in the previous job. The accuracy of this approach can be improved by increasing the size of the bitmap; however, a larger bitmap consumes more memory. The advantage of this method is its compactness. The performance of the Bloom-filter semi-join highly depends on the balance between the size of the filter, which increases the time needed to reconstruct it in the second job, and the number of false positive answers to the containment problem. A large data set can therefore seriously degrade the performance of the join. The semi-join with selection extracts the unique keys and constructs a hash table; the second set is filtered by the hash table constructed in the previous step. In the context of MapReduce, this semi-join is performed in two jobs. Unique keys are selected during the Map phase of the first job and are then combined into one file. The second job consists of only a Map phase, which filters the second set. The semi-join using selection has some limitations.
The in-memory hash table, based on the records of unique keys, can be very large, depending on the key size and the number of distinct keys. The Adaptive semi-join is performed in one job, and filters the original data on the fly during the join. Similarly to the Reduce-side join, at the Map phase the keys from the two data sets are read, with the values set to tags which identify the source of the keys. At the Reduce phase, keys with different tags are selected. The disadvantage of this approach is that additional information about the source of the data is transmitted over the network.

Listing 12: Adaptive semi-join.

```java
Map (K: null, V from R or L)
  Tag = bit from name of R or L;
  emit (Key, Tag);
Reduce (K': join key, LT: list of tags with key K')
  if (LT contains both tags) then
    emit (null, K');
```

3.4. Range Partitioners All algorithms, except the In-Memory join and its optimizations, are sensitive to data skew. This section describes two techniques for replacing the default hash partitioner. The Simple Range-based Partitioner [4] (similar to the Skew join in [14]) applies a range vector of dimension \( n \), constructed from the join keys before starting the MR job. Using this vector, the join keys are split into \( n \) parts, where \( n \) is the number of Reduce tasks. Ideally the partitioner vector is constructed from the whole original set of keys; in practice, a certain number of keys is chosen randomly from the data set.
It is known that the optimal number of keys for the vector construction is equal to the square root of the total number of tuples. With a heavy data skew towards a single key value, some elements of the vector may be identical. If a key belongs to multiple nodes, a node is selected randomly for the data from which the hash table is built; otherwise the key is sent to all nodes (to save memory, as the hash table is kept in memory). The Virtual Processor Partitioner [4] is an improvement of the previous algorithm based on increasing the number of partitions: the number of parts is specified as a multiple of the number of tasks. The approach tends to load the nodes with the same keys more uniformly than the previous version, since equal keys are scattered over more nodes.

```java
// before the MR job starts
// optimal max = sqrt(|R|+|L|)
getSamples (Red: the number of reducers, max: the max number of samples)
  C = max/Splits.length;
  create buffer B;
  for s in Splits of R and L do
    get C keys from s;
    add them to B;
  sort B;
  for j < (Red*P)
    T = B.length/(Red*P)*(j+1);
    write B[T] into file;
Map (K: null, V from R or L)
  Tag = bit from name of R or L;
  Ind = indexes of Key in the range vector;
  if (Ind.length > 1)
    if (V in L)
      node = random(Ind);
      emit (pair(Key, node), pair(V, Tag));
    else
      for i in Ind do
        emit (pair(Key, i), pair(V, Tag));
  else
    emit (pair(Key, Ind), pair(V, Tag));
Partitioner (K: key, V: value, P: the number of reducers)
  return K.Ind;
Reducer (K': join key, LV: list of V' with key K')
  The same as GRSJ
```

Listing 13: The range partitioners. 3.6. Comparative analysis of algorithms The features of the join algorithms are presented in Table 1. The approaches with pre-processing are good when the data is prepared in advance, for example when it comes from another MapReduce job. Algorithms with one phase and without tagging are preferable, because no additional data transfers through the network are needed. Approaches that are sensitive to data skew may be improved by the range partitioner optimizations.
In the case of low key selectivity, semi-join algorithms can improve performance and reduce the possibility of memory overflow. <table> <thead> <tr> <th>Pre-processing</th> <th>The number of phases</th> <th>Tags</th> <th>Sensitive to data skew</th> <th>Need distr. cache</th> <th>Memory overflow</th> <th>Join algorithm</th> </tr> </thead> <tbody> <tr> <td>GRSJ</td> <td>-</td> <td>2</td> <td>To value</td> <td>yes</td> <td>-</td> <td>Nested loop</td> </tr> <tr> <td>ORSJ</td> <td>-</td> <td>2</td> <td>To key and value</td> <td>yes</td> <td>-</td> <td>Nested loop</td> </tr> <tr> <td>HYB</td> <td>1 data</td> <td>2</td> <td>-</td> <td>yes</td> <td>-</td> <td>Hash</td> </tr> <tr> <td>MSPJ</td> <td>2 data</td> <td>1</td> <td>-</td> <td>yes</td> <td>-</td> <td>Hash</td> </tr> <tr> <td>MSPMJ</td> <td>2 data + sort</td> <td>1</td> <td>-</td> <td>yes</td> <td>-</td> <td>Sort-merge</td> </tr> <tr> <td>IMMJ</td> <td>-</td> <td>1</td> <td>-</td> <td>yes</td> <td>-</td> <td>Hash</td> </tr> <tr> <td>MUL</td> <td>1 data</td> <td>1*part</td> <td>-</td> <td>yes</td> <td>-</td> <td>IMMJ</td> </tr> </tbody> </table> Table 1: Comparative analysis of algorithms. The multi-phase and JDBM map join algorithms are excluded from our experiments because of their poor performance. 4. Experiments 4.1. Dataset The data are sets of tuples whose attributes are separated by commas. A tuple is split into a key/value pair, where the value is the remaining attributes. The synthetic data were generated as in [4]. Join keys are distributed randomly, except in the experiment with data skew. 4.2. Cluster configuration The cluster consists of three virtual machines, one of which is master and slave at the same time; the remaining two are slaves. The host configuration is 1 processor and 512 MB of memory per node, with a 5 GB disk. Hadoop 0.20.203.0 runs on Ubuntu 10.10. 4.3. The General Case The basic idea of this experiment is to compare the execution times of the different phases of the various algorithms.
Some parameters are fixed: the number of Map and Reduce tasks is 3, and the input sizes are \(10^5 \times 10^5\) and \(10^6 \times 10^5\) tuples. For a small amount of data, the Map phase, in which all tuples are tagged, and the Shuffle phase, in which data are transferred from one phase to another, are more costly in the Reduce-side joins. It should be noted that GRSJ is better than ORSJ on small data, but they perform the same on big data, because in the first case no time is spent on combining tuples. Possibly, on larger data ORSJ would outperform GRSJ, when the usefulness of grouping by key becomes more significant. For the algorithms with pre-processing, more time is spent on partitioning the data. The in-memory algorithms (IMMJ and REV) behave similarly on small data. Two algorithms are not shown in the graph because of their poor times: the JDBM-based map join and the Multi-phase map join. On large data, the IMMJ algorithm could not be executed because of memory overflow. 4.4. Semi-Join The main idea of this experiment is to compare the different semi-join algorithms. These parameters are fixed: the number of Map and Reduce tasks is 3, the bitmap size of the Bloom filter is 25*10^5, the number of hash functions in the Bloom filter is 173, and the built-in Jenkins hash algorithm is used in the Bloom filter. The adaptive semi-join (ASGRSJ) did not finish because of memory overflow. The abbreviation for the Bloom-filter semi-join for GRSJ is BGRSJ, and for the semi-join with selection for GRSJ it is SGRSJ. 4.5. Speculative execution Speculative execution reduces the negative effects of non-uniform performance of physical nodes. In this experiment the two join algorithms GRSJ and IMMJ were chosen, because they have different numbers of phases and one of them is sensitive to data skew. Two datasets are considered: normal data consisting of 10^5*10^5 tuples, and skewed data containing, for one dataset, the same key in 5*10^4 tuples and, for the second, the same keys in 10 tuples.
In the case of IMMJ, which is not sensitive to data skew, performance with speculative execution is similar to performance without it. In the case of the GRSJ algorithm on uniform data, the approach without speculative execution is better. But GRSJ on skewed data with speculative execution outperforms the approach without it by a factor of four.

4.6. Distributed cache

It was shown in [21] that using the distributed cache is not always a good strategy; the authors suggested that a high-speed network can be the reason. This experiment was carried out for the Reversed Map-Side join, for which the distributed cache can be important. Replication was varied over 1, 2, and 3, and the data size was fixed at $10^6 \times 10^6$ tuples. When the data is small, the difference is not always visible. On large data, algorithms using the distributed cache outperform the approach of reading from the distributed file system.

Figure 5: Performance of Reversed Map-Side join with and without using the distributed cache.

4.7. Skew data

Many of the presented algorithms are known to be sensitive to data skew. This experiment covers the Reduce-side join with a Simple Range-based Partitioner for GRSJ (GRSJRange) and with a Virtual Processor Partitioner for GRSJ (GRSJVirtual), and, for comparison, the skew-resistant in-memory joins IMMJ and REV. Fixed parameters are used: the size of each of the two datasets is $2 \times 10^6$ tuples; one dataset has the same key in $5 \times 10^5$ tuples, and the other has the same keys in 10 or 1 tuples. In the case of IMMJ there was a memory overflow.

Figure 6: Processing the data skew.

Although these experiments do not completely cover the set of tunable Hadoop parameters, they show the advantages and disadvantages of the proposed algorithms. The main problems of these algorithms are the time spent on pre-processing, data transfer, data skew, and memory overflow.
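The idea behind the range-based partitioning used against skew can be sketched as follows: split points (obtained in practice from sampled key statistics, a step not shown here) divide the key space into contiguous ranges, one per reducer. The class name and interface below are illustrative, not the paper's code.

```python
import bisect

class RangePartitioner:
    """Assign a key to a reducer by locating it among sorted split points,
    so each reducer receives a contiguous range of the key space."""

    def __init__(self, split_points):
        # len(split_points) == number_of_reducers - 1
        self.split_points = sorted(split_points)

    def partition(self, key):
        # bisect_left sends a key equal to a split point to the lower bucket.
        return bisect.bisect_right == None or bisect.bisect_left(self.split_points, key)

p = RangePartitioner([100, 200])  # 3 reducers
# integer keys up to 100 -> reducer 0, 101..200 -> reducer 1, above 200 -> reducer 2
```

Hash partitioning sends every occurrence of a hot key to one reducer; a range partitioner built from sampled statistics can instead place split points so that heavy ranges are balanced across reducers.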
Each of the optimization techniques introduces additional cost into the implementation of the join, so the algorithm should be chosen carefully based on the tunable settings and the specific data. Also important are the network bandwidth, whether or not the distributed cache is used, and the hardware specification of the nodes, which matters when speculative execution is on, since speculative execution reduces the negative effects of non-uniform performance of physical nodes. Based on collected statistics, such as the data sizes and the number of keys taking part in the join, which can be gathered together with the construction of a range partitioner, the query planner can choose an efficient variant of the join. For example, what-if analyses and cost-based optimization were proposed in [5].

5. Future work

The algorithms discussed in this paper join only two datasets. It would be interesting to extend them from a binary operation to multi-way joins. Among the proposed algorithms there is no effective universal solution, so it is necessary to evaluate the proposed cost models for the join algorithms. For this, a real cluster is needed, with more than three nodes and enough power to process bigger data, because execution times on virtual machines may differ from those on a real cluster in reading/writing, transferring data over the network, and so on. The idea of processing data skew in MapReduce applications from [19] can also be applied to the join algorithms. Another direction for future work is to extend the algorithms to support theta-joins and outer joins. A further interesting area is to develop, implement, and evaluate algorithms or extended algebraic operations suitable for complex similarity queries in an open distributed heterogeneous environment. The reasons to evaluate complex structured queries are: the need to combine search criteria for different types of information; query refinement, e.g.
based on a user profile or feedback; and the fact that advanced users may need query structuring. The execution model and algebraic operations to be implemented are outlined in [31]. The main goal is to solve the problems presented in [8]. In addition, one of the issues is an efficient physical representation of the data. Binary formats are known to outperform text formats both in reading speed and in partitioning key/value pairs, as well as in the transmission of compressed data over the network. Along with binary data formats, column storage has already been proposed for the MapReduce paradigm. It is interesting to find the best representation for specific data.

6. Conclusion

In this work we described the state of the art in the area of massively parallel processing and presented a comparative study of these algorithms with optimizations such as semi-joins and range partitioners. Directions for future work were also discussed.

References