A Conceptual Model for Semantically-based E-Government Portals

Alessio Gugliotta\textsuperscript{1,2}, Liliana Cabral\textsuperscript{2}, John Domingue\textsuperscript{2}, Vito Roberto\textsuperscript{1}
\textsuperscript{1}University of Udine, Department of Computer Science, via delle scienze 206, 33100 Udine, Italy
\textsuperscript{2}The Open University, Knowledge Media Institute, Walton Hall, Milton Keynes MK7 6AA, UK
gugliott@dimi.uniud.it, l.s.cabral@open.ac.uk, j.b.domingue@open.ac.uk, roberto@dimi.uniud.it

Abstract: Issues of semantic interoperability and service integration for e-government portals are the domain of interest of the present paper. We propose a Conceptual Model for One-Stop e-Government Portals based on Semantic Web Service technology. We describe our research into building the three basic ontologies and their integration with standard ontologies. The result is a project-independent, reusable model. At the same time, we outline a simple methodology for applying the proposed conceptual model to a specific scenario.

Keywords: E-Government, One-Stop Portal, Knowledge Management, Ontology, Semantic Web Services, Life Events.

1. Introduction and related work

The current trend in e-Government applications calls for integrated services that are effective, simple to use, shaped around and responding to the needs of the citizen, and not merely arranged for the provider's convenience. In this way, users need have no knowledge of -- nor direct interaction with -- the government entities involved. Services therefore need to be interoperable, so that data and information can be exchanged and processed seamlessly across government.

Many projects are under way, and various approaches have been proposed, for the design of architectures to deliver e-government services. To cite a few examples, eGOV (eGov), FASME (FASME), EU-PUBLI.com (EuPubli) and eGovSM (Mugellini 2005) propose solutions supporting service-based systems. However, none of them adopts Semantic Web technologies for the representation of concepts and actions. In recent years, organizational knowledge models have been proposed (Gualtieri 2005), (Bonifaccio 2005), (Maicher 2005), aimed at building formal models of the processes, resources and goals of enterprises. The models consist of ontologies based on a vocabulary, along with specifications of the semantics of the terms in the vocabulary. In the e-Government scenario, efforts involving semantic technologies are under way. The e-POWER project (Van Engers 2002) adopts solutions to model inferences, such as consistency checking and enforcement in legislation. The SmartGov project (SmartGov) developed a knowledge-based platform for assisting public sector employees in generating on-line transaction services. ICTE-PAN (ICTE) proposes a methodology for modelling Public Administration (PA) operations, and tools to transform models into design specifications for public portals. Such projects demonstrated the feasibility of semantic technologies, although none of them explored the opportunities offered by a Semantic Web Service (SWS) infrastructure for the interoperability and integration of services. The ONTOGOV project (OntoGov) develops a platform that facilitates the consistent composition, reconfiguration and evolution of services. It relies upon SWS technology, although its focus is on the service life-cycle rather than on interoperability and integration issues.
In our project, we extend the concept of a One-Stop Government Portal (Wimmer 2001) and propose the application of Knowledge Management techniques -- in particular, ontologies -- to address interoperability and integration issues and to respond to user needs with the best available services. In this paper, we present the approach we adopted to design the ontologies and combine them into a sound conceptual model, which in turn serves as the basis of the semantically-enhanced middleware of a public portal. Our work is grounded on a technological paradigm capable of fitting the distributed organization of knowledge, with a focus on the supply of services. Both Public Administrations (PA’s) and citizens will benefit from a standard conceptual model for describing public services and life events: PA’s will have a shared description structure, easing the production and management of information and services and fostering interoperability among agencies. On their side, citizens will navigate more easily through different services and administrations.

The paper is structured as follows. Section 2 gives a short presentation of the main topics of our research. Section 3 introduces the tools we have adopted. Section 4 describes the methodology for constructing the conceptual model; as a driver for the section, we give a short overview of a case study adopting our conceptual model. Section 5 contains our conclusions and future work.

2. The main topics

2.1 Semantically-based e-Government portals

A promising solution for interoperability and integration issues is offered by the so-called One-Stop Government Portals (Wimmer 2001). They integrate distributed components such as: Content Management Systems (CMS) for documents and information; workflow management techniques; cooperation solutions for the PA’s involved; and content personalization subsystems for the end-users. Knowledge Management (KM) techniques (Beric 2003), (Apostolou 2005) play a key role in integrating the heterogeneous components by means of a semantically-enhanced middleware. In our approach, the latter operates between the portal and the web services interfacing the functionalities of the back-offices. In particular, ontologies enable the use of vocabularies concerning a domain in a consistent manner (Gomez-Perez 2004): they are tools to formalize knowledge and encode abstract-level data models such as life events, workflow procedures and services. The resulting framework allows for the semantic description, discovery, composition and invocation of services supplied by different Public Administrations, as well as the semantic description, publishing and updating of life events, so as to provide citizens with a personalized list of services satisfying their needs in a particular situation.

2.2 Adopting the semantic approach

The main issues addressed in the development of a Semantic Web application are (Klischewski 2003):

- **Conceptual modelling.** Defining the ontologies that describe the semantic structure of the knowledge in a service-supply scenario: the business logic of the services; the dependencies among the actors collaborating in the business logic; the user needs.
- **The infrastructure for semantic interoperability.** Enabling automated interpretation and providing a common ground for services.

In the present paper we focus on the former issue.
It is widely agreed that ontologies are the basic infrastructure for the Semantic Web: the very idea of the Semantic Web hinges on the possibility of using shared vocabularies for describing resource contents and capabilities, whose semantics is described in a (reasonably) unambiguous and machine-processable form. Another key technology used in our application is Semantic Web Services (SWS). They enable the semantic interoperability of distributed services on top of data (XML) and protocol (SOAP) standards (Nilo 2003). The semantic description facilitates activities such as the automated discovery and composition of services.

3. Tools for conceptual modelling

To design the ontologies we followed a deductive approach based on existing upper and specialized ontologies, with the assistance of domain experts. In particular, we used Descriptions & Situations (Gangemi 2003; Section 3.1) as the reference upper ontology, and WSMO (Dumitru 2005; Section 3.2) for describing Semantic Web Services. We also used OCML (Operational Conceptual Modelling Language, Motta 1999) as the ontology description language.

3.1 Upper ontologies: DOLCE, Descriptions & Situations

Upper ontologies, also called foundational ontologies, serve as starting points for building domain ontologies, provide a reference point for easy and rigorous comparisons among different approaches, and create a framework for analyzing, harmonizing and integrating existing ontologies and metadata standards. They are conceptualizations containing specifications of domain-independent concepts and relations, based on formal principles from linguistics, philosophy and mathematics. Upper ontologies are ultimately devoted to facilitating mutual understanding and interoperability among people and machines.

DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering) belongs to the WonderWeb project Foundational Ontology Library (WFOL) and is designed to be minimal, in that it includes only the most reusable and widely applicable upper-level categories; rigorous, in terms of axiomatization; and extensively researched and documented (Oltramari 2002). It has been chosen for its internal structure -- rich axiomatization, explicit construction principles, careful reference to interdisciplinary literature, and commonsense orientation. In addition, being part of the WFOL, DOLCE will be mapped onto other foundational ontologies -- possibly more suitable for certain applications -- and be extended with modules covering different domains (e.g., legal and biomedical) and with lexical resources (e.g., WordNet-like lexica). Internal consistency and external openness make DOLCE especially suited to our needs. Figure 1 shows the taxonomy of DOLCE basic categories.

Figure 1: Taxonomy of DOLCE basic categories.

In particular, we used Descriptions & Situations (D&S, or DOLCE+) (Gangemi 2003), a module of the DOLCE ontology. D&S is a theory describing context elements -- non-physical situations, plans, beliefs, etc. -- as entities. D&S introduces a new category, Situation, which reifies a state of affairs and is composed of entities of the ground ontology (DOLCE). A Situation satisfies a Situation Description, which is aligned as a DOLCE non-physical endurant and composed of descriptive entities, i.e., Parameters, Functional Roles and Courses of Events. Axioms enforce that each descriptive component links to a certain category of DOLCE: Parameters are valued-by Regions, Functional Roles are played-by Endurants, and Courses of Events sequence Perdurants.
### 3.2 Web Service Modelling Ontology (WSMO)

WSMO and OWL-S (OWL-S) aim at representing web services that make use of ontologies. The two efforts take different approaches: WSMO stresses the role of mediation in order to support automated interoperability between services, while OWL-S stresses action representations to support planning processes that provide automated composition. In our approach, we use WSMO for the following reasons: (i) it allows decoupling; (ii) it addresses the integration and interoperability issues; (iii) it offers a clear-cut separation between goals and services; (iv) it is centred on the Mediation concept, which helps mismatch resolution and supports heterogeneous knowledge.

The main components of WSMO are the following: Goals, Web Services, Ontologies and Mediators. Goals represent the types of objectives that users would like to achieve via a web service. The WSMO definition of a goal describes the state of the desired information space and the desired state of the world after the execution of a web service. A goal can import existing concepts and relations defined elsewhere, either by extending or simply re-using them as appropriate. Web service descriptions represent the functional behaviour of an existing deployed web service. The description also outlines how web services communicate (choreography) and how they are composed (orchestration). Ontologies are used by the three other WSMO components. Finally, Mediators specify mapping mechanisms (Cimpian 2005). One of the main features of WSMO is that goals, ontologies, and web services are linked by mediators; four kinds of the latter are defined:

- **OO-mediators**: provide translation and harmonization between ontologies that are used by the Web services or any other WSMO component;
- **GG-mediators**: provide a way to match goals at different levels of granularity. For example, a GG-mediator may take the responsibility to refine the goal *buy the ticket* to the goal *buy a train ticket* upon recognizing that there is a subclass relation between the two concepts;
- **WW-mediators**: resolve the interoperability issues between Web Services at all levels: data, process, and protocol. Problems are solved at the level of both the single Web service choreography and the orchestration of multiple Web services;
- **WG-mediators**: handle partial matches between goals of the client and functionalities provided by web services.

Concerning the needs for mediation within SWS’s, WSMO distinguishes three levels of mediation:

- **Data Level Mediation**: between heterogeneous data sources; within ontology-based frameworks like WSMO, this is mainly concerned with ontology integration.
- **Protocol Level Mediation**: between heterogeneous communication protocols, i.e., translation between technical transfer protocols (e.g., SOAP, HTTP, etc.).
- **Process Level Mediation**: between heterogeneous business processes; this is concerned with mismatch handling on the Web Service interface description for information interchange between web services and clients.

WSMO Mediators create a mediation-oriented architecture for Semantic Web Services, providing an infrastructure for handling the heterogeneities that may arise between WSMO components, as well as implementing the design concept of strong decoupling and strong mediation. A WSMO Mediator serves as a third-party component connecting heterogeneous elements and resolving mismatches between them. Figure 2 shows the general structure of WSMO (Cimpian 2005).

**Figure 2: WSMO Mediator Structure**
### 4. A methodology for constructing the Conceptual Model

Both Public Administrations (PA’s) and citizens benefit from a standard conceptual model for describing public services and life events. The aim of the methodology is to map an e-Government System Reference Model onto meta-ontologies -- i.e., ontologies defining classes and relations, instantiated with sub-classes and sub-relations. In this way, the result is a project-independent standard, a reusable model for e-government applications: it describes the global, uniform view of the scenario using commonly accepted or standardized concepts and properties (attributes and relations), and may have domain-specific extensions. Its concepts/properties are either mapped, or being mapped, onto those in the organizational models.

To illustrate how participating Public Administrations (PA’s) can use the Conceptual Model for developing specific extensions, we worked out a case study within the *change of circumstance* scenario, as part of a portal for the Essex County Council. The end-users are the caseworkers of a Community Care department, who help the citizen report his/her change of circumstance to the different agencies involved in the process. The citizen has to inform the County only once about the change, and the Community Care department automatically notifies all the agencies involved. An example might be a disabled mother moving into her daughter’s home. The change of circumstance provokes a change in which services and benefits -- health, housing, equipment, etc. -- the citizen is eligible to receive. Multiple service-providing agencies need to be informed and to interact. The case study involves services in two different domains (involved agencies):

- **Citizen Assessment (Community Care Department):** relates to information about citizens registered with Essex County Council for the assessment of services and benefits. This information is stored in the SWIFT database.
- **Order Equipment (Housing Department):** relates to information about equipment provided to citizens registered in Essex. This information is stored in the ELMS database.

### 4.1 Mapping the system reference model

In order to clarify our approach, we briefly introduce the e-Government System Reference Model. As shown in Figure 3, there are four main actors in an e-government system: (i) **Politicians**, who define the laws; (ii) **Public Administrators** (i.e., domain experts), who define the processes (workflows) for realizing services following the laws; (iii) **Programmers**, who implement services and applications; (iv) **End-users**, who use the services. It is important to notice that, at every level, the language may differ: a politician uses quite different languages/concepts compared with a programmer and, above all, the end-user knows what he/she wants to achieve (moving house, getting married, etc.) but does not know exactly which services match those needs.
Figure 3: Mapping the E-Government System Reference Model.

We mapped the reference model onto four ontologies, one for each level of the model (Figure 3):

- **Legacy Ontology:** defines the concepts and relations describing the laws and the political knowledge that define the services;
- **Workflow Ontology:** concepts and relations describing the workflow of specific services from the PA point of view;
- **Service Ontology:** contains the description of services in terms of Semantic Web Services (SWS);
- **Life Event Ontology:** defines a hierarchy of topics, concepts and relations of life events.

The Legacy ontology is connected with the Workflow ontology, since a law defines a service workflow; each workflow element -- or the whole service -- refers to a law or a part of it. A service workflow is mapped onto the choreography and orchestration of the corresponding SWS descriptions. SWS descriptions are connected with the life event ones, linking the PA point of view with the user’s. The latter two ontologies are connected with the **E-Government Domain Ontology**, which defines concepts and relations of the PA’s domain. It is a sort of interface between the PA’s point of view (Service Ontology) and the user’s (Life Event Ontology), and describes the building blocks for the descriptions of the two above ontologies. We applied a bottom-up methodology, from the user to the politicians, and focused on the definition of the E-Government Domain, Life Event and Service Ontologies. They are the basic elements for defining the semantically-enhanced middleware.

### 4.2 The E-Government Domain ontology

It encodes concepts of the PA’s: organizational, legal, economic, business, information technology and end-user. Our aim was not to cover all the aspects connected with e-Government. In fact, distinct PA’s could use the same concepts differently; a single Public Administration (PA) may not share the same point of view and may have different interoperability needs from other PA’s. Domain standardization can help, but it does not necessarily unify the aims and languages of all the actors involved. It is important that every PA keeps its autonomy in the description of its own domain; as we shall describe in the following, this does not affect our ultimate goals of interoperability and integration. We defined a meta-ontology that resides on three levels of abstraction: the instance level, the conceptual level and the bridging level. The first contains all instances of the conceptual level within the single PA domain. The conceptual level (Figure 4a) is composed of a Domain Ontology Reference Model (named E-Government Upper Level Ontology in Figure 4) and all the PA Domain Ontologies. The former describes commonly accepted and standardized concepts and properties; the latter describe the specific extensions within every PA domain. In our work, we have defined the Domain Ontology Reference Model as an extension of the D&S upper ontology (Section 3.1). In particular, we added concepts such as: legal-agent, person, postal address, citizen, organization, agency, etc. The PA Domain Ontologies are domain- and context-dependent. They are defined by each PA by extending and adapting the Domain Ontology Reference Model with the concepts and the relations used for describing its services. Figure 4b shows the domain ontologies of the case study. Change-of-circumstances-citizen-ontology and swift-services-ontology describe the Citizen Assessment domain.
Change-of-circumstances-equipment-ontology and elms-services-ontology describe the Order Equipment domain. Swift-services-ontology and elms-services-ontology are domain ontologies describing concepts of specific back-office databases, and they are not derived from the egovernment-upper-ontology.

Figure 4: (a) The conceptual level of the E-Government Domain Ontology structure. (b) The domain ontologies of the case study.

The bridging level has been introduced for resolving mismatch problems between similar concepts defined in different PA Domains. The bridging level is part of the Service Ontology (mismatch resolution is strictly connected to the service description) and is described in Section 4.4.

### 4.3 The Life Event ontology

Usually, life event ontologies simply define a taxonomy of life events, and a service is related to one of the topics of the taxonomy. In our work, we refer to the life event ontology as the model describing the user point of view. A life event allows the user to identify his/her particular situation and better describe what he/she wants to achieve. The Life Event Ontology is a meta-ontology: a model for describing the life events of a specific domain or project. As with the E-Government Domain ontology (Section 4.2), we derived the Life Event ontology from the D&S upper ontology (Section 3.1). In particular, we refer to the D&S situation and description concepts; this is the reason why we used D&S instead of DOLCE as the upper ontology. Indeed, a life event has a double nature: it is an event, and so defines a process that a user wants to carry out; but it is also a situation, and so describes a particular moment (and the needs) of the user’s life. In Figure 5, we report the UML diagram of the Life Event Ontology. The concepts description, situation, event, role and course are defined in D&S. The life event concept defines a hierarchy of topics and can branch into sub-life events. Moreover, each life event can be associated with one or more user Goals defined in the Service Ontology (Section 4.4), representing what a user should do to achieve the desired result. Every life event is associated with a life event description that defines:

- the event, in terms of norms, information objects (documents), parameters, and results;
- the specific (unique) situation of a user, in terms of the involved agents (applicant, actors and provider), objects and procedures.

The defined properties of the life event description refer to concepts (agent, legal-agent, non-agentive-social-object, endurant, perdurant, etc.) of the E-Government Domain Ontology (Section 4.2). The life event concepts derived from the Life Event Ontology allow one to: (i) define a taxonomy of events for organizing services; (ii) derive instances describing the particular situations of the citizens, allowing the introduction of an instance-reasoning module for improving the answers of the system; (iii) obtain, through the connection with the Service Ontology, the services that satisfy the citizen’s needs in his/her specific situation. Figure 6 shows the Someone-Move-In life event and part of its description within our case study. Someone-Move-In is a sub-class of Manage-Family-Related-Life-Event. It refers to three different goals of the Change-of-circumstances-citizen-goals and Change-of-circumstances-equipment-goals (Section 4.4): notify-change-of-address-goal, redirect-equipment-to-new-address-goal, and notify-change-of-family-goal.
Someone-Move-In-Description defines the following roles: moving-date as parameter, citizen-applicant as applicant, government-provider as provider, three different involved actors (moving-in-person, destination-family-group, and origin-living-unit), and modified-living-unit as result. The defined roles (not reported in the figure) refer to concepts of the Change-of-circumstances-citizen-ontology (Section 4.2).

### 4.4 The Service ontology

It is the core of the conceptual model and contains the Semantic Web Services (SWS) definitions. It plays a double role: it allows the description, composition, discovery and invocation of the services supplied by the different PA’s, and it is the glue of the conceptual model, integrating all the defined ontologies. It is composed of four ontologies (Figure 7): Web Service, Goal, WG Mediator and OO Mediator, following the WSMO definitions (Section 3.2). The Web Service Ontology contains the functional behaviours (capabilities) of all the supplied services. It is the description of the service-supply scenario from the PA's point of view. By means of the choreography and orchestration definitions, a web service is defined as the composition of several services, generally supplied by different PA's (the integration issue). The Goal Ontology represents the goals that users would like to achieve. It is the semantic interface between the Service and the Life Event ontologies. The WG Mediator Ontology contains all the WG-Mediators connecting a goal (source) with all the web services (targets) that satisfy it, allowing the discovery and invocation processes (the user-need matching issue). A web service may be selected by a discovery process, and then executed when a goal is required. Besides this, each mediator can be connected with a mediation service (expressed as a Goal) that allows the resolution of mismatch problems at the protocol and process levels between services and user goals, or between services and services (the interoperability issue). The OO Mediator Ontology contains all the OO-Mediators that (i) connect an element of the Service Ontology (Goal or Web Service) with the specific PA Domain it refers to, and (ii) connect two concepts of distinct PA Domains that, for instance, are semantically equivalent but described differently, allowing the resolution of mismatch problems at the data level (the interoperability issue). The latter is the bridging level of the E-Government Domain Ontology, as introduced in Section 4.2. Figure 8 shows the notify-change-of-address-goal (used in the Someone-Move-In life event, Section 4.3), with its inputs and output referring to concepts of the Change-of-circumstances-citizen-ontology (Section 4.2); the notify-change-of-address-mediator, with the previous goal as source component; and part of the county-council-provider-notify-change-of-address-ws description, referring to the previous mediator and specifying the assumption for the execution of the web service. Following the WSMO approach, every PA creates a service description for each web service it is going to supply through the portal. This step typically occurs once for every service deployed, and does not need to be repeated if the service does not change. If it does, only that particular description has to change, without affecting the other descriptions. All the descriptions are shared, in such a way that a PA can compose a web service with others -- referring to the available descriptions -- or associate it with an existing goal.
Figure 9 shows a graphical representation of the service ontologies (grey boxes) of the case study and their dependencies on the respective domain ontologies (light boxes). It is important to notice the absence of dependencies between ontologies crossing the two different domains.

Figure 9: The derived service ontology structure (grey boxes) for the Change of Circumstance e-government scenario.

5. Conclusions and future work

In this paper, we described a methodological approach for constructing a conceptual model, which is the basis for a semantically-enhanced middleware addressing interoperability and integration issues within a service-supply scenario. We presented the steps in the construction of the conceptual model. As tool support, we rely on well-known technologies such as WSMO and D&S. The proposed model offers significant advantages over other strategies. (i) It allows Public Administrations (PA’s) to keep autonomy in the description of their domain. (ii) It splits the scenario description into end-user and PA points of view, mapping the existing links; particular care has been devoted to the user point of view, with the introduction of a specific meta-ontology for Life Events. (iii) It is based on the promising technology of Semantic Web Services, enabling the description, composition, discovery and invocation of Web Services and mismatch resolution among heterogeneous domains. Besides this, the result is a project-independent standard, a reusable model that can be applied in different scenarios. We have applied the proposed approach to a case study of the Change of Circumstance scenario, as part of a portal for the Essex County Council.

For future developments we identified a number of open issues. First, we shall extend the definition of the conceptual model: the workflow and legacy ontologies have to be defined. In particular, the former will address interesting issues about the service workflow from both the user and the PA points of view; another issue is mapping a semantic description of a workflow into the SWS choreography and orchestration (Service Ontology). In addition, we plan to apply distributed knowledge management models to our conceptual model, in order to develop a more flexible approach, reducing the shared knowledge to the minimum and leaving autonomy to the PA’s. A parallel work regards the development of the infrastructure for semantic interoperability (Section 2.2): for our case study we adopted the IRS-III framework (Domingue 2004) for the creation and execution of semantic web services.

Acknowledgment

This work is supported by the Data, Information and Process Integration with Semantic Web Services (DIP) project -- http://dip.semanticweb.org

References

(SmartGov) SmartGov Project Website. [online] http://www.smartgov-project.org
{"Source-Url": "http://oro.open.ac.uk/23144/1/ICEG2005_Gugliotta.pdf", "len_cl100k_base": 6175, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 26237, "total-output-tokens": 8144, "length": "2e12", "weborganizer": {"__label__adult": 0.00032806396484375, "__label__art_design": 0.0007452964782714844, "__label__crime_law": 0.0012950897216796875, "__label__education_jobs": 0.0023956298828125, "__label__entertainment": 0.00016748905181884766, "__label__fashion_beauty": 0.0002236366271972656, "__label__finance_business": 0.001880645751953125, "__label__food_dining": 0.000438690185546875, "__label__games": 0.00066375732421875, "__label__hardware": 0.0010099411010742188, "__label__health": 0.0008440017700195312, "__label__history": 0.0009613037109375, "__label__home_hobbies": 0.00012314319610595703, "__label__industrial": 0.0005993843078613281, "__label__literature": 0.0006604194641113281, "__label__politics": 0.0026226043701171875, "__label__religion": 0.00054931640625, "__label__science_tech": 0.2315673828125, "__label__social_life": 0.0002627372741699219, "__label__software": 0.09918212890625, "__label__software_dev": 0.65185546875, "__label__sports_fitness": 0.00020229816436767575, "__label__transportation": 0.0010004043579101562, "__label__travel": 0.00033164024353027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34382, 0.01975]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34382, 0.31393]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34382, 0.87838]], "google_gemma-3-12b-it_contains_pii": [[0, 63, false], [63, 4397, null], [4397, 8602, null], [8602, 11852, null], [11852, 15640, null], [15640, 19160, null], [19160, 22950, null], [22950, 24899, null], [24899, 27582, null], [27582, 30884, null], [30884, 34382, null]], "google_gemma-3-12b-it_is_public_document": [[0, 63, true], [63, 4397, null], [4397, 8602, null], [8602, 11852, null], [11852, 15640, null], [15640, 19160, null], [19160, 22950, null], [22950, 24899, null], [24899, 27582, null], [27582, 30884, null], [30884, 34382, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34382, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34382, null]], "pdf_page_numbers": [[0, 63, 1], [63, 4397, 2], [4397, 8602, 3], [8602, 11852, 4], [11852, 15640, 5], [15640, 19160, 6], [19160, 22950, 7], [22950, 24899, 8], [24899, 27582, 9], [27582, 30884, 10], [30884, 34382, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34382, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
2e6ca63b29b0a04686725a3da4a636eaba3f90e8
Value-based Reflection

Document Number: P0993r0
Audience: SG7
Date: Apr 2, 2018
Andrew Sutton <asutton@uakron.edu>
Herb Sutter <hsutter@microsoft.com>

1 Introduction

This paper describes an implementation of value-based reflection for C++ in the Clang compiler, including issues related to the design of the feature and its accompanying library. This is not a motivation paper. Readers are assumed to be (somewhat) familiar with concepts discussed in the various preceding publications and proposals.

Static reflection is a programming facility that exposes read-only data about entities in a translation unit as compile-time values. Static reflection does not require support for runtime compilation, since reflection values can be used with existing generative facilities (i.e., templates) or additional generative facilities (i.e., metaprogramming) to produce new code. In contrast, dynamic reflection provides information for navigating source-code data structures at runtime. Languages supporting dynamic reflection also tend to make additional facilities available for generating and JIT-compiling new code. Dynamic reflection and code generation are not in the scope of this work.

2 Related work

The proposals by Matúš Chochlík, Axel Naumann, and David Sankel, the most recent of which is P0194r5, support reflection by way of template metaprogramming. These proposals are currently going through LWG and CWG review. The authors of this paper have also proposed a version of static reflection that was, in many ways, equivalent to that of Chochlík, Naumann, and Sankel (P0590r0). However, this approach supported metaprogramming using constexpr evaluation. A more recent proposal by the same authors proposes a variation of reflection that supports reflection through constexpr programming (P0953r0). This proposal changes the meaning of the reflexpr operator so that it is a constant expression that yields a pointer to an object that reflects some construct in the translation unit. For example:

```cpp
template <typename T>
T min(const T& a, const T& b) {
  constexpr reflect::Type const* metaT = reflexpr(T);
  log() << "min" << metaT->get_display_name()
        << "(" << a << ", " << b << ") = ";
  T result = a < b ? a : b;
  log() << result << std::endl;
  return result;
}
```

The most significant part of the proposal is shown in gray in the original paper (the declaration of metaT above). The object is part of a hierarchy of entities that describe the elements of the program. There is one potential problem with this approach. Each reflection would implicitly generate a namespace-scope constexpr object, probably with internal linkage. Presumably, we don’t want these objects to be exchanged between translation units.

This paper modifies our previous work by reflecting values directly in objects rather than in the type of that object. Unlike P0953r0, the value returned by reflexpr is opaque; it is effectively an int. Many more details and discussion of our approach follow.

3 Reflection and generative algorithms

Static reflection enables writing algorithms that operate on the semantic structure of entities in a program. We can, for example, use static reflection to define an operation equal that compares the values of class objects. That is, we want to do this:

```cpp
struct S {
  int a, b, c;
};

int main() {
  S s1 { 0, 0, 0 };
  S s2 { 0, 0, 1 };
  assert(equal(s1, s1));
  assert(!equal(s1, s2));
}
```

equal can be defined in terms of two functions.
The first is the “driver”, which sets up the reflective algorithm:

```cpp
template<typename T>
bool equal(const T& a, const T& b) {
  constexpr meta::object type = reflexpr(T);
  constexpr meta::object first = meta::first_member(type);
  return compare<first>(a, b);
}
```

This calls a second function, compare, which recursively compares the members of a and b. The reflexpr operator yields a token describing the (eventual) class substituted for T. The type of this value is meta::object. The first_member function yields a reflection of the first member of the class, whatever that may be.¹ The compare function is non-trivial; it is shown below.

```cpp
template<meta::object X, typename T>
bool compare(const T& a, const T& b) {
  if constexpr (!meta::is_null(X)) {
    if constexpr (meta::is_data_member(X)) {
      auto p = valueof(X);
      if (a.*p != b.*p)
        return false;
    }
    // Continue with the next member of the class.
    return compare<meta::next_member(X)>(a, b);
  }
  // End of the member list: all data members compared equal.
  return true;
}
```

¹ It’s probably going to be the injected-class-name, which I argue is something that should never be reflected.

Note that the algorithm is not constexpr and that the compile-time values of the reflection are passed via template arguments. We use constexpr if to selectively instantiate branches of the function: whenever the reflection X is non-null (i.e., corresponds to an entity), we continue recursing through the list. Otherwise, we have reached the end of the list of members, so the objects must be equal. If X is non-null and also a reflection of a data member, then we can compare the corresponding values of that data member in the objects a and b. The valueof operator yields a pointer-to-member value for the reflected data member. This can be used to extract and compare the actual values in a and b.

An implementation using concepts is potentially more clearly specified, although less compact.

```cpp
// Base case: the null reflection marks the end of the member list.
template<meta::Null X, typename T>
bool compare(const T& a, const T& b) {
  return true;
}

// Compares data members.
template<meta::DataMember X, typename T>
bool compare(const T& a, const T& b) {
  auto p = valueof(X);
  if (a.*p != b.*p)
    return false;
  return compare<next(X)>(a, b);
}

// Skips non-data members.
template<meta::Object X, typename T>
bool compare(const T& a, const T& b) {
  return compare<next(X)>(a, b);
}
```

In this example, the concepts Object, Null and DataMember can be implemented using meta:: reflection predicates. For example:

```cpp
template<meta::object X>
concept bool Null = meta::is_null(X);
```

In both implementations above, the reflection value must be passed as a template argument due to the use of the valueof operator. This operator takes a constant-expression operand (a reflection) and yields a value whose type is determined by the entity reflected. Here, in each recursion through the function, valueof(X) will yield a pointer to a differently typed data member.

Note that the algorithm cannot be written iteratively:

```cpp
for (auto m : meta::members(reflexpr(T))) {
  if (meta::is_data_member(m)) {
    // Code to compare data members
  }
}
```

We further note that the earlier proposed expansion-based for loop [P0589r0] will also fail to satisfy the requirement, because the loop variable cannot be declared constexpr (although that may be fixed in a future version).

This algorithm is generative. That is, the composition of the algorithm, not just its computation, depends on the values of reflected entities. In such cases, meta objects must be passed as template arguments. This distinction between normal and generative programming can make value-based reflection difficult to use; it requires implementers to understand when and how to context-switch between the different processes of translation and evaluation.
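As an aside, the per-member step that compare performs once valueof(X) has produced a pointer to data member is ordinary C++. Here is a minimal, self-contained sketch of just that step, with the pointer to member written out explicitly rather than obtained through reflection:

```cpp
#include <cassert>

struct S { int a, b, c; };

// The comparison compare performs for one member, with the pointer
// to data member passed explicitly instead of coming from valueof(X).
bool compare_one(const S& x, const S& y, int S::*p) {
  return x.*p == y.*p;
}

int main() {
  S s1{0, 0, 0};
  S s2{0, 0, 1};
  assert(compare_one(s1, s2, &S::a));   // first members are equal
  assert(!compare_one(s1, s2, &S::c));  // third members differ
}
```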
4 Non-generative algorithms

Not all reflective algorithms are generative (although we suspect that most are). Here is a simple example that counts (inaccurately) the number of entities nested within another.

```cpp
constexpr int count(meta::object x) {
  int n = 0;
  for (auto k : meta::members(x)) {
    // See through templates...
    if (meta::is_template(k))
      k = meta::templated_declaration(k);
    // Count recursively
    if (meta::has_members(k))
      n += count(k);
  }
  return 1 + n;
}
```

And its accompanying use:

```cpp
constexpr int n = count(reflexpr(std));
std::cout << n << '\n';
```

Note that the variable n must be constexpr in order to force compile-time evaluation of the count function. In order to eliminate the additional declaration, we introduce a new feature: immediate functions. An immediate function is one whose result is required to be computed at its call site. This is an extended form of constexpr, which allows a function to be evaluated in particular contexts (e.g., as an initializer of a constexpr variable). We could, for example, modify the definition of count like so:

```cpp
immediate int count(meta::object x) {
  // Same as above
}
```

The immediate specifier ensures that the function will be evaluated where it is used. That lets us use the function more simply:

```cpp
std::cout << count(reflexpr(std)) << '\n';
```

Note that, within the definition of count, the meta:: functions must also be immediate functions. We discuss issues around immediate functions in much more detail in Sections 5.2.7 and 7.2.

5 Syntax and semantics

This section provides a high-level overview of the changes to the standard necessary to implement this feature.

5.1 Lexical conventions

This proposal adds the following keywords: reflexpr, valueof, idexpr, immediate.

5.2 New operators

We add a number of new operators to the language. In particular, we add a single reflection operator (which takes a name denoting an entity and yields a meta::object value) and a number of projection operators (which introduce a value, unqualified-id, type-specifier, or nested-name-specifier whose meaning is determined by the value of a meta::object).

### 5.2.1 The reflection operator

The reflection operator has four forms:

- reflexpr(id-expression)
- reflexpr(type-id)
- reflexpr(namespace-name)
- reflexpr(template-name)

In each case, the operand of reflexpr is a name of some kind. That is, it denotes an entity or set of functions, as determined by the rules of name lookup. The result of the expression is a prvalue of type meta::object. The value can be used with the projection operators and the meta library to query the semantic properties of reflected entities. Any expression whose type is meta::object is called a reflection. A reflection is a constant expression and can be used anywhere a constant expression of type meta::object is expected (e.g., as a template argument). Throughout, we use reflection as a grammar term to denote a constant expression whose type is meta::object.
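For concreteness, here is a hypothetical example exercising each of the four forms (the declarations are invented for the illustration):

```cpp
namespace N { struct S { }; }
template<typename T> void g(T);
int x = 0;

constexpr meta::object r1 = reflexpr(x);    // id-expression: a variable
constexpr meta::object r2 = reflexpr(N::S); // type-id: a class type
constexpr meta::object r3 = reflexpr(N);    // namespace-name
constexpr meta::object r4 = reflexpr(g);    // template-name: a function template
```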
### 5.2.2 The idexpr operator

The idexpr operator is a variadic projection operator whose form is:

idexpr(arg1, arg2, ..., argN)

Each argument is a constant expression that is either a reflection, has integral type, or has type const char*. The operator forms an unqualified-id whose spelling is the concatenation of the operands, as follows:

- If the argument reflects a named declaration, then the identifier of the declaration.
- If the argument has integral type, then the textual representation of the operand's value in decimal notation.
- If the argument has type const char*, then the sequence of characters in the null-terminated character array.
- Otherwise, the program is ill-formed.

The unqualified-id is then subject to lookup if used in an expression, or it can be used in a declaration (in which case it is also looked up). For example:

```cpp
void idexpr("foo_", 3)() { } // defines foo_3

void f() {
  idexpr("foo_", 3)(); // OK: calls foo_3
  idexpr("foo_", 2)(); // error: no such function foo_2
}
```

If any operands of idexpr are dependent, an unqualified-id cannot be formed, and the id-expression persists in its syntactic form. This is effectively what compilers do when any type or expression is dependent: preserve the syntax until instantiation. Two idexpr ids are the same if they would generate the same unqualified-id. In the dependent case, this implies that the operands must be the same.

**Note:** If an implementation uses (a form of) name lookup during template instantiation to find the templated declaration corresponding to an instantiation, this feature will cause problems. Clang does this when instantiating initializers of member variables in a class template instantiation. One solution is to use the index of the member. Another, more robust approach is to separately instantiate all dependent idexpr names to determine whether a match can be found during lookup.

### 5.2.3 The typename operator

The typename operator can be used to form a type-id from a reflection. It has the syntax:

typename(reflection)

The reflection must reflect a type. This can appear wherever a type-id is allowed, or as a type-specifier in a declaration. In the latter case, the same rules apply as for using typedef-names as a type-specifier. Its use is straightforward:

```cpp
template<meta::object T>
auto f() {
  typename(T) x;
  return x;
}
```

When instantiated, the value of T is used to form a type-id.

```cpp
f<reflexpr(int)>();         // has type int
f<reflexpr(std::string)>(); // has type std::string
f<reflexpr(f)>();           // error: reflects a template, not a type
```

### 5.2.4 The valueof operator

The valueof operator yields the constant value of a reflected variable with static storage, function, pointer to member, enumerator, or non-type template parameter. It has the form:

valueof(reflection)

The reflection must reflect one of the entities above. The type and value of the expression depend on the entity:

- For a reflected variable or static data member, the meaning of the expression is the same as if referring to the object declared by the variable.
- For a reflected function or static member function, the expression is the same as if taking the address of that function.
- For a reflected enumerator, the expression is the same as if referring directly to that enumerator.
- For a reflected non-static class member, the expression is the same as taking the address of that class member.
- For a non-type template parameter, the expression is the same as if referring to that non-type template parameter.

For example:

```cpp
// at global scope
int x = 42;
int f() { return x; }
enum E { A };
struct S { int x; };

valueof(reflexpr(x))    // has type int&, value 42
valueof(reflexpr(f))    // has type int (*)(), value &f
valueof(reflexpr(A))    // has type E, value A (i.e., 0)
valueof(reflexpr(S::x)) // has type int S::*, value &S::x
```
### 5.2.5 The namespace operator

The namespace operator can be used to form a nested-name-specifier from a reflection of a namespace or class:

namespace(reflection)

The reflection must be that of a namespace, class, or scoped enum. Note that you could presumably also use the typename operator for the same effect.
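A hypothetical illustration (the declarations are invented for the example):

```cpp
namespace N {
  struct S { static int g() { return 1; } };
}

constexpr meta::object ns = reflexpr(N);

// namespace(ns) forms the nested-name-specifier N::,
// so the call below is equivalent to N::S::g().
int y = namespace(ns)::S::g();
```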
### 5.2.6 A general purpose projection operator

It may be possible to collapse all of these different operators, except idexpr, into a single projection operator. This has previously been spelled postfix-$ in Herb's metaclass proposal PXXX. That particular spelling does not work, because it is effectively unparsable in the contexts where projections may appear. However, an alternative spelling, say deflexpr(e), may work, if the entity reflected by the expression e has an unambiguous interpretation in the context where the operator appears. One benefit of using named operators is that it allows overloading. For example:

```cpp
static int x;
typename(meta::type(reflexpr(x))) y; // OK: y has type int
typename(reflexpr(x)) z;             // OK: z has the same type as x (int)
```

This isn't possible using a single projection operator, especially in the last case:

```cpp
deflexpr(x) z; // error: ambiguous parse
```

² This has not been implemented. Specifically, it is not clear whether the deflexpr(x) phrase is intended to be used as an expression (i.e., as if using valueof) or as a type-specifier in a declaration. My preference is for specifically named operators.

### 5.2.7 Immediate functions

In order to provide library support, we provide a new specifier (and semantics) for function declarations. The immediate specifier is a new function-specifier. When present, the declared function is implicitly declared constexpr. A declaration of an immediate function must be a definition.

**NOTE**: This section is intentionally incomplete. See Section 7.2 for discussion of design issues around immediate functions.

6 Library support

There is a large support library required for static reflection. This library must define:

- a type to represent the value of a reflected entity or other construct, possibly with language support (as for, e.g., std::nullptr_t);
- a set of algorithms and data types needed to query reflected entities.

This paper uses the namespace cppx::meta to enclose these definitions.

6.1 The meta::object type

The meta::object type is the result type of reflection-expressions and other library operations that return metaobjects. It is declared as:

```cpp
using object = /* implementation-defined */;
```

This type must:

1. be a regular (default constructible, copyable), literal type,
2. be usable as the type of a non-type template parameter,
3. be contextually convertible to bool,
4. be able to encode references to internal compiler data structures,
5. not allow non-default values to be created, except by the implementation.

The first two requirements are readily satisfied by any integral type. The 4th imposes a minimum size for the type; this is likely no smaller than the size of a pointer in the compiler's host architecture. Thus far, we have been able to encode all reflectable constructs as either 64-bit AST node pointers or keys into internal tables, using the low-order bits of the address to differentiate between different kinds of data structures. Note that every meta::object is either a reflection (a handle to an internal data structure) or the null reflection value. The latter is used throughout the library to indicate the ends of lists and absent optional values.

The 5th requirement prohibits programmers from misinterpreting arbitrary integer values as reflection values. The only valid user construction is the null reflection. Unfortunately, this property is not enforceable for integral types; we would need to introduce a special type to accommodate these semantics. Further, it is desirable if this type is only usable in constexpr functions, since passing its values at runtime will produce undefined behavior. We discuss this idea in more detail in Section XXX.

My implementation defines meta::object like this:

```cpp
enum class object : std::uintptr_t { };
```

This generally satisfies the 5th requirement, except that a user can create values of this type using a static_cast. Note that this assumes that the size of std::uintptr_t is the same for both the host and target architectures. Language support for this type would be needed to guarantee size requirements.
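A small sketch of what requirements 1-3 and the null reflection mean in use. This is hypothetical: as noted above, the conversion to bool and the construction restrictions would themselves need language support.

```cpp
constexpr meta::object none{};             // default construction: the null reflection
static_assert(!none);                      // requirement 3: null converts to false
constexpr meta::object ty = reflexpr(int); // a real reflection...
static_assert(ty);                         // ...converts to true

template<meta::object X>                   // requirement 2: usable as the type of
struct tag { };                            // a non-type template parameter
tag<ty> t;
```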
### 6.2 Library functions

Functions in the meta library query semantic properties of the reflected entity or construct. Here is a brief synopsis of the main functions in the library:

```cpp
enum object_kind {
  null_reflection,
  // ... one label for each kind of entity or construct ...
  namespace_reflection,
  class_reflection,
  // etc.
};

// Classifiers
immediate object_kind kind(object);
immediate bool is_null(object);
immediate bool is_namespace(object);
immediate bool is_variable(object);
immediate bool is_function(object);
immediate bool is_parameter(object);
immediate bool is_class(object);
immediate bool is_union(object);
immediate bool is_data_member(object);
immediate bool is_bitfield(object);
immediate bool is_member_function(object);
immediate bool is_constructor(object);
immediate bool is_destructor(object);
immediate bool is_conversion(object);
immediate bool is_enum(object);
immediate bool is_enumerator(object);
immediate bool is_template(object);
immediate bool is_type(object);

// Context traversal
immediate object context(object);
immediate bool has_members(object);
immediate object first_member(object);
immediate object next_member(object);
immediate bool has_type(object);
immediate object type(object);
immediate object templated_declaration(object);

// Members
class member_iterator { /* ... */ };
class member_range;
immediate member_range members(object);

// Semantic properties
immediate bool is_named(object);
immediate const char* name(object);
immediate bool is_extern(object);
immediate bool is_static(object);
immediate bool is_thread_local(object);
immediate bool is_mutable(object);
immediate bool is_virtual(object);
immediate bool is_pure_virtual(object);
immediate bool is_explicit(object);
immediate bool is_inline(object);
immediate bool is_friend(object);
immediate bool is_final(object);
immediate bool is_override(object);
immediate bool is_complete(object);
immediate bool is_defined(object);
immediate bool is_defaulted(object);
immediate bool is_deleted(object);
immediate bool is_public(object);
immediate bool is_private(object);
immediate bool is_protected(object);
```

Omissions are to be expected; this is just an initial set of operations that have been somewhat obvious or necessary for the examples thus far. For example, there is, as of yet, no method of traversing base class specifiers or the elements of an overload set.³

The semantic properties of entities are queried based on the set of declaration-specifiers and other qualifiers appearing in the various declarations. This is intentional. We (as programmers) more commonly think about whether or not a function is static than about whether it has internal or external linkage. Or, similarly, that a variable is static, not that it has static storage duration. In other words, the specifier is a reasonable surrogate for the semantic property.

The context, first_member and next_member functions give an interface for walking up and down a tree. These functions will return the null reflection when there is no enclosing context (i.e., the global namespace), first member, or next member.

³ Although this did appear in a previous reflection proposal.

The reflections returned by first_member and next_member are those corresponding to the first declaration introducing an entity, and not subsequent re-declarations or synonyms.⁴

```cpp
namespace N {
  void f();
  struct S { };
  void f() { }
  using M::g;
  using namespace N2;
}

constexpr void walk() {
  meta::object obj = meta::first_member(reflexpr(N));
  while (obj) {
    // visit(obj)
    obj = meta::next_member(obj);
  }
}
```

The elements visited, in order, are:

1. void f(), with the definition being visible.
2. struct S { }.

Neither the using-declaration nor the using-directive is included in the traversal. Those could be accessed by other functions, such as using_names and using_namespaces. The member_iterator and member_range classes build on the first_member and next_member functions in an obvious way. These facilities define a forward range over the members of a namespace or class, the parameters of a function, enumerators, etc.

⁴ This is a change in semantics from the current implementation. I currently yield the set of declarations, but I have found this to be somewhat troublesome in practice. I should never, for example, be forced to think about the injected class name. It also raises questions. Should access-specifier declarations be reflected? Static assertions? Is a friend really a declaration?
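As a hedged usage sketch (count_data_members is not part of the proposed library; it is invented here), the range interface makes simple member queries direct:

```cpp
// Count the data members of a class; other members are skipped.
immediate int count_data_members(meta::object cls) {
  int n = 0;
  for (meta::object m : meta::members(cls)) {
    if (meta::is_data_member(m))
      ++n;
  }
  return n;
}

struct S { int a, b; void f(); };
static_assert(count_data_members(reflexpr(S)) == 2); // f is not a data member
```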
### 6.3 Implementation

The library facilities are implemented with the help of intrinsics. For example, `meta::kind` is implemented like this:

```cpp
immediate object_kind kind(object x) {
  return __reflect_index(x);
}
```

The `__reflect_index` intrinsic is somewhat like an intrinsic type trait, except that its value is computed during constexpr evaluation and not during translation (i.e., it is not a projection operator). The library currently requires five or six intrinsics, each returning various properties of a reflected entity.

### 7 Design issues

The following sections describe open questions and issues related to this design.

7.1 Reflecting entities vs. declarations

This proposal specifically emphasizes the reflection of properties related to entities (and some associated constructs) and not their syntactic elements: declarations. There are a few reasons for this.

First, to do otherwise would mean that the reflection operator (described in more detail below), when given a name, would be required to produce a list of declarations that introduce the name. For example:

```cpp
static void f();
void f() { }

constexpr meta::object fn = reflexpr(f);
```

How does a user determine if `f` is static? If the reflection operator reflects entities, then the answer is straightforward: the first declaration determines the linkage of the name. If the reflection operator yields a list of declarations, then one of two things must happen. Either:

a) users would be required to traverse the list of declarations to determine the cumulative properties of the entity or entities found; or

b) the reflection support library would be required to implement the function above, or it could expose an interface allowing direct query of the entity corresponding to the declaration set.

I believe that forcing users to do a) would ultimately be a disaster. This example is fairly simple and the solution obvious. However, taken to its full extent, we would effectively require compile-time implementations of many of the things that the compiler already does for us, like determining the meaning of a name. Consider:

```cpp
using T = int;
using U = const T;
U x = 0;

constexpr meta::object var = reflexpr(x);
constexpr meta::object type = meta::type(var); // #1
```

What should the variable `type` reflect at #1? I would be very surprised if the answer were not `const int`. If the answer is that it reflects something else, say, a typedef type, then the user will have to work harder to find the semantic properties of that type. This entails either walking through a list of "desugarings" or finding a function that does the right thing. Moreover, it is not clear if that interpretation can be supported by all of today's conforming implementations. Users are not, and should not be expected to be, compilers.⁵

⁵ Although the converse seems to be a popular opinion.

In either case, there is additional compile-time overhead needed to compute the result. I strongly suggest that reflection reflect the semantic properties of declarations rather than their syntax. It should be possible to extend the semantics of the proposal to include the ability to find declarations or specific uses of entities (e.g., give me all declarations of `f`, or the aliased structure of `U`).

Note also that static reflection is limited to entities introduced into a single translation unit. Reflections of entities cannot be passed (as values) between separately compiled translation units. Note, however, that because of the ODR, all properties of the same entity, queried in multiple translation units, must be the same.
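Under these entity-based semantics, queries see through whichever declaration happened to be named. A small sketch (not from the proposal itself; it reuses the declarations from the two examples above):

```cpp
// Sketch: entity-based queries for the examples above.
constexpr meta::object fn2 = reflexpr(f);
static_assert(meta::is_static(fn2)); // the entity f has internal linkage

constexpr meta::object ty = meta::type(reflexpr(x));
static_assert(meta::is_type(ty));    // reflects const int, not the alias U
```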
7.2 The runtime divide

For many uses of reflection, we run into problems ensuring that reflective (but not necessarily generative) algorithms are evaluated at compile time. For example, consider this use of the constexpr `count` function from the introductory sections.

```cpp
std::cout << count(reflexpr(std)) << '\n';
```

This usage of `count` is interesting. It seems like it should be okay, but it is almost certainly incorrect. We discuss the reasons for this from the perspective of our implementation experience.

First, we note that `count(reflexpr(std))` may not be evaluated at compile time. There is nothing that prevents an implementation from evaluating a call to a constexpr function when all the operands are constant, but nothing requires it to do so, either. If the call is evaluated during translation, then the program would be well-formed and correct. If, however, the function is not evaluated during translation, this may result in a compiler error, although not a C++ diagnostic. The call to `count` constitutes an odr-use of the function, which will cause the compiler to emit its definition, along with definitions of the various meta:: functions used within the body of `count`. Almost all meta:: functions are implemented using compiler intrinsics (more below). These intrinsics traffic between `meta::object` values and program entities. In other words, they must be evaluated at compile time. During lowering, those values are absent, and so the compiler cannot generate operations corresponding to those intrinsics.

This is not a solved problem in our implementation. The next three sections describe possible solutions to this problem.

7.2.1 Immediate application

We partially solve the problem by introducing a new form of the constexpr specifier that requires the function to be evaluated at the call site. Using this, we can write `count` like so:

```cpp
immediate int count(meta::object x) { ... }
```

When the analysis of a call expression to an immediate function is finished, that expression is evaluated as a constant expression. If evaluation fails, the program is ill-formed. That satisfies this usage scenario:

```cpp
std::cout << count(reflexpr(std)) << '\n';
```

It is also worth noting that the examples of std::file() and std::line() given in Bjarne's paper, PXXX, also fall into this category.

```cpp
Logger("log entry from file {1} line {2}", std::file(), std::line());
```

The uses of std::file() and std::line() would need to be evaluated at the call site in order to produce constants to be substituted into the log entry later. Their implementation must be by compiler intrinsic, returning a constant representing the current point of translation (the point in a program where translation is required to evaluate a constant expression).

This feature raises other questions.

Can one immediate function call another immediate function? Yes, but you cannot immediately evaluate the callee at the call site if the operands refer to parameters of the calling immediate function. For example:

```cpp
immediate int f(int n) {
  return f(n - 1);
}
```

At the point you analyze the recursive call, the expression `n - 1` cannot be evaluated. In other words, there are criteria by which immediate evaluation is actually deferred.
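To make the deferral criterion concrete, here is a small sketch (my illustration of the rule just stated): calls whose operands are constants can be evaluated on the spot, while calls whose operands involve the enclosing immediate function's parameters must wait until that function is itself evaluated.

```cpp
// Sketch: immediate calls inside another immediate function.
immediate int square(int n) { return n * n; }

immediate int g(int n) {
  int a = square(4); // operands are constant: can be evaluated on the spot
  int b = square(n); // depends on parameter n: evaluation is deferred
  return a + b;
}
```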
Can a constexpr function call an immediate function? This is a variation on the theme above. The answer is yes, but likely only when the call can be evaluated at the call site:

```cpp
immediate int f(int n) { return n + 1; }

constexpr int g(int n) {
  constexpr int x = f(42); // OK: 42 is a constant
  constexpr int y = f(n);  // error: cannot evaluate immediately
  return x + y;
}
```

The reason that the call to `f(n)` would be an error is that `g` could be odr-used in a non-constexpr context, requiring its definition to be lowered to IR. The value of `n` is not a constant in that context.

Should constexpr evaluation contexts really be self-selecting? In other words, is it sane to let a function determine when it is evaluated, rather than leaving that to the developer? The answer is probably yes, as long as the function has no side effects. Today, constant expressions do not have side effects, but that may not always be the case. For example, metaprogramming and constexpr debugging both introduce novel kinds of side effects for constant expressions. In our metaprogramming implementation, we cache requests to generate source code during compile-time evaluation. These requests are interpreted by the translation process to modify the program after the completion of a metaprogram. Similarly, our (admittedly very simple) constexpr debugging scheme requests the emission of log messages, which are subsequently processed by the translator into compiler diagnostics. As a side note, these effects are never visible during evaluation.

An immediate function that has side effects can generate those effects at unexpected times. For example:

```cpp
immediate void print(int n) {
  meta::compiler.print(n);
}

immediate void f(int n) {
  for (int i = 0; i < n; ++i)
    print(42);
}
```

The translation of this program fragment will print the number 42 exactly once: the call `print(42)` has constant operands, so it is evaluated when the body of `f` is analyzed, not once per loop iteration during an evaluation of `f`.

Can you overload immediate and constexpr functions? We don't know, but it's probably not a good idea.

Is immediate part of the type system? We haven't formulated rules that explain what happens when you try to take the address of an immediate function. This could allow immediate functions and their evaluation to leak across translation unit boundaries. That would not be good.

7.2.2 A constexpr operator

As an alternative (or complement?) to immediate functions, we could consider adding a new constexpr operator that evaluates any expression where it appears.

```cpp
std::cout << constexpr(count(reflexpr(std))) << '\n';
```

The constexpr operator would evaluate the `count` function at this point in the translation, yielding its value. This is exactly what an immediate function would do implicitly. This operator may be useful in other contexts outside of reflection. Other committee members have expressed interest in something similar.

7.2.3 Metatypes

Neither immediate functions nor a constexpr operator solves the problem originally alluded to: it is possible to lower function calls involving `meta::object` values. We would prefer that this not happen. We think we could use the type system to achieve this effect. Specifically, we want something like the following rule:

> A potentially evaluated expression that contains a use of a function whose type is a metatype, other than within a constant expression, is ill-formed.

A metatype is a type whose objects can only be created during the evaluation of a constant expression. We can define it inductively:

- The type `meta::object` is a metatype.
- A pointer to an object of metatype is a metatype.
- A reference to an object of metatype is a metatype.
- An array of objects of metatype is a metatype.
- A function whose return type is a metatype or whose parameter-type-list contains a metatype is a metatype.
- A class is a metatype if it is a literal type and:
  - any of its base class specifiers denote metatypes, or
  - any of its non-static data members have metatypes.
- And probably more cases...

Note that this is orthogonal to the features needed to add new constexpr evaluation contexts.

7.3 The meta::object type

There are some implementation restrictions on this value. Primarily, it must be usable as a template argument. Second, to the extent possible, it must be impossible for users to create non-default-initialized objects of this type. Only the implementation is permitted to do so. The current implementation makes this a scoped enum whose underlying type is `std::uintptr_t`. I believe it would be better to add a new fundamental type that captures the desired semantics. In particular, we really want to enforce the requirement that users may not construct objects of this type except:

a) by default construction,
b) as the result of a copy,
c) as a result of `reflexpr`, or
d) as a result of a meta:: library function.

Doing otherwise would almost certainly produce a bit pattern that does not correspond to an internal compiler object. That will lead to internal compiler errors and very possibly serious security problems. Note that programmers must also be prevented from using static_cast or reinterpret_cast to produce objects of meta::object type.

8 A compile-time evaluation model for static reflection

In our note on constexpr evaluation, we imagine a hypothetical compiler that implements constexpr evaluation by emitting a subset of the program being translated and producing an executable program. The implementation (the translator) is free to instrument that program to provide additional information that can be used to, e.g., generate good diagnostics.

To implement static reflection in this model, we can augment the generated program-to-be-executed by serializing the abstract semantics graph (ASG) as global data. All `meta::object` values are simply indexes into the nodes of the ASG. All applications of intrinsics evaluate the reachable semantic properties of those entities. This effectively restates static reflection as dynamic reflection.⁶ Note that projection operators are applied during translation, although they depend on the evaluation of their operands. These operands are never "leaked" into the evaluation.

⁶ Defining this as a library facility would actually give us dynamic reflection capabilities. This need not be hypothetical.
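To make the serialized-ASG model concrete, a minimal sketch follows (entirely hypothetical; the node layout and names are invented for illustration and are not part of the proposal):

```cpp
// Hypothetical sketch: the translator emits a node table into the
// generated program, and meta::object values index into it.
struct asg_node {
  object_kind kind;   // classification, as in the library synopsis
  const char* name;   // null if the entity is unnamed
  long context;       // index of the enclosing node, or -1
  long first_member;  // index of the first member, or -1
  long next_member;   // index of the next sibling, or -1
};

extern const asg_node asg_table[]; // serialized by the translator

// An intrinsic like __reflect_index then reduces to a table lookup,
// e.g. kind(x) becomes asg_table[static_cast<long>(x)].kind.
```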
{"Source-Url": "http://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0993r0.pdf", "len_cl100k_base": 8016, "olmocr-version": "0.1.53", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 38865, "total-output-tokens": 9051, "length": "2e12", "weborganizer": {"__label__adult": 0.00040078163146972656, "__label__art_design": 0.0002956390380859375, "__label__crime_law": 0.00029587745666503906, "__label__education_jobs": 0.0002727508544921875, "__label__entertainment": 4.583597183227539e-05, "__label__fashion_beauty": 0.00013589859008789062, "__label__finance_business": 0.00012362003326416016, "__label__food_dining": 0.00037598609924316406, "__label__games": 0.0004606246948242187, "__label__hardware": 0.0006542205810546875, "__label__health": 0.0003275871276855469, "__label__history": 0.00017023086547851562, "__label__home_hobbies": 6.842613220214844e-05, "__label__industrial": 0.0002799034118652344, "__label__literature": 0.0001958608627319336, "__label__politics": 0.00023317337036132812, "__label__religion": 0.0004703998565673828, "__label__science_tech": 0.0033092498779296875, "__label__social_life": 6.788969039916992e-05, "__label__software": 0.0027675628662109375, "__label__software_dev": 0.98828125, "__label__sports_fitness": 0.0002884864807128906, "__label__transportation": 0.0004711151123046875, "__label__travel": 0.0002105236053466797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36545, 0.00943]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36545, 0.47539]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36545, 0.8353]], "google_gemma-3-12b-it_contains_pii": [[0, 2309, false], [2309, 4559, null], [4559, 6583, null], [6583, 8661, null], [8661, 10929, null], [10929, 13277, null], [13277, 15530, null], [15530, 18235, null], [18235, 20175, null], [20175, 22032, null], [22032, 24164, null], [24164, 27128, null], [27128, 29872, null], [29872, 32587, null], [32587, 35054, null], [35054, 36545, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2309, true], [2309, 4559, null], [4559, 6583, null], [6583, 8661, null], [8661, 10929, null], [10929, 13277, null], [13277, 15530, null], [15530, 18235, null], [18235, 20175, null], [20175, 22032, null], [22032, 24164, null], [24164, 27128, null], [27128, 29872, null], [29872, 32587, null], [32587, 35054, null], [35054, 36545, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36545, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36545, null]], "pdf_page_numbers": [[0, 2309, 1], [2309, 4559, 2], [4559, 6583, 3], [6583, 8661, 4], [8661, 10929, 5], [10929, 13277, 6], [13277, 15530, 7], [15530, 18235, 8], [18235, 20175, 9], 
[20175, 22032, 10], [22032, 24164, 11], [24164, 27128, 12], [27128, 29872, 13], [29872, 32587, 14], [32587, 35054, 15], [35054, 36545, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36545, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
0d020c4d8083b7d920e8de41ade86b33aa1f4485
This is an electronic reprint of the original article (Rönkkö, M., Ojala, A., Tyrväinen, P.: Innovation as a Driver of Internationalization in the Software Industry, 2013); it may differ from the original in pagination and typographic detail.

Innovation as a Driver of Internationalization in the Software Industry

Mikko Rönkkö, Aalto University, Helsinki, Finland, mikko.ronkko@aalto.fi
Arto Ojala, University of Jyväskylä, Jyväskylä, Finland, arto.k.ojala@jyu.fi
Pasi Tyrväinen, University of Jyväskylä, Jyväskylä, Finland, pasi.tyrvainen@jyu.fi

Abstract—Innovation and internationalization are two important factors for growth. This study analyzes whether innovativeness has an effect on the internationalization of software firms, and if so, how strong this effect is. Innovation and internationalization have rarely been studied together, with research tending to focus more on the relationship between innovations and growth. However, internationalization is a key prerequisite for growth for companies operating in small domestic markets. This paper analyzes the innovativeness and internationalization of firms, using data from the Software Industry Survey conducted in Finland. Since the speed of firm growth and internationalization are dependent on the willingness to grow and the age of the firm, these variables are used as moderators in the analysis. Our analysis suggests that innovativeness does contribute to internationalization; however, the effect is significant only for younger firms during their expansion to new markets, after which they prioritize revenue growth.

Keywords: Innovativeness, internationalization, growth, software firms

I. INTRODUCTION

Innovations and innovativeness are crucial for competitiveness and economic growth. Furthermore, innovations are an important source of competitive advantage for firms operating in international markets [37]. Several studies have also found that the innovativeness of a firm is positively related to the speed of internationalization [8, 9, 12, 22]. The software industry is characterized by innovation-driven market growth [30]. Broadly speaking, one can say that the software industry contributes to the economy in two ways. First of all, software can be used to increase productivity in other industries. Secondly, software can be sold abroad, generating export revenue. While the impact of software innovations on productivity has been extensively investigated (e.g. [10, 5]), the effect of software innovations on internationalization remains a somewhat understudied topic [12]. This is surprising, since the internationalization of software firms has attracted considerable attention in academic literature (e.g. [6, 14, 15, 27, 29, 31, 32, 33]). Moreover, young high-growth firms have been shown to have a strong impact on economic development [1, 27], and for companies with small domestic markets, internationalization can be regarded as a natural stage in development [40].
It appears that studying the effect of innovativeness on internationalization could help us to understand how small firms can generate growth. The aim of this paper is thus to address the innovativeness of software firms from an internationalization perspective, via an analysis of firm internationalization as a function of a firm's age, its innovativeness, and its willingness to grow.

II. THEORETICAL BACKGROUND AND DEVELOPMENT OF HYPOTHESES

The internationalization process of a firm can be studied from two distinct perspectives. The first and more traditional view of internationalization is encapsulated in the Uppsala model of internationalization, developed by Johanson and Wiedersheim-Paul [21] and by Johanson and Vahlne [19]. The second and more recent model is embodied in the International New Venture (INV) theory developed by Oviatt and McDougall [34]. The underlying assumptions of and differences between these theories are explained briefly below.

The Uppsala model describes internationalization as an incrementally evolving process, in which a firm internationalizes its operations in various stages. In this model, a firm's internationalization is based on increasing market knowledge, which increases market commitment through commitment decisions and current activities [19]. According to the model, firms operate first in their domestic markets and only thereafter start to internationalize, initially entering into nearby markets (markets that share a similar language, culture, political system, level of education, level of industrial development, and so on). After that, when a firm's knowledge of international operations increases, it gradually starts to develop activities in more distant countries. Thus, knowledge of and learning about foreign markets has a central role. The model divides knowledge into general knowledge and market-specific knowledge. General knowledge is objective, and transferable from previously entered countries to the target country. It includes general issues concerning marketing methods, operation modes, and typical customers, on a global scale. Market-specific knowledge is based on previous experiences of the target country environment, including its culture, the market structure, the customers in the market, and so on. This knowledge is mainly acquired through operating in the target country [19].

Recent studies on new ventures and "born globals" have challenged the internationalization process described in the Uppsala model [7, 22, 28, 34], and have proposed alternative internationalization pathways [7, 23, 26]. The study by Oviatt and McDougall mentioned above [34] gave a theoretical foundation for why some firms internationalize faster than traditional internationalization theories would predict. INV theory is based on the notion that the internationalization of INVs is related to opportunity-seeking behavior, according to which an INV "seeks to derive significant competitive advantage from the use of resources and the sale of outputs in multiple countries" [34: 49]. The theory proposes that the origins of an INV are international because it has commitments to valuable resources in more than one country. Within the theory, "international from the inception" means that the founders of an INV seek growth opportunities in foreign markets, and have already made some decisions related to the international scope of these activities, before the foundation of the firm [28, 34].
This perspective places an emphasis on firms whose products are innovative and perhaps new on a global scale. Such firms tend to exist in high-technology sectors. One of the premises of the theory concerning INVs is that a firm must control unique and non-imitable resources, capable of creating competitive advantage [4] internationally. INVs must also have the capability to transfer and combine mobile resources with less mobile ones in the target countries, in order to generate value. Software can be considered a prime example of this kind of resource, and it is created through innovation.

Although innovativeness is often believed to be one of the leading drivers of growth [11], the evidence supporting this belief is far from conclusive. This uncertainty is due to difficulties in operationalizing innovativeness and also to the complexity of the relationship between innovativeness and growth on the level of individual firms [13]. Two characteristics of innovations make them difficult to study. First of all, innovations tend not to carry over into sales growth until several years have elapsed. This is due to the fact that innovations do not immediately result in revenue; they must be further developed into products or services, and these often require manufacturing or other production processes. Secondly, not all innovations survive the test of the market, and they may well fail to generate increases in revenue. However, one persistent finding is that firms which innovate and grow seem to grow more rapidly than non-innovative firms [18]. This would suggest that successful innovations do indeed promote growth at the firm level.

For firms operating in high-technology sectors with small home markets, internationalization is often considered a natural step in the life-cycle of the firm (cf. [7, 24]). The software industry is in many ways an atypical industry in terms of international expansion. For instance, software companies can use the Internet to distribute their products, giving them instant access to global markets. Furthermore, the ability to distribute digital goods with minimal marginal costs is a unique characteristic of the software industry. It is notable that many software companies actually achieve their first international sales before domestic sales. This is particularly the case with providers of specialized systems and applications in business-to-business markets. Due to these various characteristics, the internationalization of software firms has also received a considerable amount of attention in academic circles, as demonstrated by the proliferation of studies addressing the internationalization of software firms in particular [6, 14, 15, 32, 33, 36, 43].

Since innovativeness is often studied in the context of growth, and since growth is often linked to internationalization, it is surprising that only a few studies have linked these three phenomena (e.g. [8, 22]). Of the two internationalization models presented briefly above, only the INV model explicitly addresses innovations; for its part, the traditional model either ignores them or presents them as a phenomenon embedded in the broader context of resource acquisition. Nevertheless, since innovativeness is clearly a positive determinant of growth, and since in countries with small home markets growth often takes place through international expansion, we can formulate the following hypothesis:

**Hypothesis 1:** The innovativeness of software firms increases the probability of internationalization.
However, there is evidence that not all innovative firms grow. In a study related to the Finnish software industry, Rönkkö et al. [39] found that willingness to grow was among the most significant determinants of both growth and internationalization. INV theory emphasizes that rapidly internationalizing new ventures actively seek opportunities and growth possibilities in foreign markets [34]. However, it further appears that some small firms prioritize innovation and technological tinkering over expansion of the business; they have little desire to grow, believing that this would restrict the freedom of their employees to experiment with new technologies. For these reasons, one could expect willingness to grow to have a positive effect on the relationship between innovativeness and growth. Such a chain of reasoning leads to our next hypothesis:

**Hypothesis 2:** Willingness to grow positively moderates (increases) the effect of software firm innovativeness on the probability of internationalization.

In INV theory, the young age of a firm is emphasized; thus, firms that follow the INV model tend to start foreign sales either right at the inception of the firm or very soon afterwards, during the first years of their operations [3, 34]. Firms that rely more on innovations than on resource accumulation (cf. [19]) in preparing for international expansion tend to internationalize early, and this has an effect on the overall internationalization phenomenon. The overall line of argument here includes the notion that innovative firms tend to internationalize early, and hence that innovativeness is less likely to have a major role among firms that internationalize later. Hence we can expect the two relationships presented earlier to be conditional on the age of the firm, giving us the following hypotheses:

**Hypothesis 3:** The effect of innovativeness on the probability of internationalization is stronger for younger software firms than for older software firms.

**Hypothesis 4:** The moderating effect of willingness to grow on the association between innovativeness and internationalization will be more pronounced for younger software firms than for older software firms.

III. RESEARCH DESIGN

A. Sample

The present paper uses data collected from the Finnish software industry over the years 2008 and 2009. The details of the sampling frame have evolved over the years to match the changing needs of the survey. NACE codes 7221 ("Publication of software") and 7222 ("Other software consultancy and supply") have typically been included. To cover the entire software industry, including also firms officially registered under other industry codes, the sampling frame included the member lists of several industry associations. The reason for this approach is that the trade register data is considered to cover the software industry only partially, since some companies have software as a secondary industry. In addition, the survey project during which the data were collected required that the entire industry should be covered. Typically, the sampling frame covered all firms with five or more people, and smaller enterprises only if they were members of some industry association, or if they had registered on any of the lists covering the software industry (these lists include, for example, a list of software companies that had applied for product development subsidies from public organizations). However, the coverage of the smallest firms varies between years.
It should also be noted that missing revenue data were obtained later from Asiakastieto Ltd., which collects and organizes information from the publicly accessible Finnish trade register.

B. Constructs and measures

The dependent variable "has international revenue" was measured via a self-report in the survey. The question asked whether the firm had international revenues, and the response options were as follows: (1) Yes; (2) No, but we have considered internationalization; (3) No, international business is not relevant to us at present. The first option was coded as a positive answer, and the other two as negative.

The methodological challenges in measuring innovations relate to the weaknesses of R&D intensity and patenting as indicators of innovativeness. Both of these indicators have been criticized in the literature [35, 41]. Moreover, both apply better to larger firms than to small entrepreneurial firms. With this in mind, we used a self-reporting scale to measure innovativeness. We adapted three items measuring innovativeness from a widely used Entrepreneurial Orientation Scale [16, 25]. Cronbach's alpha reliability coefficient was .69, and principal factor analysis indicated discriminant validity.

Willingness to grow was measured via a self-developed scale. The full scale consists of eight items measuring three dimensions of attitude towards growth: general willingness to grow, tolerance of risk to achieve growth, and willingness to grow internationally. The first three items measured general growth motivation, and the degree to which growth was prioritized over other objectives. For the purposes of this paper we used this dimension as a measure of willingness to grow. Cronbach's alpha reliability coefficient was .87, and principal factor analysis indicated discriminant validity.

We retrieved the firm age from trade register data and then derived the variable age class by splitting this variable at two, five, and ten years to arrive at four age classes: (i) 0–2, (ii) 2–5, (iii) 5–10, and (iv) over 10 years.

C. Data collection and analysis

Loosely following a tailored design approach [17], data collection for the survey took place during the late spring and summer, using a paper and web-based questionnaire. For the year 2009 the invitation to participate was sent to 4544 mainly small and medium-sized companies. Since one of the goals of the Software Industry Survey project was to cover the entire industry, this figure represents a significant amount of oversampling to ensure inclusion of all the relevant firms. In all, we estimated that approximately 30% of these firms are either not active or do not operate in the software industry. We obtained a total of 584 complete responses using this approach, representing a response rate of approximately 20% (which is typical for a survey of this kind). Micro-enterprises produced the most non-responses, so the effective response rate for the firms with meaningful international activities was higher.

Data analysis was carried out using Intercooled Stata, version 11. After organization of the data, which included inspection of the raw data for outliers and calculation of the values of the constructs, we analyzed the data using logistic regression analysis. Four different models were estimated. The first model tested the direct effect of innovativeness on internationalization (Hypothesis 1), while the second model included an interaction term between innovativeness and willingness to grow (Hypothesis 2).
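In sketch form (the notation here is ours, added for exposition; the symbols correspond to the constructs above, with all independent variables lagged by one year and the lagged dependent variable included as a control, as described next), the first model can be written as

\[
\operatorname{logit}\,\Pr(\mathrm{Intl}_{i,t}=1) \;=\; \beta_0 + \beta_1\,\mathrm{Innov}_{i,t-1} + \beta_2\,\mathrm{Grow}_{i,t-1} + \beta_3\,\mathrm{Intl}_{i,t-1} + \boldsymbol{\gamma}'\,\mathrm{AgeClass}_{i},
\]

with the second model adding the interaction term \(\beta_4\,(\mathrm{Innov}\times\mathrm{Grow})_{i,t-1}\).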
The last two models were derived from the first two by adding factorial interactions between age class and the previously tested direct and interaction effects (Hypothesis 3 and Hypothesis 4). All the independent variables were lagged by one year, and we used the lagged version of the dependent variable as a control. In total, 214 cases were included in the analysis.

IV. RESULTS

Table 1 shows the descriptive statistics and correlations between the study variables. The correlation table does not show anything strikingly untoward, suggesting that regression analysis can safely be used without much concern over collinearity of the independent variables. Table 2 shows the results of the logistic regression analysis. The dependent variable is the binary variable indicating whether the firm has international revenues or not. The lagged version of the same variable is used to control for unmeasured variables and the effect of innovativeness from earlier time periods [1].

Table 1. Descriptive Statistics and Correlations

<table>
<thead>
<tr><th></th><th>Mean</th><th>Std.</th><th>1</th><th>2</th><th>3</th><th>4</th></tr>
</thead>
<tbody>
<tr><td>1. Innovativeness</td><td>0.39</td><td>1.00</td><td></td><td></td><td></td><td></td></tr>
<tr><td>2. Willingness to grow</td><td>0.15</td><td>0.98</td><td>0.26</td><td></td><td></td><td></td></tr>
<tr><td>3. Willingness to grow × innovativeness</td><td>0.29</td><td>1.06</td><td>-0.08</td><td>-0.17</td><td></td><td></td></tr>
<tr><td>4. Age</td><td>1.70</td><td>1.21</td><td>0.19</td><td>-0.81</td><td>0.02</td><td></td></tr>
<tr><td>5. Has revenue from abroad</td><td>0.38</td><td>0.49</td><td>0.16</td><td>0.17</td><td>0.07</td><td>0.14</td></tr>
</tbody>
</table>

Table 2. Regression Results (logistic regression; dependent variable: has revenue from abroad)

<table>
<thead>
<tr><th>Has revenue from abroad</th><th>(1)</th><th>(2)</th><th>(3)</th><th>(4)</th></tr>
</thead>
<tbody>
<tr><td>Willingness to grow</td><td>0.24</td><td>0.22</td><td></td><td></td></tr>
<tr><td>Innovativeness</td><td>0.37</td><td>0.40</td><td></td><td></td></tr>
<tr><td>Has revenue from abroad (lagged)</td><td>3.60</td><td>3.65</td><td>3.60</td><td>4.11</td></tr>
</tbody>
</table>

(Models (3) and (4) additionally include the age-class dummies and the interactions of age class with innovativeness and with willingness to grow × innovativeness; those coefficient rows are not legible in this reprint.)
The pseudo R² values of each model are high, indicating that the models can accurately explain the dependent variable. However, much of this explanation is due to the inclusion of the lagged version of the dependent variable as a predictor. When this is taken into account, one can see that internationalization actions are affected relatively little by innovativeness or by willingness to grow. Nevertheless, models one and two provide weak support for the hypothesis that innovativeness does indeed promote internationalization at the firm level. Surprisingly, neither willingness to grow nor its interaction with innovativeness was significant. This leads to the conclusion that Hypothesis 1 is weakly supported, but that Hypothesis 2 is not supported in the sample as a whole.

When the factorial interactions with age class are added, an interesting observation can be made. While the interaction between innovativeness and willingness to grow remains non-significant, we can see that the effect of innovativeness on internationalization is significant only for young firms. Hence we can conclude that Hypothesis 3 (the effect of innovativeness is stronger for younger firms) is supported, but that Hypothesis 4 (the interaction between innovativeness and willingness to grow is stronger for younger firms) is not.

V. DISCUSSION AND CONCLUSIONS

In this study, we analyzed the effects of innovativeness on internationalization at the firm level, using data from a longitudinal survey of the Finnish software industry. The results indicate that innovativeness is indeed positively related to software firm internationalization. This finding is in line with INV theory [34] and with earlier studies on born globals and INVs [8, 22]. However, a somewhat unexpected finding was that the relationship between innovativeness and internationalization is not moderated by willingness to grow.

The findings here indicate that the effect of innovativeness is stronger for younger firms, becoming non-significant for older firms. There are several possible explanations for this tendency. First of all, it is possible that innovativeness only helps younger firms, whereas internationally well-established firms may use other resources for internationalization [19, 20]. A second possibility is that because innovative firms seem to internationalize earlier than their less innovative counterparts, a large proportion of these firms are already international at a later age. Since the firms are already international, the measure of internationalization used in this study does not capture any further international expansion. The age categorization data in Table 2 also shows that the strongest correlation between international revenue and innovativeness exists for firms younger than two years; this could indicate that a new innovative offering is an effective means of penetrating international markets.
The correlation between international revenue and willingness to grow is strongest for firms aged 2–5 years, while the youngest international firms seem to have a weaker growth orientation. This seems to be in line with INV theory, which describes firms that are international from their inception – firms that emphasize growth and profitability only after they have established a presence in international markets. These observations suggest that further studies could apply clustering methods for extracting archetypes of internationalization behavior from the data sets in question. However, in our case, a set of 214 firms with a mean age class of 1.7 necessarily sets some limits on approaches of this kind, with respect to the later phases of internationalizing behavior.

This study, like all other studies, is not without weaknesses. The greatest problem in the research design is how to measure innovativeness. Although self-reporting scales do not share the same weaknesses as patent data or R&D intensity, self-reports are subject to other kinds of measurement error [38]. Moreover, the binary variable measuring internationalization might not have sufficient fidelity to capture the phenomenon. It is quite possible that the firms in question are merely experimenting with internationalization, and that they never achieve an international breakthrough. However, this is not captured in the measure we used.

REFERENCES

[1] Achen, C. H. Why lagged dependent variables can suppress the explanatory power of other independent variables. In Annual Meeting of the Political Methodology Section of the American Political Science Association, UCLA, pp. 20–22 (2000).

[27] Lopez, L. E., Kundu, S., Ciravegna, L. Born global or born regional? Evidence from an exploratory study in the Costa Rican software industry. Journal of International Business Studies, 40(7), 1228–1238 (2009).
{"Source-Url": "https://jyx.jyu.fi/bitstream/handle/123456789/42619/innovation%20as%20a%20driver%20of%20internationalization%20in%20the%20software%20industry.pdf;jsessionid=2EF99C99FAC377A13C04F7034612780A?sequence=1", "len_cl100k_base": 5924, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 19647, "total-output-tokens": 8299, "length": "2e12", "weborganizer": {"__label__adult": 0.0008068084716796875, "__label__art_design": 0.0014324188232421875, "__label__crime_law": 0.000881195068359375, "__label__education_jobs": 0.022857666015625, "__label__entertainment": 0.0003390312194824219, "__label__fashion_beauty": 0.0004138946533203125, "__label__finance_business": 0.32568359375, "__label__food_dining": 0.001201629638671875, "__label__games": 0.0020656585693359375, "__label__hardware": 0.0012369155883789062, "__label__health": 0.0010709762573242188, "__label__history": 0.0010585784912109375, "__label__home_hobbies": 0.00046896934509277344, "__label__industrial": 0.0012845993041992188, "__label__literature": 0.0019054412841796875, "__label__politics": 0.0014181137084960938, "__label__religion": 0.0006613731384277344, "__label__science_tech": 0.045196533203125, "__label__social_life": 0.0004558563232421875, "__label__software": 0.0626220703125, "__label__software_dev": 0.5244140625, "__label__sports_fitness": 0.0005784034729003906, "__label__transportation": 0.0013570785522460938, "__label__travel": 0.0007700920104980469}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32415, 0.07292]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32415, 0.24613]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32415, 0.90734]], "google_gemma-3-12b-it_contains_pii": [[0, 1085, false], [1085, 6339, null], [6339, 12633, null], [12633, 18494, null], [18494, 24649, null], [24649, 31065, null], [31065, 32415, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1085, true], [1085, 6339, null], [6339, 12633, null], [12633, 18494, null], [18494, 24649, null], [24649, 31065, null], [31065, 32415, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32415, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32415, null]], "pdf_page_numbers": [[0, 1085, 1], [1085, 6339, 2], [6339, 12633, 3], [12633, 18494, 4], [18494, 24649, 5], [24649, 31065, 6], [31065, 32415, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32415, 0.22794]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a15be7f7c2b8f8f50cfbd25eb44e3912c080be99
Using Intentions and Plans of Mobile Activities to Guide Geospatial Web Service Composition

Bo Yu and Guoray Cai†
Spatial Information and Intelligence Laboratory, College of Information Sciences and Technology
Penn State University, University Park, PA 16802, USA
Email: {byu, cai}@ist.psu.edu
†Corresponding author

Abstract -- Mobile applications are increasingly taking advantage of the diverse geospatial web services available to meet the information needs of their users. However, matching available web services to a user's information needs is not a trivial task, as there are many contextual factors that may influence their fitness for use. In addition, mobile activities can be highly dynamic and interleaving, which demands a certain level of context adaptation in web service matching policies. Previous work on context-based service matching and composition tends to focus on environmental and functional contexts that can be either sensed directly or defined a priori, and it assumes relatively stable user activities. In this paper, we describe a method for representing and reasoning about the intentional structures of mobile activities, and we use it to contextualize mobile map services. Our context model treats a user's mobile activity as an evolving collaborative plan situated in a set of physical and mental factors. The model explicitly reasons on the intentional structure of the mobile activity to determine the appropriate service matching policy on the fly. The feasibility and benefit of using this model is demonstrated through the implementation of a prototype system, MyTour, a mobile city tour guide application.

Keywords—geographical information; web service composition; activity; context adaptation.

I. INTRODUCTION

Geographical web services are modular web applications that provide services on geographical data, information, and knowledge [38]. With the availability of geographical web services and associated architectures [32] and standards [30], developing advanced geospatial applications becomes a process of orchestrating (or composing) heterogeneous web services to support spatial decision-making, knowledge discovery, and spatial activities [11, 35]. Approaches for automatic or semi-automatic composition of atomic services to generate value-adding services have been proposed in the past, ranging from pure feature matching and syntactic chaining to more sophisticated semantic-based methods. While business communities took a more workflow-oriented approach [23], semantic web communities recently followed ontology-based [20] and AI planning based approaches [31]. These approaches are far from offering a comprehensive solution to the overall challenge of automatically composing web services into a coherent whole. The difficulties in representing and reasoning on semantics and domain processes remain to be addressed.

Mobile map services that provide relevant geographic information to people on the move represent one of the most fundamental and widespread types of location-based services (LBS) [28]. The problem of composing geo web services for mobile applications is unique for two reasons. First, compared with other types of data, geographical information is more heterogeneous in its media, representational forms, and semantics [4, 22]. There is little agreement on the ontology of geographical objects and processes, except at the most abstract level [1]. Second, mobile applications must have a high degree of adaptability of their behavior to the changing environment and work contexts [21].
This makes it difficult to use traditional web service composition methods, since they all assume prescribed actions, states, goals, and events. On the other hand, mobile geographical applications offer a number of advantages and opportunities over other (desktop) applications [8]. In particular, human mobile activities are highly driven by their goals, which can best be sensed or reasoned about from the intentional structure of the larger activity. Such knowledge of the mobile activity offers rich and detailed contextual clues that can be used by a service composition engine to make real-time inferences about information needs. The key to success is to build awareness of the ongoing activity into the web service composition engine, so that web service requests can be flexible and adaptive.

In this paper, we present our idea of contextualizing mobile map service composition in terms of activities. User activities become first-class entities that are represented explicitly by the mobile map service composition engine. Following the cognitive and mental-state views of tool-mediated human activity [33, 36], our model of activities goes beyond traditional AI planning models to include intentions and beliefs towards interrelated goals as more stable and relevant contexts. This model of activity is available to the service composition engine and allows the engine to reason about what geographical information services are needed and when, and how they influence the behavior of the user and the system.

To motivate subsequent discussions, we start with a scenario where a traveler plans a city tour with the assistance of a map-based tour guide that provides geographical information services (see Table 1). From this scenario, we can differentiate four roles that geographical information serves: (1) as part of the spatial constraint on the user's mobile activity, (2) as the criterion for promoting visual saliency to support the user's activity, (3) as knowledge-precondition for action, or (4) as indication of activity state changes. Table 1 describes the scenario as four episodes, each of which was analyzed to understand the information needs.

Table 1. The city-tour scenario: roles of geographical information in supporting the user's activity (excerpt)

- Spatial constraint. The system can use Jim's current location together with the time constraint (half a day) to estimate the maximum distance that Jim could go, and use this as a spatial constraint to retrieve POIs and calculate the appropriate map extent.
- Knowledge for action. Jim needs to have knowledge about the route from the current location to the destination. As a result, the system can highlight the current location on the map to help him finish his task.
- Representing activity state. Jim needs to monitor where he is in relation to the destination. The map service updates itself to show the status of the navigation task.

The selection and composition of geo web services is determined by the information needed to advance the user's activity from its current state to future states. As users make progress on their activities, their intentional focus shifts, and new geo information services must be composed correspondingly. It is the activities supported by mobile map services that play a central role in deciding when and why the user's location becomes an important factor influencing the system's supporting behavior, and how the location information is consumed to provide that support.

The following sections of this paper present our approach in detail.
After a brief review of related work (Section II), we present our computational model for representing user activities based on the SharedPlan theory (Section III). In Section IV, we demonstrate how this model can be employed in the motivating scenario to infer the different roles of location information and how they influence the map-based support. Section V provides the design and implementation details of our prototype system, MyTour. We conclude the paper by discussing the advantages and limitations of our approach, as well as possible future work.

II. RELATED WORK

A. Geo Web Service Composition

Composing web services for an application can be analyzed as a five-phase process, according to Kim et al. [19]. In the first phase (the specification phase), users specify the intended goal of the domain activity to be supported by the composed web services, and produce an abstract specification. This involves the specification of goals, tasks, events, states, and constraints as models of the domain. Then, the second phase (the planning phase) takes the abstract specification of the domain activity and generates a formal representation of process flows that allows machines to automate service composition. Major approaches in this phase include workflow-based composition, template-based composition, and AI planning techniques (such as Markov chains, backward chaining, and graph theory-based techniques) for service chaining. The third phase conducts syntactic and semantic validation of the service composition workflow to ensure that the composed services satisfy the stated goals of the users. The last two phases are service discovery and execution, which will not be the focus of this paper.

In real applications, the first three phases are commonly done by human developers. Such work is time-consuming and error-prone, but most importantly, it requires developers to have good knowledge of both the intended application domain (e.g. wildlife protection) and the technical domain (e.g. geographical information services). Recently, automated service composition methods using AI planning techniques have emerged as promising research directions [31, 35]. However, computational work mostly deals with problems within one of the phases, and rarely addresses cross-phase issues. This is mostly due to the difficulties in bridging the huge semantic gap between the specification phase and the planning phase. The exceptions are recent works on context-based methods for matching and discovering web services [27, 34]. Context is potentially an idea that relates the semantic issues across different phases. Unfortunately, the work on context is limited to some broad categorizations and representation languages, with little influence on service composition.

B. Context and Activity

Contexts affect all phases of service composition. However, the intuitive concept of 'context' is quite tricky to define and formalize. A commonly adopted definition of context, as offered by Dey [9], is "all things that are relevant to the interaction between a user and an application." This definition is too broad to be useful as a formal definition. Other researchers have focused on a subset of all possible contextual factors that have predictive power over an application's behaviors [7]. It is important to distinguish between context and situation.
While a "situation" is an observer-independent and potentially unlimited resource that characterize an actual setting within which a tool is used, a "context", as an expression of certain interpretation of a situation, is observer-dependent and includes a relatively small subset of features that has impact on the behavior of the system [26]. A key point is that relevant features of a context may be highly application-specific. This is consistent with the interactional view of context [10], which argues that something is part of the context because it is used for adapting the interaction between human and the current system, and features of the world become context parameters through their use [37]. Hence, the most important context of information services is the activity within which information is used. The idea of organizing geographical information support in terms of activities has been discussed in some location-based systems. Liao et al [24] uses location data from a wearable GPS location sensor to identify a user’s significant places, and learns to estimating high-level activity categories, e.g. working, shopping, and dining out. Such activity knowledge can then be used to adapt the system’s behaviors. However, the concept of activity used in these systems is lightweight in the sense that they only consider an activity as consisting of a range of actions performed with a collection of computational support [3]. Huang and Gartner [15] provided an example about how a model of human activity can be used to identify relevant context parameters in designing context-aware pedestrian way-finding services. However, all of these existing efforts made only limited advances in modeling and using activities for automated reasoning - they either discussed activity at the conceptual level or applied it at the design stage. None of them have explored the potential of modeling the human activity in a computational way so that the system can maintain an updated activity model and reason about the relevant contextual factors on the fly. Our work attempts to develop a computational model of user activities that can be used by service composition engine to computationally determine when and how geographical information is needed during an ongoing activity. ### III. MODELING SITUATED ACTIVITIES In order to model an activity, we first need to understand what an activity entails. In our approach, we subscribe to Activity Theory to conceptualize human activity [29]. From this perspective, an activity is a dynamic construct expressed over a period of time, and it consists of people, artifacts, an object or motive, socio-cultural rules, and roles. An activity usually starts with an objective as motivations behind the activity. Multiple actions are performed to reach the overall objective. Each action is driven by a conscious intentional goal. Finally, actions are performed by operations that are unconscious, often routine actions carried out automatically as a service. The conceptualization of activity from this perspective provides several requirements for the computational model of activity [18]: 1. Activities are rooted in mental attitudes of individual participants. It is the goals and intentions of participants that lead them to plan and perform intended actions. As a result, to model a human activity, a key is to characterize the mental attitudes of participants in an activity. 2. 
Activities are hierarchically structured, from motive-driven high-level activities through goal-driven actions down to operations that are not goal-directed. Each of these three levels can itself consist of several layers of complexity. Hence, the computational model of an activity must be able to capture the hierarchy of subsidiary actions and operations in an activity. 3. Activities are highly dynamic and time-dependent. As users continue their activities, the states of the activities keep changing. Therefore, the model of an activity must be able to update itself accordingly.

To satisfy these requirements, we model an activity as an evolving collaborative plan between the user and the system, following the computational theory of SharedPlans [14]. In particular, (1) we model the mental state of an activity as the intentions, beliefs, and mutual beliefs of the participants; (2) we model an activity as hierarchically structured plans that are dynamically constructed and executed under specific situational and resource constraints; (3) we model the development of an activity as the evolution from a partial shared plan (PSP) to a full shared plan (FSP) through elaboration. In the rest of this section, we first describe how an activity can be formally represented as a shared plan, followed by a description of the development cycle of a shared plan, which captures the dynamics of an activity.

#### A. The Representation of Activities

In our approach, we represent an activity as a shared plan. In general, a shared plan is a moment-to-moment representation of an unfolding activity. It includes hierarchically organized plans and subplans that form a PlanGraph (see Figure 1).

Figure 1. Graphical representation of a PlanGraph.

To model the necessary mental attitudes, each node in a PlanGraph includes several slots to store the system's beliefs about other agents' mental states: (a) Intentions are slots recording the system's beliefs about the intentions of each agent towards the associated action; (b) Capability indicates the system's belief about an agent's ability to perform the action, such as whether it can identify a recipe or bring the action about; (c) Beliefs are slots for recording the system's beliefs about what the other agents believe about this action. Reasoning about changes of mental attitudes is performed through a set of mental-state operators, as specified in the SharedPlan theory [14].

Unlike the traditional notion of AI plans as abstract expressions of human actions, our notion of plans refers to situated actions. Specifically, we consider that each situated action has both situation-independent and situation-dependent aspects. For example, although there is a generic body of knowledge about how to cook a dinner (assuming one knows a set of recipes), exactly what to cook depends on the available resources (meat, vegetables, ingredients, etc.) as well as the skill and time of the person. This is known as the "knowledge precondition principle" of collaborative plans [25]. In a PlanGraph, we handle knowledge preconditions as a special type of node: parameters. Nodes with an oval shape in Figure 1 indicate parameters, and nodes with a rectangle shape represent subsidiary actions. The plan underneath a parameter node is the plan for identifying that parameter. The theory of situated actions and plans [2, 36] offers insights on how information services (as part of the situational context) condition or constrain human activities.
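The node structure just described can be sketched directly in Python; this is our illustrative reading of the PlanGraph, not the authors' code, and the field names are assumptions:

```python
# Illustrative sketch of PlanGraph nodes with mental-attitude slots.
from dataclasses import dataclass, field

@dataclass
class PlanNode:
    name: str
    intentions: dict = field(default_factory=dict)   # agent -> intention toward the action
    capability: dict = field(default_factory=dict)   # agent -> believed ability (recipe known, can perform)
    beliefs: list = field(default_factory=list)      # system's beliefs about other agents' beliefs
    children: list = field(default_factory=list)     # subsidiary actions and parameters

@dataclass
class ActionNode(PlanNode):        # rectangle nodes in Figure 1
    multi_agent: bool = False
    basic: bool = True

@dataclass
class ParamNode(PlanNode):         # oval nodes in Figure 1
    value: object = None           # the plan under a ParamNode identifies this value

# A fragment of the tour scenario: a complex group action with an
# as-yet-unidentified destination parameter.
root = ActionNode("take_tour", multi_agent=True, basic=False,
                  children=[ParamNode("destination")])
```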
B. Representing the Developmental Aspects of Activities

An activity normally starts with an overall purpose or objective that the agents intend to accomplish. This is represented as the root node of a PlanGraph. As the activity proceeds, the human user and the system collaborate (communicate and interact with each other) toward the success of their shared goal in the activity [6]. The development of the activity is a process that iterates over four steps of reasoning (see Figure 2): (1) recognition; (2) explanation; (3) elaboration; and (4) behavior generation. In the plan explanation phase, the system analyzes each input to infer beliefs about changes in the user's mental states, the activity, or the environment. The system then attempts to explain how the meanings of these new beliefs relate to the current PlanGraph; if the new beliefs can be successfully explained, the system updates the PlanGraph to accommodate them. In the plan elaboration phase, the system adopts or updates its mental states according to these changes, performs means-end reasoning to elaborate the shared plan, and performs individual actions to advance the activity.

Figure 2. Reasoning process with the PlanGraph.

1) Recognition. This step refers to the process by which the system recognizes input from the users or the environment and establishes corresponding beliefs. The inputs can come from the user's explicit requests, such as selecting a menu item on the client or a speech utterance in a spoken dialogue system, or they are collected implicitly by the system through different sensors, e.g. the GPS sensor that detects changes in the user's location. The system searches the domain knowledge base for an appropriate match between the inputs and their meanings through three levels of interpretation [13], [16]: lexical, syntactic, and semantic. The result of recognition is a set of new beliefs \( P \), indicating how the system understands the new inputs.

2) Explanation. The second step is to determine how these newly recognized beliefs contribute to the collaborative activity by augmenting the partial shared plan. This process starts from the root node of the PlanGraph and traverses the nodes to decide whether the new beliefs can contribute to any node of the PlanGraph. For each node traversed, the system must decide whether the new belief from the input contributes to the current plan node. To model this step, we introduce two subsidiary processes to determine the relations between a belief \( \rho \) and a plan node: A-Contributes determines the relationship between \( \rho \) and an action, and P-Contributes determines the relationship between \( \rho \) and a parameter.

Explanation(Proposition \( \rho \), PlanNode \( \alpha \)):
1. if \( \alpha \) is an action and \( \rho \) contributes to \( \alpha \) in a certain way, i.e. A-Contributes(\( \rho, \alpha \)) is true, the current action \( \alpha \) explains \( \rho \);
2. if \( \alpha \) is a parameter and \( \rho \) contributes to \( \alpha \) in a certain way, i.e. P-Contributes(\( \rho, \alpha \)) is true, the parameter \( \alpha \) explains \( \rho \);
3. otherwise, \( \rho \) does not contribute directly to the current node \( \alpha \). For each subsidiary parameter and action of \( \alpha \) (\( \beta_1, \beta_2, \ldots, \beta_k \)), repeat the process: Explanation(\( \rho, \beta_i \)).
Here, A-Contributes(Proposition \( \rho \), PlanNode \( \alpha \)) can mean one of four things:
1) \( \rho \) indicates the initiation of a subsidiary shared plan for action \( \alpha \); return true;
2) \( \rho \) indicates the completion of the current shared plan for \( \alpha \); the system ascribes the belief \( \mathrm{Bel}(Sys, \mathrm{Bel}(User, \mathrm{FSP}(GR, \alpha, t_\alpha, C_\alpha)), t) \) to \( \alpha \) and returns true;
3) \( \rho \) indicates that an action \( \delta \) is a sub-action of the current action \( \alpha \); the system ascribes the belief \( \mathrm{Bel}(Sys, \mathrm{Bel}(User, \delta \in \mathrm{Recipe}(\alpha)), t) \) and returns true;
4) \( \rho \) represents a piece of information \( v \) that is part of the performance context of the current action \( \alpha \); the system ascribes the belief \( \mathrm{Bel}(Sys, \mathrm{Bel}(User, v \in \mathrm{Val}(\alpha)), t) \) and returns true.

P-Contributes(Proposition \( \rho \), ParamNode \( \alpha \)) can mean one of the following:
1) \( \rho \) refers to a value \( v \) of the current parameter \( \alpha \); the system ascribes the belief \( \mathrm{Bel}(Sys, \mathrm{Bel}(User, v \in \mathrm{Val}(\alpha)), t) \) to \( \alpha \) and returns true;
2) \( \rho \) indicates that an action \( \delta \) is a sub-action of identifying the current parameter \( \alpha \); the system ascribes the belief \( \mathrm{Bel}(Sys, \mathrm{Bel}(User, \delta \in \mathrm{Recipe}(\mathrm{id.param}(\alpha))), t) \) and returns true.

Figure 3. Plan explanation algorithm.

After a successful explanation, the PlanGraph is further updated to accommodate these changes, because the new beliefs might imply chained changes to other related actions.

3) Plan Elaboration. After the plan explanation process, the context of the activity has changed. Therefore, the system needs to elaborate the PlanGraph to accommodate the changes and advance the collaborative activity from the system side. The elaboration process begins at the root node of the PlanGraph and uses a depth-first traversal to visit the whole plan, applying the reasoning rules. The elaboration ends when no more parts of the PlanGraph can be further elaborated. The reasoning rules for plan elaboration follow the principles of shared cooperative activities [5] and include:

(1) Recipe Selection: the system intends that the group \( GR \) develop a full shared plan to select a recipe for the action:
\[ \mathrm{IntTh}(Sys, \mathrm{FSP}(GR, \alpha, t_\alpha, C_\alpha)) \Rightarrow \mathrm{IntTh}(Sys, \mathrm{FSP}(GR, \mathrm{select\_recipe}(\alpha), t_{select}, C_{select})) \]

(2) Constraint Satisfaction: all members of the group must be committed to ensuring that the constraints for doing \( \alpha \) will hold:
\[ \mathrm{IntTh}(Sys, \mathrm{FSP}(GR, \alpha, t_\alpha, C_\alpha)) \Rightarrow \mathrm{IntTh}(Sys, \mathrm{constr}(\alpha)) \]

4) Behavior generation. Following the cooperative nature of SharedPlans [17], the system needs to adopt corresponding commitments to exhibit helpful behaviors. In general, two kinds of helpful behaviors can be provided by the system: performing domain actions that are helpful to the user's task, and communicating relevant information to ensure the user's success in performing the action [17].
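To make the explanation step concrete, here is a minimal sketch of the traversal (our simplification, reusing the ActionNode/ParamNode classes from the sketch in Section III-A; the contribute predicates are reduced to trivial checks and stand in for the full mental-state operators):

```python
# Simplified plan-explanation traversal (cf. Figure 3); not the full
# SharedPlan machinery. Propositions are plain dicts here.
def a_contributes(p, action):
    """Does proposition p initiate, complete, or refine this action's plan?"""
    return p.get("about") == action.name

def p_contributes(p, param):
    """Does proposition p supply a value for this parameter?"""
    return p.get("param") == param.name

def explain(p, node):
    """Attach belief p where it contributes, searching the PlanGraph top-down."""
    if isinstance(node, ParamNode) and p_contributes(p, node):
        node.value = p.get("value")          # e.g. the identified destination
        return True
    if isinstance(node, ActionNode) and a_contributes(p, node):
        node.beliefs.append(p)
        return True
    # Otherwise, try each subsidiary parameter and action.
    return any(explain(p, child) for child in node.children)

explain({"param": "destination", "value": "Stadium"}, root)  # returns True
```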
IV. ACTIVITY-MEDIATED ADAPTATION OF WEB SERVICE COMPOSITION

By modeling the user's activity in a PlanGraph as described in Section III, web service composition can be designed to adapt to the changing activity state. In this section, we focus in particular on the geographical information needs of an advancing activity, and demonstrate how maps play different roles in the development of the shared plan of an activity. We now return to the motivating scenario of Section I. Figure 4 depicts the higher-level view of the PlanGraph (the mental-attitude slots associated with each node are omitted for simplicity), which indicates the major goals of the user in the development of the activity.

Figure 4. Upper-level part of the PlanGraph in the scenario. [Nodes are colored blue, red, or both to represent the involvement of the system only, the user only, or both user and system, respectively.]

This PlanGraph reveals that the user's top-level goal of touring a place is decomposed into three major goals: (a) identify the place to go, (b) plan the trip, and (c) navigate to the destination. Each of these goals can be further divided into subsidiary actions that may be performed by the system (in blue), the user (in red), or the group together (in both colors). During the different development phases of this PlanGraph, we can identify the points in time when geographical information services are needed. Next, we analyze the four episodes of the scenario to illustrate the changes in the system's behavior.

The first episode (in Table 1) happens when the user indicates that the overall goal is to take a tour under a time constraint:

\[ \mathrm{Bel}(Sys, \mathrm{IntTh}(User, \mathrm{FSP}(GR, \mathrm{take\_tour}, t_{tour}, C_{tour}))) \]
\[ \mathrm{Bel}(Sys, (\mathrm{duration} = \text{half a day}) \in \mathrm{constr}(\mathrm{take\_tour})) \]

From these beliefs, the system commits itself to the group activity:

\[ \mathrm{IntTh}(Sys, \mathrm{FSP}(GR, \mathrm{take\_tour}, t_{tour}, C_{take\_tour})) \]

This commitment leads the system to adopt further intentions based on the elaboration rules described in Section III. The result of this reasoning is shown as the PlanGraph in Figure 5.

Figure 5. The PlanGraph at episode I of the scenario.

The system further elaborates the plan by adopting a 'recipe' (i.e. a strategy to accomplish a goal), which can be found in the knowledge base. When it comes to the behavior generation phase, the system searches possible places in order to help the user identify the place to go. During the elaboration of the complex action "Search nearby places", the temporal constraint of the overall activity ('half a day'), which was given earlier, is explained into the plan for the calculation of the distance extent of the tour. Figure 6 shows the state of the activity plan up to the moment when the destination of the tour (Stadium) is determined (the end of the second episode). Based on this plan, our service composition engine can infer two sets of web services (indicated in Figure 6 as square text boxes in white): (1) a location service; (2) a proximity calculation service (e.g. buffer zones around the current location). In this episode, geographical information services are used to support the user's goal of determining a destination for the tour within the given time limit.

The second episode describes the situation where the user has examined the possible locations suggested by the system and needs to decide on the destination to visit. The system reasons on the state of the activity (based on the PlanGraph) and determines that the most appropriate behavior is to show a map of potential destinations, highlighting the places of interest according to their distance from the current location; see the map in Figure 6. In order to generate this map, a request to the 'map presentation service' is identified and executed.
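A sketch of the inference the engine performs at this point might look as follows (ours; the service names match the white boxes in Figure 6, while the walking speed and the radius formula are invented for illustration):

```python
# Hypothetical sketch: deriving episode-one service requests from the
# activity's temporal constraint.
def infer_service_requests(current_location, time_budget_h, speed_kmh=5.0):
    requests = [{"service": "location_service", "args": {}}]
    # Half the budget out, half back: a crude reachable-distance estimate.
    radius_km = speed_kmh * time_budget_h / 2
    requests.append({"service": "proximity_calculation_service",
                     "args": {"center": current_location, "radius_km": radius_km}})
    return requests

for req in infer_service_requests((40.79, -77.86), time_budget_h=4.0):
    print(req["service"], req["args"])
```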
While the user physically navigates through space, the system demonstrates helpful behavior by displaying a map of the route from the current location to the destination; again, a 'map presentation service' is requested. The above scenario, although relatively simple, demonstrates the evolution of the activity plan and its impact on web service composition. The example suggests that composing web services requires a degree of adaptivity in order to be coupled dynamically with the situated activity.

V. IMPLEMENTATION

A prototype software agent, MyTour, was implemented in this study to demonstrate the feasibility of our approach in the motivating scenario. As outlined in Figure 8, the system has a modular architecture with four major modules. The mobile client monitors the ongoing interaction between the user and the system, captures all the meaningful inputs (e.g. the user's requests or location coordinates), and sends them to the activity manager. The activity manager is the core component of the system: it maintains the PlanGraph structure, performs reasoning to keep the PlanGraph updated, and generates map-based outputs for the client. The knowledge base provides the general knowledge that the activity manager needs in the different phases of the reasoning process. The mapping component is in charge of interacting with the underlying spatial information infrastructure.

A. Activity Manager

The activity manager is at the centre of the whole system design; it maintains the PlanGraph structure and performs the associated reasoning algorithms. The PlanGraph is implemented as a dynamic data structure composed of several data objects (Figure 9). The overall PlanGraph object includes the root plan node and the current attention focus. Each plan node records its parent node and its subsidiary plan nodes, which together allow recursive traversal of the PlanGraph hierarchy. In addition, each plan node includes a list of participating agents. We use two subclasses of the PlanNode class to represent two specialized types of plans: (1) ActionNode defines the properties of an action, such as its type (e.g. single-agent/multi-agent, basic/complex); each ActionNode also includes several slots defining the system's beliefs about the corresponding action, as described in Section III; (2) ParamNode models the knowledge preconditions of a plan as parameters to be identified and used.

1) **Reasoning Engine.** The reasoning engine in the activity manager organizes the reasoning processes on two levels. At the higher level, four module functions control the overall workflow of the reasoning process: the Input Interpretation module, the Plan Explanation module, the Plan Elaboration module, and the Response Control function, corresponding to the reasoning process discussed in Section III-B. At the lower level, each of these four modules is implemented on top of a logic programming engine to allow dynamic inference behaviors. As discussed in Section III, each step of the reasoning process is guided by different reasoning rules over dynamic sets of facts about the activity. To achieve this, we employ the knowledge engine Pyke [12], which enables both forward-chaining and backward-chaining inference. In the Plan Explanation module, forward chaining is used to update the system's beliefs in the PlanGraph: the reasoning engine uses the PlanGraph to assert all the facts about the shared plan, and then activates plan justification rules to generate new facts about the shared plan. During the Plan Elaboration phase, backward chaining is used to derive the further intentions and beliefs that the system may adopt from its initial commitment to the group activity.
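The paper does not show Pyke rule files, so the following plain-Python sketch only illustrates the forward-chaining step just described: plan facts are asserted, then justification rules fire until a fixed point. The facts and rules here are invented for illustration and stand in for Pyke's engine:

```python
# Illustrative forward chaining over plan facts (a stand-in for Pyke).
facts = {("intend", "User", "take_tour"),
         ("constraint", "take_tour", "half_day")}

def rule_commit(fs):
    """If the user intends an action, the system commits to the group activity."""
    return {("intend", "Sys", a) for (k, ag, a) in fs if k == "intend" and ag == "User"}

def rule_select_recipe(fs):
    """A committed action triggers a recipe-selection sub-plan."""
    return {("intend", "Sys", ("select_recipe", a))
            for (k, ag, a) in fs
            if k == "intend" and ag == "Sys" and isinstance(a, str)}

rules = [rule_commit, rule_select_recipe]
changed = True
while changed:                       # iterate until no rule adds a new fact
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

for f in sorted(map(str, facts)):
    print(f)
```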
2) **Knowledge Base.** The Knowledge Base module stores the different types of knowledge used in the system's reasoning and behavior planning processes. We use a relational schema (stored in a relational database powered by PostgreSQL) to capture the knowledge necessary to (1) map the user's spoken language input to *semantic units* (actions, locations, times, numbers, objects, etc.), (2) choose *recipes* for actions, (3) discover appropriate geographical information services based on *metadata*, and (4) make behavioral decisions based on *adaptation rules*. Wherever semantic markup is needed, XML is used.

**B. Client-Server Interaction**

The system adopts a distributed architecture in which the interaction between mobile clients and the activity manager takes place over standard HTTP. Each time a mobile client registers a map service with the server, the server creates an instance of the activity manager, which is in charge of modeling the activity with that mobile client and providing map displays. Access to geospatial data services is also through HTTP, so the system can integrate map data from multiple data sources and provide on-the-fly vector data as well. The client currently used in the prototype system is built on the Android mobile platform. The system supports three forms of input from the client: (a) the selection of an action from a list of possible actions, (b) free-text input, and (c) location information retrieved by the mobile device. The user's input is encoded in a simple XML request document that is sent to the server. When the system receives a request from the client, it dispatches the request to the corresponding activity manager based on the unique client identifier. The activity manager follows the reasoning process to update the PlanGraph and returns appropriate map responses to the client. The response message is encoded in an XML format describing the map content and styles (following the OpenGIS Web Map Context (WMC) specification) and is interpreted by the client to render the map.
**VI. DISCUSSION AND CONCLUSIONS**

In this paper, we have described a computational approach to modeling situated activities in order to contextualize mobile map service composition. Specifically, we adopt the SharedPlans theory to build the activity model, which has the capability to: (a) capture the internal mental attitudes of the users; (b) model the hierarchically structured actions in an activity that are intended and performed within specific situational components (e.g. constraints, knowledge preconditions, intentional structure, and attentional state); and (c) adapt to the ongoing development of an activity as the shared plan evolves from partial to full. The implementation of the prototype system, *MyTour*, demonstrates the feasibility of the collaborative-plan approach and of the PlanGraph model for capturing the dynamics of the usage context from an activity-centric perspective. Our work extends the AI planning approach to web service composition with the added flexibility and robustness of intention-based plans, achieving activity-adaptive service composition.

This study can be further extended in several directions. First, our prototype system has not yet been deployed and tested with real user activities, so the validity of our method remains to be evaluated; a user-centered evaluation study is necessary to know how well it works in real situations. Second, the performance of the system depends on the quality of the various types of knowledge built into it, such as the domain knowledge about actions and recipes, and the cartographic knowledge needed to generate appropriate map displays. Therefore, a knowledge elicitation process that allows us to collect these types of knowledge from human experts/users in the application domain is planned for a future study. In addition, the input interpretation module of the current system is very limited and supports only lexical-level interpretation. In the future, we plan to explore the possibility of integrating multiple modalities of user input (e.g. speech,
{"Source-Url": "http://spatial.ist.psu.edu/cai/2010-APSCC-Yu-Cai.pdf", "len_cl100k_base": 7165, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 29630, "total-output-tokens": 7635, "length": "2e12", "weborganizer": {"__label__adult": 0.00036835670471191406, "__label__art_design": 0.0007262229919433594, "__label__crime_law": 0.0005030632019042969, "__label__education_jobs": 0.0027904510498046875, "__label__entertainment": 0.0001424551010131836, "__label__fashion_beauty": 0.00022149085998535156, "__label__finance_business": 0.0004878044128417969, "__label__food_dining": 0.0004978179931640625, "__label__games": 0.0009784698486328125, "__label__hardware": 0.0014781951904296875, "__label__health": 0.0008668899536132812, "__label__history": 0.0012416839599609375, "__label__home_hobbies": 0.00011074542999267578, "__label__industrial": 0.0005207061767578125, "__label__literature": 0.0005941390991210938, "__label__politics": 0.00045371055603027344, "__label__religion": 0.0004525184631347656, "__label__science_tech": 0.3359375, "__label__social_life": 0.0001475811004638672, "__label__software": 0.040679931640625, "__label__software_dev": 0.6083984375, "__label__sports_fitness": 0.0003337860107421875, "__label__transportation": 0.0013303756713867188, "__label__travel": 0.0007734298706054688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35704, 0.00392]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35704, 0.69112]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35704, 0.918]], "google_gemma-3-12b-it_contains_pii": [[0, 5336, false], [5336, 10282, null], [10282, 15900, null], [15900, 22264, null], [22264, 27562, null], [27562, 30275, null], [30275, 35704, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5336, true], [5336, 10282, null], [10282, 15900, null], [15900, 22264, null], [22264, 27562, null], [27562, 30275, null], [30275, 35704, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35704, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35704, null]], "pdf_page_numbers": [[0, 5336, 1], [5336, 10282, 2], [10282, 15900, 3], [15900, 22264, 4], [22264, 27562, 5], [27562, 30275, 6], [30275, 35704, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35704, 0.01554]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
0134bcc713e33f7b82b5a1cecd3586a6f71b5100
Defining and Analyzing P2P Applications with a Data-Dependency Formalism

Ayoub Ait Lahcen\textsuperscript{a,b}, Didier Parigot\textsuperscript{a}, Salma Mouline\textsuperscript{b}
\textsuperscript{a}Zenith Team, Inria Sophia Antipolis, Sophia Antipolis, France
\textsuperscript{b}LRIT, Unité associée au CNRST URAC 29, Faculté des Sciences, Rabat, Morocco
Email: \{ayoub.ait_lahcen, didier.parigot\}@inria.fr, mouline@fsr.ac.ma

Abstract—Developing peer-to-peer (P2P) applications has become increasingly important in software development. Nowadays, a large number of organizations of many different sectors and sizes depend more and more on collaboration between actors to perform their tasks. These P2P applications usually have a recursive behavior that many modeling approaches cannot describe and analyze (e.g., finite-state approaches). In this paper, we present a formal approach that combines component-based development with well-understood methods and techniques from the fields of Attribute Grammars and Data-Flow Analysis in order to specify the behavior of P2P applications and then construct an abstract representation (i.e., a Data-Dependency Graph) on which analyses can be performed.

Keywords—Data-Dependency Formalism; Peer-to-Peer Applications; Data-Flow Analysis.

I. INTRODUCTION

A P2P architecture is one in which an entity acts at the same time as a server and as a client in a P2P network [1]. This is completely different from Client/Server networks, in which the participating entities can act as a server or as a client but cannot embrace both capabilities. The responsibilities of the entities are therefore approximately equal, and each entity provides services to the others as a peer. In software systems, especially those that support P2P applications, data are required for achieving the computing activity and for driving the interactions between software entities. Nevertheless, software system design is usually based on computational aspects, with data as an afterthought. A data-centric approach provides a different way of viewing and designing applications: it lets us focus on the flow and transformation of data through the software system. In this context, we have defined a Data-Dependency Graph (DDG). It has been chosen as the abstract representation for P2P applications for the following two reasons. Firstly, it represents only one data-flow model (dictated by the dependencies between data) of the execution. Secondly, the DDG exposes the right level of detail: enough to perform Data-Flow Analysis (DFA).
In this paper, we present a formal approach that combines Component-Based Software Engineering (CBSE) [2] with well-understood methods and techniques from the field of DFA [3] (commonly used in compiler construction) in order to construct an abstract representation (i.e., the DDG) of P2P applications, and then perform data-flow analyses on it. This approach consists of a formalism called DDF (Data-Dependency Formalism). DDF provides the necessary set of operations to specify and analyze P2P applications. DDF can be considered a minimal and lightweight formalism for the following two reasons. Firstly, the goal of DDF is to formally construct the dependency graph, which exposes the right level of detail to perform data-flow analysis. Secondly, DDF is not intended to express business code or to be a general-purpose programming language; this follows Domain-Specific Language (DSL) [4] principles. We note that DDF is highly inspired by the main characteristics of Attribute Grammars (AGs), because they are able not only to construct similar dependency graphs, but also to naturally capture the complex recursive behavior that is very frequent in P2P applications (cf. Section II-A) and that many other approaches cannot describe.

This paper is organized as follows. In Section II, we present our motivations in more detail. In Section III, we illustrate our approach through the example of a Gossip protocol. In Section IV, the DDF formalism is presented. In Section V, we show how Data-Flow Analysis techniques can be used to analyze the dependency graph. Finally, a conclusion is presented in Section VI.

II. MOTIVATIONS

A. Specificity of P2P applications

Important properties of P2P applications are scalability and self-organization, because of their very large user base and the specificity of the connections between different peers (e.g., low-bandwidth connections). To support scalability and self-organization in such networks, a large number of P2P-specific algorithms and protocols have been developed. These algorithms and protocols are often executed recursively. Consider, for instance, reputation computation, which is a problem of great importance in P2P environments [5] (a simple example justifying this importance is the case where, while downloading files with P2P file-sharing software, we want to choose only reliable peers). The reputation computation relies on a sequence of queries for getting the trust information about a peer A and the corresponding responses. This computation must be performed recursively, because a response returned from another peer B results in a query about the trustworthiness of B. In addition, this trust computation needs to receive all the information in the right order, since the cut-off may rely on that order. Such recursive call-backs can be viewed as a sequence of well-formed parentheses if a query call is replaced by a left parenthesis and the corresponding response by a right parenthesis. Therefore, the set of sequences describing these recursive call-backs is a Dyck-Language\(^1\). It is a well-known result from formal language theory that a Dyck-Language is not a regular language [6]. Thus, no Finite-State Automaton (FSA) exists that accepts a Dyck-Language. The kind of recursive call-backs presented above, which have a properly nested structure, can be well defined in terms of context-free languages or Pushdown Automata [3].

\(^1\) The Dyck-Language \(D\) is the subset of \(\{x, y\}^*\) such that, if \(x\) is replaced by a left parenthesis and \(y\) by a right parenthesis, we obtain sequences of properly nested parentheses [6].
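The nesting argument can be made concrete in a few lines of code: checking well-nested query/response traces requires an unbounded counter, which is exactly what a finite-state automaton lacks. A minimal sketch of ours, with 'q' and 'r' standing for a query and its response:

```python
# Recognizing the Dyck-style query/response language with a counter.
def well_nested(trace: str) -> bool:
    depth = 0
    for event in trace:
        depth += 1 if event == "q" else -1
        if depth < 0:          # a response with no pending query
            return False
    return depth == 0          # every query eventually got its response

print(well_nested("qqrqrr"))   # True: nested trust queries about peers A, B, ...
print(well_nested("qrr"))      # False
```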
However, it is frequently the case that P2P protocols present more complex recursive call-backs, which give rise to context-sensitive structures, e.g., interactive structures that adjust their behavior when the context changes. Consider, for example, the case where four neighboring nodes exchange information according to an interaction that corresponds to two interleaved recursive call-backs. This kind of interaction \((a^n b^m c^n d^m)\) is context-sensitive and cannot be described by context-free languages [3]. Referring to the research work on Attribute Grammars (AGs) [7], which can express context-sensitive structures, the recursive behavior of P2P applications can be captured by describing both the control and data flow of each interaction. In addition, this behavior can be analyzed using DFA techniques.

B. Towards Data-Flow Analysis of component-based P2P applications

1) Model checking and the specificity of P2P applications: Model checking is an automated technique that, given a finite-state model of a system and a formal property, systematically checks whether this property holds for (a given state in) that model [8]. It explores all possible states of the system in an exhaustive manner. Model checking has been successfully applied to a wide range of systems, such as embedded systems, hardware design, and software engineering. Unfortunately, not all systems can take advantage of its power. One reason is that some systems cannot be described as a finite-state model; this is in particular the case for P2P applications. Another reason is that model checking is not suited to data-intensive applications (which, in many cases, are developed using the P2P paradigm). The recent book on model checking [8] clearly shows why the verification of data-intensive applications is extremely hard: even if there are only a small number of data, the state space that must be analyzed may be very large.

2) Verification by Data-Flow Analysis: Data-flow analysis refers to a body of techniques which derive information about the flow of data along software system execution paths [3]. The execution of a system can be viewed as a series of transformations of the system state, which consists of the values of all the data in the system. Each execution of an intermediate statement transforms an input state to an output state. We denote these data-flow values before and after a statement \(s\) by INPUTS\([s]\) and OUTPUTS\([s]\). To analyze the behavior of a system, we must consider all the possible paths (i.e., sequences of system states) through a flow graph that the system execution can take. Thus, solving a problem in data-flow analysis reduces to finding a solution to a set of constraints (called data-flow equations) on the INPUTS\([s]\) and OUTPUTS\([s]\) for all system statements. A broad range of system properties can be computed at this level of data abstraction, including properties like safety and liveness that model checking cannot compute for infinite-state systems (cf. e.g., [9]). In addition, several algorithms have been proposed in the literature to compute these properties. To date, however, the most dominant application of these algorithms and, more generally, of Data-Flow Analysis, is in the context of compiler construction, in particular for the Attribute Grammar formalism, which is used to describe the semantic analysis in most compilers.
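As a concrete instance of such data-flow equations, the classic reaching-definitions analysis can be solved by iterating the IN/OUT equations to a fixed point. A minimal sketch of ours, over an invented four-statement flow graph:

```python
# Iterative reaching-definitions analysis on a tiny flow graph.
# statement id -> (variable it defines or None, successor ids)
stmts = {1: ("x", [2]), 2: ("y", [3]), 3: ("x", [2, 4]), 4: (None, [])}
preds = {s: [p for p in stmts if s in stmts[p][1]] for s in stmts}

IN, OUT = {s: set() for s in stmts}, {s: set() for s in stmts}
changed = True
while changed:
    changed = False
    for s, (var, _) in stmts.items():
        new_in = set().union(*(OUT[p] for p in preds[s])) if preds[s] else set()
        gen = {(var, s)} if var else set()
        kill = {(v, d) for (v, d) in new_in if v == var}
        new_out = gen | (new_in - kill)
        if (new_in, new_out) != (IN[s], OUT[s]):
            IN[s], OUT[s] = new_in, new_out
            changed = True

print(IN[4])   # the definitions of x and y that reach statement 4
```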
Our motivation in this context is to use these well-understood methods and techniques from the field of AGs in order to construct an abstract representation of P2P applications and then perform data-flow analyses on it.

III. ILLUSTRATIVE EXAMPLE: Gossip protocol

In order to illustrate that our approach is useful, especially in the context of P2P applications, we explain our dependency formalism through an example: the Gossip protocol [10]. The Gossip protocol, also called an epidemic protocol, is well known in the P2P community. It is mainly used to ensure reliable information dissemination in a distributed system, in a manner closely similar to the spread of epidemics in a biological community. This kind of dissemination is a common behavior of various P2P applications and, according to [11], a large number of distributed protocols can be reduced to the Gossip protocol. Different variants of the Gossip protocol exist; however, a template that covers a considerable number of those variants has been presented by Jelasity in [11]. In our example, we rely on this template, shown in Algorithm 1.

Algorithm 1. The gossip algorithm skeleton (from [11]):

    loop
        timeout(T)
        node ← selectNode()
        send gossip(state) to node
    end
    procedure onPushAnswer(msg)
        send answer(state) to msg.sender
        state ← update(state, msg.state)
    end
    procedure onPullAnswer(msg)
        state ← update(state, msg.state)
    end

To model this Gossip protocol, we consider a set of nodes, each of which is activated exactly once every \(T\) time units and then spreads data in the network by exchanging messages. Basically, when a node receives data, it responds to the sender and propagates the data to another node in the network (in practice, the data are propagated to a subset of nodes selected according to a specific algorithm). In terms of services, a node is a component that has two activities: serving and consuming data. There are two input services for the serving activity and two output services for the consuming activity. These services are described in the node interface as follows:

\[ \{ \mathrm{answer}(resp : String), \mathrm{gossip}(info : String) \}_{in}, \quad \{ \mathrm{gossip}(info : String), \mathrm{answer}(resp : String) \}_{out} \]

The gossip service is for the propagation of data, and the answer service is for sending a response to the sender. The behavior of the input services (serving activity) simply mirrors the steps of the output services (consuming activity). From this description of services, we can intuitively construct a simple dependency graph between services, i.e., the output services of a node are connected to the input services of another node, and so on. This graph represents a part of the control flow but is not very explicit about the data flow. In fact, we do not know the dependencies between services and between data within a node.
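To show the skeleton in executable form, here is a minimal single-process Python sketch of ours for the push-pull template, with message sends reduced to direct calls; selectNode is random choice and update is max, both illustrative stand-ins:

```python
# Single-process simulation of Algorithm 1's gossip skeleton.
import random

class Node:
    def __init__(self, name, state, network):
        self.name, self.state, self.network = name, state, network

    def on_timeout(self):                        # active thread: fires every T
        peer = random.choice([n for n in self.network if n is not self])
        peer.on_push(self)                       # send gossip(state) to node

    def on_push(self, sender):                   # onPushAnswer
        sender.on_pull(self)                     # send answer(state) to msg.sender
        self.state = max(self.state, sender.state)   # state <- update(state, msg.state)

    def on_pull(self, sender):                   # onPullAnswer
        self.state = max(self.state, sender.state)

net = []
net.extend(Node("n%d" % i, i, net) for i in range(4))
for _ in range(10):
    random.choice(net).on_timeout()
print([n.state for n in net])   # states converge toward the maximum
```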
To complete this interface with a description of both control and data flow, our formalism specifies the behavior with a set of rules:

\[
\begin{align*}
r_1 &: \mathrm{timeout}(T) \rightarrow (\mathrm{gossip}(state_x), node_y) \\
r_2 &: (\mathrm{gossip}(state_y), node_y), [\mathrm{onPush}] \rightarrow (\mathrm{answer}(state_x), node_y) \\
r_3 &: (\mathrm{gossip}(state_y), node_y), [\mathrm{onPull}] \rightarrow \\
r_4 &: (\mathrm{answer}(state_y), node_y) \rightarrow
\end{align*}
\]

Here, \(r_1\) indicates that the internal service timeout activates \(node_x\) every \(T\) time units, which then sends the data \(state_x\) to \(node_y\) through the service gossip. \(r_2\) indicates that \(node_x\) receives the data \(state_y\) from \(node_y\) and then responds by sending the data \(state_x\) through the service answer, if the condition onPush is satisfied. onPush is a guard condition (to keep things simple, we will ignore guard conditions in this example). \(r_3\) indicates that \(node_x\) receives the data \(state_y\) from \(node_y\) through the service gossip. \(r_4\) indicates that \(node_x\) receives the data \(state_y\) from \(node_y\) through the service answer.

With these rules, the system can be viewed as a set of components where each component has inputs (the left side of the rules) and outputs (the right side of the rules). The inputs receive data carried by services and, after computation, these data can be sent through the outputs. Therefore, we can extract a Data-Dependency Graph of the whole system by connecting together the partial data-dependency graphs corresponding to each component used in the system. Once the DDG is defined, we can perform several data-flow analyses.
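For illustration, the partial data-dependency graphs induced by \(r_1\)–\(r_4\) can be computed mechanically from an encoding of each rule's input and output data. The encoding below is ours and the datum names are informal:

```python
# Extracting def-use edges from the gossip rules r1..r4.
rules = [
    ("r1", ["timeout.T", "state_x"], ["gossip.info"]),   # timeout -> gossip(state_x)
    ("r2", ["gossip.info", "state_x"], ["answer.resp"]), # gossip -> answer(state_x)
    ("r3", ["gossip.info"], ["state_x"]),                # onPull: update state_x
    ("r4", ["answer.resp"], ["state_x"]),                # answer received: update state_x
]
ddg = {(src, dst) for _, ins, outs in rules for src in ins for dst in outs}
for src, dst in sorted(ddg):
    print(src, "->", dst)
```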
IV. DATA-DEPENDENCY FORMALISM

Our formalism is highly inspired by the main characteristics of Attribute Grammars (AGs). AGs were introduced by Knuth [12] and have been widely studied since [7]. An attribute grammar is an extension of a context-free grammar that precisely describes both control and data flow. In this context, an AG production describes an elementary control flow of the form \(X_0 \rightarrow X_1, \ldots, X_n\) (\(X_0\) represents a node in a tree and \(X_1, \ldots, X_n\) are its child nodes), whereas a semantic method \(f\) describes the computation of the synthesized attributes of \(X_0\) and the inherited attributes of the \(X_i\), \(1 \leq i \leq n\). The synthesized attributes are the result of the attribute computation and may use the values of the inherited attributes. Synthesized attributes are used to pass computed information up the tree, while inherited attributes pass information down and across it. Many techniques and algorithms for data-flow analysis were introduced in the AG literature and in our previous works (e.g., [13], [14]). These techniques and algorithms are commonly used in compiler construction for performing optimizations on a program's abstract representation (an attribute-dependency graph induced by the Abstract Syntax Tree of the source code). In [14] we argued that, in the term "Attribute Grammar", the notion of grammar does not necessarily imply the existence of an underlying tree, and the notion of attribute does not necessarily mean the decoration of a tree. We presented Dynamic Attribute Grammars as an extension of the AG formalism; they are consistent with the general ideas underlying AGs, hence we retain the benefits of the results already available in that domain. In the same direction, we use similar techniques to define a Data-Dependency Formalism (DDF) which allows us to construct a Data-Dependency Graph (DDG).

The DDF formalism is essentially dedicated to applications that can be divided into autonomous components communicating with each other over channels. For this purpose, we clearly separate computational activities from component interactions. Thus, we distinguish two types of descriptions, grouped as syntactic and semantic descriptions. The syntactic descriptions consist of a collection of input, output and internal services described only by their signatures. The semantic descriptions consist of interaction rules that define not only the valid sequences of service invocations, but also the data exchange required for achieving the functional activities and driving the interactions between components. We call the syntactic part the interface and the semantic part the behavior.

A. DDF specification

1) Interface: A service is a functional activity supported by a component. If the component provides a service through its interface, the service is called an input service; if the component requires a service through its interface, the service is called an output service. If the component provides a service that is invoked only by itself, the service is called an internal service. A service call refers to an output service or an internal service. Formally, a service and an interface are defined as follows:

Definition 4.1 (Service): A service is a 3-tuple \( s = \langle T, name, arg \rangle \), where:
- \( T \) is the service type;
- \( name \) is the service name;
- \( arg \) is the set of the service arguments.

A service \( s \) is written as \( s(a_0, \ldots, a_n) \); its result is denoted by \( s\$ \) and its arguments by \( arg \), with \( arg = \{ a_0, \ldots, a_n \} \).

Definition 4.2 (Interface): An interface is a 3-tuple \( I = \langle S_{in}, S_{out}, S_{int} \rangle \), where:
- \( S_{in}, S_{out}, S_{int} \) are the sets of, respectively, input, output and internal services.

2) Component: A component encapsulates data (attributes) with methods that operate on the component's data. Methods implement the services provided through the component interface; a service is implemented by exactly one method. A component contains the declaration of attributes whose values define the state of its instances, along with the bodies of methods that operate on those attributes. A method defined within a component can access only those attributes that are declared within the component, along with any arguments that are passed to the method. Formally, a component is defined as follows:

Definition 4.3 (Component): A component is a 4-tuple \( C = \langle A, I, Imp, m \rangle \), where:
- \( A \) is a set of typed attributes;
- \( I \) is an interface;
- \( Imp \) is a set of methods (implementing the services provided through the interface). A method is denoted \( F \) and defined in Definition 4.6;
- \( m : (S_{in} \cup S_{int}) \rightarrow Imp \) is a function that maps each service \( s \in (S_{in} \cup S_{int}) \) of \( I \) to a component method in \( Imp \).

An attribute may be chosen as the component state. State changes are caused by an input, output or internal service; thus, for the external environment, the input or output services may describe a visible state change. These states may be used by guard conditions (defined in Section IV-A3) to control the component behavior. A component may have multiple instances. An instance \( c_i \) of a component \( C = \langle A, I_C, Imp_C, m_C \rangle \) is denoted by \( c_i : C \).
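Definitions 4.1–4.3 transcribe almost directly into Python dataclasses. The sketch below is ours (DDF is a formalism, not a library) and uses the gossip node as the running example:

```python
# Transcription of Definitions 4.1-4.3 for the gossip node.
from dataclasses import dataclass, field

@dataclass
class Service:                       # Definition 4.1: s = <T, name, arg>
    T: str
    name: str
    arg: list

@dataclass
class Interface:                     # Definition 4.2: I = <S_in, S_out, S_int>
    S_in: list
    S_out: list
    S_int: list

@dataclass
class Component:                     # Definition 4.3: C = <A, I, Imp, m>
    A: dict                          # typed attributes, name -> value
    I: Interface
    Imp: dict                        # method name -> callable
    m: dict = field(default_factory=dict)   # service name -> method name

gossip_in  = Service("input",  "gossip", ["info:String"])
answer_out = Service("output", "answer", ["resp:String"])
node = Component(A={"state": ""},
                 I=Interface([gossip_in], [answer_out], []),
                 Imp={"onPush": lambda state, info: (max(state, info),)},
                 m={"gossip": "onPush"})
```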
3) Behavior with data dependency: We define the component behavior as a set of rules, where each rule links one input event to some output events (a rule is defined in Definition 4.6). When a component receives an input event, it responds by executing computations, changing the values of its attributes, or sending output events. In a rule, the input event is linked to output events by a transition labeled with optional guard conditions. The guard conditions indicate the circumstances under which a rule can be applied. Hence, a rule describes a one-step behavior. To keep the rule definition simple, we first define input and output events.

Definition 4.4 (Input Event): An input event \( v \) of a component \( C = \langle A, I, Imp, m \rangle \) is an element of \( (S_{in} \cup S_{int}) \).

Definition 4.5 (Output Event): An output event \( v \) of a component \( C = \langle A, I, Imp, m \rangle \) is an element of \( (S_{out} \cup S_{int}) \).

Based on these events, a rule may specify four kinds of (asynchronous) events: receiving an input service, receiving an internal service, emitting an output service, and emitting an internal service. Table I gives some examples (with abbreviations) of such events.

| Input Event → Output Events | Informal meaning |
| --- | --- |
| \( s_i(args_i)[Guards] \rightarrow \ldots \) | receipt of a service \( s_i(args_i) \), where \( s_i \) is an input or internal service. |
| \( \ldots \rightarrow s_g\$ \) | emission of a response \( s_g\$ \) of a service \( s_g \), where \( s_g \) is an input or internal service. |
| \( \ldots \rightarrow s_g(args_g) \) | emission of a service \( s_g(args_g) \), where \( s_g \) is an output or internal service. |
| \( s_i\$[Guards] \rightarrow \ldots \) | receipt of a response \( s_i\$ \) of a service \( s_i \), where \( s_i \) is an output or internal service. |

Table I. Asynchronous events.

In a rule \( r \), we distinguish three types of data, grouped as input, computed and output data. The input data denote the known data used during the computation achieved by the method implementing the service corresponding to the input event of \( r \) (this method is called \( F \) and is defined in Definition 4.6). The input data consist only of the internal component attributes and the arguments or result of the service causing the input event. The computed data consist of the results of \( F \), and the output data consist of the arguments or result of the service causing the output event. The output data are drawn from the union of the input and computed data. Guard conditions act on the input data: they ensure that the input data are valid or conform to the conditions before the rule is applied. They can be used, for instance, to ensure that two events are mutually exclusive if they occur at the same time. Formally, a rule is defined as follows:

**Definition 4.6 (Rule):** A rule describes the execution of an input event \( v \) in a component \( C \). It is defined by a 4-tuple \( r = \langle L, Guards, R, E \rangle \), where:
- \( L = \{ v \} \), where \( v \) is an input event.
\( L \) represents the left side of the rule;
- \( Guards \) are the guard conditions, indicating the circumstances under which the input event \( v \) can be executed. A guard condition consists of a set of Boolean expressions; the input event \( v \) is executed only if every Boolean expression is true;
- \( R = \{ v_1, \ldots, v_n \} \), where each \( v_i \), \( i \in 1..n \), is an output event; \( R \) may be empty. \( R \) represents the right side of the rule;
- \( E \) is a semantic equation of the following form:

\[ (b_0, \ldots, b_q) = F(a_0, \ldots, a_p) \tag{1} \]

where \( F \) is a method that implements the service corresponding to the input event \( v \) and defines the computation of the output data \( (b_i) \) in terms of the input data \( (a_i) \).

Before giving the constraints on the equation \( E \), we first define three sets of data: the input data \( ID_r \), the computed data \( CD_r \), and the output data \( OD_r \).

**Definition 4.7 (Input data \( ID_r \) of a rule \( r \)):** Let a rule \( r = \langle L, Guards, R, E \rangle \) describe the execution of an input event \( v \in L \) in a component \( C = \langle A, I, Imp, m \rangle \); the input data of \( r \) are:

\[ ID_r = \begin{cases} arg_s \cup A & \text{if } v = s(arg_s) \\ \{s\$\} \cup A & \text{if } v = s\$ \end{cases} \tag{2} \]

**Definition 4.8 (Computed data \( CD_r \) of a rule \( r \)):** Let \( r = \langle L, Guards, R, E \rangle \) be a rule; the computed data of \( r \) are the set of data resulting from the equation \( E \):

\[ CD_r = \{ b_0, \ldots, b_q \} \tag{3} \]

**Definition 4.9 (Output data \( OD_r \) of a rule \( r \)):** Let \( r = \langle L, Guards, R, E \rangle \) be a rule; the output data of \( r \) are the data emitted by the output events of \( r \):

\[ OD_r = \bigcup_{v_i \in R} \begin{cases} arg_s & \text{if } v_i = s(arg_s) \\ \{s\$\} & \text{if } v_i = s\$ \end{cases} \tag{4} \]

Once these three sets of data are defined, the constraints on the semantic equation \( E \) of a rule \( r \) can be defined as follows:

**Definition 4.10 (Constraints of a semantic equation):** The constraints to be satisfied by a semantic equation \( E : (b_0, \ldots, b_q) = F(a_0, \ldots, a_p) \) of a rule \( r \) are:
- **Constraint (1):** the elements of \( OD_r \) can only be elements of the union of \( ID_r \) and \( CD_r \):

\[ OD_r \subseteq ID_r \cup CD_r \tag{5} \]

- **Constraint (2):** \( F \) only accepts elements of \( ID_r \) as inputs:

\[ \forall i \in 0..p,\ a_i \in ID_r \tag{6} \]
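The two constraints are easy to check mechanically once the three data sets of a rule are known. A small sketch of ours, applied to rule \(r_2\) of the Gossip example:

```python
# Checking Definition 4.10 on a rule's semantic equation.
def check_rule(ID, CD, OD, F_inputs):
    c1 = OD <= (ID | CD)     # Constraint (1): OD is a subset of ID union CD
    c2 = F_inputs <= ID      # Constraint (2): F reads only input data
    return c1 and c2

# r2: inputs are the received state_y plus the attribute state_x;
# F updates the local state; the answer event emits state_x.
ID = {"state_y", "state_x"}   # gossip argument + component attribute
CD = {"new_state_x"}          # result of update()
OD = {"state_x"}              # datum carried by answer(state_x)
print(check_rule(ID, CD, OD, F_inputs={"state_y", "state_x"}))   # True
```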
- **Recursive operation "[ ]"**: Indicating that an internal service \( s \) will be called recursively. This recursion can be controlled by the guard conditions. Thus, recursion operations can be used to have repetition (loop) indicating that some rules will be executed \( n \) times continuously. For example, \[ [r_1 : s[Guards] \rightarrow s_1 \] \] \[ r_2 : s_2 \rightarrow s \] means that the rule \( r_1 \) execute the internal service \( s \) if guard conditions are satisfied, and then it calls the service \( s_1 \). When the service \( s_1 \) response arrives, the rule \( r_2 \) calls the internal service \( s \), which will be executed again by \( r_1 \) if the guards are still satisfied. Therefore, from the definition of an interface, a rule and rule operations, we have the following definition of a component behavior. **Definition 4.11 (Behavior):** The behavior of a component \( C \) is a set of rules combined by sequence, alternative and recursion operations with respect to the following regular expressions: \[ B ::= r^+ \mid [B^+] \mid \{B^+\} \] (7) \[ r ::= r \mid (r \backslash r) \] (8) 4) System: The component composition is based on connections among component instances. A connection between two instances occurs when one of them provides its interface and another instance uses it. Hence, input (resp. output) services are connected to signature-matching output (resp. input) services. There is a unique connection between two instances. Once component instances are connected, the behavior of the entire resulting system is obtained by composition of behaviors of participating instances. Since one rule is a one-step behavior and the component instance behavior is a set of rules connected by sequence, alternative and recursive operations, the system behavior can be again viewed as a set of rules connected by these same operations. Formally, a system is defined as follows: **Definition 4.12 (System):** A system is defined by a 2-tuple $\text{Sys} = \langle \text{Inst, T} \rangle$ where: - $\text{Inst}$ is a set of component instances; - $\text{T} = \{(c_1, c_2) | (c_1, c_2) \in \text{Inst} \times \text{Inst}\}$ is a set of connections between component instances. **V. SYSTEM ANALYSIS** As described in Section II-B2, Data-Flow Analysis refers to a body of techniques, which derive information about the flow of data along software system execution paths in order to infer or compute some system properties. To achieve this, we must first consider all the possible paths through a flow graph that the system execution can take. Therefore, we have defined a Data-Dependency Graph. It presents an abstract representation of the system. This abstraction exposes the right level of detail to perform DFA. The DDG models the flow of data values from the point where a datum value is created, a definition, to any point in a system where it is used, a use. A node in a DDG represents a low-level operation on data. In most cases, nodes contain both definitions and uses. A directed edge in a DDG connects two nodes (head and tail). The head defines a datum value and the tail uses it. The edges in the DDG represent interesting constraints on the control flow, i.e., a datum value can be used only if it has been defined. This only implies a partial order on the execution. Therefore, no total order among system operations is needed to be given by the system designer who often set it as an automaton to perform analysis. 
Moreover, a data-flow analysis on this graph makes it possible to infer various data evaluation orders at run time (e.g., total, parallel and incremental), thanks to the theory of iterative data-flow analysis based on a fixed-point theorem [15].

**VI. CONCLUSION AND PERSPECTIVE**

This paper presents a formalism called DDF (Data-Dependency Formalism). The goal of DDF is to formally specify the behavior of P2P applications, and then to construct an abstract representation (i.e., the Data-Dependency Graph) on which analyses can be performed. We note that our approach shares the semantics of the Data-Dependency Graph with the theory of Attribute Grammars (AGs) [7]. The theoretical algorithms and techniques of AGs and DFA show that, by analyzing these dependency graphs, it is possible to infer various evaluation orders of data and to compute different properties. The reliability of those algorithms was proven in different works [7], and optimized variants were presented in our previous works, e.g., [13], [14]. A reformulation of some of these AG analysis and testing algorithms is in progress, in particular an algorithm that infers the evaluation orders of data in order to determine formally which services in a system can be executed in a parallel or incremental way. In future work, we plan to extend our formalism with program transformation mechanisms in order to optimize CPU and memory usage (by analyzing the lifetime of data, taking into account their functional dependencies and redundancies) in large-scale data-centric applications, especially in the emerging Cloud Computing area, where data management has been receiving significant attention.

**REFERENCES**
{"Source-Url": "https://hal-lirmm.ccsd.cnrs.fr/lirmm-00757286/file/PDCAT2012_Final.pdf", "len_cl100k_base": 7754, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 25233, "total-output-tokens": 9232, "length": "2e12", "weborganizer": {"__label__adult": 0.00033402442932128906, "__label__art_design": 0.0003185272216796875, "__label__crime_law": 0.0003464221954345703, "__label__education_jobs": 0.000728607177734375, "__label__entertainment": 7.790327072143555e-05, "__label__fashion_beauty": 0.00014770030975341797, "__label__finance_business": 0.00023305416107177737, "__label__food_dining": 0.0003631114959716797, "__label__games": 0.0006227493286132812, "__label__hardware": 0.0007238388061523438, "__label__health": 0.0006022453308105469, "__label__history": 0.00022709369659423828, "__label__home_hobbies": 7.37905502319336e-05, "__label__industrial": 0.00033736228942871094, "__label__literature": 0.0003662109375, "__label__politics": 0.00026702880859375, "__label__religion": 0.0004305839538574219, "__label__science_tech": 0.034515380859375, "__label__social_life": 9.232759475708008e-05, "__label__software": 0.0070648193359375, "__label__software_dev": 0.951171875, "__label__sports_fitness": 0.00029087066650390625, "__label__transportation": 0.0004580020904541016, "__label__travel": 0.00018906593322753904}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35174, 0.0277]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35174, 0.79753]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35174, 0.88478]], "google_gemma-3-12b-it_contains_pii": [[0, 1024, false], [1024, 6293, null], [6293, 12354, null], [12354, 18126, null], [18126, 23723, null], [23723, 29007, null], [29007, 35174, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1024, true], [1024, 6293, null], [6293, 12354, null], [12354, 18126, null], [18126, 23723, null], [23723, 29007, null], [29007, 35174, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35174, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35174, null]], "pdf_page_numbers": [[0, 1024, 1], [1024, 6293, 2], [6293, 12354, 3], [12354, 18126, 4], [18126, 23723, 5], [23723, 29007, 6], [29007, 35174, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35174, 0.02927]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
23829f2b7c228229c7c49233ab0ac8ec201d154e
Reusing and Combining UI, Task and Software Component Models to Compose New Applications

Christian Brel, Philippe Renevier, Alain Giboin, Anne-Marie Dery, Michel Riveill

To cite this version: Christian Brel, Philippe Renevier, Alain Giboin, Anne-Marie Dery, Michel Riveill. Reusing and Combining UI, Task and Software Component Models to Compose New Applications. BCS HCI, http://hci2014.bcs.org/, Sep 2014, Southport, United Kingdom. hal-01096211

HAL Id: hal-01096211 https://hal.archives-ouvertes.fr/hal-01096211 Submitted on 17 Dec 2014

Reusing and Combining UI, Task and Software Component Models to Compose New Applications

Christian Brel, CNRS, Laboratoire I3S - UMR 7271 - UNS CNRS, FR-06900 Sophia Antipolis Cedex, Christian.Brel@unice.fr
Philippe Renevier Gonin, Université Nice Sophia Antipolis, Laboratoire I3S - UMR 7271 - UNS CNRS, FR-06900 Sophia Antipolis Cedex, Philippe.Renevier@unice.fr
Alain Giboin, INRIA, Laboratoire I3S - UMR 7271 - UNS CNRS INRIA, FR-06900 Sophia Antipolis Cedex, Alain.Giboin@inria.fr
Michel Riveill, Université Nice Sophia Antipolis, Laboratoire I3S - UMR 7271 - UNS CNRS, FR-06900 Sophia Antipolis Cedex, Michel.Riveill@unice.fr
Anne-Marie Dery, Université Nice Sophia Antipolis, Laboratoire I3S - UMR 7271 - UNS CNRS, FR-06900 Sophia Antipolis Cedex, Anne-Marie.Pinna@unice.fr

Composing applications by considering in parallel both software components and UI elements is a complex process that is not yet well supported by current composition process models or composition environments. To contribute to better support of the composition process, we propose a new composition model and a prototype of a component assembler, the so-called OntoCompo, which implements the model. The model describes applications in terms of Task, UI and software components. The prototype allows a composition mainly driven by the direct manipulation of UI elements, the other components being hidden but still linked to the UI elements. We performed user testing with actual developers to evaluate whether the composition process was actually facilitated by our modeling approach and the prototype implementing it.

UI Composition ; Model-Based Development ; User Testing

1. INTRODUCTION

Facilitating the development work of software developers was the motivation of the OntoCompo approach reported in this paper. We could call this approach the Developer-Friendly Development (DFD) approach, by comparison with the so-called End-User Development (EUD) [Lieberman et al. (2006)]. EUD has been defined as a set of methods, techniques, and tools that allow users of software systems, who are acting as non-professional software developers, at some point to create, modify or extend a software artefact. Lieberman et al. (2006) foresaw that "the goal of human-computer interaction will evolve from just making systems easy to use (...) to making systems that are easy to develop", the implication being that end-users would do the developing.
Our DFD approach aims at making application composition easier for professional developers themselves. Composing applications, by considering in parallel both software component assembly and User Interfaces (UI), is a complex process that is not yet well supported by current modeling approaches or composition environments. The need to combine applications may grow with the increase of specialized applications available on application stores. For example, Google Maps is often integrated for geo-localization. Ideally, developers should be able to reuse existing functionalities with minor development effort. There is thus a need to support developers in their task of combining elements of existing applications to create a new application.

Rather than having to learn the APIs of applications such as Google Maps and code from scratch, or having to abstract existing applications (in terms of tasks, services or UI) and transform such abstractions into a new application, we propose a new composition model and a prototype, the so-called OntoCompo, to simplify the composition work. The model describes applications in terms of links between Task, UI (i.e., graphical elements) and Software Components. By preserving and reasoning on these links during developers' direct manipulations of the UI, we intend to enhance the consistency of the composition. In our case study, we consider the UI elements as an entry point. Developers can indicate directly on the UI which visible parts of the application are required for the composition. By considering UI, task and software models at the same time, i.e., by following the linked UI, Task and Software descriptions, we can transfer such direct UI manipulations to the whole application description. We assumed that hiding the three models would hide a part of the complexity of the process, and consequently facilitate it. To evaluate whether the composition process was actually facilitated by our approach through the prototype implementing it, we performed user testing with actual developers.

This article is structured as follows. After presenting related work, we describe our application model for composition. Next we present the OntoCompo prototype. Then we report the user testing (method, results and discussion). Finally we conclude with future work.

2. RELATED WORK

2.1. Software Composition

For software composition, "Composition can be defined as any possible and meaningful interaction between the software constructs involved" according to Lau and Rana (2010), who define a taxonomy of composition mechanisms (e.g., orchestration, aspect-oriented programming, etc.). When the application code is not available, we can only access published interfaces and we have to use connectors (Mehta et al. (2000)).

2.2. UI Composition

2.2.1. UI composition approaches

We distinguish two different UI composition approaches. (1) The first approach bases the UI composition on abstract descriptions, as in UsiXML (Lepreux et al. (2010)), the ServFace project (Paternò et al. (2011)), Alias (Joffroy et al. (2011)) and Transparent Interface (Ginzburg et al. (2007)). These models are defined by XML languages, and final UIs are obtained through model transformations. (2) The second composition approach is based on "UI Components", which are reusable high-level widgets available in repositories. Compose (Gabillon et al.
(2011)), COTS-UI (Criado et al. (2010)), CRUISe (Pietschmann et al. (2009)), WinCuts (Tan et al. (2004)), UI façades (Stuerzlinger et al. (2006)) and on-the-fly mashup composition (Zhao et al. (2008)) illustrate such composition. Several of these works, e.g., (Nestler et al. (2009), Gabillon et al. (2011)), express and manipulate requirements with tasks.

2.2.2. Models used in UI composition

Three kinds of models are used in the two approaches: Task models (e.g., trees of users' tasks), UI models (e.g., hierarchies of graphical elements), and Software models (e.g., assemblies of components). Table 1 reports the kinds of models used as entry point (i.e., the models manipulated in order to drive the composition) in the related work we analyzed. The last row of the table represents the characteristics we wanted to include in our approach: we wanted (1) to use the three kinds of models in order to make the composition easier, and (2) to reuse existing applications in order to avoid re-designing what has already been designed. Requirement (1) led us to rely on the existing approaches and/or systems considering several kinds of models instead of only one. ServFace (Paternò et al. (2011), Nestler et al. (2009), Paternò et al. (2009)) and Service-Interaction Descriptions (Vermeulen et al. (2007)) start by building a new task tree, associate services with tasks, then produce a new UI by assembling abstract UI fragments associated with the services, and finally complete the new UI. In those works, the composition produces a new UI and is based on an abstraction of the desired composition. On-the-fly service composition (Zhao et al. (2008)), COTS-UI (Criado et al. (2010)) and CRUISe (Pietschmann et al. (2009)) compose web services or software components and their UI, but without considering tasks. Compose (Gabillon et al. (2011)) starts by translating into tasks a requirement expressed in natural language; a new UI is then computed. Compose is designed for end users in a context of UI adaptation, while our context is application design by developers.

2.3. Implications for Our Approach

From the analysis of these works, we noted that we can compose the UI (respectively, the functional parts) of former applications, but that we must build

<table>
<thead>
<tr>
<th>Related Work</th>
<th>Entry Point</th>
<th>Used Models</th>
<th>Results</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>S* Task UI</td>
<td>S* Task UI</td>
<td>Reused G*</td>
</tr>
<tr>
<td>Service Approach: BPEL4WS (Khalaf et al. (2003)) - BPEL (Alves et al. (2007)) - Web Service Composition OWL-S (Sohrabi and McIlraith (2010))</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Component Approaches: Fractal (Bruneton et al. (2006)), SCA (et al. (2005), Ope et al. (2007)) and SLCA (Hourdin et al. (2008))</td>
<td>x x x</td>
<td>x</td>
<td></td>
</tr>
<tr>
<td>ALIAS (Joffroy et al. (2011)) ; Transparent interface composition (Ginzburg et al. (2007))</td>
<td>x x x x serv</td>
<td>UI</td>
<td></td>
</tr>
<tr>
<td>ServFace (Paternò et al. (2011); Nestler et al. (2009); Paternò et al. (2009)) ; Service-Interaction Descriptions (Vermeulen et al. (2007))</td>
<td>x x x x serv</td>
<td>UI</td>
<td></td>
</tr>
<tr>
<td>on-the-fly service composition (Zhao et al. (2008))</td>
<td>x x x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Task Composition (Bourguin et al. (2007))</td>
<td>x x x code</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Task Tree Merge (Lewandowski et al. (2007))</td>
<td>x x x x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Compose (Gabillon et al.
(2011))</td>
<td>x x x x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ComposiXML (Lepreux et al. (2010))</td>
<td>x x UI</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migratable UI (Luyten et al. (2002))</td>
<td>x x x CUI</td>
<td></td>
<td></td>
</tr>
<tr>
<td>WinCuts (Tan et al. (2004)) ; UI façades (Stuerzlinger et al. (2006))</td>
<td>x x x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>COTS-UI (Criado et al. (2010))</td>
<td>x x x x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>CRUISe (Pietschmann et al. (2009))</td>
<td>x x x x</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>

Table 1: Classification of Composition Approaches

3. APPLICATION MODEL FOR COMPOSITION

OntoCompo exploits the three points of view for application composition identified in the state of the art, i.e., the three descriptions: UI, Tasks and Software Component (Assembly). The originality of this approach is to connect each of those models with the two others (Gabillon et al. (2011)) by linking corresponding entities: (1) each graphical element is linked to the tasks it supports; (2) each task is linked to the graphical elements used to perform it; (3) each graphical element is linked to the software component surrounding it; (4) each software component is linked to the graphical elements it surrounds; (5) each task is linked to the software components used to perform it; (6) each software component is linked to the tasks it supports. These links are expressed by annotations on the three models.

We model the UI with a classical hierarchical description of graphical elements. That description is annotated with layout links such as on the left, below, etc. We model Tasks as ConcurTaskTrees (CTT) (Paternò et al. (1997)). We model the Software Assembly as a Component Assembly. Components are connected through their ports. A connection between two components links a requiring port and a providing port. Ports are thus characterized by their nature ("required" or "provided"), their type ("trigger" for activating actions, "input" for entering or providing values, or "output" for displaying or storing data), and their role ("UI" when concerning the graphical interface, "UI component" when concerning the manipulation of graphical elements, etc.). The links between software components and tasks or graphical elements are made on the components' ports. Figure 1 illustrates the application model for composition. For example, the text entry "AddressAInput" is connected with the task "Fill Position A" and with the two ports of the software component "AddressAInput": one port, tagged "UI", for getting the typed value, and the other, tagged "UI Component", for exchanging graphical elements between software components in order to build the UI.

The approach is applied to application composition driven by UI manipulation. Thus, starting from a selected part of the UI, the corresponding software components are identified. The connections between models are exploited in a process of selection, composition by substitution and layout reorganization.

1. **Selection** consists in selecting the parts of the UI required for the composition. Thanks to the UI model, the selection is completed in order to obtain an operational and usable selection. Moreover, thanks to extensions (queries based on the models), the selection is eased by following the connections. For example, all graphical elements required for achieving a parent task can be retrieved. Let us consider the selection of the graphical element "AddressAInput".
The latter is connected with the task "Fill Position A", whose parent task is "Fill begin and arrival position" (FBaAP). Then, we consider all software components connected to (at least) one of these graphical elements or to the task "FBaAP" (or to its subtasks). If there are unsatisfied required ports in the selection, the corresponding missing software components, and the graphical elements connected with such missing components, are added to the selection, until there is no unsatisfied required port left. The final selection is \( \langle \{ AddressAInput, AddressBInput \}_{UI} ; \{ \text{"Fill begin and arrival position"} \}_{task} ; \{ AddressAInput, AddressBInput \}_{SC} \rangle \).

2. **Composition by substitution** allows the developer to replace a selected component with another "equivalent" component. The substitution is based on the connections between software components. Thanks to the characterization (nature, type, role) of the component ports involved in a connection, the substitution is realized by adding adapters between already connected ports or between ports to connect. The links between the initial applications are thus set up to create the new application, by modifying the software component assembly accordingly (Brel et al. 2012). In the context of application composition driven by UI manipulation, actions on graphical elements are propagated to the corresponding software components in order to perform such substitutions. To choose how to make a substitution, i.e., which port must replace which port, the approach allows different strategies to be applied: asking the user, applying an algorithm, etc. In our approach, we generate skeletons of adapters whose code needs to be completed.

3. **Layout reorganization** is a simple step where the remaining selected UI elements are placed in the windows. Once the placement is done, the final application is generated: a new component assembly is produced, together with the skeletons of the adapters. A developer has to fill in the content of those adapters in order to finalize the composition.

The model for application composition enables several possible interactions. To determine the best way to apply the approach, we developed OntoCompo in order to experiment with a simple use of the model in a composition process. OntoCompo sequentially implements the "selection-substitution-layout reorganization" process. OntoCompo also hides the underlying models from the developer performing a composition; the developer only manipulates graphical elements. In the next section, we present an implementation of OntoCompo.

4. **IMPLEMENTATION OF ONTOCOMPO**

Our application models are implemented as ontologies, allowing us to quickly perform the queries necessary for composing. OntoCompo (Brel et al. 2011) manipulates applications made of Fractal\(^1\) components (Bruneton et al. 2006), which must be semantically described. To implement our functions and algorithms, we issue SPARQL queries, processed by the semantic engine CORESE / KGRAM (Corby et al. 2012). The initial applications are developed according to a component architecture, defined by the Julia implementation (in Java) of the Fractal model. The whole application, whether its features or its graphical interface, is implemented by components. Some components encapsulate graphical elements (from the Java Swing library) and are recognizable by their particular ports with the "UI Component" role.
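To make the selection-extension step concrete, here is a minimal Java sketch of the closure over unsatisfied required ports described above. It is not the OntoCompo code: the types (Port, Component, Resolver) are hypothetical stand-ins, and in OntoCompo the provider lookup would be answered by SPARQL queries over the semantic descriptions rather than by an in-memory resolver.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical minimal stand-ins for the annotated component model.
record Port(String name, boolean required) {}
record Component(String name, List<Port> ports) {}

final class SelectionExtender {
    // In OntoCompo this lookup would be a SPARQL query over the ontology;
    // here it is an abstract hook supplied by the caller.
    interface Resolver {
        Component providerOf(Component c, Port requiredPort);
    }

    // Grow the selection until every required port of every selected
    // component is satisfied inside the selection itself.
    static Set<Component> close(Set<Component> initial, Resolver resolver) {
        Set<Component> selection = new HashSet<>(initial);
        Deque<Component> work = new ArrayDeque<>(initial);
        while (!work.isEmpty()) {
            Component c = work.pop();
            for (Port p : c.ports()) {
                if (!p.required()) continue;
                Component provider = resolver.providerOf(c, p);
                if (provider != null && selection.add(provider)) {
                    work.push(provider); // newly added: its ports must be checked too
                }
            }
        }
        return selection;
    }
}
```

The loop terminates because each iteration either changes nothing or adds one component from the finite set of available components; its fixed point is exactly the state in which "there is no unsatisfied required port left".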
The architecture of OntoCompo consists of three interrelated parts (see Figure 2): (1) the Application Loader, for loading software components and models (semantic descriptions); (2) the OntoCompo GUI, implementing the three application composition steps; and (3) the OntoCompo API. This API is the main part of the architecture; it handles all the manipulations made by our algorithms on Fractal components and semantic descriptions. Through a map associating each Swing component with its encapsulating Fractal component, the API can retrieve and manipulate Fractal components from the selected graphical elements given by the OntoCompo GUI. A video illustrating OntoCompo and the scenario used in the experimentation is available at http://goo.gl/QEfq4g.

Figure 1: Excerpt of the application model linking UI, Tasks and Software Components (SC).

Figure 2: Architecture of the OntoCompo prototype.

5. **USER TESTING OF ONTOCOMPO: METHOD**

To validate our approach of application composition driven by the manipulation of UI elements, we performed the following user testing with actual developers.

\(^1\)This software component model was named "Fractal" because its components can themselves be made of several components.

5.1. Evaluation Method Type

Quantitative methods are classically preferred over qualitative methods to evaluate systems. For example: (a) evaluating their UsiXML-based composition tool GrafiXML, Lepreux and Vanderdonckt (2007) used the GOMS method to establish the time gained during the use of GrafiXML; (b) user testing their platform for migrating interface components to a target device, Paternò et al. (2011) submitted a quantitative questionnaire (with Likert scales) to the users of the platform. In our case, using strictly quantitative methods to evaluate our approach and its supporting tool was considered premature: we had to qualify developers' actual practices of composition before quantifying them. Hence we needed a method that was both qualitative and quantitative. We used a variant of the "cooperative evaluation" method of Monk (1993).

5.2. Goal and Hypotheses

Our goal was to evaluate the understanding and acceptance of the OntoCompo approach as performed through the OntoCompo tool. We envisioned two working hypotheses:

- **Strong hypothesis:** Developers can perform their composition task by manipulating graphical elements only; none of the three models (UI model, Task model and Software-Components model) is necessary. Manipulating the code of the resulting application is not necessary either. This hypothesis reflects the ideal we sought: to facilitate the composition work of the developer.
- **Weak hypothesis:** To perform their composition task, in addition to graphical elements, developers need to manipulate the three kinds of models, but to different degrees. Manipulating the code of the resulting application remains unnecessary. This hypothesis means that development work facilitation is variable.

5.3. Participants

Two kinds of developers participated in the user testing: four developers who had never handled an application composition tool, and five developers who had already used some composition tool (not necessarily a tool for composing applications). Since no differences in their behavior were noticed, we decided to consider them as a single group.

5.4. Material
The material used during the experimentation was: (a) the OntoCompo prototype; (b) the composition task instructions; (c) the "additional information" to be provided on demand to developers or during the debriefing phase; this information consisted of printed documents representing the Task Model, the Software-Component Model, the UI Model, and the generated code (excerpts of the models are given in Figure 1); and (d) the composition task scenario, including the UIs of the two source applications and the UI of the resulting application (described in detail in the next subsections).

5.4.1. Composition task scenario and related UIs

Composition task scenario: "A developer has at her disposal two applications: one, called "Movie Theaters", for displaying movies played in cinemas near a specified location, and another, called "Maps", for searching directions on a map. She wants to produce a new application to search for movie theaters close to a specified address. Once a movie theater is selected, the directions from that address to the selected theater are displayed."

The "Movie Theaters" UI includes, at the top of the window, a text field to enter the address that will be the geographical center of the search. The application uses a Web Service from the web site http://www.allocine.com, a French web site about movies and theaters. The Web Service enables queries to find theaters near a location, to list the movies played in a theater, etc. The user calls that service by clicking the "search" button. The table on the left of the window is filled with the received answers. Clicking on a line of that table issues another query to the Web Service to get the list of played movies with their showtimes. The returned list is displayed in the table on the right of the window.

The "Maps" UI is vertically organized. At the top, there is the panel displaying the map. In the middle, there is the form to fill in the start and arrival addresses. At the bottom, there are the main intersections of the found route.

The resulting UI: an example of the UI resulting from the composition scenario is given in Figure 3. Only one text field is left, used both to enter the start of the route for the "Maps" application and to enter the geographical center of the search for the "Movie Theaters" application. From the latter, only the list of the closest theaters is displayed. It is only after selecting one theater that the route, coming from the "Maps" application, is displayed on the map.

Figure 3: UI of the application resulting from the composition scenario.

5.4.2. Extensions of selection

During the selection step, the developers can use consistent extensions of the selection thanks to queries on all models, as described in Section 3.

5.4.3. Scenario difficulties

To make the scenario realistic, we included three main difficulties concerning the substitutions in the developer's application composition task.

Difficulty 1: A misleading similarity of two graphical elements. Two buttons present in both applications had the same shape and the same title, "Search". This similarity can mislead the developer, who may be tempted to merge them, a merging that is not part of the composition scenario. One button (from "Movie Theaters") must be kept; the second one (from "Maps") must be substituted by a trigger associated with the selection of one theater in the list.

Difficulty 2: A substitution of two heterogeneous graphical elements. The first element, to be kept, was the list of the closest cinemas in the application "Movie Theaters".
The second element, to be replaced, was a text entry (used to provide the arrival address necessary for computing the route in the "Maps" application). The list is obviously an "Output", and the text entry an "Input". The substitution was possible because the software element associated with the list supplies a port of type "Input" to obtain the selected cinema.

Difficulty 3: A generated adapter source code (to be completed) including two methods with the same signature. This difficulty comes from the code presented to the developer after a substitution, which contains the same method signature twice. The adapter generated by the substitution has several ports, each of which corresponds to the implementation of a software interface. In the adapter source code, since two different software interfaces include the same method signature, the signature of the method `public String getInput()` appears twice. Although this difficulty comes from the Java language, which cannot distinguish between two methods with the same signature coming from two different software interfaces, we expected the developer to know how to react. Indeed, in this example both `getInput()` methods have to produce the same result, so merging the two methods is possible.

5.5. Procedure

Each developer was placed in front of the OntoCompo prototype, next to the experimenter leading the developer's testing session. As the developer went along, additional information was given to her on demand. Explanations on the use of the prototype were also given by the experimenter when requested by the developer. Each session consisted of three phases: (1) a familiarization phase, where the developer freely manipulated the prototype interface to become familiar with it; (2) a task-performance phase, where the developer performed the substitution task proposed by the experimenter; at the end of this phase, the developer was presented with the code generated as the outcome of the composition, and was expected to understand and explain that this code corresponded to an adapter generated during the substitutions; (3) a debriefing phase, where the developer provided further feedback.

5.6. Data Collection and Analysis

5.6.1. Data collection

The manipulation of OntoCompo was video-recorded. Oral exchanges between the developer and the experimenter were also recorded. An observer sat behind the developer to take notes on the developer's behavior. The experimenter also took notes when possible. The data collected were consequently: the experimenter's and observer's written notes, the videos, and the developers' verbalizations.

5.6.2. Data analysis

The analysis consisted in determining whether the developers achieved each composition task (or stage of the process: selection, substitution, layout reorganization) using graphical elements only (Strong Hypothesis), or whether they needed to rely on the UI, Task and Software Components models (Weak Hypothesis), i.e., whether they asked for the additional documentation representing these models.

6. USER TESTING OF ONTOCOMPO: RESULTS AND DISCUSSION

On the one hand, the nine developers understood the composition process well and succeeded in manipulating the tool and in performing what they planned; however, only 55% (5/9) succeeded in making the composition without error. On the other hand, it emerged that additional information helped most developers (67%; 6/9) to achieve the composition. The results are essentially qualitative.
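Before turning to the detailed results, Difficulty 3 is worth a brief illustration. The following minimal Java sketch (illustrative names, not the actual generated adapter) shows why merging the two generated methods is a valid fix: a single method implementation satisfies both interfaces that declare the same signature.

```java
// Two hypothetical software interfaces, each exposing the same accessor.
interface TheaterPort {
    String getInput();
}

interface RoutePort {
    String getInput();
}

// A class cannot declare getInput() twice, but one implementation
// satisfies both interfaces at once: this is the "merge" the developers
// were expected to perform on the generated adapter skeleton.
final class MergedAdapter implements TheaterPort, RoutePort {
    private final String value;

    MergedAdapter(String value) {
        this.value = value;
    }

    @Override
    public String getInput() {
        return value;
    }
}
```

This compiles precisely because the two interface methods share both signature and return type; had the return types differed, no single class could implement both interfaces, and the adapter would have needed a different design.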
In order to present the experimentation results, we summarized the developers' comments and feelings. For example, "67% of developers needed additional information" means that, by analyzing the 9 experimentation sessions, we found that 6 of the developers required more information, as shown either by their comments or by their use of the printed additional information.

6.1. Developers' General Performance

6.1.1. How extensions were used

Table 2 summarizes the uses of the extensions. Since the test contained several selections, the developers varied in their use of the extensions: they sometimes made at least one selection without any extension, sometimes with two extensions, and often with one extension (including combinations of several extensions applied at the same time). They had several opportunities to use one or several extensions, so the percentages add up to more than 100% in Table 3. We notice that 44% (4/9) of the developers used a combination of two extensions, in particular Task with Software extensions, e.g., extending the selection through the software component links while preserving only elements involved in the same task.

<table>
<thead>
<tr>
<th>Extension Type</th>
<th>Use</th>
<th>Asked Information</th>
</tr>
</thead>
<tbody>
<tr>
<td>UI</td>
<td>44% (4/9)</td>
<td>No</td>
</tr>
<tr>
<td>Task</td>
<td>67% (6/9)</td>
<td>Yes (for 5 developers), No (for 1 developer)</td>
</tr>
<tr>
<td>Software</td>
<td>11% (1/9)</td>
<td>Yes</td>
</tr>
</tbody>
</table>

Table 2: Extension uses during user tests.

<table>
<thead>
<tr>
<th>Performing at least 1 selection with n extensions</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>n=0</td>
<td>33% (3/9)</td>
</tr>
</tbody>
</table>

Table 3: Proportion of extension uses.

6.1.2. How scenario difficulties were addressed

**Difficulty 1 (close resemblance of two "Search" buttons)** confused several developers. There is clearly a difference between the manipulations we had anticipated before performing the tests and the manipulations the developers actually made. This difficulty led 44% (4/9) of the developers to merge both buttons directly, even though this substitution had been identified as unnecessary.

**Difficulty 2 (substitution of two heterogeneous elements, a list and a text entry)** led most developers to try substituting the text entry not with the list itself but with an element of the list (among others, the address of the selected cinema). Yet this action was not allowed by the approach, because each modeled graphical element is considered indivisible. The expected substitution was always achieved, but 44% (4/9) of the developers hesitated strongly and achieved the right substitution only after having eliminated the other possibilities.

**Difficulty 3 (adapter source code including two methods with the same signature)** was circumvented by the developers by suppressing the redundant method, but without fully understanding why. This highlights a lack of information.

6.2. Developers' Need for Models

A general analysis underlined that additional information would ease the use of OntoCompo. The developers' preferences expressed during the debriefing are summarized in Table 4. A majority of them (78%; 7/9) asked for an interactive representation of the task model during the selection phase. This integration would allow matching the selection of tasks with the associated graphical elements and vice versa.
The task tree was indicated as the most intuitive model for analyzing the behavior of the application, especially if it allows identifying the correspondences between graphical elements and tasks. We also noted that 67% (6/9) of the developers (cf. Table 2) who used the Task extension made an "abstract" detour through the software component model, deducing links between components exclusively from the information on the tasks. Moreover, the expressed preferences showed that, for the substitution step, 67% of the developers wished for access to the software component model. By contrast, we noticed that no additional information was necessary for the use of the UI extensions. However, 44% (4/9) of the developers (not necessarily the same as those who used this extension) stated that a representation of the UI model seems necessary for more complex graphical interfaces. According to the developers, this model would highlight information on the interweaving of UI elements.

6.3. Discussion

The results of this user testing are encouraging. The participating developers welcomed our application model and the composition process, and generally succeeded in realizing the expected application. The difficulties met by the developers were most often related to the lack of information about the underlying models. The identified needs for additional information show that the strong hypothesis we formulated rarely corresponds to the developers' practices. In other words, the composition task cannot be driven solely by the manipulation of UI graphical elements. The weak hypothesis seems to be the most realistic: to perform their composition task, developers need to manipulate the three kinds of models together, but to different degrees. In other words, the composition task must be driven by at least two of the three models (UI model, Task model, and Software Components model), depending on the process step. The results of the user testing revealed the most relevant models for the different steps, i.e., the additional information to be provided at these steps: (1) the **UI model and the Task model** are the best adapted for selecting the relevant parts of applications; this can be explained by the fact that these two models are used to describe interactions in user-centered design. (2) The **Software Components model**, and to a lesser extent the Task model, is the best adapted for the substitution step, because of the underlying impact on the software components. Such additional information will help developers to predict and to explain both the result of a selection extension and the possible substitutions. This information must be kept until the final application is generated, in order to provide explanations to the developer when needed. Providing the developers with the underlying models would not only guide and reassure them, but also limit their cognitive load.

7. CONCLUSION AND FURTHER WORK

We have presented our approach to application composition based on three application descriptions or models, namely the UI, Task and Software Components descriptions. We described OntoCompo, the prototype applying our approach to an application composition driven by the manipulation of UI graphical elements only. The user testing we performed with OntoCompo highlighted that this restricted manipulation was not enough to achieve an appropriate composition.
The results led us to conclude that the most realistic of our working hypotheses was the weak hypothesis, namely: to perform their composition task, developers need to manipulate the three kinds of models together, but to different degrees; developers must have control of the three models, and they need to visualize and manipulate these models when needed. However, to achieve the composition, developers did not need to manipulate the code of the resulting application.

<table>
<thead>
<tr>
<th></th>
<th>Selection</th>
<th>Substitution</th>
<th>UI Reorganization</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>UI</strong></td>
<td>Information <strong>needed</strong> by 44% (4/9) of the developers (when the applications would be more complex)</td>
<td>Information <strong>not needed</strong></td>
<td>Information <strong>not needed</strong> (need to keep the information used during selection and substitution)</td>
</tr>
<tr>
<td><strong>Task</strong></td>
<td>Information <strong>needed</strong> by 78% (7/9) of the developers</td>
<td>Information <strong>needed</strong> by 22% (2/9) of the developers</td>
<td>Information <strong>not needed</strong></td>
</tr>
<tr>
<td><strong>Software</strong></td>
<td>Information <strong>needed</strong> by 44% (4/9) of the developers</td>
<td>Information <strong>needed</strong> by 67% (6/9) of the developers</td>
<td>Information <strong>not needed</strong></td>
</tr>
</tbody>
</table>

*Table 4: UI, Task or Software-Components additional information needed by the developers during the different steps of the composition process.*

From a system development point of view, further work will be oriented toward making the management of the three models by the developers as fluent and appropriate as possible. To elaborate new specifications for the system (especially for determining the strict level of information necessary for composing), we will further exploit the comments the developers made during the initial user testing. Iterative user testing of the next versions of the system will be performed. Initially interested in designing the OntoCompo approach and system for developers, we will consider designing them for end users too, thus contributing to the End-User Development trend.

**REFERENCES**

et al., M. B. (2005), 'Service component architecture - building systems using a service oriented architecture', *A Joint Whitepaper by BEA, IBM, Interface21, IONA, SAP, Siebel, Sybase*.
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01096211/file/xpOntocompo.pdf", "len_cl100k_base": 8179, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 33432, "total-output-tokens": 10720, "length": "2e12", "weborganizer": {"__label__adult": 0.00029921531677246094, "__label__art_design": 0.0007076263427734375, "__label__crime_law": 0.0002244710922241211, "__label__education_jobs": 0.0006456375122070312, "__label__entertainment": 5.942583084106445e-05, "__label__fashion_beauty": 0.00013828277587890625, "__label__finance_business": 0.00012958049774169922, "__label__food_dining": 0.00024187564849853516, "__label__games": 0.0004277229309082031, "__label__hardware": 0.0005202293395996094, "__label__health": 0.0002448558807373047, "__label__history": 0.0002112388610839844, "__label__home_hobbies": 6.115436553955078e-05, "__label__industrial": 0.00021326541900634768, "__label__literature": 0.00023734569549560547, "__label__politics": 0.00015532970428466797, "__label__religion": 0.00034689903259277344, "__label__science_tech": 0.00600433349609375, "__label__social_life": 6.604194641113281e-05, "__label__software": 0.00567626953125, "__label__software_dev": 0.98291015625, "__label__sports_fitness": 0.0001962184906005859, "__label__transportation": 0.0002970695495605469, "__label__travel": 0.0001569986343383789}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43113, 0.04535]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43113, 0.22522]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43113, 0.89199]], "google_gemma-3-12b-it_contains_pii": [[0, 1086, false], [1086, 4341, null], [4341, 9304, null], [9304, 13044, null], [13044, 17918, null], [17918, 20155, null], [20155, 24188, null], [24188, 28947, null], [28947, 34292, null], [34292, 38789, null], [38789, 43113, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1086, true], [1086, 4341, null], [4341, 9304, null], [9304, 13044, null], [13044, 17918, null], [17918, 20155, null], [20155, 24188, null], [24188, 28947, null], [28947, 34292, null], [34292, 38789, null], [38789, 43113, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43113, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43113, null]], "pdf_page_numbers": [[0, 1086, 1], [1086, 4341, 2], [4341, 9304, 3], [9304, 13044, 4], [13044, 17918, 5], [17918, 20155, 6], [20155, 24188, 7], [24188, 28947, 8], [28947, 34292, 9], [34292, 38789, 10], [38789, 43113, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43113, 0.14872]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
3d088aabb4776c9b87b68a89dca170dfa622e845
ACE: Autonomic Software Engineering for online Cultural Experiences

Carles Sierra, IIIA-CSIC

### Partners

<table>
<thead>
<tr>
<th>Partner</th>
<th>City</th>
<th>PI</th>
</tr>
</thead>
<tbody>
<tr>
<td>IIIA-CSIC</td>
<td>Barcelona</td>
<td>Carles Sierra</td>
</tr>
<tr>
<td>Goldsmiths, U. London</td>
<td>London</td>
<td>Mark d'Inverno</td>
</tr>
<tr>
<td>IRIT, CNRS</td>
<td>Toulouse</td>
<td>Leila Amgoud</td>
</tr>
</tbody>
</table>

Other Researchers: Nardine Osman, Henri Prade, Matthew Yee-King, Dave de Jonge, Roberto Confalonieri, Katina Hazelden, Bruno Rosell.

Project Objectives
- To develop autonomic BDI architectures for personal assistants of human agents engaging in online activities.
- To develop a peer-to-peer autonomic electronic institution infrastructure to support the autonomous interaction of human and autonomic agents.
- To embed the P2P infrastructure into mobile appliances to allow for a mobile, social, distributed consumption of cultural artefacts.
- To develop a series of case studies with cultural institutions and build a full specification of a selected case study that we will develop into a working prototype.

### WP 1: P2P eInstitutions

<table>
<thead>
<tr>
<th>Deliverable</th>
<th>No.</th>
<th>Due</th>
</tr>
</thead>
<tbody>
<tr>
<td>P2P Institution node architecture</td>
<td>D1.1</td>
<td>M6</td>
</tr>
<tr>
<td>Running P2P node</td>
<td>D1.2</td>
<td>M6, M12, M24</td>
</tr>
<tr>
<td>P2P Electronic Institution prototype</td>
<td>D1.3</td>
<td>M12, M24</td>
</tr>
</tbody>
</table>

### WP 2: Autonomic Agents

<table>
<thead>
<tr>
<th>Deliverable</th>
<th>No.</th>
<th>Due</th>
</tr>
</thead>
<tbody>
<tr>
<td>An autonomic Agent Architecture</td>
<td>D2.1</td>
<td>M6</td>
</tr>
<tr>
<td>An argumentation-based Negotiation Framework</td>
<td>D2.2</td>
<td>M6</td>
</tr>
<tr>
<td>Personal autonomic agent</td>
<td>D2.3</td>
<td>M12, M24</td>
</tr>
</tbody>
</table>

### WP 3: Negotiated social online cultural experiences

<table>
<thead>
<tr>
<th>Deliverable</th>
<th>No.</th>
<th>Due</th>
</tr>
</thead>
<tbody>
<tr>
<td>Set of case studies of how autonomic agents could be applied to enrichable social online experiences</td>
<td>D3.1</td>
<td>M3</td>
</tr>
<tr>
<td>A full specification jointly written with a London cultural institution of the functionality of a system to enable online social experiences</td>
<td>D3.2</td>
<td>M6 I, M6 II</td>
</tr>
<tr>
<td>First implementation of the cultural experience prototype</td>
<td>D3.3</td>
<td>M12 I, M12 II</td>
</tr>
<tr>
<td>A complete validation and evaluation of the prototype</td>
<td>D3.4</td>
<td>M21</td>
</tr>
<tr>
<td>Self-evaluation document on working in a multi-disciplinary cultural context, especially with respect to the potential role of autonomic agents</td>
<td>D3.5</td>
<td>M24</td>
</tr>
</tbody>
</table>

Last Year's work
- P2P Electronic Institutions
- Autonomic agents
- Cultural experiences
- Results

WP1 objectives
- Develop a P2P Electronic Institution infrastructure
- Embed the P2P autonomic electronic institution infrastructure in mobile devices
- Make the software available as open source

T1.1: P2P Electronic Institution Concept
- Completed; see deliverable D1.1.
- New component introduced in the EI concept: Device Manager
- Network architecture based on FreePastry

T1.2: P2P Node Implementation
- Completed; see deliverable D1.2. Two software versions.
- Distributed search for content.
- Specific to the WeCurate application.
T1.3: Electronic Institution Prototype
- Completed; see deliverable D1.3.
- The Horniman Museum was chosen for the WeCurate case study and an electronic institution was specified for it.

Last Year's work
- P2P Electronic Institutions
- Autonomic agents
- Cultural experiences
- Results

WP2 Objectives
- Argumentation-based agent architecture
- BDI mental model
- Argumentation-based negotiation

T2.1: Autonomic Agent architecture
- Interfacing users to the system
- Storing the likes and dislikes as preferences
- Merging users' tags (group preference defined)
- Selecting subsequent images
- Architecture completed; see deliverable D2.1
- Model was implemented and presented at AT'12

T2.2: Negotiation Framework
- Arguments are supports for an overall image opinion, as pairs \( \langle \text{tag}, \text{value} \rangle \)
- Arguments change dynamically, sometimes via private negotiations
- Work completed; see deliverable D2.2
- Work presented at MDAI'12

Figure 2: WeCurate interface.

T2.3: Personal autonomic agents for cultural experiences
- Largest development effort
- P2P EI connected with autonomic agents
- Work completed; see deliverable D2.3
- The model was implemented

The personal autonomic agents:
- incarnate WeCurate users by storing their actions and arguments
- enact the curation workflow and the scenes
- support social interactions by means of agent-to-agent protocols
- take collective decisions to drive the curation workflow
- support argument-based multiple-criteria decision making

Last Year's work
- P2P Electronic Institutions
- Autonomic agents
- Cultural experiences
- Results

WP3 objectives
- Case study selection and specification
- Cultural experience prototype
- Validation of the prototype

T3.1: Case scenarios pilot study and selection
- Completed; see deliverable D3.1.
- The Horniman Museum was chosen for the WeCurate case study.

T3.2: Case Study Specification
- Completed; see deliverable D3.2.
- Work was presented at AT 2012, Croatia.
- The case study was based around a public installation at the Horniman Museum which took place in November 2012.
- A follow-up case study was also developed for an exhibition that ran concurrently at 4 sites, 2 on campus at Goldsmiths and 2 off campus in Lewisham.

T3.3: Implementation of the Cultural Experience Prototype
- Completed; see deliverables D3.3a and D3.3b.
- Work presented at AAMAS 2013.

T3.4: Validation and evaluation of the prototype
- Public validation and evaluation carried out on site at the Horniman Museum in November 2012.
- Completed; see deliverable D3.4.

WeCurate was installed as a multiuser museum interactive and was used by visitors in groups of up to four people. Multiple sources of qualitative and quantitative data were collected:
- An automatic log of all participants' actions (92 sessions)
- Observations based on field notes (37 sessions)
- Questionnaires (48 collected)

The analysis of the data concentrated on the distinct interactive behaviours of different social groups.

High variation of engagement:
- Time: mean 5 mins 38 secs (+/- 4 mins 25 secs)
- Images viewed: mean 4.4 (+/- 4.1)

Social groups: 83% were familiar with their group (46% family), reflecting the public's everyday habits in cultural institutions.
Key findings showed evidence of collective decision making and negotiation:
- Parent and child: scaffolded experience
- Adult groups: playful engagement and interdependent behaviours

Strong evidence for the social group's influence over individuals' decision making, made available via the WeCurate system:
- Participants felt they were able to communicate their preferences and had an awareness of the group's intentions and opinions
- 87% of participants were aware of others' actions via the synchronised view.
- The social group had an influence on individuals' decision making, as 42% reported changing their decision as a consequence of seeing (a representation of) others' actions.
- Effectiveness of the agents: 75% of participants voted on the images they preferred.

T3.5: Self-evaluation
- Completed; see deliverable D3.5.
- The self-evaluation addresses the primary research questions: does the agent technology enable users to share an experience, and are the agents good predictors of the behaviours of groups when making collective decisions? The successes, areas for improvement, and potential for future work are discussed.

Last Year's work
- P2P Electronic Institutions
- Autonomic agents
- Cultural experiences
- Results

Results: Scalability

[Figure: message delivery time as a function of the number of agents (48 agents), for 1 to 4 nodes; delivery time grows with the number of agents and is highest at 4 nodes, indicating a potential bottleneck under load.]

Results: Publications I

Angela Fabregues and Carles Sierra. HANA: a Human-Aware Negotiation Architecture. In Decision Support Systems, in press.

Nardine Osman, Carles Sierra, Fiona McNeill, Juan Pane and John Debenham. Trust and Matching Algorithms for Selecting Suitable Agents. In ACM Transactions on Intelligent Systems and Technology, Volume 5, Issue 1, 2014.

Nardine Osman, Mark d'Inverno, Carles Sierra, Leila Amgoud, Henri Prade, Matthew Yee-King, Roberto Confalonieri, Dave de Jonge and Katina Hazelden. An Experience-Based BDI Logic: Motivating Shared Experiences and Intentionality. In the 39th Annual Conference of the IEEE Industrial Electronics Society (IECON 2013), Vienna, Austria, 2013.

Mark d'Inverno, Michael Luck, Pablo Noriega, Juan A. Rodríguez-Aguilar and Carles Sierra. Communicating Open Systems: Extended Abstract. In IJCAI 2013, AAAI Press, Beijing, China, 2013.

Dave de Jonge, Bruno Rosell and Carles Sierra. Human Interactions in Electronic Institutions.

Matthew Yee-King, Roberto Confalonieri, Dave de Jonge, Katina Hazelden, Carles Sierra, Mark d'Inverno, Leila Amgoud and Nardine Osman. WeCurate: Enriching the sociocultural practices of the museum experience.
In AAMAS 2013, Saint Paul, Minnesota, USA, pp. 917-924, 2013.

Katina Hazelden, Matthew Yee-King, Roberto Confalonieri, Dave de Jonge, Carles Sierra, and Mark d'Inverno. Multiuser Museum Interactives for Shared Cultural Experiences: an Agent-Based Approach.

Andrew Koster, Jordi Madrenas, Nardine Osman, Marco Schorlemmer, Jordi Sabater-Mir, Carles Sierra, Angela Fabregues, Dave de Jonge, Josep Puyol-Gruart, and Pere García. u-Help: supporting helpful communities with information technology.

Katina Hazelden, Matthew Yee-King, Leila Amgoud, Mark d'Inverno, Carles Sierra, Nardine Osman, Roberto Confalonieri, and Dave de Jonge. WeCurate: Multiuser museum interactives for shared cultural experiences.

Results: Publications III

Dave de Jonge and Carles Sierra. Branch and bound for negotiations in large agreement spaces.

Leila Amgoud, Roberto Confalonieri, Dave de Jonge, Mark d'Inverno, Katina Hazelden, Nardine Osman, Henri Prade, Carles Sierra, and Matthew Yee-King. Sharing online cultural experiences: An argument-based approach.

Andrew Koster, Jordi Madrenas-Ciurana, Nardine Osman, W. Marco Schorlemmer, Jordi Sabater-Mir, Carles Sierra, Dave De Jonge, Angela Fabregues, Josep Puyol-Gruart, and Pere Garcia-Calves. u-Help: Supporting Helpful Communities with Information Technology.

Katina Hazelden, Matthew Yee-King, Leila Amgoud, Mark d'Inverno, Carles Sierra, Nardine Osman, Roberto Confalonieri, and Dave de Jonge. WeCurate: Designing for synchronised browsing and social negotiation.

## Results: Meetings

<table>
<thead>
<tr>
<th>Date</th>
<th>Description</th>
<th>Place</th>
<th>Attendees</th>
</tr>
</thead>
<tbody>
<tr>
<td>08/09/11</td>
<td>Kick-off Meeting</td>
<td>Barcelona</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>20-21/10/11</td>
<td>Technical and MGT Meeting</td>
<td>Barcelona</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>8-10/12/11</td>
<td>Technical and MGT Meeting</td>
<td>Toulouse</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>06/01/12</td>
<td>Technical and MGT Meeting</td>
<td>London</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>08/02/12</td>
<td>Technical and MGT Meeting</td>
<td>Barcelona</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>13-17/02/12</td>
<td>Technical integration</td>
<td>London</td>
<td>IIIA and GC</td>
</tr>
<tr>
<td>28-30/03/12</td>
<td>Shared experiences and collective intentionality</td>
<td>Paris</td>
<td>IIIA and GC</td>
</tr>
<tr>
<td>3-4/05/12</td>
<td>Technical and MGT Meeting</td>
<td>Barcelona</td>
<td>IIIA, GC and IRIT</td>
</tr>
<tr>
<td>28/05/12</td>
<td>Technical integration</td>
<td>London</td>
<td>IIIA and GC</td>
</tr>
<tr>
<td>10-12/09/12</td>
<td>Technical integration</td>
<td>London</td>
<td>IIIA, IRIT and GC</td>
</tr>
<tr>
<td>14/10/12</td>
<td>Technical and MGT meeting</td>
<td>Dubrovnik</td>
<td>IIIA, IRIT and GC</td>
</tr>
<tr>
<td>22/02/13</td>
<td>MGT meeting</td>
<td>Skype meeting</td>
<td>IIIA, IRIT and GC</td>
</tr>
<tr>
<td>06-07/06/13</td>
<td>Technical and MGT meeting</td>
<td>Barcelona</td>
<td>IIIA, IRIT and GC</td>
</tr>
</tbody>
</table>

Results: Spin-off company
- We are working on the incubation of a spin-off company, **SocialBrowsing**, based on the acquired *know-how* and results of ACE.
- The incubation process is led by one of the project's members and is supported by a valorisation manager and a scientific advisor, both from IIIA-CSIC.
- Since August 2013 we have taken several actions towards creating the spin-off.

Results: Spin-off Actions

1.
1. Identification of competitive advantages:
   - A novel model of online and real-time collaboration
   - Group decision-making algorithms such as selection, multiple-criteria, polls, etc.
   - Group interactions to create user and group profiles
2. Development of a business idea:
   - A SaaS for the agile development of social and collaborative business applications
   - We have participated in the VALORTEC 2013 contest organised by ACC1Ó¹
3. Development of a business plan:
   - Many verticals possible (marketing, entertainment, education, tourism, etc.)
   - As part of the VALORTEC contest we have developed our first business plan, focusing on marketing research

---
¹ The agency for enterprise competitiveness of the Government of Catalonia.

Results: Spin-off Actions

4. Building the founder team:
   - The current project team is:
     - Roberto Confalonieri as CEO and CTO
     - Lissette Lemus to support commercialisation and marketing
     - Carles Sierra as Scientific Advisor
   - Looking to complete the team with a Software Engineer and a Sales Manager
5. Initial fund-raising:
   - We have estimated a first seed round of 83,000 €
   - We have presented the business plan to several local business angels

Results summary
- Consortium Agreement was signed
- Several general meetings
- 4 week-long architects’ meetings
- Two prototypes running
- User test completed
- All deliverables completed and publicly available
- 17 papers published
- 1 spin-off company planned
- 1 STREP project funded on call 8 based on ideas generated in ACE: PRAISE

Thanks!
{"Source-Url": "http://www.chistera.eu/sites/chistera.eu/files/ACE%20-%202014.pdf", "len_cl100k_base": 4130, "olmocr-version": "0.1.53", "pdf-total-pages": 43, "total-fallback-pages": 0, "total-input-tokens": 55608, "total-output-tokens": 5914, "length": "2e12", "weborganizer": {"__label__adult": 0.0003919601440429687, "__label__art_design": 0.0011911392211914062, "__label__crime_law": 0.00043582916259765625, "__label__education_jobs": 0.0035572052001953125, "__label__entertainment": 0.00014889240264892578, "__label__fashion_beauty": 0.0002244710922241211, "__label__finance_business": 0.0029888153076171875, "__label__food_dining": 0.0004124641418457031, "__label__games": 0.0005693435668945312, "__label__hardware": 0.0012540817260742188, "__label__health": 0.0007414817810058594, "__label__history": 0.0005054473876953125, "__label__home_hobbies": 0.00016176700592041016, "__label__industrial": 0.0006842613220214844, "__label__literature": 0.00031304359436035156, "__label__politics": 0.000476837158203125, "__label__religion": 0.00035881996154785156, "__label__science_tech": 0.09735107421875, "__label__social_life": 0.00028443336486816406, "__label__software": 0.01910400390625, "__label__software_dev": 0.86767578125, "__label__sports_fitness": 0.0002287626266479492, "__label__transportation": 0.0005021095275878906, "__label__travel": 0.00027871131896972656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17607, 0.02688]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17607, 0.05351]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17607, 0.81662]], "google_gemma-3-12b-it_contains_pii": [[0, 93, false], [93, 556, null], [556, 1134, null], [1134, 1134, null], [1134, 2659, null], [2659, 2759, null], [2759, 2859, null], [2859, 3056, null], [3056, 3237, null], [3237, 3401, null], [3401, 3585, null], [3585, 3624, null], [3624, 3724, null], [3724, 3834, null], [3834, 4125, null], [4125, 4398, null], [4398, 4457, null], [4457, 4652, null], [4652, 5006, null], [5006, 5063, null], [5063, 5163, null], [5163, 5282, null], [5282, 5427, null], [5427, 5801, null], [5801, 5938, null], [5938, 6172, null], [6172, 6657, null], [6657, 7126, null], [7126, 7770, null], [7770, 8129, null], [8129, 8229, null], [8229, 8408, null], [8408, 8925, null], [8925, 10061, null], [10061, 11654, null], [11654, 13115, null], [13115, 14066, null], [14066, 15603, null], [15603, 16000, null], [16000, 16786, null], [16786, 17263, null], [17263, 17600, null], [17600, 17607, null]], "google_gemma-3-12b-it_is_public_document": [[0, 93, true], [93, 556, null], [556, 1134, null], [1134, 1134, null], [1134, 2659, null], [2659, 2759, null], [2759, 2859, null], [2859, 3056, null], [3056, 3237, null], [3237, 3401, null], [3401, 3585, null], [3585, 3624, null], [3624, 3724, null], [3724, 3834, null], [3834, 4125, null], [4125, 4398, null], [4398, 4457, null], [4457, 4652, null], [4652, 5006, null], [5006, 5063, null], [5063, 5163, null], [5163, 5282, null], [5282, 5427, null], [5427, 5801, null], [5801, 5938, null], [5938, 6172, null], [6172, 6657, null], [6657, 7126, null], [7126, 7770, null], [7770, 8129, null], [8129, 8229, null], [8229, 8408, null], [8408, 8925, null], [8925, 10061, null], [10061, 11654, null], [11654, 13115, null], [13115, 14066, null], [14066, 15603, null], [15603, 16000, null], [16000, 16786, null], [16786, 17263, null], [17263, 17600, null], [17600, 17607, null]], 
"google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17607, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17607, null]], "pdf_page_numbers": [[0, 93, 1], [93, 556, 2], [556, 1134, 3], [1134, 1134, 4], [1134, 2659, 5], [2659, 2759, 6], [2759, 2859, 7], [2859, 3056, 8], [3056, 3237, 9], [3237, 3401, 10], [3401, 3585, 11], [3585, 3624, 12], [3624, 3724, 13], [3724, 3834, 14], [3834, 4125, 15], [4125, 4398, 16], [4398, 4457, 17], [4457, 4652, 18], [4652, 5006, 19], [5006, 5063, 20], [5063, 5163, 21], [5163, 5282, 22], [5282, 5427, 23], [5427, 5801, 24], [5801, 5938, 25], [5938, 6172, 26], [6172, 6657, 27], [6657, 7126, 28], [7126, 7770, 29], [7770, 8129, 30], [8129, 8229, 31], [8229, 8408, 32], [8408, 8925, 33], [8925, 10061, 34], [10061, 11654, 35], [11654, 13115, 36], [13115, 14066, 37], [14066, 15603, 38], [15603, 16000, 39], [16000, 16786, 40], [16786, 17263, 41], [17263, 17600, 42], [17600, 17607, 43]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17607, 0.14176]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
ab4a35e2b29660efa886e5291ab4e8ac04a0e720
[REMOVED]
{"Source-Url": "https://cis.temple.edu/~xjdu/Papers_Du/2013_ICIC_Liu.pdf", "len_cl100k_base": 5156, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 21524, "total-output-tokens": 5993, "length": "2e12", "weborganizer": {"__label__adult": 0.0003533363342285156, "__label__art_design": 0.0004017353057861328, "__label__crime_law": 0.0005011558532714844, "__label__education_jobs": 0.0017852783203125, "__label__entertainment": 0.00017917156219482422, "__label__fashion_beauty": 0.0001977682113647461, "__label__finance_business": 0.0022144317626953125, "__label__food_dining": 0.0004019737243652344, "__label__games": 0.0007147789001464844, "__label__hardware": 0.0022792816162109375, "__label__health": 0.001068115234375, "__label__history": 0.00042176246643066406, "__label__home_hobbies": 0.00013196468353271484, "__label__industrial": 0.0007543563842773438, "__label__literature": 0.0004324913024902344, "__label__politics": 0.0003726482391357422, "__label__religion": 0.00040793418884277344, "__label__science_tech": 0.457275390625, "__label__social_life": 0.0001398324966430664, "__label__software": 0.0322265625, "__label__software_dev": 0.496337890625, "__label__sports_fitness": 0.0002899169921875, "__label__transportation": 0.0007257461547851562, "__label__travel": 0.0002586841583251953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22220, 0.04318]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22220, 0.34786]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22220, 0.872]], "google_gemma-3-12b-it_contains_pii": [[0, 2560, false], [2560, 5116, null], [5116, 6401, null], [6401, 9075, null], [9075, 11293, null], [11293, 14009, null], [14009, 16483, null], [16483, 18081, null], [18081, 19620, null], [19620, 22220, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2560, true], [2560, 5116, null], [5116, 6401, null], [6401, 9075, null], [9075, 11293, null], [11293, 14009, null], [14009, 16483, null], [16483, 18081, null], [18081, 19620, null], [19620, 22220, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22220, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22220, null]], "pdf_page_numbers": [[0, 2560, 1], [2560, 5116, 2], [5116, 6401, 3], [6401, 9075, 4], [9075, 11293, 5], [11293, 14009, 6], [14009, 16483, 7], [16483, 18081, 8], [18081, 19620, 9], [19620, 22220, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22220, 0.22293]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
508f472789d94027f0e71548bf90f9f7deaa7c33
[REMOVED]
{"Source-Url": "https://cdn.uclouvain.be/public/Exports%20reddot/ssh-ilsm/images/WP-2013-20.pdf", "len_cl100k_base": 6722, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 39719, "total-output-tokens": 8363, "length": "2e12", "weborganizer": {"__label__adult": 0.00030159950256347656, "__label__art_design": 0.0006628036499023438, "__label__crime_law": 0.0003159046173095703, "__label__education_jobs": 0.002140045166015625, "__label__entertainment": 6.103515625e-05, "__label__fashion_beauty": 0.0001493692398071289, "__label__finance_business": 0.0005650520324707031, "__label__food_dining": 0.00027871131896972656, "__label__games": 0.0004627704620361328, "__label__hardware": 0.0004029273986816406, "__label__health": 0.0003330707550048828, "__label__history": 0.00027298927307128906, "__label__home_hobbies": 7.557868957519531e-05, "__label__industrial": 0.00040078163146972656, "__label__literature": 0.00031113624572753906, "__label__politics": 0.0002340078353881836, "__label__religion": 0.0003628730773925781, "__label__science_tech": 0.01861572265625, "__label__social_life": 9.47713851928711e-05, "__label__software": 0.0088653564453125, "__label__software_dev": 0.96435546875, "__label__sports_fitness": 0.0002256631851196289, "__label__transportation": 0.0003981590270996094, "__label__travel": 0.00017464160919189453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35163, 0.01681]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35163, 0.23916]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35163, 0.89523]], "google_gemma-3-12b-it_contains_pii": [[0, 138, false], [138, 1211, null], [1211, 3893, null], [3893, 6945, null], [6945, 10077, null], [10077, 13664, null], [13664, 15934, null], [15934, 17817, null], [17817, 20353, null], [20353, 20400, null], [20400, 22046, null], [22046, 23758, null], [23758, 25059, null], [25059, 26248, null], [26248, 28112, null], [28112, 30939, null], [30939, 34136, null], [34136, 35163, null]], "google_gemma-3-12b-it_is_public_document": [[0, 138, true], [138, 1211, null], [1211, 3893, null], [3893, 6945, null], [6945, 10077, null], [10077, 13664, null], [13664, 15934, null], [15934, 17817, null], [17817, 20353, null], [20353, 20400, null], [20400, 22046, null], [22046, 23758, null], [23758, 25059, null], [25059, 26248, null], [26248, 28112, null], [28112, 30939, null], [30939, 34136, null], [34136, 35163, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35163, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35163, null]], "pdf_page_numbers": [[0, 138, 1], [138, 1211, 2], [1211, 3893, 3], [3893, 6945, 4], [6945, 10077, 
5], [10077, 13664, 6], [13664, 15934, 7], [15934, 17817, 8], [17817, 20353, 9], [20353, 20400, 10], [20400, 22046, 11], [22046, 23758, 12], [23758, 25059, 13], [25059, 26248, 14], [26248, 28112, 15], [28112, 30939, 16], [30939, 34136, 17], [34136, 35163, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35163, 0.18717]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
88eae9af1e63ec1f6987f19e3ff5c3a63369e0f5
**Syntax**

*Standard estimation command syntax*

```
nestreg [ , options ] : command_name depvar (varlist) [ (varlist) ... ] [ if ] [ in ]
        [ weight ] [ , command_options ]
```

*Survey estimation command syntax*

```
nestreg [ , options ] : svy [ vcetype ] [ , svy_options ] : command_name depvar
        (varlist) [ (varlist) ... ] [ if ] [ in ] [ , command_options ]
```

<table>
<thead>
<tr><th>options</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td><em>waldtable</em></td><td>report Wald test results; the default</td></tr>
<tr><td><em>lrtable</em></td><td>report likelihood-ratio test results</td></tr>
<tr><td><em>quietly</em></td><td>suppress any output from <em>command_name</em></td></tr>
<tr><td><em>store(stub)</em></td><td>store nested estimation results in <em>_est_stub#</em></td></tr>
</tbody>
</table>

*by* is allowed; see [U] 11.1.10 Prefix commands. Weights are allowed if *command_name* allows them; see [U] 11.1.6 weight.

A *varlist* in parentheses indicates that this list of variables is to be considered as a block. Each variable in a *varlist* not bound in parentheses will be treated as its own block.

All postestimation commands behave as they would after *command_name* without the *nestreg* prefix; see the postestimation manual entry for *command_name*.

**Menu**

Statistics > Other > Nested model statistics

**Description**

*nestreg* fits nested models by sequentially adding blocks of variables and then reports comparison tests between the nested models.

**Options**

*waldtable* specifies that the table of Wald test results be reported. *waldtable* is the default.

*lrtable* specifies that the table of likelihood-ratio tests be reported. This option is not allowed if *pweights*, the *vce(robust)* option, or the *vce(cluster clustvar)* option is specified. *lrtable* is also not allowed with the *svy* prefix.

*quietly* suppresses the display of any output from *command_name*.

*store(stub)* specifies that each model fit by *nestreg* be stored under the name _est_stub#, where # is the nesting order from first to last.

**Remarks and examples**

Remarks are presented under the following headings:
- *Estimation commands*
- *Wald tests*
- *Likelihood-ratio tests*
- *Programming for nestreg*

*Estimation commands*

*nestreg* removes collinear predictors and observations with missing values from the estimation sample before calling *command_name*. The following Stata commands are supported by *nestreg*:

clogit, cloglog, glm, intreg, logistic, logit, nbreg, ologit, oprobit, poisson, probit, qreg, regress, scobit, stcox, stcrreg, streg, tobit

You do not supply a *depvar* for *stcox*, *stcrreg*, or *streg*; otherwise, *depvar* is required. You must supply two *depvars* for *intreg*.

*Wald tests*

Use *nestreg* to test the significance of blocks of predictors, building the regression model one block at a time. Using the data from example 1 of [R] test, we wish to test the significance of the following predictors of birth rate: *medage*, *medagesq*, and *region* (already partitioned into four indicator variables: *reg1*, *reg2*, *reg3*, and *reg4*).
```
. use http://www.stata-press.com/data/r13/census4
(birth rate, median age)

. nestreg: regress brate (medage) (medagesq) (reg2-reg4)

Block  1: medage

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  1,    48) =  164.72
       Model |  32675.1044     1  32675.1044           Prob > F      =  0.0000
    Residual |  9521.71561    48  198.369075           R-squared     =  0.7743
-------------+------------------------------           Adj R-squared =  0.7696
       Total |    42196.82    49  861.159592           Root MSE      =  14.084

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -15.24893   1.188141   -12.83   0.000    -17.63785   -12.86002
       _cons |   618.3935   35.15416    17.59   0.000     547.7113    689.0756
------------------------------------------------------------------------------

Block  2: medagesq

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  2,    47) =  158.75
       Model |  36755.8524     2  18377.9262           Prob > F      =  0.0000
    Residual |  5440.96755    47  115.765267           R-squared     =  0.8711
-------------+------------------------------           Adj R-squared =  0.8656
       Total |    42196.82    49  861.159592           Root MSE      =  10.759

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -109.8925   15.96663    -6.88   0.000    -142.0132    -77.7718
    medagesq |   1.607332   .2707228     5.94   0.000     1.062708    2.151956
       _cons |   2007.071   235.4316     8.53   0.000     1533.444    2480.698
------------------------------------------------------------------------------

Block  3: reg2 reg3 reg4

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  5,    44) =  100.63
       Model |   38803.419     5  7760.68381           Prob > F      =  0.0000
    Residual |  3393.40095    44  77.1227489           R-squared     =  0.9196
-------------+------------------------------           Adj R-squared =  0.9104
       Total |    42196.82    49  861.159592           Root MSE      =   8.782

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -109.0957   13.52452    -8.07   0.000    -136.3526   -81.83886
    medagesq |   1.635208   .2290536     7.14   0.000     1.173581    2.096835
        reg2 |   15.00284   4.252068     3.53   0.001     6.433365    23.57233
        reg3 |   7.366435   3.953336     1.86   0.069    -.6009898    15.33386
        reg4 |   21.39679   4.650602     4.60   0.000     12.02412    30.76946
       _cons |    1947.61   199.8405     9.75   0.000     1544.858    2350.362
------------------------------------------------------------------------------

  +-------------------------------------------------------------+
  |       |          Block  Residual                     Change |
  | Block |       F     df        df  Pr > F       R2     in R2 |
  |-------+-----------------------------------------------------|
  |     1 |  164.72      1        48  0.0000   0.7743           |
  |     2 |   35.25      1        47  0.0000   0.8711    0.0967 |
  |     3 |    8.85      3        44  0.0001   0.9196    0.0485 |
  +-------------------------------------------------------------+
```

This single call to `nestreg` ran `regress` three times, adding a block of predictors to the model for each run, as in
```
. regress brate medage

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  1,    48) =  164.72
       Model |  32675.1044     1  32675.1044           Prob > F      =  0.0000
    Residual |  9521.71561    48  198.369075           R-squared     =  0.7743
-------------+------------------------------           Adj R-squared =  0.7696
       Total |    42196.82    49  861.159592           Root MSE      =  14.084

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -15.24893   1.188141   -12.83   0.000    -17.63785   -12.86002
       _cons |   618.3935   35.15416    17.59   0.000     547.7113    689.0756
------------------------------------------------------------------------------

. regress brate medage medagesq

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  2,    47) =  158.75
       Model |  36755.8524     2  18377.9262           Prob > F      =  0.0000
    Residual |  5440.96755    47  115.765267           R-squared     =  0.8711
-------------+------------------------------           Adj R-squared =  0.8656
       Total |    42196.82    49  861.159592           Root MSE      =  10.759

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -109.8925   15.96663    -6.88   0.000    -142.0132    -77.7718
    medagesq |   1.607332   .2707228     5.94   0.000     1.062708    2.151956
       _cons |   2007.071   235.4316     8.53   0.000     1533.444    2480.698
------------------------------------------------------------------------------

. regress brate medage medagesq reg2-reg4

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  5,    44) =  100.63
       Model |   38803.419     5  7760.68381           Prob > F      =  0.0000
    Residual |  3393.40095    44  77.1227489           R-squared     =  0.9196
-------------+------------------------------           Adj R-squared =  0.9104
       Total |    42196.82    49  861.159592           Root MSE      =   8.782

------------------------------------------------------------------------------
       brate |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      medage |  -109.0957   13.52452    -8.07   0.000    -136.3526   -81.83886
    medagesq |   1.635208   .2290536     7.14   0.000     1.173581    2.096835
        reg2 |   15.00284   4.252068     3.53   0.001     6.433365    23.57233
        reg3 |   7.366435   3.953336     1.86   0.069    -.6009898    15.33386
        reg4 |   21.39679   4.650602     4.60   0.000     12.02412    30.76946
       _cons |    1947.61   199.8405     9.75   0.000     1544.858    2350.362
------------------------------------------------------------------------------
```

`nestreg` collected the $F$ statistic for the corresponding block of predictors and the model $R^2$ statistic from each model fit. The $F$ statistic for the first block, 164.72, is for a test of the joint significance of the first block of variables; it is simply the $F$ statistic from the regression of `brate` on `medage`. The $F$ statistic for the second block, 35.25, is for a test of the joint significance of the second block of variables in a regression of both the first and second blocks of variables. In our example, it is an $F$ test of `medagesq` in the regression of `brate` on `medage` and `medagesq`. Similarly, the third block’s $F$ statistic of 8.85 corresponds to a joint test of `reg2`, `reg3`, and `reg4` in the final regression.
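As a check on how these numbers relate, the second block’s $F$ statistic can be reproduced by hand from the ANOVA tables above: it is the standard nested-model $F$ test comparing the block 1 and block 2 residual sums of squares,

$$
F = \frac{(\mathit{RSS}_1 - \mathit{RSS}_2)/1}{\mathit{RSS}_2/47} = \frac{(9521.71561 - 5440.96755)/1}{5440.96755/47} \approx 35.25,
$$

with 1 numerator degree of freedom (one added predictor) and 47 residual degrees of freedom, matching the Wald table reported by `nestreg`.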
*Likelihood-ratio tests*

The `nestreg` command provides a simple syntax for performing likelihood-ratio tests for nested model specifications; also see `lrtest`. Using the data from example 1 of [R] lrtest, we wish to jointly test the significance of the following predictors of low birthweight: `age`, `lwt`, `ptl`, and `ht`.

```
. use http://www.stata-press.com/data/r13/lbw
(Hosmer & Lemeshow data)

. xi: nestreg, lr: logistic low (i.race smoke ui) (age lwt ptl ht)
i.race            _Irace_1-3          (naturally coded; _Irace_1 omitted)

Block  1: _Irace_2 _Irace_3 smoke ui

Logistic regression                               Number of obs   =        189
                                                  LR chi2(4)      =      18.80
                                                  Prob > chi2     =     0.0009
Log likelihood = -107.93404                       Pseudo R2       =     0.0801

------------------------------------------------------------------------------
         low | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    _Irace_2 |   3.052746   1.498087     2.27   0.023     1.166747    7.987382
    _Irace_3 |   2.922693   1.189229     2.64   0.008     1.316457    6.489285
       smoke |   2.945742   1.101838     2.89   0.004     1.415167    6.131715
          ui |   2.419131   1.047359     2.04   0.041     1.035459    5.651788
       _cons |   .1402209   .0512295    -5.38   0.000     .0685216    .2869447
------------------------------------------------------------------------------

Block  2: age lwt ptl ht

Logistic regression                               Number of obs   =        189
                                                  LR chi2(8)      =      33.22
                                                  Prob > chi2     =     0.0001
Log likelihood = -100.724                         Pseudo R2       =     0.1416

------------------------------------------------------------------------------
         low | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    _Irace_2 |   3.534767   1.860737     2.40   0.016     1.259736    9.918406
    _Irace_3 |   2.368079   1.039949     1.96   0.050     1.001356    5.600207
       smoke |   2.517698    1.00916     2.30   0.021     1.147676    5.523162
          ui |     2.1351   .9808153     1.65   0.099     .8677528      5.2534
         age |   .9732636   .0354759    -0.74   0.457     .9061578    1.045339
         lwt |   .9849634   .0068217    -2.19   0.029     .9716834    .9984249
         ptl |   1.719161   .5952579     1.55   0.122     .8721445    3.388788
          ht |   6.249602   4.322408     2.66   0.008     1.611152    24.24199
       _cons |   1.586014   1.910496     0.80   0.417     .1496092     16.8134
------------------------------------------------------------------------------

  +----------------------------------------------------------------+
  | Block |        LL       LR     df  Pr > LR       AIC       BIC |
  |-------+--------------------------------------------------------|
  |     1 |  -107.934    18.80      4   0.0009  225.8681  242.0768 |
  |     2 |  -100.724    14.42      4   0.0061   219.448  248.6237 |
  +----------------------------------------------------------------+
```

The estimation results from the full model are left in `e()`, so we can later use `estat` and other postestimation commands.

```
. estat gof

Logistic model for low, goodness-of-fit test

       number of observations =       189
 number of covariate patterns =       182
            Pearson chi2(173) =    179.24
                  Prob > chi2 =    0.3567
```

*Programming for nestreg*

If you want your user-written command (*command_name*) to work with `nestreg`, it must follow standard Stata syntax and allow the `if` qualifier. Furthermore, *command_name* must have `sw` or `swml` as a program property; see [P] program properties. If *command_name* has `swml` as a property, *command_name* must store the log-likelihood value in `e(ll)` and the model degrees of freedom in `e(df_m)`.

**Stored results**

`nestreg` stores the following in `r()`:

Matrices
- `r(wald)` — matrix corresponding to the Wald table
- `r(lr)` — matrix corresponding to the likelihood-ratio table

**Acknowledgment**

We thank Paul H. Bern of Syracuse University for developing the hierarchical regression command that inspired `nestreg`.

**Reference**

Acock, A. C. 2014. *A Gentle Introduction to Stata*. 4th ed. College Station, TX: Stata Press.

**Also see**

[P] program properties — Properties of user-defined programs
{"Source-Url": "https://www.stata.com/manuals13/rnestreg.pdf", "len_cl100k_base": 4939, "olmocr-version": "0.1.49", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 15318, "total-output-tokens": 6020, "length": "2e12", "weborganizer": {"__label__adult": 0.0003178119659423828, "__label__art_design": 0.0006084442138671875, "__label__crime_law": 0.0003817081451416016, "__label__education_jobs": 0.00363922119140625, "__label__entertainment": 0.00010991096496582033, "__label__fashion_beauty": 0.00015497207641601562, "__label__finance_business": 0.0011091232299804688, "__label__food_dining": 0.000270843505859375, "__label__games": 0.0008502006530761719, "__label__hardware": 0.0007734298706054688, "__label__health": 0.000713348388671875, "__label__history": 0.0004181861877441406, "__label__home_hobbies": 0.0001919269561767578, "__label__industrial": 0.0006690025329589844, "__label__literature": 0.0003285408020019531, "__label__politics": 0.0003533363342285156, "__label__religion": 0.0004222393035888672, "__label__science_tech": 0.071533203125, "__label__social_life": 0.00020003318786621096, "__label__software": 0.1285400390625, "__label__software_dev": 0.78759765625, "__label__sports_fitness": 0.00035262107849121094, "__label__transportation": 0.00031304359436035156, "__label__travel": 0.00025844573974609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 12744, 0.1552]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 12744, 0.28886]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 12744, 0.60566]], "google_gemma-3-12b-it_contains_pii": [[0, 1470, false], [1470, 3092, null], [3092, 6112, null], [6112, 9389, null], [9389, 11520, null], [11520, 12744, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1470, true], [1470, 3092, null], [3092, 6112, null], [6112, 9389, null], [9389, 11520, null], [11520, 12744, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 12744, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 12744, null]], "pdf_page_numbers": [[0, 1470, 1], [1470, 3092, 2], [3092, 6112, 3], [6112, 9389, 4], [9389, 11520, 5], [11520, 12744, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 12744, 0.24291]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
8b780f0921160fbbe18a3722437144c1d9d01d0d
Introducing programmability and automation in the synthesis of virtual firewall rules

Daniele Bringhenti, Guido Marchetto, Riccardo Sisto, Fulvio Valenza, Jalolliddin Yusupov
Dip. Automatica e Informatica, Politecnico di Torino, Torino, Italy. Emails: {first.last}@polito.it

Abstract—The rise of new forms of cyber-threats is mostly due to the extensive use of virtualization paradigms and the increasing adoption of automation in the software life-cycle. To address these challenges, we propose an innovative framework that leverages the intrinsic programmability of cloud and software-defined infrastructures to improve the effectiveness and efficiency of reaction mechanisms. In this paper, we present our contributions with a demonstrative use case in the context of Kubernetes. By means of this framework, developers of cybersecurity appliances will no longer have to care about how to react to events, or struggle to define every possible security task at design time. In addition, the automatic firewall ruleset generation provided by our framework largely avoids human intervention, hence decreasing both the time needed to carry out these tasks and the likelihood of errors. We focus our discussion on the technical challenges: the definition of common actions at the policy level and their translation into configurations for a heterogeneous set of security functions, illustrated by means of a use case.

Index Terms—network functions virtualization, firewall, automatic programmability, cloud networking, formal methods

I. INTRODUCTION

The networking field is currently facing a deep revolution based on virtualization. In the decade that has just ended, innovative paradigms shook the traditional vision of networks as a mesh of heterogeneous functions providing different services. Networking first embraced virtualization with the birth of Software-Defined Networking (SDN) [1], [2]. The main pillars of this technology are the decoupling of data plane and control plane, the centralization of all control-plane functions in a software module referred to as the SDN controller, and the abstraction between the specificity of user applications and the generality of controller interfaces. More recently, Network Functions Virtualization (NFV) [3], [4] introduced the possibility to create network functions as software programs and to run them as traditional virtual machines or containerized applications, supervised by a software MANagement and Orchestration (MANO) module [5]. Physical middleboxes have thus been progressively replaced by general-purpose servers where the programs implementing the network functions can run.
Among the contributions these paradigms brought, automatic (re)programmability of network functions is nowadays becoming feasible, overcoming the traditional troubles of manual function configuration [6]. On one side, a fundamental novelty provided by SDN has been the reactive generation and deployment of forwarding rules by the controller onto the data plane switches. Whenever a packet that does not match any switch rule is received, it is forwarded to the controller, so that the controller can take the best decision according to its internal logic and consequently generate rules for all the network switches that will have to manage packets with the same characteristics in the future. On the other side, if the network functions are implemented in the NFV fashion, MANO can automatically manage their life-cycle and deployment, so as to optimize either resource consumption on the underlying physical infrastructure or availability of the provided services.

Although many organizations are migrating virtual machine (VM)-based applications to containers, virtualization is still present in data centers and public clouds. We are also seeing new ways of integrating virtualization with containers and Kubernetes (K8s) to provide innovative solutions to new problems. In other words, virtual machines are also becoming part of the cloud-native architecture; this concept is called container-native virtualization. Kubernetes, an open-source system for automating the deployment, scaling, and management of virtualized applications, is an example of the fulfillment of the SDN and NFV paradigms. It significantly simplifies the work of network and service administrators. However, in an environment like Kubernetes, where multiple software processes run in parallel, correct global management becomes more difficult than it traditionally was with single hardware devices. This increase in complexity unfortunately contributed to a rise in cyberattacks, which became more varied as they gained the possibility to exploit new kinds of breaches. In particular, misconfiguration of network functions has become more critical, because more variable factors must be considered when enforcing a correct security defense against both external and internal attacks. This statement is confirmed by recent surveys, such as Verizon’s most recent study [7]. In this report, misconfigurations are identified as the third most critical threat in cloud environments, one that can lead to catastrophic breaches.

In light of these observations, the challenge we propose to face is to effectively exploit the benefits provided by the virtual networking paradigms while minimizing the impact of the drawbacks illustrated above. With this aim, we designed a framework based on the innovative methodology presented in [8], which relies on Maximum Satisfiability Modulo Theories (MaxSMT), and we integrated it in the context of Kubernetes. The proposed approach automatically configures virtual firewalls, where a considerable number of configuration errors traditionally occur. Moreover, we describe how this methodology is effectively introduced in the framework architecture of ASTRID (AddreSsing ThReats for virtualIseD services), an EU H2020 Project [9].

The remainder of this paper is structured as follows. Section II describes the most closely related works, illustrating the main differences with respect to the methodology proposed in this paper.
Section III first presents the general architecture of the ASTRID framework; the focus then shifts to the methodology for automatic firewall configuration, implemented inside the Security Controller, the central component of the ASTRID framework that enforces security in cloud-based networks. Section IV provides additional details about the implementation, alongside a validation based on the framework’s application in a realistic scenario. Finally, Section V briefly concludes the paper and describes the planned future works.

II. RELATED WORK

The focus of this paper is the automatic configuration of firewalling functions in the Kubernetes framework. Therefore, we briefly introduce the main characteristics of Kubernetes and then report the main works related to automatic firewall configuration.

As shown in Fig. 1, a Kubernetes cluster is composed of multiple nodes, which can be virtual or physical. A Pod is the minimal management unit and can accommodate one or more containers. Each Pod is protected by a packet filter (i.e., FW in Fig. 1). Pods are assigned network addresses and are allocated to nodes. Containers inside a Pod share resources, such as volumes where they can write and read data. Clients contact the cluster through another firewall, which distributes requests to nodes according to load-balancing rules. The proxy receives requests from this firewall and forwards them to Pods. Each node has a proxy installed. If a Pod is replicated, the proxy distributes the load among the replicas. The kubelet is a component that manages Pods, containers, images and other elements in the node; it forwards container-monitoring data to the main node, which acts when necessary. In this framework, one of the key points concerns the correct and consistent configuration of this graph of firewalls that protects access to each container.

In the literature, automatic firewall configuration is a challenge on which research has been only partially carried out. Most of the works describe either techniques that can be applied only to traditional networks (e.g., with hardware firewalls), or mathematical models that lack a corresponding implementation proving their feasibility and effectiveness. Moreover, only a limited subset of them enrich the computed configurations with optimality or formal correctness assurance [10]. The three papers that gave birth to this research trend were [11], [12] and [13]. In particular, Firmato [11] represents a vital milestone, because it is the first approach based on policy refinement that is able to automatically synthesize a firewall configuration, by exploiting an entity-relationship model for the representation of the security policies and the network topology. Nevertheless, its limitations are evident: the most critical is that it has been validated on a single centralized firewall, instead of a distributed security architecture. The other two works ([12], [13]) added, as their main novelty, the possibility to configure a distributed firewall. However, all these works exclusively target traditional networks, and offer neither optimality nor formal verification. Formal mathematical models have instead been presented in [14] and [15], where formal methodologies are used to automatically compute firewall configurations.
However, in both cases these techniques work only in specific settings not related to virtualized networks: [14] follows the syntax of IPChains and Cisco’s PIX, whereas the technique of [15] has been validated only with SCADA-firewall configuration. Besides, optimization is overlooked in both works. A recent line of work that, unlike all the others, specifically targets NFV-based networks is [16], [17]. The proposed approach is a first step toward security-policy-aware NFV management, introducing a specific module, called Security Awareness Manager (SAM), into frameworks that provide NFV MANO, such as OpenMANO. This module performs a complete refinement of high-level, user-specified network security policies into the low-level configuration of virtual network functions, using optimization models defined for each function type. There are limitations in this work, though: the achieved results are not formally verified, and little information is provided about how firewall policies are managed, since the paper provides a comprehensive approach for multiple security function types. Anyhow, it shows how, despite its drawbacks, virtualization is characterized by features that can be positively and efficiently exploited in the automatic programmability of next-generation computer networks.

Finally, the proposed work integrates the automatic configuration approach presented in [8] into Kubernetes. Specifically, the solution in [8] adopts a formal approach based on the MaxSMT problem, which provides formal assurance about the correctness of the solution. More details will be provided in the next sections.

![Fig. 1: Kubernetes architecture](image-url)

III. APPROACH

This section presents the design of the ASTRID framework and a generic workflow illustrating its main functionalities. Next, our proposed approach is presented as a Security Controller component that resides in the ASTRID framework.

A. ASTRID framework

The term orchestration is commonly used in the IT field: NFV and microservice systems have service orchestration, and cloud systems have cloud orchestration for cloud resource description. With the development and maturity of container technology, more and more enterprises and individuals choose to containerize traditional applications or directly develop container-based cloud-native applications, and then run these applications on a container platform. Faced with a complex container operating environment, the need for container orchestration has arisen. In general, container orchestration is responsible for the lifecycle scheduling of containers, and it improves container usage by managing container clusters. Three major players currently dominate: Kubernetes, Docker Swarm, and Apache Mesos. They belong to the category of DevOps infrastructure management tools and are called “container orchestration engines”. But when developers enter the world of orchestration, one thing that needs special attention is security. Various blogs, videos, books, and tutorials teach developers how to use these solutions, but only a few mention the need to add security controls to protect applications in the cluster. Moreover, if the underlying infrastructure of the cloud is unreliable (or configured in a vulnerable manner), there is no way to guarantee the security of a Kubernetes cluster built on this foundation.
The main goal of the AddreSsing ThReats for virtualIseD services (ASTRID) project is to address these technological gaps in the scope of cloud infrastructures. The project proposes a novel cyber-security framework to provide situational awareness for cloud applications and NFV services. The overall workflow of the framework is presented in Fig. 2. According to this workflow, the ASTRID framework allows software and service developers to provide a description of the service graph as well as the infrastructure information. The infrastructure information includes the actual number of launched virtual network functions and the parameters assigned after the enforcement process, such as IP addresses and ports. Starting from this input, the Security Controller automatically derives the configuration of the security functions in the control plane. In the next subsection, we describe this component in detail.

B. Security Controller

The Security Controller has been developed on the basis of the methodology presented in [8]. It incorporates programmability and automation in the synthesis of virtual firewall rules from a user-provided security policy. In this respect, the Security Controller works in close coordination with the service orchestrator. The service orchestrator is in charge of providing a description of the service graph as well as the infrastructure information. After receiving the required data from the orchestrator, the controller performs an automatic translation from the high-level policy to low-level configuration parameters for firewall network functions. This process of automatic configuration is formally proven to meet the security policies as part of the analysis. The Security Controller formulates the problem of automatically configuring firewall rule tables as a Maximum Satisfiability Modulo Theories (MaxSMT) problem. It is a constraint optimization problem that we use to provide two main features: i) high assurance in the correctness of the computed solutions, thanks to the intrinsic formal correctness-by-construction paradigm; ii) optimality of the solution, by minimizing the number of automatically generated firewall rules, with the purpose of improving the filtering operations.

Optimization problems are commonly modeled with Integer Programming (IP) languages. However, most such problems belong to NP-hard classes, and large-scale integer problems are difficult to solve. Moreover, none of the variations of the IP formulation are able to model the problem of automatic firewall configuration while also verifying end-to-end reachability. This is due to their lower expressive power compared to Constraint Satisfaction Problem (CSP) representations.
An instance encoding of a CSP, MaxSMT in our case, is defined by a set of variables, a set of possible values (or domains) for each variable, and a set of soft and hard constraints, each constraint involving one or more variables. A MaxSMT solver determines, for a given instance, whether it is possible to assign a value to each variable from its respective domain so as to satisfy all hard constraints and an optimal number of soft constraints simultaneously. The Security Controller translates the input service graph into a MaxSMT instance by means of a set of First-Order Logic formulas. In a nutshell, these formulas are eventually converted into boolean variables in Conjunctive Normal Form. In addition to the topological definition of the service graph, each network function of the service graph is translated into an abstract model according to the guidelines given by Verigraph [19]. This allows us to provide a higher level of assurance that the automatically generated configuration parameters of the firewall will satisfy the security policies in the presence of complex network functions. The level of abstraction of these models covers all the forwarding behavior of the network functions and the configuration parameters that are already defined. Instead, we model the firewall network function by introducing soft constraints over variables, which the MaxSMT solver will then decide whether to satisfy. These variables represent the IP and port addresses to be autoconfigured in order to satisfy the end-to-end policies. Initially, these variables are set to false, which means that a firewall does not contain any rule. If the policy requires that the firewall block some traffic, the solver must falsify the corresponding soft constraint in favor of satisfying the policy requirement. The policy requirement itself is modeled as a hard constraint, which means it must always be satisfied. In this way, the solver tries to minimize the number of falsified soft constraints while satisfying the hard constraints. This is the optimization problem we set out to solve.

In order to represent the reachability policies by means of hard constraints, we introduce the concept of packet flows between endpoints. The first constraint we assert is that the network function model defined in the service graph must forward a packet flow if it receives one. This constraint must hold under the functional behavior of the network device: for instance, it is true if a firewall network function does not contain any rule that drops packets. The second constraint states that the packet flow sent from a source node must be received by the destination node. Other constraints include the forwarding path definitions and the static configuration parameters of network functions. In contrast, an IP formulation of the same problem would be limited to a set of constraints over binary, integer, or real variables. The approach presented in this paper instead allows us to model the problem using very expressive constraints, covering configuration parameters of network functions, the forwarding behavior of the service graph, and complex security policies, in addition to the automatic configuration constraints of the problem. Therefore, existing IP algorithms are not comparable to our algorithm for that class of problems.
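To make the encoding concrete, the following is a minimal sketch, assuming Z3’s Python bindings (the `z3-solver` package): reachability policies become hard constraints, while rule minimization is expressed through soft constraints. The node names, the `allow` rule variables, and the policy list are hypothetical simplifications; the actual model in [8] additionally encodes packet flows and the forwarding behavior of every network function, which this sketch omits.

```python
# Minimal MaxSMT sketch with Z3's Optimize engine (hypothetical encoding,
# far simpler than the full model of [8]): one boolean per candidate
# firewall rule, soft constraints prefer "no rule", hard constraints
# enforce the user's reachability policies.
from z3 import Bool, Not, Optimize, is_true, sat

nodes = ["nginx", "apache", "nodejs1", "nodejs2", "mysql"]
allow = {(s, d): Bool(f"allow_{s}_{d}")
         for s in nodes for d in nodes if s != d}

opt = Optimize()

# Soft constraints: each uninstalled rule saves weight 1, so the solver
# minimizes the number of generated rules.
for var in allow.values():
    opt.add_soft(Not(var), weight=1)

# Hard constraints: with deny-all firewall defaults, every reachability
# policy (an arrow in the service graph) forces the corresponding rule.
policies = [("nginx", "apache"), ("nginx", "nodejs1"), ("nginx", "nodejs2"),
            ("apache", "mysql"), ("nodejs1", "mysql"), ("nodejs2", "mysql")]
for s, d in policies:
    opt.add(allow[(s, d)])

if opt.check() == sat:
    model = opt.model()
    rules = sorted(p for p, v in allow.items() if is_true(model[v]))
    print(f"{len(rules)} rules installed:", rules)
```

In this toy instance the optimum trivially installs exactly the six policy rules; the value of the MaxSMT formulation shows up once packet-flow and forwarding-behavior constraints couple the variables.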
In the next section, we demonstrate our approach by means of a representative scenario.

IV. USE CASE SCENARIO

This section presents our framework in greater detail with a practical use case and motivates our design decisions. For the sake of simplicity, we focus our attention on a specific component of the ASTRID framework, the Security Controller, and emphasize that the interaction with the other components is performed by means of a REST API. We expose a number of resource endpoints to the Security Orchestrator, which it will use to deliver the service graph and infrastructure information and to retrieve the automatically generated firewall rules. We underline that this methodology can be extended to more general scenarios than the ASTRID framework: the Security Controller is a standalone web service application, which makes it easy to incorporate into existing cloud platforms and orchestrators.

We consider the scenario where an administrator predefines the logical service graph presented in Fig. 3a and feeds it to the dashboard of the ASTRID framework. This service graph represents a realistic scenario where the nginx web server is made public to the Internet and functions as a reverse proxy to fetch dynamic data from multiple instances of nodejs and apache servers. In this case, both servers can acquire data from a mysql database. As we can see from the figure, the reachability policies required by the use case are rather obvious (i.e., highlighted with arrows). Instead, the isolation property required by the service graph is not evident: all the communications that are not highlighted with arrows must be isolated. Since each service in the graph is associated with a firewall, the firewalls are preconfigured with deny-all rules in order to satisfy this policy. This ensures that all interactions within the service graph are isolated, except the ones predefined by the user (i.e., the arrows).

A Service Orchestrator of the ASTRID framework is in charge of deploying the service graph onto the infrastructure and generating the enriched service graph shown in Fig. 3b. During this enforcement phase, all the services are assigned the IP addresses and ports where they can be reached. It is important to highlight that the multiple instances of the services are deployed in separate Pods, each with its own IP address. In this scenario, the user specified two instances of the nodejs server to handle the load. To illustrate the complexity introduced by this simple use case, we included in Fig. 3b all the links connecting each service in the infrastructure. Taking into account the deny-all rules of each service’s firewall, we can be sure that there is no reachability between the Pods in this phase, although the user policy that needs to be satisfied has been specified by means of the arrows in the figure. As an example, the apache server needs to be configured to allow traffic from itself to the mysql database and to allow communication from nginx, while remaining isolated from each instance of the nodejs servers. Without the Security Controller, an administrator of the infrastructure must manually configure each firewall; this manual process is error-prone and time-consuming. This scenario motivates the use of the Security Controller presented in this paper, which automatically generates firewall configurations for each service and provides formal assurance that the network policy defined by the user is satisfied.
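As an illustration of the input side of this exchange, the snippet below sketches how an orchestrator could deliver the use case’s service graph and policies to the Security Controller over REST. The endpoint path, the JSON schema, and the nginx IP address are hypothetical; the mysql, apache and nodejs addresses follow Listing 1 below.

```python
# Hypothetical REST submission of the service graph and reachability
# policies to the Security Controller; the endpoint and schema are
# illustrative assumptions, not the actual ASTRID API.
import json
from urllib import request

payload = {
    "nodes": {                    # service name -> Pod IP (Fig. 3b style)
        "nginx": "172.20.1.10",   # hypothetical address
        "apache": "172.20.1.13",
        "mysql": "172.20.1.14",
        "nodejs1": "172.20.1.15",
        "nodejs2": "172.20.1.16",
    },
    # The arrows of Fig. 3a; everything not listed here is implicitly
    # isolated by the deny-all firewall defaults.
    "policies": [
        ["nginx", "apache"], ["nginx", "nodejs1"], ["nginx", "nodejs2"],
        ["apache", "mysql"], ["nodejs1", "mysql"], ["nodejs2", "mysql"],
    ],
}

req = request.Request(
    "http://security-controller.local/api/graph",  # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # would return the generated rules
```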
To obtain the low-level configuration of each firewall component, the Security Controller accepts as input the infrastructure information and the logical service graph, as described in Section III. The infrastructure information contains the IP and port addresses of each service shown in Fig. 3b. This information is required to define the firewall rules, which allow blocking specific packet flows involving specific Pods. In the next step, the Security Controller automatically generates an output with the low-level configuration of each firewall component. As an example, we present the partial output format and the actual configuration parameters generated by the Security Controller in Listing 1. In this prototype evaluation experiment, we use a machine with a 3.40 GHz Intel i7-6700 CPU and 32 GB of RAM. The average time needed for the overall procedure is less than a second. We emphasize that, for most service requests, the time required to schedule VMs is several orders of magnitude larger than this computation time.

Listing 1 shows the configuration parameters generated for the firewall component of the mysql service. It includes all the neighbors of the firewall in the infrastructure network and the firewall rule entries. According to the output, we need to configure the firewall with 3 entries.

**Listing 1: Automatic Configuration Output for mysql**

```
 1 <node name="172.20.1.34" functional_type="FIREWALL">
 2   <configuration name="mysql" description="172.20.1.14">
 3     <firewall defaultAction="DENY">
 4       <elements>
 5         <action>ALLOW</action>
 6         <source>172.20.1.13</source>
 7         <destination>172.20.1.14</destination>
 8         <protocol>TCP</protocol>
 9         <src_port>8080</src_port>
10         <dst_port>8080</dst_port>
11       </elements>
12       <elements>
13         <action>ALLOW</action>
14         <source>172.20.1.15</source>
15         <destination>172.20.1.14</destination>
16         <protocol>TCP</protocol>
17         <src_port>8080</src_port>
18         <dst_port>8080</dst_port>
19       </elements>
20       <elements>
21         <action>ALLOW</action>
22         <source>172.20.1.16</source>
23         <destination>172.20.1.14</destination>
24         <protocol>TCP</protocol>
25         <src_port>8080</src_port>
26         <dst_port>8080</dst_port>
27       </elements>
28     </firewall>
29   </configuration>
30 </node>
```

The first rule states that packets arriving from the Pod with IP address 172.20.1.13 must be allowed. The remaining rules are associated with the two instances of the nodejs server of the service graph. Due to the default action set by the firewall in line 3 of Listing 1, all other packets arriving from the network are dropped. For instance, intruders from the Internet are not able to access the mysql database in accordance with these rules. This, in fact, ensures that the initial service graph policy defined by the user is satisfied. Eventually, the output file generated by the Security Controller is sent back to the Context Broker, which is in charge of translating the low-level configuration of each firewall into the vendor-specific format of that firewall.

An important feature of the Security Controller is the possibility of having firewalls with no configuration, as in the use case, or with a partial configuration, leaving to the tool itself the task of providing the missing configuration as output. The tool generates the configuration with the objective of satisfying all the requested policies while minimizing the number of generated rules.
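To illustrate the final translation step mentioned above, here is a minimal sketch of what a consumer such as the Context Broker might do with the Security Controller’s output: parse the XML of Listing 1 and render each entry in a vendor-specific syntax. The iptables-style target format is an illustrative assumption, not the project’s actual translation.

```python
# Sketch: turn the Security Controller's XML output (schema as in
# Listing 1) into iptables-style commands; the output format is an
# assumed example of a "vendor-specific" translation.
import xml.etree.ElementTree as ET

def translate(xml_text: str) -> list[str]:
    node = ET.fromstring(xml_text)          # <node ...> root element
    fw = node.find(".//firewall")
    cmds = []
    for rule in fw.findall("elements"):
        target = "ACCEPT" if rule.findtext("action") == "ALLOW" else "DROP"
        cmds.append(
            "iptables -A FORWARD"
            f" -s {rule.findtext('source')}"
            f" -d {rule.findtext('destination')}"
            f" -p {rule.findtext('protocol').lower()}"
            f" --dport {rule.findtext('dst_port')}"
            f" -j {target}"
        )
    # The firewall's default action becomes the chain policy.
    policy = "DROP" if fw.get("defaultAction") == "DENY" else "ACCEPT"
    cmds.append(f"iptables -P FORWARD {policy}")
    return cmds
```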
In the case of a partial configuration, a firewall may include static rule entries that will not be changed in the output. This is useful when the service graph is updated in response to an event such as a Pod being terminated, or a new Pod being created to handle additional load on a service. In this scenario, in order not to recompute the configuration parameters of all the other services, their rules can be marked as static, meaning that they are left unchanged. This process not only generates a set of configuration parameters but also provides an optimal set of rules to satisfy the user policy. Optimality is achieved by minimizing the number of rules inside each firewall, which improves the performance of the virtual network functions.

V. Conclusion and Future Work

In this paper, we illustrated the benefits that the introduction of automatic programmability brings to the synthesis of firewall rule sets in virtual networks, with respect to NFV and cloud infrastructures and with special emphasis on Kubernetes. In particular, we described the role of the presented automated methodology in the ASTRID framework architecture, with an emphasis on the contributions provided to the Security Controller. We formulated the problem of automatic firewall configuration as a MaxSMT instance and solved it to provide reachability assurance between endpoints. As future work, we are currently planning to introduce programmability for other kinds of network security functions, such as intrusion detection systems and security devices for channel protection (e.g., VPN gateways). Moreover, we plan to provide automatic configuration updates in the presence of minor changes in the initial service graph without solving the problem from scratch. As the initial results are promising on smaller instances, we plan to evaluate the model in larger-scale scenarios.

Acknowledgment

This work has been partially supported by the EU H2020 Projects ASTRID (Grant Agreement no. 786922) and Cyber-Sec4Europe (Grant Agreement no. 830929).
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/2844332/e384c432-7c62-d4b2-e053-9f05fe0a1d67/main.pdf", "len_cl100k_base": 5726, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21989, "total-output-tokens": 7804, "length": "2e12", "weborganizer": {"__label__adult": 0.0003764629364013672, "__label__art_design": 0.0004608631134033203, "__label__crime_law": 0.0007557868957519531, "__label__education_jobs": 0.0007915496826171875, "__label__entertainment": 0.00014507770538330078, "__label__fashion_beauty": 0.00017762184143066406, "__label__finance_business": 0.0005030632019042969, "__label__food_dining": 0.00035762786865234375, "__label__games": 0.0006251335144042969, "__label__hardware": 0.00229644775390625, "__label__health": 0.0008368492126464844, "__label__history": 0.00036025047302246094, "__label__home_hobbies": 0.0001316070556640625, "__label__industrial": 0.0007123947143554688, "__label__literature": 0.00032591819763183594, "__label__politics": 0.0004405975341796875, "__label__religion": 0.00044417381286621094, "__label__science_tech": 0.386962890625, "__label__social_life": 0.00016188621520996094, "__label__software": 0.0390625, "__label__software_dev": 0.56298828125, "__label__sports_fitness": 0.00026154518127441406, "__label__transportation": 0.0005855560302734375, "__label__travel": 0.00022017955780029297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34440, 0.06345]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34440, 0.42242]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34440, 0.88709]], "google_gemma-3-12b-it_contains_pii": [[0, 1403, false], [1403, 7057, null], [7057, 12430, null], [12430, 17205, null], [17205, 23807, null], [23807, 27522, null], [27522, 34440, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1403, true], [1403, 7057, null], [7057, 12430, null], [12430, 17205, null], [17205, 23807, null], [23807, 27522, null], [27522, 34440, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34440, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34440, null]], "pdf_page_numbers": [[0, 1403, 1], [1403, 7057, 2], [7057, 12430, 3], [12430, 17205, 4], [17205, 23807, 5], [23807, 27522, 6], [27522, 34440, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34440, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
85b0500058111a115396cd4b83a7d1e9899804f6
Security-as-a-Service for Microservices-Based Cloud Applications

Yuqiong Sun, Penn State University, Email: yus138@cse.psu.edu
Susanta Nanda, Symantec Research Labs, Email: susanta_nanda@symantec.com
Trent Jaeger, Penn State University, Email: tjaeger@cse.psu.edu

Abstract—Microservice architecture allows different parts of an application to be developed, deployed and scaled independently, and has therefore become a trend for developing cloud applications. However, it comes with challenging security issues. First, the network complexity introduced by the large number of microservices greatly increases the difficulty of monitoring the security of the entire application. Second, microservices are often designed to completely trust each other, so the compromise of a single microservice may bring down the entire application. These problems are only exacerbated by the cloud, since applications no longer have complete control over their networks. In this paper, we propose a design for security-as-a-service for microservices-based cloud applications. By adding a new API primitive, FlowTap, to the network hypervisor, we build a flexible monitoring and policy enforcement infrastructure for network traffic to secure cloud applications. We demonstrate the effectiveness of our solution by deploying the Bro network monitor using FlowTap. Results show that our solution is flexible enough to support various kinds of monitoring scenarios and policies, and that it incurs minimal overhead (∼6%) for real-world usage. As a result, cloud applications can leverage our solution to deploy network security monitors that flexibly detect and block threats both external and internal to their network.

Keywords—microservices; network monitoring; security;

I. INTRODUCTION

There are multiple trends that are forcing modern cloud applications to evolve. Users expect a rich, interactive, and dynamic user experience on a wide variety of client devices. Applications must be highly scalable, highly available, and must run in cloud environments. Organizations want to roll out frequent updates, sometimes even multiple times a day. Consequently, it is no longer adequate to develop monolithic web applications. The predominant way to address this problem today is to use an alternate architecture, known as the microservices architecture [1], that decomposes a monolithic application into a set of narrowly focused, independently deployable services, called microservices. The popularity of this architecture is evident from the report by the popular jobs portal indeed.com [2] that the number of job openings on microservices-related technologies, such as JSON and REST, has grown more than 100-fold in the last six years, whereas jobs in similar technology areas like SOAP and XML have stayed nearly flat.

The microservices architecture, due to its new design paradigm, introduces two major security challenges. First, the microservices design creates many smaller applications interacting among themselves, which results in complex network activity. This makes monitoring and securing the networks of the overall application and the individual microservices very challenging. Second, the trusted computing base (TCB) for cloud applications usually includes all of the component microservices, and the compromise of one could result in the compromise of the entire application. This gets even more challenging in the public cloud environment, where the application administrator has limited access to the application network.
One way to address these challenges is to leverage the software-defined networking (SDN) capabilities provided by modern cloud networking and program the networks in a way that monitors the complex network interactions and enforces policies on them. For example, SDN provides the ability to scan through network packets at every forwarding element and control the forwarding as per the application requirements. It also makes it possible to control the granularity of the network flows during processing and even to change them dynamically based on attack patterns and application behaviors. Furthermore, it allows for passive monitoring (e.g., via copy-and-forward) and active rerouting (e.g., via changing the forwarding destination), both of which can be leveraged based on security requirements.

In this paper, we propose the design, development, and evaluation of a security-as-a-service infrastructure for microservices-based cloud applications. We propose a cloud-based network security framework that helps cloud application administrators/providers to construct a global view of their application, even when its components are distributed throughout the cloud. The framework also enables application providers/administrators to flexibly implement their own security control over their services, thus preventing a compromised service from compromising the rest of the application or the cloud platform/hypervisor. Our design is motivated by the micro-kernel design [3], where a security kernel monitors and mediates security-critical operations performed by "servers" (aka services) running atop, effectively removing the services from the TCB of the application.

II. BACKGROUND AND PROBLEM

In this section, we illustrate microservice architecture with an example application, an online DVD rental store, and discuss the security challenges of this design paradigm.

A. Microservice Architecture

Microservice architecture [4] decomposes an application into a set of narrowly focused and independently deployable services, a.k.a. microservices [1], which may communicate among themselves using lightweight mechanisms, such as REST APIs. Some examples of popular online services using this design include Netflix [5], Ebay [6], and Gilt [7]. In contrast, traditional internet applications are often designed using a three-tier model with a monolithic application implementing all of the business logic, as shown in Figure 1(a). In this example, all the logic for renting DVDs runs in a single process and is deployed as a single executable (i.e., a WAR file). For scalability, multiple application instances are deployed horizontally behind a load-balancer.

There are three problems with monolithic applications. First, different components of the application have different scaling requirements: for instance, creating new customers is less frequent than customers renting DVDs. However, scaling a monolithic application requires the entire application to be replicated, which requires greater resources. Second, monolithic architecture often means technology lock-in: it is difficult for application components to evolve separately and adopt new technologies (e.g., new databases, new programming languages). Moreover, a small change made to the application requires the entire application to be rebuilt. Finally, as the application becomes more complex, it is often difficult to separate out DevOps responsibilities, which results in slow development and deployment.
Microservice architecture addresses these problems by decomposing a complex monolithic application into a set of small and autonomous services that work together. In Figure 1(b), the DVD rental application is broken into many small and decoupled tasks, each implemented by a small service. This decomposition allows different services to be built, deployed, managed, and scaled independently. During this decomposition, all the function calls across components are replaced by inter-service communications that are implemented via well-defined API interfaces, as illustrated by the connections in Figure 1(b).

B. Security Issues

Microservice architecture does not make an application any simpler; it only distributes the application logic into multiple smaller components, resulting in a much more complex network interaction model between components [8], which is evident even in the simplified example of Figure 1(b). When a real-world application is decomposed, it can easily create hundreds of microservices, as seen in the architecture overview of Hailo, an online cab reservation application, depicted in Figure 1(c): around the circle are microservices, and the lines are the communications between them. The security challenge brought by such network complexity is the ever-increasing difficulty of debugging, monitoring, auditing, and forensic analysis of the entire application. Since microservices are often deployed in a cloud that the application owners do not control, it is difficult for them to construct a global view of the entire application. Attackers can thus exploit this complexity to launch attacks against applications. Current cloud platforms lack a mechanism to assist application owners in effectively collecting and monitoring the interactions among distributed microservices in order to have better visibility of the application.

Another security concern involves the trust among the distributed microservices. An individual microservice may be compromised and controlled by an adversary. For example, the adversary may exploit a vulnerability in a public-facing microservice and escalate privilege on the VM that the microservice runs in. As another example, insiders may abuse their privileges to control some microservices. As a result, individual microservices may not be trustworthy. However, current applications often assume a TCB that includes all their microservices. Consequently, adversaries controlling one microservice may propagate their attacks through the connections among microservices and bring down the entire application. For example, in the DVD rental application, a compromised Contract-Update may send modified requests to User-Update to cause user accounts to be arbitrarily charged. A compromised DVD-Update service may consume and then delete messages on the queue without actually shipping out DVDs, causing a denial-of-service attack, and so on. As a real-world example, a subdomain of Netflix was compromised, and from that domain the adversary could serve any content in the context of netflix.com. In addition, since Netflix allowed all users' cookies to be accessed from any subdomain, an adversary controlling a subdomain was able to tamper with authenticated Netflix subscribers and their data [9]. Current cloud platforms lack a mechanism for applications to monitor and enforce the connections among microservices so as to confine the trust placed in individual microservices and limit the potential damage if any microservice gets compromised.

C. Problem Definition

**Threat and Trust Model.**
In this work, we aim to assist cloud applications in monitoring and enforcing the communication among their microservices. To do this, we assume that the cloud infrastructure on which application VMs\(^1\) run is not compromised. However, we do not trust the VMs that run the microservices: we assume it is possible that adversaries take control over a VM after compromising a microservice. Thus, we offer the guarantee that the communications among the microservices of an application are completely monitored and mediated according to the application's policy, even when individual microservices are controlled by adversaries.

\(^1\)Our approach also works for containers.

**Limitations of Prior Research.** Prior research has focused on protecting applications from external threats [10], [11]. Such approaches often deploy security services (e.g., IDS/IPS) on network edges in order to monitor traffic that goes into or comes out of an application's private network. However, these approaches cannot address internal threats that come from compromised microservices within the application's private network. There is some prior work [12], [13] that tries to extend monitoring to the communications among microservices. However, these approaches often rely on instrumentation of the microservices themselves or of their host VMs. Consequently, if the entire VM is compromised by an adversary, the results collected by the monitoring services will no longer be trustworthy. We want a solution that can flexibly monitor the network communication between microservices and enforce policies on it, in order to detect or prevent both external and internal threats targeting cloud applications. We envision the following requirements for such a solution.

- Completeness: the solution should be able to monitor and enforce over both internal and external network events of a cloud application.
- Tamper resistance: the solution should work even if individual application VMs are under the adversary's control.
- Flexibility: the solution should allow applications to specify their own policies over the kinds of network events they want to monitor and enforce policies on.
- Efficiency: the solution should have minimal impact on the network and CPU resources consumed.

**III. DESIGN**

To monitor and enforce various network events in microservice applications, two requirements must be met. First, the solution needs to provide complete mediation: it must be able to monitor and enforce all security-sensitive network events, both external and internal. Second, the solution must be tamper-proof: it must protect itself from adversaries that may control individual microservices and their host VMs. Our solution leverages modern networking technology (i.e., SDN) to separate the decision about where to place the security monitor from the network flows themselves. The insight behind this design is that security monitors no longer need to sit on the network path (e.g., at the network edge or in application VMs) in order to monitor network events, since the network connections seen by cloud applications are actually defined by software (i.e., via SDN). Consequently, we can place security monitors in their own VMs, hereafter called security VMs, which are deployed just like application VMs in the cloud.
As a result, our solution achieves tamper resistance, since security VMs are isolated from application VMs; complete mediation, since network events, whether internal or external, can be delivered to the security VM via the software-defined cloud network; and flexibility and efficiency, since both the security VMs and the way network events are delivered to them can be decided according to the application's needs.

To demonstrate our approach, consider Figure 1(b). Assume a security monitor is interested in customer requests sent to the Contract-Update service and the resulting internal messages sent by that service. In this case, the virtual network of the DVD store application can be re-programmed such that all the incoming and outgoing network traffic of the VM that hosts the Contract-Update service is copied and forwarded to the security VM for analysis. Network virtualization based on SDN allows the cloud infrastructure to control each and every network packet to or from individual application VMs, consequently providing the completeness guarantee that the security monitor will be able to observe both external and internal network events. Such a design also lends itself to a tamper-proof deployment of security monitors. Since security monitors reside in their own VMs, adversaries cannot evade or tamper with a security monitor even if they have complete control over application VMs. In fact, much like a traditional security monitor deployed on the network edge, security VMs are transparent to application VMs. Adversaries cannot propagate attacks from application VMs to security VMs unless they can break VM isolation, which hypervisors are trusted to enforce.

In order to deploy security monitors in VMs, the cloud infrastructure must be able to deliver relevant network events to the corresponding security VMs. In the remainder of this section, we describe the architectural support required from the cloud infrastructure to do so, and how the flexibility and efficiency goals can be fulfilled.

A. FlowTap Primitive

Deploying security monitors in VMs requires architectural support from the cloud infrastructure in order to deliver relevant network events to the corresponding security VMs. The key challenge here is to provide flexibility: a naive solution that copies and forwards all network events, both internal and external, to a security VM is unlikely to fulfill the diverse security requirements of all cloud applications. We envision the following scenarios in which a cloud application may need a security monitor:

- An application wants to deploy a security monitor to simply log all internal and external network events seen by its microservices for later forensics purposes.
- An application wants to deploy a security monitor to protect its public-facing microservices while trusting its internal microservices, in a way similar to traditional security monitors on network edges.
- An application wants to deploy a special set of security monitors to protect certain microservices (e.g., SQL injection detection for a DB service).
- An application wants to deploy a security monitor to selectively monitor certain types of network events (e.g., messages sent over a message queue) but ignore other types of traffic (e.g., HTTP requests/responses).
- Building on the above scenarios, applications may require security monitors to be able to react to network events differently (e.g., passively monitor vs. block/allow vs. redirect for honeypot analysis).
The architectural support provided by the cloud infrastructure must be flexible enough to support all these scenarios. The solution we propose is a primitive called FlowTap. FlowTap is a contract established between the application and the cloud infrastructure regarding how to deliver network events. This contract describes the monitoring functionality for each application VM by associating that VM with a security VM and the network events to be monitored, implementing the illusion that the security VM is physically resident on the network channel. FlowTap contracts may specify the types of network events and the actions to take upon those events, and different mappings of events/actions may be delivered to different security VMs for the same target VM.

The FlowTap primitive is designed to be a cloud API that can be invoked by cloud applications. Figure 2 shows a prototype of the API. It accepts four parameters. The first parameter \( SRC \) is the target application VM which needs to be monitored. The second parameter \( DST \) is the security VM that hosts the security monitor. Both \( SRC \) and \( DST \) are specified using a \( port \). A port is an abstract concept in the cloud that uniquely represents the connection between a VM and a virtual network; a port may or may not have an IP address. Working with ports, the FlowTap primitive allows applications to specify the monitoring relationship without worrying about the actual deployment. For example, the target VM may be migrated, suspended or rebooted with a new IP address, but as long as it is plugged into the same virtual network, the cloud infrastructure will honor the contract and deliver the network events. Moreover, since a port can work without an IP address, the security VM can be transparent (i.e., not addressable) to application VMs. This further protects the security VM from compromised application VMs. The third parameter \( Flow\_Syntax \) describes a particular network flow. A network flow is the basic unit at which the cloud infrastructure handles traffic routing; it is defined using different fields of a network packet, as shown in Figure 2. By specifying the \( Flow\_Syntax \), an application can ask the cloud infrastructure to selectively deliver specific types of traffic to a security VM. For example, if the application is only interested in monitoring HTTP traffic, it can specify a flow with destination TCP port 80. In this way, only HTTP requests/responses sent to or from microservices will be delivered to the security VM, with the rest of the traffic (e.g., database access) untouched. Working at the granularity of network flows, the FlowTap primitive allows applications to specify different monitoring strategies with the maximum flexibility that a cloud network may support\(^2\). The last parameter \( Action \) allows an application to choose a different reaction strategy for each type of network event. Currently, FlowTap implements two strategies, namely forwarding and redirecting. On forwarding, the relevant network events are copied and forwarded to the security VM, with the original network events still delivered to their intended destination. On redirecting, the relevant network events are directed to the security VM and, depending on the decisions made by the security monitor, may or may not reach their intended destination. Forwarding and redirecting essentially implement passive monitoring and active enforcement, respectively.
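To make the shape of the API concrete, the following Python sketch mirrors the four parameters described above; the wrapper, the port identifiers and the flow-syntax encoding are illustrative assumptions of ours, not the actual prototype code:

```
# Illustrative sketch of the FlowTap primitive's signature (assumed
# encoding; the real primitive lives inside the modified cloud platform).
from dataclasses import dataclass

@dataclass
class FlowSyntax:
    proto: str | None = None         # e.g. "TCP"
    tcp_dst_port: int | None = None  # e.g. 80 for HTTP

def flow_tap(src_port: str, dst_port: str,
             flow: FlowSyntax, action: str) -> None:
    """Establish a FlowTap contract: deliver events matching `flow` seen
    at the target VM's port `src_port` to the security VM's port
    `dst_port`, with `action` in {"forward", "redirect"}."""
    print(f"FlowTap contract: {src_port} -> {dst_port}, {flow}, {action}")

# Forward all HTTP traffic of a target VM to a WAF security VM:
flow_tap("port-vm-contract-update", "port-vm-waf",
         FlowSyntax(proto="TCP", tcp_dst_port=80), "forward")
```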
Multiple FlowTap contracts can be established on the same target application VM with non-overlapping flows. For example, if a microservice both accepts HTTP requests and accesses the message queue (e.g., Contract-Update in Figure 1), the HTTP traffic can be forwarded to a Web Application Firewall (WAF) to block external attacks, while the messages over the message queue can be forwarded to an internal security analysis tool to detect internal attacks.

\(^2\)This is because the underlying routing devices in the cloud (e.g., virtual switches) also process network traffic at the granularity of network flows.

Figure 3. Two security monitor deployment strategies: (a) security monitor per application (tenant); (b) security monitor per cloud node.

B. FlowTap Compiler

The FlowTap primitive allows traffic slicing at the granularity of flows, which operate at layer 4 and below. To support monitoring policies at a higher level of abstraction, we provide a FlowTap compiler, ftc. This tool translates a given set of high-level policies – usually provided by the application – into the sequence of FlowTap API calls necessary to implement the policies. For example, a policy such as "redirect all incoming HTTP traffic for public-facing services to a WAF (web application firewall)" will be translated into a sequence of API calls of the type FlowTap(Service_Port, WAF_Port, (Proto = TCP | TcpDstPort = 80), Forward), which is pushed out to all the nodes that host VMs for the particular application. Currently ftc supports Datalog as the policy language and is designed to be compatible with the policy language used in OpenStack Congress [14]. In this example, the policy could be written as the Datalog rule monitor_http(Tcp_dport, InOut, VM1, VM2) :- http_In(Tcp_dport, InOut), public_vm(VM1), waf_vm(VM2), monHttpAction(_), where the r.h.s. contains rules to validate the context (the first three terms) and perform the set-up/action (monHttpAction). The details of the policy language are left out due to space limitations.

The compiler is designed to generate FlowTap calls according to the network monitoring strategy chosen by the administrator (refer to Section III-C) in a way that also maximizes the efficiency of the system. More concretely, in addition to the policy, ftc also takes as input the utilization of cloud resources (e.g., cloud node CPU usage, network load) and can dynamically compile the same policy into different sets of FlowTap calls that maximize the efficiency of the system. For example, using the monitor-per-cloud-node strategy described below, ftc may slice the flows monitored internally on a cloud node and forward part of them to a security monitor on another node for processing if the CPU of the first node becomes the bottleneck.
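As a rough illustration of this translation step, the sketch below expands a single high-level policy into per-node FlowTap calls; the dictionary-based policy representation and all names are hypothetical, and the real ftc consumes Datalog rather than Python structures:

```
# Hypothetical sketch of the policy-to-FlowTap expansion performed by a
# compiler like ftc (not the actual implementation).
def compile_policy(policy: dict, nodes_hosting_app: list[str]) -> list[tuple]:
    """Expand one high-level policy into one FlowTap call per node."""
    calls = []
    for node in nodes_hosting_app:
        calls.append((
            policy["service_port"],   # SRC: target VM port on this node
            policy["monitor_port"],   # DST: security VM port
            {"proto": "TCP", "tcp_dst_port": policy["tcp_dst_port"]},
            policy["action"],         # "forward" or "redirect"
        ))
    return calls

# "Redirect all incoming HTTP traffic for public-facing services to a WAF":
calls = compile_policy(
    {"service_port": "port-public-svc", "monitor_port": "port-waf",
     "tcp_dst_port": 80, "action": "redirect"},
    nodes_hosting_app=["node-1", "node-2"],
)
```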
C. Monitor Per Application vs. Monitor Per Cloud Node

The FlowTap primitive delivers to each security VM its relevant network traffic. This unavoidably creates additional traffic on the network. Therefore, the next question is where to place security VMs such that network monitoring can be efficient for cloud applications. There are two distinct approaches for deploying security VMs, corresponding to two different service models.

The first approach is to deploy a security VM per application, as shown in Figure 3(a). In this case, each security VM is deployed as part of the application. The service model is self-service: each application is responsible for building and maintaining its security VMs, setting up the FlowTap contracts, etc. The cloud vendor only exposes the FlowTap API for delivering the network traffic as specified by the application. The benefit of this approach is that it provides applications with maximum flexibility: applications can choose whatever security monitors and policies they like and can directly manage their security monitors. However, this approach has a drawback in terms of efficiency. Since a security VM may reside on another physical cloud node, relevant network traffic will be delivered over the physical IP fabric. Although it appears to applications as one flat L2 segment, the physical IP fabric is built on top of multiple L2/L3 tunnels, which can degrade network performance. For example, we measured the network throughput for GRE tunnels between different cloud nodes in Table I. The results show that it is 5x slower than the virtual network within a cloud node. In addition, as reported by Gartner [15], 80% of traffic within the cloud for microservice applications is going to be east-west (i.e., from application VM to application VM) instead of north-south (i.e., from end-user to application VM). Consequently, the bandwidth on the physical IP fabric will likely become a resource that cloud vendors charge applications for, so applications will want to minimize the utilization of such resources.

Another approach is to deploy a security VM per cloud node, as shown in Figure 3(b). In this case, the security VM is deployed as part of the cloud infrastructure, running as a security service provided by the vendor to applications. The service model is pay-by-use: applications specify their policies for security monitoring, and the cloud vendor is responsible for building and maintaining the security VMs, setting up the FlowTap contracts, etc. This approach has two benefits. First, it reduces the traffic on the physical IP fabric, since relevant traffic is now forwarded to a local security VM. Second, this approach offers an opportunity to better utilize cloud resources: since security monitoring is often CPU intensive, the cloud vendor may leverage FlowTap and the compiler to re-balance the traffic forwarding such that security VMs on less loaded nodes get more traffic to monitor\(^3\).

This approach brings up two challenges. First, application owners should still have the flexibility to deploy customized security policies to protect their applications. In other words, the security monitor deployed by the cloud vendor should provide a comprehensive set of security primitives to analyze and operate over any kind of network traffic, and it should be general enough to support various kinds of security policies. While a specific design of such a security monitor is out of the scope of this paper, we argue that it is possible. The reason is that network traffic follows protocols: as long as a security monitor can extract fields from a network packet, the content analysis can be application-policy specific. For example, the Bro network security monitor [10] already supports plugin policies in terms of Bro scripts to customize network security monitoring.

\(^3\)Security VMs on different nodes may still need to collaborate in order to enforce a global policy for an application, but the amount of information necessary to correlate events in a global policy has been found to be orders of magnitude smaller than the raw traffic [16].
Second, since VMs from different applications may now run on the same cloud node, they are monitored by the same security VM, so it is important that security policies from different applications do not interfere. For example, one application may have a security policy that blocks all HTTP traffic while another application may have a security policy that only allows HTTP traffic. This challenge can be resolved by associating policies with network flows. The insight is that network flows from different applications cannot overlap, so if security policies are associated with flows, they cannot interfere with each other. We rely on the compiler to do so.

In our work, we adopted the second approach, where we created a security-as-a-service offering provided by the cloud vendor based on the Bro network security monitor [10]. In Section V-A, we demonstrate several security policies we used that can detect and block internal threats from compromised microservices. In general, we envision that a hybrid approach will be adopted, where certain common security monitors (e.g., malware detection, IP blacklisting) will be deployed as a service provided by the cloud vendor, but application owners have the freedom to deploy highly customized security monitors specific to their applications. Our FlowTap primitive is designed to be flexible enough to support either deployment case and service model.

IV. IMPLEMENTATION

Our FlowTap prototype is implemented on the OpenStack Icehouse release. To implement FlowTap, we modified the virtual routing devices on cloud nodes, including the integration bridge (i.e., br-int) that connects to VMs and the tunneling bridge (i.e., br-tun) that tunnels the VM traffic across cloud nodes. We modified br-int such that when a packet of the target VM is submitted, it is processed through the following steps according to the FlowTap API: 1) the flow is compared with the flow syntax; 2) if it matches, it is duplicated (if the action is forwarding) or taken as is (if the action is redirecting); 3) its destination MAC address is rewritten to be the MAC of the security VM; and 4) it is resubmitted, either to a local port on br-int if the security VM is on the same cloud node, or to br-tun for tunneling. We modified br-tun to establish a tunnel to the remote cloud node that hosts the security VM upon the execution of the FlowTap API. The packet is then delivered through the tunnel to the security VM by the remote br-tun and br-int, based on its destination MAC.

Although the general implementation is straightforward, we ran into several interesting issues. First, virtual bridges do not allow the same device port to be used for both incoming and outgoing flows. Consequently, if a packet comes from br-tun to br-int, we cannot duplicate it and send it out to br-tun again. The solution we adopted is to create another patch port between br-int and br-tun such that outgoing flows are separated from incoming flows. However, this creates another challenge: since there are now two connections between br-int and br-tun, a loop is created. As a result, broadcast packets from a VM can propagate back to itself. This creates a serious issue for DHCP-discovery packets: since the VM receives its own DHCP-discovery requests, iptables on the cloud node will set the connection state to invalid, thus preventing the VM from getting a valid DHCP reply. One option is to modify iptables such that invalid connection states are accepted. Another option is to add new flow table rules to filter broadcast messages that are looped back. We adopted the second approach.
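To illustrate the kind of rule involved, the following sketch installs an Open vSwitch flow on br-int that forwards a matching packet normally and then outputs a MAC-rewritten copy toward the security VM; the bridge ports and MAC address are hypothetical, and this one-rule example is a simplified stand-in for the modifications described above:

```
# Sketch of a copy-and-forward flow rule on br-int (hypothetical ports
# and MAC; not the actual prototype code). In OpenFlow action lists,
# each output emits the packet as it exists at that point, so NORMAL
# forwards the unmodified packet before the MAC rewrite takes effect.
import subprocess

def install_forwarding_tap(bridge: str, vm_port: int, sec_vm_mac: str,
                           sec_vm_port: int, tcp_dst: int) -> None:
    flow = (f"priority=100,in_port={vm_port},tcp,tp_dst={tcp_dst},"
            f"actions=NORMAL,mod_dl_dst:{sec_vm_mac},output:{sec_vm_port}")
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

install_forwarding_tap("br-int", vm_port=5,
                       sec_vm_mac="fa:16:3e:00:00:99",
                       sec_vm_port=7, tcp_dst=80)
```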
V. EVALUATION

We evaluate our solution by: (1) demonstrating its effectiveness with an example network security monitor deployed via FlowTap and showing how it enforces various security policies to block internal threats among microservices; and (2) investigating the performance impact of FlowTap.

A. Case Study

To demonstrate the effectiveness of our solution, we deployed a Bro network security monitor via FlowTap in the cloud. The security monitor enforces policies over the internal network events, including HTTP, message, and database access, among the microservices of the example application shown in Figure 1(b). It enforces the following policies:

**Connection Policy.** This policy decides whether or not a microservice can have a direct connection to another microservice. For example, neither Shipping nor DVD-Return needs direct access to the database, so the policy denies any connection attempts from these two services to the database. As a result, although they run within the same application network, they can gain no access to critical data even if they are compromised.

**Request-specific Connection Policy.** This policy defines what kind of request a microservice can make to another microservice. For example, the User-Create service is allowed to insert an entry into the user database, but it is not allowed to make a query about a specific user. The security monitor parses the request body and checks its legitimacy. Moreover, the security monitor enforces this policy based on the user request: a microservice can only make those requests to others that are required for serving a particular user request. Thus, even if a microservice is compromised, it is confined by limiting its connections to other microservices.

**Request Integrity Policy.** This policy enforces over the content (e.g., checks for certain invariants, correlations, etc.) of a request to prevent compromised microservices from hijacking requests. For example, when user Alice rents a DVD, her request is handled by Contract-Update. However, a compromised Contract-Update may send requests to User-Update indicating user Bob to bill. Similarly, a compromised User-Update may modify the database such that the DVD is accounted to Bob instead of Alice. The security monitor enforcing the request integrity policy analyzes the body of requests to ensure that the same user is referred to throughout the processing of the user request by multiple microservices. Similarly, the security monitor can check other application-specific invariants and correlations to make sure that compromised microservices cannot hijack how a user request is served.
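The logic of the first policy can be sketched as follows; the service names come from the DVD rental example, while the function and the allow-list are our own illustration rather than the Bro script actually deployed:

```
# Illustrative Connection Policy logic (hypothetical allow-list; not the
# Bro script used in the paper): deny direct database connections from
# services that do not need them.
ALLOWED_DB_CLIENTS = {"User-Create", "User-Update", "Contract-Update"}

def connection_allowed(src_service: str, dst_service: str) -> bool:
    """Return True if src_service may open a connection to dst_service."""
    if dst_service == "Database":
        return src_service in ALLOWED_DB_CLIENTS
    return True  # other pairs are governed by further policies

assert not connection_allowed("Shipping", "Database")
assert not connection_allowed("DVD-Return", "Database")
```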
**B. Network Virtualization Performance**

Table I shows the overhead associated with the tunneling. In the table, the column Node-Node measures the bare-metal bandwidth between cloud nodes, serving as a baseline for comparison. The bandwidth is measured using Netperf. The column VM-VM (same node) measures the throughput between two VMs running on the same node. The column VM-VM (different node) measures the throughput between two VMs running on different nodes, in which case traffic is tunneled. As shown in the table, due to the GRE tunneling, the network performance degrades dramatically, becoming almost 5x slower. The reason is that the offload features on most existing NICs cannot be applied to the GRE outer header computation; consequently, tunneling becomes a CPU-intensive task. Other tunneling techniques such as STT and VXLAN may yield better performance. One interesting observation is that the network throughput between two VMs on the same node is 12000 mbps, even larger than the hardware bandwidth (10Gb). This is due to memory optimizations in the hypervisor, which allow two VMs on the same host to exchange network packets very quickly.

**C. FlowTap Performance**

Next, we evaluate the performance impact of FlowTap. We are interested in knowing how the raw (TCP) network throughput of two communicating microservices is affected by FlowTap. We consider four different deployment scenarios, shown in Figure 4. For all the scenarios, we used Netperf to pump TCP packets with dummy content as fast as it could, and we used a dummy security VM that would passively read, count, and ignore the packets. We believe this is the worst-case scenario for FlowTap, as there is no computation latency involved in producing or consuming this dummy content. For simplicity, we name the two communicating service VMs A and B, and the security VM S. S is set up to passively monitor the outgoing traffic of A.

In the first deployment scenario, we deployed A, B and S on different nodes. The baseline throughput between A and B is 2600 mbps, due to the GRE tunnels. With FlowTap, the throughput between A and B drops to 2100 mbps. The reason for the drop is the added latency for encapsulating packets for another tunnel: A to S in addition to A to B. In the second deployment scenario, A and S run on the same node while B runs on a different node. In this case, the dominant overhead is the tunneling between A and B; therefore we did not see a throughput drop, since the local forwarding from A to S is very fast. In the third deployment scenario, A and B run on the same node while S runs on a different node. In this case, the throughput between A and B drops from the baseline to 5100 mbps, a decrease of more than 55%. The reason is the tunneling from A to S: since the packet delivery between A and B is fast, tunneling becomes the dominant overhead, decreasing the throughput dramatically. However, we were curious why the throughput between A and B can still be higher than the baseline tunneling throughput (2600 mbps), since in this case FlowTap also forwards the traffic to S on a different node. By looking at the packet counters, we found that the reason is that packets are dropped while being forwarded to S. Since S is only a passive listener, A will not stop and wait if the buffers on S's host fill up; consequently, A can send packets at a faster speed than in the baseline case. In the fourth deployment scenario, A, B and S all run on the same node. We see a throughput drop from 12000 mbps to 9100 mbps. Since no tunneling is involved, the sole source of overhead in this case is the traffic forwarding performed by FlowTap. FlowTap currently incurs a relatively large overhead here due to the expensive operations of packet copying and rewriting (i.e., rewriting the destination MAC address). However, this can be avoided by adding more complex flow rules on the virtual devices of cloud nodes such that the packets can be delivered to the security VM without being modified.
We are currently in the process of optimizing our implementation to address this problem. As mentioned before, the above measurements are performed on raw TCP traffic between two dummy ends. In the real world, the overhead of FlowTap is often amortized by application traffic. For example, we measured a case where the external traffic to a web server is monitored by a security monitor on the same node. As shown in Table III, FlowTap causes about a 6% throughput drop for the web server, which makes it a practical solution for real-world usage.

**Table III. FlowTap performance when monitoring a web server**

<table>
<thead>
<tr> <th></th> <th>Baseline</th> <th>FlowTap</th> <th>Throughput loss</th> </tr>
</thead>
<tbody>
<tr> <td>Throughput (req/s)</td> <td>3195</td> <td>3004</td> <td>6%</td> </tr>
</tbody>
</table>

VI. RELATED WORK

Existing approaches to securing cloud applications fall into two categories. Infrastructure- and/or platform-based security approaches, such as VMware vCNS [17] and NSX [18], extend the hypervisor/platform to provide distributed, and sometimes inline, monitoring for cloud applications. They mostly try to implement monitoring techniques inspired by their physical counterparts (e.g., SPAN and/or TAP ports) in distributed virtual switches [19], firewalls [20] and routers [21]. Our work introduces flexibility to these techniques with more fine-grained and dynamic control, and adds microservice-specific context to address security issues that are important for this architecture. Application-based security approaches, such as Netflix Fido [13], analyze API-level behaviors within cloud applications to build application profiles and then use the profiles to detect anomalous patterns. They, however, have two drawbacks. First, the analysis often uses hooks within the VM or the application to monitor the APIs and other application behaviors. If an adversary successfully compromises a microservice and escalates privileges to control the VM that hosts the service, it can easily subvert the security of this framework. Second, this approach usually lacks visibility into the underlying infrastructure and thus may lack the capability to respond to conditions (e.g., it cannot redirect traffic by itself and needs some infrastructure support). In some situations, such approaches may also be susceptible to poor performance, e.g., when sending traffic or application logs to another host for analytics. We, in comparison, take a middle ground: we leverage the application context from the RPC calls captured in inter-service communication on top of the monitoring infrastructure, and we program the infrastructure to enforce security controls via automated and dynamic responses. There are also application health monitoring technologies [22], but they often focus on individual applications.

VII. CONCLUSION

In this paper, we presented FlowTap, a primitive for the cloud infrastructure that enables it to support fine-grained virtual network monitoring. FlowTap establishes monitoring relationships between microservices and security monitors, allowing them to enforce policies over the network traffic seen by the microservices. An empirical study shows that the FlowTap primitive is flexible enough to support various kinds of monitoring scenarios and policies with minimal overhead. Using the FlowTap primitive, cloud vendors can thus provide security-as-a-service for cloud applications that are based on the microservice architecture.
{"Source-Url": "http://www.cse.psu.edu/~trj1/papers/cloudcom15.pdf", "len_cl100k_base": 7942, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26505, "total-output-tokens": 8932, "length": "2e12", "weborganizer": {"__label__adult": 0.0004529953002929687, "__label__art_design": 0.0005626678466796875, "__label__crime_law": 0.0015382766723632812, "__label__education_jobs": 0.001163482666015625, "__label__entertainment": 0.0001786947250366211, "__label__fashion_beauty": 0.0001958608627319336, "__label__finance_business": 0.0006136894226074219, "__label__food_dining": 0.0003237724304199219, "__label__games": 0.0009336471557617188, "__label__hardware": 0.003782272338867187, "__label__health": 0.0008153915405273438, "__label__history": 0.0003573894500732422, "__label__home_hobbies": 0.0001246929168701172, "__label__industrial": 0.0006237030029296875, "__label__literature": 0.0003421306610107422, "__label__politics": 0.0004253387451171875, "__label__religion": 0.000396728515625, "__label__science_tech": 0.395751953125, "__label__social_life": 0.00016176700592041016, "__label__software": 0.05548095703125, "__label__software_dev": 0.53466796875, "__label__sports_fitness": 0.00024378299713134768, "__label__transportation": 0.0005774497985839844, "__label__travel": 0.00019741058349609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43188, 0.00981]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43188, 0.30982]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43188, 0.92816]], "google_gemma-3-12b-it_contains_pii": [[0, 5351, false], [5351, 11030, null], [11030, 15632, null], [15632, 21129, null], [21129, 26940, null], [26940, 32838, null], [32838, 37720, null], [37720, 43188, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5351, true], [5351, 11030, null], [11030, 15632, null], [15632, 21129, null], [21129, 26940, null], [26940, 32838, null], [32838, 37720, null], [37720, 43188, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43188, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43188, null]], "pdf_page_numbers": [[0, 5351, 1], [5351, 11030, 2], [11030, 15632, 3], [15632, 21129, 4], [21129, 26940, 5], [26940, 32838, 6], [32838, 37720, 7], [37720, 43188, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43188, 0.02]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
5c6dc864031ebcb47be091ba5ed68008b739e45d
SOFTWARE TOOL ARTICLE

Using bio.tools to generate and annotate workbench tool descriptions [version 1; referees: 4 approved]

Kenzo-Hugo Hillion¹, Ivan Kuzmin², Anton Khodak³, Eric Rasche⁴, Michael Crusoe⁵, Hedi Peterson², Jon Ison⁶, Hervé Ménager¹

¹Bioinformatics and Biostatistics HUB, Centre de Bioinformatique, Biostatistique et Biologie Intégrative (C3BI, USR 3756 Institut Pasteur et CNRS), Paris, France
²Institute of Computer Science, University of Tartu, Tartu, Estonia
³Igor Sikorsky Kyiv Polytechnic Institute, National Technical University of Ukraine, Kyiv, Ukraine
⁴Lehrstuhl für Bioinformatik, Institut für Informatik, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
⁵Common Workflow Language Project, Vilnius, Lithuania
⁶DTU Bioinformatics, Technical University of Denmark, Copenhagen, Denmark

Abstract

Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is the incomplete or outdated description of tools: descriptions are often missing important information, including parameters and metadata such as publications or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools - which have been registered in the ELIXIR tools registry (https://bio.tools) - into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata.

Corresponding authors: Kenzo-Hugo Hillion (kenzo-hugo.hillion1@pasteur.fr), Hervé Ménager (herve.menager@pasteur.fr)

Author roles: Hillion KH: Software, Writing – Original Draft Preparation; Kuzmin I: Software, Writing – Review & Editing; Khodak A: Software, Writing – Review & Editing; Rasche E: Software, Writing – Review & Editing; Crusoe M: Methodology, Software, Writing – Review & Editing; Peterson H: Funding Acquisition, Writing – Review & Editing; Ison J: Conceptualization, Funding Acquisition, Writing – Review & Editing; Ménager H: Conceptualization, Funding Acquisition, Software, Supervision, Writing – Original Draft Preparation

Competing interests: No competing interests were disclosed.

How to cite this article: Hillion KH, Kuzmin I, Khodak A et al. Using bio.tools to generate and annotate workbench tool descriptions [version 1; referees: 4 approved]. F1000Research 2017, 6(ELIXIR):2074 (doi: 10.12688/f1000research.12974.1)

Copyright: © 2017 Hillion KH et al.
This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Grant information: ELIXIR-EXCELERATE is funded by the European Commission within the Research Infrastructures Programme of Horizon 2020 [676559].

Introduction

Over the last few years, bioinformatics has played a major role in the field of biology, raising the issue of best practices in software development within the bioinformatics community. These practices include facilitating the discovery, deployment, and usage of tools, and several helpful solutions are available. Tool discovery is facilitated by various online catalogs and registries. The ELIXIR Tools and Data Services Registry, bio.tools, describes bioinformatics software using extensive metadata descriptions, supported by the EDAM ontology. For software deployment, distribution systems are available that let users locally install the tools they need in convenient, portable and reproducible ways. Workbench and workflow systems such as Galaxy, Taverna or Chipster allow the execution and composition of bioinformatics tools in integrated environments which aim at improved usability, interoperability and reproducibility. Finally, the Common Workflow Language (CWL) is a recent project that defines a standardized and portable tool and workflow description format, usable across different platforms.

All of the above systems rely on components that provide the necessary information to describe, install, or run a specific piece of software. Gathering this information and formatting it into tractable tool descriptions is often a complex and time-consuming task for developers. Indeed, it requires a deep knowledge of both the tool itself and the description format. A significant part of the metadata stored in the descriptions is, however, common to registries and workbench environments, and strategies relying on a mapping between these different description formats can help avoid redundancy and mislabeling of tools (Figure 1). The ReGaTE utility illustrates this by using tool descriptions from Galaxy to publish available services on bio.tools. Another application is to facilitate workbench environment integration, by reusing tool descriptions from registries. Here we present "ToolDog" (Tool Description Generator), an application that enables workbench integration for tools registered in the bio.tools registry.

Tool descriptions

Bioinformatics tools are described in various formats and levels of detail, befitting different systems and use-cases. A bio.tools entry provides tool descriptions for tool end-users, primarily for search and discovery purposes. The metadata provides a basic description including the tool type, what task it performs, the main input and output data, who created it, where it is available, and its license. This description, based on the BiotoolsSchema model, can be accessed through the bio.tools API and retrieved in JSON format. Conversely, Galaxy and CWL tool descriptions must support tool discovery, execution, and integration into homogeneous environments. This requires an extensive description of their command-line syntax (or other type of API). Galaxy tool descriptions are written in XML, and the corresponding XSD schema is available. CWL tool descriptions are written using the YAML-based SALAD format.
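Retrieving such a JSON description is straightforward; the sketch below assumes the public bio.tools API endpoint layout and uses an illustrative tool identifier:

```
# Minimal sketch of fetching a bio.tools entry as JSON (the endpoint
# layout and the tool identifier are assumptions based on the public
# bio.tools API, not taken from the article).
import json
import urllib.request

def fetch_biotools_entry(tool_id: str) -> dict:
    url = f"https://bio.tools/api/tool/{tool_id}?format=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

entry = fetch_biotools_entry("integron_finder")
print(entry.get("name"), "-", entry.get("description", "")[:80])
```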
All three of these tool description formats provide the possibility of specifying EDAM terms. In bio.tools this can be done directly. CWL supports these annotations through the addition of bioschemas mark-up, and Galaxy supports EDAM through specific tags mapping to its internal typing system. The EDAM ontology helps with the description of the tools by providing a common vocabulary that includes terms to describe topics, which specify the particular domains of bioinformatics the tool serves; operations, which describe what the tool does; and data and formats, which specify the type and format of the inputs and outputs.

Completeness of workbench tool descriptions
Tool descriptions for workbench systems are expensive to create and maintain, because they require exhaustive knowledge of both the described tool and the syntax used for the description. Consequently, tool descriptions are sometimes incomplete or out of date. For instance, in the case of Galaxy, the analysis of the main server and the server of the Institut Pasteur shows that some tools are not adequately described (see Figure 2). Specifically, although most of the tools have a help section and a description, important elements such as citation information are often missing. The evolution of the Galaxy framework itself also generates a need for maintenance, through changes in the tool description format. With the recent addition of EDAM annotation tags to the format, tools had to be updated to support this new feature.

Figure 2. Metadata coverage for Galaxy tool descriptions from (A) the main Galaxy instance (https://usegalaxy.org) and (B) the Institut Pasteur Galaxy instance (https://galaxy.pasteur.fr). The graphs show the percentage of tools possessing various metadata types: Help: usage instructions; Description: description of the tool to be displayed in the tool menu; Citations: tool citation information using either a DOI or a BibTeX entry; H+D+C: contains a help, description and citations section; Operations: description of the EDAM operation(s) performed; Topics: description of the EDAM topics covered. The total number of tools includes those which were successfully retrieved and analyzed (672 out of 1209 on Galaxy main, 351 out of 526 on Pasteur); not all available tools were retrieved - some because they are not available in a ToolShed, and some because we chose to retrieve only the latest version of each tool and discarded the earlier ones.

The users of such graphical workbench platforms do not typically handle tool discovery and deployment tasks. Thus, detailed tool descriptions are fundamental, because they are the main source of information for the scientists who use them. Different approaches exist to help improve the quality of the corpus of tool descriptions. (1) Tooling facilitates the creation and validation of the tool descriptions, using Planemo in the case of Galaxy. (2) Community approaches such as the Intergalactic Utilities Commission design and promote best practices for the development of Galaxy tools. (3) Standardization efforts like CWL also reduce the maintenance work for tool descriptions by making them portable between different platforms. ToolDog complements all of these approaches. It leverages the information available in bio.tools to simplify the integration of bioinformatics software into workbench environments.
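To make the EDAM annotation tags just mentioned concrete, the sketch below emits a minimal Galaxy tool XML skeleton carrying EDAM terms, using only Python's standard library. The `edam_topics`/`edam_operations` element names follow recent Galaxy tool schemas and, like the example EDAM identifiers, should be checked against the current Galaxy XSD rather than taken as authoritative.

```python
# Sketch: emit a minimal Galaxy tool XML skeleton with EDAM annotations.
# The edam_topics/edam_operations element names follow recent Galaxy tool
# schemas; verify them against the current Galaxy XSD before relying on this.
import xml.etree.ElementTree as ET

def galaxy_skeleton(tool_id, name, topics, operations):
    tool = ET.Element("tool", id=tool_id, name=name, version="0.1.0")
    ET.SubElement(tool, "description").text = f"{name} (auto-generated skeleton)"
    topics_el = ET.SubElement(tool, "edam_topics")
    for t in topics:  # EDAM topic ids, e.g. "topic_0080" (Sequence analysis)
        ET.SubElement(topics_el, "edam_topic").text = t
    ops_el = ET.SubElement(tool, "edam_operations")
    for o in operations:  # EDAM operation ids, e.g. "operation_0292"
        ET.SubElement(ops_el, "edam_operation").text = o
    ET.SubElement(tool, "command").text = "TODO: command template"
    return ET.tostring(tool, encoding="unicode")

print(galaxy_skeleton("my_tool", "MyTool", ["topic_0080"], ["operation_0292"]))
```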
Methods
ToolDog is a command-line utility written in Python. It consists of two modules, which handle (1) the generation of a skeleton for the tool description, based on the analysis of the source code of the tool, and (2) the enrichment of the tool description, using the bio.tools metadata. The tool description generation pipeline (Figure 3) leverages bio.tools and includes both a module to generate a tool description using only the registry, and a module to enrich an existing tool description with information from the registry.

Source code analysis
For a number of bioinformatics tools, a significant part of their description can be extracted from an analysis of the source code. The source code analysis module of ToolDog does this, currently only for Python-based tools that use the argparse library for parsing command-line arguments. This module uses the argparse2tool package to retrieve the list of parameters and generate Galaxy or CWL tool description skeletons. To generate such skeletons, ToolDog runs a Docker software container that downloads and installs the tool, analyzes its source code, generates the tool description and then retrieves it. This strategy avoids polluting the local user's environment and provides a completely pre-configured, ready-to-use installation of ToolDog.

Tool description enrichment
Galaxy and CWL tool descriptions, whether manually authored or automatically generated by source code analysis, can be improved by the description enrichment module. This retrieves additional metadata from the corresponding bio.tools entry, and fills in the missing information in the workbench tool description when available. Internally, the input tool description is parsed into an object model of the tool. The metadata from bio.tools are then mapped onto this object model, which is later exported to Galaxy or CWL formats. The parsing and export capabilities of ToolDog leverage the galaxyxml and cwlgen libraries to import and export the updated descriptions.
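The enrichment step can be pictured with a short sketch: parse the existing description, then fill gaps from the registry metadata. This is an illustration of the idea only, not ToolDog's actual implementation (which relies on galaxyxml and cwlgen); the bio.tools field names used (`description`, `publication`, `doi`) follow biotoolsSchema but should be treated as assumptions.

```python
# Sketch of the enrichment step: fill missing metadata in a Galaxy tool XML
# using a bio.tools entry (a dict such as the one returned by the API sketch
# above). Illustrative only -- ToolDog itself uses galaxyxml/cwlgen.
import xml.etree.ElementTree as ET

def enrich_galaxy_description(xml_text: str, biotools_entry: dict) -> str:
    tool = ET.fromstring(xml_text)

    # Fill in a missing <description> from the registry metadata.
    if tool.find("description") is None and biotools_entry.get("description"):
        desc = ET.SubElement(tool, "description")
        desc.text = biotools_entry["description"]

    # Add <citations> from the DOIs listed in the bio.tools entry
    # ("publication"/"doi" field names follow biotoolsSchema -- an assumption).
    if tool.find("citations") is None:
        dois = [p["doi"] for p in biotools_entry.get("publication", []) if p.get("doi")]
        if dois:
            citations = ET.SubElement(tool, "citations")
            for doi in dois:
                cite = ET.SubElement(citations, "citation", type="doi")
                cite.text = doi
    return ET.tostring(tool, encoding="unicode")
```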
Results
Generation of a tool description from a bio.tools entry
Here we illustrate the generation of a tool description with the example of IntegronFinder, an analysis tool dedicated to the identification of integrons in bacterial genomes. Launching ToolDog in “generation mode” on the IntegronFinder entry in the bio.tools registry allows the generation of a significant portion of the tool description (Figure 4), either in CWL or Galaxy format. Some manual modifications (corrections and additions) are still necessary to complete the tool description and make it functional. For instance, software requirements, which specify what software needs to be installed for the tool to run correctly, cannot be automatically generated, because this information is currently not available in bio.tools. Additionally, the mappings between inputs and the generated command line, and between outputs and the file names they refer to, are not present.

Figure 4. Output of the run of ToolDog using the bio.tools entry of IntegronFinder to generate the corresponding CWL and Galaxy tool descriptions.

Enrichment of an existing collection of tool descriptions
In addition to novel tool description generation, ToolDog can also perform the automated enrichment of existing tool descriptions with bio.tools metadata. To test this approach, we ran ToolDog on the tool descriptions available on the Galaxy main instance that lack EDAM annotations. All of the Galaxy descriptions from the main instance were retrieved and mapped to bio.tools entries using the citation identifiers (DOI). The goal was to add EDAM terms describing the topic of application and the operation(s) performed by the tools. To avoid linking unrelated entries, we took a conservative approach: by default, two entries were mapped only when they referred to the same publication, and to that publication only. The results (Figure 5) show that whenever this linking can be reliably done, the enrichment can easily be performed, with a total of 217 Galaxy tool descriptions enriched out of the 224 initially mapped to bio.tools. A detailed description of this analysis, including the original and annotated tool descriptions, is available at https://github.com/khillion/galaxyxml-analysis/annotate_usegalaxy.

Figure 5. Automated mapping and enrichment of tool descriptions. Out of 665 retrieved tool descriptions, 399 have a DOI and 224 of these descriptions could be mapped to a bio.tools entry; 217 tool descriptions were successfully annotated using ToolDog (Citations: presence of tool citation information; DOI: tool citation information described using a DOI; Corresponding bio.tools: tool descriptions with a corresponding bio.tools entry retrieved using the DOI; Annotated tools: tool descriptions successfully annotated with ToolDog).

Discussion
The ToolDog utility allows a developer to generate new tool descriptions for tools which are compatible with the code analysis module, and to reuse the metadata provided by bio.tools to enrich existing tool descriptions. There are some limitations to this approach:
1. The “plugin” libraries used for code analysis are specific to the programming languages, libraries or frameworks used to build the command-line interface. To date, they cover only a small subset of these.
2. The generation of the tool descriptions through code analysis must assume certain coding practices, such as the use of specific functions to define input or output parameters, which are not uniformly adopted.
3. Some of the input/output operations performed by some programs are much more difficult to detect through code analysis because they are typically not handled by command-line parsing frameworks, such as web service and database queries and submissions, or in-place file modifications.

The automated enrichment of existing tool descriptions provides a convenient way to improve them, especially if they lack most of the metadata provided by bio.tools. Performing this enrichment efficiently en masse, however, would require the wide adoption of an identification system for bioinformatics software. Such a mechanism would make it possible to avoid the complex and sometimes ambiguous mapping procedure based on publication identifiers that we used when testing the approach on the Galaxy tools. A recent update to bio.tools has added stable and unique tool identifiers, based on registered tool names, yielding persistent references to tools, for example https://bio.tools/signal. Future work will make use of these identifiers to improve the generation of tool descriptions. For instance, linking the bioconda and biocontainers repositories to bio.tools will enable ToolDog to generate software requirements compatible with workbench platforms [26].
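The conservative mapping rule described above can be stated in a few lines. The sketch below is a restatement of the idea, not the script used for the published analysis; each description is reduced to its set of publication DOIs, and the DOIs shown are placeholders.

```python
# Sketch of the conservative mapping rule: link a Galaxy tool description to
# a bio.tools entry only when each refers to exactly one publication DOI and
# the two DOIs are identical. Placeholder DOIs; not the published script.
def conservative_map(galaxy_dois: set, biotools_dois: set) -> bool:
    return len(galaxy_dois) == 1 and galaxy_dois == biotools_dois

# Unambiguous one-to-one match -> mapped.
assert conservative_map({"10.1000/demo.1"}, {"10.1000/demo.1"})
# Ambiguous case (one side cites two papers) -> not mapped.
assert not conservative_map({"10.1000/demo.1", "10.1000/demo.2"}, {"10.1000/demo.1"})
```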
Conclusions
During the last years, the integration of various tools has been eased by the use of workbench systems such as Galaxy, and frameworks using the Common Workflow Language. Still, it remains time-consuming and not straightforward to adapt resources to such environments. ToolDog lays the foundation for future work that will provide a Workbench Integration Enabler for the bio.tools registry as an online service. Furthermore, integration with Planemo, the main utility used to develop Galaxy and CWL tools, will be further developed in order to make the simple, bio.tools-based metadata enrichment of ToolDog available to the widest possible audience.

Data availability
The scripts and results of the analysis performed to motivate and test our approach are available at: https://github.com/khillion/galaxyxml-analysis, and are archived at the time of publication at: https://doi.org/10.5281/zenodo.1038005.

Software availability
The ToolDog software is available at: https://pypi.python.org/pypi/tooldog
The source code is available at: https://github.com/bio-tools/tooldog
Archived source code as at the time of publication: https://doi.org/10.5281/zenodo.1037902
Software license: MIT License.

Competing interests
No competing interests were disclosed.

Grant information
ELIXIR-EXCELERATE is funded by the European Commission within the Research Infrastructures Programme of Horizon 2020 [676559].

Acknowledgments
Jon Ison acknowledges the support of the Danish ELIXIR Node. Kenzo-Hugo Hillion and Hervé Ménager wish to thank Fabien Mareuil, Olivia Doppelt-Azeroual, and Bertrand Néron from the Institut Pasteur, as well as Daniel Blankenberg (Cleveland Clinic) and John Chilton (Galaxy Project) for their technical advice during the development. Anton Khodak wishes to thank his Google Summer of Code mentor Roman Valls Guimera (University of Melbourne), who promoted the idea of argparse2tool and supervised his internship.

27. Hillion KH, Kuzmin I, et al.: bio-tools/ToolDog: v0.3.4 for F1000 submission (Version v0.3.4). Zenodo. 2017. Data Source

Open Peer Review
Current Referee Status: ✔ ✔ ✔ ✔

Brian O'Connor
Computational Genomics Platform, UCSC Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA

"Using bio.tools to generate and annotate workbench tool descriptions" is an article that describes a tool descriptor program known as ToolDog. It was designed to generate Galaxy XML or CWL from particular bioinformatics tool source code as well as metadata annotations on bio.tools. The idea is great, since the issue is a real one in the community. Namely, there are a lot of tools out there, but typically they lack descriptors in Galaxy or CWL format, and this makes them harder to use in "workbench" and workflow systems. Creating a tool that tool authors can use to help create descriptors is awesome. Source is available in GitHub and the tool can be installed via pip.

Feedback/Questions
1. Can the authors rename the article? I think it should include ToolDog in the article title.
2. What are the plans for other languages (if any)? Do the authors see ToolDog as something that others will extend for, say, WDL generation?
3. I think it would be interesting to hear more about future plans. Specifically, how will the authors expand this to a Workbench Integration Enabler? Do they see this as being an automated process? How will they leverage the work of bioconda and biocontainers (they did mention this briefly) and will the goal be to generate CWL/GalaxyXML for everything in bio.tools + bioconda/biocontainers?
4. Alternatively, if the goal is not to automatically export CWL/GalaxyXML for everything in bio.tools, is it, instead, to provide a tool for tool authors to use when building their tool to jumpstart their descriptor creation?
Some clarification on the intended audience would, I think, be helpful.
5. The authors described generating CWL/Galaxy XML for IntegronFinder. Did they try other tools and, if so, how successful was that? What about generation in bulk?
6. Can they comment on what a tool author should do with the generated CWL or Galaxy XML? They mention in the results that some work is required to make the tool run correctly. Is the tool author then expected to check the CWL/Galaxy XML into their source repo and maintain it? What is the recommendation here?

Is the rationale for developing the new software tool clearly explained? Yes
Is the description of the software tool technically sound? Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes

Competing Interests: No competing interests were disclosed.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee Report 10 January 2018
doi:10.5256/f1000research.14069.r28567

Manuel Corpas
Cambridge Precision Medicine, Cambridge, UK

The article 'Using bio.tools to generate and annotate workbench tool descriptions' describes the software tool ToolDog. ToolDog improves the interoperability of bio.tools-deposited entries within workbenches by converting their descriptions into formats that are compatible with workflow standards. ToolDog is a convenient addition to the existing capabilities for the integration of bio.tools entries with workbench environments.

I found Figure 2 particularly interesting: it describes the metadata coverage of tool descriptions from two of the main Galaxy servers. Do you have the raw data with which this figure was created? It would be good to have it openly shared. Figure 2 illustrates the problem of the significant lack of completeness in crucial metadata descriptions of Galaxy tools.

My main recommendation for this article would be to provide a step-by-step guide on how to run ToolDog using a self-contained example. I feel unable to test the tool because I do not know how to download the metadata from a bio.tools entry, and I would need to set up my Python environment, download the code and make it work. Although this article is geared toward a programmer audience, it would be hard to test/reproduce for someone who is not a seasoned Python programmer. I would thus recommend a beginner's guide for those of us who are not so technical. Other than that, I am glad to see all the source code adequately deposited both in GitHub and Zenodo, with a snapshot image archived for this publication. The MIT license is also commendable, as it allows free reuse and modification.
Finally, some minor corrections:
- The link in the first paragraph of the results section, 'of a significant portion of the tool description', is broken
- The link in the second paragraph of the results section, 'https://github.com/khillion/galaxyxml-analysis/annotate_usegalaxy', is broken
- Discussion section bullet point #3: 'such web service' ==> such as web services

**Is the rationale for developing the new software tool clearly explained?** Yes
**Is the description of the software tool technically sound?** Yes
**Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others?** Yes
**Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?** Yes
**Are the conclusions about the tool and its performance adequately supported by the findings presented in the article?** Yes

**Competing Interests:** No competing interests were disclosed.

**Referee Expertise:** Computational Genomics

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee Report 18 December 2017
doi:10.5256/f1000research.14069.r28565

Christopher J. Fields
High-Performance Biological Computing Group, Roy J. Carver Biotechnology Centre, University of Illinois at Urbana–Champaign, Urbana, IL, USA

The paper presents a very nice overview of how ToolDog is used to (1) generate new tool descriptors for Galaxy and CWL from code analysis, and (2) improve documentation for current tools from the bio.tools registry. This provides a valuable service to the bioinformatics community, in particular by helping ensure that tooling information is consistently described but also updatable. In my opinion this should be accepted, with some minor suggested revisions.

Speaking of 'suggestions':
1. The current title 'Using bio.tools to generate and annotate workbench tool descriptions' suggests the paper will talk more generally about bio.tools, whereas the text focuses primarily on the specific component ToolDog. The title should be modified to reflect this.
2. The graphs in Fig. 2 would be more effective if they were displayed in an integrated manner (single bar chart?), so that the improvements that ToolDog makes are more easily compared to one another.
3. The points in the discussion about the challenges in autogenerating tool documentation (language, coding practices, etc.) are spot-on. However, not much is said about whether or how ToolDog might address some of these challenges, though there are suggestions on how to more readily map existing tool descriptions to add to or update. Maybe this could be elaborated on, even if only to indicate that the problems may not be easily overcome?
4. I'm wondering whether the information in Fig. 5 might be better displayed (or augmented) as a before/after comparison to more readily demonstrate how ToolDog could automatically improve tool descriptions. Another option is whether this information could be somehow connected to the data in Fig. 2 to show how ToolDog improves the overall documentation.
**Is the rationale for developing the new software tool clearly explained?** Yes
**Is the description of the software tool technically sound?** Yes
**Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others?** Yes
**Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?** Yes
**Are the conclusions about the tool and its performance adequately supported by the findings presented in the article?** Partly

*Competing Interests:* No competing interests were disclosed.

*Referee Expertise:* Computational biology

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

From the point of view of a software tool author, it is not a simple task to provide high-quality metadata and software tool descriptions. Any tooling that supports DRY (don't repeat yourself) in this regard is most welcome. The authors describe a path from the bio.tools bioinformatics software registry, which uses a rich metadata schema for syntax, the EDAM ontology for semantics, and strongly written guidelines to ensure high-quality entries, to tool descriptions for the Galaxy workbench and in Common Workflow Language (CWL) for use in various workflow execution environments. Much of the metadata in the tool descriptions is generated by the ToolDog utility from an entry in bio.tools, ensuring a proper mapping between metadata concepts. This would be a great help when bootstrapping Galaxy and CWL support for a new software tool. The authors also describe and implement a use case for enriching existing tool descriptions. I am curious whether there are practical benefits to enriching tool descriptions with EDAM ontology terms, in addition to the improved quality of metadata that comes from using well-defined terms from a controlled vocabulary. The source code is available at the GitHub link provided and is licensed under the MIT License, as stated in the paper. I appreciate that the scripts and results of the analysis are archived as well.

Is the rationale for developing the new software tool clearly explained? Yes
Is the description of the software tool technically sound? Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes

Competing Interests: No competing interests were disclosed.

Referee Expertise: Bioinformatics, big data genomics, immunogenomics

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
{"Source-Url": "http://orbit.dtu.dk/ws/files/146551720/7badfa4e_5cfe_436c_9f0b_99a70bc52910_12974_Herve_Menager.pdf", "len_cl100k_base": 6138, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 37221, "total-output-tokens": 8829, "length": "2e12", "weborganizer": {"__label__adult": 0.0003173351287841797, "__label__art_design": 0.00035309791564941406, "__label__crime_law": 0.0003740787506103515, "__label__education_jobs": 0.001918792724609375, "__label__entertainment": 0.00011491775512695312, "__label__fashion_beauty": 0.0002168416976928711, "__label__finance_business": 0.00048732757568359375, "__label__food_dining": 0.0003814697265625, "__label__games": 0.0006480216979980469, "__label__hardware": 0.0013256072998046875, "__label__health": 0.0013303756713867188, "__label__history": 0.000301361083984375, "__label__home_hobbies": 0.0001908540725708008, "__label__industrial": 0.0005230903625488281, "__label__literature": 0.0002911090850830078, "__label__politics": 0.0002582073211669922, "__label__religion": 0.00048279762268066406, "__label__science_tech": 0.1502685546875, "__label__social_life": 0.0001982450485229492, "__label__software": 0.06390380859375, "__label__software_dev": 0.77490234375, "__label__sports_fitness": 0.0003981590270996094, "__label__transportation": 0.0003941059112548828, "__label__travel": 0.0002206563949584961}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36298, 0.03182]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36298, 0.28044]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36298, 0.83387]], "google_gemma-3-12b-it_contains_pii": [[0, 640, false], [640, 2712, null], [2712, 4155, null], [4155, 9333, null], [9333, 13606, null], [13606, 16695, null], [16695, 16842, null], [16842, 19315, null], [19315, 24993, null], [24993, 27280, null], [27280, 29521, null], [29521, 31496, null], [31496, 33774, null], [33774, 35651, null], [35651, 36298, null]], "google_gemma-3-12b-it_is_public_document": [[0, 640, true], [640, 2712, null], [2712, 4155, null], [4155, 9333, null], [9333, 13606, null], [13606, 16695, null], [16695, 16842, null], [16842, 19315, null], [19315, 24993, null], [24993, 27280, null], [27280, 29521, null], [29521, 31496, null], [31496, 33774, null], [33774, 35651, null], [35651, 36298, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36298, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36298, null]], "pdf_page_numbers": [[0, 640, 1], [640, 2712, 2], [2712, 4155, 3], [4155, 9333, 4], [9333, 13606, 5], [13606, 16695, 6], [16695, 16842, 7], [16842, 19315, 8], [19315, 24993, 9], [24993, 27280, 10], [27280, 
29521, 11], [29521, 31496, 12], [31496, 33774, 13], [33774, 35651, 14], [35651, 36298, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36298, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
fbb9403c7176223187742618fffce3fbb196d389
[REMOVED]
{"Source-Url": "https://kar.kent.ac.uk/43205/1/meta4es_final_camera.pdf", "len_cl100k_base": 6160, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25422, "total-output-tokens": 7187, "length": "2e12", "weborganizer": {"__label__adult": 0.0003886222839355469, "__label__art_design": 0.0006542205810546875, "__label__crime_law": 0.001987457275390625, "__label__education_jobs": 0.005321502685546875, "__label__entertainment": 0.00014138221740722656, "__label__fashion_beauty": 0.0002970695495605469, "__label__finance_business": 0.0018644332885742188, "__label__food_dining": 0.0003592967987060547, "__label__games": 0.0006303787231445312, "__label__hardware": 0.0011043548583984375, "__label__health": 0.0010194778442382812, "__label__history": 0.0005364418029785156, "__label__home_hobbies": 0.00017774105072021484, "__label__industrial": 0.0007271766662597656, "__label__literature": 0.0007643699645996094, "__label__politics": 0.0008697509765625, "__label__religion": 0.0005259513854980469, "__label__science_tech": 0.2880859375, "__label__social_life": 0.00039768218994140625, "__label__software": 0.1224365234375, "__label__software_dev": 0.57080078125, "__label__sports_fitness": 0.0002384185791015625, "__label__transportation": 0.0005869865417480469, "__label__travel": 0.00028514862060546875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30769, 0.0149]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30769, 0.35189]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30769, 0.91943]], "google_gemma-3-12b-it_contains_pii": [[0, 1238, false], [1238, 3755, null], [3755, 6927, null], [6927, 8975, null], [8975, 12470, null], [12470, 14168, null], [14168, 17243, null], [17243, 18471, null], [18471, 21694, null], [21694, 24917, null], [24917, 27975, null], [27975, 30769, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1238, true], [1238, 3755, null], [3755, 6927, null], [6927, 8975, null], [8975, 12470, null], [12470, 14168, null], [14168, 17243, null], [17243, 18471, null], [18471, 21694, null], [21694, 24917, null], [24917, 27975, null], [27975, 30769, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30769, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30769, null]], "pdf_page_numbers": [[0, 1238, 1], [1238, 3755, 2], [3755, 6927, 3], [6927, 8975, 4], [8975, 12470, 5], [12470, 14168, 6], [14168, 17243, 7], [17243, 18471, 8], [18471, 21694, 9], [21694, 24917, 10], [24917, 27975, 11], [27975, 30769, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30769, 0.15179]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
8c32c8d8ba38fbaac7db95a69621912453d104f1
A Rational Development Process

Philippe Kruchten
Vancouver, BC
pbk@rational.com

1. Introduction
This paper gives a high-level description of the philosophy and structure of the Rational Software Development Process. It is:
• iterative and incremental
• object-oriented
• managed and controlled
It is generic enough to be tailorable to a wide variety of software products and projects, both in size and application domain.

2. The Overall Software Life-Cycle

2.1 Two Perspectives
The Rational process may be approached from two distinct but integrated perspectives:
• A management perspective, dealing with the financial, strategic, commercial, and human aspects.
• A technical perspective, dealing with quality, engineering and design method aspects.

2.2 Cycles and Phases
As seen from a management perspective, i.e., the business and economics point of view, the software life-cycle is organized along four main phases, which are indicators of the progress of the project:
• **Inception**—The good idea: specifying the end-product vision and its business case, defining the scope of the project.\(^1\)
• **Elaboration**—Planning the necessary activities and required resources; specifying the features and designing the architecture.\(^2\)
• **Construction**—Building the product, and evolving the vision, the architecture and the plans until the product—the completed vision—is ready for transfer to its user community.
• **Transition**—Transitioning the product to its user community, which includes manufacturing, delivering, training, supporting, and maintaining the product until the users are satisfied.
Going through the four phases is called a development *cycle*, and it produces a software *generation*. Unless the life of the product stops, an existing product will evolve into its next generation by repeating the same sequence of inception, elaboration, construction and transition phases, though with a different emphasis on the various phases. We call this period *evolution*. As the product eventually goes through several cycles, new generations are produced. For example, evolution cycles may be triggered by user-suggested enhancements, changes in the users' context, changes in the underlying technology, reaction to the competition, etc. In practice, cycles may slightly overlap: the inception and elaboration phases may start during the trailing part of the transition phase of the previous cycle.

2.3 Iterations
From a technical perspective the software development is seen as a succession of *iterations*, through which the software under development evolves incrementally. Each iteration is concluded by the *release* of an executable product, which may be a subset of the complete vision, but useful from some engineering or user perspective. Each release is accompanied by supporting artifacts: release description, user's documentation, plans, etc. An iteration consists of the activities of planning, analysis, design, implementation and testing, in various proportions depending on where the iteration is located in the development cycle. The management perspective and the technical perspective are reconciled; in particular, the ends of the phases are synchronized with the ends of iterations.

\(^1\) The American Heritage Dictionary defines *inception* as "the beginning of something, such as an undertaking, a commencement."
\(^2\) The American Heritage Dictionary defines *elaboration* as the process "to develop thoroughly, to express at greater length or greater detail."
In other words, each phase is broken down into one or more iterations. (Note: the number of iterations per phase shown in the original diagram, omitted here, is merely for illustration purposes.) However, the two perspectives—management and technical—do more than just synchronize on a few well-identified milestones; they both contribute to a common set of products and artifacts that evolve over time. Some artifacts are more under the control of the technical side, some more under the control of the management side (cf. section 5). The availability of these artifacts, and the satisfaction of the established evaluation criteria for the product and the artifacts, are the tangible elements that constitute the milestones, much more than mere dates on a calendar. Like cycles, iterations may slightly overlap, e.g., the planning or architecture activities of iteration N may be started towards the end of iteration N–1. In some cases, some iterations may proceed in parallel: a team, working on one part of the system, may have no deliverable for a given iteration.

2.4 Discriminants
The emphasis and importance of the various phases, the entry and exit criteria, the artifacts involved along a development cycle, and the number and length of the iterations may all vary depending on four major project characteristics, which are process discriminants. In order of decreasing impact, the most important process discriminants are:
- The business context:
  - contract work, where the developer produces the software to a given customer's specification and for this customer only,
  - speculative or commercial development, where the developer produces software to be put on the market,
  - or internal project, where customer and developer are in the same organization.
- The size of the software development effort: as measured by some metric, such as Delivered Source Instructions or function points, or in person-months, or merely in cost.
- The degree of novelty: how "precedented" this software effort is, relative to the development organization, and in particular whether the development is in a second or subsequent cycle. This discriminant includes the maturity of the organization and the process, its assets, its current skill set, and issues such as assembling and training a team, and acquiring tools and other resources.
- The type of application, the target domain: MIS, command and control, embedded real-time, software development environment tools, etc., especially with respect to the specific constraints the domain may impose on the development: safety, performance, internationalization, memory constraints, etc.

This paper first describes the generic process, i.e., the part of the process that applies to all kinds of software development, across a broad and general range of these discriminants. It then describes some specific instances of the process for some values of the discriminants, as examples.

2.5 Effort and Schedule
Not all phases are identical in terms of schedule and effort.
Although this will vary considerably depending on the project discriminants, a typical initial development cycle for a medium-size project should anticipate the following ratios:

<table>
<thead>
<tr>
<th></th>
<th>Inception</th>
<th>Elaboration</th>
<th>Construction</th>
<th>Transition</th>
</tr>
</thead>
<tbody>
<tr>
<td>Effort</td>
<td>5 %</td>
<td>20 %</td>
<td>65 %</td>
<td>10 %</td>
</tr>
<tr>
<td>Schedule</td>
<td>10 %</td>
<td>30 %</td>
<td>50 %</td>
<td>10 %</td>
</tr>
</tbody>
</table>

(Figure: graphical representation of the effort and schedule ratios, omitted.)

But for an evolution cycle, the inception and elaboration phases can be considerably reduced. Also, using certain tools and techniques, such as application builders, the construction phase can be much smaller than the inception and elaboration phases together.

3. The Phases of the Rational Process

3.1 Inception Phase
This phase brings to light an original vision of a potential product, and transforms it into an actual project. Its purpose is to establish the business case for a new product or a major update, and to specify the project scope.

For the development of a new product, the main outcome of this phase is a "go/no-go" decision to move into the next phase and to invest time and money to analyze in detail what is to be built, whether it can be built, and how to build it. For the evolution of an existing product, this may be a simple and short phase, based on users' or customers' requests, on problem reports, or on new technological advances. For a contractual development, the decision to proceed is based on experience of the specific domain and on the competitiveness of the development organization in this domain or market. In this case the inception phase may be concluded by a decision to bid, or by the bid itself. The idea may be based on an existing research prototype, whose architecture may or may not be suitable for the final software.

Entry criteria: the expression of a need, which can take any of the following forms:
- an original vision
- a legacy system
- an RFP (request for proposal)
- the previous generation and a list of enhancements
- some assets (software, know-how, financial assets)
- a conceptual prototype, or a mock-up

Exit criteria:
- an initial business case containing at least:
  - a clear formulation of the product vision—the core requirements—in terms of functionality, scope, performance, capacity, technology base
  - success criteria (for instance revenue projection)
  - an initial risk assessment
- an estimate of the resources required to complete the elaboration phase.

Optionally, at the end of the inception phase, we may have:
- an initial domain analysis model (~10%-20% complete), identifying the top key use cases, and sufficient to drive the architecture effort.
- an initial architectural prototype, which at this stage may be a throw-away prototype.

3.2 Elaboration Phase
The purpose of this phase is to more thoroughly analyze the problem domain, to define and stabilize the architecture, and to address the highest-risk elements of the project.
The goal is that, at the end of the phase, we can produce a comprehensive plan showing how the next two phases will be done, including:
- A baseline product vision (i.e., an initial set of requirements) based on an analysis model
- Evaluation criteria for at least the first construction iteration
- A baseline software architecture
- The resources necessary to develop and deploy the product, especially in terms of people and tools
- A schedule
- A resolution of the risks sufficient to make a "high-fidelity" cost, schedule and quality estimate of the construction phase.

In this phase an executable architectural prototype is built, in one or several iterations depending on the scope, size, risk and novelty of the project; it addresses at least the top key use cases identified in the inception phase, and the top technical risks of the project. This is an evolutionary prototype of production-quality code, which becomes the architectural baseline; it does not exclude the development of one or more exploratory, throw-away prototypes to mitigate specific risks: refinements of the requirements, feasibility or human-interface studies, demonstrations to investors, etc.

At the end of this phase, there is again a "go/no-go" decision point to actually invest and build the product (or bid for the complete development of the contract). The plans produced must be detailed enough, and the risks sufficiently mitigated, to be able to determine with accuracy the cost and schedule for the completion of the development.

Entry criteria:
• The products and artifacts described in the exit criteria of the previous phase.
• The plan has been approved by project management and the funding authority, and the resources required for the elaboration phase have been allocated.

Exit criteria:
• a detailed software development plan, containing:
  • an updated risk assessment
  • a management plan
  • a staffing plan
  • a phase plan showing the number and contents of the iterations
  • an iteration plan, detailing the next iteration
  • the development environment and other tools required
  • a test plan
• a baseline vision, in the form of a set of evaluation criteria for the final product
• objective, measurable evaluation criteria for assessing the results of the initial iteration(s) of the construction phase
• a domain analysis model (80% complete), sufficient to be able to call the corresponding architecture 'complete'
• a software architecture description (stating constraints and limitations)
• an executable architecture baseline.

3.3 Construction Phase
This phase is broken down into several iterations, fleshing out the architecture baseline and evolving it in steps or increments towards the final product. At each iteration, the various artifacts prepared during the elaboration phase (see above) are expanded and revised, but they ultimately stabilize as the system evolves in correctness and completeness. New artifacts are produced during this phase besides the software itself: documentation, both internal and for the end users; test beds and test suites; and deployment collateral to support the next phase, marketing collateral for example.

For each iteration we have:

Entry criteria:
• The product and artifacts of the previous iteration. The iteration plan must state the iteration-specific goals:
  • additional capabilities being developed: which use cases or scenarios will be covered
  • risks being mitigated during this iteration
  • defects being fixed during the iteration.
Exit criteria: the same products and artifacts, updated, plus:
• A release description document, which captures the results of the iteration
• Test cases and results of the tests conducted on the products
• An iteration plan, detailing the next iteration
• Objective, measurable evaluation criteria for assessing the results of the next iteration(s).

Towards the end of the construction phase, the following artifacts must be produced; they are additional exit criteria for the last iteration of the phase:
• a deployment plan, specifying as necessary:
  • packaging
  • pricing
  • roll-out
  • support
  • training
  • transition strategy (e.g., an upgrade plan from an existing system)
  • production (e.g., making floppies and manuals)
• user documentation

3.4 Transition Phase
The transition phase is the phase where the product is put into the hands of its end users. It involves issues of marketing, packaging, installing, configuring, supporting the user community, making corrections, etc. From a technical perspective the iterations continue with one or more releases (or deliveries): 'beta' releases, general availability releases, bug-fix or enhancement releases. The phase is completed when the user community is satisfied with the product: formal acceptance, for example, in a contractual setting, or when all activities on this product are terminated. It is the point where some of the accumulated assets can be made reusable by the next cycle or by some other projects.

Entry criteria:
• The product and artifacts of the previous iteration, and in particular a software product sufficiently mature to be put into the hands of its users.

Exit criteria:
- An update of some of the previous documents, as necessary, the plan being replaced by a "post-mortem" analysis of the performance of the project relative to its original and revised success criteria;
- a brief inventory of the organization's new assets as a result of this cycle.

3.5 Evolution Cycles
For substantial evolutions, we apply the whole process recursively, starting again at the inception phase, for a new cycle. Since we already have a product, this inception phase may be considerably reduced, compared to an initial development cycle. The elaboration phase also may be limited, and focused more on the planning aspects than on evolving the analysis or the architecture. In other words, cycles can slightly overlap. Minor evolutions are handled by extending the transition phase, adding one or more iterations. Alternatively, the transition phase may be concluded by an end-of-life process, i.e., the product does not evolve any more, but some specific actions must be taken in order to terminate or retire it.

4. Activities in the Rational Process
The names of the phases of the Rational process deliberately avoid terms describing an intellectual activity (analysis, design, test, etc.), so that it is understood that such activities are not confined to a single phase, and also to remain independent of terms employed by other authors, standards, and domain-specific jargon. These activities do take place, but in varying degrees in each phase and iteration. A figure in the original paper (omitted here) illustrates how the emphasis and effort evolve over time. This change of focus also explains why, although all iterations are structured in the same way, their exact nature and contents evolve over time.
This also shows that the beginning of an activity is not bound to the end of another; e.g., design does not start when analysis completes, but the various artifacts associated with the activities are revised as the problem or the requirements become better understood. Finally, in an iterative process, the activities of planning, test and integration are spread incrementally throughout the cycle, in each iteration, and not lumped massively at the beginning and at the end, respectively. They do not appear as separate steps or phases in the process. Although this will vary considerably depending on the project discriminants, a typical initial development cycle for a medium-size project should anticipate the following ratios for the various activities:

<table>
<thead>
<tr>
<th>Activity</th>
<th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>Planning and management</td>
<td>15%</td>
</tr>
<tr>
<td>Analysis/requirements</td>
<td>10%</td>
</tr>
<tr>
<td>Design/integration</td>
<td>15%</td>
</tr>
<tr>
<td>Implementation/functional tests</td>
<td>30%</td>
</tr>
<tr>
<td>Measurement/assessment/acceptance test</td>
<td>15%</td>
</tr>
<tr>
<td>Tools/environment/change management</td>
<td>10%</td>
</tr>
<tr>
<td>Maintenance (fixes during development)</td>
<td>5%</td>
</tr>
</tbody>
</table>

5. Life-Cycle Artifacts
The process is not document-driven: its main artifact must remain at all times the software product itself. The documentation should remain lean and limited to the few documents that bring real value to the project from a management or technical point of view. Rational suggests the following typical set of documents.

5.1 Management Artifacts
The management artifacts are not the product, but are used to drive or monitor the progress of the project, estimate the risks, adjust the resources, and give visibility to the customer (in a contractual setting) or the investors.
- An Organizational Policy document, which is the codification of the organization's process; it contains an instance of this generic process.
- A Vision document, which describes the system-level requirements, qualities and priorities.
- A Business Case document, describing the financial context, contract, projected return on investment, etc.
- A Development Plan document, which contains in particular the overall iteration plan, and the plan for the current and upcoming iterations.
- An Evaluation Criteria document, containing the requirements, acceptance criteria and other specific technical objectives, which evolves from major milestone to major milestone. It contains the iteration goals and acceptance levels.
- Release Description documents for each release.
- A Deployment document, gathering additional information useful for transition, training, installation, sales, manufacturing, and cut-over.
- Status Assessment documents: periodic snapshots of project status, with metrics of progress, staffing, expenditure, results, critical risks, action items, and post-mortems.

5.2 Technical Artifacts
These artifacts are either the delivered goods (executable software and manuals), or the blueprints that were used to manufacture them: software models, source code, and other engineering information useful to understand and evolve the product.
- A User's Manual, developed early in the life-cycle.
- Software documentation, preferably in the form of self-documenting source code, and models (use cases, class diagrams, process diagrams, etc.) captured and maintained with appropriate CASE tools.
- A Software Architecture document, extracted (abstracted) from the software documentation, describing the overall structure of the software, its decomposition into major elements (class categories, classes, processes, subsystems), the definition of critical interfaces, and the rationale for the key design decisions.

The artifacts enumerated in the entry and exit criteria in §3 can all be mapped onto one of these 11 documents. Depending on the type of project, this typical document set can be extended or contracted, and some documents can be merged. The documents do not have to be *paper* documents—they can be spreadsheets, text files, databases, annotations in source code, hypertext documents, etc.—but the corresponding information source must be clearly identified, easily accessible, and some of its history preserved.

5.3 Requirements
The Rational process is not requirements-driven either. The requirements for the product evolve during a cycle, and take different forms:
- The *business case* gives the main constraints, mostly in terms of the resources that can be expended.
- The *vision* document describes only the key requirements of the system from a user's perspective, and it evolves only slowly during the development cycle.
- The more detailed requirements are elaborated during the elaboration phase, in the form of use cases and scenarios, and are refined incrementally throughout the construction phase, as the product and the users' needs become better understood. These more detailed requirements are in the *evaluation criteria* document; they drive the definition of the contents of the construction and transition iterations, and are referenced in the iteration plan.

6. Examples of Rational Processes
The Rational process takes different aspects depending on the discriminants described in section 2.4. Here are two extreme examples.

6.1 Rational Process for Large Contractual Software Development
Rational proposes to set up the procurement of large software in three stages, associated with three different kinds of contracts:
- An **R&D** stage, comprising the inception and elaboration phases, typically bid in a risk-sharing manner, e.g., as a cost plus award fee (CPAF) contract.
- A **production** stage, comprising the construction and transition phases, typically bid as a firm fixed price (FFP) contract.
- A **maintenance** stage, if any, corresponding to the evolution phase, typically bid as a level of effort (LOE) contract.

Due to the higher level of visibility into the evolution of the project required by the customer, and because of the larger number of people and organizations involved, more formalism is required in the process, and there may be more emphasis on *written* artifacts than would be the case in a small, internal project. All 11 documents listed in §5 are present in some form or name.

6.2 Rational Process for a Small Commercial Software Product
At the other end of the scale of the family of processes, a small commercial development would see a more fluid process, with only a limited amount of formalism at the major milestones and a more limited set of documents:
- **a product vision**
- **a development plan**, showing schedule and resources
- **release description** documents, specifying the goal of an iteration at the beginning of the iteration, and updated to serve as release notes at the end
- **user documentation**, as necessary

Software architecture, software design, and development process and procedures can be documented by the code itself or by the software development environment.

7. Conclusion
The Rational process puts an emphasis on addressing high-risk areas very early, by rapidly developing an initial version of the system that defines its architecture. It does not assume a fixed set of firm requirements at the inception of the project, but allows the requirements to be refined as the project evolves. It expects and accommodates change. Nor does the process put a strong focus on documents or 'ceremonies'; it lends itself to the automation of many of the tedious tasks associated with software development. The main focus remains the software product itself and its quality, as measured by the degree to which it satisfies its end users and meets its return-on-investment objectives. A process derived from the generic process described here would fully conform to the requirements of a standard such as ISO 9000.

Further Readings
G. Booch, *Object Solutions: Managing the Object-Oriented Project*, Addison-Wesley, Redwood City, California, 1996.

Glossary

<table>
<thead>
<tr>
<th>Term</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>Artifact</td>
<td>Any document or software other than the software product itself.</td>
</tr>
<tr>
<td>Baseline</td>
<td>A release that is subject to change management and configuration control.</td>
</tr>
<tr>
<td>Construction</td>
<td>The 3rd phase of the process, where the software is brought from an executable architectural baseline to the point where it is ready to be transitioned to its user community.</td>
</tr>
<tr>
<td>Cycle</td>
<td>One complete pass through the four phases (inception, elaboration, construction, transition); the span of time between the beginning of the inception phase and the end of the transition phase.</td>
</tr>
<tr>
<td>Elaboration</td>
<td>The 2nd phase of the process, where the product vision and its architecture are defined.</td>
</tr>
<tr>
<td>Evolution</td>
<td>The life of the software after its initial development cycle; any subsequent cycle, during which the product evolves.</td>
</tr>
<tr>
<td>Generation</td>
<td>The result of one software development cycle.</td>
</tr>
<tr>
<td>Inception</td>
<td>The first phase of the process, where the seed—idea, RFP, previous generation—is brought up to the point of being (at least internally) funded to enter into the elaboration phase.</td>
</tr>
<tr>
<td>Iteration</td>
<td>A distinct sequence of activities with a baselined plan and evaluation criteria.</td>
</tr>
<tr>
<td>Milestone</td>
<td>An event held to formally initiate and conclude an iteration.</td>
</tr>
<tr>
<td>Phase</td>
<td>The span of time between two major milestones of the process, during which a well-defined set of objectives are met, artifacts are completed, and decisions are made to move or not into the next phase.</td>
</tr>
<tr>
<td>Product</td>
<td>The software that is the result of the development, and some of the associated artifacts (documentation, release medium, training).</td>
</tr>
<tr>
<td>Prototype</td>
<td>A release which is not necessarily subject to change management and configuration control.</td>
</tr>
<tr>
<td>Release</td>
<td>A subset of the end-product which is the object of evaluation at a major milestone (see: prototype, baseline).</td>
</tr>
<tr>
<td>Risk</td>
<td>An ongoing or upcoming concern which has a significant probability of adversely affecting the success of major milestones.</td>
</tr>
<tr>
<td>Transition</td>
<td>The 4th phase of the process, where the software is turned over into the hands of the user community.</td>
</tr>
<tr>
<td>Vision</td>
<td>The user's view of the product to be developed.</td>
</tr>
</tbody>
</table>
{"Source-Url": "http://www.unix.eng.ua.edu/~crutcher/class/cs600/papers/A_Rational_Development_Process.pdf", "len_cl100k_base": 5667, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 29935, "total-output-tokens": 6307, "length": "2e12", "weborganizer": {"__label__adult": 0.00047469139099121094, "__label__art_design": 0.0004503726959228515, "__label__crime_law": 0.00035309791564941406, "__label__education_jobs": 0.0013628005981445312, "__label__entertainment": 4.57763671875e-05, "__label__fashion_beauty": 0.00015497207641601562, "__label__finance_business": 0.0006594657897949219, "__label__food_dining": 0.0003724098205566406, "__label__games": 0.0005235671997070312, "__label__hardware": 0.00048065185546875, "__label__health": 0.00033211708068847656, "__label__history": 0.00024044513702392575, "__label__home_hobbies": 9.071826934814452e-05, "__label__industrial": 0.00029754638671875, "__label__literature": 0.0003371238708496094, "__label__politics": 0.0002765655517578125, "__label__religion": 0.0004019737243652344, "__label__science_tech": 0.0016183853149414062, "__label__social_life": 9.638071060180664e-05, "__label__software": 0.0035076141357421875, "__label__software_dev": 0.98681640625, "__label__sports_fitness": 0.00033926963806152344, "__label__transportation": 0.0003981590270996094, "__label__travel": 0.00022268295288085935}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28130, 0.01155]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28130, 0.43157]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28130, 0.91203]], "google_gemma-3-12b-it_contains_pii": [[0, 972, false], [972, 2997, null], [2997, 3703, null], [3703, 6327, null], [6327, 8437, null], [8437, 10596, null], [10596, 12803, null], [12803, 14755, null], [14755, 16324, null], [16324, 17751, null], [17751, 20389, null], [20389, 22821, null], [22821, 24279, null], [24279, 26878, null], [26878, 28130, null]], "google_gemma-3-12b-it_is_public_document": [[0, 972, true], [972, 2997, null], [2997, 3703, null], [3703, 6327, null], [6327, 8437, null], [8437, 10596, null], [10596, 12803, null], [12803, 14755, null], [14755, 16324, null], [16324, 17751, null], [17751, 20389, null], [20389, 22821, null], [22821, 24279, null], [24279, 26878, null], [26878, 28130, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28130, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28130, null]], "pdf_page_numbers": [[0, 972, 1], [972, 2997, 2], [2997, 3703, 3], [3703, 6327, 4], [6327, 8437, 5], [8437, 10596, 6], [10596, 12803, 7], [12803, 14755, 8], [14755, 16324, 9], [16324, 17751, 10], [17751, 20389, 
11], [20389, 22821, 12], [22821, 24279, 13], [24279, 26878, 14], [26878, 28130, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28130, 0.16116]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
323eb6b070850ac0a163e942dbd8e002268e6aeb
Using Constraint Programming to solve a Cryptanalytic Problem

David Gerault(1), Marine Minier(2), and Christine Solnon(3)
(1) LIMOS, Clermont-Ferrand, France
(2) Université de Lorraine, LORIA, UMR 7503, F-54506, France
(3) Université de Lyon, INSA-Lyon, F-69621, France, LIRIS, CNRS UMR5205

Published at IJCAI 2017 - International Joint Conference on Artificial Intelligence (Sister Conference Best Paper Track), Aug 2017, Melbourne, Australia, pp. 4844-4848. HAL Id: hal-01528272, https://hal.science/hal-01528272

Abstract

We describe Constraint Programming (CP) models to solve a cryptanalytic problem: the chosen key differential attack against the standard block cipher AES. We show that CP solvers are able to solve these problems quicker than dedicated cryptanalytic tools, and we prove that a solution claimed to be optimal in two recent cryptanalysis papers is not optimal, by providing a better solution.

1 Introduction

Since 2001, AES (Advanced Encryption Standard) has been the encryption standard for block ciphers [FIPS 197, 2001]. It guarantees communication confidentiality by using a secret key $K$ to encode an original plaintext $X$ into a ciphertext $AES_K(X)$, in such a way that the ciphertext can later be decoded into the original plaintext using the same key, i.e., $X = AES_K^{-1}(AES_K(X))$. Cryptanalysis aims at testing whether confidentiality is actually guaranteed. In particular, differential cryptanalysis [Biham and Shamir, 1991] evaluates whether it is possible to find the key within a reasonable number of trials by considering plaintext pairs $(X, X')$ and studying the propagation of the initial difference $X \oplus X'$ between $X$ and $X'$ while going through the ciphering process (where $\oplus$ is the xor operator). Today, differential cryptanalysis is public knowledge, and block ciphers such as AES have proven bounds against differential attacks. Hence, [Biham, 1993] proposed a new type of attack, called related-key attack, that allows an attacker to inject differences not only between the plaintexts $X$ and $X'$ but also between the keys $K$ and $K'$ (even if the secret key $K$ stays unknown to the attacker). To mount related-key attacks, the cryptanalyst must find optimal related-key differentials, i.e., a plaintext difference $\delta X$ and a key difference $\delta K$ that maximize the probability that, for a randomly chosen plaintext $X$ and key $K$, the difference $\delta X$ between the input plaintexts $X$ and $X' = X \oplus \delta X$ becomes the difference $AES_K(X) \oplus AES_{K'}(X')$ between the output ciphertexts, where $K' = K \oplus \delta K$. Finding the optimal related-key differentials for AES is a highly combinatorial problem that hardly scales.
Two main approaches have been proposed to solve this problem: a graph traversal approach [Fouque et al., 2013], and a Branch & Bound approach [Biryukov and Nikolic, 2010]. The approach of [Fouque et al., 2013] requires about 60 GB of memory when the key has 128 bits, and it has not been extended to larger keys. The approach of [Biryukov and Nikolic, 2010] only takes several megabytes of memory, but it requires several days of computation when the key has 128 bits, and several weeks when the key has 192 bits. During the process of designing new ciphers, this search generally needs to be performed several times, so it is desirable that it can be done rather quickly. Another point that should not be neglected is the time needed to design and implement these approaches: To ensure that the computation is completed within a “reasonable” amount of time, it is necessary to reduce the branching by introducing clever reasoning. Of course, this hard task is also likely to introduce bugs, and checking the correctness or the optimality of the computed solutions may not be so easy. Finally, reproducibility may also be an issue. Other researchers may want to adapt these algorithms to other problems, with some common features but also some differences, and this may again be very difficult and time-consuming. In [Gerault et al., 2016], we proposed to use Constraint Programming (CP) to solve this problem. When using CP to solve a problem, one simply has to model the problem by means of constraints. Then, this model is solved by generic solvers usually based on a Branch & Propagate approach: The search space is explored by building a search tree, and constraints are propagated at each tree node in order to prune branches. CP opens new perspectives for this kind of cryptanalysis problems. First, it is very competitive with dedicated approaches: When the key has 128 bits, Chuffed [Chu and Stuckey, 2014] is able to find optimal solutions in less than one hour. Second, the CP model is easier to check or re-use than a full program that not only describes the problem to solve, but also how to solve it. Actually, CP allowed us to prove that a solution claimed to be optimal in [Biryukov and Nikolic, 2010; Fouque et al., 2013] is not optimal by providing a better solution. 2 Problem Statement AES block cipher. AES ciphers blocks of length $n = 128$ bits, and each block is a $4 \times 4$ matrix of bytes. The length of keys is $l \in \{128, 192, 256\}$. In this paper, we only consider keys of length \( l = 128 \), and keys are \( 4 \times 4 \) matrices of bytes. Given a \( 4 \times 4 \) matrix of bytes \( M \), we note \( M[j][k] \) the byte at row \( j \in [0, 3] \) and column \( k \in [0, 3] \). AES is an iterative process, and we note \( X_i \) the ciphertext at the beginning of round \( i \in [0, r] \). Each round is composed of the following operations, as displayed in Fig. 1: - **SubBytes** \( S \). \( S \) is a non-linear permutation which is applied on each byte of \( X_i \) separately, i.e., \( \forall j, k \in [0, 3] \), \( X_i[j][k] \) is replaced by \( S(X_i[j][k]) \), according to a lookup table. We note \( SX_i = S(X_i) \). - **ShiftRows** \( SR \). \( SR \) is a linear mapping which rotates on the left by 1 byte position (resp. 2 and 3 byte positions) the second row (resp. third and fourth rows) of \( SX_i \). - **MixColumns** \( MC \). 
\( MC \) is a linear mapping that multiplies each column of \( SR(SX_i) \) by a \( 4 \times 4 \) fixed matrix chosen for its good properties of diffusion [Daemen and Rijmen, 2002]. In particular, it has the Maximum Distance Separable (MDS) property: for each column, the total number of bytes which are different from 0, before and after applying \( MC \), is either equal to 0 or strictly greater than 4. We note \( Y_i = MC(SR(SX_i)) \).
- **KeySchedule** \( KS \). The subkey at round 0 is the initial key, i.e., \( K_0 = K \). For each round \( i \in [0, r-1] \), the subkey \( K_{i+1} \) is generated from \( K_i \) by applying \( KS \). It first replaces each byte \( K_i[j][3] \) of the last column by \( S(K_i[j][3]) \) (where \( S \) is the SubBytes operator), and we note \( SK_i[j][3] = S(K_i[j][3]) \). Then, each column of \( K_{i+1} \) is obtained by performing a xor operation between bytes coming from \( K_i \), \( SK_i \), or \( K_{i+1} \).
- **AddRoundKey** \( ARK \). \( ARK \) performs a xor operation between bytes of \( Y_i \) and subkey \( K_{i+1} \) to obtain \( X_{i+1} \).
Let us note \( \text{Bytes} \) the set of all bytes (for all rounds), i.e.,
\[ \text{Bytes} = \{ X[j][k], X_i[j][k], SX_i[j][k], Y_i[j][k], K_i[j][k], SK_i[j][3] \mid i \in [0, r] \text{ and } j, k \in [0, 3] \} \]
Figure 1: AES ciphering process (the figure lists the operations \( S \), \( SR \), \( MC \), \( KS \), and \( ARK \) applied at each round \( i \in [0, r-1] \)). Each \( 4 \times 4 \) array represents a group of 16 bytes. Before the first round, \( ARK \) is applied on \( X \) and \( K \) to obtain \( X_0 \). Then, for each round \( i \in [0, r-1] \), \( S \) is applied on \( X_i \) to obtain \( SX_i \), \( SR \) and \( MC \) are applied on \( SX_i \) to obtain \( Y_i \), \( KS \) is applied on \( K_i \) to obtain \( K_{i+1} \) (and during \( KS \), \( S \) is applied on \( K_i[j][3] \) to obtain \( SK_i[j][3] \), \( \forall j \in [0, 3] \)), and \( ARK \) is applied on \( K_{i+1} \) and \( Y_i \) to obtain \( X_{i+1} \). The ciphertext \( SX_r \) is obtained by applying \( S \) on \( X_r \).
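To make the data flow concrete, the following minimal Python sketch (ours, not the authors' code) shows the two purely structural operations on \( 4 \times 4 \) byte matrices; the actual SubBytes lookup table and the GF(2^8) arithmetic of MixColumns are omitted for brevity:

```python
# Illustrative sketch only: ShiftRows and AddRoundKey on a 4x4 state.
# SubBytes (a 256-entry lookup table) and the GF(2^8) matrix
# multiplication of MixColumns are left out.

def shift_rows(state):
    """Rotate row j of the 4x4 state left by j byte positions."""
    return [state[j][j:] + state[j][:j] for j in range(4)]

def add_round_key(state, subkey):
    """Byte-wise xor of the state with the round subkey."""
    return [[state[j][k] ^ subkey[j][k] for k in range(4)]
            for j in range(4)]

# Example: entries encode (row, column) so the rotation is visible.
state = [[10 * j + k for k in range(4)] for j in range(4)]
for row in shift_rows(state):
    print(row)
# [0, 1, 2, 3]      row 0 unchanged
# [11, 12, 13, 10]  row 1 rotated left by 1
# [22, 23, 20, 21]  row 2 rotated left by 2
# [33, 30, 31, 32]  row 3 rotated left by 3
```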
**Optimal related-key differentials.** Let us note \( \delta B = B \oplus B' \) the difference between two bytes \( B \) and \( B' \), and \( \text{diffBytes} = \{ \delta B \mid B \in \text{Bytes} \} \) the set of all differential bytes. Mounting attacks requires finding a related-key differential characteristic, i.e., a plaintext difference \( \delta X = X \oplus X' \) and a key difference \( \delta K = K \oplus K' \), such that \( \delta X \) becomes \( \delta SX_r \) after \( r \) rounds with a probability as high as possible. This can be achieved by tracking the propagation of the initial differences through the cipher. Once such an optimal differential characteristic is found, the cryptanalyst asks for the encryption of messages satisfying the input difference. When this difference propagates as expected, he can infer information leading to a recovery of the secret key \( K \) with less workload than exhaustive search. The difficulty of recovering \( K \) decreases as the probability of the used differential characteristic increases.
The AES operators \( SR \), \( MC \), \( ARK \), and the xor part of \( KS \) are linear, i.e., they propagate differences in a deterministic way (with probability 1). However, the \( S \) operator is not linear: given an input byte difference \( \delta B \), the probability that \( \delta B \) becomes the output difference \( \delta SB \) is denoted \( Pr(\delta B \rightarrow \delta SB) \). It is equal to 1 if \( \delta B = \delta SB = 0 \) (i.e., \( B = B' \)). However, if \( \delta B \neq 0 \), then for every output difference \( \delta SB \) that can occur at all, \( Pr(\delta B \rightarrow \delta SB) \in \{ \frac{2}{256}, \frac{4}{256} \} \). The probability that \( \delta X \) becomes \( \delta SX_r \) is equal to the product of all \( Pr(\delta B \rightarrow \delta SB) \) such that \( \delta B \) (resp. \( \delta SB \)) is a byte difference before (resp. after) passing through the \( S \) operator when ciphering \( X \) with \( K \) and \( X' \) with \( K' \), i.e., the product of all \( Pr(\delta X_i[j][k] \rightarrow \delta SX_i[j][k]) \) and all \( Pr(\delta K_i[j][3] \rightarrow \delta SK_i[j][3]) \) with \( i \in [0, r] \) and \( j, k \in [0, 3] \). The goal is to find \( \delta X \) and \( \delta K \) that maximize this probability.
**Two-step solving process.** To find \( \delta X \) and \( \delta K \), we search for the values of all differential bytes in \( \text{diffBytes} \). Both [Biryukov and Nikolic, 2010] and [Fouque et al., 2013] propose to solve this problem in two steps. In Step 1, a Boolean variable \( \Delta B \) is associated with every differential byte \( \delta B \in \text{diffBytes} \) such that \( \Delta B = 0 \Leftrightarrow \delta B = 0 \) and \( \Delta B = 1 \Leftrightarrow \delta B \in [1, 255] \). The goal of Step 1 is to find a \textit{Boolean solution} that assigns values to the Boolean variables such that the AES transformation rules are satisfied. During this first step, the SubBytes operation \( S \) is not considered. Indeed, it neither introduces nor removes differences. Therefore, we have \( \Delta X_i[j][k] = \Delta SX_i[j][k] \) and \( \Delta K_i[j][3] = \Delta SK_i[j][3] \). As we search for a solution with maximal probability, the goal of Step 1 is to search for a Boolean solution which minimizes the number of variables \( \Delta X_i[j][k] \) and \( \Delta K_i[j][3] \) which are set to 1.
In Step 2, the Boolean solution is transformed into a byte solution: for each differential byte \( \delta B \in \text{diffBytes} \), if the corresponding Boolean variable \( \Delta B \) is assigned to 0, then \( \delta B \) is also assigned to 0; otherwise, we search for a byte value in \([1, 255]\) to be assigned to \( \delta B \) such that the AES transformation rules are satisfied and the probability is maximized. Note that some Boolean solutions may not be transformable into byte solutions. These Boolean solutions are said to be \textit{byte-inconsistent}.
3 Basic CP Model for Step 1
A first CP model for Step 1 may be derived from the AES transformation rules in a rather straightforward way. A CP model is defined by a set of variables, such that each variable \( x \) has a domain \( D(x) \), and a set of constraints, i.e., relations that restrict the values that may be simultaneously assigned to the variables.
**Variables.** For each differential byte \( \delta B \in \text{diffBytes} \), we define a Boolean variable \( \Delta B \) whose domain is \( D(\Delta B) = \{0, 1\} \): it is assigned to 0 if \( \delta B = 0 \), and to 1 otherwise.
**XOR constraint.** We first define a XOR constraint for \( ARK \) and \( KS \). Let us consider three differential bytes \( \delta A \), \( \delta B \) and \( \delta C \) such that \( \delta A \oplus \delta B = \delta C \). If \( \delta A = \delta B = 0 \), then \( \delta C = 0 \). If either \( \delta A = 0 \) and \( \delta B \neq 0 \), or \( \delta A \neq 0 \) and \( \delta B = 0 \), then \( \delta C \neq 0 \).
However, if \( \delta A \neq 0 \) and \( \delta B \neq 0 \), then we cannot know whether \( \delta C \) is equal to 0 or not: this depends on whether \( \delta A = \delta B \) or not. When abstracting the differential bytes \( \delta A \), \( \delta B \) and \( \delta C \) with Boolean variables \( \Delta A \), \( \Delta B \) and \( \Delta C \) (which only model the fact that there is a difference or not), we obtain the following definition of the XOR constraint:
\[ \text{XOR}(\Delta A, \Delta B, \Delta C) \Leftrightarrow \Delta A + \Delta B + \Delta C \neq 1. \]
**AddRoundKey and KeySchedule constraints.** Both \( ARK \) and \( KS \) are modeled with XOR constraints between \( \Delta X_i \), \( \Delta Y_i \), and \( \Delta K_i \) variables (for \( ARK \)), and between \( \Delta K_i \) and \( \Delta SK_i \) variables (for \( KS \)).
**ShiftRows and MixColumns.** \( SR \) simply shifts variables. The MDS property of \( MC \) is ensured by posting a constraint on the sum of the corresponding \( \Delta X_i \) and \( \Delta Y_i \) variables, which must belong to the set \( \{0, 5, 6, 7, 8\} \).
**Objective function.** We introduce an integer variable \( obj_{\text{Step1}} \) that must be minimized, and we post an equality constraint between \( obj_{\text{Step1}} \) and the sum of the Boolean variables on which a non-linear \( S \) operation is performed (all variables \( \Delta X_i[j][k] \) and \( \Delta K_i[j][3] \) with \( i \in [0, r] \) and \( j, k \in [0, 3] \)). Let \( v \) be the optimal value of \( obj_{\text{Step1}} \). It may happen that none of the Boolean solutions with \( obj_{\text{Step1}} = v \) is byte-consistent, or that the maximal probability \( p \) of the byte solutions derived from Boolean solutions with \( obj_{\text{Step1}} = v \) is low enough that a better probability may be reachable with a larger value of \( obj_{\text{Step1}} \) (i.e., \( p < (\frac{4}{256})^{v+1} \)). In this case, we need to search for new Boolean solutions such that \( obj_{\text{Step1}} \) is minimal while being strictly greater than \( v \). This is done by adding the constraint \( obj_{\text{Step1}} > v \) before solving Step 1 again.
**Limitations of the basic CP model.** This basic CP model is complete, i.e., for any solution at the byte level (on \( \delta \) variables), there exists a solution at the Boolean level (on \( \Delta \) variables). However, preliminary experiments have shown that there is a huge number of Boolean solutions which are byte-inconsistent. For example, when the number of rounds is \( r = 4 \), the optimal cost is \( obj_{\text{Step1}} = 11 \), and there are more than 90 million Boolean solutions with \( obj_{\text{Step1}} = 11 \). However, none of these solutions is byte-consistent. In this case, most of the Step 1 solving time is spent generating useless Boolean solutions which are discarded in Step 2.
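The soundness of this Boolean abstraction is easy to verify exhaustively. The following check (our sketch, not part of the paper's models) confirms that for every pair of bytes, the zero/non-zero pattern of \( (\delta A, \delta B, \delta A \oplus \delta B) \) never sums to exactly 1:

```python
# Brute-force check that XOR(dA, dB, dC) <=> dA + dB + dC != 1 is a
# sound abstraction of the byte-level xor dC = dA ^ dB.

def boolean_abstraction(diff_byte):
    """Delta variable: 1 iff the differential byte is non-zero."""
    return int(diff_byte != 0)

for a in range(256):
    for b in range(256):
        c = a ^ b
        s = (boolean_abstraction(a) + boolean_abstraction(b)
             + boolean_abstraction(c))
        assert s != 1, (a, b, c)

print("sound for all 65536 byte pairs")
```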
4 Additional Constraints for Step 1
When reasoning at the Boolean level, many solutions are not byte-consistent because constraints at the byte level have been ignored. To propagate some properties at the byte level, we introduce new variables and constraints that model equality relations between differential bytes.
**New equality variables.** For each pair of differential bytes \( \delta A, \delta B \in \text{diffBytes} \), we introduce a Boolean equality variable \( EQ_{\delta A, \delta B} \) which is equal to 1 if \( \delta A = \delta B \), and to 0 otherwise. These variables are constrained to define an equivalence relation by adding a symmetry constraint (\( EQ_{\delta A, \delta B} = EQ_{\delta B, \delta A} \)) and a transitivity constraint (if \( EQ_{\delta A, \delta B} = EQ_{\delta B, \delta C} = 1 \), then \( EQ_{\delta A, \delta C} = 1 \)). Also, \( EQ \) variables are related to \( \Delta \) variables by adding the constraints:
\[ EQ_{\delta A, \delta B} = 1 \Rightarrow (\Delta A = \Delta B) \]
\[ EQ_{\delta A, \delta B} + \Delta A + \Delta B \neq 0 \]
**Revisiting the XOR constraint.** When defining the constraint \( \text{XOR}(\Delta A, \Delta B, \Delta C) \), if \( \Delta A = \Delta B = 1 \), then we cannot know whether \( \Delta C \) is equal to 0 or 1. However, whenever \( \Delta C = 0 \) (resp. \( \Delta C = 1 \)), we know for sure that the corresponding byte \( \delta C \) is equal to 0 (resp. different from 0), meaning that the two bytes \( \delta A \) and \( \delta B \) are equal (resp. different), i.e., that \( EQ_{\delta A, \delta B} = 1 \) (resp. \( EQ_{\delta A, \delta B} = 0 \)). The same reasoning may be done for \( \Delta A \) and \( \Delta B \) because \( (\delta A \oplus \delta B = \delta C) \Leftrightarrow (\delta B \oplus \delta C = \delta A) \Leftrightarrow (\delta A \oplus \delta C = \delta B) \). Therefore, we redefine the XOR constraint as follows:
\[ \text{XOR}(\Delta A, \Delta B, \Delta C) \Leftrightarrow (\Delta A + \Delta B + \Delta C \neq 1) \land (EQ_{\delta A, \delta B} = 1 - \Delta C) \land (EQ_{\delta A, \delta C} = 1 - \Delta B) \land (EQ_{\delta B, \delta C} = 1 - \Delta A) \]
**Propagation of MDS at the byte level.** The MDS property ensures that, for each column, the total number of bytes which are different from 0, before and after applying \( MC \), is either equal to 0 or strictly greater than 4. This property also holds for any xor difference between two different columns of the \( X \) and \( Y \) matrices. To propagate this property, for each pair of columns in the \( X \) and \( Y \) matrices, we add a constraint on the sum of the equality variables between the bytes of these columns.
**Constraints derived from KS.** The KeySchedule mainly performs xor and \( S \) operations. As a consequence, each byte \( \delta K_i[j][k] \) may be expressed as a xor between bytes of the original key difference \( \delta K_0 \) and bytes of \( \delta SK_{i-1} \) (which are differences of key bytes that have passed through \( S \) during the previous round). Hence, for each byte \( \delta K_i[j][k] \), we precompute the set \( V(i,j,k) \) such that \( V(i,j,k) \) only contains bytes of \( \delta K_0 \) and \( \delta SK_{i-1} \), and \( \delta K_i[j][k] = \bigoplus_{\delta A \in V(i,j,k)} \delta A \). For each set \( V(i,j,k) \), we introduce a set variable \( V_1(i,j,k) \) which is constrained to contain the subset of \( V(i,j,k) \) corresponding to the Boolean variables equal to 1. We use these set variables to infer that two differential key bytes that have the same \( V_1 \) set are equal. Also, if \( V_1(i,j,k) \) is empty (resp. contains one or two elements), we infer that \( \delta K_i[j][k] \) is equal to 0 (resp. to a single variable, or to a xor between two variables).
5 CP Model for Step 2
Given a Boolean solution for Step 1, Step 2 aims at searching for the byte-consistent solution with the highest probability (or proving that there is no byte-consistent solution). For each differential byte \( \delta B \in \text{diffBytes} \), we define an integer variable whose domain depends on the value of \( \Delta B \) in the Step 1 solution: if \( \Delta B = 0 \), then \( D(\delta B) = \{0\} \) (i.e., \( \delta B \) is also assigned to 0); otherwise, \( D(\delta B) = [1, 255] \). As we look for a byte-consistent solution with maximal probability, we also add an integer variable \( P_A \) for each byte \( A \) that passes through the \( S \) operator, i.e., \( X_i[j][k] \) and \( K_i[j][3] \): this variable corresponds to the base-2 logarithm of the probability \( Pr(\delta A \rightarrow \delta S_A) \) of obtaining the \( S \) output difference \( \delta S_A \) when the \( S \) input difference is \( \delta A \). The domain of these variables depends on the value of \( \Delta A \) in the Step 1 solution: if \( \Delta A = 0 \), then \( Pr(0 \rightarrow 0) = 1 \) and therefore \( D(P_A) = \{0\} \); otherwise, \( Pr(\delta A \rightarrow \delta S_A) \in \{\frac{2}{256}, \frac{4}{256}\} \) and \( D(P_A) = \{-7, -6\} \).
The constraints basically follow the AES operations to relate variables, as described for Step 1, but consider the definition of the operations at the byte level instead of the Boolean level. A main difference is that the SubBytes operation, which has no effect at the Boolean level, must be modeled at the byte level. This is done thanks to a ternary table constraint which extensively lists all triples \( (X, Y, P) \) such that there exist two bytes \( B_1 \) and \( B_2 \) whose difference before and after passing through \( S \) is equal to \( X \) and \( Y \), respectively, and such that \( P \) is the base-2 logarithm of the probability of this transformation. The goal is to find a byte-consistent solution with maximal differential probability. As we consider logarithms, this amounts to searching for a solution that maximizes the sum of all \( P_A \) variables.
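The table constraint above is essentially the difference distribution table (DDT) of the AES S-box. As an illustration (our sketch, not the paper's implementation), the DDT can be recomputed from scratch, confirming that every feasible non-zero transition occurs for exactly 2 or 4 input bytes, i.e., with probability 2/256 (log2 = -7) or 4/256 (log2 = -6):

```python
# Build the AES S-box from the GF(2^8) inverse plus affine map, then
# compute its difference distribution table (DDT).

def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Brute-force multiplicative inverse (fine for a one-off table)."""
    return next((x for x in range(1, 256) if gf_mul(a, x) == 1), 0)

def sbox(a):
    """AES SubBytes: inverse in GF(2^8) followed by an affine map."""
    x = gf_inv(a)
    r = 0x63
    for i in range(5):  # x ^ (x<<<1) ^ (x<<<2) ^ (x<<<3) ^ (x<<<4)
        r ^= ((x << i) | (x >> (8 - i))) & 0xFF
    return r

S = [sbox(a) for a in range(256)]
assert S[0x00] == 0x63 and S[0x01] == 0x7C  # spot-check vs. the standard

ddt = {}
for dx in range(1, 256):
    for x in range(256):
        dy = S[x] ^ S[x ^ dx]
        ddt[(dx, dy)] = ddt.get((dx, dy), 0) + 1

print(set(ddt.values()))  # {2, 4}: Pr in {2/256, 4/256} for feasible transitions
```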
6 Results and Conclusion
The CP model for Step 1 was implemented with the MiniZinc modeling language [Nethercote et al., 2007], and we compared three CP solvers: Gecode [Gecode Team, 2006], Choco 4 [Prudhomme and Fages, 2013], and Chuffed [Chu and Stuckey, 2014]. The best results were obtained with Chuffed. The basic CP model (described in Section 3) does not scale well because it generates a huge number of Boolean solutions which are not byte-consistent. When adding the additional constraints described in Section 4, most of these byte-inconsistent solutions are filtered out, and Chuffed is able to solve all instances in less than one hour. The CP model for Step 2 was implemented with Choco 3. It solves all instances (i.e., finds an optimal byte-consistent solution for each Boolean solution of Step 1, when it exists) in a few seconds.
As a conclusion, CP solvers are much faster than the Branch & Bound approach of [Biryukov and Nikolic, 2010], which needs several days to solve these instances. They are also faster and much less memory-consuming than the approach of [Fouque et al., 2013], which needs 60 GB and 30 minutes on a 12-core computer just to pre-compute the graph.
**New results for differential cryptanalysis.** For \( r = 4 \) rounds, we have found a byte-consistent solution with \( obj_{Step1} = 12 \) and a probability equal to \( 2^{-79} \). This solution is better than the solution claimed to be optimal in [Biryukov and Nikolic, 2010] and [Fouque et al., 2013]: in these papers, the authors say that the best byte-consistent solution has \( obj_{Step1} = 13 \) and a probability equal to \( 2^{-81} \).
We have shown how to extend our CP models to AES with longer keys (i.e., \( l \in \{192, 256\} \)) [Gérault et al., 2017]. These models allowed us to find optimal solutions for all possible instances of AES with \( l \in \{128, 192, 256\} \) in less than 35 hours for Step 1, and in less than 6 minutes for Step 2. This is a clear improvement with respect to existing work, as the approach of [Fouque et al., 2013] cannot be extended to \( l > 128 \) due to its memory complexity, and the approach of [Biryukov and Nikolic, 2010] needs several weeks to solve instances with \( l = 192 \). Furthermore, we have shown that the solution proposed in [Biryukov and Nikolic, 2010] for \( l = 192 \) and \( r = 11 \) is inconsistent. We have also found better solutions when \( l = 256 \), and we have computed the actual optimal solution for AES with \( l = 256 \): its probability is \( 2^{-146} \) (instead of \( 2^{-154} \) for the solution of [Biryukov et al., 2009]). Using this solution, we improved the related-key distinguisher and the basic related-key differential attack on the full AES-256 by a factor \( 2^6 \), and the \( q \)-multicollisions by a factor 2 (see [Gérault et al., 2017] for more details).
These cryptanalysis problems open new and exciting challenges for the CP community. In particular, these problems are not easy to model. More precisely, naive CP models such as the one described in Section 3 may not scale well. The introduction of equality constraints at the byte level drastically improves the solving process, but these constraints are not straightforward to find and implement. Hence, a challenge is to define new CP frameworks, dedicated to this kind of cryptanalysis problem, in order to ease the development of efficient CP models for these problems.
{"Source-Url": "https://hal.science/hal-01528272/file/main.pdf", "len_cl100k_base": 7176, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 23441, "total-output-tokens": 8846, "length": "2e12", "weborganizer": {"__label__adult": 0.0006213188171386719, "__label__art_design": 0.0004804134368896485, "__label__crime_law": 0.0019741058349609375, "__label__education_jobs": 0.0008397102355957031, "__label__entertainment": 0.00014293193817138672, "__label__fashion_beauty": 0.0002586841583251953, "__label__finance_business": 0.0006742477416992188, "__label__food_dining": 0.0005397796630859375, "__label__games": 0.00127410888671875, "__label__hardware": 0.001929283142089844, "__label__health": 0.0014810562133789062, "__label__history": 0.0005469322204589844, "__label__home_hobbies": 0.00018680095672607425, "__label__industrial": 0.001186370849609375, "__label__literature": 0.00039124488830566406, "__label__politics": 0.0006365776062011719, "__label__religion": 0.0008273124694824219, "__label__science_tech": 0.44482421875, "__label__social_life": 0.0001538991928100586, "__label__software": 0.01113128662109375, "__label__software_dev": 0.5283203125, "__label__sports_fitness": 0.0006203651428222656, "__label__transportation": 0.0009622573852539062, "__label__travel": 0.0002739429473876953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29012, 0.04077]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29012, 0.59445]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29012, 0.83883]], "google_gemma-3-12b-it_contains_pii": [[0, 1035, false], [1035, 5967, null], [5967, 11637, null], [11637, 18976, null], [18976, 26006, null], [26006, 29012, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1035, true], [1035, 5967, null], [5967, 11637, null], [11637, 18976, null], [18976, 26006, null], [26006, 29012, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29012, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29012, null]], "pdf_page_numbers": [[0, 1035, 1], [1035, 5967, 2], [5967, 11637, 3], [11637, 18976, 4], [18976, 26006, 5], [26006, 29012, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29012, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
a515a5fe323fc2a3e568c4e7f40f669389a25af0
EDiFy: An Execution time Distribution Finder
Boudewijn Braams, University of Amsterdam, bbx1992@gmail.com
Sebastian Altmeyer, University of Amsterdam, altmeyer@uva.nl
Andy D. Pimentel, University of Amsterdam, a.d.pimentel@uva.nl
ABSTRACT
Embedded real-time systems are subjected to stringent timing constraints. Analysing their timing behaviour is therefore of great significance. So far, research on the timing behaviour of real-time systems has been primarily focused on finding out what happens in the worst case (i.e., finding the worst-case execution time, or WCET). While a WCET estimate can be used to verify that a system is able to meet deadlines, it does not contain any further information about how the system behaves most of the time. An execution time distribution does contain this information and can provide useful insights regarding the timing behaviour of a system. In this paper, we present EDiFy, a measurement-based framework that derives execution time distributions by exhaustive evaluation of program inputs. We overcome the scalability and state-space explosion problem by i) using static analysis to reduce the input space and ii) using an anytime algorithm which allows deriving a precise approximation of the execution time distribution. We exemplify EDiFy on several benchmarks from the TACLeBench and EEMBC benchmark suites, and show that the anytime algorithm provides precise estimates already after a short time.
1. INTRODUCTION
Research on the timing behaviour of embedded real-time systems has been primarily focused on determining the worst-case execution time (WCET). This focus is clearly motivated by the need for timing verification, i.e., the need to guarantee at design time that all deadlines will be met. Figure 1, taken from the survey paper on WCET analyses [15], illustrates the simplification inherent in this focus: it shows the fictitious execution time distribution of a real-time task, i.e., the smallest individual software component within the system. A WCET analysis reduces the often complex timing behaviour of a task to a single value. Speaking in terms of Figure 1, all values left of the WCET are ignored.
Timing verification, in its traditional form, only requires bounds on the WCET of all tasks in the system. It conservatively deems the system correct only if it operates correctly even when all tasks run up to their WCET values. For many industries, this assumption is unnecessarily conservative and leads to costly over-provisioning of hardware resources. In fact, only very few real-time applications, mostly from the avionics industry, require timing verification up to the highest standard. In most cases, infrequent deadline misses are acceptable, and also preferable to excessive hardware costs. The state of the art in timing analysis, however, does not provide the necessary means to derive richer information about the timing behaviour.
Figure 1: An execution time distribution, with annotated best-case execution time (BCET) and worst-case execution time (WCET); source: [15] (modified).
In this paper, we close this gap and present EDiFy, a framework for the estimation of execution time distributions of embedded real-time tasks. While a WCET estimate merely describes the execution time in the worst-case scenario, a distribution describes the execution times in all possible scenarios.
It can therefore be a valuable asset in the development process of real-time embedded systems, and can answer questions such as: Is the worst case a common or a rare case? What is the average execution time? What is the difference between the best-case and worst-case execution times?
Deriving execution time distributions is an even more complex task than bounding the WCET, as it subsumes the former: a correct and complete execution time distribution would also contain the information about the WCET value. Consequently, we have to restrict the problem setting. First of all, we rely on measurements instead of static analyses. Relying on measurements implies that the resulting execution time distribution will never show the full picture, unless the input space allows for exhaustive measurements, which is highly unlikely. Secondly, EDiFy is task-centric: we assume that only the task under examination is running on the hardware. Analysing the timing behaviour of a complete task set and schedule is future work. Thirdly, we assume an input value probability distribution to be provided. Even though this may seem a strong assumption, it is an absolute necessity: not even the average execution time is well defined if we do not know which inputs occur, and how often. The burden of providing these distributions is clearly on the system designer. We also consider this assumption feasible, as sample data can be derived via test runs, control simulations and so on. Last but not least, we concentrate for now on control applications instead of data-intensive image or video-processing benchmarks, due to the size of their input data.
With these restrictions in place, which we consider reasonable and realistic, the problem remains computationally intractable. EDiFy overcomes this obstacle by a combination of i) a static analysis to reduce the state space, ii) a distributed anytime algorithm, and iii) an evenly-distributed state-space traversal that ensures quick convergence of the anytime algorithm.
We note that our work differs fundamentally from the probabilistic timing analyses currently advocated for real-time verification. We do not employ any statistical methods. Instead, EDiFy uses a heuristic to approximate the distribution. If executed for a sufficiently long time, EDiFy will eventually produce the \textit{ground truth} under the restrictions detailed above, assuming that the distribution of the input values is provided. Hence, as a side effect, the EDiFy framework enables us to evaluate the precision and correctness of measurement-based probabilistic timing analyses [4]. Yet, we are ultimately interested in providing a precise approximation of the execution-time distribution, and not in providing estimates on the WCET.
\textbf{Related Work.} Execution time analyses can be classified as static analyses or measurement-based analyses [15]. Static analyses are based on analysing program code and control flow and do not involve any actual execution of program code. \textit{aiT} [11] and \textit{Bound-T} [12] are examples of commercial static analysis tools used in the real-time embedded systems industry. However, these tools are designed for producing WCET estimates only, and are not suited for deriving execution time distributions. In 2004, David and Puaut [9] developed a static analysis to derive complete execution time distributions, but without considering any hardware effects, such as caching, branch prediction or pipelining.
Their approach consists purely of a source-code analysis, hence the resulting distribution can only be an abstract indication of the actual execution times on real hardware. Measurement-based analyses, in contrast, extract timing behaviour by taking actual run-time measurements of execution times. This method is inherently simpler and merely requires the program code and/or binary and a means to execute it (either on real hardware or in a simulated environment). \textit{RapiTime} [3] represents an example of a commercial measurement-based tool. As it is in general intractable to derive all measurements, measurement-based WCET analyses tend to steer the input values towards the worst case. This is again in stark contrast to our approach, where we need to cover a wide range of input values. Recently, probabilistic timing analyses, especially measurement-based probabilistic timing analyses [4], have received ample attention in the real-time community. Despite arguing about execution time distributions in general, these approaches only serve to derive upper bounds on the execution time, and employ extreme-value theory [8] or copulas [5] to this end. As a consequence, these methods rely on strong assumptions about the probabilistic nature [13] of the hardware and input values, and foremost, they only derive cumulative distribution functions to assign exceedance probabilities to WCET estimates. To the best of our knowledge, no methods are available so far to derive complete execution time distributions on modern hardware.
\textbf{Structure.} The paper is structured as follows: Section 2 introduces the EDiFy framework, its inputs and outputs and the tools used. In Section 3 we detail the state-space pruning, and in Section 4, we detail the anytime algorithm. Section 5 provides an evaluation based on selected TACLeBench and EEMBC benchmarks, and Section 6 concludes the paper.
2. THE EDIFY FRAMEWORK
In this section, we explain the overall structure of the EDiFy framework and the required input and derived output. The input to the framework is the C code of the program to be analysed, and the input value probability distributions of each input variable. The output is an approximation of the corresponding execution time distribution. We define an execution time distribution (ETD) as a probability distribution which gives the likelihood of a certain execution time \( t \) occurring: \( \text{ETD}: \mathbb{N} \rightarrow \mathbb{R} \) with \( \sum_t \text{ETD}(t) = 1 \). Such a distribution captures the complete timing behaviour of a system.
We note that for real-world applications of embedded real-time systems it can be assumed that input values are not uniformly distributed. Take for example a control system in a modern car where the engine temperature is an input variable. Initially, the engine temperature will be low, but after driving the car for a while the engine will remain warm. It is hence evident that the input value distribution (IVD) influences the ETD, and must therefore be taken into account. An IVD assigns each value of an input variable its likelihood: \( \text{IVD}: V \rightarrow \mathbb{R} \) with \( \sum_{v \in V} \text{IVD}(v) = 1 \), where \( V \subseteq \mathbb{N} \) is the domain of the variable. We require an IVD for each independent input variable, and a conditional probability distribution for a dependent variable. We note that dependency between variables does not change the complexity of the EDiFy framework, as the input probabilities are solely used to weight the measured execution times. For the sake of simplicity, we only present the equations assuming independent variables; further details on handling dependent types can be found in [7].
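As a minimal illustration of how an IVD weights measurements into an ETD (our sketch; the function names are ours, not EDiFy's API):

```python
# Build an ETD from (input value, measured execution time) pairs,
# weighting each measurement by the probability of its input.
from collections import defaultdict

def build_etd(measurements, ivd):
    """measurements: iterable of (input_value, exec_time);
    ivd: dict mapping input_value -> probability."""
    etd = defaultdict(float)
    for value, time in measurements:
        etd[time] += ivd[value]
    total = sum(etd.values())  # normalise over the inputs visited so far
    return {t: p / total for t, p in etd.items()}

# Toy example: two inputs share an execution time of 120 cycles.
measurements = [(0, 120), (1, 120), (2, 250)]
ivd = {0: 0.5, 1: 0.25, 2: 0.25}
print(build_etd(measurements, ivd))  # {120: 0.75, 250: 0.25}
```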
2.1 Structure of the EDiFy Framework
The framework (see Figure 2) consists of two main components: a static part to prepare the input space, shown on the left side, and a dynamic part to run the measurements, shown on the right side. The input preparation is executed once and performs a static program analysis to derive the set of variables that indeed influence the execution time of the task, and how these variables influence it. The rationale behind this step is to reduce the input space by pruning irrelevant input variables and variable ranges: not every input variable influences the execution behaviour, and not every input value leads to a distinct execution time. Section 3 provides the details on the input preparation.
The measurements, i.e., the dynamic part of the EDiFy framework, are executed in a distributed fashion by a fixed number of worker processes. Each process is assigned a dedicated range of the complete input space, and traverses this range until either each input value of the assigned range has been visited, or until the algorithm is aborted. In each iteration of each process, an input generator computes the next state of the input space to be visited, injects these values into the test harness of the C code provided by the user, and creates a stand-alone executable to be executed in the simulator. The result of each measurement is forwarded to the execution time distribution calculator, which weighs the measured execution times with the input distributions.
2.2 Supported Input Types
For each supported type, we require a bijective enumeration function \( E \) that maps the complete domain of the variable to the natural numbers \( \mathbb{N} \). For example, the enumeration function for Boolean values simply assigns 0, resp. 1, to the values true and false. The enumeration function for integers shifts the complete range by the minimum integer value, i.e., \( E_{int}(x) = x + |\text{INT\_MIN}| \), to ensure that the enumeration starts at 0 instead of a negative value. For arrays with \( n \) unique values, the lexicographic order is used as an enumeration function. Built-in types such as floats and doubles can also be integrated: we can simply interpret the bit representation of a float value as an integer value, and apply the enumeration function for integers \( E_{int} \). Compound types such as structs or arrays are supported by interpreting each component as an independent input variable. Only domain-specific knowledge must be encoded using a dedicated enumeration function.
We note that input variables can influence the execution time either by influencing the control flow directly, i.e., through conditional or loop statements, or through instructions with variable execution times, such as floating-point or memory operations. While the EDiFy framework also supports variable instruction execution times by exhaustive evaluation of pointer or floating-point values, dedicated support for this type of input-dependent execution time is future work.
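The following sketch (ours; EDiFy's actual implementation may differ) shows such enumeration functions for Booleans, 32-bit integers and floats:

```python
# Illustrative enumeration functions; names are ours, not EDiFy's API.
import struct

INT_MIN = -2**31  # assuming 32-bit signed integers

def enum_bool(b):
    return 0 if b else 1  # true -> 0, false -> 1, as in the text

def enum_int(x):
    # shift the signed range so the enumeration starts at 0
    return x + abs(INT_MIN)

def enum_float(f):
    # reinterpret the 32-bit float's bit pattern as an unsigned integer,
    # which is already a valid non-negative enumeration index
    return struct.unpack('<I', struct.pack('<f', f))[0]

print(enum_int(INT_MIN))  # 0
print(enum_float(1.0))    # 1065353216 (bit pattern of 1.0f)
```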
2.3 Hardware State
The EDiFy framework is task-centric, meaning that we assume no other tasks or code to be executed on the same hardware system. Consequently, there are only two hardware states that can occur: a cold system, where no data or instructions have yet been cached, or a warm system, where the cache has already been filled with data from the task under examination. Results for the first can be achieved by resetting the simulation after each measurement, and for the second by executing the same task with the same input data twice, but only measuring the second iteration. We acknowledge that this restriction is rather substantial. We do, however, believe that information about the execution time distribution based on warm or cold hardware states alone is already valuable on its own. The extension to other hardware states is considered future work.
2.4 Implementation Details
The EDiFy framework is implemented using Python to control the tool chain and the anytime algorithm. The static program analysis is implemented within the CIL framework [14]. As target architecture, we have selected the ARMv8, for which a cycle-accurate simulator (gem5) [6] and a gcc cross-compiler are freely available. The implementation of the framework is available online [2].
3. INPUT-SPACE ANALYSIS
The main obstacle to overcome is the prohibitively large number of input variations. The input space is simply too large to naively derive an execution time measurement for each element in the input space. Our first goal is thus to remove superfluous input values and to cluster input ranges for which we can guarantee that the execution time values will be the same. The questions we need to ask here are:
- Which input parameters influence the execution times?
- How do they influence the execution times?
Static program analysis is the natural way to provide safe and complete answers to these questions. The input analysis is implemented as a backwards program analysis that derives the set of variables that influence the execution time either directly or indirectly. By directly, we mean that the variables appear in the expression within an if-statement or loop statement (or within a float or pointer operation, in the case of variable instruction times); by indirectly, we refer to variables that only influence variables from the first set.
3.1 Program Analysis
In the following, we describe the basic program analysis, which derives the set of all variables that influence the control flow of the program. All other program analyses are derived from this basic analysis using minor modifications. The domain of the analysis is the powerset of the set of variables \( V \): \( D = 2^V \), with \( \emptyset \) being the bottom and \( V \) the top element. Since we are interested in a safe analysis, we use set union \( \cup \) as the combine operator to be invoked in case of control-flow merges. The auxiliary function \( \text{varUsed}: \text{Expr} \rightarrow 2^V \) derives the set of variables used within an expression. The transfer function \( \text{tf}: \text{Instr} \rightarrow (2^V \rightarrow 2^V) \) selects all variables used within an expression in an if or loop statement, and also all variables used within an expression if the result of the expression is assigned to an execution-time-influencing variable.
It is defined as follows:
$$\text{tf}(I)(V) = \text{match } I \text{ with} \begin{cases} \text{if } (exp) & \rightarrow V \cup \text{varUsed}(exp) \\ \text{while } (exp) & \rightarrow V \cup \text{varUsed}(exp) \\ v = exp & \rightarrow \text{if } (v \in V) \text{ then } V \cup \text{varUsed}(exp) \text{ else } V \\ \text{otherwise} & \rightarrow V \end{cases} \quad (1)$$
where \( I \) is an instruction and \( V \) is the set of execution-time-influencing variables. We assume, for the sake of simplicity, a simplified instruction set where all loops have been transformed into while loops, as within the CIL framework [14], in which we have implemented the program analysis. The analysis can be modified to cover instructions with variable execution times: the analysis iterates over all expressions within a program, and whenever it encounters an expression of type float, or an expression used to index a memory address, all variables used within the expression are added to the current data-flow value.
The presented analysis derives all directly and indirectly influencing variables. To derive directly influencing variables only, we simply omit the case distinction for \( v = exp \) and directly forward the data-flow value \( V \) without any additions. We acknowledge that further program analyses, such as a value or a pointer analysis, could be integrated to further reduce the input space. These analyses, however, exceed the scope of this paper. The main purpose of the presented program analysis is to correctly classify all input variables and to ensure completeness, i.e., to ensure that each input-influencing variable is correctly identified. Bounds on the minimal or maximal values of variables, or additional information about the input variables, can be provided by the user.
3.2 Classification
The result of these program analyses is a classification of the input variables along two orthogonal lines: directly or indirectly influencing, and through loops, conditionals or variable instruction times. This classification is a prerequisite for dividing the input space in a meaningful manner. We note that this classification is not exclusive, i.e., a variable may influence the execution times in more than one category. Furthermore, as an implicit result of this classification, we can validate whether the user has specified an input distribution for all relevant variables, and we can omit irrelevant variables from further examination. We denote the set of execution-time-influencing variables with \( V_I \).
3.3 Handling Multiple Variables
In the case of multiple variables, we project the multi-dimensional input space onto the natural numbers \( \mathbb{N} \) using \( E: V_1 \times V_2 \times \ldots \times V_n \rightarrow \mathbb{N} \) with
$$E(v_1, v_2, \ldots, v_n) = \sum_{i=1}^{n} E_{T_i}(v_i) \cdot \prod_{j<i} E_{V_j}^{\max} \quad (2)$$
where \( E_{T_i} \) is the enumeration function for type \( T_i \) and \( E_{V_j}^{\max} \) denotes the size of the domain of variable \( V_j \). Similarly, we compute the input probability for the tuple of input variables \( (v_1, v_2, \ldots, v_n) \), assuming that all variables are independent, as follows:
$$\text{IVD}(v_1, v_2, \ldots, v_n) = \prod_{i=1}^{n} \text{IVD}(v_i) \quad (3)$$
Dependent variables have to be handled and defined explicitly; see [7] for further details.
3.4 Input Range Division
The measurements will be distributed to different processes so that we can exploit the parallelism of modern architectures. To this end, we evenly distribute the entire input space over all spawned processes used by the anytime algorithm.
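A small sketch (ours) of the projection of Eq. (2): each variable's enumerated value is scaled by the product of the domain sizes of the preceding variables, which yields a bijection onto a contiguous range of natural numbers:

```python
# Mixed-radix projection of a multi-dimensional input space onto N,
# following Eq. (2). Names are illustrative, not EDiFy's API.

def project(values, enums, sizes):
    """values: tuple of input values; enums: per-variable enumeration
    functions; sizes: per-variable domain sizes |dom(V_j)|."""
    index, stride = 0, 1
    for v, e, s in zip(values, enums, sizes):
        index += e(v) * stride
        stride *= s
    return index

# Two variables: a Boolean (domain size 2) and a 3-value integer.
enums = [lambda b: 0 if b else 1, lambda x: x]
sizes = [2, 3]
seen = {project((b, x), enums, sizes) for b in (True, False) for x in range(3)}
print(sorted(seen))  # [0, 1, 2, 3, 4, 5] -- the mapping is bijective
```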
4. ANYTIME ALGORITHM
The elimination of non-relevant input values is unlikely to reduce the input space sufficiently for an exhaustive evaluation. In most cases, approximation is inevitable. In this section, we detail the anytime algorithm. In particular, we describe how the input ranges assigned to each processor are traversed to achieve an even coverage, and how the resulting measurements are weighted by their corresponding input value distribution.
The anytime algorithm works by spawning various worker processes to perform the measurements, and an additional process which continuously accumulates and processes the execution times produced by the worker processes. This provides immediate availability of the latest results and thereby allows the execution time distribution to be derived on the fly. To derive a meaningful approximation of the execution time distribution early on, we divide the input space over several processes and employ a specific traversal function. Our assumption is that the execution time distribution can be approximated quickly by evaluating the input space evenly.
4.1 Range Traversal
We have to ensure that we traverse the input space, or, to be specific, the range of the input space assigned to a worker process, in a meaningful way. If we start to traverse the range from one corner and move to the other step by step, we achieve full coverage of one part of the range while the other side remains unvisited until the entire range has been visited. We refer to this type of state traversal as linear traversal. To cover the entire input space evenly early on, we propose an alternative traversal function \( \text{tr} \). The rationale behind this function is to always hit the middle of the largest unvisited sub-range. Assume an input range given by \( [0:127] \). The traversal function starts in the middle of the range, \( \text{tr}(1) = 63 \), followed by the middle of the left sub-range \( [0:63] \), \( \text{tr}(2) = 32 \), and of the right sub-range \( [63:127] \), \( \text{tr}(3) = 96 \), and so on. We refer to this traversal function as logarithmic traversal.
We first define an auxiliary function \( \text{tr}': \mathbb{N} \rightarrow (0:1) \) which computes the range pointer within the range \( (0:1) \), e.g., \( \text{tr}'(1) = 0.5 \), \( \text{tr}'(2) = 0.25 \), \( \text{tr}'(3) = 0.75 \), irrespective of the size of the range of \( \text{tr} \). It is defined as follows:
$$\text{tr}'(x) = \frac{1}{2^{\lfloor \log_2(x) \rfloor + 1}} + \frac{x - 2^{\lfloor \log_2(x) \rfloor}}{2^{\lfloor \log_2(x) \rfloor}} \quad (4)$$
Since the function \( \text{tr}' \) always cuts the unvisited ranges in half, it works best for range sizes that are a power of 2. Next, we have to map the range of \( \text{tr}' \) to an arbitrary range \( [l_{\min}:l_{\max}] \). Let \( s \) be the size of the range, i.e., \( s = l_{\max} - l_{\min} + 1 \). We define \( \text{tr}: \mathbb{N} \rightarrow \mathbb{N} \cup \{\bot\} \), the corrected version of \( \text{tr}' \), as follows:
$$\text{tr}(x) = \begin{cases} l_{\min} + \text{tr}'(x) \cdot 2^{\lceil \log_2(s) \rceil} & \text{if } \text{tr}'(x) \cdot 2^{\lceil \log_2(s) \rceil} < s \\ \bot & \text{otherwise} \end{cases} \quad (5)$$
The value \( \bot \) indicates that the result is omitted and we directly proceed with the next range index. This is necessary to ensure that each value occurs exactly once, and to avoid performing measurements with the same input values twice. The logarithmic traversal function \( \text{tr} \) is applied by each worker process, and hence to each sub-range individually. A weakness of \( \text{tr} \) is that it visits \( l_{\min} \) and \( l_{\max} \) very late, or at the very last. The values \( l_{\min} \) and \( l_{\max} \) tend to result in the lowest and highest execution time values and hence determine the overall shape of the distribution more than values from the middle of the range. To overcome this drawback, \( l_{\min} \) and \( l_{\max} \) are visited first within each process, and only after these two measurements does the traversal using \( \text{tr} \) start.
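The traversal can be sketched in a few lines of Python (ours, not the tool's code; floating-point arithmetic is sufficient at this scale):

```python
# Logarithmic traversal: tr_prime hits the middle of the largest
# unvisited sub-range first; traverse maps this onto an arbitrary
# integer range, visiting the extremes first and skipping
# out-of-range hits (the "bottom" case of Eq. (5)).
import math

def tr_prime(x):
    k = int(math.log2(x))                  # floor(log2(x))
    return 1 / 2**(k + 1) + (x - 2**k) / 2**k

def traverse(lo, hi):
    size = hi - lo + 1
    width = 2**math.ceil(math.log2(size))  # next power of two >= size
    yield lo                               # extremes first, as described
    yield hi                               # in Section 4.1
    for x in range(1, width + 1):
        v = lo + int(tr_prime(x) * width)
        if lo < v < hi:                    # skip bottom and the two ends
            yield v

print(list(traverse(0, 7)))  # [0, 7, 4, 2, 6, 1, 3, 5]
```

For a power-of-two range such as \([0:7]\), every value is produced exactly once; for other sizes, the out-of-range hits are simply skipped.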
The logarithmic traversal function $\text{tr}$ is applied by each worker process, and hence to each subrange individually. A weakness of $\text{tr}$ is that it visits $l_{\text{min}}$ and $l_{\text{max}}$ very late, or indeed last of all. The values $l_{\text{min}}$ and $l_{\text{max}}$ tend to produce the lowest and highest execution times and hence determine the overall shape of the distribution more than values from the middle of the range. To overcome this drawback, $l_{\text{min}}$ and $l_{\text{max}}$ are visited first within each process, and only after these two measurements does the traversal using $\text{tr}$ start.

4.2 Derivation of the execution time distribution

The measured execution times are stored in a relative frequency table. This table contains an entry for each observed execution time, with a value indicating its relative occurrence in relation to all others: $rf: \mathbb{N} \rightarrow \mathbb{R}$ with $\sum_{t \in \mathbb{N}} rf(t) = 1$. As we assume the availability of the value probability distributions for each of the input variables, we have to include these in the derivation of the execution time distribution. We do this by utilising the probability functions as weight functions for the frequency table. The relative frequency of a measured execution time $t$ is determined by the following update function: $rf(t) := rf(t) + \prod_{i=1}^{n} P(I_i = v_i)$. This ensures that an execution time resulting from a high-probability input contributes more to the distribution than one resulting from a low-probability input. Note that we assume the probabilities of the individual input variables to be statistically independent (i.e., that the joint probability is given by the product of the individual probabilities). The final step in deriving the execution time probability distribution is to normalise the data by dividing each value by the sum of all values. This last step ensures that the combined probabilities add up to 1.

5. EVALUATION AND RESULTS

In this section, we exemplify the EDiFy framework on a selection of benchmarks from the TACLeBench [10] and EEMBC [1] benchmark suites. TACLeBench is an open-source benchmark suite designed specifically for the evaluation of timing analysis tools, whereas EEMBC is a commercial benchmark suite based on realistic automotive use cases. Despite the large number of available benchmarks, only a subset exhibits non-trivial timing behaviour or an input-dependent execution time. Furthermore, in nearly all cases a single variable per task influences the execution time behaviour. We have selected three non-trivial benchmarks to highlight different aspects of the EDiFy framework: bubble-sort (from TACLeBench) has been selected to illustrate the progress of the anytime algorithm over time, and bitmnp and pntrch (both EEMBC) to show specific execution time distributions and their dependency on the input value distributions. Due to space constraints, further results are only available online [2]. The EDiFy framework was run on a system featuring a quad-core Intel Core i7-4700MQ processor clocked at 3.4GHz with 16GB of DDR3 RAM. The benchmarks were executed in the gem5 cycle-accurate simulator and cross-compiled using GCC 5.3.0. The simulator targeted the 64-bit ARMv8-A architecture, a clock speed of 500MHz, and a cache hierarchy featuring 128kB of L2 cache, 64kB of L1 data cache and 16kB of L1 instruction cache.
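As a concrete illustration of the bookkeeping described in Section 4.2, here is a minimal C sketch (the names and the fixed number of discretised time bins are our assumptions): record() applies the weighted update rule, and normalise() performs the final division by the sum of all values.

    #define MAX_BINS 100000   /* discretised execution-time bins (assumption) */

    static double rf[MAX_BINS];   /* un-normalised weighted frequencies */

    /* Weighted update rule: rf(t) += prod_i P(I_i = v_i); the caller passes
       the joint input probability, e.g. joint_ivd() from Section 3.3. */
    void record(long t, double input_prob)
    {
        if (t >= 0 && t < MAX_BINS)
            rf[t] += input_prob;
    }

    /* Final normalisation so that the combined probabilities sum to 1. */
    void normalise(void)
    {
        double total = 0.0;
        long t;
        for (t = 0; t < MAX_BINS; t++) total += rf[t];
        if (total > 0.0)
            for (t = 0; t < MAX_BINS; t++) rf[t] /= total;
    }

Because the accumulating process only ever adds weights and renormalises, the distribution can be read off at any time, which is what makes the algorithm an anytime algorithm.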
5.1 Anytime Algorithm

For the evaluation of the anytime algorithm we have chosen the bubble-sort benchmark, as it exhibits non-trivial timing behaviour and is easily scalable. Owing to its specific purpose as a benchmark, there is only one parameter, which has been correctly identified by the input space analysis. We assume equal probability for each permutation and use the lexicographic order as the bijective enumeration function, i.e., to assign each value from $[0 : n! - 1]$ a unique permutation. Figure 3 depicts the execution time distributions derived after 10 minutes using 4 different configurations: linear and logarithmic traversal, executed on 1 processor (with one spawned process) or on 4 processors (with 8 spawned processes). The anytime algorithm completes around 1000 measurements per processor within 10 minutes. In addition, we have added a line that shows the final, and hence exact, execution time distribution after exhaustive evaluation. The measured execution times have been rounded to the closest 0.1ns to smooth the graph. The logarithmic traversal function leads to a precise approximation after only 10 minutes, irrespective of the number of processors used, whereas the linear traversal function yields a heavily skewed approximation of the execution time distribution, which is only slightly alleviated by using 4 processors. We have also evaluated the mean difference with respect to the exact distribution (see Figure 4). Exhaustive evaluation is achieved after around 120 and 360 minutes when executed on 4 and 1 processors, respectively. The graph shows the advantage of combining logarithmic traversal with distributed processing: the logarithmic traversal function ensures a tight approximation early on, irrespective of the number of processors, and the distributed processing reduces the overall runtime, resulting in faster convergence. Interestingly, the mean differences are not monotonically decreasing for bubble-sort, as some costly permutations are only examined towards the end of the evaluation. We note that for cases with purely integer input variables we observe monotonically decreasing mean differences.

5.2 IVD-Dependency

The other two benchmarks, bitmnp and pntrch, have been selected because they exhibit rather peculiar execution time distributions. Both stem from the EEMBC automotive benchmark suite [1]. We use these benchmarks to show how the resulting execution time distributions depend on the chosen input value distributions.

6. CONCLUSION

In this paper we have presented EDiFy, a framework to derive the execution time distributions of embedded real-time tasks. EDiFy lifts real-time timing analysis from deriving bounds on the execution time to deriving complete execution time distributions. The main obstacle to deriving such execution time distributions is the computational complexity and the sheer size of the input space. We attack this state-space exploration problem by i) using static analysis to reduce the input space and ii) using an anytime algorithm which allows us to derive a meaningful approximation of the execution time distribution. The static analysis removes irrelevant input parameters and hence prunes the state space. The anytime algorithm, together with a logarithmic traversal function to achieve a balanced coverage of the input space, allows us to compute a precise approximation even if exhaustive evaluation is infeasible. We have successfully exemplified the EDiFy framework on TACLeBench and EEMBC control applications. Our framework is currently task-centric, meaning that we assume that only the task under examination is running on the hardware.
As future work, we plan to extend the framework towards complete task sets, where we take the interference between different tasks on the hardware into account. Furthermore, we plan to integrate more sophisticated program analyses to further prune the input space.

References
{"Source-Url": "https://pure.uva.nl/ws/files/40110307/a32_Braams.pdf", "len_cl100k_base": 6719, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23860, "total-output-tokens": 8084, "length": "2e12", "weborganizer": {"__label__adult": 0.0006046295166015625, "__label__art_design": 0.0006999969482421875, "__label__crime_law": 0.0005674362182617188, "__label__education_jobs": 0.0005211830139160156, "__label__entertainment": 0.00014150142669677734, "__label__fashion_beauty": 0.00027561187744140625, "__label__finance_business": 0.00037026405334472656, "__label__food_dining": 0.0004935264587402344, "__label__games": 0.0011844635009765625, "__label__hardware": 0.01142120361328125, "__label__health": 0.0007829666137695312, "__label__history": 0.0005059242248535156, "__label__home_hobbies": 0.00021708011627197263, "__label__industrial": 0.0012502670288085938, "__label__literature": 0.00027680397033691406, "__label__politics": 0.0004982948303222656, "__label__religion": 0.0007843971252441406, "__label__science_tech": 0.20849609375, "__label__social_life": 8.785724639892578e-05, "__label__software": 0.00870513916015625, "__label__software_dev": 0.759765625, "__label__sports_fitness": 0.0005183219909667969, "__label__transportation": 0.0017690658569335938, "__label__travel": 0.0003082752227783203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33822, 0.02728]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33822, 0.34008]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33822, 0.8554]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 4729, false], [4729, 12090, null], [12090, 17335, null], [17335, 24371, null], [24371, 29873, null], [29873, 33822, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 4729, true], [4729, 12090, null], [12090, 17335, null], [17335, 24371, null], [24371, 29873, null], [29873, 33822, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33822, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33822, null]], "pdf_page_numbers": [[0, 0, 1], [0, 4729, 2], [4729, 12090, 3], [12090, 17335, 4], [17335, 24371, 5], [24371, 29873, 6], [29873, 33822, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33822, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
96eab64b14ea9e19dcb373608c4208dd9995fba2
Tutorial on Modeling VAT Rules Using OWL-DL

Morten Ib Nielsen, Jakob Grue Simonsen and Ken Friis Larsen
Department of Computer Science, University of Copenhagen
Email: {mortenib|simonsen|kflarsen}@diku.dk

August 28, 2007

Abstract

This paper reports on work in progress. We present a methodology for constructing an OWL-DL model of a subset of Danish VAT rules. It is our intention that domain experts without training in formal modeling or computer science should be able to create and maintain the model using our methodology. In an ERP setting such a model could reduce the Total Cost of Ownership (TCO) and increase the quality of the system. We have selected OWL-DL because we believe that description logic is suited for modeling VAT rules, due to the decidability of important inference problems that are key to the way we plan to use the model, and because OWL-DL is relatively intuitive to use.

1 Introduction

Imagine an ERP system where domain experts can create and implement changes in e.g. VAT rules without the help of programmers. The benefits would be shorter development time and fewer mistakes due to misinterpretation of specifications, which leads to reduced TCO and increased quality of the software. On a coarse-grained scale such a system consists of three parts: a model of the rules, a tool to edit the model, and the core ERP system using the model. In this paper we focus on the first part: the model. A priori, two requirements exist. First, the modeling language must be strong enough to express the rules in question; second, it must be easy to use without training in formal modeling or computer science. In a more general setting the model can be used as a VAT knowledge system which external programs can query through an interface. In the long run we envision that authorities such as SKAT (the Danish tax administration) can provide online access to the model, e.g. using web services, such that applications always use the newest version of the model. In this paper we describe a methodology we have used to develop a model of a subset of Danish VAT rules using the general purpose Web Ontology Language (OWL) editor Protégé-OWL (http://protege.stanford.edu/overview/protege-owl.html), and we report on our experiences in doing so. We selected a subset of Danish VAT rules consisting of flat VAT (25%) plus a set of exceptions where goods and services are free of VAT, chosen because they seem representative. Further, the rules are accessible to us by way of an official guideline by the Danish tax administration. Our study focuses on the feasibility of using OWL to model VAT rules and not on the usability of the Protégé-OWL tool itself. By feasibility we mean how easy or difficult it is (for a human) to express and understand VAT rules in OWL; in particular, this does not cover issues such as modularization. The methodology presented here is inspired by the article [1] together with our own experience. Readers of this guide are assumed to have user experience of Protégé-OWL corresponding to [2], but not of computer science nor of modeling in general.

1.1 Motivation

One of the overall goals of the strategic research project 3gERP is to reduce the TCO of Enterprise Resource Planning (ERP) systems.
We believe that a VAT model helps to this end in two ways. First, we envision that domain experts create and update the model, thus eliminating a layer of interpretation (the programmer) where errors can be introduced. Second, a VAT model can change the handling of VAT from being a customization task into being a configuration task, meaning that no code needs to be changed when the model is updated. VAT and legal rules in general deal with frequent transactions between legal entities. Transactions are typically triggered when certain conditions are fulfilled, and therefore dynamic checks on these conditions are needed. The idea is to use the model to automatically infer what actions should be taken based on the conditions. In the case of VAT rules we can ask the model whether a delivery is subject to VAT or not, based on the information we know about the delivery. The answer from the model will be \textit{Yes}, \textit{No} or \textit{Maybe} (the latter in the case where insufficient information is provided to answer the question) and can be used to trigger an appropriate transaction. In a broader perspective the model is supposed to work as a VAT knowledge system that, given a context and a question, can tell other systems what to do, e.g. guide accounting systems and, if required, indicate that authorities should be contacted.

1.2 Roadmap

The remainder of this paper is structured as follows. In Section 2 we give a short account of description logic and OWL. In Sections 3, 4 and 5 we present our methodology by giving examples. Finally, we outline future work in Section 6 and conclude in Section 7.

2 Description Logic and OWL

In this section we give a short introduction to description logic (DL) and OWL. This introduction can be skipped if you are already familiar with the concepts. Description logics are knowledge representation languages that can be used to structure terminological knowledge in knowledge systems and that are formally well understood. A knowledge system typically consists of a knowledge base together with a reasoning service. The knowledge base is often split into a set of concept axioms (the \textit{TBox}), a set of assertions (the \textit{ABox}) and a \textit{role hierarchy}. These constitute the \textit{explicit} knowledge in the knowledge system. The reasoning service is a program that can check the consistency of the knowledge base and make implicit knowledge explicit, e.g. decide equivalence of concepts. Since the reasoning service is a pluggable component, knowledge systems separate the technical task of reasoning from the problem of constructing the knowledge base.

2.1 OWL

OWL, which is short for Web Ontology Language, is an ontology language designed to be compatible with the World Wide Web and the Semantic Web. The most important abstraction in OWL is the concept axiom, which is called a class. Each class has a list of necessary conditions and zero or more equivalent lists of necessary and sufficient conditions [2]. A list of necessary conditions is a list of conditions that every member of the class must satisfy. In the same way, a list of necessary and sufficient conditions is a list of conditions that must be satisfied by every member of the class and that, if satisfied, guarantees membership of the class. OWL is based on XML, RDF and RDF-S and can be used to represent information in a way that is more accessible to applications than traditional web pages. In addition, OWL has a formal semantics, which enables logical reasoning.
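To make the distinction concrete, consider the following pair of axioms in description logic notation (a toy example of ours, not part of the VAT model): a necessary condition is an implication, whereas a necessary and sufficient condition is an equivalence:

$$\textsf{Embassy} \sqsubseteq \textsf{Buyer} \qquad\qquad \textsf{VATFreeSale} \equiv \textsf{Sale} \sqcap \exists\, \textsf{hasPlaceOfDelivery}.\textsf{NonEU}$$

The first axiom says that every embassy is a buyer, but being a buyer does not make something an embassy. In the second, anything that is a sale with a place of delivery outside the EU is thereby guaranteed to be a member of VATFreeSale, and vice versa; a reasoner can therefore classify individuals into VATFreeSale automatically.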
OWL comes in three variants, OWL-Lite ⊆ OWL-DL ⊆ OWL-Full, of increasing expressive power. The variants OWL-Lite and OWL-DL are based on the description logics $\mathcal{SHIF}(D)$ and $\mathcal{SHOIN}(D)$ respectively [3], which guarantees that important inference problems such as satisfiability and subsumption are decidable. Since OWL is XML based, we need an editor to create OWL ontologies. We have used the general purpose OWL editor Protégé, developed by Stanford Medical Informatics at the Stanford University School of Medicine.

3 VAT Exemption 1: Sales outside EU

Our methodology is aimed at modeling VAT rules as described in guidelines instead of the raw law text itself. This choice was made because guidelines are more accessible to us, and because these are the rules that small companies adhere to in practice. Further, the investigation of the feasibility of using OWL to model VAT rules concerns the ease with which rules can be formalized, and not so much from where the rules are extracted (since we have used the official guidelines by SKAT, the Danish tax administration, we believe that the content of the guidelines is in accordance with the law). In what follows we refer to the guideline as the legal source. In order to ease reading, we use the word concept only when we speak about the legal source. The corresponding concept in the model (OWL) is called a class. A concept in the legal source is modeled as one or more classes in the model. Here we present the steps we took in order to make our model of Danish VAT rules.

3.1 Pre-modeling

1. Download Protégé-OWL from http://protege.stanford.edu/download/release/full/ and install it. Make sure you can start Protégé in OWL mode (logic view). When started, and if you select the Class tab, it should look like Figure 1.
2. Download [2] and read it. This is important because many of the constructions we use are explained therein.

3.2 Modeling

First you must decide which legal source(s) you want to model.

Figure 1: Protégé-OWL class tab, logic view.

In our case we used the official guideline Moms - fakturering, regnskab mv, E nr. 27, Version 5.2 digital, 19. januar 2005.

3.2.1 Overall framework

Modeling should start with a read-through of the legal source. Based on this, general (to be refined later) classes such as Location, Goods, Services and FreeOfVAT, together with attributes such as hasDeliveryType and hasSalesPrice, can be created as subclasses of the built-in top-level class owl:Thing. An attribute can usually take on at most a finite number of values. In that case we use value partitions to model them, as described in [2][p. 73-76] (an exception is the domain of truth values, which is built in as a data type). If the domain is not finite, we use data type properties instead. Deciding on the overall framework helps to structure the capturing of rules in a homogeneous way and enables working in parallel (which can be needed if the legal source is large). After our read-through of the legal source we arrived at the overall framework in Figure 2.

Figure 2: Overall framework.

Naming Convention. All classes, properties, individuals etc. should be given names picked from, or inspired by, the legal source. All names should be in the same language as the legal source (in our case Danish). Using the naming convention supported by Protégé-OWL, class and individual names should be written in Pascal Notation, e.g. InternationalOrganization, not internationalOrganization or International_Organization, while property names are written in Camel Hump Notation, e.g. someProperty.
Typically a property is used to assign an attribute to a class. In this case we prefix the name of the property with a verb describing the kind of relation the class has along that property, e.g. hasNumberOfSides or isFragile.

3.2.2 Rule modeling - step I

Having modeled the overall framework, it is time to go through the legal source one section at a time, looking for rules that should be modeled. Here we give an elaborate description of how to model a single rule from the legal source, starting from the overall framework in Figure 2. In Sections 4 and 5 we give a brief description of how to model other rules. Together, the modeling of these rules covers all the constructions we have used in our VAT model. Since our legal source is in Danish, we present the rules in their original Danish phrasing together with a translation into English. Now let us consider the rule shown in Table 1.

Table 1: Extract from the legal source and its translation into English.

Sales outside EU (3rd countries). No VAT should be added to goods delivered to destinations outside the European Union, or to the Faroe Islands or Greenland. This fact ordinarily also applies to services, but VAT should be added to certain services. Translated from [4][p. 9]

Since our model is only a prototype, we make a slight simplification and assume that the rule also applies to all services. With this simplification we can identify the necessary and sufficient conditions for application of the rule. These are shown in Table 2.

Table 2: Necessary and sufficient conditions for application of the rule in Table 1.

- The rule concerns sales.
- The rule concerns both goods and services.
- The place of delivery must be outside the European Union, or the Faroe Islands or Greenland.

In order to model the necessary and sufficient conditions in Table 2, we must add some attributes to VarerOgYdelser. The first and second conditions in Table 2 tell us that we must be able to model that goods and services are sold (instead of being sold, goods can also be used as e.g. a trade sample; see [4][p. 8-9] for other examples). We do that by adding an attribute to the class VarerOgYdelser (translates into GoodsAndServices), which already exists in our overall framework. Attributes are modeled using functional properties. In accordance with our naming convention we select the name harLeveranceType (translates into hasDeliveryType). Since there is a finite number of delivery types, we model this attribute as a value partition, i.e. an enumeration. Value partitions can be created using a built-in wizard (Menu > Tools > Patterns > Value Partition...). Just as in [2], we store value partitions as subclasses of the class ValuePartitions. The reason plain enumerations are not used is that they cannot be sub-partitioned. Using value partitions we retain the possibility of further refining the concepts that the value partitions model.

Remark. Technically, enumerations are constructed by defining a class in terms of a finite set of individuals plus a functional property that has this class as its range. Since individuals are atoms, they cannot be subdivided. On the other hand, a value partition is defined using a functional property having as its range a class defined as the union of its subclasses, all of which are mutually distinct. These subclasses can (because they are classes) be partitioned into further subclasses if needed.
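In description logic notation, the value partition pattern amounts to the following axioms (our transcription; the elided disjuncts stand for the remaining delivery types, which are additionally declared pairwise disjoint):

$$\textsf{LeveranceType} \equiv \textsf{Salg} \sqcup \cdots \qquad\qquad \top \sqsubseteq\ {\leq}1\, \textsf{harLeveranceType}$$

The functionality axiom makes harLeveranceType single-valued, while the covering axiom enumerates the admissible values; because the disjuncts are classes rather than individuals, each of them can later be partitioned further, which is exactly the advantage over a plain enumeration noted in the Remark above.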
Having created the value partition harLeveranceType, which can have Salg (translates into Sale) as a value, we need to add it as an attribute to the class VarerOgYdelser. This is done by adding to the necessary conditions an existential quantification over the corresponding property, having the value partition (or data type, in case of a data type attribute) as its range. Thus we add harLeveranceType some LeveranceType to VarerOgYdelser. The third condition tells us that we must be able to model that goods and services have a place of delivery. A read-through of the legal source tells us that only three places are needed, namely Denmark, EU and non-EU. Thus this attribute, which we name harLeveranceSted (translates into hasPlaceOfDelivery), must be modeled as a value partition. Having modeled these attributes, the class VarerOgYdelser looks as shown in Figure 3.

3.2.3 Rule modeling - step II

Now we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget (translates into FreeOfVAT). Following our naming convention we name the class MomsfritagetSalgAfVarerOgYdelserTilIkke-EU (translates into VATFreeSalesOfGoodsAndServicesInNon-EU). Then we add a textual description of the rule, and a reference to where in the legal source the rule stems from, to the rdfs:comment field. Next we must specify necessary and sufficient conditions on membership of MomsfritagetSalgAfVarerOgYdelserTilIkke-EU. It is important to remember that if a class has two sets of necessary and sufficient conditions, then they must imply each other; see [2][p. 98]. Based on the necessary and sufficient conditions captured in Table 2, we add the following necessary and sufficient conditions to MomsfritagetSalgAfVarerOgYdelserTilIkke-EU:

- VarerOgYdelser
- harLeveranceSted some Ikke-EU
- harLeveranceType some Salg

The result is shown in Figure 4.

4 VAT Exemption 2: Sales to Embassies

From this section onwards we will not mention when to add references to the legal source to the rdfs:comment fields of classes and properties. The rule of thumb is that this should always be done. Now let us consider the rule in Table 3. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 4.

Figure 3: Class and property view after adding attributes.

Figure 4: Asserted Conditions of our model of the legal rule in Table 1.

Table 3: Extract from the legal source and its translation into English.

Salg til ambassader. Du skal ikke beregne moms af varer og transportydelser, som du leverer til ambassader og internationale organisationer i andre EU-lande.

And translated into English: Sales to embassies. VAT should not be added to goods and transport services delivered to embassies and international organizations in countries within the European Union. Translated from [4][p. 9]

Table 4: Necessary and sufficient conditions for application of the rule in Table 3.

- The rule concerns sales.
- The rule concerns goods and transport services.
- The place of delivery must be in the European Union.
- The buyer must be an embassy or an international organization.

4.1 Rule modeling - step I

We are already able to model that the rule concerns sale and that the place of delivery must be in the EU. We cannot yet model the specific service transportation. Therefore we must add it to our model.
Since it is a service, it should be modeled as a subclass of Services. We name the class modeling the service transportation Transport (translates into Transportation). Now we can model that something belongs to the set of goods and transport services by requiring membership of Varer ⊔ Transport. Finally, we must be able to model that the buyer is an embassy or an international organization. Since there are only finitely many different kinds of buyers, we model this as a value partition, and because this attribute applies to both Varer and Transport, we add it to their most specific common superclass, which is VarerOgYdelser. We name this attribute harKøberType (translates into hasKindOfBuyer). After having done all this, the model looks as shown in Figure 5.

4.2 Rule modeling - step II

Having added all the necessary classes and attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget. Following our naming convention we name the class MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU (translates into VATFreeSalesToEmbassiesAndInternationalOrganizationsInEU). Based on the necessary and sufficient conditions captured in Table 4, we add the following necessary and sufficient conditions to MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU:

- harLeveranceType some Salg
- Varer ⊔ Transport
- harLeveranceSted some EU
- harKøberType some AmbassadeOgPersonaleMedDiplomatiskRettighed

The result is shown in Figure 6.

5 VAT Exemption 3: Sales in other EU countries

In this section we consider one final rule, the rule in Table 5. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 6.

5.1 Rule modeling - step I

We are already able to model that the rule concerns the sale of goods delivered inside the European Union. The new thing is that we must be able to indicate whether a buyer is registered for VAT and, if so, we must register the buyer's VAT registration number. We use a functional data type property named erKøberMomsregistreret (translates into isTheBuyerRegisteredForVAT) with the data type xsd:boolean as its range to model whether the buyer is registered for VAT. Similarly, we use a functional data type property named erKøbersMomsnummer (translates into isBuyersVATRegistrationNumber) with the data type xsd:string as its range to register the buyer's VAT registration number if he has one.

Figure 5: The model after adding classes and attributes as described in Section 4.1.

Figure 6: Asserted Conditions of our model of the legal rule in Table 3.

Table 5: Extract from the legal source and its translation into English.

Sales in other EU countries. No VAT should be added to goods delivered to companies in other EU countries, provided that the companies are registered for VAT. In this case you must acquire the VAT registration number of the company. Translated from [4][p. 8]

Table 6: Necessary and sufficient conditions for application of the rule in Table 5.

- The rule concerns sales.
- The rule concerns goods.
- The place of delivery must be in the European Union.
- The buyer must be registered for VAT.
- You must acquire the VAT registration number of the company.

Figure 7: Asserted Conditions of VarerOgYdelser after adding the requirement for registering VAT registration numbers.
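Written out in description logic, the asserted conditions of Figure 6 correspond to a single equivalence axiom (our transcription of the conditions listed in Section 4.2):

$$\begin{aligned} &\textsf{MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU} \equiv (\textsf{Varer} \sqcup \textsf{Transport}) \\ &\quad \sqcap\ \exists\, \textsf{harLeveranceType}.\textsf{Salg} \sqcap \exists\, \textsf{harLeveranceSted}.\textsf{EU} \sqcap \exists\, \textsf{harKøberType}.\textsf{AmbassadeOgPersonaleMedDiplomatiskRettighed} \end{aligned}$$

A reasoner can then classify any delivery satisfying all four conjuncts as VAT free under this rule.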
A read-through of [4] reveals that you must register the VAT registration number of the buyer exactly when the buyer is registered for VAT. Thus we model this as a property of VarerOgYdelser and not of Varer (as indicated by the rule). The requirement can be modeled as follows:

((erKøberMomsregistreret has true) ⊓ (erKøbersMomsnummer exactly 1)) ⊔ ((erKøberMomsregistreret has false) ⊓ (erKøbersMomsnummer exactly 0))

The result is shown in Figure 7.

5.2 Rule modeling - step II

Having added the necessary attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget. Following our naming convention we name the class MomsfritagetSalgTilAndreEU-lande (translates into VATFreeSalesToOtherEUCountries). Based on the necessary and sufficient conditions captured in Table 6, we add the following necessary and sufficient conditions to MomsfritagetSalgTilAndreEU-lande:

- harLeveranceType some Salg
- Varer
- harLeveranceSted some EU
- erKøberMomsregistreret has true

We note that the obligation to register the buyer's VAT registration number is modeled indirectly; see Section 5.1. The result is shown in Figure 8.

6 Future work

Since this is work in progress, there are many areas we still need to address. In the near future we plan to integrate our model in a prototype ERP system, as described in the introduction. This opens the possibility of modeling the parts of the Danish VAT legislation concerning depreciation and VAT reporting (since they are intertwined and contain a lot of technical requirements on the financial reports). We also need to model other countries' VAT rules in order to confirm that Danish VAT rules are indeed representative with respect to the constructions that are needed in the modeling language. Based on this, we need to refine our overall framework such that it captures the common structure, and we need to identify what kinds of questions a model must be able to answer. The synthesized knowledge from modeling the VAT rules of other countries should also result in a more detailed analysis of what we can and cannot model. Based on all this, we should design a minimal description logic extended with the functionality identified in the analysis just mentioned, such as predicates like x < 100, which are needed in some rules. We should also provide a reasoner for the logic, together with an editor, such that the above process can be repeated. Finally, in order to compare our OWL model with a different approach, we want to make a model, using Datalog (the de facto standard language used to express rules in deductive databases), of the rules we have already formalized in OWL. It would also be interesting to try a hybrid solution, e.g. OWL plus a rule language like SWRL. This work is independent of the tasks mentioned above and can be carried out in parallel.

7 Conclusion

We have shown how to model a subset of Danish VAT rules concerning exemption from VAT using Protégé-OWL. First we created an overall framework for the VAT model with the property that legal rules, and the concepts they involve, can be modeled as subclasses of existing classes in the framework.
This helps to ensure that related concepts are modeled in the same way and that a single concept is not modeled twice. The second step was an iterative process consisting of two steps repeated for each rule. The first step is to extend the model such that the rule in question can be modeled. This is done by modeling concepts from the legal source as classes in the model and by adding attributes to the necessary conditions of such classes. The second step is to model the rule itself. This is done by adding specific requirements for application of the rule to the necessary and sufficient conditions of the class modeling the rule. The step-by-step iterative modeling has worked well in practice, and an extension to cover several different VAT and duty rates does not seem to be problematic, as long as they do not require us to model restrictions such as x < 100, which is not supported directly in OWL (whether this is a weakness of OWL, or just us trying to use OWL for something it was not designed to do, remains an open question). Apart from modeling inequalities we have not had modeling problems. One problem, though, is that reasoning about individuals in OWL models is not supported very well. Therefore we have tried to avoid the use of individuals wherever possible (using value partitions).

References
{"Source-Url": "https://static-curis.ku.dk/portal/files/15432526/nielsen-simonsen-larsen.pdf", "len_cl100k_base": 5766, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 30346, "total-output-tokens": 6867, "length": "2e12", "weborganizer": {"__label__adult": 0.0005784034729003906, "__label__art_design": 0.00079345703125, "__label__crime_law": 0.0038967132568359375, "__label__education_jobs": 0.0028514862060546875, "__label__entertainment": 0.0001691579818725586, "__label__fashion_beauty": 0.0003342628479003906, "__label__finance_business": 0.0188446044921875, "__label__food_dining": 0.0005617141723632812, "__label__games": 0.0009927749633789062, "__label__hardware": 0.0008411407470703125, "__label__health": 0.0009741783142089844, "__label__history": 0.0005192756652832031, "__label__home_hobbies": 0.0002570152282714844, "__label__industrial": 0.0014104843139648438, "__label__literature": 0.0007805824279785156, "__label__politics": 0.0013132095336914062, "__label__religion": 0.00047969818115234375, "__label__science_tech": 0.11956787109375, "__label__social_life": 0.00021135807037353516, "__label__software": 0.05987548828125, "__label__software_dev": 0.78271484375, "__label__sports_fitness": 0.0002818107604980469, "__label__transportation": 0.001407623291015625, "__label__travel": 0.00033593177795410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27157, 0.0301]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27157, 0.49784]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27157, 0.90364]], "google_gemma-3-12b-it_contains_pii": [[0, 442, false], [442, 3067, null], [3067, 6362, null], [6362, 9211, null], [9211, 9307, null], [9307, 11405, null], [11405, 14091, null], [14091, 16916, null], [16916, 16975, null], [16975, 17898, null], [17898, 20637, null], [20637, 20722, null], [20722, 21717, null], [21717, 23688, null], [23688, 26341, null], [26341, 26606, null], [26606, 27157, null]], "google_gemma-3-12b-it_is_public_document": [[0, 442, true], [442, 3067, null], [3067, 6362, null], [6362, 9211, null], [9211, 9307, null], [9307, 11405, null], [11405, 14091, null], [14091, 16916, null], [16916, 16975, null], [16975, 17898, null], [17898, 20637, null], [20637, 20722, null], [20722, 21717, null], [21717, 23688, null], [23688, 26341, null], [26341, 26606, null], [26606, 27157, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27157, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27157, null]], "pdf_page_numbers": [[0, 442, 1], [442, 3067, 2], [3067, 6362, 3], [6362, 9211, 4], [9211, 9307, 5], [9307, 11405, 6], [11405, 14091, 7], [14091, 16916, 
8], [16916, 16975, 9], [16975, 17898, 10], [17898, 20637, 11], [20637, 20722, 12], [20722, 21717, 13], [21717, 23688, 14], [23688, 26341, 15], [26341, 26606, 16], [26606, 27157, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27157, 0.01325]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
c73fc3b0ab0ca46561fcb546523a39b928a5fc0c
Adaptive Resonance Theory (ART): An Introduction

by L.G. Heins & D.R. Tauritz

May/June 1995

[Cover figure: ART-1 network produced by SNNS v3.3 - 9 input nodes, 9 output nodes]

Index

Paragraph 1 - Introduction
Paragraph 2 - Motivation
Paragraph 3 - Concepts
Paragraph 4 - Mechanics
Paragraph 5 - Adaptations
References
Appendix - ANSI-C source code for cluster-euclidean instability example

Paragraph 1 - Introduction

The purpose of this paper is to provide an introduction to Adaptive Resonance Theory (ART) by examining ART-1, the first member of the family of ART neural networks. The only prerequisite knowledge in the area of neural networks necessary for understanding this paper is backpropagation [Hinton86]. For an easy introduction to neural networks see [Freeman91]; for a more in-depth overview of the field see [Hertz91]. Many interesting problems concern the classification of data. For example, say we want to classify animals according to certain characteristics described by a set of parameters. We might have a dog, a cat and an owl. Some characteristics might be "number of legs", "can fly", "has fur" and "is a carnivore". With these characteristics we would hope that the cat and the dog are classified together and the owl separately. In this paper an algorithm which performs this mapping is called a clustering algorithm. A clustering algorithm takes as input a set of input vectors and gives as output a set of clusters and a mapping of each input vector to a cluster. Input vectors which are close to each other according to a specific similarity measure should be mapped to the same cluster. Clusters can be labelled to indicate a particular semantic meaning pertaining to all input vectors mapped to that cluster. The cat and the dog might be classified in a cluster labelled "mammals" and the owl in "birds". However, one could also choose "pets" as the label for the cluster with the cat and the dog and "winged animals" for the other. Clusters are usually represented internally using prototype vectors, which are vectors indicating a certain similarity between the input vectors mapped to a cluster. In the above example the first cluster might have prototype vector (4 legs, can't fly, has fur, is a carnivore) and the second might have prototype vector (2 legs, can fly, doesn't have fur, is a carnivore). In paragraph 2 the argument will be made that many popular neural networks, such as backpropagation, have drawbacks making them less suitable for solving these kinds of classification problems. This will be the motivation for introducing ART. In paragraph 3 the sequential algorithm underlying the ART-1 network is given, along with another sequential clustering algorithm with which it is compared. Paragraph 4 will introduce competitive networks, show how to extend them to ART networks, and examine the ART architecture in detail. Paragraph 5 will mention some other members of the ART family, including many references for further reading.

Paragraph 2 - Motivation

For example, say we want to categorize the vectors within a certain input environment.
At a certain point in time we start training a backpropagation network with N vectors. When training is completed, these N vectors will be correctly classified, and hopefully other vectors within this input environment will be as well, because of generalization. However, as the input environment changes over time, the accuracy of the backpropagation network will rapidly decrease, because the weights are fixed, thus preventing the network from adapting to the changing environment. This algorithm is not plastic. An algorithm is plastic if it retains the potential to adapt to new input vectors indefinitely. To overcome this problem the network can be retrained on the new input vector (or the last few). The network will then adapt to changes in the input environment (remain plastic), but this will cause a rapid decrease in the accuracy with which it categorizes the old input vectors, because old information is lost. This algorithm is not stable. An algorithm is stable if it preserves previously learned knowledge. This conflict between stability and plasticity is called the stability-plasticity dilemma [Carpenter87a]. The problem can be posed as follows:

- How can a learning system be designed to remain plastic, or adaptive, in response to significant events and yet remain stable in response to irrelevant events?
- How does the system know how to switch between its stable and its plastic modes to achieve stability without rigidity and plasticity without chaos?
- In particular, how can it preserve its previously learned knowledge while continuing to learn new things?
- And, what prevents the new learning from washing away the memories of prior learning?

Most existing algorithms are either stable but not capable of forming new clusters, or plastic but unstable. The above method using backpropagation could be adapted by retraining the network on the entire set of input vectors each time a new input vector is presented. This, however, would be extremely inefficient, and thus its use would be precluded in any practical application. The problem with this method is that it is not incremental. What we need is a network which itself is incremental, thus making it unnecessary to retrain the network on the entire set of input vectors. As we will see in an example in paragraph 3, some incremental networks are unstable. ART was specifically designed to overcome the stability-plasticity dilemma [Grossberg76b]. The ART-1 neural network was designed to overcome this dilemma for binary input vectors [Carpenter87a], ART-2 for continuous ones as well [Carpenter87b]. In this paper we confine ourselves to discussing ART-1. ART-1 is an unsupervised neural network: it is unsupervised in the sense that it establishes the clusters without external interference.

Paragraph 3 - Concepts

We can study some properties of a neural network by examining its sequential counterpart, without being distracted by its architecture. To gain insight into what ART-1 does, as opposed to how it does it, an algorithmic description will be presented in this chapter. First of all, let us clarify what is meant by an incremental clustering algorithm by presenting an algorithm shell for this purpose.
CLUSTER - A clustering algorithm shell with incremental update of prototype vectors and a variable number of clusters

Step 1 - Initialisation
- Start with no cluster prototype vectors

Step 2 - Apply new input vector
- Let I := [next input vector]

Step 3 - Find the closest cluster prototype vector (if any)
- Let P := [closest prototype vector]

Step 4 - Check if P is too far from I
- If P is too far from I, or if there are no cluster prototype vectors yet, then create a new cluster with prototype vector equal to I; output the index of this cluster; goto step 2

Step 5 - Update the matched prototype vector
- Update P by moving it closer to I
- Output P's index
- Goto step 2

To obtain an actual algorithm it is necessary to define "closest", "too far" and "move closer to". A possible instantiation of CLUSTER is one using the Euclidean distance measure.

CLUSTER-EUCLIDEAN - An instantiation of CLUSTER using a Euclidean distance measure

Step 1 - Initialisation
- Start with no cluster prototype vectors

Step 2 - Apply new input vector
- Let $I = (I_1, \ldots, I_n)$ := [next input vector]

Step 3 - Find the closest cluster prototype vector (if any)
- Find the $P = (P_1, \ldots, P_n)$ which minimizes $d(P,I) = \sqrt{\sum_{x=1}^{n} (P_x - I_x)^2}$

Step 4 - Check if P is too far from I
- If $d(P,I) > \theta$, or if there are no cluster prototype vectors yet, then create a new cluster with prototype vector equal to I; output the index of this cluster; goto step 2

Step 5 - Update the matched prototype vector
- Let $P := (1 - \lambda) \cdot P + \lambda \cdot I$
- Output P's index
- Goto step 2

This instantiation is, however, unstable in the sense that the prototype vectors can cycle indefinitely during repetitive presentation of a finite sequence of input vectors (see Fig. 1). Also, different prototype vectors may have infinitesimal differences. Both problems are solved in the ART-1 algorithm.

Fig. 1: Snapshots of CLUSTER-EUCLIDEAN (see Appendix) for θ = 1, λ = 0.2 (the radius of the circle of input vectors is 1) after a) 10 input vectors, b) 40 input vectors and c) 200 input vectors. The squares represent the input vectors (numbered in their order of presentation in a), the circles the traces of the prototype vectors. After the second input vector has been presented there are (and continue to be) precisely two prototype vectors, moving counter-clockwise and eventually reaching a limit cycle.
ART-1 Clustering Algorithm

Note: $v \cap w$ = bitwise AND of vectors v and w; $\|u\|$ = [magnitude of u] = number of 1's in u

Step 1 - Initialisation
- Initialise the vigilance parameter $\rho$ so that $0 < \rho \leq 1$
- Initialise the set P of prototype vectors to {}

Step 2 - Apply new input vector
- Let I := [next input vector]
- Let P' := P be the set of candidate prototype vectors

Step 3 - Find the closest prototype vector from P'
- Find the i which maximizes $\frac{\|p_i \cap I\|}{\beta + \|p_i\|}$

Step 3' - Check if I is closer to $p_i$ or to (1, 1, ..., 1)
- If $\frac{\|p_i \cap I\|}{\beta + \|p_i\|} < \frac{\|I\|}{\beta + n}$ then create a new cluster $p_j$ equal to I; $P = P \cup \{ p_j \}$; output j; goto step 2

Step 4 - Check if $p_i$ is too far from I
- If $\frac{\|p_i \cap I\|}{\|I\|} < \rho$ then $P' = P' \setminus \{p_i\}$; if P' is empty goto step 2, otherwise goto step 3

Step 5 - Update the matched prototype vector
- Let $p_i = p_i \cap I$; output i; goto step 2

The $\beta$ acts as a tie-breaker, favouring larger-magnitude prototype vectors when multiple prototype vectors are subsets of the input vector. This compensates for the fact that prototype vectors can only move in one direction. The vigilance parameter defines the class sizes. When it is small, it produces large classes. As it gets larger, the vigilance of the network increases, and finer classes are the result. When it is equal to one, the prototype vectors have to match the input vectors perfectly; in this situation every input vector produces a new class equal to itself. Also notice that in step 4 a form of contrast enhancement is performed. This means that clusters represented by a smaller-magnitude prototype vector have a smaller variance among the vectors mapped to that cluster. When implementing this algorithm it is necessary to deal with the restriction of limited memory resources. The following algorithm allocates a fixed amount of memory to work with, assuming that this will be enough. This corresponds to how the actual ART-1 network works. It has two major drawbacks. First of all, one may not always know the maximum number of different clusters beforehand. And secondly, if this maximum is known but very high, one may not want to allocate all the memory resources before they are really needed. To overcome both problems it is possible to begin with a small fixed amount of memory and, whenever there is a shortage of unused prototype vectors, to allocate another portion of memory.
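To complement the CLUSTER-EUCLIDEAN source code in the Appendix, here is a minimal ANSI-C sketch of one presentation step of the ART-1 clustering algorithm above (a sketch under our own assumptions: the vector length, parameter values and fixed prototype capacity are arbitrary choices, capacity checks are omitted, and a fresh prototype is committed when all candidates fail the vigilance test, matching an uncommitted all-ones unit winning in the network version):

    #define N    9      /* input vector length (our choice)          */
    #define MAXP 64     /* maximum number of prototypes (our choice) */
    #define BETA 1.0    /* tie-breaker parameter beta                */
    #define RHO  0.7    /* vigilance parameter rho                   */

    /* ||v||: number of 1-bits in a binary vector. */
    int magnitude(const int *v)
    {
        int i, m = 0;
        for (i = 0; i < N; i++) m += v[i];
        return m;
    }

    /* ||p AND in||: magnitude of the bitwise intersection. */
    int overlap(const int *p, const int *in)
    {
        int i, m = 0;
        for (i = 0; i < N; i++) m += p[i] & in[i];
        return m;
    }

    /* One presentation of input vector in (assumed non-zero); prototypes
       are stored row-wise in protos. Returns the chosen cluster index. */
    int art1_present(int protos[MAXP][N], int *nprotos, const int *in)
    {
        int excluded[MAXP] = {0};
        int i, j;

        for (;;) {
            /* Step 3: best candidate by ||p AND I|| / (beta + ||p||) */
            int best = -1;
            double bestscore = -1.0;
            for (i = 0; i < *nprotos; i++) {
                double s;
                if (excluded[i]) continue;
                s = overlap(protos[i], in) / (BETA + magnitude(protos[i]));
                if (s > bestscore) { bestscore = s; best = i; }
            }
            /* Step 3' (or P' exhausted): commit a new prototype equal to I */
            if (best < 0 || bestscore < magnitude(in) / (BETA + N)) {
                for (j = 0; j < N; j++) protos[*nprotos][j] = in[j];
                return (*nprotos)++;
            }
            /* Step 4: vigilance test ||p AND I|| / ||I|| >= rho */
            if ((double)overlap(protos[best], in) / magnitude(in) >= RHO) {
                /* Step 5: learn by intersecting the prototype with I */
                for (j = 0; j < N; j++) protos[best][j] &= in[j];
                return best;
            }
            excluded[best] = 1;  /* mismatch reset: remove p_i from P' */
        }
    }

Note that an all-ones prototype always passes the vigilance test and, by step 5, learns to become exactly I, which is why committing a new prototype equal to I is a faithful shortcut for the uncommitted-unit case.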
ART-1 Network Algorithm

Step 1 - Initialisation
- Initialise N to the total number of clusters
- Initialise the vigilance parameter $\rho$ so that $0 < \rho \leq 1$
- Let $p_i = (1, \ldots, 1)$ for all $i \in [1, N]$
- Initialise the set P of prototype vectors to $\{ p_i \mid i \in [1, N] \}$

Step 2 - Apply new input vector
- Let I := [next input vector]
- Let P' := P be the set of candidate prototype vectors

Step 3 - Find the closest prototype vector from P'
- Find the i which maximizes $\frac{\|p_i \cap I\|}{\beta + \|p_i\|}$

Step 4 - Check if $p_i$ is too far from I
- If $\frac{\|p_i \cap I\|}{\|I\|} < \rho$ then $P' = P' \setminus \{p_i\}$; if P' is empty goto step 2, otherwise goto step 3

Step 5 - Update the matched prototype vector
- Let $p_i = p_i \cap I$; output i; goto step 2

Though ART-1 is unsupervised, it can sometimes be useful to add a limited amount of supervision by allowing the vigilance parameter to be changed externally. When, for example, the granularity of the clusters is not fine enough, one can dynamically increase the vigilance parameter.

Paragraph 4 - Mechanics

The network algorithm presented in the previous chapter describes the dynamic behaviour of the ART-1 neural network. The different steps correspond to phases that can be distinguished when examining the behaviour of the network. In this chapter a closer look at these phases will be taken. Two inherent aspects of neural networks are that they are continuous and parallel: continuous in the sense that the activations of the nodes and the weights of the connections change continuously in time, parallel in the sense that these changes occur concurrently. A class of neural networks often used for clustering is the class of competitive networks. First a particular competitive network will be described. After that the ART-1 network, which is in essence an extension of this competitive network, will be introduced.

Competitive Network

A competitive network consists of two layers of nodes, the input layer $F_1$ and the output layer $F_2$. $F_1$ is fully connected to $F_2$ via weighted bottom-up connections called pathways. The set of pathways with corresponding weights is called an adaptive filter, adaptive because the weights can be changed dynamically to adapt to new input vectors. Patterns of activation of $F_1$ and $F_2$ nodes are called short term memory (STM) traces, because they only exist during a single presentation of an input vector. The weights in the adaptive filter encode the long term memory (LTM) traces. LTM traces are equivalent to the prototype vectors in the previously discussed clustering algorithms. An input pattern that is presented to the network generates an activity pattern $X$ at the $F_1$ layer. The $F_1$ activity pattern $X$ is the normalized input pattern (eq. 1); see [Grossberg76a] for how this can be implemented. This pattern is transformed by the weights in the pathways from $F_1$ to $F_2$. Each $F_2$ node receives as input the pattern $X$ multiplied by the weights in the pathways to that node (eq. 2), which comprise the prototype vector corresponding to that node. The output node for which the dot product of the input vector and the prototype is largest represents the cluster which best matches the input vector. The $F_2$ layer is a competitive layer: every node in this layer has inhibiting connections to the other nodes. As a result, only the node with the largest input has an output.
Finally, the weights in the pathways are changed to accommodate the new input vector (eq. 3). $\varepsilon$ is a parameter which determines the speed of learning. It can be shown, through explicit counterexamples, that this network is not stable; see [Carpenter87a]. This network thus clearly does not overcome the stability-plasticity dilemma. As mentioned in paragraph 2, ART was specifically designed in response to this problem. A competitive network similar to the one just described can be augmented to obtain ART-1. A top-down adaptive filter and various components which modulate the working of the network are added.

ART-1 Network

The ART-1 network self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. In this paragraph a description of the ART-1 network will be given by following the phases which can be distinguished during the processing of a specific input pattern.

ART-1 Network Diagram

Two layers, $F_1$ and $F_2$, of the attentional subsystem encode patterns of activation in the STM traces. Bottom-up and top-down pathways between $F_1$ and $F_2$ contain LTM traces which multiply the signals in these pathways. The remainder of the circuit modulates these STM and LTM processes. $F_1$ nodes are supraliminally activated (that is, sufficiently activated to generate output) if they receive a signal from at least two out of three possible input sources. The three are bottom-up input, top-down input and attentional gain control input. If an $F_1$ node receives input from only one of these sources, it is subliminally activated. This is called the 2/3 rule. After the presentation of an input vector a parallel search is initiated. This is called the hypothesis testing cycle:

1. Input pattern $I$ generates the STM activity pattern $X$ at $F_1$ and activates both $F_1$'s gain control and the orienting subsystem $A$. Pattern $X$ both inhibits $A$ and generates the bottom-up signal pattern $S$, which is transformed by the adaptive filter into the input pattern $T$. $F_2$ is designed as a competitive network; only the node which receives the largest total input is activated ("winner-take-all"). This is step 3 of the network algorithm.

2. Pattern $Y$ at $F_2$ generates the top-down signal pattern $U$, which is transformed by the top-down adaptive filter into the expectation pattern $V$. Pattern $Y$ also inhibits $F_1$'s gain control. As a result, only those $F_1$ nodes that represent bits in the intersection of the input pattern $I$ and the expectation pattern $V$ remain supraliminally activated. If $V$ mismatches $I$, this results in a decrease in the total inhibition from $F_1$ to $A$.

3. If the mismatch is severe enough (step 4 of the network algorithm), $A$ can no longer be prevented from releasing a nonspecific arousal wave to $F_2$. This resets the active node at $F_2$. The vigilance parameter $\rho$ determines how much mismatch will be tolerated.
The parallel search, or hypothesis testing cycle, repeats automatically at a very fast rate until one of three possibilities occurs: (1) an $F_2$ node is chosen whose top-down expectation approximately matches input $I$; (2) a previously uncommitted $F_2$ node is selected; or (3) the entire capacity of the system is used and input $I$ cannot be accommodated. Until one of these outcomes prevails, essentially no learning occurs, because all the STM computations of the hypothesis testing cycle proceed so quickly that the more slowly varying LTM traces in the bottom-up and top-down adaptive filters cannot change in response to them. Significant learning (step 5 of the network algorithm) in response to an input pattern occurs only after the cycle that it generates comes to an end and the system is in a resonant state.

The above description does not tell us how the components work; there are various ways to implement them. Guidelines in the form of mathematical equations are to be found in [Carpenter87a]. A possible implementation is the one used in SNNS, the Stuttgart Neural Network Simulator (available via FTP from the Internet), which is further described in [Herrmann92].

**Paragraph 5 - Adaptations**

Since the introduction of ART-1, many adaptations have been made by Grossberg and Carpenter, and more recently by various other researchers in the field of neural networks. About one year after ART-1, Grossberg and Carpenter introduced a variant which could handle continuous input vectors, which they called ART-2. Since then they have introduced adaptations such as ART-3 [Carpenter90], ART-2a [Carpenter91a], ARTMAP [Carpenter91b], Fuzzy ART [Carpenter91c] and Fuzzy ARTMAP [Carpenter92]. Other researchers have contributed adaptations as well. For example, in 1994 Bartfai introduced a variant on ARTMAP which he called SMART [Bartfai94], which stands for Self-consistent Modular ART and is capable of bi-level clustering using two different vigilance parameters. At the moment he is working on HART [Bartfai95], which stands for Hierarchical ART and is capable of multi-level clustering. Another neural network architecture inspired by ART is CALM [Murre89], which stands for Categorizing And Learning Model. A continually updated list of references to ART related publications can be found on the ART WWW site at http://www.wi.leidenuniv.nl/art/.

References

Appendix
```c
/*
 * Title      : Cluster-Euclidian
 * Description: This program implements a simple pattern clustering
 *              algorithm for two-dimensional vectors using a Euclidian
 *              distance measure as described in [Moore89].
 * Language   : ANSI-C
 */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define BMWIDTH    320   /* bitmap width in pixels                */
#define BMHEIGHT   320   /* bitmap height in pixels               */
#define SQUARESIZE 300   /* drawing area within the bitmap        */
#define ITER       40    /* number of input vectors               */
#define THETA      1     /* distance threshold for a new cluster  */
#define LAMBDA     0.2   /* learning rate                         */
#define PI         3.141592654

/* Linked list containing prototype vectors */
struct proto {
    double x, y;
    struct proto *next;
} *protos;

unsigned char *bm;

/* Plot a prototype vector as a small cross; fill the centre on the last pass */
void plotproto(double x, double y, int last)
{
    int xx = BMWIDTH / 2 + (int)(x * (SQUARESIZE / 2)),
        yy = BMHEIGHT / 2 - (int)(y * (SQUARESIZE / 2));

    bm[(yy - 1) * BMWIDTH + xx] = 255;
    bm[yy * BMWIDTH + xx - 1] = 255;
    bm[yy * BMWIDTH + xx + 1] = 255;
    bm[(yy + 1) * BMWIDTH + xx] = 255;
    if (last)
        bm[yy * BMWIDTH + xx] = 255;
}

/* Plot all prototype vectors */
void display(int last)
{
    struct proto *p;

    for (p = protos; p; p = p->next)
        plotproto(p->x, p->y, last);
}

/* Plot an input vector as a single pixel */
void plotinput(double x, double y)
{
    int xx = BMWIDTH / 2 + (int)(x * (SQUARESIZE / 2)),
        yy = BMHEIGHT / 2 - (int)(y * (SQUARESIZE / 2));

    bm[yy * BMWIDTH + xx] = 255;
}

/* Incremental update: move the nearest prototype towards the input,
   or create a new prototype if none lies within THETA */
void input(double x, double y)
{
    struct proto *p, *bestp;
    double dist, bestdist = 0;

    plotinput(x, y);
    bestp = NULL;
    for (p = protos; p; p = p->next) {
        dist = sqrt((p->x - x) * (p->x - x) + (p->y - y) * (p->y - y));
        if (!bestp || dist < bestdist) {
            bestp = p;
            bestdist = dist;
        }
    }
    if (!bestp || bestdist > THETA) {
        p = malloc(sizeof(struct proto));
        p->x = x;
        p->y = y;
        p->next = protos;
        protos = p;
    } else {
        bestp->x = (1 - LAMBDA) * bestp->x + LAMBDA * x;
        bestp->y = (1 - LAMBDA) * bestp->y + LAMBDA * y;
    }
}

int main(void)
{
    FILE *outfile;
    long i;
    double angle;
    unsigned char *p;

    bm = calloc(BMWIDTH * BMHEIGHT, 1);
    protos = NULL;
    for (i = 0, angle = 0; i < ITER / 2; i++, angle += PI / 18) {
        input(cos(angle), sin(angle));
        input(cos(angle + PI), sin(angle + PI));
        display(i == ITER / 2 - 1);
    }

    /* Write bitmap to Targa file */
    outfile = fopen("euclid.tga", "wb");
    putc(0, outfile);
    putc(0, outfile);
    putc(2, outfile);                 /* uncompressed true-colour image */
    for (i = 0; i < 9; i++)
        putc(0, outfile);
    putc(BMWIDTH % 256, outfile);
    putc(BMWIDTH / 256, outfile);
    putc(BMHEIGHT % 256, outfile);
    putc(BMHEIGHT / 256, outfile);
    putc(24, outfile);                /* 24 bits per pixel */
    putc(32, outfile);                /* top-left origin   */
    for (i = 0, p = bm; i < BMWIDTH * BMHEIGHT; i++, p++) {
        putc(*p ^ 255, outfile);      /* invert: black drawing on white */
        putc(*p ^ 255, outfile);
        putc(*p ^ 255, outfile);
    }
    fclose(outfile);
    free(bm);
    return 0;
}
```
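On a typical Unix-like system the program can be compiled with, for example, `cc -o euclid euclid.c -lm` (the `-lm` flag links the math library needed for `sqrt`, `sin` and `cos`; the source file name `euclid.c` is assumed here). Running it writes the bitmap `euclid.tga` to the current directory.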
{"Source-Url": "http://web.mst.edu/~tauritzd/papers/art.pdf", "len_cl100k_base": 5831, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 31197, "total-output-tokens": 7774, "length": "2e12", "weborganizer": {"__label__adult": 0.0005025863647460938, "__label__art_design": 0.0008001327514648438, "__label__crime_law": 0.0004227161407470703, "__label__education_jobs": 0.0011119842529296875, "__label__entertainment": 0.00018227100372314453, "__label__fashion_beauty": 0.00027632713317871094, "__label__finance_business": 0.0003085136413574219, "__label__food_dining": 0.0004839897155761719, "__label__games": 0.0011072158813476562, "__label__hardware": 0.004116058349609375, "__label__health": 0.00109100341796875, "__label__history": 0.0004107952117919922, "__label__home_hobbies": 0.00023603439331054688, "__label__industrial": 0.0008611679077148438, "__label__literature": 0.0005040168762207031, "__label__politics": 0.0003964900970458984, "__label__religion": 0.0007891654968261719, "__label__science_tech": 0.412353515625, "__label__social_life": 0.00012183189392089844, "__label__software": 0.00861358642578125, "__label__software_dev": 0.5634765625, "__label__sports_fitness": 0.0005030632019042969, "__label__transportation": 0.0008120536804199219, "__label__travel": 0.00021982192993164065}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27685, 0.02574]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27685, 0.82088]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27685, 0.84347]], "google_gemma-3-12b-it_contains_pii": [[0, 161, false], [161, 788, null], [788, 4281, null], [4281, 6957, null], [6957, 8765, null], [8765, 10563, null], [10563, 13011, null], [13011, 15545, null], [15545, 17666, null], [17666, 20203, null], [20203, 21616, null], [21616, 24180, null], [24180, 25797, null], [25797, 26856, null], [26856, 27685, null]], "google_gemma-3-12b-it_is_public_document": [[0, 161, true], [161, 788, null], [788, 4281, null], [4281, 6957, null], [6957, 8765, null], [8765, 10563, null], [10563, 13011, null], [13011, 15545, null], [15545, 17666, null], [17666, 20203, null], [20203, 21616, null], [21616, 24180, null], [24180, 25797, null], [25797, 26856, null], [26856, 27685, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27685, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27685, null]], "pdf_page_numbers": [[0, 161, 1], [161, 788, 2], [788, 4281, 3], [4281, 6957, 4], [6957, 8765, 5], [8765, 10563, 6], [10563, 13011, 7], [13011, 15545, 8], [15545, 17666, 9], [17666, 20203, 10], [20203, 21616, 11], [21616, 24180, 12], [24180, 25797, 13], [25797, 26856, 
14], [26856, 27685, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27685, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
de11e141d99c6fce7e82100736d2258f73683634
This specification defines a method for microblogging over XMPP.

Legal

Copyright

This XMPP Extension Protocol is copyright © 1999 – 2018 by the XMPP Standards Foundation (XSF).

Permissions

Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.

Warranty

NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.

Conformance

This XMPP Extension Protocol has been contributed in full conformance with the XSF's Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).

Contents

1 Introduction
2 Protocol
  2.1 Location
  2.2 Subscribing to a Microblog
  2.3 Publishing a Post
    2.3.1 Publishing a Post with Rich Content
  2.4 Receiving a Post
  2.5 Replying to a Post
  2.6 Repeating a Post
  2.7 Post Categories
3 Comments
  3.1 Post Comments
  3.2 Adding a Comment
4 Pubsub Node Configuration
  4.1 Microblog node configuration
  4.2 Comments node configuration
5 Microblog Metadata
6 Geotagging
7 Aggregators
  7.1 Pubsub Item ID vs. Atom Entry ID
  7.2 Aggregator Usecases
    7.2.1 Representing Posts in the Web
8 Message Body
9 Security Considerations
  9.1 Comment Author
10 IANA Considerations
11 XMPP Registrar Considerations
12 XML Schema
13 Acknowledgements

1 Introduction

Microblogging is an increasingly popular technology for lightweight interaction over the Internet.
It differs from traditional blogging in that:

- Posts are short (typically less than 140 characters, which is the limit in SMS).
- Posts are in plain text.
- People can reply to your posts, but not directly comment on them.
- People learn about your posts only if they have permission to view them.
- Your microblogging feed is discovered based on your identity at a domain or with a service.

These characteristics map well to instant messaging systems such as those built using Jabber/XMPP technologies (e.g., permissions can be based on existing presence subscriptions as reflected in the XMPP roster or "buddy list"). Furthermore, the push nature of XMPP (especially as formalized in the Personal Eventing Protocol (XEP-0163) profile of Publish-Subscribe (XEP-0060)) overcomes the problems of polling for updates via HTTP, which has caused scaling issues in existing microblogging services. Therefore this specification defines a method for microblogging over XMPP, building on the existing method for transporting Atom syndication data (RFC 4287) over XMPP as described in AtomSub. These XMPP-based methods are complementary to HTTP-based methods, and can provide an XMPP interface to existing microblogging services (which may also be accessible via HTTP, Short Message Service (SMS), and other messaging transports).

2 Protocol

2.1 Location

A person's microblog SHOULD be located at a personal eventing (PEP) node named "urn:xmpp:microblog:0" but MAY be located at a generic publish-subscribe node that is not attached to a user's IM account. For instance, if the Shakespearean character Romeo has a JabberID of <romeo@montague.lit> then his microblog would be located at that JID with a node of "urn:xmpp:microblog:0". Outside of native XMPP systems, this node can be referred to using an XMPP URI (see XEP-0060 § 12.21). Note that the ":" character from the namespace URN is percent-encoded in the query component (see RFC 5122 and RFC 3986). Naturally, this node can be discovered by contacting romeo@montague.lit directly using Service Discovery (XEP-0030).
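As an illustration of the percent-encoding note above, the following Python sketch (standard library only) builds such an XMPP URI for Romeo's microblog node. The resulting URI is reconstructed here from the encoding rule and the link format used in the listings below, not quoted from the specification:

```python
from urllib.parse import quote

jid = "romeo@montague.lit"
node = "urn:xmpp:microblog:0"

# Percent-encode the ":" characters of the namespace URN for use in the
# query component, as required by RFC 5122 / RFC 3986.
uri = f"xmpp:{jid}?node={quote(node, safe='')}"
print(uri)  # xmpp:romeo@montague.lit?node=urn%3Axmpp%3Amicroblog%3A0
```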
2.2 Subscribing to a Microblog

Let us imagine that Juliet wishes to receive the posts that Romeo publishes to his microblog. She has two options:

1. Implicitly subscribe by advertising support for "urn:xmpp:microblog:0+notify" in her Entity Capabilities (XEP-0115) data. Romeo's PEP service then automatically sends posts to her when it receives presence from her, until and unless she sends presence of type unavailable or stops advertising an interest in microblog updates.

2. Explicitly subscribe by sending a formal subscription request to the "urn:xmpp:microblog:0" node at Romeo's JabberID. Romeo's PEP service may send her all posts even if she is offline at the time (depending on service policies regarding presence integration).

2.3 Publishing a Post

Romeo can publish a post via any interface provided by his service, such as a website, the Atom Publishing Protocol (see RFC 5023), SMS, an IM bot, or XMPP pubsub. Here we assume that the post is provided via XMPP pubsub. The post content itself can be either text (a "content" element without a "type" attribute, or with a "type" attribute whose value is "text") or XHTML (a "content" element whose "type" attribute has the value "xhtml"). If Romeo publishes XHTML content, his client MUST publish two "content" elements: a text one and an XHTML one. For XHTML publishing, see Publish-Subscribe (XEP-0060).

Note: Publishing via HTTP, AtomPub, SMS, or IM bot is simpler for the client (e.g., because the client does not need to generate an Item ID).

Listing 1: Publishing a post

```xml
<iq from='romeo@montague.lit/pda'
    id='pub1'
    to='romeo@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='1cb57d9c-1c46-11dd-838c-001143d5d5db'>
        <entry xmlns='http://www.w3.org/2005/Atom'>
          <title type='text'>hanging out at the Café Napolitano</title>
          <id>tag:montague.lit,2008-05-08:posts-1cb57d9c-1c46-11dd-838c-001143d5d5db</id>
          <published>2008-05-08T18:30:02Z</published>
          <updated>2008-05-08T18:30:02Z</updated>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

Note: RFC 4287 requires the "title" element to be included in an "atom:entry" element. An implementation MAY also provide "atom:summary" and/or "atom:content" elements if it needs to.

2.3.1 Publishing a Post with Rich Content

It is possible to include rich content in a post or comment, such as text formatting, images, etc. Only the "xhtml" content type is supported by this document for the moment, but this may be extended later. It is RECOMMENDED for the client to restrict XHTML content to the XHTML-IM subset (XHTML-IM (XEP-0071)).

Listing 2: Publishing a post with rich content

```xml
<iq from='romeo@montague.lit/pda'
    id='pub1'
    to='romeo@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='1cb57d9c-1c46-11dd-838c-001143d5d5db'>
        <entry xmlns='http://www.w3.org/2005/Atom'>
          <title type='xhtml'>
            <div xmlns='http://www.w3.org/1999/xhtml'>
            </div>
          </title>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

2.4 Receiving a Post

Because Juliet has sent presence to Romeo including Entity Capabilities data that encapsulates the "urn:xmpp:microblog:0+notify" feature, Romeo's XMPP server will send a PEP notification to Juliet. The notification can include an XMPP message body for backwards-compatibility with Jabber clients that are not pubsub-capable (see Message Body).

Listing 3: Receiving a post

```xml
<message from='romeo@montague.lit'
         to='juliet@capulet.lit'
         type='headline'>
  <body>hanging out at the Café Napolitano</body>
  <items xmlns='http://jabber.org/protocol/pubsub#event'>
    <item id='1cb57d9c-1c46-11dd-838c-001143d5d5db' publisher='romeo@montague.lit'>
      <entry xmlns='http://www.w3.org/2005/Atom'>
        <title type='text'>hanging out at the Café Napolitano</title>
        <link rel='alternate' type='text/html'
              href='http://montague.lit/romeo/posts/1cb57d9c-1c46-11dd-838c-001143d5d5db'/>
        <link rel='alternate'
              href='xmpp:romeo@montague.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=1cb57d9c-1c46-11dd-838c-001143d5d5db'/>
      </entry>
    </item>
  </items>
</message>
```

Note: these alternate links were not included by the publishing client, because clients cannot compute them themselves; they SHOULD instead be inserted server-side.
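For implementers, the Atom payloads shown in the listings above can be generated with any XML library. The following minimal Python sketch (standard library only) builds an entry like the one in Listing 1; the helper name and the way the timestamp and item ID are produced are choices of this sketch, not mandated by the specification:

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM_NS = "http://www.w3.org/2005/Atom"

def make_entry(domain, text):
    """Return (pubsub item id, serialized atom:entry) for a plain-text post."""
    item_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    title = ET.SubElement(entry, f"{{{ATOM_NS}}}title", {"type": "text"})
    title.text = text
    # RFC 4287 tag-style id, mirroring the listings in this specification.
    ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = f"tag:{domain},{now[:10]}:posts-{item_id}"
    ET.SubElement(entry, f"{{{ATOM_NS}}}published").text = now
    ET.SubElement(entry, f"{{{ATOM_NS}}}updated").text = now
    return item_id, ET.tostring(entry)

item_id, xml_bytes = make_entry("montague.lit", "hanging out at the Café Napolitano")
```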
2.5 Replying to a Post

Anyone can publish a post in reply to Romeo's post. Here we assume that a reply comes from Benvolio.

Note: Inclusion of the "thr:in-reply-to" element defined in RFC 4685 indicates the post to which the user is replying. This reply includes two such elements (one pointing to the HTTP URL for the post and the other pointing to the XMPP URI for the post).

Note: A post can be a reply to more than one other post.

Listing 4: Publishing a reply

```xml
<iq from='benvolio@montague.lit/mobile'
    id='uv2x37s5'
    to='benvolio@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='c4145006-1c53-11dd-b2d5-000bcd82471e'>
        <entry xmlns='http://www.w3.org/2005/Atom'
               xmlns:thr='http://purl.org/syndication/thread/1.0'>
          <author>
            <name>Benvolio Montague</name>
            <uri>xmpp:benvolio@montague.lit</uri>
          </author>
          <title type='text'>@romeo cappuccino this late in the day?</title>
          <link rel='alternate' type='text/html'
                href='http://montague.lit/benvolio/posts/c4145006-1c53-11dd-b2d5-000bcd82471e'/>
          <link rel='alternate'
                href='xmpp:benvolio@montague.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=c4145006-1c53-11dd-b2d5-000bcd82471e'/>
          <id>tag:montague.lit,2008-05-08:posts-c4145006-1c53-11dd-b2d5-000bcd82471e</id>
          <published>2008-05-08T18:31:21Z</published>
          <updated>2008-05-08T18:31:21Z</updated>
          <thr:in-reply-to ref='tag:montague.lit,2008-05-08:posts-1cb57d9c-1c46-11dd-838c-001143d5d5db'
                           href='http://montague.lit/romeo/posts/1cb57d9c-1c46-11dd-838c-001143d5d5db'/>
          <thr:in-reply-to ref='tag:montague.lit,2008-05-08:posts-1cb57d9c-1c46-11dd-838c-001143d5d5db'
                           href='xmpp:romeo@montague.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=1cb57d9c-1c46-11dd-838c-001143d5d5db'/>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

Assuming that Romeo has also shared presence with Benvolio and has advertised support for "urn:xmpp:microblog:0+notify", he will receive the reply that Benvolio sent.

2.6 Repeating a Post

When Benvolio wants to repeat one of Romeo's posts, his client publishes the same post under a different item ID. To keep track of the original author of the repeated post, Benvolio's client MAY use the "atom:author" child nodes "atom:name" and "atom:uri", containing, respectively, the name of the original post author and his XMPP URI (JID). If a comments link is present (see the Post Comments section of this document), the client SHOULD repeat it to keep the same discussion attached to the post; the client MAY also create a separate node for discussion and specify it, or specify both. The client SHOULD also add an "atom:link" element with its "rel" attribute set to "via", pointing to the original post.

Listing 5: Repeating a Post

```xml
<iq from='benvolio@montague.lit/mobile'
    id='pub2'
    to='benvolio@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='1re57d3c-1q46-11dd-748r-024943d2d5rt'>
        <entry xmlns='http://www.w3.org/2005/Atom'>
          <author>
            <name>Romeo Montague</name>
            <uri>xmpp:romeo@montague.lit</uri>
          </author>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

Thus, a different author JID value lets the client know the microblog item has been repeated from another one. It is also possible for Benvolio to add his own thoughts to the repost. To do this he SHOULD wrap the original content in an "xhtml:blockquote" element and add his own content after it. The client MAY also post a reply, without quotation, to the original thread to inform users about the repost.

2.7 Post Categories

It is often handy to attach categories (or tags) to a post, to make it easier to find or to structure a blog. This is done by adding "atom:category" elements to the entry (which can be a blog or a replies entry).

Listing 6: Specifying a post's categories

```xml
<entry xmlns='http://www.w3.org/2005/Atom'>
  ...
  <category term='humour'/>
  <category term='xmpp'/>
  ...
</entry>
```

3 Comments

Juliet and Benvolio may want to discuss Romeo's latest post. To enable this, Romeo's client MAY add an "atom:link" element to the PubSub item. The element MUST have "rel", "title" and "href" attributes, where "rel" MUST have the value "replies"; "title" MUST have the value "comments"; and "href" MUST be an XMPP URI (see RFC 5122 and RFC 3986).
3.1 Post Comments

We assume Romeo's client first created a comments node (named "urn:xmpp:microblog:0:comments/ID", where "ID" is the microblog item ID or the SHA-1 hash of the attachment URI, as defined in RFC 3174).

Listing 7: Adding a comments link to a Post

```xml
<iq from='romeo@montague.lit/pda'
    id='pub4'
    to='romeo@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='2ze57d9c-1c46-21df-830c-002143d3d2qgf'>
        <entry xmlns='http://www.w3.org/2005/Atom'>
          <title type='text'>hanging out at the Café Napolitano</title>
          <link rel='replies' title='comments'
                href='xmpp:pubsub.montague.lit?node=urn%3Axmpp%3Amicroblog%3A0%3Acomments%3A30499ed8f0c05df0c5f2'/>
          <id>tag:montague.lit,2008-05-08:posts-2ze57d9c-1c46-21df-830c-002143d3d2qgf</id>
          <published>2008-05-08T18:38:02Z</published>
          <updated>2008-05-08T18:38:02Z</updated>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

3.2 Adding a Comment

If Juliet wants to comment on Romeo's latest post, her client sends a new Atom entry to the defined PubSub node.

Note: A comments node SHOULD be located at a generic publish-subscribe node that is not attached to a user's IM account, but MAY be located at a personal eventing (PEP) node.

Listing 8: Adding a comment to a comments node

```xml
<iq from='juliet@capulet.lit/pc'
    id='comment1'
    to='pubsub.capulet.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0:comments/dd88c9bc58886fce0049ed050df0c5f2'>
      <item id='b2106a80de39ef5ec6b8f7b69cb610c2'>
        <entry xmlns='http://www.w3.org/2005/Atom'>
          <author>
            <name>Juliet Capulet</name>
            <uri>xmpp:juliet@capulet.lit</uri>
          </author>
          <title type='text'>She is so pretty!</title>
          <published>2008-05-08T18:39:02Z</published>
        </entry>
      </item>
    </publish>
  </pubsub>
</iq>
```

If Benvolio wants to retrieve the comments node, his client sends a standard PubSub stanza requesting all items (see Publish-Subscribe (XEP-0060) for retrieving all items).
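As a sketch of the node-naming rule in section 3.1, the following Python snippet derives a comments-node name from a URI using the standard library's SHA-1 implementation. The specification does not spell out exactly which URI serves as the hash input, so treating the post's tag URI as that input is an assumption of this sketch:

```python
import hashlib

def comments_node(uri):
    """Derive a comments-node name per section 3.1: SHA-1 of an attachment URI.

    Assumption of this sketch: the post's tag URI is the hash input.
    """
    digest = hashlib.sha1(uri.encode("utf-8")).hexdigest()
    return f"urn:xmpp:microblog:0:comments/{digest}"

print(comments_node("tag:montague.lit,2008-05-08:posts-1cb57d9c-1c46-11dd-838c-001143d5d5db"))
```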
4 Pubsub Node Configuration

We have described two types of pubsub node here: one for the microblog itself and one for comments on posts. Each usage requires specific node parameters; this section gives the recommended settings for them.

4.1 Microblog node configuration

Here are the recommendations for configuring a microblogging node (usually located at PEP and named by the "urn:xmpp:microblog:0" namespace):

1. "pubsub#notify_retract" MUST be set to true, so that clients can track whether items were retracted and reflect such changes correctly in the UI.
2. "pubsub#max_items" SHOULD be increased from the default value to some reasonable value.
3. "pubsub#send_last_published_item" SHOULD be changed to "never".

4.2 Comments node configuration

Here are the recommendations for configuring a comments node:

1. "pubsub#notify_retract" MUST be set to true, so that clients can track whether items were retracted and reflect such changes correctly in the UI.
2. "pubsub#max_items" SHOULD be increased from the default value to some reasonable value.
3. "pubsub#access_model" SHOULD be set to "open" to allow any user to comment on a post. Other values are suitable too, according to the user's settings.

5 Microblog Metadata

In order to provide users with some metadata about the microblog (i.e. the blog title or author information), the client MUST add an item with such information. The client MUST set the ID of the item to "0".

Listing 9: Publishing microblog metadata

```xml
<iq from='romeo@montague.lit/pc'
    id='pub8'
    to='romeo@montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='0'>
        <feed xmlns='http://www.w3.org/2005/Atom'>
          <title>Romeo's Microblog</title>
          <id>tag:montague.lit,2008:home</id>
          <updated>2008-05-08T18:30:02Z</updated>
          <author>
            <name>Romeo Montague</name>
            <uri>xmpp:romeo@montague.lit</uri>
          </author>
        </feed>
      </item>
    </publish>
  </pubsub>
</iq>
```

It is also necessary to link a comments node to the post discussed in that node. This is done by adding an "atom:link" element with a "rel='start'" attribute:

Listing 10: Publishing comments node metadata

```xml
<iq from='romeo@montague.lit/pc'
    id='pub8'
    to='pubsub.montague.lit'
    type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:xmpp:microblog:0'>
      <item id='0'>
        <feed xmlns='http://www.w3.org/2005/Atom'>
          <title>Comments to a post</title>
          <id>tag:pubsub.montague.lit,2008:comments-2ze57d9c-1c46-21df-830c-0021433d2qgf</id>
          <updated>2008-05-08T18:30:02Z</updated>
          <link rel='start'
                href='xmpp:romeo@montague.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=1cb57d9c-1c46-11dd-838c-001143d5d5db'
                ref='tag:montague.lit,2008-05-08:posts-1cb57d9c-1c46-11dd-838c-001143d5d5db'/>
        </feed>
      </item>
    </publish>
  </pubsub>
</iq>
```

6 Geotagging

Juliet may want to know which places Romeo's notices relate to. For this purpose Romeo's client MAY geotag microblog entries, using the User Geolocation (XEP-0080) protocol for storing geolocation information. Romeo's client MUST create a "geoloc" element with the User Geolocation (XEP-0080) reference namespace "http://jabber.org/protocol/geoloc".

Listing 11: Geotagging a Post

7 Aggregators

In order to provide statistical information, or to represent blogs in ways other than XMPP (e.g. on the web or NNTP), we need other entities, called "Aggregators". You can think of aggregators as being like web search engines: they gather information from the whole web and then represent it in suitable ways. The same is true here: aggregators simply subscribe to many entities, and can then build a database, answer queries, and show the appropriate information for those queries. They can also be used to represent the information on the web or in other networks.

Unlike web search engines, an XMPP aggregator does not need to gather information by downloading it frequently to check whether something has changed. Instead, it can listen to pubsub events and keep its database up to date based on this information.

Aggregators can be run by different people with different aims. An aggregator can be devoted to a particular blog service provider and subscribe only to its own users, or it can be a global search engine which tries to subscribe to most users or to aggregate the feeds of other aggregators. This section describes the most common aggregator usecases, but the list is not exhaustive.

7.1 Pubsub Item ID vs. Atom Entry ID

There are two different things that carry a similar sense: the XMPP pubsub Item ID and the "atom:id" element. This section clarifies the distinction between them. The pubsub Item ID MUST be used when linking to an entry over an XMPP channel (i.e. by including it in a URI with the "xmpp" scheme). The Atom entry ID MUST be built according to RFC 4287, and is used by aggregators to detect post duplicates, reposts, mentions, syndications, etc.
Note that the rules for comparing and building "atom:id", and the related security notes, are listed in RFC 4287.

7.2 Aggregator Usecases

7.2.1 Representing Posts in the Web

One of the possible aims of aggregator services is to provide a web representation of blogs.

TBD: insert alternate link to the post with the http address of the post.

8 Message Body

Depending on service policies and the value of the "pubsub#include_body" node configuration option, microblogging notifications SHOULD include a message "body" element for backwards-compatibility with Jabber clients that are not pubsub-capable. It is RECOMMENDED for the XML character value of the "body" element to be the same as that of the "atom:title" child of the "atom:entry".

9 Security Considerations

9.1 Comment Author

The client SHOULD check that the comment author information (provided in the "author" element) is valid, by checking that the "publisher" item attribute value matches the "uri" element value. If there is a difference, or the check cannot be performed because no "publisher" attribute was included, the comment can still be displayed, but it is RECOMMENDED to indicate the potential security problem to the user.
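A minimal Python sketch of the section 9.1 check follows; the case-insensitive bare-JID comparison (stripping any resource part before comparing) is an assumption of this sketch, since publishers may be reported with a resource:

```python
def author_is_valid(publisher, author_uri):
    """Does the pubsub 'publisher' attribute match the atom:author 'uri'?

    Returns False when the values differ or the check cannot be performed
    (no publisher attribute); in that case the client may still display
    the comment but should flag a potential security problem.
    """
    if not publisher or not author_uri:
        return False
    jid = author_uri.removeprefix("xmpp:")   # atom:uri carries an xmpp: scheme
    # Assumption of this sketch: compare bare JIDs, ignoring any resource.
    return publisher.split("/")[0].lower() == jid.split("/")[0].lower()

print(author_is_valid("juliet@capulet.lit", "xmpp:juliet@capulet.lit"))  # True
```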
{"Source-Url": "https://xmpp.org/extensions/xep-0277.pdf", "len_cl100k_base": 6437, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 50478, "total-output-tokens": 8481, "length": "2e12", "weborganizer": {"__label__adult": 0.0003485679626464844, "__label__art_design": 0.0005183219909667969, "__label__crime_law": 0.00177764892578125, "__label__education_jobs": 0.0006251335144042969, "__label__entertainment": 0.00018703937530517575, "__label__fashion_beauty": 0.0001246929168701172, "__label__finance_business": 0.001155853271484375, "__label__food_dining": 0.00020062923431396484, "__label__games": 0.0006060600280761719, "__label__hardware": 0.0014019012451171875, "__label__health": 0.0001615285873413086, "__label__history": 0.00028395652770996094, "__label__home_hobbies": 8.177757263183594e-05, "__label__industrial": 0.0002818107604980469, "__label__literature": 0.0005049705505371094, "__label__politics": 0.0004472732543945313, "__label__religion": 0.0003142356872558594, "__label__science_tech": 0.0234527587890625, "__label__social_life": 0.00014138221740722656, "__label__software": 0.127197265625, "__label__software_dev": 0.83935546875, "__label__sports_fitness": 0.00016963481903076172, "__label__transportation": 0.0002949237823486328, "__label__travel": 0.0001900196075439453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26149, 0.05839]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26149, 0.09228]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26149, 0.75909]], "google_gemma-3-12b-it_contains_pii": [[0, 65, false], [65, 2600, null], [2600, 3399, null], [3399, 5857, null], [5857, 8211, null], [8211, 10145, null], [10145, 11265, null], [11265, 12876, null], [12876, 14303, null], [14303, 15111, null], [15111, 16827, null], [16827, 18686, null], [18686, 20401, null], [20401, 22074, null], [22074, 22835, null], [22835, 24701, null], [24701, 26149, null]], "google_gemma-3-12b-it_is_public_document": [[0, 65, true], [65, 2600, null], [2600, 3399, null], [3399, 5857, null], [5857, 8211, null], [8211, 10145, null], [10145, 11265, null], [11265, 12876, null], [12876, 14303, null], [14303, 15111, null], [15111, 16827, null], [16827, 18686, null], [18686, 20401, null], [20401, 22074, null], [22074, 22835, null], [22835, 24701, null], [24701, 26149, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26149, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26149, null]], "pdf_page_numbers": [[0, 65, 1], [65, 2600, 2], [2600, 3399, 3], [3399, 5857, 4], [5857, 8211, 5], [8211, 10145, 6], [10145, 11265, 7], [11265, 12876, 8], [12876, 14303, 9], [14303, 
WELCOME CIS 210 COMPUTER SCIENCE I PROGRAMMING GUIDE – FALL 2020 WEEK 1

CIS 210 Fall 2020

Guide for CIS 210 Projects - Week 1

Project 1a "UO guide"

Turtle graphics

Python Turtle graphics, inherited from the computer programming language Logo, gives us a graphics (and simple user interface) programming tool. From plotting data to creating unique pieces of art, we will make use of graphical output in several problems this term. The Python "Turtle" is essentially a robot that is controlled with Python code. The Turtle/robot comes equipped with multiple attributes, such as position, heading, color, and size.

We will be making use of the anonymous turtle for this project. This means you don't need to set up the Turtle (t = turtle.Turtle()) like in the book. Commands can be called directly, after using 'from turtle import *' (leave off the quotes).

Turtle graphics commands you will need for Project 1a:

The following Turtle commands are all you need for P1a. If a command is not listed in the project 1a section, do not use it for project 1a. Make sure to try out the commands in the shell to aid in understanding them.

**fd(distance) / forward(distance):** Move the Turtle forward by the specified distance, in the direction the Turtle is headed.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> fd(60)
```

Note how the command used is 'fd(60)' and not 'turtle.fd(60)'. We are able to do this because we imported everything from the turtle module (from turtle import *). You will never need to preface any of your commands with turtle. (i.e. turtle.command()) for this project.

**bk(distance) / back(distance) / backward(distance):** Move the Turtle backward by distance, opposite to the direction the Turtle is headed. Does not change the Turtle's heading.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> bk(60)
```

**lt(angle) / left(angle):** Turn the Turtle left by angle units, relative to the Turtle's current heading. (By default, the unit is degrees.)

Try it yourself in the shell:

```python
>>> from turtle import *
>>> lt(90)
```

**rt(angle) / right(angle):** Turn the Turtle right by angle units, relative to the Turtle's current heading. (By default, the unit is degrees.)

Try it yourself in the shell:

```python
>>> from turtle import *
>>> rt(90)
```

Now combine them! Look at the code below, try drawing what you think will happen, then execute the code to see:

```python
>>> from turtle import *
>>> fd(100)
>>> lt(90)
>>> bk(100)
>>> lt(90)
>>> fd(100)
>>> rt(270)
>>> bk(100)
```

What happens?

**stamp():** Stamp a copy of the Turtle shape onto the canvas at the current Turtle position.

```python
>>> from turtle import *
>>> stamp()
>>> fd(60)
```

**dot():** Puts a dot onto the canvas at the current Turtle position. Note the similarity to stamp; the only difference is the shape being left.

```python
>>> from turtle import *
>>> dot()
>>> fd(60)
```

More Turtle commands:

These Turtle commands are included in the Project 1a starter code. You can simply execute the starter code to use these commands, but you may find it interesting to explore them, too.

**reset():** Deletes the Turtle's drawings from the screen, re-centers the Turtle and sets variables (pen size, speed, etc.) to their default values.

```python
>>> from turtle import *
>>> fd(20)
>>> rt(90)
>>> fd(20)
>>> reset()
```

**clear():** Deletes the Turtle's drawings from the screen. Does not move or rotate the Turtle.
```python
>>> from turtle import *
>>> fd(20)
>>> rt(90)
>>> fd(20)
>>> clear()
```

**title(titlestring):** Sets the title of the Turtle window to titlestring. (The top of the Turtle window reads "Python Turtle Graphics" before a title is specified, and shows the given string afterwards.)

Try it yourself in the shell:

```python
>>> from turtle import *
>>> title('Welcome to Computer Science at the UO!')
```

**speed(newspeed):** Sets the Turtle's speed to an integer value in the range 0 – 10. Strings can also be used to set the speed (see below).

- "fastest": 0
- "fast": 10
- "normal": 6
- "slow": 3
- "slowest": 1

speed("fast") is the same as speed(10), speed("slow") is the same as speed(3), etc.

**bgpic(picname):** Sets the background image. Note: the image file must be in the same folder as the .py file.

**pencolor(colorstring):** Sets the pen color. Note: once the pen color changes, this only affects lines drawn after the color change occurred. Previously drawn lines keep the color they were originally drawn in.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> fd(30)
>>> pencolor('red')
>>> fd(30)
```

**screensize(canvaswidth, canvasheight):** If no arguments are given, returns the width and height of the canvas; otherwise resizes the canvas the Turtle is drawing on.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> screensize(1000, 1000)
```

**Python**

These Python keywords and syntax are included as part of the starter code, and will continue to be core elements of our Python programming.

**from turtle import ***: 'from turtle' means we are importing content from the turtle module. '*' means we want to import everything in the module, so we can use it directly without having to add "turtle." to the start of what we are using, i.e. `fd(100)` will make the Turtle move, as opposed to having to type `turtle.fd(100)`. from and import are Python keywords.

**def**: `def` is a Python keyword that marks the start of a function header. The pattern for creating a function is: `def`, function name, parameter list inside parentheses (though if no parameters are needed, the parentheses are just left empty).

**Comments**: a comment is code that programmers can read when looking at a file, but that is not executed by the Python interpreter.

```python
# a hash mark like this indicates a single line comment

'''
Multi-line comments are made by typing
between two sets of triple quotes.
'''

"""
Double quotes work as well.
Just make sure there are 3 before, and 3 after.
"""
```

**pass**: `pass` is a null operation; when it is executed, nothing happens. It is generally used as a placeholder where a statement is required by syntax but no code needs to be executed, or where a programmer wants to come back later and write code at that place. pass is a Python keyword.

**return**: indicates the end of the function. return is a Python keyword.

(1) The file header comments provide information about this Python file (program):

```python
'''
Title: a one-line description/title for the program
Author: Your name
Credit: reference any other sources (materials, people, etc.) for this work
'''
```

(2) The import statement provides access to turtle module functions when the function is executing. import is a Python keyword. By convention, import statements appear at the top of the program file, after the file header and before the rest of the code.

```python
from turtle import *
```
(3) Note the structure of the Python `uo_guide` function:

```python
def uo_guide_start():
    '''
    Welcome to the UO! Welcome to Computer Science!
    Guide students from the EMU Lawn to Deschutes Hall, home of the
    Computer Science Department, and then to Price Science Commons
    (Science Library), home of the B004/A computer lab and study space.

    >>> uo_guide_start()
    '''
    # setting the scene (supply this code)
    reset()
    clear()
    title('Welcome to Computer Science at the UO!')
    color('purple')
    pensize(3)
    speed('slowest')
    bgpic('uo_campus_map.png')
    screensize(1195, 488)
    stamp()  # mark start of route on EMU East lawn

    # replace pass with your code
    # guide to Deschutes
    pass

    # guide to Price Science Commons
    pass

    return
```

- **Function header:**
  - `def` function name: defines a function with the provided name.
  - Parameter list inside parentheses (empty here; `uo_guide` has no parameters)
- **Function docstring:**
  - Inside triple quotes – a brief description of the function followed by an example call to the function
- **Function code:**
  - Some is provided for you; you will write the rest
- **Return statement:**
  - Indicates the end of the function. `return` is a Python keyword.

Project 1b "Art Show"

You've seen that Turtle can be used to create maps to aid others in finding specific locations. What else can Turtle be used for, though? What about the creation of art?

Turtle graphics

Turtle graphics commands you will need for Project 1b:

The following Turtle commands, as well as the commands we learned in P1a, are all you need for P1b. If a command is not listed in the P1a or P1b section, do not use it for this project. Make sure to use the shell to aid in understanding the commands. We will once again use the anonymous turtle, as well as calling commands directly (i.e. just call command(), not turtle.command()).

**fillcolor('colorname'):** Sets the fill color for when the Turtle draws a shape. This may seem like the pencolor command, but pencolor changes the color of the lines drawn, whereas fillcolor decides the color of the space enclosed by the drawn lines.

Let's try it! Consider the following code, try figuring out what you think it will create, and then check it in the shell!

```python
>>> from turtle import *
>>> fillcolor('blue')
>>> begin_fill()
>>> fd(100)
>>> lt(120)
>>> fd(100)
>>> lt(120)
>>> fd(100)
>>> end_fill()
```

Were you right?
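Building on the triangle above, here is a short sketch of our own (the function name and color are not from the starter code) that draws a filled square using the same fill commands plus fd/lt from P1a:

```python
from turtle import *

def filled_square():
    '''
    Draw a red square with sides of length 100, filled in.

    >>> filled_square()
    '''
    fillcolor('red')
    begin_fill()
    fd(100)    # each side is 100 units,
    lt(90)     # turning left 90 degrees at each corner
    fd(100)
    lt(90)
    fd(100)
    lt(90)
    fd(100)
    end_fill()
    return
```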
More Turtle commands:

These Turtle commands are included in the Project 1b starter code. You can simply execute the starter code to use these commands, but you may find it interesting to explore them, too.

**pu() / penup():** Picks the Turtle pen up, so that if the Turtle moves, it will not draw.

```python
>>> from turtle import *
>>> fd(30)
>>> pu()
>>> fd(30)
```

**pd() / pendown():** Puts the Turtle pen down, so that if the Turtle moves, lines will be drawn.

```python
>>> from turtle import *
>>> fd(20)
>>> pu()
>>> fd(20)
>>> pd()
>>> fd(20)
```

**hideturtle():** Hides the Turtle arrow, so that only the marks the Turtle draws are seen.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> hideturtle()
>>> fd(60)
```

**setpos(x, y):** Sets a new position for the Turtle. Lines get drawn between the old and new position if the Turtle pen is down. Note that the new Turtle position is independent of its prior position, as opposed to fd/bk, which move relative to the Turtle's current position. For example, starting from (0, 0), setpos(20, -20) moves the Turtle to the point (20, -20).

Try it yourself in the shell:

```python
>>> from turtle import *
>>> setpos(20, -20)
```

**pensize(size):** Sets the size of the line the Turtle draws.

Try it yourself in the shell:

```python
>>> from turtle import *
>>> fd(30)
>>> pensize(3)
>>> fd(30)
```

**Python**

**for loop**: Often when writing code, you will find that you end up writing the same line multiple times over; this is when a for loop comes in handy. A for loop can be used to repeat the same lines of code multiple times over. Here is an example of the type of for loop you will need to complete project 1b:

```python
for i in range(4):
    print(i)
```

- The keyword `for` tells Python we want to make a for loop.
- `i` sets a variable name; it can be any valid variable name.
- `range(4)` denotes how many times our loop will run, in this case 4 times. The value of `i` will be 0, 1, 2, or 3 depending on the loop iteration.
- Code indented after the colon will be executed each time the loop runs.

Try typing this for loop into your own shell and playing around with it. What happens when you change the number within range? What happens when you change 'i' to different variable names? Maybe try adding more prints, or other lines of code to run within the loop.

**Assignment**: we saw the use of the variable 'i' in the above for loop, but we can also assign values to variables outside of for loops, using the '=' operator. For example, `x = 45` or `course = 'CIS210'`.

Try these examples in the shell:

```python
>>> x = 45
>>> x
```

What is printed in the shell?

```python
>>> course = 'CIS210'
>>> course
```

What is printed in the shell?

```python
>>> pi = 3.14
>>> pi
```

What is printed in the shell?

Writing Functions: for this project you will write your own functions. Start by looking at the functions already provided to you with the project starter code from P1a as well as P1b, and then try writing your own with a similar structure, but with different contents. Functions generally have 4 main sections (see the sketch after this list):

1. the function header: `def name(parameters if any, will be empty for this assignment):`
2. the docstring: information about the function/what the function does, and examples of use
3. the code within the function: `pass`
4. the return marking the end of the function: `return`
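As a template of those 4 sections, here is a small illustrative example of our own (not from the starter code):

```python
from turtle import *

def draw_dots():                      # 1. the function header
    '''
    2. the docstring: stamp a row of 4 dots, 30 units apart.

    >>> draw_dots()
    '''
    for i in range(4):                # 3. the code within the function
        dot()
        fd(30)
    return                            # 4. the return marking the end
```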
Project 1c "Art Show Better"

We've made some great artwork with "Art Show", but the program really can only make one picture, and that picture is hard to tweak. Function parameters can be used to make our picture easier to modify.

Turtle graphics

Turtle graphics commands you will need for Project 1c:

There are no new commands; we will be using the same commands we learned in P1a and P1b. If a command is not listed in the P1a or P1b section, do not use it for this project. We will once again use the anonymous turtle, as well as calling commands directly (i.e. just call command(), not turtle.command()).

Python

Function parameters: In the past we have left the parentheses of our functions empty; we will now be putting variables into our parentheses. These are called function parameters. A function parameter allows you to pass a value into a function when the function is called; this is called passing an argument to the function.

As an example, let's say we want a function that has the Turtle draw a line of a provided distance, and then return to where it started. How do you get the number for the distance into that function? You can use a parameter:

```python
def draw_line_and_return(distance):
    '''
    Draws a line the length of "distance", then returns
    the Turtle to its starting point.

    >>> draw_line_and_return(40)
    '''
    fd(distance)
    pu()
    bk(distance)
    pd()
    return
```

Here our parameter is 'distance'; note how, instead of having a hardcoded number, 'distance' is used for fd and bk.

But what happens if someone calls the function without an argument (e.g. draw_line_and_return())? Currently the function will just throw an error, but functions can also have Default Parameters. A Default Parameter is a parameter value set by the programmer, which a function can use if the caller does not provide an argument. For the previous example function, let's change it so that if the user calls the function without an argument, instead of causing an error, 20 will be used:

```python
def draw_line_and_return(distance=20):
    '''
    Draws a line the length of "distance", then returns
    the Turtle to its starting point.

    >>> draw_line_and_return(40)
    >>> draw_line_and_return()
    '''
    fd(distance)
    pu()
    bk(distance)
    pd()
    return
```

Note in the function header how instead of (distance) it is now (distance=20); the '=20' is the key to setting the default value. What this says is that if a user doesn't provide a value for 'distance', then 'distance' is, by default, set to 20.

Try it yourself: what happens if you change 20 to 60? What about other numbers? Can you have multiple default parameters?
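To explore that last question, here is a small sketch of our own (not from the guide's starter code) with two default parameters; either, both, or neither can be supplied when calling it:

```python
from turtle import *

def draw_line_and_return(distance=20, thickness=1):
    '''
    Draws a line of the given length and pen size, then returns
    the Turtle to its starting point.

    >>> draw_line_and_return()              # uses 20 and 1
    >>> draw_line_and_return(60)            # uses 60 and 1
    >>> draw_line_and_return(60, 5)         # uses 60 and 5
    >>> draw_line_and_return(thickness=5)   # uses 20 and 5
    '''
    pensize(thickness)
    fd(distance)
    pu()
    bk(distance)
    pd()
    return
```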
{"Source-Url": "https://classes.cs.uoregon.edu/20F/cis210/Manual_Week1.pdf", "len_cl100k_base": 4409, "olmocr-version": "0.1.53", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 29079, "total-output-tokens": 5307, "length": "2e12", "weborganizer": {"__label__adult": 0.0006775856018066406, "__label__art_design": 0.0018491744995117188, "__label__crime_law": 0.00055694580078125, "__label__education_jobs": 0.06231689453125, "__label__entertainment": 0.0002038478851318359, "__label__fashion_beauty": 0.00035691261291503906, "__label__finance_business": 0.0003676414489746094, "__label__food_dining": 0.0008969306945800781, "__label__games": 0.0017499923706054688, "__label__hardware": 0.0016765594482421875, "__label__health": 0.0006875991821289062, "__label__history": 0.0005822181701660156, "__label__home_hobbies": 0.0003955364227294922, "__label__industrial": 0.0008177757263183594, "__label__literature": 0.0008955001831054688, "__label__politics": 0.00035071372985839844, "__label__religion": 0.001064300537109375, "__label__science_tech": 0.020751953125, "__label__social_life": 0.0003764629364013672, "__label__software": 0.01232147216796875, "__label__software_dev": 0.88916015625, "__label__sports_fitness": 0.0006170272827148438, "__label__transportation": 0.000858306884765625, "__label__travel": 0.00042319297790527344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17139, 0.03707]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17139, 0.66229]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17139, 0.83632]], "google_gemma-3-12b-it_contains_pii": [[0, 73, false], [73, 1666, null], [1666, 2377, null], [2377, 2980, null], [2980, 4004, null], [4004, 4717, null], [4717, 4977, null], [4977, 6478, null], [6478, 8296, null], [8296, 9504, null], [9504, 10047, null], [10047, 11117, null], [11117, 12522, null], [12522, 14506, null], [14506, 15743, null], [15743, 17139, null]], "google_gemma-3-12b-it_is_public_document": [[0, 73, true], [73, 1666, null], [1666, 2377, null], [2377, 2980, null], [2980, 4004, null], [4004, 4717, null], [4717, 4977, null], [4977, 6478, null], [6478, 8296, null], [8296, 9504, null], [9504, 10047, null], [10047, 11117, null], [11117, 12522, null], [12522, 14506, null], [14506, 15743, null], [15743, 17139, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17139, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17139, null]], "pdf_page_numbers": [[0, 73, 1], [73, 1666, 2], [1666, 2377, 3], [2377, 2980, 4], [2980, 4004, 5], [4004, 4717, 6], [4717, 4977, 7], [4977, 6478, 8], [6478, 8296, 9], [8296, 9504, 10], [9504, 10047, 11], [10047, 11117, 12], [11117, 12522, 
The KInfoCenter

Mike McBride

Contents

1 The KInfoCenter
  1.1 Starting the KInfoCenter
  1.2 The KInfoCenter Screen
  1.3 The KInfoCenter Toolbar
    1.3.1 Module Help button
    1.3.2 Help Menu button
  1.4 Exiting The Information Center

2 The Default KInfoCenter Modules
  2.1 About System Module
  2.2 Memory Information Module
    2.2.1 Memory Types
    2.2.2 Memory Information Module
  2.3 Energy Information Module
  2.4 Device Information Module
    2.4.1 Device Viewer
      2.4.1.1 Information Panel
      2.4.1.2 UDI Information
    2.4.2 Interrupt Request (IRQ) Information Module
    2.4.3 DMA Channel Information Module
    2.4.4 USB Controller/USB Devices Information Module
    2.4.5 Input/Output Port Information Module
    2.4.6 PCI-bus/Installed PCI Cards Information Module
  2.5 Network Information Module
    2.5.1 Network Interfaces Information Module
    2.5.2 Samba Status Information Module
      2.5.2.1 Exports
      2.5.2.2 Imports
      2.5.2.3 Log
      2.5.2.4 Statistics
      2.5.2.5 Section Author
  2.6 Graphical Information Module
    2.6.1 Wayland Information Module
    2.6.2 X Server Information Module
    2.6.3 OpenGL Information Module

3 Credits and License

Abstract

This documentation describes Plasma's information center.

Chapter 1

The KInfoCenter

The KInfoCenter provides you with a centralized and convenient overview of your system and desktop environment. The information center is made up of multiple modules. Each module is a separate application, but the information center organizes all of these programs into a convenient location. The next section details the use of the information center itself. For information on individual modules, please see Default KInfoCenter Modules.

1.1 Starting the KInfoCenter

The KInfoCenter can be started in three ways:

1. By selecting Applications → System → KInfoCenter from the application launcher in the panel.
2. By pressing Alt+F2 or Alt+Space. This will bring up KRunner. Type kinfocenter, and press Enter.
3. You can type kinfocenter & at any command prompt.

All three of these methods are equivalent, and produce the same result.

1.2 The KInfoCenter Screen

When you start the information center, you are presented with a window which can be divided into three functional parts. Across the top is a toolbar. The toolbar provides quick access to most of KInfoCenter's features, such as getting help on the current module, and a help menu. Along the left hand side is a column with a filter field at the top. This is where you choose which module to investigate.
To navigate through the various KCM modules, left click on the module in the tree view. You can also use the arrow keys to scroll through the KCMs, and pressing Enter will select the module. The module will now appear in the main panel of the KInfoCenter window. Some items within the tree view are categories; you can left click or press Enter to expand and collapse these items. This will show the modules under the category. You can right click on the module listing to show the following options:

- **Collapse All Categories**: Collapses the tree to show only top-level modules and categories.
- **Expand All Categories**: Expands the tree to show modules.
- **Clear Search**: Clears any filter you have applied to the module listing via the search box.

The main panel shows you the system information about the selected module.

### 1.3 The KInfoCenter Toolbar

This next section gives you a brief description of what each toolbar item does.

#### 1.3.1 Module Help button

This button opens KHelpCenter with the current help page for the information module.

#### 1.3.2 Help Menu button

KInfoCenter has the common KDE Help menu items; for more information read the section about the Help Menu of the KDE Fundamentals.

1.4 Exiting The Information Center

You can exit the info center in one of two ways:

- Type Ctrl+Q on the keyboard.
- Click on the Close button in the frame surrounding the info center.

Chapter 2

The Default KInfoCenter Modules

2.1 About System Module

This page shows a brief summary about your system: your distribution, KDE Plasma version, KDE Frameworks version, Qt version, kernel version, and OS type; and, in the hardware section, information about processors, memory, and the graphics processor. Use the information on this page if you ask for help in support channels or report a bug at KDE's bug tracker.

2.2 Memory Information Module

This module displays the current memory usage. It is updated constantly, and can be very useful for pinpointing bottlenecks when certain applications are executed.

2.2.1 Memory Types

The first thing you must understand is that there are two types of 'memory' available to the operating system and the programs that run within it.

The first type is called physical memory. This is the memory located within the memory chips in your computer. This is the RAM (Random Access Memory) you bought when you purchased your computer.

The second type of memory is called virtual or swap memory. This block of memory is actually space on the hard drive. The operating system reserves a space on the hard drive for 'swap space'. The operating system can use this virtual memory (or swap space) if it runs out of physical memory. The reason this is called 'swap' memory is that the operating system takes some data that it doesn't think you will need for a while and saves it to disk in this reserved space. It then loads the new data you need right now. It has 'swapped' the unneeded data for the data you need at the moment. Virtual or swap memory is not as fast as physical memory, so operating systems try to keep data (especially frequently used data) in physical memory.

The total memory is the combined total of physical memory and virtual memory.

2.2.2 Memory Information Module

This window is divided into a top and a bottom section. The top section shows you the total physical memory, total free physical memory, shared memory, and buffered memory.
All four values are represented as the total number of bytes and as the number of megabytes (1 megabyte = slightly more than 1,000,000 bytes).

The bottom section shows you three graphs:

- **Total Memory** (the combination of physical and virtual memory).
- **Physical Memory**
- **Virtual Memory, or Swap Space**

The grey areas are free, and the blue and green areas are used.

**Tip**
The exact values of each type of memory are not critical, and they change regularly. When you evaluate this page, look at trends. Does your computer have plenty of free space (grey areas)? If not, you can increase the swap size or increase the physical memory. Also, if your computer seems sluggish: is your physical memory full, and does the hard drive always seem to be running? This suggests that you do not have enough physical memory, and your computer is relying on the slower virtual memory for commonly used data. Increasing your physical memory will improve the responsiveness of your computer.

### 2.3 Energy Information Module

This module provides information about CPU wakeups, battery percentage and consumption over a user-defined history, and detailed information about the battery.

### 2.4 Device Information Module

Device Information is a device viewer module. It shows all relevant devices that are present within your PC. It has three sections: a device viewer, an information panel, and a UDI listing for the currently selected device.

#### 2.4.1 Device Viewer

The device viewer displays all the current devices detected on your PC in a tree. The main topics at the beginning of the tree are the device categories; left click on a collapsed category to expand it, and vice versa to collapse it. To display information about a device, left click on the device in the viewer; the information will be displayed in the information panel on the right. You can right click on the device viewer to show the following options:

- **Collapse All**: Collapses the tree to show only the main categories.
- **Expand All**: Expands the tree to show all the child devices.
- **Show All Devices**: Shows all the categories, whether or not devices are present in those categories.
- **Show Relevant Devices**: Only shows categories that have devices present.

The default display is collapsed, showing only relevant devices. Please note that the devices shown in the device listing are not all the devices within your PC; they are just the devices that have been detected via Solid.

The device viewer can show the following devices:

- **Processors**: Your computer's CPUs (Central Processing Units).
- **Storage Drives**: Devices used to store your PC's files and data.
- **Network Interfaces**: Devices that allow you to connect to a network or to another PC.
- **Audio Interfaces**: Devices that allow your PC to play sound. They are split into two categories, the ALSA and OSS sound architectures.
- **Video Devices**: Devices that allow you to stream live video.
- **Serial Devices**: Devices connected to the serial port of your PC.
- **Smart Card Devices**: Smart card readers.
- **Digital Video Broadcasting Devices**: Devices that use the open standards for digital television.
- **Device Buttons**: Buttons that are present on your PC or on external devices.
- **Batteries**: Battery devices plugged into your laptop.
- **AC Adapters**: These devices will be present when you plug in your AC adapter.
- **Multimedia Players**: Devices that play media files, like a music player.
- **Camera Devices**: Digital cameras connected to your PC.

**NOTE**
Video devices do not include your video card adapter.

2.4.1.1 Information Panel

The information panel is where device information is shown when you select a device. The first two information topics are always:

- **Product**: The name of the device.
- **Vendor**: The name of the device's vendor.

The following information topics depend on the device chosen. They are labeled with easy-to-understand names. The information labels can be selected and copied from.

**NOTE**
The processor **Max Speed** and **Supported Instruction Sets** topics are usually not set by Solid.

**NOTE**
Top categories in the device listing do not show any information.

2.4.1.2 UDI Information

The bottom information panel shows the currently selected device's UDI. This is the unique device identifier. All labels can be selected and copied from.

2.4.2 Interrupt Request (IRQ) Information Module

This page displays information about the Interrupt Request Lines in use, and the devices that use them. An IRQ is a hardware line used in a PC by (ISA bus) devices like keyboards, modems, sound cards, etc., to send interrupt signals to the processor to tell it that the device is ready to send or accept data. Unfortunately, there are only sixteen IRQs (0-15) available in the i386 (PC) architecture for sharing among the various ISA devices. Many hardware problems are the result of IRQ conflicts, when two devices try to use the same IRQ, or software is misconfigured to use a different IRQ from the one a device is actually configured for.

**NOTE**
The exact information displayed is system-dependent. On some systems, IRQ information cannot be displayed yet.

On Linux®, this information is read from `/proc/interrupts`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. The first column is the IRQ number. The second column is the number of interrupts that have been received since the last reboot. The third column shows the type of interrupt. The fourth identifies the device assigned to that interrupt. The user cannot modify any settings on this page.

2.4.3 DMA Channel Information Module

This page displays information about the DMA (Direct Memory Access) channels. A DMA channel is a direct connection that allows devices to transfer data to and from memory without going through the processor. Typically, i386-architecture systems (PCs) have eight DMA channels (0-7).

**NOTE**
The exact information displayed is system-dependent. On some systems, DMA channel information cannot be displayed yet.

On Linux®, this information is read from `/proc/dma`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. A list of all currently-registered (ISA bus) DMA channels that are in use is shown. The first column shows the DMA channel, and the second column shows the device which uses that channel. Unused DMA channels are not listed. The user cannot modify any settings on this page.

2.4.4 USB Controller/USB Devices Information Module

This module allows you to see the devices attached to your USB bus(es). This module is for information only; you cannot edit any information you see here.

2.4.5 Input/Output Port Information Module

This page displays information about the I/O ports. I/O ports are memory addresses used by the processor for direct communication with a device that has sent an interrupt signal to the processor.
The exchange of commands or data between the processor and the device takes place through the I/O port address of the device, which is a hexadecimal number. No two devices can share the same I/O port. Many devices use multiple I/O port addresses, which are expressed as a range of hexadecimal numbers.

NOTE
The exact information displayed is system-dependent. On some systems, I/O port information cannot yet be displayed.

On Linux®, this information is read from /proc/ioports, which is only available if the /proc pseudo-filesystem is compiled into the kernel. A list of all currently-registered I/O port regions that are in use is shown. The first column is the I/O port (or the range of I/O ports), and the second column identifies the device that uses these I/O ports. The user cannot modify any settings on this page.

2.4.6 PCI-bus/Installed PCI Cards Information Module

This page displays information about the PCI bus, installed PCI cards, and other devices that use the Peripheral Component Interconnect (PCI) bus.

NOTE
The exact information displayed is system-dependent. On some systems, PCI information cannot yet be displayed.

On Linux®, this information is read from /proc/pci, which is only available if the /proc pseudo-filesystem is compiled into the kernel. A listing of all PCI devices found during kernel initialization, and their configuration, is shown. Each entry begins with a bus, device and function number. The user cannot modify any settings on this page.

2.5 Network Information Module

2.5.1 Network Interfaces Information Module

This page displays information about the network interfaces installed in your computer.

NOTE
The exact information displayed is system-dependent. On some systems, this information cannot yet be displayed.

The user cannot modify any settings on this page.

2.5.2 Samba Status Information Module

The Samba and NFS Status Monitor is a front end to the programs `smbstatus` and `showmount`. `smbstatus` reports on current Samba connections and is part of the suite of Samba tools, which implements the SMB (Server Message Block) protocol, also called the NetBIOS or LanManager protocol. This protocol can be used to provide printer-sharing or drive-sharing services on a network that includes machines running the various flavors of Microsoft® Windows®.

`showmount` is part of the NFS software package. NFS stands for Network File System and is the traditional UNIX® way to share folders over the network. In this case the output of `showmount -a localhost` is parsed. On some systems `showmount` is in `/usr/sbin`; check that you have `showmount` in your `PATH`.

2.5.2.1 Exports

On this page you can see a big list which shows the currently active connections to Samba shares and NFS exports of your machine. The first column shows you whether the resource is a Samba (SMB) share or an NFS export. The second column contains the name of the share, and the third the name of the remote host which accesses this share. The remaining columns are only meaningful for Samba shares.

The fourth column contains the user ID of the user who accesses this share. Note that this does not have to be equal to the UNIX® user ID of this user. The same applies for the next column, which displays the group ID of the user.

Each connection to one of your shares is handled by a single process (`smbd`); the next column shows the process ID (pid) of this `smbd`. If you kill this process, the connected user will be disconnected.
If the remote user works from Windows®, a new process will be created as soon as this one is killed, so they will hardly notice it.

The last column shows how many files this user currently has open. This shows only how many files they have open right now; it does not show how many they have copied or formerly opened, etc.

2.5.2.2 Imports

Here you see which Samba and NFS shares from other hosts are mounted on your local system. The first column shows whether it is a Samba or NFS share, the second column displays the name of the share, and the third shows where it is mounted.

Mounted NFS shares should be visible on Linux® (this has been tested), and this should also work on Solaris™ (this has not been tested).

2.5.2.3 Log

This page presents the contents of your local Samba log file in a convenient way. If you open this page, the list will be empty. You have to press the Update button; then the Samba log file will be read and the results displayed. Check whether the Samba log file on your system is really at the location specified in the input line. If it is somewhere else, or if it has another name, correct it. After changing the file name you have to press Update again.

Samba logs its actions according to the log level (see `smb.conf`). If log level = 1, Samba logs only when somebody connects to your machine and when this connection is closed again. If log level = 2, it also logs when somebody opens a file and when they close it again. If the log level is higher than 2, yet more is logged.

If you are interested in who accesses your machine, and which files are accessed, you should set the log level to 2 and regularly create a new Samba log file (e.g., set up a `cron` task which once a week moves your current Samba log file into another folder). Otherwise your Samba log file may become very big.

With the four checkboxes below the big list you can decide which events are displayed in the list. You have to press Update to see the results. If the log level of your Samba is too low, you won't see everything. By clicking on the header of a column you can sort the list by that column.

2.5.2.4 Statistics

On this page you can filter the contents of the third page for certain contents.

Let's say the Event field (not the one in the list) is set to Connection, Service/File is set to *, Host/User is set to *, Show expanded service info is disabled and Show expanded host info is disabled. If you press Update now, you will see how often a connection was opened to share * (i.e., to any share) from host * (i.e., from any host). Now enable Show expanded host info and press Update again. Now you will see, for every host which matches the wildcard *, how many connections were opened from it.

Now press Clear Results. Now set the Event field to File Access, enable Show expanded service info, and press Update again. Now you will see how often every single file was accessed. If you enable Show expanded host info too, you will see how often every single user opened each file.

In the input lines Service/File and Host/User you can use the wildcards * and ? in the same way you use them at the command line. Regular expressions are not recognized.

By clicking on the header of a column you can sort the list by that column. This way you can check which file was opened most often, or which user opened the most files, and so on.
2.5.2.5 Section Author

Module copyright 2000: Michael Glauche and Alexander Neundorf neundorf@kde.org

Originally written by: Michael Glauche

Currently maintained by: Alexander Neundorf neundorf@kde.org

CONTRIBUTORS

• Conversion to KControl applet: Matthias Hölzer-Klüpfel hoelzer@kde.org
• Use of K3Process instead of popen, and more error checking: David Faure faure@kde.org
• Conversion to kcmodule, added tab pages 2, 3 and 4, bug fixes: Alexander Neundorf neundorf@kde.org

Documentation copyright 2000 Alexander Neundorf neundorf@kde.org

Documentation translated to docbook by Mike McBride no mail

2.6 Graphical Information Module

When you open the modules in this section, you are presented with some information. The left hand side of the window is organized into a tree. Some of the elements have a plus sign in front of the label. Clicking this sign opens a 'submenu' related to the label. Clicking on a minus sign in front of a label hides the submenu. The right hand side of the window contains the individual values for each of the parameters on the left.

The information presented will vary depending on your setup.

NOTE
Some setups may not be able to determine some or all of the parameters.

You cannot change any values from this module. It is for information only.

2.6.1 Wayland Information Module

This screen is useful for getting specific information about your Wayland compositor.

2.6.2 X Server Information Module

This screen is useful for getting specific information about your X server and the current session of X.

2.6.3 OpenGL Information Module

This page displays information about the installed OpenGL implementation. OpenGL (for "Open Graphics Library") is a cross-platform, hardware-independent interface for 3D graphics. GLX is the binding of OpenGL to the X Window System. DRI (Direct Rendering Infrastructure) provides hardware acceleration for OpenGL; you must have a video card with a 3D accelerator and a properly installed driver for this. Read more at the official OpenGL site.

Chapter 3

Credits and License

KInfoCenter

Program copyright 1997-2001 The KInfoCenter Developers

Contributors:

• Matthias Hölzer-Klüpfel hoelzer@kde.org
• Matthias Elter elter@kde.org

Documentation copyright 2000 Mike McBride no mail

Contributors:

• Paul Campbell paul@taniwha.com
• Helge Deller deller@kde.org
• Mark Donohoe
• Pat Dowler
• Duncan Haldane duncan@kde.org
• Steffen Hansen stefb@mip.ou.dk
• Matthias Hölzer-Klüpfel hoelzer@kde.org
• Martin R. Jones mjones@kde.org
• Jost Schenck jost@schenck.de
• Jonathan Singer jsinger@leeta.net
• Thomas Tanghus tanghus@earthling.net
• Krishna Tateneni tateneni@pluto.njcc.com
• Ellis Whitehead ewhitehe@uni-freiburg.de

This documentation is licensed under the terms of the GNU Free Documentation License.

This program is licensed under the terms of the GNU General Public License.
{"Source-Url": "https://docs.kde.org/stable5/en/kinfocenter/kinfocenter/kinfocenter.pdf", "len_cl100k_base": 5382, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 29708, "total-output-tokens": 6099, "length": "2e12", "weborganizer": {"__label__adult": 0.0003809928894042969, "__label__art_design": 0.000885009765625, "__label__crime_law": 0.0002243518829345703, "__label__education_jobs": 0.001251220703125, "__label__entertainment": 0.0002455711364746094, "__label__fashion_beauty": 0.00013756752014160156, "__label__finance_business": 0.0002758502960205078, "__label__food_dining": 0.0001982450485229492, "__label__games": 0.0011358261108398438, "__label__hardware": 0.0234832763671875, "__label__health": 0.0002923011779785156, "__label__history": 0.0002856254577636719, "__label__home_hobbies": 0.00024211406707763672, "__label__industrial": 0.0004870891571044922, "__label__literature": 0.00024890899658203125, "__label__politics": 0.0001461505889892578, "__label__religion": 0.0005235671997070312, "__label__science_tech": 0.039306640625, "__label__social_life": 0.00014281272888183594, "__label__software": 0.4033203125, "__label__software_dev": 0.52587890625, "__label__sports_fitness": 0.00018930435180664065, "__label__transportation": 0.00028228759765625, "__label__travel": 0.00018489360809326172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24007, 0.02751]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24007, 0.37933]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24007, 0.82881]], "google_gemma-3-12b-it_contains_pii": [[0, 30, false], [30, 46, null], [46, 2304, null], [2304, 2371, null], [2371, 3391, null], [3391, 4920, null], [4920, 5104, null], [5104, 7027, null], [7027, 9391, null], [9391, 11456, null], [11456, 13976, null], [13976, 16047, null], [16047, 19418, null], [19418, 21728, null], [21728, 23167, null], [23167, 24007, null]], "google_gemma-3-12b-it_is_public_document": [[0, 30, true], [30, 46, null], [46, 2304, null], [2304, 2371, null], [2371, 3391, null], [3391, 4920, null], [4920, 5104, null], [5104, 7027, null], [7027, 9391, null], [9391, 11456, null], [11456, 13976, null], [13976, 16047, null], [16047, 19418, null], [19418, 21728, null], [21728, 23167, null], [23167, 24007, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24007, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24007, null]], "pdf_page_numbers": [[0, 30, 1], [30, 46, 2], [46, 2304, 3], [2304, 2371, 4], [2371, 3391, 5], [3391, 4920, 6], [4920, 5104, 7], [5104, 7027, 8], [7027, 9391, 9], [9391, 11456, 10], [11456, 13976, 11], [13976, 16047, 12], [16047, 19418, 
13], [19418, 21728, 14], [21728, 23167, 15], [23167, 24007, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24007, 0.01176]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
c670eb277597b795bd78ccf658bac69548598211
CoRE: A Cold-start Resistant and Extensible Recommender System

Mostafa Bayomi, ADAPT Centre, Trinity College Dublin, Ireland, mostafa.bayomi@adaptcentre.ie
Annalina Caputo, ADAPT Centre, Trinity College Dublin, Ireland, annalina.caputo@adaptcentre.ie
Matthew Nicholson, ADAPT Centre, Trinity College Dublin, Ireland, matthew.nicholson@adaptcentre.ie
Anirban Chakraborty, ADAPT Centre, Trinity College Dublin, Ireland, anirban.chakraborty@adaptcentre.ie
Séamus Lawless, ADAPT Centre, Trinity College Dublin, Ireland, seamus.lawless@adaptcentre.ie

ABSTRACT

In this paper, we propose the Cold-start Resistant and Extensible Recommender (CoRE), a novel recommender system that was developed as part of collaborative research with Ryanair, the world's most visited airline website. CoRE is an algorithmic approach to the recommendation of hotel rooms that can function in extreme cold-start situations. It is a hybrid recommender that blends elements of naive collaborative filtering, content-based recommendation and contextual suggestion to address the various shortcomings that exist in the underlying user and product data. We evaluated the performance of CoRE in a number of scenarios in order to assess different aspects of the algorithm: personalization, multi-model recommendation, and resistance to extreme cold-start situations. Experimental results on an authentic, real-world dataset show that CoRE effectively overcomes the different problems associated with the underlying data in these scenarios.

CCS CONCEPTS
• Information systems → Recommender systems;

KEYWORDS
Context-aware recommendations, recommendation explanation

1 INTRODUCTION

Recommender systems provide their users with recommendations of items that are relevant to their needs or aligned with their interests. They can be broadly categorised into: (1) collaborative filtering [5], whose predictions are based upon the analysis of the behaviour and activity of similar users, and (2) content-based recommenders [4], whose suggestions come from comparing the preferences of the individual user with descriptive features of the items to be recommended. However, both approaches suffer from well-known problems. The former has to tackle the cold-start problem, caused by the lack of knowledge about new users or items, as well as data sparsity. On the other hand, recommending items only on the basis of the individual user's past history gives rise to overspecialisation and to unreliable recommendations for cold-start users.

As recommender systems evolved, they started to incorporate a range of different information about their users, in an attempt to provide more personalised recommendations. Context-aware recommender systems make use of contextual information, such as time, location, season, or the companions of the user in a trip situation, in addition to the typical information on users and items [1]. They are built upon the notion that user preferences change under different contexts.

A problem common to all of the aforementioned techniques is that they are static in two ways: (1) once the recommendation engine is built, the integration of new sources of information or new dimensions requires infeasible extensions or significant changes to the underlying algorithm; (2) once the user profile has been established, changing this profile is deemed to be difficult [2]. In this paper, we propose CoRE, a real-time, extensible and cold-start resistant recommender system.
CoRE was developed as part of collaborative research with Ryanair, the world's most visited airline website (over 500 million "uniques" annually). CoRE was designed to provide personalized, contextually appropriate hotel room recommendations on Ryanair Rooms. Due to the historical nature of data sharing with their third-party service providers, Ryanair now finds itself in an extreme cold-start situation from a recommender system perspective, where they have very limited information about their users' behavior with regard to hotel booking. Furthermore, this data includes no ratings or reviews and, therefore, does not capture the opinions and preferences of users towards their previous bookings. Hence, we developed CoRE, a new approach that is resistant to such cold-start situations. CoRE combines a number of different techniques to address the shortcomings in the user and item data.

This paper is organized as follows: Section 2 describes the challenges and limitations of the underlying data; Section 3 describes the design of CoRE and the different models we use to tackle the challenges in the data and to overcome the limitations of current approaches; Section 4 describes the experimental design and the evaluation of CoRE.

2 DATA AND CHALLENGES

Ryanair currently uses a set of third-party providers to deliver hotel search and booking to their customers. When a hotel booking is made by a Ryanair customer, all of this data is saved on the provider's side and only brief summary information is sent to Ryanair (e.g., hotel ID, user ID, check-in and check-out dates), with no information on user feedback and reviews. The data that Ryanair held at the point when this research was conducted comprised 29,704 hotel bookings, 11,683 unique hotels with at least 1 booking, and 20,223 users who had made at least 1 hotel booking. Since Ryanair relies on several third-party providers for hotel bookings, hotel identifiers are not unique across the dataset. Hence, there is a need to consolidate the hotel data from the different providers in order to have a single static inventory from which to generate our recommendations.

Data Consolidation. We saved all data associated with hotel bookings from the different providers in a single database table. Each record has a different hotel id depending on its provider. To consolidate these records, we used the static hotels repository provided by Expedia1. This repository contains approximately 219 thousand hotels from all over the world and a range of information about each hotel. We mapped each hotel in our booking records to the relevant entry in the Expedia static inventory by using a configurable information retrieval approach that takes the name and the location of the hotel (from the third-party provider) as input and searches for it in Expedia's repository. Once the hotel is found, its record is updated in our consolidated inventory by adding a new field that contains the id of that hotel in Expedia. After the mapping, our dataset contained 18,700 bookings, 9,279 unique hotels with at least 1 booking, and 17,384 users who had made at least 1 hotel booking.

Flight Bookings. Since the hotel bookings data has inadequate information about each booking, we exploit Ryanair's abundance of user data regarding flight bookings. Ryanair clusters their users into ten different segments using a KNN clustering approach based on the commonalities among users' flight booking histories. Each user segment contains users who are "similar" to each other.
Examples of segments are "Business Traveler" and "Student Backpackers". In addition to this segmentation, Ryanair has ten different trip types that can be assigned to each flight booking. The trip type represents the context in which the booking took place. Examples of trip types are "Adult Sun Break" and "Family with infants". Using such flight booking data, we performed a further mapping to enrich the hotel bookings with more information. Using the user id, hotel location, flight destination and the times of both bookings (flight time and hotel check-in time), we searched around 15 million flight bookings to map each user's hotel booking to their flight booking. This mapping was carried out in order to assign a user segment and trip type to each user at the time of each hotel booking. The total number of bookings made by users assigned to a segment is 1,434, while the total number of bookings whose trip type was known at the time of the room booking is 1,962.

User Ratings. The available data does not have any explicit ratings from users; it merely records the act of a Ryanair user making a booking. Hence, we have to rely solely on implicit feedback, where the user's preferences are inferred by observing their actions within the system, such as booking a hotel room.

3 CORE DESIGN

To overcome the challenges associated with the data and to address the drawbacks of current approaches, we built CoRE, a novel approach that blends content-based recommendation, naive collaborative filtering and contextual information in order to make personalized hotel recommendations. In order to design a content-based recommendation technique, features associated with hotels need to be identified. In the underlying hotels repository (from Expedia) there are 381 features associated with hotels, such as: free Wi-Fi, free breakfast, restaurant, hair dryer, etc. We use these features to build the different models that underlie the recommendation process in CoRE, as discussed below.

3.1 Data Modelling

CoRE is capable of recommending items to users regardless of the granularity of the information currently held about them. It is designed as a flexible recommender system that incorporates different models that influence the recommendation process based on the availability of the data associated with each model. As a user interacts with the system and more data is gathered about individuals, and about the community of users, the delivered recommendations continue to improve. CoRE uses a set of discrete features from hotel descriptions to characterize each hotel and to build the different models. Each model is built as a weighted feature vector over the hotels that have been previously booked; the weights denote the importance of each feature to the model. In this research, three models are built: (1) the User Model, (2) the Segment Model and (3) the Contextual Model.

User Model. Hotel features are taken from the underlying static inventory and are used to build a vector of descriptive features which discriminate between the hotels in the inventory. The values for each feature are stored for each hotel. Then, for each user who has previously made a booking, the user model is constructed as a weighted feature vector of the hotels previously booked by that user. Each feature's weight is the count of the co-occurrences of that feature in the model.

Segment Model. As mentioned in Section 2, we enriched hotel bookings with the segment that the user who made the booking belongs to, and each segment contains users who are similar in their travel patterns.
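Both the user model just described and the segment model below share the same construction: a weighted vector of feature co-occurrence counts over a set of bookings. The following is a minimal sketch of that construction, with hypothetical feature sets standing in for the 381 Expedia-derived features:

```python
from collections import Counter

def build_model(booked_hotels):
    """Build a model vector: count how often each hotel feature
    co-occurs across the given set of bookings."""
    model = Counter()
    for features in booked_hotels:
        model.update(features)
    return model

# Hypothetical bookings for one user (feature names are illustrative).
user_bookings = [
    {"free_wifi", "free_breakfast", "restaurant"},
    {"free_wifi", "pool"},
]
print(build_model(user_bookings))
# Counter({'free_wifi': 2, 'free_breakfast': 1, 'restaurant': 1, 'pool': 1})
```

A segment model would be built by the same function over all bookings made by the users in that segment.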
In our approach, we build a model for each segment. A segment model is constructed as a weighted feature vector of the hotels that have been booked by all users who belong to that specific segment; each feature's weight is the count of the co-occurrences of that feature in the model. We exploit this model to influence the recommendation process for the user based upon the preferences of people in the same segment. This is a form of collaborative filtering using "the wisdom of the crowd". If the user's segment is not known at hotel booking time, the segment model is omitted from the recommendation process.

1 https://github.com/ExpediaInc/ean-pc-dbscripts

Contextual Model. Contextual information is an important aspect when recommending hotels to users. With continuous changes in users' preferences and fluctuations in their needs, relying on a user's historical data, or even on collaborative preferences, will produce unreliable recommendations that do not reflect the user's current needs. The contextual information associated with the user can span a range of different dimensions. Since CoRE is flexible in incorporating information, a variety of models can be used, based on their availability, to influence the recommendation process and thus reflect the different contexts associated with the user's trip. In this research, we use the trip type model, as it is the only contextual information available in the data at hand. A trip type model is constructed as a weighted feature vector of the hotels that have been booked by all users who were on that trip type. The trip type associated with the user can be captured in two ways: if the user has booked a flight, we use the trip type associated with this flight booking; if the user is just booking a hotel without a flight, or if the user is anonymous to the system, inferring such context is relatively easy by using the user's input to the system.

Hybrid Recommender. All generated models are exploited, when they exist for a particular user, to generate recommendations: (1) the user model, built from previous bookings; (2) the segment model, built from all bookings made by users in the same segment; (3) the context model(s), built from all bookings made by users while in that same context. Combining these aspects delivers an algorithm capable of recommending hotels to users even with limited or no previous information about them.

When a user makes a request to CoRE, the recommender retrieves the feature vectors related to that user as follows. If the user has previous hotel bookings in the dataset, the algorithm retrieves the feature vector (user model) that has been built from these previous bookings. If the user belongs to a segment based on his/her flight booking history, the algorithm retrieves the feature vector for that segment. After identifying or inferring the type of the user's trip, the algorithm retrieves the feature vector for that trip type model. After that, the centroid vector of the retrieved models is calculated. This vector is a central representation of the features in the retrieved models. The algorithm then assigns a score to each hotel within the target city the user is travelling to that matches the criteria the user has specified2. The score is the cosine similarity between each hotel vector and the centroid vector. This similarity score can be seen as the blended, weighted input of the different models.
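A minimal sketch of this scoring step, assuming toy model vectors over four features and the 0.8/0.1/0.1 default model weights reported later in the paper:

```python
import numpy as np

def centroid(models, weights):
    """Weighted centroid of the available model vectors
    (user, segment, context)."""
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * np.stack(models)).sum(axis=0) / w.sum()

def rank_hotels(hotel_vectors, center):
    """Score each candidate hotel by cosine similarity to the centroid
    and return the indices sorted best-first."""
    norms = np.linalg.norm(hotel_vectors, axis=1) * np.linalg.norm(center)
    scores = hotel_vectors @ center / np.where(norms == 0, 1, norms)
    return np.argsort(-scores), scores

user    = np.array([2.0, 1.0, 0.0, 0.0])   # illustrative model vectors
segment = np.array([1.0, 0.0, 1.0, 0.0])
context = np.array([0.0, 0.0, 1.0, 1.0])
center = centroid([user, segment, context], [0.8, 0.1, 0.1])

hotels = np.array([[1.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0],
                   [1.0, 0.0, 1.0, 0.0]])
order, scores = rank_hotels(hotels, center)
print(order, scores.round(3))  # the hotel most aligned with the user ranks first
```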
The score thus balances the preferences of the user, of the segment he/she belongs to, and of the people who have experienced the same contexts the user is currently in. The hotels are then ranked by their similarity scores and returned to the user as a ranked list from which to choose a hotel to book.

As the centroid vector in CoRE is a central representation of the prominent features of the models that influence the recommendation process, these prominent features balance the preferences of these models. Hence, using these prominent features can provide a sensible explanation to the user of why a hotel is recommended. A simple example: "As you are travelling with infants (trip type), this hotel provides free infant beds and free babysitting". Such an explanation gives the user a sense of inclusion and, ultimately, trust in the recommendations generated. In addition to the models currently used to recommend hotels, other models, such as user location, seasonality, and price sensitivity, can easily be added to influence the recommendation process.

3.1.1 Real-Time Recommendation. Each built model (User, Segment, Contextual) is saved in the database so that it can easily be updated, and retrieved and used in the recommendation process in a timely manner. CoRE enables real-time model updates to ensure that the recommendations provided are timely and appropriate. With each interaction of the user with the system (i.e., a booking), the system updates the various models at run-time using the feature vector of the booked hotel: the features in question are located in each model and updated to reflect the new information. When a request is sent to our recommender, the feature vectors are retrieved for each available model and their centroid can be quickly generated at run-time.

3.1.2 Model Weighting and Feature Selection. As described above, CoRE relies on different models to recommend hotels to the target user. Each model provides different clues about the potential interest or end goal of the user, and certain models can carry variable influence depending upon the data situation. The weight assigned to each model indicates how much its feature vector counts when building the centroid vector. Applying model weighting in building the centroid vector, and thus in the recommendation process, is a strength of CoRE over traditional systems. Moreover, CoRE gives users control over the recommended items by allowing them to assign the weight of each model. This can be done through a set of sliders that indicate the percentage of influence each model has on the recommended hotels. However, to alleviate the burden of manually setting model weights, we experimented with different values for these weights and found that setting the user model to 0.8 and the other two models to 0.1 produced the best results. This finding shows that the contribution of the user model should, by default, be set higher than that of any other model; the user can then modify these values according to his/her needs. This finding is not surprising, since the user model represents the personal preferences of the user while the other models represent other people's preferences.

In addition to model weighting, the hotel features themselves can have a different impact on the overall recommendation process, since some features are deemed more relevant than others when booking a hotel. There are a range of feature weighting and selection methods which can be used.
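As a flavor of how such filter-based weighting works, here is a simplified, single-nearest-neighbor Relief sketch (the system itself uses the multi-class ReliefF variant, described next; this toy version and its data are illustrative only):

```python
import numpy as np

def relief_weights(X, y):
    """Simplified Relief: a feature is rewarded when it differs more from
    the nearest miss (other class) than from the nearest hit (same class)."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                       # exclude the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

# Toy data: feature 0 separates the two classes, feature 1 is noise.
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
w = relief_weights(X, y)
print(w.round(2), np.where(w > 0)[0])  # keep features above the zero threshold
```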
In this research, we use the filter-based feature weighting algorithm ReliefF [3], which copes with multi-class datasets. The simple intuition behind the ReliefF algorithm is that a good feature has little within-class variance and a large between-class variance; a bad feature is characterized by within-class and between-class variances of similar magnitude. We trained ReliefF over the hotel categories generated by the star rating classification used in the hotel repository. We set the threshold to zero and selected the features that have a weight higher than zero. After applying ReliefF, the total number of features used in the recommendation was reduced from 381 to 233.

2 Criteria such as price range or star rating.

4 EVALUATION

We carried out various offline simulated user experiments in order to evaluate the efficacy of CoRE in recommending hotel rooms. We adopted the "leave-one-out" cross-validation approach. In each run, we generated a list of hotels in the target city of the test booking, where hotels are sorted in descending order by their predicted value. For the evaluation, we adopted the Mean Percentile Rank (MPR), which measures user satisfaction with items in an ordered list.

We compared CoRE's accuracy against the ranking approach currently used by Ryanair on their rooms booking website. This approach sorts all hotels in the target city by the Expedia sequence number, which reflects the transactional data from the last 30 days; a value of 1 indicates the best-performing hotel, and the others follow in ascending numerical order. We feel that this is a strong baseline, as the rankings are based upon the best-performing hotels for each city from the entire Expedia inventory across all of its websites.

We carried out different experiments in order to assess the performance of CoRE in different scenarios. Table 1 reports the results of this evaluation. The first scenario (User) aims to evaluate the personalisation aspect of CoRE, regardless of the user segment or trip type. We selected users who have more than two bookings in their profiles, in order to build the user model from at least two bookings; there are 65 users in the dataset who meet this criterion. From the results we can see that CoRE outperforms the competing baseline. A significance test at the 95% confidence level showed that the difference between the accuracy of CoRE and that of the baseline is statistically significant. We also evaluated the impact of feature weighting on the recommendation process (CoREFW). From the results, although this impact is not statistically significant, we can see that feature weighting enhances the performance of CoRE.

In the second scenario (User + Segment + Contextual) we evaluate CoRE on users who have previous bookings (User), are assigned to a segment (Segment), and whose trip type was known at the time of booking (Contextual). In this scenario, we evaluate the personalization, collaboration and contextual aspects of the recommender. There are only 22 users in the dataset who have all these elements in their profiles. Although our approach allows users to control the relative contribution of each model, in this offline evaluation we set the user model to 0.8 and the other two models to 0.1, as experimenting with these values produced the best results.
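For concreteness, a minimal sketch of the MPR metric described above, under one plausible formulation (the paper does not spell out its exact normalization; here 0% means the booked hotel always tops the list):

```python
import numpy as np

def mean_percentile_rank(ranked_lists, booked_hotels):
    """MPR: the average percentile position of the actually-booked hotel
    in each ranked recommendation list. Lower is better."""
    ranks = [100.0 * ranked.index(booked) / (len(ranked) - 1)
             for ranked, booked in zip(ranked_lists, booked_hotels)]
    return float(np.mean(ranks))

# Two toy leave-one-out runs: the held-out hotels sit at positions 1 and 0.
lists = [["h2", "h7", "h1", "h9"], ["h5", "h3", "h8", "h4"]]
booked = ["h7", "h5"]
print(round(mean_percentile_rank(lists, booked), 1))  # 16.7
```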
It is worth noting that in this experiment, when we remove a user's booking from the user's model (leave-one-out), we also remove this booking from the other two models, so that the test booking has no influence on the recommendation process. The results show that CoRE significantly outperforms the baseline system and that feature weighting enhances its performance.

In the last scenario (Segment + Contextual) we assess the performance of CoRE in the extreme cold-start situation, where the user is new to the system and has no previous bookings in his/her profile. The recommendations in this scenario are based solely on the information from the other two models (segment and trip type). We selected users who have only one booking in their profile, belong to a segment, and whose trip type is known; we use this booking as the test booking. We applied equal weighting to the two models. CoRE performs well in this extreme cold-start situation, and its accuracy is statistically significantly higher than that of the baseline. However, we note that, in this scenario, the performance of CoRE is slightly better when feature weighting is not used. An analysis of the two models highlighted a very scattered within-model distribution of the segments and trip types. We argue that with more bookings and a better-fitted hotel classification method for the feature selection, feature weighting would be more reliable in the cold-start scenario.

5 CONCLUSIONS

In this paper we presented CoRE, a real-time hybrid recommender system that blends content-based recommendation, naive collaborative filtering and contextual information in order to make personalized hotel recommendations. The recommender was developed as part of collaborative research with Ryanair. CoRE uses a set of discrete features from hotel descriptions to characterize hotels and to build user, segment and contextual models. We evaluated the performance of CoRE against the recommendation approach currently used by Ryanair, on an authentic, real-world hotel booking dataset from Ryanair. The results showed that CoRE significantly outperformed the baseline system and performed well in the extreme cold-start situations where the user is new to the system or has no previous bookings.

ACKNOWLEDGMENTS

This work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 713567 and by the ADAPT Centre for Digital Content Technology, which is funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

REFERENCES

Table 1: System performance using different model configurations.

| Used Models | # Users | Approach | MPR (%) |
|---|---|---|---|
| User | 65 | Sequence | 59.56 |
| | | CoRE | 28.55 |
| | | CoREFW | 22.67 |
| User + Segment + Contextual | 22 | Sequence | 63.50 |
| | | CoRE | 22.04 |
| | | CoREFW | 18.27 |
| Segment + Contextual | 1,385 | Sequence | 58.70 |
| | | CoRE | 38.66 |
| | | CoREFW | 39.23 |
{"Source-Url": "http://www.tara.tcd.ie/bitstream/handle/2262/86856/CoRE-CR.pdf?isAllowed=y&sequence=1", "len_cl100k_base": 4897, "olmocr-version": "0.1.53", "pdf-total-pages": 4, "total-fallback-pages": 0, "total-input-tokens": 12607, "total-output-tokens": 5406, "length": "2e12", "weborganizer": {"__label__adult": 0.0014448165893554688, "__label__art_design": 0.0015382766723632812, "__label__crime_law": 0.001384735107421875, "__label__education_jobs": 0.005817413330078125, "__label__entertainment": 0.0005121231079101562, "__label__fashion_beauty": 0.001056671142578125, "__label__finance_business": 0.0031948089599609375, "__label__food_dining": 0.005527496337890625, "__label__games": 0.00588226318359375, "__label__hardware": 0.0035686492919921875, "__label__health": 0.00453948974609375, "__label__history": 0.00479888916015625, "__label__home_hobbies": 0.0006642341613769531, "__label__industrial": 0.0009088516235351562, "__label__literature": 0.00148773193359375, "__label__politics": 0.00098419189453125, "__label__religion": 0.0014009475708007812, "__label__science_tech": 0.277587890625, "__label__social_life": 0.0005183219909667969, "__label__software": 0.0880126953125, "__label__software_dev": 0.54150390625, "__label__sports_fitness": 0.0008997917175292969, "__label__transportation": 0.0042266845703125, "__label__travel": 0.042816162109375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25507, 0.02456]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25507, 0.04849]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25507, 0.93786]], "google_gemma-3-12b-it_contains_pii": [[0, 4999, false], [4999, 11614, null], [11614, 18702, null], [18702, 25507, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4999, true], [4999, 11614, null], [11614, 18702, null], [18702, 25507, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25507, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25507, null]], "pdf_page_numbers": [[0, 4999, 1], [4999, 11614, 2], [11614, 18702, 3], [18702, 25507, 4]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25507, 0.14286]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
b80278dfe61b7962a0b18ee1ebec0918c3a7f834
ABSTRACT

Since software applications have spread rapidly into modern life, it is important to ensure sufficient reliability and to minimize the probability of faults in software products. Software testing is a process for finding faults in software products and increasing their reliability. Because the testing process is very costly, automation techniques are needed to reduce these costs and also to increase reliability. In automatic testing, an attempt is made to reduce human roles in the testing process by having the testing phases, or parts of them, performed by intelligent methods. Automatic testing has several advantages, such as decreased testing time, resources and costs and, on the other hand, increased quality and reliability. In this paper, after explaining the software testing phases, a classification of existing automatic testing methods is presented. These methods can automate the software testing phases, or at least parts of them, with the aim of reaching the above advantages. The aim of this classification is to show which technique can be applied in which phase of software testing. The classification clarifies which problems must be addressed in each phase before moving to the next, and how to automate each phase. Moreover, approaches for automating each testing phase are recommended.

Categories and Subject Descriptors

D.2.4 [Software Engineering]: Software/Program Verifications – Model checking, Reliability, Statistical methods, Validation.

General Terms

Reliability, Verification.

Keywords

1. INTRODUCTION

Software is a principal component of modern life. Nowadays, it is very difficult to live without the help of computers. The effects of these machines on human life are growing day by day, and they perform many critical duties. Therefore, it is very important to ensure sufficient reliability and to minimize the probability of faults in software products. Software reliability is the probability of failure-free software operation for a specific period of time in a specific environment [1]. Software testing is a process to improve software reliability by finding errors and faults in software products. Errors or faults are any discrepancies between what the software is expected to do and its actual outputs [2]. Error detection is performed by various testing process models and techniques, such as unit testing, acceptance testing and load testing, and by methods such as black-box and white-box testing. Each of these techniques focuses on specific aspects of the software testing process to find faults. For example, in black-box testing, we investigate only the accuracy of software outputs, without considering how these outputs are generated. In white-box testing, on the other hand, we concentrate entirely on the output-generation process [2]. Since the software testing process is very costly in terms of time, budget and resources, many software developers do not pay enough attention to it; consequently, their products become very likely to fail and to lose their market [3]. Therefore, we have to find approaches that decrease testing cost and also increase reliability. One approach is to use methods to automate this process. Research shows that automating the testing process, or at least a portion of it, can significantly decrease the testing cost. In automated testing, developers attempt to convert a testing process performed by humans into one performed by software, using intelligent techniques and algorithms such as artificial intelligence and statistical methods. Automated testing has several advantages.
First, since computers are faster than humans at repetitive tasks, the process can be completed sooner. Second, we can reduce the human resources needed during testing. Third, we can test more aspects of the software under test in a shorter time. Finally, by reducing the human role in the process, we can prevent intentional or unintentional human faults. Consequently, the testing cost can be reduced while the testing quality improves, and the software product becomes more reliable.

In this paper, after explaining the software testing phases, a classification of existing automatic testing methods is presented. These methods can automate the software testing phases, or at least parts of them, with the aim of reaching the above advantages. The aim of this classification is to show which technique can be applied in which phase of software testing. The classification clarifies which problems must be addressed in each phase before moving to the next, and how to automate each phase. The methods for automating the testing phases range from artificial intelligence to statistical methods. Figure 1 illustrates this classification. Moreover, approaches for automating each testing phase are recommended.

The remaining parts of this paper are organized as follows. Before explaining which methods can be used to automate the software testing process, it is necessary to know what the testing phases are, in order to understand how these methods can automate the process. Section 2 addresses this issue. Section 3 introduces the classification of test automation methods.

2. SOFTWARE TESTING PHASES

Based on [4], the testing process can be divided into four phases, which are explained in the following subsections. These phases form a framework that depicts which problems testers must consider before moving to the next, and which phase can be automated by which methods. The test automation methods themselves are explained in Section 3. Moreover, approaches for automating each phase are recommended.

2.1 Modeling the software environment

Testers must simulate the relationships and interactions between the software and its environment. Usually these interactions are performed via interfaces such as human, software, file system and communication interfaces. Methods that can simulate these interfaces may be usable for automating this phase.

2.2 Selecting test scenarios

In this phase, testers must select proper test scenarios, or test cases, covering each line of source code, input sequence and execution path, to ensure that all software modules are tested adequately. Because the number of test cases can be too large to execute them all in the limited testing time, it is very important to select test cases that have a higher probability of finding errors. Approaches for the automatic determination and selection of important test cases are highly beneficial.

2.3 Running and evaluating test scenarios

After preparing and selecting test cases, testers must execute them on the software under test and then evaluate the outputs to find whether there is a fault. Testers compare the outputs generated by the executed test cases (actual outputs) with the expected outputs. Expected outputs are generated from predefined software specifications and/or application logic (a testing oracle). Automating this process requires a method for mapping each input to the corresponding output of the entire operational environment, and a tool for comparing these outputs. To put it differently, an automatic oracle is highly applicable here. In Section 4, a general framework for using an automated oracle is introduced. Sometimes, however, expected outputs are not clearly defined.
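Before turning to that case, the comparison step itself is straightforward to automate when expected outputs are available. A minimal sketch, in which the `system_under_test` function and the oracle table are hypothetical stand-ins, not part of the original study:

```python
# Minimal sketch of the run-and-evaluate step: execute each test case on
# the system under test and compare the actual output against the
# oracle's expected output.

def system_under_test(x):
    return x * 2 if x < 3 else x * 2 + 1  # deliberately injected fault for x >= 3

oracle = {0: 0, 1: 2, 2: 4, 3: 6}  # expected outputs from the specification

def run_tests(test_inputs):
    failures = []
    for x in test_inputs:
        actual, expected = system_under_test(x), oracle[x]
        if actual != expected:
            failures.append((x, expected, actual))
    return failures

print(run_tests([0, 1, 2, 3]))  # [(3, 6, 7)] -- the injected fault is detected
```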
Sometimes the expected outputs are not clearly defined. This may be due to uncertainty in the software's behavior or to the lack of a complete specification. Stochastic software modeling methods may be used to mitigate this difficulty.

2.4 Measuring testing process
It is very important to identify the status of the testing process and when it can stop. Testers need a quantitative measurement of this status, obtained by predicting the number of bugs in the software and the probability that any of these bugs will be discovered. Approaches for automatically predicting the number of bugs, based on software specifications or on previous similar projects, are useful here. For example, software quality estimation techniques can be applied to automate this phase.

3. CLASSIFICATION OF TEST AUTOMATION METHODS IN TESTING PHASES
The following methods have been applied to automate a phase, or at least some part of a phase, of the software testing process. As mentioned before, the classification is based on the application of automated and intelligent methods to the software testing phases. In the following, each of these methods is explained.

3.1 Modeling the Software Environment (Phase 1)
Nowadays, most software has a Graphical User Interface (GUI). Modeling a GUI is a challenging task in the testing process. In the following, a method is explained that models the software GUI in regression testing and uses this model to derive proper test cases. Regression testing is the process of retesting the functionality of software that remains in a new version; regression GUI testing is thus the process of re-evaluating the pre-tested parts of the software GUI in a modified version of the software. The GUI test designer must regenerate test cases that target these common functionalities, and keeping track of such parts is an expensive and challenging process; as a result, in practice GUI regression testing is often not performed at all, and many GUI test cases from the previous testing process become unusable.

Commonly, a GUI test case contains a reachable initial state, a legal event sequence and expected states. The initial state is applied to initialize the GUI to the desired state for a specific test case. An expected state is the state after the execution of a specific event. A modification to the GUI can therefore affect any of these parts and render pre-designed test cases useless. GUI regression test cases can be divided into two groups: affected and unaffected. Affected test cases are those that should be rerun but, due to modifications in the GUI, must first be redesigned. Unaffected test cases could be executed exactly as in the original GUI testing process, but because they were already evaluated, there is no need to run them again. Overall, each test case in the original suite falls into one of four groups (a classification sketch is given below):

1. **Correct expected states affected test cases.**
2. **Illegal event sequence affected test cases.**
3. **Incorrect expected states affected test cases.**
4. **Unaffected test cases.**

In this study, a **Regression Tester** was designed to determine and regenerate the affected test cases. An overview of this regression tester is shown in Figure 2. One of its inputs is the **original test suite**, generated to test the original GUI. The other inputs are **representations** of the original and modified GUIs. The Regression Tester determines which test cases are affected, unaffected, or must be discarded. Because discarded test cases verify functionality that no longer exists in the modified software GUI, they must be eliminated from the testing process.
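A hypothetical skeleton of this categorization (Python; the group names follow the partition computed by the test case selector described next, and the three predicate arguments are illustrative stand-ins for checks against the GUI representations) could look as follows.

```python
from enum import Enum, auto

class Group(Enum):
    UNAFFECTED = auto()          # reusable as-is, already verified
    OBSOLETE = auto()            # targets functionality removed from the GUI
    ILLEGAL_SEQUENCE = auto()    # event sequence no longer executable
    INCORRECT_EXPECTED = auto()  # still runs, but expected states are wrong

def classify(test_case, modified_gui,
             still_exists, is_legal_sequence, expected_states_match):
    """Assign a test case from the original suite to one of the four groups."""
    if not still_exists(test_case, modified_gui):
        return Group.OBSOLETE
    if not is_legal_sequence(test_case, modified_gui):
        return Group.ILLEGAL_SEQUENCE
    if not expected_states_match(test_case, modified_gui):
        return Group.INCORRECT_EXPECTED
    return Group.UNAFFECTED
```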
The **test case selector** partitions the original test suite into (1) unaffected test cases, (2) obsolete-task test cases, (3) illegal event sequence affected test cases and (4) incorrect expected states affected test cases. Illegal event sequence affected test cases are regenerated by the **planning-based test case regenerator**; if the planner fails to find a plan, the test case is marked as discarded because it belongs to an obsolete task. The **expected-state regenerator** is used to regenerate the expected states of incorrect expected state test cases; if it fails, the test case is discarded. Consequently, this method performs regression testing by re-planning the affected test cases, associating a task with each test case, and creating an interface between the original and modified GUIs to generate test cases. Furthermore, this method automates the test case selection phase (the second phase of software testing) for regression GUI testing.

3.2 Selecting the Test Scenarios (Phase 2)
Test case selection is the second phase of the software testing process. Testers are interested in effective test cases, i.e. those that can reveal the majority of software faults. According to [11], an effective test case should:

- Have a high probability of finding an error
- Not re-evaluate already-tested sections
- Be the best of its breed
- Be neither too complex nor too simple

Each test case is defined by a set of inputs and expected output values. Since the number of possible test cases is very large in modern software, it is impossible to execute all of them within limited time and resources. Moreover, because many test cases evaluate the same section or part of the software, there is no need to execute all of them. Therefore, testers must wisely select effective test cases with a higher probability of finding faults. Likewise, if executing a test case does not reveal any fault, testers must not conclude that the software is fault-free and reliable; in such situations they have only wasted their time. It is therefore very important to determine and select effective test cases, and automating this selection can significantly decrease testing cost and increase testing quality.

A good approach to effective test case selection was introduced in [12]. That research revealed that analysis of a program's inputs and outputs can identify which input attributes most affect the values of specific outputs, and showed that such I/O analysis can significantly reduce the number of test cases. An **Artificial Neural Network (ANN)** was used to automate the I/O analysis by identifying the important attributes and ranking them. An ANN is a mathematical model of biological neural networks that can learn from past experience using <input, output> pairs in a training phase and then generate outputs for unknown inputs. An ANN consists of layers - each made up of one or more processing units called neurons - and the connections between them; an ANN learns by adjusting the connection weights in the network [6]. The study modeled the software behavior using ANNs and identified which inputs have little effect on the outputs by means of an ANN pruning algorithm. Pruning removes unnecessary connections between neurons while retaining the significant ones; the removal process deletes the unimportant inputs and thereby decreases the number of test cases. Finally, test cases were generated from the remaining, most significant inputs. Figure 3 depicts this process.
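The sketch below is a rough approximation of this idea rather than the exact pruning algorithm of [12]: it trains a small network with scikit-learn on synthetic <input, output> pairs and ranks the input attributes by the magnitude of their first-layer weights, so that test generation can concentrate on the top-ranked attributes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))  # 4 candidate input attributes
# Only attributes 0 and 2 actually influence the output.
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.01 * rng.standard_normal(500)

net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=2000, random_state=0)
net.fit(X, y)

# Crude importance proxy: total first-layer weight magnitude per input.
importance = np.abs(net.coefs_[0]).sum(axis=1)
print("attributes ranked by importance:", np.argsort(importance)[::-1])
# Test cases can then be generated over the top-ranked attributes only,
# shrinking the set of input combinations that needs to be covered.
```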
3.3 Running and Evaluating Test Scenarios (Phase 3)
As mentioned in section 2, evaluating the test results in the third phase of software testing requires a fault-free source of expected outputs, i.e. an accurate oracle. Testers need an approach that generates the expected output for each input; they can then compare it with the actual output, and if the two differ, a fault has been detected. This is where testers need an automated testing oracle. The oracle is a fault-free source of expected outputs; a non-automated oracle can be a program specification or the developer's knowledge of the software behavior [13]. An oracle must accept every input specified in the software specification and should always generate a correct result. The process of using an automated oracle is shown in Figure 4.

Let \( y = F(x) \), where \( x = (x_1, x_2, ..., x_n) \) is the software input vector, \( y = (y_1, y_2, ..., y_m) \) is the corresponding output vector, and \( F \) is the software behavior, treated as a continuous function. In [14], Ye et al. modeled the software behavior by modeling the relationship between inputs and outputs (\( F \)) and developed an automated oracle. In that study, an ANN was used to approximate this behavior; the resulting model can then serve as an automated oracle for generating correct outputs. Because ANNs are well suited to modeling continuous deterministic functions, this approximation has good accuracy when \( F \) is deterministic and unambiguous. For situations with uncertain behavior, testers must use other approaches.

Last and his colleagues [7, 15] introduced a fully automated black-box regression testing method using an Info Fuzzy Network (IFN). IFN is an approach developed for knowledge discovery and data mining. The interactions between the input attributes and the target attributes of any type (discrete or continuous) are represented by an information-theoretic connectionist network. An IFN represents the functional requirement by an "oblivious" tree-like structure, where each input attribute is associated with a single layer and the leaf nodes correspond to combinations of input values [7]. The authors developed an automated oracle that can generate test cases, then execute and evaluate them automatically, based on the previous version of the software under test. The structure of their method is shown in Figure 5. As can be seen in Figure 5, the Random Test Generator provides test case inputs by means of the Specification of System Inputs; these specifications contain information about the system inputs, such as data types and value domains. The Test Bed executes these inputs on the Legacy Version (the previous version of the software under test) and collects the system outputs. Next, these test cases are used to train an IFN model that serves as the automated oracle, which can then be used to detect faults in the new software version. This method completely automates the third phase of software regression testing.
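As an illustration of the oracle idea (not the IFN implementation of [7, 15]; the legacy and new versions below are toy stand-ins), the following sketch fits a model to <input, output> pairs of a trusted previous version and flags outputs of the new version that deviate beyond a tolerance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def legacy(x):       # trusted previous version (toy stand-in)
    return np.sin(x).ravel()

def new_version(x):  # new version with an injected fault for x > 2
    return np.sin(x).ravel() + 0.5 * (x.ravel() > 2)

X = np.linspace(0, 3, 300).reshape(-1, 1)
oracle = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                      max_iter=2000, random_state=0)
oracle.fit(X, legacy(X))  # learn y = F(x) from the legacy version

tol = 0.1  # allowance for the oracle's own approximation error
suspicious = np.abs(new_version(X) - oracle.predict(X)) > tol
print(int(suspicious.sum()), "of", len(X), "test points flagged")
```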
3.4 Measuring Testing Process (Phase 4)
As mentioned earlier, measuring the testing process is possible through a model of software quality. Software quality models have many applications in software reliability engineering: they predict a statistical measure of software reliability and enable testers to perform quality control and risk analysis. Quality control can be used to answer the question "when to stop testing?", which in turn helps testers measure the testing process. One approach is to use software metrics. Prior studies have shown that software metrics are correlated with the number of faults. Therefore, software metrics can be applied to predict the number of faults in program modules. Consequently, testers can evaluate the quality level of the software under test and decide when to stop the testing process based on previous experience. These metrics are quantitative descriptors of module attributes. Software metrics can also be used to perform risk analysis, which helps developers identify risky modules and give them special attention.

There are two types of methods for performing quality modeling automatically: methods that model a linear relationship between input and output patterns, such as regression analysis, and methods that model non-linear relationships, such as ANNs and Case-Based Reasoning (CBR). A CBR system is a computational-intelligence expert system that finds solutions to a new problem based on the solutions of similar past problems. These solutions are represented as cases in a library built from prior experience. A CBR system consists of a case library, a solution process algorithm, a similarity function and the associated retrieval and decision rules. CBR is useful in situations where the environmental knowledge is insufficient and an optimal solution is not known; to put it differently, CBR is an automated reasoning process aimed at solving new problems [5]. Since the relationships between software metrics and quality factors are usually complex and non-linear, the latter (non-linear) methods tend to have better accuracy.

Khoshgoftaar et al. [16] proposed a method for using an ANN and regression modeling to predict the number of faults in a program from its software metrics, and compared the results of both methods. The process is shown in Figure 6. The software metrics are used as inputs to the trained ANN and as independent variables of the regression model; the outputs of the ANN and of the regression model (the dependent variable) are predictions of the number of faults in the module under test. By comparing the fault predictions of both methods, the study showed that the ANN prediction was superior to the regression model. In addition, in regression modeling testers must manually choose which metrics are related to program quality and affect fault prediction; in the ANN approach this selection is unnecessary, because the effective metrics are chosen automatically as the network parameters are adjusted during learning.

In modern, complex software systems the number of such metrics can be very large, and some of them have little influence on fault prediction, so modeling quality control may require substantial processing resources and time. As a result of the study conducted in [17], Principal Component Analysis (PCA) is suggested to reduce the number of software metrics and to derive the most important and effective metrics for modeling software quality. PCA is a statistical technique for finding patterns in high-dimensional data and expressing the data in a way that highlights their similarities and differences. Once these patterns are found, PCA compresses the data by reducing the number of dimensions without much loss of information [8]. Given an \( n \times m \) matrix, PCA can reduce it to an \( n \times p \) matrix (\( p < m \)) by extracting linear combinations of the original data. The findings of this research revealed that using PCA yields proper predictions with both the ANN and the regression model.
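A minimal sketch of this reduction step, with synthetic metric data standing in for real module measurements:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
metrics = rng.normal(size=(100, 12))  # 100 modules, 12 raw metrics
# Introduce a redundant metric so PCA has correlation to exploit.
metrics[:, 5] = 2 * metrics[:, 0] + 0.01 * rng.standard_normal(100)

pca = PCA(n_components=0.95)  # keep components explaining 95% of variance
reduced = pca.fit_transform(metrics)
print("metrics reduced from", metrics.shape[1], "to", reduced.shape[1])
# The reduced components then feed the ANN or regression quality model.
```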
Another application of quality modeling is risk analysis. With risk analysis in the earlier phases of the SDLC¹, developers and project managers can determine error-prone modules and assign testing resources more accurately. Moreover, because this determination is performed in the earliest SDLC phases, testing and maintenance costs can be reduced remarkably. An ANN-based approach was recommended in [18] for classifying modules, based on module attributes and quality factors, into fault-prone and not fault-prone modules; developers can then concentrate on designing and testing the fault-prone modules more carefully.

A similar application of ANNs is testability. Testability is the probability that a test case is unable to find faults in a faulty module; to put it differently, it is the probability that a test case cannot find faults even though faults are present. Testers can use testability to find parts of the software that may hide errors. Because testability is a dynamic attribute of software, it is difficult to measure directly. A study of predicting testability with ANNs was conducted in [19]; its findings indicated that an ANN model of static measurements of the source code can predict module testability.

Another intelligent technique for software quality modeling is CBR. Khoshgoftaar and Seliya [5] presented a three-group quality classification technique using CBR. In this research, they applied a two-group classification method three times on a given data set; by combining these three iterations, it is possible to classify modules into any of the three groups. A two-group risk-analysis classifier divides modules into low-risk and high-risk, while a three-group classifier divides them into low-, medium- and high-risk modules. Figure 7 explains the process of a two-group risk classifier.

---
¹ Software Development Life Cycle: the process of developing a software-based system.

4. CONCLUSION
Some of these methods can be applied in any type of test, while others apply only to special kinds of testing, such as regression testing. Each method also has limitations stemming from the tools it uses: for example, ANN models of software cannot be accurate enough if the software behavior is non-deterministic, and an IFN model is applicable only if the application is data-oriented. Furthermore, testers must consider the overhead costs, extra knowledge and specialists needed to use such techniques. On the other hand, by comparing the costs of using and not using these methods, it becomes clear that automated approaches have a positive effect on reducing testing cost and increasing software quality. Finally, because each method is effective only for particular types of test, eliminating the human role in the testing process needs more study; full automation of the testing process requires more comprehensive methods applicable to any type of test. Consequently, more research is necessary in order to automate the whole testing process.

5. REFERENCES
{"Source-Url": "http://www.rezanet.com/Downloads/Intelligent%20and%20Automated%20Software%20Testing%20Methods%20Classification.pdf", "len_cl100k_base": 4501, "olmocr-version": "0.1.49", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18467, "total-output-tokens": 5649, "length": "2e12", "weborganizer": {"__label__adult": 0.000293731689453125, "__label__art_design": 0.00022590160369873047, "__label__crime_law": 0.0002779960632324219, "__label__education_jobs": 0.0008158683776855469, "__label__entertainment": 5.5730342864990234e-05, "__label__fashion_beauty": 0.00011837482452392578, "__label__finance_business": 0.0001423358917236328, "__label__food_dining": 0.00032067298889160156, "__label__games": 0.0007867813110351562, "__label__hardware": 0.0006284713745117188, "__label__health": 0.00034499168395996094, "__label__history": 0.0001277923583984375, "__label__home_hobbies": 5.799531936645508e-05, "__label__industrial": 0.00023317337036132812, "__label__literature": 0.00024628639221191406, "__label__politics": 0.0001327991485595703, "__label__religion": 0.0003044605255126953, "__label__science_tech": 0.00897979736328125, "__label__social_life": 6.347894668579102e-05, "__label__software": 0.00809478759765625, "__label__software_dev": 0.97705078125, "__label__sports_fitness": 0.00022935867309570312, "__label__transportation": 0.00023949146270751953, "__label__travel": 0.0001398324966430664}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26574, 0.01613]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26574, 0.72121]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26574, 0.9184]], "google_gemma-3-12b-it_contains_pii": [[0, 4480, false], [4480, 8416, null], [8416, 14005, null], [14005, 19420, null], [19420, 22942, null], [22942, 26574, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4480, true], [4480, 8416, null], [8416, 14005, null], [14005, 19420, null], [19420, 22942, null], [22942, 26574, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26574, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26574, null]], "pdf_page_numbers": [[0, 4480, 1], [4480, 8416, 2], [8416, 14005, 3], [14005, 19420, 4], [19420, 22942, 5], [22942, 26574, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26574, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
03ce0435ab8f10f74e6d14df1f1a17bb35b5b553
RapIoT toolkit: rapid prototyping of collaborative Internet of Things applications

Simone Mora, Francesco Gianni and Monica Divitini
Norwegian University of Science and Technology
Department of Computer and Information Science
Trondheim, Norway
Email: {simone.mora, francesco.gianni, monica.divitini}@idi.ntnu.no

Abstract—The Internet of Things holds huge promise to enhance collaboration in multiple application domains. By bringing Internet connectivity to everyday objects and environments it promotes ubiquitous access to information and integration with third-party systems. Further, connected "things" can be used as physical interfaces that enable users to cooperate, leveraging multiple devices via parallel and distributed actions. Yet creating prototypes of IoT systems is a complex task for non-experts because it requires dealing with multi-layered hardware and software infrastructures. We introduce RapIoT, a software toolkit that facilitates prototyping IoT systems by providing an integrated set of developer tools. Our solution abstracts low-level details and communication protocols, allowing developers to focus on the application logic and facilitating rapid prototyping. RapIoT supports the development of collaborative applications by enabling the definition of high-level data type primitives. RapIoT primitives act as a loosely-coupled interface between generic IoT devices and applications, simplifying the development of systems that make use of an ecology of devices distributed across multiple users and environments. We illustrate the potential of our toolkit by presenting the development process of an IoT system for crowd-sourcing of air quality data. We conclude by discussing the strengths and limitations of our platform, highlighting further possible uses for collaborative applications.

Index Terms—Internet of Things, IoT, Ubiquitous Computing, Development, Toolkit.

I. INTRODUCTION
The Internet of Things (IoT) holds huge promise to enhance computer-supported collaboration in several application domains. By enabling the seamless interconnection of people, computers, everyday objects and environments, it promotes collaboration off the screen, in our everyday routines. By increasing the amount and quality of information captured by connected objects, it might ultimately improve collaboration among the people using those objects [1]. Research works have shown how IoT systems can leverage connected objects in collaborative applications, for example to support patient/physician dialogue in chronic disease treatments [2], to foster social communication among friends and relatives [3], to enhance collaboration in crisis management [4] and to support citizens' participation in public administrations [5]. Yet, since the term Internet of Things was coined in 1999 by technologist Kevin Ashton [6], research has mainly focused on developing machine-centric infrastructures that enable connected things to exchange information over the Internet. Few works [1], [7] have investigated how IoT can enable collaboration and how HCI theory could drive the development of IoT collaborative systems. Likewise, only a few works have investigated collaborative IoT application authoring [8] and how to involve non-experts in design activities [9], [10]. We summarise the characteristics of IoT systems that can support the development of collaborative applications in four areas.
- Ubiquitous access to information – IoT's focus on connecting everyday objects using short-range wireless networks multiplies the number of points of access to information that can be used to support collaboration
- Integration with third-party systems – IoT makes use of web standards and cloud computing as base technologies [11], enabling integration with established information systems and knowledge bases
- Physical user interfaces – IoT can leverage physical and embodied interaction approaches to interact with the "things". Using physical affordances to interact with computer systems has proved successful in supporting collaboration [12, p. 97]
- Interactions spread among multiple things – the user experience with IoT is usually distributed over an ecology of devices, providing more opportunities for collaboration via distributed user actions performed on multiple interfaces.

Notably, while the first two characteristics focus on the internet and low-level technology aspects of the IoT, the latter two focus on the thing aspects, in terms of behaviors and user interfaces.

Prototyping IoT systems is challenging because it requires dealing with a heterogeneous mix of hardware and software components arranged in a multi-layer architecture. A popular design pattern consists of three layers:

- an embedded layer, implemented as a physical object augmented with sensors, actuators and short-range wireless connectivity, provides sensing and user interface capabilities
- a gateway layer, implemented as a device such as a smartphone or WiFi router, provides connectivity to the embedded layer, enabling ubiquitous access to information
- a server layer, implemented as a cloud service, enables data storage and integration with third-party services.

As an example, popular wearable fitness trackers feature a pedometer sensor with a simple user interface showing the number of steps counted or calories burned (embedded layer), a cloud service for aggregating data from multiple users (server layer), and a smartphone app acting both as the connection between the device and the server layer and as an extended user interface to compare data with other users (gateway layer) (Figure 1). This architectural pattern can be used to implement applications that support collaboration at multiple layers, e.g. by means of both personal and shared devices, which are granted ubiquitous access to information via an infrastructure of multiple gateways.

Implementing such an architecture in working prototypes has long required large efforts and a multidisciplinary team. Our research aims at supporting rapid prototyping and enabling non-experts to build IoT systems. On the one hand, we aim at lowering the threshold of skills required to build prototypes; on the other, we aim at raising the ceiling, providing extended tools and hacking opportunities to build complex ecosystems. Although a number of tools are available to support IoT development, those tools often (i) do not offer integrated support for multiple architectural layers, (ii) require pre-existing knowledge of hardware development or embedded programming, and (iii) are bound to specific hardware and vendor-locked technologies. This results in a steep learning curve and long integration times, hindering the ability to rapidly explore design choices by iteratively implementing functioning prototypes. In this paper we present RapIoT: an integrated set of tools to support rapid prototyping of IoT applications.
RapIoT does not explicitly support a specific application domain, acting instead as an enabling technology for the development of collaborative applications by non-experts such as makers, designers and students. In this perspective, RapIoT enables the definition, implementation and manipulation of high-level data type primitives. RapIoT primitives abstract low-level implementation details and provide a loosely-coupled interface between the different architectural layers. Data type primitives facilitate the development of collaborative applications in two ways. First, they act as a loosely coupled interface between devices and applications, allowing devices to serve different applications without reprogramming the embedded layer. Second, they allow the application logic to be centralised in the server layer, offered as a platform as a service, thus simplifying the development of systems that make use of an ecology of devices distributed across multiple users and environments. In the following sections an analysis of existing IoT frameworks and toolkits is provided; the RapIoT approach is then described in detail, addressing the technical implementation and its flexibility in relation to different application domains. We then discuss the strengths and weaknesses of our approach and conclude the paper by highlighting future work.

II. RELATED WORKS
Several works have provided tools to facilitate the development of IoT systems by non-experts. Besides relying on standard protocols and APIs that allow mutual integration, each tool often focuses on supporting a specific architectural layer. The knowledge required to use each tool also varies according to the level of abstraction it provides and the complexity of the applications that can be achieved. In the remainder of this section we survey development toolkits that can be used for IoT prototyping, considering the barriers that hinder their adoption by non-experts.

A. Development toolkits
In this section we review tools that can be used to support the development of the embedded, gateway and server layers of an IoT infrastructure.

1) Embedded layer: Embedded devices often require programming in low-level procedural languages, which are usually oriented towards production rather than prototyping. On the other side, designers and software developers are usually familiar with high-level, object-oriented programming languages (for example web scripting). For these reasons, development tools often provide high-level programming abstractions, either as simple proprietary textual or visual languages or as APIs. Arduino is a popular prototyping platform that includes both a microcontroller-based board, to which sensors and actuators can be wired, and a software library created to simplify writing code without limiting flexibility [13]. The Arduino library spares developers from learning microcontroller-specific instructions or electronics. Modkit [14] extends the Arduino platform with a block-based visual programming language based on the Scratch project [15], further expanding the Arduino target users to non-professional developers such as kids and artists. Focusing on developing interfaces based on simple input/output feedback, Bloctopus [16] provides a platform based on modules with sensor/actuator couplings and a hybrid visual and textual programming language. Developers can model the behavior of the system taking advantage of both simple visual abstractions and powerful textual commands.
2) Gateway layer: Several research works have focused on the gateway layer of the IoT infrastructure. Developing gateways that provide internet connectivity to resource-constrained embedded devices is particularly limiting for non-experts, because it requires pre-existing knowledge of low-level technologies like transport protocols and wireless networks. McGrath et al. [17] simplify the development and deployment of internet gateways for Bluetooth Low Energy (BLE) devices by abstracting the complexity of dealing with multiple languages and networking aspects. Rather than invoking BLE commands on each local device, their platform provides a proxy that accesses multiple devices via a centralised API. Yet this approach still requires pre-existing knowledge of the BLE protocol; moreover, the development of firmware for the embedded layer, to provide custom abstractions or primitives to the programmer, is not specifically addressed. Zhu et al. [18] address the development of a gateway for ZigBee wireless devices. Their system is based on three layers: perception, transmission, and application. IoT devices can be controlled and accessed remotely, and the gateway handles the conversion between different data protocols. However, this solution implies that only the parent node is connected to the network, and child nodes are not directly accessible through a unique IP address.

3) Server layer: The server layer is the core element that manages the IoT devices connected via multiple gateways, as well as the interaction with third-party web services such as data providers or social networks. The PatRICIA framework [19] leverages a programming model and a cloud-based execution environment to reduce complexity and support the scalable development of IoT applications. The solution, however, focuses on providing sensor management in a cloud environment and on storing data received from connected devices, neglecting interaction with third-party solutions. It also neglects the management of connected devices through an API, focusing instead on reading and combining data from different sources. Each device is directly connected to the cloud through the MQTT¹ protocol, excluding mobile and low-powered IoT devices. Similarly, the framework developed by Khodadadi et al. [20] focuses on connecting data sources by managing the querying and filtering of data and facilitating sharing with third-party platforms. Their work takes into account data gathering from multiple sources, both sensor networks and other web applications (blogs, social media, databases). Users are provided with an API for configuring data sources and for triggering actions within stand-alone applications. Kovatsch et al. [21] describe a similar higher-level architecture. They address the need for an API that lets connected devices push and retrieve data. The proposed solution, which builds on the CoAP² protocol, enables devices to publish data to third-party servers, but does not support the bi-directional exchange of events in real time.

¹ MQTT protocol specifications - http://mqtt.org
² CoAP protocol specifications - http://coap.technology

B. Non-experts as IoT developers
RapIoT builds on the strengths of Arduino and extends a similar approach to the IoT world. Developers interested in building applications are offered a set of primitives that are tailored to the affordances of the IoT hardware in use but, at the same time, share a common semantic structure and are used in the same way when coding the application logic.
Another point in common is the abstraction of vendor-specific programming mechanisms: like the Arduino user, who is not required to know the type and producer of the microcontroller, RapIoT users are not required to know any hardware- or software-related detail of the IoT devices. The user only needs to be aware of the set of primitives defined and available for development.

III. RapIoT Fundamentals

A. Design Goals
RapIoT aims at providing holistic support for the development of IoT systems. The following design goals constitute the foundation of our platform.

Support both novice and expert developers – provide a simple, easy-to-use programming environment without hindering expert users in building complex systems

Decoupling infrastructure from application – the IoT infrastructure is provided as a service to applications; in this way the infrastructure (IoT devices, gateways and server) can be reused across different applications with little or no change

Hide hardware complexities – provide high-level representations of low-level embedded hardware details

Hide networking details – spare developers from implementing connection and data transfer protocols

Generic embedded devices – enable the development of applications that make use of simple, low-end devices, regardless of manufacturer

Multiple embedded devices – enable the development of IoT systems that make use of multiple devices collaborating as a structured ecology

Mobile devices – support the development of IoT systems with mobile devices, e.g. wearables.

We believe that these design goals can be achieved by putting data primitives at the center of the platform; we therefore provide tools to support the development and use of primitives across the different layers.

B. Input/Output Primitives
RapIoT supports the development of collaborative applications by enabling the definition, implementation and manipulation of high-level data type primitives. A RapIoT input primitive is a discrete piece of information sensed by an IoT device, for example a data point captured by a sensor or a manipulation performed via a user interface. An **output primitive** is an action that can be performed by the IoT device via output features such as actuators or displays, for example a motor spinning or an LED (Light Emitting Diode) blinking. Primitives act as a loosely coupled interface between embedded devices and one or more application logics. Each primitive encapsulates a data type plus up to two optional parameters as payload. An example of an input primitive is "AirQuality (primitive name), city center (parameter 1), low (parameter 2)" for an air quality sensor device, or "FrontDoor, knocked" for a smart home equipped with an accelerometer on the front door. Conversely, "Necklace, vibrate" represents an output primitive that issues a vibrate command to a necklace equipped with a haptic motor.

The role of primitives is twofold. On one side they provide an event-driven approach to programming; on the other they facilitate collaboration among developers working on different IoT layers by providing simple constructs for describing the data exchanged between embedded devices and applications. Furthermore, they allow non-experts to think in terms of high-level abstractions without dealing with hardware complexities, e.g. "shake, clockwise rotation, free fall" for physical manipulations recognised from accelerometer data.
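To make the notion concrete, here is a hypothetical sketch (Python; the field names and the `direction` flag are our own illustration, not taken from the RapIoT sources) of the structure a primitive carries: a name plus up to two optional parameters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Primitive:
    name: str                     # e.g. "AirQuality" or "LED"
    param1: Optional[str] = None  # e.g. "city center" or "green"
    param2: Optional[str] = None  # e.g. "low"
    direction: str = "input"      # "input" (sensed) or "output" (actuated)

# The examples from the text, expressed as primitive instances:
air = Primitive("AirQuality", "city center", "low")
knock = Primitive("FrontDoor", "knocked")
buzz = Primitive("Necklace", "vibrate", direction="output")
```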
The definition and implementation of primitives is performed by programming the firmware of an Arduino-compatible device in order to register the primitives. The primitives are then available to the framework, and the developer can implement the low-level hardware details, for example dealing with accelerometer or GPS sensors as well as motor or display actuators. Primitives not only support simple input/output operations; they can also encapsulate more complex behavior to support the development of physical interfaces, as illustrated in [10]. An example of an HCI primitive introduced in [10] is the "proximity" input primitive. This primitive does not encapsulate sensor data from the surrounding environment; it is triggered when one or more IoT devices are moved close to one another. It is available on devices that have the on-board hardware to support the functionality (i.e. RFID antennas and tags). Primitives specific to each device can be implemented using the **RapEmbedded** library running on Arduino boards. Instances of primitives are propagated by the **RapMobile** smartphone app, acting as gateway, and are accessible from client applications via a simple API provided by **RapCloud**.

**C. Architecture**
RapIoT is composed of: (i) **RapEmbedded**, an Arduino library supporting the definition and implementation of input and output primitives on embedded hardware devices; (ii) **RapMobile**, a cross-platform mobile app that acts as an internet gateway and allows users to discover and configure IoT devices; (iii) **RapCloud**, a cloud service, API and JavaScript library supporting the development of applications that interact with IoT devices. In the following section we illustrate how RapIoT can be employed to create a simple IoT application.

**IV. CREATING RAPIOT APPLICATIONS**
The development of an IoT application using RapIoT is a five-step process. The first three steps entail application development; the last two involve application appropriation by end users.

1) **Device development** – (i) building a hardware prototype of an IoT device using electronic components on a BLE-enabled, Arduino-compatible board, and (ii) using the **RapEmbedded** library to register and implement input/output primitives
2) **Application development** – coding the application features using the APIs and libraries provided by **RapCloud**; input and output primitives are employed here as programming constructs
3) **Application deployment** – uploading the application code to **RapCloud** using a web interface
4) **Device appropriation** – wireless discovery of the prototype built in step 1 using the **RapMobile** smartphone app
5) **Application appropriation** – selecting an application previously uploaded to RapCloud and running it using the RapMobile app.

The steps and their relation to the RapIoT components are reported in Figure 3. To describe the development process of RapIoT applications we introduce as a running example the development of *Breathe Better Air*, an IoT system to engage citizens in monitoring air quality in their neighbourhoods. This is a collaborative application that relies on individual contributors to generate community-wide awareness about air quality in the city. The Breathe Better Air (BBA) system makes use of an IoT device to sense air quality information and provide visual feedback to the users (prototype in Figure 4).
The device sends data to a server (developed with RapCloud), which computes the average air quality in a city using the data provided by all BBA users. The BBA device then provides visual warnings, using green and red LEDs, to show whether the air quality captured by the device is above or below the average value reported by the other users. In the following we describe the BBA application development and deployment process.

A. Device development
Device development involves hardware and firmware development. Hardware development involves plugging together electronic components on an Arduino-compatible board (Figure 4). To date, RapEmbedded supports a number of development platforms featuring an Arduino-compatible microcontroller and a Bluetooth Low Energy (BLE) chip, such as RFduino³ and Simblee⁴ boards. RapEmbedded does not pose limitations on the type of sensors and actuators connected. Firmware development requires writing Arduino code that interfaces with the hardware to generate and consume primitives. The RapEmbedded library provides functions to: (i) register device types, enabling dynamic application/device coupling and thus simple application appropriation by end users; (ii) register primitive definitions, giving the name of the primitive, its type (input or output) and the names of up to two optional parameters; and (iii) code the conditions under which primitives are triggered, in the case of input primitives, or consumed, in the case of output primitives.

Following our example, the BBA prototype is assembled using an air quality sensor, an RGB LED and an RFduino board (Figure 4). After having installed the RapEmbedded library in the Arduino IDE, the device developer registers the BBAdevice device type and defines one input primitive, AirQuality, and one output primitive, LED. The AirQuality primitive models air quality levels; it is triggered by readings continuously provided by the sensor and has one QLevel parameter that can assume the "Low Quality" or "High Quality" states. The LED output primitive provides a color parameter that can assume the "green" and "red" states, controlling an LED to light up in different colors.

```
RIOTe.regDeviceType("BBAdevice");
RIOTe.regPrimitive(in, "Air", "Quality");
RIOTe.regPrimitive(out, "LED", "Color");
```

Finally, the developer codes the loop of conditions under which the input primitive is triggered, according to the readings from the air quality sensor, and implements how the output primitives are consumed by issuing commands that light the LED up in different colors.

```
if (CO2Sensor.read() > threshold)
  RIOTe.trigger("Air", "Low");
else
  RIOTe.trigger("Air", "High");

RIOTe.when("LED", "green", callbG);
RIOTe.when("LED", "red", callbR);

void callbG() { digitalWrite(greenPin, HIGH); }
void callbR() { digitalWrite(redPin, HIGH); }
```

³ http://rfduino.com
⁴ http://simblee.com

After the firmware is developed and deployed, each BBA device is autonomous and ready to establish a connection with RapCloud to send and receive primitives (via the RapMobile app acting as gateway, as described later).

B. Application development and deployment
After the primitives are defined and implemented in an (Arduino-compatible) firmware, they are available to application developers from a centralised cloud environment via the RapCloud API.
In order to facilitate writing applications, we also developed a JavaScript library acting as a wrapper for the functionality provided by the RapCloud API. Back to our BBA example, the application developer proceeds to code the application logic. First she registers the application name and the type of device required. Then she codes the application logic: whenever the AirQuality primitive is received, its QLevel value is stored in a database (DB). The DB is then queried for the average air quality value computed from the readings provided by multiple BBA devices. If the current QLevel value is higher than the average, an output primitive is issued to turn the LED on the BBA device green; otherwise the LED is turned red (current air quality lower than the average):

```javascript
var bba = rIoT.regApp("BBA", "BBAdevice");
bba.when("Air", function () {
  DB.add(bba.Air.Quality);                 // store the reported quality level
  if (bba.Air.Quality > DBStore.Average)   // better than the community average
    bba.trigger(LED.green);
  else
    bba.trigger(LED.red);
});
```

As a final step, the application developer uploads the source code to the RapIoT cloud server using a dedicated web form. The BBA application is now available to end users.

C. Device and Application appropriation
End users are provided with the RapMobile app, compatible with Android and iOS devices. RapMobile mainly acts as the gateway layer between IoT devices (implemented with RapEmbedded) and the RapCloud service; it also allows users to select and activate applications previously registered with RapIoT. In order to run the BBA application the user performs four steps. First, the user installs the RapMobile app on her smartphone. Second, she selects the BBA application among the ones available. Third, she discovers and selects the BBA device she wants to associate with the BBA application from the list of Bluetooth devices available nearby. Fourth, she starts the application. While the application is running the phone can be set to standby mode, but it should remain within 10 meters of the BBA device to ensure reliable data transfer. The GUI supporting the appropriation and execution of BBA is shown in Figure 5.

V. IMPLEMENTATION
RapIoT is built on top of the MQTT and CoAP protocols. Primitives are coded as JSON-formatted messages that contain a unique identifier of the device (currently implemented as its MAC address), followed by the identifier of the primitive and the two optional parameters. Primitives are exchanged between IoT devices and applications on an event-driven basis. The event protocol is very lightweight and designed for low-resource embedded devices, since the information required to route primitives from each device to the applications is offloaded to the gateway and server layers. This design choice spares hardware and application developers from implementing event routing: each hardware module can be unequivocally controlled by an application connected to the API, no matter where the application or the hardware is deployed. Application developers only need to handle the input primitives received from the hardware modules and send output primitives to those devices, without needing to know how the modules implement the actual recognition and actuation of primitives. Our platform employs a broadcast-based architecture in which all embedded devices interact with a common (wireless) communication channel where messages are broadcast over the MQTT protocol. This architecture enables the reuse of deployed devices across different applications without changing the firmware.
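As a rough illustration of this design (Python; the JSON field names are assumptions, and the in-process `channel` list stands in for the MQTT broker used by the real system), the sketch below shows a primitive serialised as a JSON message and an application filtering the shared broadcast channel by primitive name.

```python
import json

channel = []  # stand-in for the shared MQTT topic

def publish(device, primitive, param1=None, param2=None):
    """Broadcast a primitive as a JSON message on the shared channel."""
    channel.append(json.dumps({"device": device, "primitive": primitive,
                               "param1": param1, "param2": param2}))

def app_logic(payload):
    """Application-side dispatch: react only to the primitives of interest."""
    event = json.loads(payload)
    if event["primitive"] == "AirQuality":
        print("reading from", event["device"], "->", event["param2"])

publish("AA:BB:CC:DD:EE:FF", "AirQuality", "city center", "low")
for payload in channel:
    app_logic(payload)
```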
Furthermore, hardware modules can be discovered, attached or removed from the platform while clients are still running; special system-wide events inform connected clients of the availability of new devices in real time.

The current implementation has several limitations. The web interface for uploading application code to RapCloud is still under development, although it is possible to launch applications manually from a command-line interface. Likewise, RapMobile does not yet fully support the selection and execution of applications (steps 1 and 3 in Figure 5), requiring developers to hard-code device MAC addresses.

VI. DISCUSSION
In this section we analyse how RapIoT can drive the development of collaborative applications, and we discuss its strengths and limitations.

A. Support for collaboration
Our approach to IoT system development embeds mechanisms that facilitate the authoring of collaborative applications. Primitives have proved to be a flexible construct that breaks interaction routines and data flows down into simpler blocks, which can then be combined when writing the application logic. The RapIoT toolkit presents three fundamental features that help in developing collaborative systems:

- **Support for multiple devices** – RapIoT supports applications that make use of several devices connected to the same gateway (the RapMobile app). This allows multiple users to interact with several devices placed in the same environment, governed by a centralised application logic running on the RapCloud server. Collaborative applications thus become a concrete possibility: users can cooperate, interacting with different devices towards a common goal;
- **HCI primitives for physical interaction** – some primitives rely on composite actions and events involving more than one physical device. It is possible to design and implement applications that support time coordination, sequential actions, proximity and other forms of cooperative practice that characterize coordinated ecologies of devices;
- **Distributed gateways and devices** – applications developed with RapIoT can use several gateways physically located in different places, each of which controls a group of devices. This opens up several possible scenarios: (i) groups of users can move from site to site where different groups of IoT devices are located and perform collaborative tasks involving the IoT devices on each site, e.g. a collaborative treasure hunt game; (ii) users can carry one or more IoT devices connected to their smartphone and perform tasks or collect data in the environment, remotely cooperating with other users following the same workflow at a different site; (iii) users can move from gateway to gateway performing subsets of tasks involving different IoT devices, remotely collaborating with other users that follow the same process at other sites, with different IoT devices.

B. Limitations
The RapIoT architecture does not include any application logic coded into the IoT devices (embedded layer). Since primitives have to complete a full round trip from the embedded layer to the application layer, network latency can be a significant factor affecting performance and application responsiveness. Network quality and availability are crucial for the entire period during which the application is in use. This limitation can be particularly amplified when the application layer deals with batches of primitives in rapid sequence.
In these cases, most of the execution time is spent waiting for the network, which can hinder the user experience. Another possible limitation is connected to the concept of a primitive: for some applications the behavior to encapsulate in a primitive may be too complex to expose through an interface as simple as the one provided by input/output primitives. This restriction can be partially mitigated by splitting the logic into two or more primitives, with the drawback of delegating more work to the network.

VII. CONCLUSIONS
In this paper we presented the RapIoT toolkit for rapid prototyping of IoT applications. The development process of a RapIoT application has been demonstrated by describing how the provided tools were applied to the development of a system for crowdsourcing air quality data. RapIoT leverages the concept of data primitives as a communication block and interface between generic devices and application layers. Further, we highlighted how RapIoT primitives can support the development of collaborative applications via multiple embedded devices, physical interfaces and distributed gateways. RapIoT takes advantage of, and builds on, the most recent technological evolutions in the field, such as the Arduino platform, cloud computing, BLE radios and mobile applications, reducing complexity and entry barriers for non-experts. Future work will be oriented towards testing and refining the tools composing the system, as well as developing more complex applications and collaboration-specific primitives.

REFERENCES
{"Source-Url": "http://simonemora.com/papers/conference/2016_RapIoT.pdf", "len_cl100k_base": 6146, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 25036, "total-output-tokens": 7042, "length": "2e12", "weborganizer": {"__label__adult": 0.00037741661071777344, "__label__art_design": 0.000537872314453125, "__label__crime_law": 0.00039577484130859375, "__label__education_jobs": 0.0006842613220214844, "__label__entertainment": 8.89897346496582e-05, "__label__fashion_beauty": 0.00020945072174072263, "__label__finance_business": 0.00022542476654052737, "__label__food_dining": 0.00041294097900390625, "__label__games": 0.0006723403930664062, "__label__hardware": 0.004611968994140625, "__label__health": 0.0008287429809570312, "__label__history": 0.0004072189331054687, "__label__home_hobbies": 0.00013935565948486328, "__label__industrial": 0.0006380081176757812, "__label__literature": 0.00022232532501220703, "__label__politics": 0.00027298927307128906, "__label__religion": 0.0005865097045898438, "__label__science_tech": 0.16796875, "__label__social_life": 0.0001137852668762207, "__label__software": 0.01496124267578125, "__label__software_dev": 0.80419921875, "__label__sports_fitness": 0.00035262107849121094, "__label__transportation": 0.000888824462890625, "__label__travel": 0.00025725364685058594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34179, 0.00514]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34179, 0.52795]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34179, 0.90284]], "google_gemma-3-12b-it_contains_pii": [[0, 5052, false], [5052, 10155, null], [10155, 16058, null], [16058, 20877, null], [20877, 23722, null], [23722, 28264, null], [28264, 34179, null], [34179, 34179, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5052, true], [5052, 10155, null], [10155, 16058, null], [16058, 20877, null], [20877, 23722, null], [23722, 28264, null], [28264, 34179, null], [34179, 34179, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34179, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34179, null]], "pdf_page_numbers": [[0, 5052, 1], [5052, 10155, 2], [10155, 16058, 3], [16058, 20877, 4], [20877, 23722, 5], [23722, 28264, 6], [28264, 34179, 7], [34179, 34179, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34179, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
458264ca4d480d2effd92e292cdec121954f1765
Textual Description of annaffy

Colin A. Smith

April 30, 2024

Introduction
annaffy is part of the Bioconductor project. It is designed to help interface between Affymetrix analysis results and web-based databases. It provides classes and functions for accessing those resources both interactively and through statically generated HTML pages. The core functionality of annaffy depends on annotation contained in Bioconductor data packages. The data packages are created by the SQLForge code inside another package called AnnotationDbi. It gathers annotation data from many diverse sources and makes the information easily processed by R. Preconstructed packages for most Affymetrix chips are available on the Bioconductor web site.

1 Loading Annotation Data
annaffy represents each type of annotation data as a different class. Currently implemented classes include:

- aafSymbol: gene symbol
- aafDescription: gene description/name
- aafFunction: gene functional description
- aafChromosome: genomic chromosome
- aafChromLoc: location on the chromosome (in base pairs)
- aafGenBank: GenBank accession number
- aafLocusLink: LocusLink ids (almost never more than one)
- aafCytoband: mapped cytoband location
- aafUniGene: UniGene cluster ids (almost never more than one)
- aafPubMed: PubMed ids
- aafGO: Gene Ontology identifiers, names, types, and evidence codes
- aafPathway: KEGG pathway identifiers and names

For each class, there is a constructor function with the same name. It takes as arguments a vector of Affymetrix probe ids as well as the chip name. The chip name corresponds to the name of the data package that contains the annotation; if the data package for the chip is not already loaded, the constructor will attempt to load it. The constructor returns a list of the corresponding objects populated with annotation data. (NA values in the annotation package are mapped to empty objects.)

```r
> library("annaffy")
```

For the purpose of demonstration, we will use the hgu95av2.db metadata package and probe ids from the aafExpr dataset.

```r
> data(aafExpr)
> probeids <- featureNames(aafExpr)
> symbols <- aafSymbol(probeids, "hgu95av2.db")
> symbols[[54]]
[1] "ARVCF"
attr(,"class")
[1] "aafSymbol"

> symbols[55:57]
An object of class "aafList"
[[1]]
[1] "MRPS14"
attr(,"class")
[1] "aafSymbol"

[[2]]
[1] "TDRD3"
attr(,"class")
[1] "aafSymbol"

[[3]]
character(0)
attr(,"class")
[1] "aafSymbol"
```

All annotation constructors return their results as `aafList` objects, which act like normal lists but have special behavior when used with certain methods. One such method is `getText()`, which returns a simple textual representation of most annaffy objects. Note the differing ways annaffy handles missing annotation data.
```r
> getText(symbols[54:57])
[1] "ARVCF"  "MRPS14" "TDRD3"  ""
```

Other annotation constructors return more complex data structures:

```r
> gos <- aafGO(probeids, "hgu95av2.db")
> gos[[3]]
An object of class "aafGO"
[[1]]
An object of class "aafGOItem"
@id "GO:0005163"
@name "nerve growth factor receptor binding"
@type "Molecular Function"
@evid "IBA"

[[2]]
An object of class "aafGOItem"
@id "GO:0005515"
@name "protein binding"
@type "Molecular Function"
@evid "IPI"

[[3]]
An object of class "aafGOItem"
@id "GO:0005576"
@name "extracellular region"
@type "Cellular Component"
@evid "TAS"

[[4]]
An object of class "aafGOItem"
@id "GO:0005615"
@name "extracellular space"
@type "Cellular Component"
@evid "IBA"

[[5]]
An object of class "aafGOItem"
@id "GO:0005737"
@name "cytoplasm"
@type "Cellular Component"
@evid "ISS"

[[6]]
An object of class "aafGOItem"
@id "GO:0005788"
@name "endoplasmic reticulum lumen"
@type "Cellular Component"
@evid "TAS"

[[7]]
An object of class "aafGOItem"
@id "GO:0007169"
@name "transmembrane receptor protein tyrosine kinase signaling pathway"
@type "Biological Process"
@evid "IBA"

[[8]]
An object of class "aafGOItem"
@id "GO:0007399"
@name "nervous system development"
@type "Biological Process"
@evid "TAS"

[[9]]
An object of class "aafGOItem"
@id "GO:0007411"
@name "axon guidance"
@type "Biological Process"
@evid "TAS"

[[10]]
An object of class "aafGOItem"
@id "GO:0007416"
@name "synapse assembly"
@type "Biological Process"
@evid "IDA"

[[11]]
An object of class "aafGOItem"
@id "GO:0007422"
@name "peripheral nervous system development"
@type "Biological Process"
@evid "IBA"

[[12]]
An object of class "aafGOItem"
@id "GO:0007613"
@name "memory"
@type "Biological Process"
@evid "IBA"

[[13]]
An object of class "aafGOItem"
@id "GO:0008021"
@name "synaptic vesicle"
@type "Cellular Component"
@evid "IBA"

[[14]]
An object of class "aafGOItem"
@id "GO:0008083"
@name "growth factor activity"
@type "Molecular Function"
@evid "IBA"

[[15]]
An object of class "aafGOItem"
@id "GO:0010832"
@name "negative regulation of myotube differentiation"
@type "Biological Process"
@evid "ISS"

[[16]]
An object of class "aafGOItem"
@id "GO:0010976"
@name "positive regulation of neuron projection development"
@type "Biological Process"
@evid "ISS"

[[17]]
An object of class "aafGOItem"
@id "GO:0021675"
@name "nerve development"
@type "Biological Process"
@evid "IBA"

[[18]]
An object of class "aafGOItem"
@id "GO:0030424"
@name "axon"
@type "Cellular Component"
@evid "IBA"

[[19]]
An object of class "aafGOItem"
@id "GO:0030425"
@name "dendrite"
@type "Cellular Component"
@evid "IBA"

[[20]]
An object of class "aafGOItem"
@id "GO:0031547"
@name "brain-derived neurotrophic factor receptor signaling pathway"
@type "Biological Process"
@evid "TAS"

[[21]]
An object of class "aafGOItem"
@id "GO:0031550"
@name "positive regulation of brain-derived neurotrophic factor receptor signaling pathway"
@type "Biological Process"
@evid "TAS"

[[22]]
An object of class "aafGOItem"
@id "GO:0033138"
@name "positive regulation of peptidyl-serine phosphorylation"
@type "Biological Process"
@evid "IBA"

[[23]]
An object of class "aafGOItem"
@id "GO:0038180"
@name "nerve growth factor signaling pathway"
@type "Biological Process"
@evid "IBA"

[[24]]
An object of class "aafGOItem"
@id "GO:0043524"
@name "negative regulation of neuron apoptotic process"
@type "Biological Process"
@evid "IBA"

[[25]]
An object of class "aafGOItem"
@id "GO:0045664"
@name "regulation of neuron differentiation"
@type "Biological Process"
@evid "IBA"

[[26]]
An object of class "aafGOItem"
@id "GO:0048471"
@name "perinuclear region of cytoplasm"
@type "Cellular Component"
@evid "ISS"

[[27]]
An object of class "aafGOItem"
@id "GO:0048668"
@name "collateral sprouting"
@type "Biological Process"
@evid "IDA"

[[28]]
An object of class "aafGOItem"
@id "GO:0048672"
@name "positive regulation of collateral sprouting"
@type "Biological Process"
@evid "IBA"

[[29]]
An object of class "aafGOItem"
@id "GO:0048672"
@name "positive regulation of collateral sprouting"
@type "Biological Process"
@evid "IDA"

[[30]]
An object of class "aafGOItem"
@id "GO:0048812"
@name "neuron projection morphogenesis"
@type "Biological Process"
@evid "IBA"

[[31]]
An object of class "aafGOItem"
@id "GO:0050804"
@name "modulation of chemical synaptic transmission"
@type "Biological Process"
@evid "IBA"

[[32]]
An object of class "aafGOItem"
@id "GO:0051965"
@name "positive regulation of synapse assembly"
@type "Biological Process"
@evid "IDA"

[[33]]
An object of class "aafGOItem"
@id "GO:1900122"
@name "positive regulation of receptor binding"
@type "Biological Process"
@evid "IDA"

[[34]]
An object of class "aafGOItem"
@id "GO:2000008"
@name "regulation of protein localization to cell surface"
@type "Biological Process"
@evid "TAS"

[[35]]
An object of class "aafGOItem"
@id "GO:2001234"
@name "negative regulation of apoptotic signaling pathway"
@type "Biological Process"
@evid "ISS"
```

The gene ontology constructor, aafGO(), returns aafLists of aafGO objects, which are in turn lists of aafGOItem objects. Within each of those objects, there are four slots: id, name, type, and evidence code. The individual slots can be accessed with the @ operator.

```r
> gos[[3]][[1]]@name
[1] "nerve growth factor receptor binding"
```

If the reader is not already aware, R includes two subsetting operators, which can be the source of some confusion at first. Single brackets (`[]`) always return an object of the same type that they are used to subset. For example, using single brackets to subset an aafList will return another aafList, even if it only contains one item. On the other hand, double brackets (`[[]]`) return just a single item which is not enclosed in a list. Thus the above statement first picks out the third aafGO object, then the first aafGOItem, and finally the name slot.

2 Linking to Online Databases

One of the most important features of the annaffy package is its ability to link to various public online databases. Most of the annotation classes in annaffy have a getURL() method which returns single or multiple URLs, depending on the object type.

The simplest annotation class which produces a URL is aafGenBank. Because Affymetrix chips are generally based on GenBank sequences, all probes have a corresponding GenBank accession number, even those missing other annotation data. The GenBank database provides information about the expressed sequence that the Affymetrix chip detects. Additionally, it helps break down the functional parts of the sequence and provides information about the authors that initially sequenced the gene fragment.

```r
> gbs <- aafGenBank(probeids, "hgu95av2.db")
> getURL(gbs[[1]])
```

In most distributions of R, you can open URLs in your browser with the `browseURL()` function. Many other types of URLs are also possible. Entrez Gene (formerly LocusLink) is a very useful online database that links to many other data sources not referenced by Bioconductor. One worthy of note is OMIM, which provides relatively concise gene function and mutant phenotype information.
```r
> lls <- aafLocusLink(probeids, "hgu95av2.db")
> getURL(lls[[2]])
```

If you are interested in exploring the area of the genome surrounding a probe, the `aafCytoband` class provides a link to NCBI's online genome viewer. The view includes adjacent genes and other genomic annotations.

```r
> bands <- aafCytoband(probeids, "hgu95av2.db")
> getURL(bands[[2]])
```

For primary literature information about a gene, use the `aafPubMed` class. It will provide a link to a list of abstracts on PubMed which describe the gene of interest. The list of abstracts that Bioconductor provides is by no means complete and will sometimes only include the paper in which the gene was cloned.

```r
> pmids <- aafPubMed(probeids, "hgu95av2.db")
> getURL(pmids[[2]])
```

A number of interesting queries can be done with the gene ontology class. You can display the gene ontology family hierarchy for an entire probe id at once, including multiple GO ids. The usefulness of such a query may be dubious, but it is possible.

```r
> getURL(gos[[1]])
[1] "http://amigo.geneontology.org/amigo/term/GO:0001501%0aGO:0001501%0aGO:0005515%0aGO:0005576"
```

You can also show the family hierarchy for a single GO id.

```r
> getURL(gos[[1]][[4]])
```

The last link type of note is that for KEGG pathway information. Most genes are not annotated with pathway data. However, for those that are, it is possible to retrieve schematics of the biochemical pathways a gene is involved in; in the linked pathway diagram, the enzyme in question is highlighted in red.

```r
> paths <- aafPathway(probeids, "hgu95av2.db")
> getURL(paths[[4]])
```

3 Building HTML Pages

In addition to using `annaffy` interactively through R, it may also be desirable to generate annotated reports summarizing your microarray analysis results. Such a report can be utilized by a scientist collaborator with no knowledge of either R or Bioconductor. Additionally, by having all the annotation and statistical data presented together on one page, connections between and generalizations about the data can be made in a more efficient manner.

The primary intent of the `annaffy` package is to produce such reports in HTML. It can also easily format the same report as tab-delimited text for import into a table, spreadsheet, or database. It supports nearly all the annotation data available through Bioconductor, and it has facilities for including and colorizing user data in an informative manner.

The rest of this tutorial will make use of an `ExpressionSet` generated for demonstration purposes. It contains anonymized data from a microarray experiment which used the Affymetrix hgu95av2 chip. There are eight total samples in the set: four control samples and four experimental samples. 250 expression measures were selected at random from the results, and another 250 probe ids were selected at random and assigned to those expression measures. The data therefore has no real biological significance, but can still fully show the capabilities of `annaffy`.

3.1 Limiting the Results

HTML reports generated by `annaffy` can grow to become quite large unless some measures are taken to limit the results. Multi-megabyte web pages are unwieldy and should thus be avoided. Doing a ranked statistical analysis is one way to limit results, and will be shown here. We will rank the expression measures by putting their two-sample Welch t-statistics in order of decreasing absolute value.
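For reference, the Welch two-sample t-statistic divides the difference of the group means by the unpooled standard error, so no equal-variance assumption is needed. The following minimal base-R sketch (the `welch_t` helper and the toy vectors are illustrative, not part of annaffy) shows the quantity that `mt.teststat()` computes for each probe row:

```r
# Welch two-sample t-statistic:
# (mean(x) - mean(y)) / sqrt(var(x)/nx + var(y)/ny)
welch_t <- function(x, y) {
  (mean(x) - mean(y)) / sqrt(var(x) / length(x) + var(y) / length(y))
}

# Toy expression values for one probe: four control vs. four experimental samples
welch_t(c(5.1, 4.8, 5.3, 5.0), c(6.2, 6.0, 5.9, 6.4))
```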
The first step is to load the `multtest` package which will be used for the t-test. (It is also part of the Bioconductor project.)

```r
> library(multtest)
```

The `mt.teststat()` function requires a vector that specifies which samples belong to the different observation classes. Using a few R tricks, that vector can be produced directly from the first covariate of `pData`.

```r
> class <- as.integer(pData(aafExpr)$covar1) - 1
> class
[1] 0 0 0 0 1 1 1 1
```

Using the class vector, we calculate the t-statistic for each of the probes. We then generate an index vector which can be used to order the probes by decreasing absolute t-statistic. As a last step, we produce the vector of ranked probe ids. Later annotation steps will only use the first 50 of those probes.

```r
> teststat <- mt.teststat(exprs(aafExpr), class)
> index <- order(abs(teststat), decreasing = TRUE)
> probeids <- featureNames(aafExpr)[index]
```

3.2 Annotating the Probes

Once there is a list of probes, annotation is quite simple. The only decision that needs to be made is which classes of annotation to include in the table. Including all the annotation classes, which is the default, may not be a good idea. If the table grows too wide, its usefulness may decrease. To see which columns of data can be included, use the `aaf.handler()` function. When called with no arguments, it returns the annotation types `annaffy` can handle.

```r
> aaf.handler()
 [1] "Probe"               "Symbol"              "Description"
 [4] "Chromosome"          "Chromosome Location" "GenBank"
 [7] "LocusLink"           "Cytoband"            "PubMed"
[10] "Gene Ontology"       "Pathway"
```

To help avoid typing errors, subset the vector instead of retyping each column name.

```r
> anncols <- aaf.handler()[c(1:3, 8:9, 11:13)]
> anncols
[1] "Probe"       "Symbol"      "Description" "Cytoband"    "PubMed"
[6] "Pathway"     NA            NA
```

Figure 1: Graphical display for selecting annotation data columns.

This may be too many columns, but it is possible at a later stage to choose to either not show some of the columns or remove them altogether. Note that by using the widget=TRUE option in the next function, it is also possible to select data columns with a graphical widget. See Figure 1.

Now we generate the annotation table with the aafTableAnn() function. Note that for this tutorial, annaffy is acting as its own data package. If you wish to annotate results for other chips, download the appropriate data package from the Bioconductor website.

```r
> anntable <- aafTableAnn(probeids[1:50], "hgu95av2.db", anncols)
```

To see what has been produced so far, use the saveHTML() method to generate the HTML report. Using the optional argument open=TRUE will open the resulting file in your browser.

```r
> saveHTML(anntable, "example1.html", title = "Example Table without Data")
```

3.3 Adding Other Data

To add other data to the table, just use any of the other table constructors to generate your own table, and then merge the two. For instance, listing the t-test results along with the annotation data is quite useful. annaffy provides the option of colorizing signed data, making it easier to assimilate.

```r
> testtable <- aafTable("t-statistic" = teststat[index[1:50]], signed = TRUE)
> table <- merge(anntable, testtable)
```

After HTML generation, a one-line change to the style sheet header will change the colors used to show the positive and negative values. In fact, with a bit of CSS it is possible to heavily customize the appearance of the tables very quickly, even on a column-by-column basis. annaffy also provides an easy way to include expression data in the table.
It colorizes the cells with varying intensities of green to show relative expression values. Additionally, because of the way merge works, it will always match probe id rows together, regardless of their order. This allows a quick "sanity check" on the other statistics produced, and can help decrease user error. (Check, for example, that the t-statistics and ranking seem reasonable given the expression data.)

```r
> exprtable <- aafTableInt(aafExpr, probeids = probeids[1:50])
> table <- merge(table, exprtable)
> saveHTML(table, "example2.html", title = "Example Table with Data")
```

Producing a tab-delimited text version uses the `saveText()` method. The text output also includes more digits of precision than the HTML.

```r
> saveText(table, "example2.txt")
```

4 Searching Metadata

Often a biologist will make hypotheses about changes in gene expression either before or after the microarray experiment. In order to facilitate the formulation and testing of such hypotheses, `annaffy` includes functions to search annotation metadata using various criteria. All search functions return character vectors of Affymetrix probe ids that can be used to subset data and annotation.

4.1 Text

The two currently implemented search functions are simple and easy to use. The first is a text search that matches against the textual representation of biological metadata. Recall that textual representations are extracted using the `getText()` method. For complex annotation structures, the textual representation can include a variety of information, including numeric identifiers and textual descriptions. For the purposes of demonstration, we will use the `hgu95av2.db` annotation data package available through Bioconductor.

```r
> library(hgu95av2.db)
> probeids <- ls(hgu95av2SYMBOL)
> gos <- aafGO(probeids[1:2], "hgu95av2.db")
> getText(gos)
[1] "GO:0004663: Rab geranylgeranyltransferase activity, GO:0004663: Rab geranylgeranyltransferase activity, ... GO:0018344: protein geranylgeranylation, GO:0031267: small GTPase binding, GO:0036211: protein modification process"
```

The textual search is probably best applied to the `Symbol`, `Description`, and `Pathway` metadata types. (A specialized Gene Ontology search will be discussed later.) For instance, to find all the kinases on a chip, simply do a text search of `Description` for "kinase".

```r
> kinases <- aafSearchText("hgu95av2.db", "Description", "kinase")
> kinases[1:5]
[1] "1000_at"   "1001_at"   "1007_s_at" "1008_f_at" "101_at"
> print(length(kinases))
[1] 15
```

One can search multiple metadata types with multiple queries all with a single function call. For instance, to find all genes with "ribosome" or "polymerase" in the Description or Pathway annotation, use the following function call.

```r
> probes <- aafSearchText("hgu95av2.db", c("Description", "Pathway"),
+                         c("ribosome", "polymerase"))
> print(length(probes))
[1] 81
```

When doing searches of multiple annotation data types or multiple terms, by default the search returns all probe ids matching any of the search criteria. That can be altered by changing the logical operator from OR to AND using the `logic="AND"` argument. This is useful because `aafSearchText()` does not automatically tokenize a search query as Google and many other search engines do. For example, "DNA polymerase" finds all occurrences of that exact string. To find all probes whose description contains both "DNA" and "polymerase", use the following function call.
```r
> probes <- aafSearchText("hgu95av2.db", "Description",
+                         c("DNA", "polymerase"), logic = "AND")
> print(length(probes))
[1] 16
```

Another useful application of the text search is to map a vector of GenBank accession numbers onto a vector of probe ids. This comes in handy if you wish to filter microarray data based on the results of a BLAST job.

```r
> gbs <- c("AF035121", "AL021546", "AJ006123", "AL080082", "AI289489")
> aafSearchText("hgu95av2.db", "GenBank", gbs)
[1] "1954_at"    "32573_at"   "32955_at"   "34040_s_at" "35581_at"   "38199_at"
```

Lastly, two points for power users. First, the text search is always case insensitive. Second, individual search terms are treated as Perl-compatible regular expressions. This means that you should be cautious of special regular expression characters. See the Perl documentation for further information about how to use regular expressions.

4.2 Gene Ontology

The second type of search available is a Gene Ontology search. It takes a vector of Gene Ontology identifiers and maps them onto a list of probe ids. The Gene Ontology is organized hierarchically (as a directed acyclic graph), and you can include or exclude descendants of the given terms with the `descendents` argument. The search also supports the `logic` argument. Because the Bioconductor metadata packages include pre-indexed Gene Ontology mappings, this search is very fast.

The input format for Gene Ontology ids is very flexible. You may use numeric or character vectors, either excluding or including the "GO:" prefix and leading zeros.

```r
> aafSearchGO("hgu95av2.db", c("GO:0000002", "GO:0000008"))
 [1] "1287_at"    "41146_at"   "32822_at"   "34988_at"   "38885_at"
 [6] "1665_s_at"  "36879_at"   "33286_at"   "1187_at"    "1188_g_at"
[11] "41099_at"   "41747_s_at" "37181_at"   "39745_at"   "1014_at"
[16] "34314_at"   "38846_at"   "39086_g_at" "1028_at"    "41004_at"
[21] "1939_at"    "1974_s_at"  "31618_at"   "36541_at"   "40781_at"
[26] "39643_at"   "34804_at"
```

```r
> aafSearchGO("hgu95av2.db", c("2", "8"))
 [1] "1287_at"    "41146_at"   "32822_at"   "34988_at"   "38885_at"
 [6] "1665_s_at"  "36879_at"   "33286_at"   "1187_at"    "1188_g_at"
[11] "41099_at"   "41747_s_at" "37181_at"   "39745_at"   "1014_at"
[16] "34314_at"   "38846_at"   "39086_g_at" "1028_at"    "41004_at"
[21] "1939_at"    "1974_s_at"  "31618_at"   "36541_at"   "40781_at"
[26] "39643_at"   "34804_at"
```

```r
> aafSearchGO("hgu95av2.db", c(2, 8))
 [1] "1287_at"    "41146_at"   "32822_at"   "34988_at"   "38885_at"
 [6] "1665_s_at"  "36879_at"   "33286_at"   "1187_at"    "1188_g_at"
[11] "41099_at"   "41747_s_at" "37181_at"   "39745_at"   "1014_at"
[16] "34314_at"   "38846_at"   "39086_g_at" "1028_at"    "41004_at"
[21] "1939_at"    "1974_s_at"  "31618_at"   "36541_at"   "40781_at"
[26] "39643_at"   "34804_at"
```

A good source for finding relevant Gene Ontology identifiers is the AmiGO website, operated by the Gene Ontology Consortium.
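To close, the search functions combine naturally with the report-building workflow from Section 3. The following minimal sketch (the file name, report title, and column subset are illustrative choices, assuming the hgu95av2.db package is installed) turns a Gene Ontology search into an annotated HTML report:

```r
library(annaffy)

# Map a GO term to probe ids, annotate them, and write an HTML report
probeids <- aafSearchGO("hgu95av2.db", "GO:0000002")
anncols  <- aaf.handler()[1:3]  # e.g. Probe, Symbol, Description
anntable <- aafTableAnn(probeids, "hgu95av2.db", anncols)
saveHTML(anntable, "go0000002.html", title = "Probes annotated to GO:0000002")
```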
{"Source-Url": "https://www.bioconductor.org/packages/release/bioc/vignettes/annaffy/inst/doc/annaffy.pdf", "len_cl100k_base": 6369, "olmocr-version": "0.1.49", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 33178, "total-output-tokens": 8272, "length": "2e12", "weborganizer": {"__label__adult": 0.0003159046173095703, "__label__art_design": 0.0005230903625488281, "__label__crime_law": 0.00034165382385253906, "__label__education_jobs": 0.0009131431579589844, "__label__entertainment": 0.0001647472381591797, "__label__fashion_beauty": 0.0001729726791381836, "__label__finance_business": 0.00021636486053466797, "__label__food_dining": 0.0003936290740966797, "__label__games": 0.0005621910095214844, "__label__hardware": 0.0016393661499023438, "__label__health": 0.0007944107055664062, "__label__history": 0.00029349327087402344, "__label__home_hobbies": 0.00016605854034423828, "__label__industrial": 0.0005626678466796875, "__label__literature": 0.00028061866760253906, "__label__politics": 0.0002923011779785156, "__label__religion": 0.000583648681640625, "__label__science_tech": 0.1353759765625, "__label__social_life": 0.0001690387725830078, "__label__software": 0.051971435546875, "__label__software_dev": 0.8037109375, "__label__sports_fitness": 0.0002999305725097656, "__label__transportation": 0.0003113746643066406, "__label__travel": 0.0002002716064453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24201, 0.08018]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24201, 0.71982]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24201, 0.78332]], "google_gemma-3-12b-it_contains_pii": [[0, 1208, false], [1208, 2414, null], [2414, 3467, null], [3467, 4239, null], [4239, 5011, null], [5011, 5830, null], [5830, 6638, null], [6638, 7477, null], [7477, 9466, null], [9466, 11496, null], [11496, 14043, null], [14043, 15724, null], [15724, 15791, null], [15791, 18115, null], [18115, 20314, null], [20314, 22263, null], [22263, 24201, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1208, true], [1208, 2414, null], [2414, 3467, null], [3467, 4239, null], [4239, 5011, null], [5011, 5830, null], [5830, 6638, null], [6638, 7477, null], [7477, 9466, null], [9466, 11496, null], [11496, 14043, null], [14043, 15724, null], [15724, 15791, null], [15791, 18115, null], [18115, 20314, null], [20314, 22263, null], [22263, 24201, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24201, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24201, null]], "pdf_page_numbers": [[0, 1208, 1], [1208, 2414, 2], [2414, 3467, 3], [3467, 4239, 4], [4239, 5011, 5], [5011, 5830, 6], [5830, 6638, 7], [6638, 
7477, 8], [7477, 9466, 9], [9466, 11496, 10], [11496, 14043, 11], [14043, 15724, 12], [15724, 15791, 13], [15791, 18115, 14], [18115, 20314, 15], [20314, 22263, 16], [22263, 24201, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24201, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
7261bf2a79308884919dadc8a444710236b11d5a
Supporting the ARIS community system in Mozambique

M. Pscheidt
Universidade Católica de Moçambique, Mozambique
Radboud Universiteit Nijmegen, Netherlands
<markus.pscheidt@gmail.com>

Th. P. van der Weide
Radboud Universiteit Nijmegen, Netherlands
<tvdw@cs.ru.nl>

This paper discusses the ideas behind community informatics (CI) and documents the need for bottom-up approaches in ICT4D endeavors. It connects community informatics with software development, and proceeds to describe the experience with the ARIS project in Mozambique. It discusses the democratic character that the ARIS project developed in integrating the stakeholders and designing the system in order to enable local adaptation. We investigate how community participants have been empowered during the North-South collaboration. Based on these experiences we show how CI ideas can form a framework for a supportive organization to sustain the information system after the end of the North-South cooperation. Some propositions are specific to the domain of Higher Education, while others are more generic and may be relevant to the development and implementation of other community systems in less developed countries.

Keywords: North-South cooperation, ICT4D, community informatics, software development, information systems, higher education

Introduction

The development and implementation of information systems (IS) is a complex endeavor, even more so in the context of less developed countries (LDCs). The complexity is partly related to technology, or the absence of technological infrastructure, but in a more holistic sense also to the social context of the IS and its community. In this paper, we reflect on our experiences in the ARIS project in Mozambique, which we consider from the viewpoint of a community system: one that, according to Bieber et al. (2007), not only includes technology, but also encompasses people, knowledge, processes and support. From this analysis we derive a supportive organization structure and a set of empowerment methods for the community participants. This is based on the notion that the "owner" of the IS is the community rather than a particular organization, as in traditional Management Information Systems (MIS) research (Gurstein, 2007).

In this paper we investigate ways to provide support to the community consisting of stakeholders having a shared interest in the effective and efficient administration of academic information. The study is motivated by the need to resolve the real-world problem of facilitating the sustainability of the ARIS project. This is a North-South cooperation project with the aim of providing an electronic Academic Registry Information System (ARIS) for Mozambican universities.

The practical orientation of this study corresponds to research practices in the field of Community Informatics (CI), which according to Gurstein (2007) typically relates to specific outcomes or actions in the world of practice. CI ideas have been found useful during the development and implementation of the project in the context of a developing country, even though the project may be classified as a traditional MIS project rather than a typical CI project. However, we show that the application of CI practices to the given project empowers local stakeholders by increasing their skills and enabling effective use. Furthermore, we want to stress that CI activities do not substitute for IS development activities.
Instead, they complement each other, and the integration of CI with traditional MIS better prepares the Southern partner to gain local ownership and control over IS endeavors.

This study is part of an action research project which consists of a research cycle and a problem-solving cycle that mutually inform each other (McKay and Marshall, 2001). We reflect on the practical action taken to empower the ARIS community during the North-South cooperation, and use this experience to plan further action by designing empowerment activities of a local supportive organization for the time after the cooperation. The actions taken so far as well as the planned actions have been results of discussions between community participants in the South as well as in the North.

The structure of the paper is as follows. First we consider theoretical aspects of communities relevant for information system projects in LDCs. These aspects are used in the following chapter to describe and analyze the ARIS project and our experiences in empowering the ARIS community, taking a close look at the different stakeholders. This leads to the design of support activities for Mozambican institutions of higher learning, with the goal of providing a long-term solution and malleability to administrators as well as current and future users.

Communities

In this chapter we look into concepts related to communities, which form the basis for the analysis of the ARIS community and the design of a supportive organization in later chapters. We consider community development and community informatics, particularly in the context of LDCs, and methods to empower communities. Finally we outline the basic processes that need to be carried out by a community related to information system development and implementation.

What is a community?

The term "community" has no single definition in the social sciences (Hamman, 1997). Therefore it is necessary to define the term in every paper that uses it. Here, borrowing from Hamman, "community" refers to (1) a group of people (2) who share ongoing social interaction (3) with some common ties between themselves and other members of the group and (4) who share an area (common space) for at least some of the time. This definition encompasses both physical and virtual communities. Moreover, communities exist in different contexts, such as family or work group contexts, as well as with different intensities of involvement. In any case communities have the function of enhancing the well-being of their participants (Bieber et al., 2007). Virnoche and Marx (1997) state that "community is constituted by individual identification of and involvement in a network of particular associations" (p. 86); hence within a community there typically exist groups of individuals formed by, for example, working together on a project, sharing knowledge, making decisions or socializing.

Community Informatics

Community Informatics (CI) is the application of information and communications technology (ICT) to enable and empower community processes. CI is a framework for systematically approaching information systems from a community perspective, where the community is the "owner" or operative agent. This is an alternative to the traditional view that information systems are owned and operated by organizations (Gurstein, 2007).
McIver (2003) expresses the need for CI in contrast to MIS, which has established best practices that generally assume an abundance of resources and expertise to which communities often do not have access. According to McIver the "grand challenge" in CI is to develop technological solutions for communities that are economically, socially, and culturally appropriate and that are operationally and economically sustainable. This is especially true for developing countries, where resources and training may be even scarcer than in most communities.

**Community Informatics and ICT4D**

Conventional approaches to ICT4D tend to be dominated by a western, donor-community set of values and priorities. ICT4D policies often follow a top-down philosophy that starts by defining national policy plans, followed by creating enabling conditions in the market, and finally creating projects that follow policy guidelines. This macro-level oriented ICT4D strategy does not necessarily give individuals and groups on the micro level access to the information society, and may thus prevent development opportunities in LDCs that would only be possible through more inclusive bottom-up approaches. This technocratic ICT development discourse has been emphasized by organizations such as the World Bank. It has received criticism, for instance from Thompson (2004), who outlined that this approach excludes alternative views of technology and development. Vaughan (2006) positions CI as an alternative by referring to best practices and lessons learned from a plethora of case studies in the ICT4D field that suggest methods of CI, even though some do not explicitly mention CI.

Community Informatics is an alternative to the common top-down approach. CI attempts to embed ICT in existing community structures, utilizing existing social capital in those structures. Rather than imposing externally designed ICT solutions, ICT is introduced with the objective of helping the community identify and meet its needs and of targeting effective use. Gurstein (2007) outlines the similarity between the CI approach to ICT and the design and deployment of information systems in industrial contexts, where the difference is that CI uses bottom-up processes for system design, whereas in industrial settings system design is guided by corporate management.

**Community Development, Informatics and Appropriation**

Community development (CD) seeks to empower individuals and groups of people by providing these groups with the skills they need to effect change in their own communities. For Stoecker (2005), in order for CI to be able to empower communities, it needs to fit into an overall community development strategy. According to Gurstein (2007) ICT-based CI activities and non-ICT-based CD activities often run in parallel and depend on factors such as the skills and preferences of the involved community members. Community appropriation of ICT designates a situation where the community has become sufficiently comfortable with the technology to work both face-to-face and in technology-enabled modes, and decides on its own when ICT is appropriate or not.

**Empowering communities**

Schuftan (1996) gives a rough taxonomy of community development approaches and how they empower communities. He argues that CD actions are context dependent; the same action may sometimes be empowering, other times not.
Rather than a single event, empowerment is a continuous process that enables people to understand, upgrade and use their capacity to better control and gain power over their own lives. Schuftan's taxonomy comprises four approaches: service delivery, capacity building, advocacy and social mobilization. Table 1 summarizes these approaches.

Service delivery provides a usually structured set of services to defined beneficiaries and addresses actions directly related to the immediate causes of maldevelopment. Examples include health and educational services. On its own it tends not to be very sustainable.

Table 1: Community development approaches that empower communities (Schuftan, 1996)

<table>
<thead>
<tr><th>Approach</th><th>Empowerment methods</th></tr>
</thead>
<tbody>
<tr>
<td>Service delivery</td>
<td>
<ul>
<li>to use local human resources whenever possible</li>
<li>to involve community representatives in the choice of delivered services</li>
<li>to train local staff with the aim of behavioral change</li>
<li>to assure a continuous flow of information between providers and end users of services, enabling the latter to be equal partners in planning, delivery, management and evaluation of the services</li>
</ul>
</td>
</tr>
<tr>
<td>Capacity building</td>
<td>
<ul>
<li>enabling individuals, communities and organizations to continuously upgrade their ability to know, analyze and understand their situation and their problems</li>
<li>increasing people's awareness of what is permissible and fair to do</li>
<li>capacitating people to use explicit assessment-analysis-action processes</li>
<li>emphasizing skills that lead to community ownership of the interventions undertaken</li>
</ul>
</td>
</tr>
<tr>
<td>Advocacy</td>
<td>
<ul>
<li>convincing and persuading people</li>
<li>increasing people's demand for, access to and utilization of services and their access to the means of production</li>
<li>promoting a more local control of resources</li>
<li>improving the access of end-users and facilitators to reliable community development-related information</li>
</ul>
</td>
</tr>
<tr>
<td>Social mobilization</td>
<td>
<ul>
<li>articulating people's felt needs into concrete demands</li>
<li>networking with others, striving to achieve a critical mass of concerned people locally and externally</li>
<li>operating in complete assessment-analysis-action cycles, thus collectively identifying problems, searching for solutions and implementing them</li>
<li>giving people power over decisions, thus increasing their self-esteem and self-confidence</li>
</ul>
</td>
</tr>
</tbody>
</table>

Capacity building raises people's knowledge, awareness and skills to use their own capacity and that of available support systems to solve local problems. It strengthens the Assessment-Analysis-Action process in the community and therefore leads to more sustainability.

Advocacy is about setting in motion a dynamic process of developing a consensus and a mandate for action. It brings together like-minded allies with a common goal.

Social mobilization is the community development approach that gets people actively involved in development assessment-analysis-action processes. It engages them in actions to fight for their rights and to gain more control over needed resources.
It aims at networking, placing concrete demands and mobilizing resources.

Effective use

Gurstein (2003), in his analysis of the digital divide, proposes "effective use" as the goal to be achieved rather than simply access to ICTs and the information society. Access on its own ensures opportunities to "consume" ICT-enabled systems such as information systems, but it is a passive mechanism. It needs to be extended with, or embedded in, a greater context. In addition to access as such, it is significant to have the knowledge, skills, and supportive organizational and social structures to make effective use of that access in order to achieve social and community objectives. For development to occur, access is a precondition, but the focus has to be on the whole "development process" including infrastructure, hardware, software, and social organizational elements. Local communities need to develop the capabilities to be producers, not only consumers, so that end users can do locally significant things with the technology tools to which they have access.

Effective use occurs in social settings such as work groups and larger communities and is hence context dependent. What is appropriate in one context may not be in a different one. An example of a community informatics approach to support local effective use is participatory design, where application design is done with full participation of the end users and the local community. In this way, an application is directly linked to local needs and creates local ownership and local champions who can provide feedback on its development and evolution. Effective use is thus a goal of support and empowerment of communities because it fosters active community participants who increase their knowledge and become productive in the continuous improvement of the ICT systems they use to meet their needs.

Supporting and enabling communities

The "effective use" concept is the basis for a framework developed by Bieber et al. (2007). The framework, called the "Supporting and Enabling Communities" framework (SEComm), refocuses information systems towards communities and collaborative decision and design processes. It emphasizes system design that supports community members in becoming active participants in order to realize both collective and individual goals. It provides a model for reflecting on community support at all levels of the so-called "Enabling Community". Community systems include (1) technology, (2) people, (3) knowledge, (4) processes and (5) support. The design of community systems should specify people's roles (both those participating in the system's activities and those using its resulting outcomes or services); the kind of data and knowledge that should be acquired, stored and shared; the steps of the processes for accomplishing the system's purpose; and the support the system requires. Ideally, all participants should have access to the community's data and knowledge in a manner they can understand and utilize.

The SEComm framework consists of two elements. First, the Participant Support System (PaSS) encompasses processes, people, knowledge and technology in order to provide desired services and products. These elements are further influenced by environmental factors such as policies, constraints and the shared goal or purpose within the community. The second element, the Community Participant Levels (CPaL), reflects the multi-level characteristic of communities and distinguishes between the individual, group, community and supportive organization levels.
The different levels influence each other. At each of these levels a PaSS can be applied, and a PaSS at one level can influence another as an environmental factor.

Software development processes

A traditional model of computer system development is the Software Development Life Cycle (SDLC). The SDLC outlines the basic phases of the development and implementation of software systems. It comes in many flavors (see for example Brandon, 2005). The phases are:

1. Definition: determine the goals, scope and requirements of the system.
2. Design: resolution of technical issues, selection of architecture and standards.
3. Construction: realization of the design, typically by programming and testing; documentation of the system.
4. Installation: roll-out of the services offered by the system to the end-users; training.
5. Operation/maintenance: problem solving, user support, and incremental improvement through monitoring and evaluation, focusing on the use of the services by the end-users.

The SDLC exhibits the processes that need to be carried out for the development and implementation of ICT-based systems. It is not tied to the Waterfall model; a number of variations have evolved, many of which break down large projects into smaller, more manageable pieces. We do not prescribe a particular variation of the SDLC, but use it in this study to associate empowerment methods with certain SDLC phases. To clarify terminology, we understand the term implementation in accordance with Walsham (2009) in a "human and social sense, so that the system is used frequently by organization members or that it is considered valuable for work activities or coordination" (p. 210).

**Learning from project experience**

In this chapter we describe the empowerment approaches that were used during the North-South ARIS project. At the moment of writing this study, we have run through the complete software development life cycle, and we describe the tools and methods used and our experiences in applying them. This reflection shall not only be an inspiration for similar projects in the future, but also guides the design, in the next chapter, of a supportive organization for the ARIS community that does not yet exist. We first give a description of the community system during the project, followed by a reflection on the empowerment methods that were used.

**ARIS community overview**

To give a brief description of the ARIS community system we follow Bieber et al. (2007) and outline technology, people, knowledge, processes and support. The common interest of the ARIS community is to manage academic data at institutions of higher learning utilizing appropriate electronic means. This academic data includes studies, students, exams and marks, taking into account the specific Mozambican reality and requirements. People actively involved in this data management process are the academic registrars at universities and faculties in Mozambique. There is usually a "chief" academic registrar who can be characterized as a functional administrator for the area of academic registry at the respective institution. Other interested stakeholders include the university managers who need a data basis for decision making at faculty and university level, and the Education Ministry with similar intentions on a national scale. Academic registrars in the ARIS community have different backgrounds and experiences regarding the use of computers and administrative work.
This heterogeneity means that learning and work progress vary strongly between users when computer-based tools are introduced. For those who have more difficulty making sense of the new system, it can be motivating to know that others use the system and consider it helpful. Another observation is that time plays an important role in establishing a system. When difficulties arise and assistance is no longer available, users tend to give up on new systems and go back to previous solutions. The relevance of time is further illustrated by the diffusion of innovations theory and its concept of the adoption rate of different adopter types, which states that innovators are much faster to adopt innovations than laggards (Simpson, 2005).

On the support side the ICT staff is important for the technical maintenance of the system. Furthermore, a help desk is useful for users to ask for assistance in case of any difficulties, and a software development unit is indispensable in the long run to develop the information system further and adapt to upcoming changes in requirements (Lehman and Ramil, 2001).

ARIS has been designed as a client-server architecture and built on Open Source components. Data is stored in a central database. The application logic runs on an Apache Tomcat server providing HTTP content to clients. The clients use a web browser to access the server. Each user has a separate login and password, and an associated user role to limit the user's privileges in the system. Furthermore the system has a modular architecture which comprises a kernel, a reporting module, a scholarship module, a fee module and university-specific extension modules.

One of the implementing universities leased a virtual Linux server from a provider in the United States on which ARIS is deployed. This allows client access from any Internet-connected computer via a web browser. The hardware requirements on the client side are limited to a computer with a connection to the Internet. All users have used Microsoft Windows as an operating system, although this is not a requirement. This particular university has faculties in different cities in various parts of Mozambique. Each faculty has academic registry staff that is trained in common workshops and directly on the job. The process is further facilitated with the help of technical experts by importing existing academic data into the database of the ARIS application, so that the benefits of the new system are more obvious to users. These benefits include automatic creation of reports, certificates and statistics from data existing in the database.

**ARIS community levels**

We follow Bieber et al. (2007) to identify community levels in the ARIS North-South project:

- Individual level: academic registrar (i.e. user), technician, functional administrator, IS project manager.
- Group level: several individuals who support each other form groups, e.g. the academic registrars within the university.
- University level: all the individuals and groups within a university who work together to manage the university's academic data.
- North-South level: several universities together with a local government coordination unit and the partner in the North.

Figure 1: The project community levels during the North-South project
The community furthermore includes participants specific to the North-South cooperation and outside the universities, who come either from the local government coordinating body or from the partner in the North. During the North-South project these different participants together engage in the processes of the SDLC, such as requirements engineering and design, as well as in the management of the project.

Empowerment activities during the project

Figure 2 gives an overview of activities to empower the community as they were used in the SDLC phases. The activities were planned by the coordinators of the North-South project in collaboration with the participants of the universities.

The definition phase began with awareness building with university rectors and the handout of initial printed project documentation. A website was created for further documentation and information publication. The requirements engineering process was an important method to actively involve university participants such as the academic registrars. It proved to be a challenge to get the complete picture of requirements, possibly due to the users' inexperience with computer software. In order to verify and adapt requirements throughout the design and construction process, the incremental development approach (Davis et al., 1988) was used, whereby the users reviewed the progress at regular intervals when participants from North and South came together.

During later stages local software development was done at one participating university. This process was characterized by the proximity between developers and users, which made it possible to react quickly to the discovery of bugs and to improve the user interface. This local software development followed agile software development practices as described by Cockburn and Highsmith (2001), which involve users as part of the team, focus on incremental changes with short times between decision making and seeing the consequences of that decision, and work best in a people-centered, collaborative culture.

The design of ARIS incorporates the need for working in the local language, regarding both user data and the user interface. The system's modular structure, together with a shared code repository and mailing list, enables the adaptation and improvement of the system by any participant in the community. This, in turn, is a possibility for local learning and local ownership through a two-way, communication-based technology transfer process between South and North (Rogers, 2002).

To foster local involvement in the software development of ARIS an extensive developer training was provided. Our experience was mixed; the success of this method depends also on the continuous availability of the local software developers once they are trained, which was not always the case. To address difficulties during the installation of ARIS at the universities, the basic user and technician trainings were accompanied by visits of the project coordinators and developers to the participating universities. These visits also served to assess areas for further training in ICT, such as Linux and Samba.

Towards the end of the North-South project the universities needed to get more actively involved with the system. In the operation phase we had good experiences with working face-to-face with users at their respective workplaces.
This time-intensive method not only educated users in how to use the system effectively and efficiently, but also identified difficulties with the system that were not obvious during earlier trainings, and cultivated a culture among users of actively pursuing support, which on some occasions led to improvements of the system. A related method is the monitoring of the quality of data that is entered by the users, which increased users' awareness of what is permissible and fair to do.

Table 2 gives another perspective on the empowerment methods used during the project, organized according to Schuftan's taxonomy of community development activities.

**Design of a Supportive Organization**

As the ARIS project moves from a North-South cooperation to a self-sustaining project, the activities and support given by the project partner in the North and the government unit in the South need to be organized in a different way. In this section we develop recommendations for a supportive organization to support the community participants at universities, including institutions that already use the system as well as those interested in using it in the future. The design of the supportive organization as presented here is based on experiences and discussions between project participants, and includes some activities that have been useful during the North-South project. With regard to the Action Research methodology it represents the step of action planning based on previous actions taken.

**Empowering activities**

For ARIS, service delivery includes activities concerning the optimization of the functionality of the system, such as further system development, adjustment and data import, and basic training of users and technicians. Capacity building in the case of ARIS involves activities aimed at an adequate transfer of knowledge and skills to enable effective use by users and university management. Advocacy includes activities to promote and create a positive attitude towards the system. This includes raising awareness among potentially interested institutions that do not yet use the system. Activities regarding social mobilization are aimed at creating interaction among system stakeholders. This is the basis for continuous system development, involving end-users to collectively identify and address current problems and upcoming requirements.

Table 3: Possible empowerment methods by the supportive organization

<table>
<thead>
<tr><th>Approach</th><th>Methods</th></tr>
</thead>
<tbody>
<tr>
<td>Service delivery</td>
<td>
<ul>
<li>User help desk</li>
<li>Assistance with system installation</li>
<li>Documentation such as user and technical manuals</li>
<li>Website with project information</li>
<li>Development of additional functionality / modules</li>
<li>Hosting of the system on behalf of universities</li>
<li>Data import of existing data</li>
<li>User, technician and software developer training</li>
</ul>
</td>
</tr>
<tr>
<td>Capacity building</td>
<td>
<ul>
<li>Face-to-face sessions at the workplace</li>
<li>Establish effective use of the system in the institution</li>
<li>Collaborative agile software development, e.g. based on Open Source</li>
</ul>
</td>
</tr>
<tr>
<td>Advocacy</td>
<td>
<ul>
<li>Keep community participants informed about system updates</li>
<li>Maintain project website with news and downloads</li>
</ul>
</td>
</tr>
<tr>
<td>Social mobilization</td>
<td>
<ul>
<li>User group meetings</li>
<li>Steering group of community participants</li>
<li>Collaborative requirements engineering for system improvements</li>
<li>Mailing lists and on-line forums</li>
</ul>
</td>
</tr>
</tbody>
</table>

Based on these observations, Table 3 gives an overview of possible empowerment activities by the supportive organization. A particular method, which covers various community-building areas, is Open Source. It creates capacity by giving community participants the possibility to learn and create local ownership. It advocates the system through clear license terms and free access to documentation and source code. It fosters social mobilization by allowing participants to get actively involved in assessment-analysis-action processes for system improvement. Agile software development methods are worth considering since they focus on the skills of individuals and improve the team's sense of community (Cockburn and Highsmith, 2001).

**International community**

The role of the supportive organization for ARIS in Mozambique is to assist local implementing organizations in the local language, taking into account the specific Mozambican rules and regulations in the field of academic registry. In order to give qualified support to the Mozambican community, in some cases the supportive organization's staff in turn may find it helpful to be able to turn to a larger community. Thus, from a donor perspective, it is beneficial to nourish further adaptations and implementations of the system in other contexts outside Mozambique. This facilitates social mobilization by creating a critical mass of concerned community participants and South-South collaborations.

Conclusions

The democratic nature of the project has led to certain positive effects. Integrating local stakeholders from the beginning facilitated a system that corresponds to the needs of the universities. However, a number of challenges were encountered along the way that have threatened success. In the early stages, one of the five participating Mozambican universities acquired a commercial system. Not only did this particular university have little interest throughout the remainder of the project, but it also advertised the commercial system to the project partners. Participants of another university received expensive training in software development overseas, but finally abandoned the system and never implemented it at their institution.

These setbacks, however, illustrate the possibilities of aligning such ICT4D projects with the "effective use" principle. Costly projects can be geared to higher success rates with the encouragement of active participation in activities such as requirements gathering and programming, and by nurturing those participants who show active interest. ICT4D projects face the latent danger that local participants stay passive during much of the project lifetime, waiting for the foreign experts to deliver solutions. The CI approach can be useful to avoid this pattern as early as possible. The future of the project depends to a certain degree on the support given to the universities and on the maintenance of the system so that it can respond to emerging needs.
We have analyzed the community during the ARIS project in Mozambique and have designed the hitherto missing supportive organization, which is intended to take over certain activities from the North-South cooperation and give much-needed support to community participants. Further research is recommended to evaluate the supportive organization once it has been put into practice.
{"Source-Url": "http://www.cs.ru.nl/M.Pscheidt/Pscheidt_Weide_2009_Community%20Informatics%20and%20Software%20Development.pdf", "len_cl100k_base": 6598, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25551, "total-output-tokens": 7711, "length": "2e12", "weborganizer": {"__label__adult": 0.0005259513854980469, "__label__art_design": 0.0008287429809570312, "__label__crime_law": 0.0007715225219726562, "__label__education_jobs": 0.10650634765625, "__label__entertainment": 0.0001474618911743164, "__label__fashion_beauty": 0.0003159046173095703, "__label__finance_business": 0.0015201568603515625, "__label__food_dining": 0.0006628036499023438, "__label__games": 0.0008139610290527344, "__label__hardware": 0.0011873245239257812, "__label__health": 0.0013341903686523438, "__label__history": 0.0010318756103515625, "__label__home_hobbies": 0.00031566619873046875, "__label__industrial": 0.0007290840148925781, "__label__literature": 0.0010786056518554688, "__label__politics": 0.0011434555053710938, "__label__religion": 0.0009474754333496094, "__label__science_tech": 0.1129150390625, "__label__social_life": 0.0017232894897460938, "__label__software": 0.041534423828125, "__label__software_dev": 0.72216796875, "__label__sports_fitness": 0.0003650188446044922, "__label__transportation": 0.0010700225830078125, "__label__travel": 0.00043320655822753906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37254, 0.01074]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37254, 0.42131]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37254, 0.93856]], "google_gemma-3-12b-it_contains_pii": [[0, 2982, false], [2982, 6653, null], [6653, 10552, null], [10552, 13369, null], [13369, 17387, null], [17387, 21283, null], [21283, 23655, null], [23655, 26526, null], [26526, 28819, null], [28819, 32537, null], [32537, 35894, null], [35894, 37254, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2982, true], [2982, 6653, null], [6653, 10552, null], [10552, 13369, null], [13369, 17387, null], [17387, 21283, null], [21283, 23655, null], [23655, 26526, null], [26526, 28819, null], [28819, 32537, null], [32537, 35894, null], [35894, 37254, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37254, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37254, null]], "pdf_page_numbers": [[0, 2982, 1], [2982, 6653, 2], [6653, 10552, 3], [10552, 13369, 4], [13369, 17387, 5], [17387, 21283, 6], [21283, 23655, 7], [23655, 26526, 8], [26526, 28819, 9], [28819, 32537, 10], [32537, 35894, 11], [35894, 37254, 12]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37254, 0.29268]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
dd6df446bf3fd0462155cd0fed3f8250416549bb
PetaLinux SDK User Guide
Zynq AMP Linux FreeRTOS Guide
UG978 (v2013.04) April 22, 2013

Notice of Disclaimer

The information disclosed to you hereunder (the "Materials") is provided solely for the selection and use of Xilinx products. To the maximum extent permitted by applicable law: (1) Materials are made available "AS IS" and with all faults, Xilinx hereby DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, OR FITNESS FOR ANY PARTICULAR PURPOSE; and (2) Xilinx shall not be liable (whether in contract or tort, including negligence, or under any other theory of liability) for any loss or damage of any kind or nature related to, arising under, or in connection with, the Materials (including your use of the Materials), including for any direct, indirect, special, incidental, or consequential loss or damage (including loss of data, profits, goodwill, or any type of loss or damage suffered as a result of any action brought by a third party) even if such damage or loss was reasonably foreseeable or Xilinx had been advised of the possibility of the same. Xilinx assumes no obligation to correct any errors contained in the Materials or to notify you of updates to the Materials or to product specifications. You may not reproduce, modify, distribute, or publicly display the Materials without prior written consent. Certain products are subject to the terms and conditions of the Limited Warranties which can be viewed at http://www.xilinx.com/warranty.htm; IP cores may be subject to warranty and support terms contained in a license issued to you by Xilinx. Xilinx products are not designed or intended to be fail-safe or for use in any application requiring fail-safe performance; you assume sole risk and liability for use of Xilinx products in Critical Applications: http://www.xilinx.com/warranty.htm#critapps. © Copyright 2012 Xilinx, Inc. Xilinx, the Xilinx logo, Artix, ISE, Kintex, Spartan, Virtex, Vivado, Zynq, and other designated brands included herein are trademarks of Xilinx in the United States and other countries. All other trademarks are the property of their respective owners.
Revision History

<table> <thead> <tr> <th>Date</th> <th>Version</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>2012-12-17</td> <td>2012.12</td> <td>Initial public release for PetaLinux SDK 2012.12</td> </tr> <tr> <td>2013-04-22</td> <td>2013.04</td> <td>Updated for PetaLinux SDK 2013.04 release</td> </tr> </tbody> </table>

# Table of Contents

- Revision History
- Table of Contents
- About this Guide
  - Prerequisites
- Overview
  - FreeRTOS Demo Application
  - Linux Demo Application
- Installation
- Testing Pre-Built Reference Design
  - Boot PetaLinux
  - Starting FreeRTOS Firmware
  - Demo Application
  - Accessing the Trace Buffer
- Bringup a Linux FreeRTOS AMP System on the ZC702
  - Hardware Setup
  - XSDK Setup
    - Setup Workspace
    - Create FSBL
    - Create PetaLinux BSP
    - Create FreeRTOS BSP
    - Create FreeRTOS Application
  - PetaLinux Configuration
    - Platform Setup
    - Kernel Configuration
    - System Configuration
    - DTS (Device Tree Source) Setup
  - Build PetaLinux with AMP support
    - Install the Firmware Image (built-in)
    - Install the Firmware Image (module)
    - Build PetaLinux
    - Create the BOOT.BIN
- Known Issues
  - U-Boot Old Kernel Image
  - Trace Buffer is slow
- Additional Resources
  - References

About this Guide

This document details the Linux FreeRTOS AMP system with PetaLinux and Xilinx EDK for Zynq. It includes the following topics:

- Overview of the AMP Reference Design
- Installation of the AMP Reference Design
- Getting started with the Reference Design, including the pre-built reference BSP
- How to recreate a Linux FreeRTOS AMP system with PetaLinux

*Please note: the reader of this document is assumed to have Linux knowledge, such as how to run Linux commands, as well as strong familiarity with the PetaLinux tools.*

Prerequisites

This document assumes that the following prerequisites have been satisfied:

- PetaLinux SDK has been installed.
- You know how to build a PetaLinux system image.
- You know how to boot a PetaLinux system image.
- The PetaLinux setup script has been sourced in each command console in which you work with PetaLinux. Run the following command to check whether the PetaLinux environment has been set up on the command console:

```bash
$ echo $PETALINUX
```

If the PetaLinux working environment has been set up, it should show the path to the installed PetaLinux. If it shows nothing, please refer to the section Environment Setup in the Getting Started with PetaLinux SDK document to set up the environment.

Overview

This section describes the Linux-FreeRTOS AMP reference design system, its components and their configuration. The Linux-FreeRTOS AMP system is designed to demonstrate Linux's ability to configure the secondary CPU for FreeRTOS and to load the FreeRTOS firmware. This includes the following components: the remoteproc drivers, generic rpmsg drivers, application-specific rpmsg drivers, and the Trace Buffer. The remoteproc drivers are Linux drivers which control the process of loading and unloading AMP modules on the secondary CPU. This covers the detachment of the secondary CPU from Linux, the associated configuration of the CPU, and the loading of the FreeRTOS firmware into the target CPU's memory region. The rpmsg drivers are Linux drivers as well as FreeRTOS library code that controls and manages memory and interrupts for interprocessor communication.
This is implemented with pre-allocated memory segments which are used for the transfer of data to and from Linux/FreeRTOS. Figure 1 shows the allocation of these VRING buffers. The buffers contain messages which are arranged in a specific structure. Once messages are created and stored in the VRING buffer, the sender issues a 'kick' to the recipient CPU; the 'kick' is a software interrupt generated by the Zynq Software Generated Interrupts and is routed to the target CPU. The Trace Buffer is a pre-allocated segment of memory used by the FreeRTOS firmware as a log buffer. It allows FreeRTOS to display log messages which can be accessed from Linux without the need for an additional hardware serial console.

Figure 1: Linux-FreeRTOS AMP Reference Design

FreeRTOS Demo Application

The demo application provided in the reference design demonstrates the use of the rpmsg drivers/library for communication of FreeRTOS interrupt latency statistics. The FreeRTOS firmware is implemented as two FreeRTOS tasks (as shown in Figure 1). The first task samples the interrupt latency by configuring the Triple Timer Counter to generate an interrupt on the overflow condition. Once the overflow is hit, the timer continues counting. When the interrupt is processed by the FreeRTOS firmware, it immediately pauses the timer and reads the current value; this is the approximate time between when the interrupt occurred and when the interrupt was processed, and forms the latency data provided by the demo application. The second task is a demonstration task which tests its own scheduling to ensure that it meets the expected scheduling jitter. It prints a message to the log buffer stating whether the jitter is as expected or not.

Linux Demo Application

The demo application provided in the reference design, called latencystat, demonstrates the communication between the FreeRTOS application and Linux. This application uses the rpmsg drivers to send requests for latency data to the FreeRTOS application and displays this data as output.

Installation

PetaLinux is provided with the FreeRTOS BSP repository as well as a Linux demo application. These are installed as part of PetaLinux; you can access them from "$PETALINUX/hardware/edk_user_repository/FreeRTOS" and "$PETALINUX/software/demo-apps/latencystat". Install the ZC702 AMP reference design with petalinux-install-bsp as follows inside the PetaLinux tree:

```
$ petalinux-install-bsp <path-to-bsp>/Xilinx-ZC702-AMP-14.5.bsp
```

After you have installed the BSP, you will find the ZC702 AMP reference design in the "$PETALINUX/hardware/reference-designs/Xilinx-ZC702-AMP-14.5/" directory. Inside this directory, you will also find the hardware project files and the pre-built images. The structure of the directory is as follows:

- **Xilinx-ZC702-AMP-14.5**
  - Hardware project files generated with Xilinx EDK, such as "system.mhs", "system.xmp", etc.
  - "pre-built"
    - "images"
      - "BOOT.BIN" - BIN file composed of the FPGA bitstream, FSBL boot loader and u-boot
      - "u-boot.elf" - U-Boot ELF file
      - "image.ub" - Linux kernel in uImage format
      - "zynq_fsbl.elf" - FSBL ELF file
      - "freertos" - FreeRTOS firmware ELF file
    - "implementation"
      - "download.bit" - FPGA bitstream
  - "sw-bsp" - PetaLinux configuration files and DTS (Device Tree Source)

Testing Pre-Built Reference Design

You can test the pre-built images as follows:

**Boot PetaLinux**

1. Configure the ZC702 to use SD boot mode by connecting 1-2 of jumpers J22 and J25 on the board.
2. Connect the UART port on the ZC702 to your host.
3.
Connect the Ethernet port on the ZC702 to a local network.
4. Copy "BOOT.BIN" and "image.ub" from the pre-built images directory "Xilinx-ZC702-AMP-14.5/pre-built/images/" to an SD card.
5. Insert the SD card into the SD card slot on the ZC702 and then power on the board.
6. Use a serial terminal application such as Kermit to monitor the UART output from the ZC702. Configure the terminal application to use a baud rate of 115200-8N1.

**Starting FreeRTOS Firmware**

1. Watch the console. Log into PetaLinux with username: root and password: root.
2. Load the remoteproc modules to prepare to load the 2nd processor with the FreeRTOS firmware from the PetaLinux console as follows:

```bash
# modprobe virtio
# modprobe virtio_ring
# modprobe virtio_rpmsg_bus
# modprobe rpmsg_proto
# modprobe remoteproc
```

3. Since the 2nd processor has not yet been released by Linux for use by FreeRTOS, the system is still an SMP system. You can see the 2nd processor in "/proc/cpuinfo":

```bash
# cat /proc/cpuinfo
Processor       : ARMv7 Processor rev 0 (v7l)
processor       : 0
BogoMIPS        : 1332.01

processor       : 1
BogoMIPS        : 1332.01

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x3
CPU part        : 0xc09
CPU revision    : 0

Hardware        : Xilinx Zynq Platform
Revision        : 0000
Serial          : 0000000000000000
```

4. Load the 2nd processor with the FreeRTOS firmware as follows:

```bash
# modprobe zynq_remoteproc
```

5. You will see the following messages on the console:

```
NET: Registered protocol family 40
CPU1: shutdown
remoteproc0: 0.remoteproc-test is available
remoteproc0: Note: remoteproc is still under development and considered experimental.
remoteproc0: THE BINARY FORMAT IS NOT YET FINALIZED, and backward compatibility isn't yet guaranteed.
remoteproc0: powering up 0.remoteproc-test
remoteproc0: Booting fw image freertos, size 2310301
remoteproc0: remote processor 0.remoteproc-test is now up
virtio_rpmsg_bus virtio0: rpmsg host is online
remoteproc0: registered virtio0 (type 7)
virtio_rpmsg_bus virtio0: creating channel rpmsg-timer-statistic addr 0x50
```

### Demo Application

1. The FreeRTOS application provided in the pre-built reference design collects interrupt latency statistics within the FreeRTOS environment and reports the results to Linux, where they are displayed by the `latencystat` Linux demo application. The `rpmsg_freertos_statistic` module must first be loaded so that we can send/receive messages to FreeRTOS. To load the module, run the following command in the PetaLinux console:

```bash
# modprobe rpmsg_freertos_statistic
```

2. Run the `latencystat` demo application as follows:

```bash
# latencystat -b
```

3. The application will print output similar to the following:

```
Linux FreeRTOS AMP Demo.
0: Command 0 ACKed
1: Command 1 ACKed
Waiting for samples...
2: Command 2 ACKed
3: Command 3 ACKed
4: Command 4 ACKed
-----------------------------------------------------------
Histogram Bucket Values:
    Bucket 332 ns (37 ticks) had 14814 frequency
    Bucket 440 ns (49 ticks) had 1 frequency
    Bucket 485 ns (54 ticks) had 1 frequency
    Bucket 584 ns (65 ticks) had 1 frequency
    Bucket 656 ns (73 ticks) had 1 frequency
-----------------------------------------------------------
Histogram Data:
    min: 332 ns (37 ticks)
    avg: 332 ns (37 ticks)
    max: 656 ns (73 ticks)
    out of range: 0
    total samples: 14818
-----------------------------------------------------------
```

The `latencystat` demo application sends requests to FreeRTOS to ask for latency histogram data.
The FreeRTOS firmware replies with the histogram data, and the demo application dumps that data. The `latencystat` demo application can display the information in a graph format or dump the data in hex. Use the `-h` parameter to display the application's help information.

Accessing the Trace Buffer

The Trace Buffer is a section of shared memory which is written to only by the FreeRTOS application. The Trace Buffer can be used as a logging console to transfer information to Linux, acting much like a one-way serial console. The Trace Buffer is a ring buffer: after the buffer is full, it wraps around and begins writing at the start of the buffer. When accessing the buffer via Linux, it will not be read as a stream. The default Trace Buffer is 32 KB in size. The Trace Buffer can be accessed via `debugfs` as a file:

```
/sys/kernel/debug/remoteproc/remoteproc0/trace0
```

The following is the output seen in the trace buffer when running `cat` on the buffer:

```
# mount -t debugfs none /sys/kernel/debug
# cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
Setup TLB for 0:ttc
Setup TLB for address f8000000, TLBptr 103e00
Setup TLB for 1:uart
Setup TLB for address e0000000, TLBptr 103800
Setup TLB for 2:scu
Setup TLB for address f8f00000, TLBptr 103e3c
Protect MMU Table at 100000
Clear TLB for address 100000, TLBptr 100004
FreeRTOS main demo application
Dec 4 2012 11:45:39
task_latency: starting sampling of irq latency
task.demo: started
task.demo: task resumed as expected
rpmmsg: CLEAR request
rpmmsg: START request
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
task.demo: task resumed as expected
task.latency: sampled 1 full buffers
task.latency: sampled 1 full buffers
rpmmsg: STOP request
rpmmsg: CLONE request
rpmmsg: GET request
rpmmsg: QUIT request
```

WARNING: The Trace Buffer has a known issue where the contents of the buffer are delayed between access in FreeRTOS and Linux; see the section Trace Buffer is slow for more information.

Bringup a Linux FreeRTOS AMP System on the ZC702

The following section describes the process of creating a Linux FreeRTOS AMP system with PetaLinux and Xilinx EDK. Please note that the following section describes the bringup process specific to a Linux FreeRTOS AMP system only. Please refer to the PetaLinux SDK Board Bringup Guide (UG980) for details on how to create a project with PetaLinux and how to build PetaLinux.

Hardware Setup

The ZC702 Development Board Template provided with Xilinx EDK can be used as a base configuration; the template meets the minimum PetaLinux requirements.

WARNING: PetaLinux requires at least one UART and one storage peripheral (e.g. QSPI, SD, etc.).

The FreeRTOS BSP requires one serial UART to be selected from the XPS Zynq PS MIO Configurations wizard. The UART port on the ZC702 is connected to UART 1, which is configured for use by Linux. Enable UART 0 for use by FreeRTOS and set its IO as EMIO. The FreeRTOS demo application provided uses Timer 1. Enable Timer 1 and set its IO as EMIO.
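As a brief aside before continuing with the bringup flow: from the Linux side, the Trace Buffer described earlier is simply a `debugfs` file, so it can be dumped programmatically rather than with `cat`. The following minimal C sketch is our own illustration and not part of the original guide; it assumes `debugfs` is already mounted at `/sys/kernel/debug` and that the remote processor instance is `remoteproc0`, as in the examples above.

```c
/* Minimal sketch (not from the guide): dump the FreeRTOS trace buffer
 * from Linux user space via the debugfs file shown above.
 * Assumes debugfs is already mounted at /sys/kernel/debug. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/kernel/debug/remoteproc/remoteproc0/trace0";
    FILE *f = fopen(path, "r");
    char buf[4096];
    size_t n;

    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    /* The trace buffer is a ring buffer, not a stream: each open/read
     * returns the current contents, so we dump it once rather than poll. */
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        fwrite(buf, 1, n, stdout);
    fclose(f);
    return EXIT_SUCCESS;
}
```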
Figure 2: Zynq Peripheral Configuration

Once the project is configured, be sure to select **Export Design to XSDK** from XPS to update the hardware description files of the XSDK workspace.

XSDK Setup

XSDK is used to build and prepare the PetaLinux BSP, the FSBL, and the FreeRTOS BSP and application.

Setup Workspace

When you open your XSDK workspace, you will need to add the PetaLinux and FreeRTOS BSP repositories as follows:

WARNING: Ensure that the XSDK workspace is within the PetaLinux tree; this is required for the PetaLinux tools (e.g. "$PETALINUX/hardware/user-platforms/<hw-project-name>/workspace").

1. Go to **Xilinx Tools > Repositories** to add the following repositories:
   - "$PETALINUX/hardware/edk_user_repository"
   - "$PETALINUX/hardware/edk_user_repository/FreeRTOS"
   - "$PETALINUX/hardware/edk_user_repository/FreeRTOS/drivers"
   - "$PETALINUX/hardware/edk_user_repository/FreeRTOS/bsp"
2. Click **Rescan Repositories**
3. Click **Apply**
4. Click **Ok**

Create FSBL

Create a Xilinx FSBL application project as follows:

1. Go to **File > New > Application Project**
2. Name the project (e.g. **FSBL**)
3. Select **ps7_cortexa9_0** as **Processor** from the New Project wizard
4. Click **Next**
5. Select **Zynq FSBL** as the **Select Project Template**
6. Click **Finish** to create the project

Create PetaLinux BSP

Create a PetaLinux BSP as follows:

1. Go to **File > New > Board Support Package**
2. Select **ps7_cortexa9_0** as the **CPU** in the New Board Support Package Project wizard
3. Select **petalinux** as **Board Support Package OS**

Figure 3: New BSP Wizard - PetaLinux

5. Select **petalinux** from the **Overview** in the **Board Support Package Settings** window
6. Select **ps7_uart_1** as stdout and stdin in the **Configuration for OS** table
7. Select **ps7_ddr_0** as the main_memory
8. Select **ps7_qspi_0** as the flash_memory
9. Select **ps7_sd_0** as the sdio
10. Select **ps7_ethernet_0** as the ethernet
11. Click **Ok**

**Create FreeRTOS BSP**

Create a FreeRTOS BSP as follows:

1. Go to **File > New > Board Support Package**
2. Select `ps7_cortexa9_1` as the **CPU** in the **New Board Support Package Project** wizard
3. Select `freertos` as **Board Support Package OS**
4. Click **Finish**. A Board Support Package Settings window will pop up.
5. Select **freertos** from the **Overview** in the **Board Support Package Settings** window.
6. Select **ps7_uart_0** as stdout and stdin in the **Configuration for OS** table.
7. Select **cpu_cortexa9** from **Overview > drivers**.
8. Add `-DUSE_AMP=1` to the `extra_compiler_flags` of the **Configuration for driver** table.

Figure 7: FreeRTOS BSP Configuration

9. Click **Ok**

**Create FreeRTOS Application**

Create a FreeRTOS AMP application project as follows:

- Go to **File > New > Application Project**
- Name the project (e.g. `freertos_amp_demo_application`)
- Select `ps7_cortexa9_1` as **Processor** from the New Project wizard
- Select **Board Support Package, Use Existing** and select the existing BSP created in the previous section.

Figure 8: New Application Project - FreeRTOS Demo Application

- Click **Next**
- Select **FreeRTOS AMP** as the template. You are free to modify this FreeRTOS AMP application template for your own purposes.

Figure 9: New Application Project - FreeRTOS Demo Application

PetaLinux Configuration

So far, we have generated an FSBL project, a PetaLinux BSP and a FreeRTOS AMP application with XSDK. In this section, we are going to configure the PetaLinux software platform to support AMP.
Platform Setup

First, we will create a software platform with the PetaLinux tool. Specify a vendor name and platform name to use when creating the platform:

```
$ petalinux-new-platform -c arm -v Demo -p ZC702-AMP-14.5
```

The `petalinux-new-platform` command will create and select the new platform. Use `petalinux-copy-autoconfig` to import your hardware settings into the new platform (see the PetaLinux SDK Board Bringup Guide for further details).

Kernel Configuration

1. Configure the kernel using `petalinux-config-kernel`:

```
$ petalinux-config-kernel
```

2. Set **Physical address of main memory** to `0x10000000` and select **Enable loadable module support**. The kernel is configured to use the memory starting after the first 256 MB: memory is segmented so that FreeRTOS has the first 256 MB and Linux has the rest.

```
Kernel Configuration --->
    (0x10000000) Physical address of main memory
    [*] Enable loadable module support --->
```

3. Select **High Memory Support** within **Kernel Features**.
4. Select **2g/2g user/kernel split** as **Memory split** within **Kernel Features**.

```
Kernel Configuration --->
    Kernel Features --->
        Memory split (2G/2G user/kernel split) --->
        [*] High Memory Support
```

5. Enable **Userspace firmware loading support** and configure the external firmware blobs:

```
Kernel Configuration --->
    Device Drivers --->
        Generic Driver Options --->
            <*> Userspace firmware loading support
            [ ] Include in-kernel firmware blobs in kernel binary
            (freertos) External firmware blobs to build into the kernel binary
            (firmware) Firmware blobs root directory
```

**TIP:** Please note that if you want to use the Zynq remoteproc driver as a built-in driver, you will need to enable **Include in-kernel firmware blobs in kernel binary**, as the firmware must be available during driver initialization, which is before the root file system is available.

- **External firmware blobs to build into the kernel binary** is the file name of the FreeRTOS firmware.
- **Firmware blobs root directory** is the directory name in the root filesystem which contains the firmware. In this case, the directory holding the firmware in the target root filesystem is "/lib/firmware".

6. Enable the **Remoteproc** and **Rpmsg** drivers:

```
Kernel Configuration --->
    Device Drivers --->
        Remoteproc drivers (EXPERIMENTAL) --->
            <M> Support ZYNQ remote proc
        Rpmsg drivers (EXPERIMENTAL) --->
            <M> rpmsg OMX driver
            <M> An FreeRTOS statistic
```

You can enable the drivers as a Module <M> or Built-in <*>. Exit the menuconfig and save the changes.

System Configuration

1. Configure the PetaLinux system settings to support AMP as follows:

```
$ petalinux-config-apps
```

2. Configure the system to boot using the **old Uboot kernel image** format. This step is required due to an issue with U-Boot and kernel start memory addresses which are non-zero; see **U-Boot Old Kernel Image** for more information.

```
PetaLinux Configuration --->
    System Settings --->
        [*] Build old Uboot kernel image
```

3. Select the Linux AMP demo application. The **latencystat** demo can be found in the **PetaLogix Demo Applications** submenu.

```
PetaLinux Configuration --->
    PetaLogix Demo Applications --->
        [*] latencystat --->
```

DTS (Device Tree Source) Setup

The PetaLinux kernel is driven by device trees. The remoteproc driver is also instantiated and configured by a device tree node. The DTS file inside the PetaLinux BSP is generated from a hardware description and requires modification to support AMP.
Open the DTS file (with a text editor) located in the PetaLinux kernel directory, where vendor and product are the respective names of your AMP platform created in the previous sections:

```
"$PETALINUX/software/linux-2.6.x/arch/arm/boot/dts/<vendor>-<product>.dts"
```

Add a device node for the Zynq remoteproc driver so that the driver can be probed. You can add the device node at the end of the DTS file:

```
test: remoteproc-test0 {
    compatible = "xlnx,zynq_remoteproc";
    reg = < 0x0 0x10000000 >;
    interrupt-parent = <&ps7_scugic_0>;
    interrupts = < 0 37 0 0 38 0 >;
    firmware = "freertos";
    ipino = <5>;
    vring0 = <2>;
    vring1 = <3>;
};
```

- The `compatible` property must match the one in the driver.
- The `reg` property is the memory segment for the firmware.
- The `interrupts` property allows the consumption of interrupts from Linux to be routed to FreeRTOS; in this case the TTC1 interrupts are routed.
- The `firmware` property is the default name of the firmware.
- The `ipino` property is the IPI from the firmware to Linux.
- The `vring0` and `vring1` properties are the IPIs from Linux to the firmware.

Build PetaLinux with AMP support

The previous sections covered the compilation and configuration of the FreeRTOS application and PetaLinux. This section covers the process of building and preparing bootable images.

Install the Firmware Image (built-in)

**IMPORTANT:** This section only applies if the remoteproc and associated drivers are set as built-in.

1. The kernel must have access to the firmware image during its build; a copy of the FreeRTOS application ELF file must exist at "linux-2.6.x/firmware":

```
$ cd <Path-to-XSDK-workspace>/<FreeRTOS-application-directory>/Debug
$ cp <FreeRTOS-application>.elf \
    $PETALINUX/software/linux-2.6.x/firmware/<firmware-name>
```

**WARNING:** firmware-name is the string that is set in the kernel config as External firmware blobs to build into the kernel binary.

2. Run make inside the "$PETALINUX/software/petalinux-dist" directory:

```
$ cd $PETALINUX/software/petalinux-dist
$ make
```

Install the Firmware Image (module)

**IMPORTANT:** This section only applies if the remoteproc and associated drivers are set as modules.

The firmware image for the FreeRTOS application must be placed in the firmware directory of the root filesystem, "$PETALINUX/software/petalinux-dist/romfs/lib/firmware/":

```
$ mkdir $PETALINUX/software/petalinux-dist/romfs/lib/firmware
$ cd <Path-to-XSDK-workspace>/<FreeRTOS-application-directory>/Debug
$ cp <FreeRTOS-application>.elf \
    ${PETALINUX}/software/petalinux-dist/romfs/lib/firmware/<firmware-name>
```

Build PetaLinux

```
$ cd $PETALINUX/software/petalinux-dist
$ make
```

Create the BOOT.BIN

The generated kernel and u-boot images are in the "$PETALINUX/software/petalinux-dist/images" directory. In order to create a boot image, use petalinux-gen-zynq-boot to generate the "BOOT.BIN":

```
$ petalinux-gen-zynq-boot -b <path-to-FSBL> -f <path-to-FPGA-bitstream>
```

The "BOOT.BIN" file will be in the "$PETALINUX/software/petalinux-dist/images" directory. Now that all images are created, you can boot the system.

Known Issues

U-Boot Old Kernel Image

By default, PetaLinux will use the FIT image format for the kernel image, but the U-Boot in PetaLinux v2013.04 is unable to load the kernel image if the physical address of the main memory is set to a non-zero value. This will be resolved in a future release. Currently we use the old Uboot kernel image format for the kernel image.

Trace Buffer is slow

The trace buffer is slow due to the CPU cache, which has a delayed writeback.
This will be resolved in a future release. This does not affect functionality, and the trace buffer can still be accessed. Alternatively, the FreeRTOS application can use a UART to display log information.

Additional Resources

References

- PetaLinux SDK Application Development Guide (UG981)
- PetaLinux SDK Board Bringup Guide (UG980)
- PetaLinux SDK Eclipse Plugin Guide (UG979)
- PetaLinux SDK Firmware Upgrade Guide (UG983)
- PetaLinux SDK Getting Started Guide (UG977)
- PetaLinux SDK Installation Guide (UG976)
- PetaLinux SDK QEMU System Simulation Guide (UG982)
{"Source-Url": "http://www.xilinx.com/support/documentation/sw_manuals/petalinux2013_04/ug978-petalinux-zynq-amp.pdf", "len_cl100k_base": 6983, "olmocr-version": "0.1.50", "pdf-total-pages": 25, "total-fallback-pages": 0, "total-input-tokens": 46427, "total-output-tokens": 8347, "length": "2e12", "weborganizer": {"__label__adult": 0.0007319450378417969, "__label__art_design": 0.0008683204650878906, "__label__crime_law": 0.00037384033203125, "__label__education_jobs": 0.0005621910095214844, "__label__entertainment": 0.00018286705017089844, "__label__fashion_beauty": 0.00036978721618652344, "__label__finance_business": 0.00030303001403808594, "__label__food_dining": 0.0005517005920410156, "__label__games": 0.0019474029541015625, "__label__hardware": 0.1287841796875, "__label__health": 0.0005154609680175781, "__label__history": 0.00032806396484375, "__label__home_hobbies": 0.00038695335388183594, "__label__industrial": 0.002002716064453125, "__label__literature": 0.00017845630645751953, "__label__politics": 0.00025177001953125, "__label__religion": 0.0007748603820800781, "__label__science_tech": 0.0777587890625, "__label__social_life": 8.291006088256836e-05, "__label__software": 0.04644775390625, "__label__software_dev": 0.7353515625, "__label__sports_fitness": 0.0005273818969726562, "__label__transportation": 0.0006704330444335938, "__label__travel": 0.00023257732391357425}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27867, 0.04675]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27867, 0.27747]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27867, 0.8057]], "google_gemma-3-12b-it_contains_pii": [[0, 89, false], [89, 2519, null], [2519, 3622, null], [3622, 4870, null], [4870, 6521, null], [6521, 7800, null], [7800, 9114, null], [9114, 10713, null], [10713, 12769, null], [12769, 13899, null], [13899, 15393, null], [15393, 16479, null], [16479, 17789, null], [17789, 18461, null], [18461, 18734, null], [18734, 19046, null], [19046, 19605, null], [19605, 19887, null], [19887, 21713, null], [21713, 23385, null], [23385, 24808, null], [24808, 26373, null], [26373, 26819, null], [26819, 27502, null], [27502, 27867, null]], "google_gemma-3-12b-it_is_public_document": [[0, 89, true], [89, 2519, null], [2519, 3622, null], [3622, 4870, null], [4870, 6521, null], [6521, 7800, null], [7800, 9114, null], [9114, 10713, null], [10713, 12769, null], [12769, 13899, null], [13899, 15393, null], [15393, 16479, null], [16479, 17789, null], [17789, 18461, null], [18461, 18734, null], [18734, 19046, null], [19046, 19605, null], [19605, 19887, null], [19887, 21713, null], [21713, 23385, null], [23385, 24808, null], [24808, 26373, null], [26373, 26819, null], [26819, 27502, null], [27502, 27867, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 
5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27867, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27867, null]], "pdf_page_numbers": [[0, 89, 1], [89, 2519, 2], [2519, 3622, 3], [3622, 4870, 4], [4870, 6521, 5], [6521, 7800, 6], [7800, 9114, 7], [9114, 10713, 8], [10713, 12769, 9], [12769, 13899, 10], [13899, 15393, 11], [15393, 16479, 12], [16479, 17789, 13], [17789, 18461, 14], [18461, 18734, 15], [18734, 19046, 16], [19046, 19605, 17], [19605, 19887, 18], [19887, 21713, 19], [21713, 23385, 20], [23385, 24808, 21], [24808, 26373, 22], [26373, 26819, 23], [26819, 27502, 24], [27502, 27867, 25]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27867, 0.00849]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
89131eb697a46fee912960800dadba32e1ea56d8
CSE 141 – Computer Architecture
Summer Session 1, 2005
Lecture 9: Pipelining Hazards
Pramod V. Argade
July 20, 2005

CSE141: Introduction to Computer Architecture
Instructor: Pramod V. Argade (p2argade@cs.ucsd.edu)
Office Hours: Tue. 7:30 - 9:00 PM (Center 105), Wed. 5:00 - 6:00 PM (HSS 1330), and by appointment
TAs: Chengmo Yang (c5yang@cs.ucsd.edu), Wenjing Rao (wrao@cs.ucsd.edu)
Lecture: Mon./Wed. 6:00 - 8:50 PM, HSS 1330
Textbook: Computer Organization & Design: The Hardware/Software Interface, 3rd Edition. Authors: Patterson and Hennessy
Web page: http://www.cse.ucsd.edu/classes/su05/cse141

<table> <thead> <tr> <th>Lecture #</th> <th>Date</th> <th>Time</th> <th>Room</th> <th>Topic</th> <th>Quiz topic</th> <th>Homework Due</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Mon. 6/27</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Introduction, Ch. 1 ISA, Ch. 2</td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>Wed. 6/29</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Arithmetic, Ch. 3</td> <td>ISA Ch. 2</td> <td>#1</td> </tr> <tr> <td></td> <td>Mon. 7/4</td> <td>No Class</td> <td></td> <td>July 4th Holiday</td> <td></td> <td></td> </tr> <tr> <td>3</td> <td>Wed. 7/6</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Performance, Ch. 4 Single-cycle CPU Ch. 5</td> <td>Arithmetic Ch. 3</td> <td>#2</td> </tr> <tr> <td>4</td> <td>Mon. 7/11</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Single-cycle CPU Ch. 5 Cont. Multi-cycle CPU Ch. 5</td> <td>Performance Ch. 4</td> <td>#3</td> </tr> <tr> <td>5</td> <td>Tue. 7/12</td> <td>7:30 - 8:50 PM</td> <td>HSS 1330</td> <td>Multi-cycle CPU Ch. 5 Cont. (July 4th make up class)</td> <td></td> <td></td> </tr> <tr> <td>6</td> <td>Wed. 7/13</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Single and Multicycle CPU Examples and Review for Midterm</td> <td>Single-cycle CPU Ch. 5</td> <td></td> </tr> <tr> <td>7</td> <td>Mon. 7/18</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Mid-term Exam Exceptions</td> <td></td> <td>#4</td> </tr> <tr> <td>8</td> <td>Tue. 7/19</td> <td>7:00 - 8:50 PM</td> <td>HSS 1330</td> <td>Pipelining Ch. 6 (July 4th make up class)</td> <td></td> <td></td> </tr> <tr> <td>9</td> <td>Wed. 7/20</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Hazards, Ch. 6</td> <td></td> <td></td> </tr> <tr> <td>10</td> <td>Mon. 7/25</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Memory Hierarchy &amp; Caches Ch. 7</td> <td>Hazards Ch. 6</td> <td>#5</td> </tr> <tr> <td>11</td> <td>Wed. 7/27</td> <td>6 - 8:50 PM</td> <td>HSS 1330</td> <td>Virtual Memory, Ch. 7 Course Review</td> <td>Cache Ch. 7</td> <td>#6</td> </tr> <tr> <td>12</td> <td>Sat. 7/30</td> <td>TBD</td> <td>TBD</td> <td>Final Exam</td> <td></td> <td></td> </tr> </tbody> </table>

Announcements

- **Reading Assignment**
  - Chapter 5. The Processor: Datapath and Control, Section 5.6
  - Chapter 6. Enhancing Performance with Pipelining, Sections 6.1 - 6.10
- **Homework 5: Due Mon., July 25th in class**
  5.49, 5.50, 6.1, 6.2, 6.4, 6.6, 6.15, 6.17, 6.20, 6.22, 6.23, 6.31, 6.32
- **Quiz**
  **When:** Mon., July 25th
  **Topic:** Pipelining and Hazards

Would our pipeline design work in any case?

- What happens when...
```assembly
add $3, $10, $11
lw  $8, 1000($3)
sub $11, $8, $7
```

Data Hazards

- When a result is needed in the pipeline before it is available, a "data hazard" occurs.
- The result of the SUB instruction is not available until CC5 or later!

Software Solutions to Data Hazards

- Have the compiler guarantee no hazards
- Rearrange code to remove the hazard
  - Not possible every time
- Insert "nops"
  - Where do we insert the "nops"?
  - `sub $2, $1, $3`
  - `and $12, $2, $5`
  - `or $13, $6, $2`
  - `add $14, $2, $2`
  - `sw $15, 100($2)`
- Problem: Data hazards are very common!
  - "nops" really slow us down!

Hardware Solutions to Data Hazards

- Stall the pipeline (insert bubbles)
  - Data hazards are too common
  - Same as "nops": severe performance hit
- Forward the data as soon as it is available
  - Modify the pipeline to forward (bypass) data

Forwarding

- Use temporary results; don't wait for them to be written
  a. No forwarding
  b. With forwarding (forwarding unit)

Reducing **EX** Data Hazards Through Forwarding

```
if (EX/MEM.RegWrite
    and (EX/MEM.RegisterRd != 0)
    and (EX/MEM.RegisterRd = ID/EX.RegisterRs)) ForwardA = 10

if (EX/MEM.RegWrite
    and (EX/MEM.RegisterRd != 0)
    and (EX/MEM.RegisterRd = ID/EX.RegisterRt)) ForwardB = 10
```

Reducing **MEM** Data Hazards Through Forwarding

[Diagram of a pipeline with forwarding unit]

```
if (MEM/WB.RegWrite
    and (MEM/WB.RegisterRd != 0)
    and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) ForwardA = 01

if (MEM/WB.RegWrite
    and (MEM/WB.RegisterRd != 0)
    and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) ForwardB = 01
```

Simultaneous EX/MEM Forwarding

- Consider the following code:

```assembly
add $1, $1, $2
add $1, $1, $3
add $1, $1, $4
...
```

- Must forward from the MEM stage
- Disable WB-stage forwarding

```
if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and ?? and (MEM/WB.RegisterRd = ID/EX.RegisterRs)) ForwardA = 01

if (MEM/WB.RegWrite and (MEM/WB.RegisterRd != 0)
    and ?? and (MEM/WB.RegisterRd = ID/EX.RegisterRt)) ForwardB = 01
```

(The ?? placeholder stands for the extra check that the EX/MEM forwarding condition does not already apply, so that the most recent result wins.)

Forwarding in Action

```assembly
sub $1, $12, $3
and $12, $3, $4
add $3, $8, $11
```

(The slide steps this sequence through the pipeline stages, from Instruction Fetch through Memory Access and Write Back, to show the forwarding paths in action.)

Data Hazard: Load followed by Store

```assembly
lw $2, 10($1)
st $2, 0x1000($5)
```

This forwarding can be done, but is there a forwarding path?
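Before answering that, it may help to see the two forwarding conditions above written out as executable logic. The following C sketch is our own illustration, not from the slides; the struct and field names simply mirror the slide's pipeline-register notation, and it folds in the "??" refinement from the Simultaneous EX/MEM Forwarding slide.

```c
/* Sketch of the forwarding-unit logic above in plain C (illustration only;
 * names mirror the slide's EX/MEM, MEM/WB and ID/EX notation). */
#include <stdbool.h>
#include <stdint.h>

struct ex_mem { bool reg_write; uint8_t rd; };
struct mem_wb { bool reg_write; uint8_t rd; };

/* Returns the 2-bit mux select for one ALU input:
 * 2 (10) = forward from EX/MEM, 1 (01) = forward from MEM/WB,
 * 0 (00) = read the register file. */
static uint8_t forward_select(uint8_t src_reg,
                              const struct ex_mem *exm,
                              const struct mem_wb *mwb)
{
    /* EX hazard: the most recent result wins. */
    if (exm->reg_write && exm->rd != 0 && exm->rd == src_reg)
        return 2;
    /* MEM hazard: only when the EX/MEM condition does not already
     * apply (the "??" refinement on the slide). */
    if (mwb->reg_write && mwb->rd != 0 &&
        !(exm->reg_write && exm->rd != 0 && exm->rd == src_reg) &&
        mwb->rd == src_reg)
        return 1;
    return 0;
}
```

ForwardA is then `forward_select(ID/EX.RegisterRs, ...)` and ForwardB is the same call with RegisterRt.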
M⇒M Forwarding for LW⇒SW (HW Exercise 6.20 in the Textbook)

Forwarding does not eliminate data hazards in all cases

- Consider this code:

```assembly
lw  $2, 10($1)
and $12, $2, $5
or  $13, $6, $2
add $14, $2, $2
sw  $15, 100($2)
```

Data Hazard: Load followed by R-type

The same sequence illustrates a load immediately followed by an R-type instruction that needs the loaded value.

Eliminating Data Hazards via Forwarding and Stalling

- lw $2, 10($1)
- and $12, $2, $5
- or $13, $6, $2
- add $14, $2, $2
- sw $15, 100($2)

Pipeline Interlocks

- Not all data hazards can be handled by forwarding
- Pipeline Interlock or Hazard Detection Unit
  - detects a hazard and stalls the pipeline until the hazard is clear
- A stall creates a pipeline bubble by:
  - Preventing the IF and ID stages from proceeding
    - don't write the PC (PCWrite = 0)
    - don't rewrite the IF/ID register (IF/IDWrite = 0)
  - Inserting "nops"
    - set all control signals propagating to EX/MEM/WB to zero (inserts a no-op instruction)

Hazard Detection Unit

- Keeping an instruction in the same stage is called a pipeline stall

```
if (ID/EX.MemRead
    and ((ID/EX.RegisterRt = IF/ID.RegisterRs)
      or (ID/EX.RegisterRt = IF/ID.RegisterRt)))
then stall the pipeline ➔ PCWrite = 0, IF/IDWrite = 0, EX/M/WB = 0
```

(Pipeline diagram: lw $2, 20($1) followed by and $4, $2, $5; the hazard detection unit inserts a bubble between them.)

Data Hazard Key Points

- Pipelining provides high throughput
- Data dependencies cause *data hazards*
- Data hazards can be solved by:
  - Software (nops)
  - Hardware data forwarding
  - Hardware pipeline stalling
- Our processor, and indeed all modern processors, use a combination of forwarding and stalling

Control Hazards

Conditional Branches in a Pipeline

- In program flow, data computed by certain instructions is used to determine the next instruction to execute, using conditional branches
- In a pipelined processor, conditional branches result in control hazards

```assembly
      sub $2, $2, $5
      and $6, $2, $4
      beq $6, $8, L9
      ...
L9:   and $x, $y, $z
      sub $p, $q, $r
```

Pipelined Datapath and Control

The decision about whether to branch doesn't occur until the MEM pipeline stage.

Impact of a Branch Instruction on the Pipeline

Program execution order (in instructions):

```assembly
40 beq $1, $3, 7
44 and $12, $2, $5
48 or  $13, $6, $2
52 add $14, $2, $2
72 lw  $4, 50($7)
```

(Pipeline diagram over clock cycles CC1-CC9.) The decision about whether to branch doesn't occur until the MEM pipeline stage. Can you explain why the PC of the target instruction is 72?

Dealing With Branch Hazards

- Software
  - Insert nops
  - Insert instructions that get executed either way (delayed branch)
- Hardware
  - Stall until you know which direction
    - 3 cycles wasted for every branch
  - Guess which direction
    - assume not taken (easiest)
    - more educated guess based on history (requires that you know it is a branch before it is even decoded!)
  - Ignore the branch for a cycle (branch delay slot)

Branch Hazards

When we decide to branch, other instructions are in the pipeline!
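Before moving into branch handling, the load-use interlock from the Hazard Detection Unit slide above can be written in the same C style as the forwarding sketch earlier. This is our own illustration, not from the slides; the names again mirror the ID/EX and IF/ID pipeline-register notation.

```c
/* Sketch of the load-use hazard (interlock) check from the slide above
 * (illustration only). */
#include <stdbool.h>
#include <stdint.h>

struct id_ex_hz { bool mem_read; uint8_t rt; };
struct if_id_hz { uint8_t rs, rt; };

/* Returns true when the pipeline must stall for one cycle: the
 * instruction in EX is a load whose destination register is needed
 * by the instruction currently in ID. */
static bool load_use_stall(const struct id_ex_hz *idex,
                           const struct if_id_hz *ifid)
{
    return idex->mem_read &&
           (idex->rt == ifid->rs || idex->rt == ifid->rt);
}
/* On a stall: PCWrite = 0, IF/IDWrite = 0, and the control signals
 * entering EX/MEM/WB are zeroed to insert a bubble. */
```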
Stalling for Branch Hazards: Assume the branch is taken

```assembly
beq $4, $0, there
and $12, $2, $5
or  ...
add ...
sw  ...
```

Wastes cycles if the branch is not taken.

Assume Branch *Not Taken*

- Same performance as stalling when you're wrong
- This is the preferred approach
- There is a 3-cycle penalty if a branch is taken
- How could we reduce this penalty?

Resolving the Branch in the ID Stage, and Flushing if the Branch is Taken

Note: Forwarding paths and muxes have to be added before registers are compared in the ID stage:

- Forwarding path from EX/MEM to IF/ID
- Forwarding path from MEM/WB to IF/ID

Problem: What if the instruction immediately preceding the branch writes to the required register?

- The pipeline must be stalled for one clock cycle in the ID stage

Reducing the delay of branches

- Resolve the branch in the ID stage
  - Move the register compare into the ID stage
- Provide data forwarding
  - Ensure that the most recent register values are used in the ID stage
  - Add the necessary forwarding muxes and paths
- Implement faster logic to compare registers
  - Current ALU approach
    - Subtract the two registers and check whether the result is zero
    - Slow!
  - Faster approach (see the sketch after this slide)
    - Exclusive-OR the two registers, then OR the result bits to check whether the result is zero
    - Fast, since there is no carry propagation

Flush Instructions in the Pipe if a Branch is Taken

- Flushing an instruction means preventing it from changing any permanent state (registers, memory, PC).
- Similar to a pipeline bubble...
  - The difference is that we need to be able to insert those bubbles later in the pipeline
- Flushing an instruction on a taken branch
  - Must flush the instruction being fetched in the IF stage using the IF.Flush signal, which changes all instruction fields to zero
    - SLL $0, $0, 0 is equivalent to a NOP
  - Let the instruction fields percolate through the pipeline

Branch is Taken

```assembly
36 sub $10, $4, $8
40 beq $1, $3, 7
44 and $12, $2, $5
48 or  $13, $6, $2
52 add $14, $2, $2
...
72 lw  $4, 50($7)
```

Branch stall reduced from 3 cycles to 1 cycle!

Eliminating the 1-Cycle Branch Stall

- There's no rule that says we have to see the effect of the branch immediately. Why not wait an extra instruction before branching?
- The original SPARC and MIPS processors each used a single branch delay slot to eliminate single-cycle stalls after branches.
- The instruction after a conditional branch is always executed in those machines, regardless of whether the branch is taken or not!
- This works great for this implementation of the architecture, but becomes a permanent part of the ISA.
- What about the MIPS R10000, which has a 5-cycle branch penalty and executes 4 instructions per cycle?

The branch delay slot instruction (the next instruction after a branch) is executed even if the branch is taken.

Scheduling the Branch Delay Slot

The branch delay slot is only useful if you can find something to put there. If you can't find anything, you must put a *nop* there to ensure correctness. For cases b and c (of the scheduling figure), $t4 must be an unused temporary register.
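The "faster approach" to the ID-stage register compare mentioned above avoids the subtractor's carry chain: XOR the two registers bit by bit, then OR-reduce the XOR bits. As a small illustration of ours (not from the slides), the same test in C is:

```c
#include <stdbool.h>
#include <stdint.h>

/* Early branch compare, per the slide: XOR the registers and check that
 * no bit survives. In hardware the XOR bits feed a wide OR (no carry
 * propagation); in C the == 0 test expresses that reduction. Contrast
 * with (rs - rt) == 0, which requires a full carry-propagating subtract. */
static bool regs_equal(uint32_t rs, uint32_t rt)
{
    return (rs ^ rt) == 0;
}
```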
Importance of efficient processing of branches

- Our implementation assumes "branch not taken"
  - Move branch resolution to the ID stage
  - Flush the instruction in the IF stage if the branch is taken
- 15 to 20% of all instructions are branches
- MIPS
  - branch stall of 1 cycle, 1 instruction issued per cycle
  - delayed branch
- Recent processors
  - 3-4 cycle hazard, 1-2 instructions issued per cycle
  - the cost of branch misprediction goes up
- Pentium Pro
  - 12+ cycle misprediction penalty, 3 instructions issued per cycle
  - HUGE penalty for mispredicting a branch
  - 36+ issue slots wasted

Predicting Branch Direction

- Easiest
  - always not taken, always taken
  - forward not taken, backward always taken
    - Appropriate for loops
  - compiler predicted (branch likely, branch not likely)
- Save history of branch outcomes
  - If the history is available, use it in the fetch stage
    - Change the PC accordingly
  - In the decode stage, verify that the prediction was correct
    - If not, set the correct PC, flush the pipeline, update the history
- Next easiest
  - Record a 1-bit history of whether the branch was taken or not
    - 1-bit predictor

1-bit Pattern History Table (PHT)

- Uses the low bits of the branch address to choose an entry
- Each entry holds 1 branch prediction bit
- Size is small: 1 bit by N entries (e.g. 4K)
- Why not use all bits of the branch address?
- What happens when the table is too small?
- Prediction is incorrect twice for loops:

```assembly
Loop: lw   $t0, 0($s1)       # $t0 = array element
      addu $t0, $t0, $s2     # add scalar in $s2
      sw   $t0, 0($s1)       # store result
      addi $s1, $s1, -4      # decrement pointer
      bne  $s1, $zero, Loop  # branch if $s1 != 0
```

Branch History Table

Note: Branch prediction logic is implemented in the IF stage

- Check the PC in the PHT to see if the instruction is a branch
  - Modify the next PC accordingly
- Validate the branch prediction in the ID stage
- If the prediction was incorrect
  - Invalidate the instructions in the pipeline
  - Start execution at the correct PC

2-bit Branch Prediction Scheme

A branch prediction has to be incorrect twice before it is changed.

Control Hazards -- Key Points

- Control (or branch) hazards arise because we must fetch the next instruction before we know if we are branching or where we are branching.
- Control hazards are detected in hardware.
- We can reduce the impact of control hazards through:
  - early detection of the branch address and condition
  - branch prediction
  - branch delay slots

Exceptions

Exception Handling in the Pipeline

- Consider an arithmetic overflow exception
  - add $1, $2, $1
- Extra hardware
  - Note: add is in the EX stage
- Flush the instructions that follow the add
  - In the IF stage, assert IF.flush
  - In the ID stage, use the mux added for stalls (OR with ID.flush)
  - In the EX stage, use the EX.flush signal
- **Prevent the ADD instruction from writing to $1**
  - Enables the programmer to see the value of $1 that caused the exception
- Transfer control to PC = 0x8000 0180
- Save the PC in the EPC (**need to subtract 4 from it before saving**)
- Save the exception cause in the Cause register

Datapath and Control for Exceptions

Exception Handling in a Pipeline

```assembly
0x40 sub $11, $2, $4
0x44 and $12, $2, $5
0x48 or  $13, $2, $6
0x4c add $1, $2, $1
0x50 slt $15, $6, $7
0x54 lw  $16, 50($7)

0x80000180 sw $25, 1000($0)
0x80000184 sw $26, 1004($0)
```

Note: The ALU overflow signal is an input to the control unit.

Issues in Handling an Exception

- Five instructions are active in the pipeline
- Multiple exceptions may occur
  - The earliest instruction that generated an exception is interrupted
- Exceptions are detected in different stages of the pipeline
  - An undefined instruction is discovered in the ID stage
  - Overflow is detected in the EX stage
  - A kernel call (i.e. OS call) is detected in the EX stage
- Precise exceptions
  - The EPC saves the PC of the instruction that caused the exception
  - This is required for virtual memory
- Imprecise exceptions
  - The EPC may not save the PC of the instruction that caused the exception
  - For ease of implementation

Summary: Hazards & Exceptions

- **Data Hazards**
  - Operand forwarding
  - Stall when forwarding cannot be done
- **Control Hazards**
  - Stall on branch
  - Predict branch
- **Exceptions**
  - May be detected in different stages of the pipeline
  - Must flush the offending instruction and all that follow it
  - Save context
  - Jump to the exception handler
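To round off the prediction discussion above (the 1-bit PHT and the 2-bit scheme), here is a small C sketch of ours, not from the slides: a pattern history table of 2-bit saturating counters indexed by the low bits of the branch PC, with the 4K table size used as an example on the 1-bit PHT slide.

```c
/* Sketch of a PHT of 2-bit saturating counters (illustration only). */
#include <stdbool.h>
#include <stdint.h>

#define PHT_ENTRIES 4096u  /* e.g. a 4K-entry table, as on the slide */

static uint8_t pht[PHT_ENTRIES];  /* 0,1 = predict not taken; 2,3 = predict taken */

static unsigned pht_index(uint32_t pc)
{
    /* Low bits of the word-aligned branch address; branches that share
     * an index alias -- the "table too small" problem from the slide. */
    return (pc >> 2) & (PHT_ENTRIES - 1);
}

static bool predict_taken(uint32_t pc)
{
    return pht[pht_index(pc)] >= 2;
}

static void update_predictor(uint32_t pc, bool taken)
{
    uint8_t *c = &pht[pht_index(pc)];
    /* Saturating counter: the prediction must be wrong twice in a row
     * before the predicted direction flips (the 2-bit scheme's point). */
    if (taken && *c < 3)
        (*c)++;
    else if (!taken && *c > 0)
        (*c)--;
}
```

A 1-bit predictor is the degenerate case where the counter has only two states, which is exactly why it mispredicts twice per loop: once at loop exit and once on re-entry.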
{"Source-Url": "http://cseweb.ucsd.edu/classes/su05/cse141/su05_09.pdf", "len_cl100k_base": 6072, "olmocr-version": "0.1.53", "pdf-total-pages": 64, "total-fallback-pages": 0, "total-input-tokens": 98606, "total-output-tokens": 8354, "length": "2e12", "weborganizer": {"__label__adult": 0.0008001327514648438, "__label__art_design": 0.00251007080078125, "__label__crime_law": 0.0008730888366699219, "__label__education_jobs": 0.0869140625, "__label__entertainment": 0.0003027915954589844, "__label__fashion_beauty": 0.0005955696105957031, "__label__finance_business": 0.0006031990051269531, "__label__food_dining": 0.0009860992431640625, "__label__games": 0.0019102096557617188, "__label__hardware": 0.0246734619140625, "__label__health": 0.0011854171752929688, "__label__history": 0.0011262893676757812, "__label__home_hobbies": 0.0007486343383789062, "__label__industrial": 0.0033168792724609375, "__label__literature": 0.0006623268127441406, "__label__politics": 0.0009589195251464844, "__label__religion": 0.0013837814331054688, "__label__science_tech": 0.2900390625, "__label__social_life": 0.00038051605224609375, "__label__software": 0.01221466064453125, "__label__software_dev": 0.5634765625, "__label__sports_fitness": 0.001071929931640625, "__label__transportation": 0.002536773681640625, "__label__travel": 0.0004730224609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18417, 0.06666]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18417, 0.3969]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18417, 0.81957]], "google_gemma-3-12b-it_contains_pii": [[0, 118, false], [118, 608, null], [608, 2595, null], [2595, 2980, null], [2980, 3365, null], [3365, 3494, null], [3494, 3661, null], [3661, 4047, null], [4047, 4297, null], [4297, 4368, null], [4368, 4434, null], [4434, 4482, null], [4482, 4794, null], [4794, 4889, null], [4889, 5170, null], [5170, 5408, null], [5408, 6034, null], [6034, 6209, null], [6209, 6309, null], [6309, 6509, null], [6509, 6579, null], [6579, 6649, null], [6649, 6829, null], [6829, 6889, null], [6889, 7061, null], [7061, 7176, null], [7176, 7315, null], [7315, 7801, null], [7801, 8091, null], [8091, 8149, null], [8149, 8210, null], [8210, 8324, null], [8324, 8636, null], [8636, 9021, null], [9021, 9037, null], [9037, 9377, null], [9377, 9485, null], [9485, 9929, null], [9929, 10372, null], [10372, 10454, null], [10454, 10609, null], [10609, 10714, null], [10714, 10796, null], [10796, 11029, null], [11029, 11417, null], [11417, 11964, null], [11964, 12523, null], [12523, 12702, null], [12702, 13347, null], [13347, 13452, null], [13452, 13688, null], [13688, 14283, null], [14283, 14835, null], [14835, 15357, null], [15357, 15667, null], [15667, 15764, null], [15764, 16134, null], [16134, 16145, null], [16145, 16733, null], [16733, 16769, null], [16769, 17054, null], [17054, 17674, null], [17674, 18033, null], [18033, 18417, null]], "google_gemma-3-12b-it_is_public_document": [[0, 118, true], [118, 608, null], [608, 2595, null], [2595, 2980, null], [2980, 3365, null], [3365, 3494, null], [3494, 3661, null], [3661, 4047, null], [4047, 4297, null], [4297, 4368, null], [4368, 4434, null], [4434, 4482, null], [4482, 4794, null], [4794, 4889, null], [4889, 5170, null], [5170, 5408, null], [5408, 6034, null], [6034, 6209, null], [6209, 6309, null], [6309, 6509, null], [6509, 6579, null], [6579, 6649, null], [6649, 6829, null], [6829, 6889, null], 
[6889, 7061, null], [7061, 7176, null], [7176, 7315, null], [7315, 7801, null], [7801, 8091, null], [8091, 8149, null], [8149, 8210, null], [8210, 8324, null], [8324, 8636, null], [8636, 9021, null], [9021, 9037, null], [9037, 9377, null], [9377, 9485, null], [9485, 9929, null], [9929, 10372, null], [10372, 10454, null], [10454, 10609, null], [10609, 10714, null], [10714, 10796, null], [10796, 11029, null], [11029, 11417, null], [11417, 11964, null], [11964, 12523, null], [12523, 12702, null], [12702, 13347, null], [13347, 13452, null], [13452, 13688, null], [13688, 14283, null], [14283, 14835, null], [14835, 15357, null], [15357, 15667, null], [15667, 15764, null], [15764, 16134, null], [16134, 16145, null], [16145, 16733, null], [16733, 16769, null], [16769, 17054, null], [17054, 17674, null], [17674, 18033, null], [18033, 18417, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 18417, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18417, null]], "pdf_page_numbers": [[0, 118, 1], [118, 608, 2], [608, 2595, 3], [2595, 2980, 4], [2980, 3365, 5], [3365, 3494, 6], [3494, 3661, 7], [3661, 4047, 8], [4047, 4297, 9], [4297, 4368, 10], [4368, 4434, 11], [4434, 4482, 12], [4482, 4794, 13], [4794, 4889, 14], [4889, 5170, 15], [5170, 5408, 16], [5408, 6034, 17], [6034, 6209, 18], [6209, 6309, 19], [6309, 6509, 20], [6509, 6579, 21], [6579, 6649, 22], [6649, 6829, 23], [6829, 6889, 24], [6889, 7061, 25], [7061, 7176, 26], [7176, 7315, 27], [7315, 7801, 28], [7801, 8091, 29], [8091, 8149, 30], [8149, 8210, 31], [8210, 8324, 32], [8324, 8636, 33], [8636, 9021, 34], [9021, 9037, 35], [9037, 9377, 36], [9377, 9485, 37], [9485, 9929, 38], [9929, 10372, 39], [10372, 10454, 40], [10454, 10609, 41], [10609, 10714, 42], [10714, 10796, 43], [10796, 11029, 44], [11029, 11417, 45], [11417, 11964, 46], [11964, 12523, 47], [12523, 12702, 48], [12702, 13347, 49], [13347, 13452, 50], [13452, 13688, 51], [13688, 14283, 52], [14283, 14835, 53], [14835, 15357, 54], [15357, 15667, 55], [15667, 15764, 56], [15764, 16134, 57], [16134, 16145, 58], [16145, 16733, 59], [16733, 16769, 60], [16769, 17054, 61], [17054, 17674, 62], [17674, 18033, 63], [18033, 18417, 64]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18417, 0.03233]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
4d9a0e9a44f8e24f8ad4d18d0eb48f1a88ad2164
SPACE STATION SOFTWARE RELIABILITY ANALYSIS BASED ON FAILURES OBSERVED DURING TESTING AT THE MULTISYSTEM INTEGRATION FACILITY

Final Report
NASA/ASEE Summer Faculty Fellowship Program - 1987
Johnson Space Center

Prepared by: Tak Chai Tamayo
Academic Rank: Assistant Professor
University Department: University of Houston-UP, Department of Industrial Engineering, Houston, Texas 77004
NASA/JSC Directorate: Mission Support
Division: Spacecraft Software Division
Branch: Systems Development
JSC Colleague: Richard E. Coblentz
Date: August 14, 1987
Contract Number: NGT 44-001-800

ABSTRACT

The quality of software is not only vital to the successful operation of the Space Station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the Space Station. Defense of management decisions can be greatly strengthened by combining engineering judgment with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancy, making traditional statistical analysis unsuitable for evaluating the reliability of software.

A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, a quantitative measure of software reliability based on failure history during testing is derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixings required, are also presented here.

INTRODUCTION

The purpose of the Multisystem Integration Facility (MSIF) is to provide a facility on which information systems for the Space Station, which are produced by different developers, may be integrated, tested, verified, certified for flight, and packaged for launch. The MSIF concept was motivated by the fact that Space Station software is being developed by multiple developers at different sites. The systems are highly distributed and will be built up in phases, over a number of launches. Several upgrades and changes will take place over the life of the Space Station. The MSIF will be required to first perform testing using computer models of all the Space Station systems. As real systems are delivered to the MSIF, testing will be performed using combinations of models and real systems. The final test will be one in which all systems are actual flight-ready versions. Since the correction of errors found during multisystem integration is the responsibility of the developer, control over delivered systems may be returned to the developer for the correction of errors, and then back to the MSIF to continue testing.

Software is an important element of the Space Station, and is vital to its successful operation. Software failures can be life-threatening in some cases. In addition, the quality of software can greatly affect the amount of fixing required during the testing, integration or verification process, potentially causing delays in the launching of the Space Station, which is scheduled to begin in January 1994. Consequently there is an urgent need for a quantitative measure of the reliability of software, and for methods of combining the reliability of software and hardware elements of the Space Station to establish the system reliability of the Station.

The concept of software reliability differs from that of hardware reliability in that failure is not due to a "wearing out" process.
Software failures are in fact errors which, owing to the complexity of a computer program, do not become evident until a combination of conditions brings the error to light. Unlike the hardware bathtub curve, there are no wearout characteristics, only a continuing burn-in. Once a software error is identified and properly fixed, it is, in general, fixed for all time. However, the large number of possible paths and inputs in a Space Station software system makes complete testing of the software generally impossible.

Several approaches are currently available for testing a software system: path testing, functional testing and formal proofs of correctness. A complete functional test would verify that the correct output is produced for each input. It would consist of subjecting the program to all possible input streams. However, a ten-character string has $2^{80}$ possible input streams and corresponding outputs, so complete functional testing in this sense is clearly impractical (see the short sketch at the end of this section). In path testing, one would design a sufficient number of test cases to assure that every path through the routine is exercised at least once. But most often, even the number of paths through a small routine is too astronomical to permit all paths to be tested. As for formal proofs of correctness, each program statement is examined and used in a step of an inductive proof that the routine will produce the correct output as stated by formal mathematics. The practical issue here is that such proofs are very expensive and have been applied only to numerical routines. Not only are all known approaches to absolute demonstration of error-free software impractical, they are impossible as well.

Because current acceptance procedures cannot produce exhaustively tested, error-free software, the purchaser of a software product is provided with no quantitative information on which to base an acceptance decision, and is thus forced to make these decisions based mostly on intuition and his own experience in similar situations. Therefore our goal should be to provide sufficient testing to assure that the probability of failure due to hibernating errors is acceptably low. It is expected that the level of testing required will depend on the system or component, its criticality and complexity, its state of development, and the cost and usage of the system.

Software reliability is defined here as the probability that a given software system operates for some time period without software error detectable by executing the code on the machine for which it was designed, assuming that it is used within design limits. Such being the case, test cases should be designed to cover the operating scenarios of the information system designed.

When software is delivered to the MSIF, it has already been successfully tested on the flight-compatible hardware. MSIF testing will start using models of the other systems, and progress to using delivered versions of the other systems. Current concepts of the MSIF require that, if errors are detected, the software be returned to its developer with discrepancy reports of the errors found. After proper fixing, the software is returned to the MSIF for retesting. In general, while the errors found are being fixed, new errors are also introduced. A portion of the old errors persists, and will recur during retesting. This process is repeated until a decision is made about the quality of the software based on the testing results. In the past this has normally meant that the software operated successfully on all test cases to which it was subjected.
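To make the exhaustive-testing arithmetic above concrete (our illustration, not the report's), even a hypothetical rig running a billion test cases per second would need tens of millions of years to cover all $2^{80}$ streams:

```python
# Sketch: why exhaustive functional testing of a ten-character (80-bit)
# input is impractical. The tests-per-second figure is an assumption.
streams = 2 ** 80                      # ten 8-bit characters
per_second = 10 ** 9                   # a billion test cases per second
years = streams / per_second / (3600 * 24 * 365)
print(f"{streams:.3e} streams, ~{years:.1e} years at 1e9 tests/s")
# ~1.209e+24 streams, ~3.8e+07 years
```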
The goals of this paper are:

1. Develop a statistical model to describe the failure behavior during testing.
2. Obtain a statistical measure of software reliability, based on the failure history observed during testing.
3. According to a prespecified software reliability and the failure history, establish criteria as to when testing of software can stop.
4. Combine the reliability of software and hardware elements of the Space Station to establish the "system" reliability of the Station.
5. Specify the types of error records to be maintained during testing so that they can be used for later statistical analysis.

These goals are motivated by the fact that in the Space Shuttle program, the extent and degree of testing performed on space software have generally been decided based on management judgment. Verification requirements are determined individually for each hardware/software product, based on the criticality and risk associated with the hardware/software when it is integrated into the operational environment.

DESCRIPTION OF THE MODEL

The model of interest here is the failure behavior of a software system after it is delivered to the MSIF for testing and integration. A number of test cases, designed to cover a selection of the environments in which the software will be used, are run, and errors occurring during execution are recorded in discrepancy reports. The software is then returned to its developer for fixing. After proper fixing, it is returned to the MSIF for retesting, where a portion of the old errors may recur and some new errors are detected.

Assumptions of the Model

1. All errors are caused only by faults in the software; all other elements involved in testing are assumed to possess high fidelity.
2. All errors that occur during testing are observed.
3. The number of new errors found is statistically independent of the total number of errors found during the previous trial.
4. The failure rate of new errors for each trial is dependent on the number of fixings already performed on the software.
5. The number of test cases run during each trial remains relatively constant.
6. All persisting errors are statistically independent of each other.
7. The number of new errors observed during each trial follows a Poisson process.

Although it is possible for the failure rate of new errors found during each trial to be directly proportional, constant, or inversely proportional with respect to the number of fixings performed, historical information gathered during the development of the Shuttle Orbiter primary flight software indicates a decreasing trend.

During each trial, the total number of errors detected consists of two independent entities: new and persisting errors. Let

\[ N_k = \text{total number of errors detected during the } k\text{th trial} \]
\[ X_k = \text{number of new errors detected during the } k\text{th trial, after } (k-1) \text{ fixings by the developer} \]
\[ R_k = \text{number of errors persisting from the previous trial.} \]

Thus, the total number of errors detected at each trial is the sum of the number of new errors introduced by the last fixing and the number of errors persisting from the last trial, i.e.

\[ N_k = X_k + R_k. \]
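A minimal simulation sketch of this model (ours, not the report's; it assumes numpy is available, and it borrows the parameter values of the ACS example given later in this report: a decreasing Poisson rate of 50e^(-k) new errors per trial and a 10% persistence probability):

```python
# Minimal simulation sketch of the failure model N_k = X_k + R_k
# (new Poisson errors plus binomially persisting old errors).
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

def simulate_trials(num_trials, lam=lambda k: 50 * np.exp(-k), p=0.1):
    history = []
    n_prev = 0
    for k in range(1, num_trials + 1):
        x_k = rng.poisson(lam(k))          # new errors after k-1 fixings
        r_k = rng.binomial(n_prev, p)      # errors persisting from trial k-1
        n_prev = x_k + r_k                 # N_k = X_k + R_k
        history.append((k, x_k, r_k, n_prev))
    return history

for k, x, r, n in simulate_trials(8):
    print(f"trial {k}: new={x:3d} persisted={r:3d} total={n:3d}")
```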
**Analysis of the Model**

Let

\[ p_k = \text{probability that an error found in the } k\text{th trial persists into the } (k+1)\text{th trial.} \]

If the number of errors found during the kth trial is \( n_k \), then the probability that \( r \) errors persist into the (k+1)th trial after fixing is:

\[ \text{Prob}(R_{k+1} = r \mid N_k = n_k) = C(n_k, r)\, p_k^r (1-p_k)^{n_k-r}, \quad r = 0, 1, 2, \ldots, n_k \]

where

\[ C(n_k, r) = \text{the number of unordered samples of size } r \text{ taken from } n_k. \]

This is the conditional probability density of persisting errors given the total number of errors found in the previous trial, and it follows a binomial distribution \( B(n_k, p_k) \).

As defined in the model, the total number of errors in the next trial is determined by the sum of two independent random variables, namely the numbers of new and persisting errors. This implies that future errors depend on the number of errors found at present through the persisting errors. Therefore the conditional probability density function of future errors given the present condition is:

\[ \text{Prob}(N_{k+1} = n \mid N_k = n_k) = B(n_k, p_k) * g(x_{k+1}), \]

where \( g(x_{k+1}) \) is the probability density function of the number of new errors found during the (k+1)th trial, and \( B * g \) represents the convolution of the two random variables \( R_{k+1} \) and \( X_{k+1} \).

By the assumption that \( X_{k+1} \) follows a Poisson distribution with mean \( \lambda_{k+1} \), the convolution of the binomial and Poisson distributions is given as follows:

\[ \text{Prob}(N_{k+1} = n \mid N_k = n_k) = \sum_{j=0}^{\min(n,\, n_k)} C(n_k, j)\, p_k^j (1 - p_k)^{n_k - j}\, e^{-\lambda_{k+1}}\, \frac{\lambda_{k+1}^{\,n-j}}{(n-j)!} \]

If \( p_k \) is relatively small, then the density function of \( B(n_k, p_k) \) can be approximated by a Poisson distribution, and the convolution of \( R_{k+1} \) and \( X_{k+1} \) is a Poisson process given as:

\[ \text{Prob}(N_{k+1} = n \mid N_k = n_k) = \exp(-(\lambda_{k+1} + \mu_{k+1}))\, (\lambda_{k+1} + \mu_{k+1})^n / n! \]

with mean \( \lambda_{k+1} + \mu_{k+1} \) and \( \mu_{k+1} = n_k p_k \). In the case where \( n = 0 \), this conditional density function estimates the software reliability based on the number of failures observed.

**Criteria for Termination of Testing**

Testing of a software system shall be conducted at a level consistent with its criticality level within the Space Station. Software that is highly critical to the successful operation of the Station will require high reliability, and thus more detailed testing than the rest. Current concept documents of the MSIF state that the degree to which a system is tested at the MSIF depends on its risk category and how tightly coupled it is with the Data Management Systems (DMS).

Suppose a particular software system is required to have a minimum reliability level, say REL, during the mission period T. In particular, REL is defined as the probability that no error will occur during the mission period, and (1-REL) is the probability that any error is detected during T. Hence the following inequality must be satisfied in order to meet the required reliability level REL:

\[ \text{Prob}(N_{k+1} = 0 \mid N_k = n_k) \geq \text{REL}. \]

Using the Poisson approximation, it is thus sufficient to solve for the largest \( n_k^* \) such that:

\[ \exp(-\lambda_{k+1} - n_k^* p_k) \geq \text{REL} \]

or, taking logarithms on both sides,

\[ n_k^* \leq \left[-\ln(\text{REL}) - \lambda_{k+1}\right] / p_k \tag{1} \]

Equation (1) gives the set \( (k, n_k^*) \) which yields the required reliability level REL.
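A short sketch (our code, standard library only; function names are ours) of the exact convolution above and of the Poisson-approximation reliability estimate. The sample values below correspond to the trial k = 1, N_1 = 1 row of Table 2 later in this report; the approximation reproduces the tabulated 0.0010418:

```python
# Exact conditional distribution of N_{k+1} given N_k = n_k:
# binomial persistence convolved with Poisson new errors.
from math import comb, exp, factorial

def prob_next(n, n_k, p_k, lam_next):
    """Exact Prob(N_{k+1} = n | N_k = n_k)."""
    total = 0.0
    for j in range(min(n, n_k) + 1):
        persist = comb(n_k, j) * p_k**j * (1 - p_k)**(n_k - j)
        new = exp(-lam_next) * lam_next**(n - j) / factorial(n - j)
        total += persist * new
    return total

n_k, p_k, lam_next = 1, 0.1, 50 * exp(-2)   # example values used in this report
print(prob_next(0, n_k, p_k, lam_next))     # exact reliability: ~0.0010363
print(exp(-(lam_next + n_k * p_k)))         # Poisson approximation: ~0.0010418
```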
Since \( n_k^* \geq 0 \), the earliest time testing may terminate can be obtained by setting \( n_k^* = 0 \) and then solving for the smallest \( k \) that satisfies:

\[ \lambda_{k+1} \leq -\ln(\text{REL}). \]

Let \( m \) be the expected number of fixings required before a software system passes the testing:

\[ m = E[k] = \sum_{k=0}^{\infty} k\, \text{Prob(testing is terminated after } k \text{ fixings)} = \sum_{k=0}^{\infty} k\, \text{Prob}(N_k = n_k^*,\ n_k^* \geq 0) \]

which can be obtained by applying properties of conditional probabilities as follows:

\[ \text{Prob}(N_k = n_k^*,\ n_k^* \geq 0) = \sum_{n_{k-1}=0}^{\infty} \text{Prob}(N_k = n_k^*,\ n_k^* \geq 0,\ N_{k-1} = n_{k-1}) = \sum_{n_{k-1}=0}^{\infty} \text{Prob}(N_k = n_k^*,\ n_k^* \geq 0 \mid N_{k-1} = n_{k-1})\, \text{Prob}(N_{k-1} = n_{k-1}) \]

and

\[ \text{Prob}(N_j = n_j) = \sum_{n_{j-1}=0}^{\infty} \text{Prob}(N_j = n_j \mid N_{j-1} = n_{j-1})\, \text{Prob}(N_{j-1} = n_{j-1}) \]

for \( j = 1, 2, \ldots, k-1 \).

Example

Suppose from historical data one will observe \( X_k \) new errors after \( k \) fixings, where \( X_k \) follows a Poisson process with mean \( \lambda_k = 0.5\exp(-k) \) per KSLOC\(^1\) per hour of execution time. Ten percent of the errors found during a trial persist after one fixing by the developer (i.e. \( p_k = 0.1 \)). One hundred test cases, which take a total of 100 hours to execute, are designed to test the Atmosphere Control and Supply (ACS) subsystem software, which provides total and partial pressure control within the pressurized habitation module of the Space Station. The system consists of 10 KSLOC and is classified as criticality level 1, which requires a minimum reliability level of 0.999 during its useful life cycle of 20 years; i.e., the probability of no error being detected during the next 20 years of operation is 0.999.

By adjusting the unit of measurement, the mean number of failures for the ACS software during each trial period, which lasts 100 hours, is:

$$\lambda_k = 50\exp(-k)$$

The number of hours in 20 years is 175,200, which is equivalent to 1,752 trial periods. To achieve a reliability of 0.999 during the next 20 years, the software should achieve a minimum reliability level REL\(^*\) during each test period of 100 hours, where

$$(\text{REL}^*)^{1752} = 0.999 \quad \text{or} \quad \text{REL}^* = 0.9999994$$

\(^1\) KSLOC = thousand source lines of code

The minimum number of fixings needed for the ACS software based on the required reliability level is given in Table 1. It shows that for a required reliability of 0.9999994, the earliest time testing can terminate is after the 18th fixing, when no error is detected during that trial. Estimates of the software reliability during the next period, based on selected numbers of errors observed during testing, are given in Table 2. A short computation reproducing this fixing count follows.
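A sketch (ours) reproducing the example: with \( n_k^* = 0 \), testing may terminate at the smallest k for which \( \lambda_{k+1} \leq -\ln(\text{REL}^*) \):

```python
# Sketch reproducing the ACS example: earliest fixing count k at which
# testing may terminate, i.e. smallest k with lambda_{k+1} <= -ln(REL*).
from math import exp, log

REL_20yr = 0.999
trials_in_20yr = 20 * 365 * 24 // 100        # 1752 hundred-hour periods
rel_star = REL_20yr ** (1 / trials_in_20yr)  # ~0.9999994 per trial

k = 0
while 50 * exp(-(k + 1)) > -log(rel_star):
    k += 1
print(k)  # 18
```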
Table 1. The Minimum Number of Fixings Needed for the ACS Software Based on Required Reliability Level

<table>
<thead>
<tr> <th>Reliability Level (REL)</th> <th>Minimum Number of Fixings (k)</th> </tr>
</thead>
<tbody>
<tr> <td>0.90</td> <td>6</td> </tr>
<tr> <td>0.95</td> <td>6</td> </tr>
<tr> <td>0.99</td> <td>8</td> </tr>
<tr> <td>0.999</td> <td>10</td> </tr>
<tr> <td>0.9999</td> <td>13</td> </tr>
<tr> <td>0.99999</td> <td>15</td> </tr>
<tr> <td>0.999999</td> <td>17</td> </tr>
<tr> <td>0.9999997</td> <td>18</td> </tr>
<tr> <td>0.9999999</td> <td>19</td> </tr>
</tbody>
</table>

Table 2. Software Reliability Based on Number of Errors Observed

<table>
<thead>
<tr> <th>Trial (k)</th> <th># Errors Observed During Testing ($N_k$)</th> <th>Reliability</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>0</td> <td>0.0011514</td> </tr>
<tr> <td>1</td> <td>1</td> <td>0.0010418</td> </tr>
<tr> <td>2</td> <td>0</td> <td>0.0829635</td> </tr>
<tr> <td>2</td> <td>1</td> <td>0.0750685</td> </tr>
<tr> <td>3</td> <td>0</td> <td>0.4002035</td> </tr>
<tr> <td>3</td> <td>1</td> <td>0.3621191</td> </tr>
<tr> <td>4</td> <td>0</td> <td>0.7139821</td> </tr>
<tr> <td>4</td> <td>1</td> <td>0.6460377</td> </tr>
<tr> <td>5</td> <td>0</td> <td>0.8834349</td> </tr>
<tr> <td>5</td> <td>1</td> <td>0.7993650</td> </tr>
<tr> <td>6</td> <td>0</td> <td>0.9554297</td> </tr>
<tr> <td>6</td> <td>1</td> <td>0.8645085</td> </tr>
<tr> <td>7</td> <td>0</td> <td>0.9833667</td> </tr>
<tr> <td>7</td> <td>1</td> <td>0.8897870</td> </tr>
<tr> <td>8</td> <td>0</td> <td>0.9938485</td> </tr>
<tr> <td>8</td> <td>1</td> <td>0.8992713</td> </tr>
<tr> <td>9</td> <td>0</td> <td>0.9977325</td> </tr>
<tr> <td>9</td> <td>1</td> <td>0.9027858</td> </tr>
<tr> <td>9</td> <td>2</td> <td>0.8168743</td> </tr>
<tr> <td>10</td> <td>0</td> <td>0.9991652</td> </tr>
<tr> <td>10</td> <td>1</td> <td>0.9040821</td> </tr>
<tr> <td>10</td> <td>2</td> <td>0.8180473</td> </tr>
<tr> <td>11</td> <td>0</td> <td>0.9996928</td> </tr>
<tr> <td>11</td> <td>1</td> <td>0.9045595</td> </tr>
<tr> <td>11</td> <td>2</td> <td>0.8184792</td> </tr>
<tr> <td>12</td> <td>0</td> <td>0.9998869</td> </tr>
<tr> <td>12</td> <td>1</td> <td>0.9047351</td> </tr>
<tr> <td>12</td> <td>2</td> <td>0.8186382</td> </tr>
<tr> <td>13</td> <td>0</td> <td>0.9999584</td> </tr>
<tr> <td>13</td> <td>1</td> <td>0.9047998</td> </tr>
<tr> <td>13</td> <td>2</td> <td>0.8186967</td> </tr>
<tr> <td>14</td> <td>0</td> <td>0.9999847</td> </tr>
<tr> <td>15</td> <td>0</td> <td>0.9999943</td> </tr>
<tr> <td>16</td> <td>0</td> <td>0.9999979</td> </tr>
<tr> <td>16</td> <td>1</td> <td>0.9048355</td> </tr>
<tr> <td>17</td> <td>0</td> <td>0.9999992</td> </tr>
<tr> <td>18</td> <td>0</td> <td>0.9999997</td> </tr>
<tr> <td>19</td> <td>0</td> <td>0.9999999</td> </tr>
</tbody>
</table>

Collection of Data

Certain data concerning the software will have to be collected in order to verify the statistical model described here. This includes:

1. The frequency of errors persisting from the previous trial and of new errors introduced by the fixing effort.
2. The number of fixings performed by the developer when these errors were found.
3. The criticality level of errors found at each trial.
4. A function which relates the failure rate of new errors at each trial to the number of fixings performed.
5. Probability distributions of the number of new errors detected at each trial.
6. The probability that an identified error is corrected by the developer through one fixing.
7. The number of test cases applied at each trial.
8. The size of the software.
The function which relates the failure rate of new errors to the number of fixings can be obtained by applying regression analysis to the frequency of failures obtained from historical data. Since a perfect fit of n experimental data points may require a polynomial of degree (n-1), techniques for selecting a "sufficient" function may be needed to reduce this polynomial to an acceptable form. If the distributions of failures are unknown, a Chi-square goodness-of-fit test can be employed to test for the pattern of the distributions.

Data from the Space Shuttle software were not applied to this model because they made no distinction between persisting and new errors. However, the data did support a decreasing trend with the number of fixings. When available, the amount of time and the cost associated with each fixing can be combined with the amount of computer execution time required during testing to establish testing schedules and the total cost of software testing, so that launch schedules and budgets are met. Factors such as the degree of interaction with other systems are important in determining the reliability of a distributed system, and therefore should also be included in the data when available.

**System Reliability**

The Challenger accident shows that, in the case of accidents, the defense of management decisions can be greatly strengthened if they are based on a combination of statistical analysis and engineering judgment. In the context of a manned Space Station, it is important to explore reliability theory to assess the risk of extended human presence in space. Most Space Station systems are complex systems composed of hardware and software, both of which are required to be in operational states in order for the system to perform its designed function. It is thus necessary to include software among the components which form the reliability network of the Space Station.

It has been shown that a system with subsystems and components in series will have reliability less than that of its weakest link. Suppose the ACS subsystem hardware in the habitation module has failure times which follow an exponential distribution with \( MTBF = 100 \) years, and the reliability of its software, which interacts with the Data Management Systems (DMS), is 0.999 during the next 20 years. Then the ACS subsystem will have reliability less than those of the hardware, the software, and the DMS alone. In particular, if no scheduled maintenance work is to be performed during the next 20 years and the DMS has reliability of 0.995, then the reliability of the ACS subsystem is:

\[ \text{Reliability} = \exp\left(-\frac{20}{100}\right) \times 0.999 \times 0.995 = 0.8138224. \]

This demonstrates an important fact: the reliability of a complex system decreases rapidly as more subsystems are added to the system design. For instance, a system which requires five components in a series configuration will have reliability of 0.77 if each of the components was tested to have reliability of 0.95. Thus, in order to achieve the goal of a highly reliable system, efforts should be made to obtain highly reliable hardware as well as software, through either engineering design or testing. While traditional methods of redundancy work well for hardware, they can be quite costly for complex software, as redundancy in software normally means an independent development of the computer program. It is this unique characteristic of software which makes software testing an effective method to maintain software quality.
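A minimal sketch of the series-system arithmetic above (our code; the numbers are the report's ACS example):

```python
# Series-system reliability for the ACS subsystem: exponential hardware
# (MTBF = 100 years, mission t = 20 years), software reliability 0.999
# and DMS reliability 0.995 over the same period.
from math import exp

hardware = exp(-20 / 100)
system = hardware * 0.999 * 0.995
print(round(system, 7))     # 0.8138225

# Five components in series, each with reliability 0.95:
print(round(0.95 ** 5, 2))  # 0.77
```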
CONCLUSION

The lack of a quantitative method to evaluate the reliability of software delivered by a developer motivated the statistical model designed here. The unique characteristics of no wearout and costly redundancy have made software testing, besides software design, the only way to maintain software quality. The model developed here represents the failure pattern during software testing, which includes new errors introduced by the fixing and errors persisting from the previous trial. Quantitative approaches were derived to predict the software reliability, together with criteria to terminate testing based on failure history. These results can be applied to enhance the safety of the Space Station and to avoid delays in launch schedules due to delays in the software verification process.
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19880005502.pdf", "len_cl100k_base": 6188, "olmocr-version": "0.1.53", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 34154, "total-output-tokens": 7114, "length": "2e12", "weborganizer": {"__label__adult": 0.000400543212890625, "__label__art_design": 0.0003807544708251953, "__label__crime_law": 0.0005211830139160156, "__label__education_jobs": 0.0012273788452148438, "__label__entertainment": 0.00011813640594482422, "__label__fashion_beauty": 0.00018644332885742188, "__label__finance_business": 0.0006432533264160156, "__label__food_dining": 0.000553131103515625, "__label__games": 0.000957965850830078, "__label__hardware": 0.002803802490234375, "__label__health": 0.0011186599731445312, "__label__history": 0.0005025863647460938, "__label__home_hobbies": 0.0001825094223022461, "__label__industrial": 0.0008058547973632812, "__label__literature": 0.0004167556762695313, "__label__politics": 0.00022101402282714844, "__label__religion": 0.0003676414489746094, "__label__science_tech": 0.364013671875, "__label__social_life": 0.00013113021850585938, "__label__software": 0.01508331298828125, "__label__software_dev": 0.607421875, "__label__sports_fitness": 0.0003902912139892578, "__label__transportation": 0.001209259033203125, "__label__travel": 0.0002963542938232422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26350, 0.0608]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26350, 0.84599]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26350, 0.9334]], "google_gemma-3-12b-it_contains_pii": [[0, 577, false], [577, 1578, null], [1578, 3327, null], [3327, 5163, null], [5163, 7001, null], [7001, 8089, null], [8089, 9379, null], [9379, 10494, null], [10494, 11771, null], [11771, 13075, null], [13075, 14119, null], [14119, 14837, null], [14837, 16186, null], [16186, 17383, null], [17383, 20293, null], [20293, 21750, null], [21750, 23346, null], [23346, 24550, null], [24550, 25329, null], [25329, 26350, null]], "google_gemma-3-12b-it_is_public_document": [[0, 577, true], [577, 1578, null], [1578, 3327, null], [3327, 5163, null], [5163, 7001, null], [7001, 8089, null], [8089, 9379, null], [9379, 10494, null], [10494, 11771, null], [11771, 13075, null], [13075, 14119, null], [14119, 14837, null], [14837, 16186, null], [16186, 17383, null], [17383, 20293, null], [20293, 21750, null], [21750, 23346, null], [23346, 24550, null], [24550, 25329, null], [25329, 26350, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26350, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26350, null]], "pdf_page_numbers": [[0, 577, 1], 
[577, 1578, 2], [1578, 3327, 3], [3327, 5163, 4], [5163, 7001, 5], [7001, 8089, 6], [8089, 9379, 7], [9379, 10494, 8], [10494, 11771, 9], [11771, 13075, 10], [13075, 14119, 11], [14119, 14837, 12], [14837, 16186, 13], [16186, 17383, 14], [17383, 20293, 15], [20293, 21750, 16], [21750, 23346, 17], [23346, 24550, 18], [24550, 25329, 19], [25329, 26350, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26350, 0.26289]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
053401db6fcfe6922cedf8e384a350d56bacb82f
COMPUTER SOFTWARE DOCUMENTATION

P. A. Comella
Laboratory for Space Physics
Goddard Space Flight Center
Greenbelt, Maryland
January 1973
(NASA-TM-X-66161)

CONTENTS

- ABSTRACT
- I. INTRODUCTION
- II. WHY DOCUMENTATION?
- III. THE DIFFICULTY OF ACHIEVING GOOD DOCUMENTATION
- IV. THE CONTENTS OF DOCUMENTATION
- V. THE QUESTION OF STANDARDIZATION
- VI. A METHODOLOGY FOR SOFTWARE DOCUMENTATION
  - A. PROBLEM DEFINITION
  - B. SYSTEM DESIGN
- VII. DEVELOPING DOCUMENTATION TOOLS
- VIII. CONCLUSION
- BIBLIOGRAPHY

ABSTRACT

This paper is a tutorial on the documentation of computer software. It presents a methodology for achieving an adequate level of documentation as a natural outgrowth of the total programming effort, commencing with the initial problem statement and definition and terminating with the final verification of code. It discusses the content of adequate documentation, the necessity for such documentation, and the problems impeding the achievement of adequate documentation.

I. INTRODUCTION

The sad state of computer software documentation is a thorn in the side of all associated with the computing field. The literature abounds with advice concerning the content and the format of documentation [1,2,7,8,12,15]. Managerial seminars develop methods to cajole and coerce designers, programmers, coders et al. to document [25,26,27,28]. But the problem remains. From the midst of the lamentations and hand-wringing over this plight comes a cry, a very loud and very persistent one: "Standardize the Format of Documentation". Other voices suggest alternatives [3,5,16,19,20,21].

This paper investigates the area of computer software documentation: the problems, the necessity of solving these problems, the content of adequate documentation, methods of documentation and an evaluation of them. It discusses a methodology for achieving a good level of documentation and the implications of this methodology in the areas of system design, programming, coding, testing and debugging. Lastly, it suggests areas where it is feasible to develop realistic tools which could aid the documenter in his documentation efforts.

II. WHY DOCUMENTATION?

Any manager who has faced the problem of project continuity in the face of employee turnover knows "why documentation". Any designer modifying an existing system or merging his system with an existing system comprehends "why documentation". Any programmer modifying an existing program or interfacing his procedure with another's routines understands "why documentation".
And, of course, let us not forget the user who, wishing to use the fully operational, completely debugged, exhaustively tested system, must find out: 1) how to use it; and 2) having used it, must ascertain why his data caused an abnormal termination of the system with no output clues as to the cause of termination. He, assuredly, understands "why documentation".

Good documentation, because it is inextricably bound up in each facet of the project, from conception and design to coding, testing and acceptance, results in a formalization of the programming effort, and this formalization serves as a discipline in creating a programming methodology out of what is, today, still much of a programming art.

Good documentation is an historical record of the implementation of a system. It is a vehicle for communicating the intended functions of the system, the actual functions of the system, and how the system performs these functions. It provides evidence that the system works.

Good documentation also communicates what the system is NOT supposed to do. This is very, very important. After all, the system consists of a finite set of imperatives which can operate on a bounded set of data, and it is absolutely essential to know what limitations the system imposes. Thus, the user can know whether the system, operating on his set of data, will output a correct solution. If not, he can discover whether the system is modifiable to his needs and, if so, what is necessary to modify it.

Good documentation, in its archive function, can also serve as a tutorial in systems design, programming and coding, and can result in an improvement of methodologies in these areas. Good documentation, in requiring clear expression of concepts, definitions and functional specifications, can prevent the distortion of ideas that results in a system's being improperly or suboptimally implemented. It is a tool for project control and evaluation, because its production and completion at each phase of a project demonstrates that the phase itself is completed satisfactorily. It is the project's working paper. It makes the system implementation visible and allows for orderly development of subsystems.

Good documentation is also a record of design-phase decisions: a record of the alternatives chosen and why, as well as a record of the alternatives rejected and why. It is a record of the implications of the decisions: an explanation of the behavior of the system in its operating environment. This is a critical inclusion, for a change of environment frequently results in system malfunction, the result of some unnoticed and undocumented hardware (or software) dependency absent in the new environment.

Ultimately, properly executed documentation frees resources, both human and computing: those consulting good documentation can ascertain the scope of a system and hence evaluate whether a given system is obsolete and should be scrapped, or is functional but should be modified or extended. What and how to modify or extend is evident, as well as the side-effects of such amendments. Thus good documentation helps to reduce duplication of human effort and unnecessary redundancy in computer systems.

III. THE DIFFICULTY OF ACHIEVING GOOD DOCUMENTATION

Whenever the documentation effort is not viewed as an if-and-only-if proposition vis-à-vis the design, programming and coding effort, the documentation is apt to be inadequate or non-existent. At most it becomes an afterthought of dubious utility, a boring exercise inattentively executed.
Documentation is an area of the computing field where computing personnel demonstrate the least competence. After all, who has learned to document? Programmers are taught to CODE in whatever the programming language! Not much stress is laid on program design, and the documentation usually consists of a minimally (internally) annotated program that executes a test case of not necessarily critical importance. Little value is placed on the importance of the design and design decisions, while the executing code, regardless of its goodness, is of premium value. Where is the incentive to document? Adequate documentation can never occur until the design, programming, coding, etc. are regarded as completed if and only if the documentation is completed.

Lastly, the proliferation of hardware devices, programming languages and their accompanying compilers, and software systems, coupled with inadequate assessment of existing documentation procedures, makes it difficult to formulate documentation procedures that really work.

IV. THE CONTENT OF DOCUMENTATION

Documentation communicates a message - here a (total) description of a computer software system. Ideally, any query concerning the software system will find its answer in the message of the documentation for that system. Of paramount importance is the content of documentation (not necessarily its format). Documentation must communicate effectively concerning the proper operation of the given software system. It must describe the system itself as well as the resources used to develop the system. It must describe the environment of the system and the sub-systems that comprise the system. It must describe the problem that generated the idea for the system, i.e. the purpose of the system. It must describe the input set and the output set, i.e. the solutions, as well as the algorithms and procedures which transform an input subset into the appropriate output subset. Efficient documentation has as its offspring the generation of new concepts and the revision of old methodologies in problem solving, because the system development has become visible, available and complete, and hence evaluable.

There is broad general agreement concerning the content of documentation. Basically, adequate description of a computer software system involves:

1) documentation for management functions
2) documentation for user functions
3) documentation for operational functions
4) documentation for analysis, design and programming functions

Documentation for management is a justification of the system. It outlines the problems and needs prompting the proposal of the new system (or existing-system modification) and the benefits which will accrue because of its implementation. It is a statement of the broad conceptual design of the proposed system, with particular emphasis on its being a good solution to the problem in comparison with other alternatives. It provides facts in the realm of dollars and cents, personnel requirements, time schedules, and computer resource requirements necessary for measurement and evaluation of the system and for sound management decision-making. The format of the presentation must be satisfactory to management and is not the subject of this paper. For discussion of format and other specifics of management requirements, see references [7,12,25,26,27,28].
**Documentation for user functions** consists of a general system description, with appropriate definition of terms, enabling the user to ascertain the functions of the system and its limitations, the flow of the system, the domain of the input, the range of the output, the algorithms and procedures that transform the input into output, the procedures for preparing input, and the error-handling procedures for detection of bad input. It contains instructions for preparing the input and illustrative test cases. Gray and London [12] discuss user documentation in adequate detail, although from the point of view of standardizing documentation.

**Documentation for operational functions** describes the environment in which the system must operate: the hardware devices required, their configuration and set-up. It tells how to start up the system as well as how to restart in case of failure. It identifies the I/O devices and files and contains a detailed description of the data preparation procedures. It states storage requirements, both main and peripheral, and timing requirements, CPU and I/O, in meaningful units (perhaps defined within the body of the documentation). Again, Gray and London [12] is a good reference for the content of operational documentation.

Documentation for analysis, design and programming functions is the category of documentation which is the subject of this paper. Analytical documentation is a detailed statement of the problem and design. It defines the problem completely according to its input, output and the functional specifications - the sequence of logical states transforming the input to output. It defines the operating environment of the system - the computer and peripheral devices, the operating system, the command and control language, and the programming language(s) of the implementation. It defines (and orders with respect to implementation) the sub-systems comprising the system, with their appropriate task generations, and the structure of the data base containing the input files to the system as well as the output files. This includes the assignment of files to specific hardware devices, the data set name by which each file is known to the system, the organization of the file, the format of records within files, and the relation of contents to input/output or intermediate processing. It discusses file maintenance in terms of the update and retrieval mechanisms. It creates the testing systems and sequences, stating critical test points and paths and acceptance criteria. In the functional specifications it notes explicitly where errors can be detected and how such errors are to be handled.

The analytical documentation imposes structure on the problem definition, translating the initial statement into a meta-language, from which restatement the problem can be translated (directly) into the selected programming language(s). It specifies, as noted above, the order of tasking and sub-tasking to occur in the system implementation and the order of programming the functional specifications, thus indicating the general logic flow and control of the system. It specifies the structure of the data base and defines the files comprising the data base. It describes the content and format of the files. It specifies the interfaces with the operating system and explicitly states what the system can and cannot do. In this design phase, too, the testing specifications are written. From the analytical specifications the programs are designed.
Programming documentation describes the algorithms and procedures of the functional specifications: the detailed logic of each procedure which transforms the procedural input to procedural output. It specifies the interfaces with its sub-systems. (All of this should occur in a meta-language of the programming language chosen. For example, if the selected programming language is ALGOL, the logic should be written in ALGOL-like constructs). It defines the variables and functions used in the computations. From these programming specifications the coding can directly proceed. The coded program forms an integral part of the documentation. It is especially valuable where cross-referencing with the programming documentation is present. (Appropriate tools for clarification in this phase of the documentation might include glossaries and indexes to provide definitions and cross-referencing, and schematics and figures to illustrate logic flows and the structure and contents of the data base). This explicit statement of the analysis, system design, programming and coding is the documentation.

V. THE QUESTION OF STANDARDIZATION

The question of standardization is implicit in any discussion of documentation. As suggested in the introduction, management solutions to the documentation problem frequently lie in the realm of standardization of the format of documentation. Part of the rationale behind this leaning towards standardized formats is the idea that: 1) anyone can fill out a form with proper guidance; thus 2) the documentation problem becomes a managerial problem, with an attendant solution lying in the comprehensible (to management) areas of forms design, forms distribution and on-the-job training. There is another reason, however, more obscure but more insidious in its pervasive influence on the computing industry - an underestimation of the complexity of the mechanisms and methodologies of problem specification and systems design and implementation. Dijkstra has stated this quite eloquently [9].

Thus, while management's aim is to achieve a system of documentation that is easy to prepare (thinking that is why people don't document) and comprehensive in its description of the problem, solution and use of the system, it sabotages its goal to a large extent, not by its insistence on standardization per se, but by its selection of format as the criterion for standardization. Standardization imposes a discipline on the user of the standardized procedure and so creates a focal point in the approach to a problem. Thus, in this case, management has made the format, and not (necessarily) the content of documentation, as originally intended, the focus of the documentation effort. But the critical need is how to create well-designed and well-implemented systems that are adequately documented: to do away with the transformation of input to output via alchemy, with the functional specifications tucked away in the privacy of the author's mind.

This writer is convinced that people don't document because they, recognizing that documentation is a part of software development inseparable from the analytical, design, programming, coding and testing phases, don't know how to achieve this integration of documentation with the problem definition and solution. And to resolve this frustrating dilemma they skirt the documentation issue and thus diminish their powers of creativity in system design and implementation: this lack of graphic statements leads to imprecise, sub-optimal systems containing (undocumented) side-effects.
So it turns out that management has pinpointed the solution - standardization - but related it to the wrong problem - the format! But if the solution of standardization is applied to the problem of content, it becomes apparent that the proper thrust should be towards developing a methodology of problem definition, systems design and implementation out of which the documentation flows naturally. The next section is a discussion of such a standard - the methodology of top-down definition, design, programming, coding and testing. The methodology has implications in the critical areas of demonstration of program correctness, test and validation of systems, and debugging of code. Dijkstra [9], Mills [19,20], and Parnas [21,22] have excellent articles describing implementations employing this methodology. The immediate consequences of this methodology, in relation to the topic of software documentation, are as follows:

1) the problem is well-defined and documented
2) the design, programming and coding are well-considered, approach optimal and are documented
3) the system is usable, amendable and extendable, and the know-how is in the documentation
4) critical parts within the program are noted, and the testing documentation explicitly demonstrates the operation of the code along these paths. Demonstration of program correctness becomes feasible.

Thus the system is well-executed and the documentation is well-executed. Furthermore, and very importantly, the content of the documentation is a natural outgrowth of the system implementation.

VI. A METHODOLOGY FOR SOFTWARE DOCUMENTATION

This section describes a highly structured, top-down approach to the problem of generating a software system and its attendant documentation. It is analogous to the construction of a tree, with the root of the tree the problem statement; the first level the embedding of the problem in its operating environment; the second level the specification of internal system invariants and the interfacing of these invariants with the external environment; and the third and deeper levels the representation of the functional specifications and sub-specifications, in such a way that those specifications in closest propinquity to the root exert the greatest influence in constraining the system to meet problem objectives. Furthermore, the outer-level nodes always determine the interfaces with nodes at the next most inner level. Influences and constraints percolate downward, never upward or laterally. The aim will be to establish an equivalence (in meaning) between the system and its documentation and to allow the documentation to keep abreast of system development.

The raison d'être for a top-down approach to systems design and programming lies in the fact that - as Hoare [14] so aptly states - "... all of the properties of a program and all of the consequences of executing it in any given environment can, in principle, be found out from the text of the program itself by means of purely deductive reasoning," (p. 576). Armed with this knowledge, it makes sense to derive a methodology that makes it possible to approximate this realization. Dijkstra and Mills [9,19,20], among others, advocate the application of the concepts of structured programming - a method of programming requiring all functional specifications of a procedure to be expressed using only the following forms of statements: 1) sequence; 2) if ... then ... else; 3) do ... while - with all functional specifications having but one set of input and one entry point, and but one set of output and one exit point. (A brief sketch of these constructs follows.)
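As a loose modern illustration (ours, not the paper's; the routine and its names are invented, and Python's while stands in for do ... while), a procedure built from only the three structured forms, with one entry point and one exit point:

```python
# A routine written with only the three structured forms:
# sequence, if...then...else, and a loop (while).
# One set of inputs and one entry point; one output and one exit point.

def running_extremes(values):
    """Return (minimum, maximum) of a non-empty list of numbers."""
    # sequence: initialize state from the first element
    lo = values[0]
    hi = values[0]
    i = 1
    # loop: Python's while standing in for do...while
    while i < len(values):
        # if...then...else
        if values[i] < lo:
            lo = values[i]
        elif values[i] > hi:
            hi = values[i]
        i = i + 1
    # single exit point
    return lo, hi

print(running_extremes([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 9)
```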
This advocacy of go-to-less programming has its origins in the theorems of Böhm and Jacopini [4]. From the point of view of documentation, verification and testing of systems, such an approach is highly desirable, because given a program written without go-to's, much can be said about whether the program does what it is supposed to do. However, because a go-to-less language may not be available to the reader of this paper, the subject of structured programming will not be pursued further here; instead the paper will describe a top-down methodology embodying the spirit of structured programming. (As an aside, Mills claims such aims can be achieved even with the 'go to' allowed [20]). The discussion will suggest a methodology for problem definition, system design and specification, including testing procedures, program design and coding, all with a view towards producing an optimal system optimally documented.

A. Problem Definition

The first step is to wrest the problem statement from the requestor in order to arrive at a problem definition. The guiding principle must be the realization that there is almost always a dichotomy between what the user really wants or can have and what he thinks he wants - this is particularly true of the naive user of computer systems. Thus the designer (consultant), in eliciting the request, must also elicit the purpose of the request. This will aid him in pursuing a line of questioning, the answers to which will culminate in a workable problem definition. The user, while comprehending that total implementation will frequently occur through time, i.e. the implementation will occur in stages, frequently operates under the premise that problem definition should likewise take place across time. This is an erroneous viewpoint, having as its end result - even under an assumption of correct problem definition, which is unlikely - suboptimal design, patched programs and code, the introduction of undesirable side-effects and excessive debug time! The designer must assist the requestor in achieving a correct and complete statement of the problem before proceeding with design and analysis.

Secondly, the designer must ascertain what are actually system parameters and what are system invariants. Here, particularly in a modeling situation, the "solution" is frequently only a prelude to the solution and the "constants" but zeroth-order estimates of the solution constants (this may motivate a design embedding no constants within the body of the functional code). Frequently, too, entire sections of code will be replaced: the functions themselves are parameters and will undergo tremendous revision. Such considerations, if known beforehand, can influence the design hugely, while lack of knowledge of this consideration can render the procedures difficult to modify. Forewarning, too, might influence the designer to choose a high-level programming language for which an optimizing compiler exists, so that functions can be readily coded and modified.

Another area for scrutiny is the output specifications. The user requests what he knows about and what he thinks expeditious. Underlying a request for tabular output may be a need for plotting, because the user is unknowledgeable concerning capabilities for computer plotting or because he thinks it will take too long to include the plotting right off.
It is best to elicit these needs before the programming design and coding commence, for even if the initial implementation does not execute these options, a place in the code can be set aside for later insertion of code using dummy procedures, etc. This is superior to patching a running program: patching frequently introduces side-effects, increases the difficulty of testing and debugging, and decreases code readability. It is necessary to note here that testing is a design function: the testing specifications flow from the design considerations and the problem specifications.

Input considerations are of crucial importance, too. It is necessary to know the answers to these questions before proceeding with system design: What is the input? Where do the data come from? What does the input look like? Is its format inflexible? This is very important. Many times the fixed-format data is actually variable, and coding based on the fixed-format premise, while efficient, has constrained the design too strongly, percolating its influence through many levels of coding so that changes in format require - sometimes - major code revisions and that inevitable patching. The designer must be aware of this problem and must make the requestor cognizant of the implications of his specifications. Thus, the designer must make certain that the requestor can really live with his specifications.

On the other hand, there are certain data sets whose formats are invariant (tapes from a satellite, for instance) but which could contain errors. Thus, the problem of error detection and error handling and correction requires attention. What is the probability of error occurrence? Of what type? What is to be done when the error occurs - abort, ignore, correct? What are the indications that an error has occurred? The relative sophistication of the expected user is also a consideration in the design of the input subsystem.

On output, design decisions are made concerning output media, the optionality of output data sets, the format of solution data sets, and the content, format and location (on-line or off-line with respect to the solution data sets) of error messages of adequate content to locate the source of error - a challenging problem.

The requestor must understand the importance of the problem statement - the completeness of the statement ultimately determines the implemented system's usability. The last phase of these pre-design consultations is the presentation of the request - as the designer perceives it - to the requestor, preferably for his signature: this signals concurrence with the problem statement and enables the designer to proceed to the analysis and design of the system. Addenda and amendments to the proposal should be stated in writing - problems can undergo striking metamorphoses in the course of development, and to maintain clarity of problem definition and requestor-designer accord such a policy is wise. This phase completed, the problem at that given instant of time is defined. This definition can be achieved even in a research environment - a point that is frequently argued as not possible - because the problem statement clearly indicates the variables and constants of the system. Even though it is not explicitly known when or how parameters will change in the course of development, it is known that they may, and the design can allow for this. Thus, from the design point of view, the problem is defined.
B. The System Design

With the problem statement in hand, the designer commences the analysis and specification of the system - the problem solution. In the top-down approach, given the problem statement and a definition of the available computing resources, the designer proceeds to define the problem as a system embedded in an environment composed of a subset of these resources: hardware devices, operating systems, compilers, assemblers, etc. Criteria for the selection of each resource are explicitly stated, as well as the implications, i.e. constraints, imposed by the selection. Reasons for rejection of alternatives - where available - are also explicitly stated. This environment with its attributes exerts external influences upon the developing system, independent of the constraints which the problem statement imposes. But it is critical to note: the external environment acts first, and the system must conform to these external behavior demands before it can respond to the internal demands of the problem itself. Consequently, in the top-down methodology the control commands that correctly interface the embryonic system with its outside world are written first. This places the problem definition in a proper perspective; but not only that, it makes the system in skeleton form known to the computing system. And, much to the joy of all concerned, it satisfies that magical need to get running - but in a very special way - a hierarchical way such that encompassing code is always executing and "fully" tested before the next lower level of executable code is created and integrated into the system. "Fully" means at least to the point of determining the syntactical validity of constructs.

Now the system communicates with this environment not only through this command and control language but also through its I/O requests. Furthermore, it is usually in terms of I/O that the user has stated his most stringent, least flexible requirements: in terms of satisfaction of user requirements, the I/O area can be most critical. Thus, it is good top-down philosophy to define these communication paths at the next level; in IBMese the medium would be the Job Control Language (JCL). The JCL specifications, however, are a function of the data base design. Hence, creation of JCL controlling user I/O requests implies prior definition of the data base characteristics: the files comprising the data base; the format (organization) of each of these files; the data set names referencing these files; the criteria stipulating the organization and the implications of the type of organization selected. (Schemata and tables serve useful purposes in illustrating the structure of data bases and the organization of files.) This JCL funnels user input through the system environment, connecting it with its processing program for functional transformation into output conforming with user requirements, and transfers these results to the appropriate output media. Thus, this data base and its JCL connect user to system and force compliance with the user's most stringent requirements at the outermost levels.
Next comes the creation of the control and functional code in **order of dominance**, commencing with the coding of the **critical nucleus** (that section of code which controls the primary specification of functions) and tasks, its testing and integration into the system, as follows: 1) the nucleus of control code for the \(i{+}1\)st level is created at the \(i\)th level; functional code created at the \(i{+}1\)st level has its interfaces defined at the \(i\)th level and only at the \(i\)th level; 2) the \(i{+}1\)st level of code is tested and integrated with the system (which exists, is executing, and has been implemented and tested through the \(i\)th level of definition). Steps 1 and 2 are iterated until the system is fully implemented.

It is important to note several features of this top-down methodology: 1) the programmer is able to carry out his design structure in code; 2) the programmer is able to design a testable system; 3) the documentation is a natural outflow of the design and test functions. The following paragraphs amplify these points.

Sub-task specifications and functional specifications are always carried out in executable code; for example, the appearance of a CALL SUBR (ARG1, ..., ARGN) in the \(i\)th level of code signals (and is the only signal) the design need to create, at the \(i{+}1\)st level of code, a procedure known to the system by the name SUBR, which operates on the input (an already defined subset of the arguments ARG1, ..., ARGN) to produce the output (defined within SUBR as a subset of ARG1, ..., ARGN); a sketch of this stub-first discipline follows the list below. Further, the function (sub-task) is always referenced through its interface defined at the supervisory level; (ideally) each function (sub-task) reports to one supervisor only (to minimize connectivity; see below). Observe: only that information which enables the function to be correctly coded without introduction of upward or lateral side-effects is supplied in the interface, nothing more and nothing less. The aim here is to minimize program connectivity - connections are assumptions that modules make about one another - and to prevent, in modification of a module, violations of assumptions other modules make about the module being changed. (Here, to modify, one simply proceeds **down** the code-tree to the affected node(s) and replaces the function or task by another which satisfies the same criterion of correctness.) Sub-task specification permits maintenance of program integrity and permits demonstration of program correctness - the documentation of the specifications. The bottom-up approach of supra and lateral task specifications, by contrast, pays inadequate attention to the testing problem - the uncovering of errors; the debugging problem - the locating of the roots of the errors and their subsequent correction; and the connectivity problem; all of which have repercussions in the demonstration of program correctness and hence in documentation.

Aspects of testing must enter into design-stage decision making so that the resulting system has the following characteristics: 1) the structure of the program forms the basis for the design of the test; 2) the number of relevant states (states to be tested) is of reasonable extent; 3) the relevant states are indeed testable.
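The following sketch (Java; the names are illustrative assumptions in the spirit of the CALL SUBR example) shows the discipline in miniature: the supervisory level alone defines the interface, and the lower level is integrated first as a stub, so that the encompassing code is always executing and syntax-checked before real functional code exists.

```java
/** Level i: the supervisor. It alone defines the interface to the sub-task. */
public final class Supervisor {
    public static void main(String[] args) {
        double[] input = {1.0, 2.0, 3.0};
        // This call is the only signal that level i+1 must supply a
        // procedure named "transform" with exactly this interface.
        double[] output = LevelBelow.transform(input);
        System.out.println(java.util.Arrays.toString(output));
    }
}

/** Level i+1: integrated first as a stub; replaced later by functional
 *  code satisfying the same interface, without touching level i. */
final class LevelBelow {
    static double[] transform(double[] in) {
        // STUB: echo the input until the functional code is written.
        return in.clone();
    }
}
```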
Test planning proceeds as follows: 1) determination of the extent of testing; 2) identification of testable states; 3) ranking of testable states in order of importance, according to criteria derived from the critical properties the problem solution must possess in order to satisfy user requirements; 4) selection of relevant states using the ordering of step 3; 5) structuring of the program design so that these relevant states are testable; 6) development of a set of test data which forces the system into all of its relevant states; 7) verification of the code by execution of the program using the test cases. Note: the statement of properties which the system must have to satisfy user requirements influences the test plan most strongly. This statement is a series of assertions describing the behavior of the program, that is, the effects of computation on the input set. Such assertions occur: 1) at the end of the program/procedure, stating what the correctness of the program/procedure means; 2) at the start of the program/procedure, concerning satisfaction of initial conditions; 3) at some point along each loop; 4) at points of functional specification. The test plan incorporates these assertions into the program. Execution of the test cases demonstrates program correctness as follows: whenever the initial conditions are true and the assertion is true at each critical point along the path, then the final assertion is true. Conversely, if the program fails at some point, the path output can locate the source of the error, thus expediting correction. Note: the test plan code is permanent code, optionally executed - it is an inherent part of the documentation, indicating relevant states, critical paths and assertions. Test plan code is necessarily executed during initial system creation and subsequent modification phases.

The test plan discussed in the preceding paragraphs is an integral part not only of the program structure but also of the documentation: execution of the test cases 1) demonstrates that the program does what it is supposed to do; 2) facilitates system modification - in fact, renders it feasible; 3) demonstrates what transformations algorithms and procedures effect upon their sets of input data - all primary functions of documentation.
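A brief sketch of such permanent, optionally executed test-plan code follows (Java, whose assertions are disabled unless the program is run with -ea; the routine and its names are illustrative assumptions):

```java
/** Test-plan code as permanent, optionally executed assertions:
 *  enable with "java -ea" during creation and modification phases. */
public final class SumRoutine {
    /** Sums the elements of a; the assertions state what correctness means. */
    public static int sum(int[] a) {
        assert a != null : "initial condition: the input array exists";
        int total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
            // assertion along the loop: total is the sum of a[0..i]
            assert total == partialSum(a, i) : "loop assertion broken at " + i;
        }
        // final assertion: what correctness of the routine means
        assert total == partialSum(a, a.length - 1) : "final assertion failed";
        return total;
    }

    private static int partialSum(int[] a, int upTo) {
        int s = 0;
        for (int i = 0; i <= upTo; i++) s += a[i];
        return s;
    }
}
```

If a test case fails, the message of the first violated assertion locates the point along the path where the error arose, exactly as the test plan above intends.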
Lastly, in the top-down methodology the program itself becomes a high-level flowchart. The outermost levels of the code synopsize the program. Functional specifications are known at their point of origin in the parent nodes by their symbolic source-language names. Also, their interfaces are explicitly stated in these references. Finally, each "box" of this flowchart - each node of the code-tree - with its combination of control code defining functional specifications at the next inner level and functional code operating on input data designated at the immediately outer level, is a natural unit of documentation. For greater readability, Mills [20] advocates limitation of unit size to one page of a computer program listing. In conclusion, the top-down methodology structures the entire development of the problem solution, standardizing the approach to the analysis, design, programming and coding functions, with the outcome of well designed systems, properly coded and tested and adequately documented.

VII. DEVELOPING DOCUMENTATION TOOLS

This section briefly suggests two areas of attention which might offer some opportunities for amelioration of the documentation problem. The classroom occupies an inadequate place in the documentation effort, its potential as a maker of documentation tools - the programmer himself - largely ignored. For the most part, programming courses are coding courses. Instructors almost invariably stress the coding aspects of projects, indicating to students that cute working code is of inestimable assistance in earning the coveted high grade. Daily admonitions concerning the limited time remaining to get running and debugged drive the student to frenzies of coding and debugging. The design, analysis and documentation aspects of programming, as well as their interrelationships, receive scant attention. The result: a code-first, document-later mentality permeates the computing field! The solution: the development of curricula for programming courses which place coding in its proper relationship to analysis, design, programming, testing and documentation - and the use, if it must be, of that infamous lever, the grade, to underscore the necessity of adequate analysis, design, testing and documentation in the total effort.

Another potential tool is the use of syntax-directed compilers such as Wirth has developed for PL/I (see Mills [19]). This compiler constructs questions from the program being compiled for the programmer's later answering on an interactive system. This effort provides a standard way to elicit programmer response to specific questions regarding his program, the questions themselves being the result of compiler analysis of the program.

VIII. CONCLUSION

Only those programming methodologies which integrate the analysis, design, programming, testing, coding and documentation efforts can have as output well designed systems, adequately documented. This paper describes such a methodology.
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730007435.pdf", "len_cl100k_base": 7579, "olmocr-version": "0.1.50", "pdf-total-pages": 34, "total-fallback-pages": 0, "total-input-tokens": 57616, "total-output-tokens": 10000, "length": "2e12", "weborganizer": {"__label__adult": 0.00031447410583496094, "__label__art_design": 0.00037217140197753906, "__label__crime_law": 0.00022995471954345703, "__label__education_jobs": 0.0017671585083007812, "__label__entertainment": 5.495548248291016e-05, "__label__fashion_beauty": 0.00010949373245239258, "__label__finance_business": 0.00021326541900634768, "__label__food_dining": 0.0002453327178955078, "__label__games": 0.0005383491516113281, "__label__hardware": 0.0007195472717285156, "__label__health": 0.0002586841583251953, "__label__history": 0.00016820430755615234, "__label__home_hobbies": 7.581710815429688e-05, "__label__industrial": 0.0002073049545288086, "__label__literature": 0.0003993511199951172, "__label__politics": 0.00011521577835083008, "__label__religion": 0.0003077983856201172, "__label__science_tech": 0.005680084228515625, "__label__social_life": 6.514787673950195e-05, "__label__software": 0.007495880126953125, "__label__software_dev": 0.97998046875, "__label__sports_fitness": 0.0001621246337890625, "__label__transportation": 0.0002796649932861328, "__label__travel": 0.00011861324310302734}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42842, 0.02581]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42842, 0.44217]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42842, 0.90414]], "google_gemma-3-12b-it_contains_pii": [[0, 188, false], [188, 326, null], [326, 1093, null], [1093, 1694, null], [1694, 2980, null], [2980, 4217, null], [4217, 5701, null], [5701, 7170, null], [7170, 8580, null], [8580, 9942, null], [9942, 11338, null], [11338, 12949, null], [12949, 14510, null], [14510, 15962, null], [15962, 17355, null], [17355, 18663, null], [18663, 20114, null], [20114, 21382, null], [21382, 22830, null], [22830, 24334, null], [24334, 25685, null], [25685, 26989, null], [26989, 28450, null], [28450, 29942, null], [29942, 31346, null], [31346, 32827, null], [32827, 34153, null], [34153, 35413, null], [35413, 36855, null], [36855, 38300, null], [38300, 39091, null], [39091, 40303, null], [40303, 41516, null], [41516, 42842, null]], "google_gemma-3-12b-it_is_public_document": [[0, 188, true], [188, 326, null], [326, 1093, null], [1093, 1694, null], [1694, 2980, null], [2980, 4217, null], [4217, 5701, null], [5701, 7170, null], [7170, 8580, null], [8580, 9942, null], [9942, 11338, null], [11338, 12949, null], [12949, 14510, null], [14510, 15962, null], [15962, 17355, null], [17355, 18663, null], [18663, 20114, null], [20114, 21382, null], [21382, 22830, null], [22830, 24334, null], [24334, 25685, null], [25685, 26989, null], [26989, 28450, null], [28450, 29942, null], [29942, 31346, null], [31346, 32827, null], [32827, 34153, null], [34153, 35413, null], [35413, 36855, null], [36855, 38300, null], [38300, 39091, null], [39091, 40303, null], [40303, 41516, null], [41516, 42842, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42842, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42842, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42842, null]], "pdf_page_numbers": [[0, 188, 1], [188, 326, 2], [326, 1093, 3], [1093, 1694, 4], [1694, 2980, 5], [2980, 4217, 6], [4217, 5701, 7], [5701, 7170, 8], [7170, 8580, 9], [8580, 9942, 10], [9942, 11338, 11], [11338, 12949, 12], [12949, 14510, 13], [14510, 15962, 14], [15962, 17355, 15], [17355, 18663, 16], [18663, 20114, 17], [20114, 21382, 18], [21382, 22830, 19], [22830, 24334, 20], [24334, 25685, 21], [25685, 26989, 22], [26989, 28450, 23], [28450, 29942, 24], [29942, 31346, 25], [31346, 32827, 26], [32827, 34153, 27], [34153, 35413, 28], [35413, 36855, 29], [36855, 38300, 30], [38300, 39091, 31], [39091, 40303, 32], [40303, 41516, 33], [41516, 42842, 34]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42842, 0.06278]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
b5178566c95ca77f5cee4ffa99e9cd8f3f4ca456
[REMOVED]
{"len_cl100k_base": 5006, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 26247, "total-output-tokens": 6758, "length": "2e12", "weborganizer": {"__label__adult": 0.0003962516784667969, "__label__art_design": 0.0023479461669921875, "__label__crime_law": 0.0004100799560546875, "__label__education_jobs": 0.0011959075927734375, "__label__entertainment": 0.00014388561248779297, "__label__fashion_beauty": 0.00019919872283935547, "__label__finance_business": 0.0002410411834716797, "__label__food_dining": 0.00036454200744628906, "__label__games": 0.0006241798400878906, "__label__hardware": 0.0012454986572265625, "__label__health": 0.0005211830139160156, "__label__history": 0.0004529953002929687, "__label__home_hobbies": 0.00011229515075683594, "__label__industrial": 0.0005459785461425781, "__label__literature": 0.0004622936248779297, "__label__politics": 0.00023615360260009768, "__label__religion": 0.0005655288696289062, "__label__science_tech": 0.10858154296875, "__label__social_life": 0.00010019540786743164, "__label__software": 0.0262908935546875, "__label__software_dev": 0.85400390625, "__label__sports_fitness": 0.00021159648895263672, "__label__transportation": 0.000553131103515625, "__label__travel": 0.000263214111328125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26283, 0.02264]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26283, 0.60765]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26283, 0.88067]], "google_gemma-3-12b-it_contains_pii": [[0, 3080, false], [3080, 6127, null], [6127, 8019, null], [8019, 9309, null], [9309, 9466, null], [9466, 12599, null], [12599, 15517, null], [15517, 15706, null], [15706, 18425, null], [18425, 18887, null], [18887, 20377, null], [20377, 20457, null], [20457, 22137, null], [22137, 23615, null], [23615, 26283, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3080, true], [3080, 6127, null], [6127, 8019, null], [8019, 9309, null], [9309, 9466, null], [9466, 12599, null], [12599, 15517, null], [15517, 15706, null], [15706, 18425, null], [18425, 18887, null], [18887, 20377, null], [20377, 20457, null], [20457, 22137, null], [22137, 23615, null], [23615, 26283, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26283, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26283, null]], "pdf_page_numbers": [[0, 3080, 1], [3080, 6127, 2], [6127, 8019, 3], [8019, 9309, 4], [9309, 9466, 5], [9466, 12599, 6], [12599, 15517, 7], [15517, 15706, 8], [15706, 18425, 9], [18425, 18887, 10], [18887, 20377, 11], [20377, 20457, 12], [20457, 22137, 13], [22137, 23615, 14], [23615, 26283, 15]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26283, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
da133d89b36bc734e32f12996c4adc54a1859cb0
Implementation of a Visual Modeling Tool for Defining Instance Aspect in Workflow

Jianxun Liu, Zefeng Zhu, Yiping Wen
Key Lab of Knowledge Processing and Networked Manufacture, Hunan University of Science and Technology, Xiangtan, Hunan, 41201, China
Ljx529@gmail.com, zzfking@yahoo.com.cn

Jinjun Chen
Faculty of Information and Communication Technologies, Swinburne University of Technology, Melbourne, Australia 3122
jinjun.chen@gmail.com

Abstract—An instance-aspect oriented workflow management system vertically combines multiple workflow activity instances and submits them for execution as a whole, according to batch or combination logics. It is inspired by the aspect-oriented programming methodology and aims at improving the execution efficiency of business processes. Traditional workflow systems do not support workflow models with instance aspects. In our previous work, we studied workflow instance modeling technology. This paper investigates the principles, methods and implementation of a visual GUI tool for modeling instance aspects in workflow. The tool is based on an open source GUI tool, Together Workflow Editor, which it extends with instance aspect functionality.

Keywords - Workflow; Visual Modeling Tool; Instance Aspect

I. INTRODUCTION

Workflow is the automation of business processes, in whole or in part [8,9,17]. A workflow management system (WfMS) is a computer system that defines and manages a series of tasks within an organization to produce a final outcome or outcomes. WfMSs allow different workflows to be defined for different types of jobs or processes [17]. A workflow model is the abstract representation of workflow processes. A WfMS has several core components, such as a computerized workflow model representation (XPDL), a visual modeling tool for defining workflow models, and a workflow engine for the enactment of workflow process instances. There are many projects, products and publications about workflow models and WfMSs [1-16]. However, most current workflow models are concerned only with separating process logic from functional logic in information systems. The process-logic view focuses on the static and horizontal connections or relations between functional components in information systems, or departments in an organization. It does not further separate, from the process logic, the batch or combination logic spanning multiple workflow instances within a WfMS - what we call instance aspect modeling - in order to improve execution efficiency. Instance aspect modeling in workflow concerns the dynamic and vertical modeling of relations between workflow instances. In our previous work [8,9], we investigated a special instance aspect model, batch processing in workflow, and the modeling of instance aspects in workflow using a partial space model. A computerized modeling language and a visual modeling tool are needed for the representation and definition of instance aspects in workflow. Based on XPDL and an open source WfMS, Enhydra Shark, this paper presents a framework and a detailed implementation solution for this problem. We first extend XPDL to represent instance aspects, and then design and implement the visual modeling tool as an extension of the open source product Together Workflow Editor. A case study is analyzed at the end.

II. BASIC PRINCIPLES
A. Visual Modeling Technology

With visual modeling tools such as UML, an object model can be expressed accurately, directly and intuitively, making communication among developers, and between customers and developers, more convenient. The foundation of a visual modeling tool is the graphical representation of an abstract model. Therefore, it must support the following functions:

- Drawing graphics and connecting lines: in the process of workflow modeling, we use activities and the relations between activities to represent a process.
- Undo and redo: GUI editors have to provide functions for withdrawing and restoring edits so that activities can be edited more conveniently.

B. Together Workflow Editor

Together Workflow Editor is the first graphical Java workflow editor fully implementing the WfMC (Workflow Management Coalition) XPDL specifications (XML Process Definition Language). Every WfMC-compliant XPDL file can be viewed, edited and saved, either from a local/remote mapped file system or drive, or via Wf-XML directly from WfMC-compliant workflow engines such as Together Workflow Server / Enhydra Shark, Fujitsu or TIBCO Staffware. Because Together Workflow Editor is easy to operate, it lets people define and check workflow processes quickly.

C. XPDL

XPDL is an XML-based process definition language. The top-level construct in XPDL is a package, which is used as a data container. Figure 1 shows the class diagram of the XPDL model. A package holds entities, connections, references, etc. The package entity contains workflow process definitions, workflow activities, transition information, workflow participants, workflow application definitions, workflow-relevant data, and system and environment data. According to whether an entity can participate directly in process routing, we divide the entities into two types: static entities and dynamic entities. Static entities are not directly involved in the routing of a process; they define property information or static data. Dynamic entities can participate directly in process routing; they define nodes, routers and control conditions. Static entities include the definition header, process definitions, workflow participant definitions, workflow application definitions, workflow-relevant data, system data and data types. Dynamic entities include activities and transition information. Each activity is a logical, self-contained unit; it may be the smallest independent unit, or it may be assembled from a series of smaller independent modules as a sub-process or block. Activities can also be divided into two types: atomic activities, which include normal activities and router activities, and nested activities, which include block activities and sub-workflow activities. In a workflow process, the relationships between activities are implemented by defining control transition information. Each transition contains predecessors, successors and a transition condition.

III. WORKFLOW IA MODEL

A. Theory of IA in Workflow

As a powerful tool for supporting business processes, workflow has attracted considerable attention from both academia and industry. Unfortunately, existing workflow management systems do not provide mechanisms for separating batch or group processing logic from process logic. Based on our previous work on batch processing in workflow [8,9], we use the Aspect-Oriented Programming (hereafter AOP) methodology to extend traditional workflow models so that they can model batch or group processing logics dynamically and vertically between workflow instances.
We abstract the execution requirements or constraints of activity instances to be processed in batch or as a group into an Instance Aspect (hereafter IA), and then embed this IA into the workflow model. Figure 2 shows an extension of the workflow meta-model to support IA modeling. We add some AOP elements, such as Pointcut, PointcutExpression and Behavior, to the traditional workflow model. An IA is a group of activity instances which share the same instance characteristics, such as resource competition, batch processing or service priorities. The Pointcut model follows the idea of AOP: a pointcut is a handling point of an IA. It usually has three styles: Incoming, Outcoming and Conjugated. A Pointcut is usually a tuple consisting of a PointcutExpression, which is a logic expression, and a Join Point, which connects an IA with its Behavior. Behavior corresponds to the Advice in AOP and represents the action logic of the IA.

Figure 2. Batch processing in workflow model

We can see from Figure 2 that the instance-based workflow model consists of two parts, i.e., the traditional workflow model and the IA description. Batch processing in workflow can be regarded as a special kind of IA in the workflow model. For an IA, the assembly and splitting of data flow in the WfMS is the key problem.

Figure 3 is an example of an instance-based workflow, i.e., batch processing. It consists of six activities: "Receive Order", "Refuse order", "Get money", "Cooking", "assign" and "Send". In normal situations, the instances of cooking are independent of each other, i.e., a cooking activity serves only one customer order. However, if many customers order the same dish at the same time, we can group these orders and cook them together, which not only saves resources but also improves efficiency. In other words, several activity cases from multiple instances of the same workflow can be grouped vertically by certain rules, and it is the group that is submitted for execution instead of each individual activity instance. This is the concept of batch processing in workflow as we proposed in [8], and it is a typical example of an IA in workflow.

B. Extension of XPDL

XPDL is the representation language of workflow, but traditional XPDL does not support IA, so we have to extend XPDL to denote IA in workflow. A new IA element is added, whose attributes are shown in Table 1.

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Id</td>
<td>IA id</td>
</tr>
<tr>
<td>Name</td>
<td>Name of IA</td>
</tr>
<tr>
<td>Type</td>
<td>Type of operation of the IA</td>
</tr>
<tr>
<td>Operation</td>
<td>Operations of the IA</td>
</tr>
</tbody>
</table>

Table 1. Attributes of the XPDL extension

In Together Workflow Editor, XPDL elements are managed by classes such as XMLComplexChoice, so we need to extend this component accordingly.
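For illustration, a minimal Java sketch of such an IA element follows (the class and method names are hypothetical and do not reproduce the actual Together Workflow Editor API); it carries the four attributes of Table 1 and serializes itself as an XPDL extension element:

```java
/** Hypothetical sketch of an IA element with the attributes of Table 1. */
public final class IAElement {
    private final String id;        // IA id
    private final String name;      // name of the IA
    private final String type;      // type of operation, e.g. "batch"
    private final String operation; // operation primitive to invoke

    public IAElement(String id, String name, String type, String operation) {
        this.id = id;
        this.name = name;
        this.type = type;
        this.operation = operation;
    }

    /** Emits the extension element the modeling tool writes into the XPDL file. */
    public String toXPDL() {
        return String.format(
            "<IA Id=\"%s\" Name=\"%s\" Type=\"%s\" Operation=\"%s\"/>",
            id, name, type, operation);
    }
}
```

For the cooking example of Figure 3, such an element might read `new IAElement("ia1", "Cooking", "batch", "group-by-dish").toXPDL()`, where the operation name is again only illustrative.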
IV. DESIGN OF VISUAL MODELING TOOL FOR IA IN WORKFLOW

A. General Framework

The general framework of a visual modeling tool for IA in workflow is shown in Figure 4. It consists of four parts: the User Interface (hereafter UI) layer, the control layer, the model layer and the XPDL storage layer. In the UI, we make use of Together Workflow Editor's UI layer and add some new interfaces, e.g., the IA node definition interface. The control layer monitors events in the UI and creates the node objects requested by users. The event monitor builds on the standard Java event mechanism and extends some events so that the tool can monitor them more completely. When the event monitor catches an event, it transfers the event parameters to the event processing module, which responds to the event. The event processing module contains two major classes: GraphController and Handler. GraphController handles simple events, such as responding to exceptions, while Handler handles concrete editing events such as adding an activity, deleting an activity and setting parameters. Nodes created in the visual modeling tool are shown in the UI right away if there is no need to store them on disk. If the whole process is to be saved to disk, however, the model layer first checks the process: only processes that pass validation are saved; otherwise an exception is thrown.

Figure 4. Framework for workflow modeling tools

The model check follows two principles:

- Verification of the general process: no node in a process can exist independently, and a workflow process must have a "Start" activity and an "End" activity; otherwise an exception is thrown.
- Verification of IA in workflow: an IA node object can only be used by a process object; it cannot exist independently.

The main function of the XPDL storage layer is importing and exporting XPDL documents. Only processes that have passed validation are exported.
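A minimal Java sketch of the two model-check principles follows (the types and names are hypothetical, not the tool's actual classes):

```java
import java.util.List;

/** Sketch of the model-layer checks: a process must contain Start and End
 *  activities, and an IA node must belong to a process. */
final class ModelChecker {
    static void verify(List<Activity> process) {
        boolean hasStart = process.stream().anyMatch(a -> a.kind == Activity.Kind.START);
        boolean hasEnd   = process.stream().anyMatch(a -> a.kind == Activity.Kind.END);
        if (!hasStart || !hasEnd) {
            throw new IllegalStateException("process must have Start and End activities");
        }
        for (Activity a : process) {
            if (a.kind == Activity.Kind.IA && a.owningProcess == null) {
                throw new IllegalStateException("IA node " + a.id + " cannot exist independently");
            }
        }
    }
}

final class Activity {
    enum Kind { START, END, NORMAL, IA }
    final String id;
    final Kind kind;
    final Object owningProcess;  // the process object using this node, if any
    Activity(String id, Kind kind, Object owningProcess) {
        this.id = id; this.kind = kind; this.owningProcess = owningProcess;
    }
}
```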
B. Extension of Entities

The entity relationships of Together Workflow Editor are shown in Figure 5. Each process package has one or more processes, participants, process-relevant data and applications. Participants, process-relevant data and applications can be defined in the package or in a process. A process also defines one or more activities, activity sets, and connecting arcs. An activity set is a set of activities, i.e., a composition of a number of activities. However, the structure of the original tool does not include IA activity nodes, so we extend Together Workflow Editor's activity. After the extension, nodes are divided into four types: block activities, sub-processes, IA activities and routing activities.

C. Extension of Functions

Figure 6 is the function module diagram of the IA workflow visual modeling tool. Since the top entity of a workflow is the XPDL package, in our design the XPDL management module and the XPDL visualization module are the top function modules. The main job of XPDL visual management is the dragging and dropping of activities in the GUI. The process management module includes addition, modification and deletion functions. Compared with an ordinary visual modeling tool, the IA visual modeling tool adds an IA entity to the activity definition module. IA activities are responsible for the parts of the model with the same or similar function.

V. IMPLEMENTATION OF VISUAL MODELING TOOL FOR IA WORKFLOW

A. Class Diagram of Model Layer

The implementation of the whole system must start from the model layer, because the model layer is essential in the MVC three-tier structure and has an important impact on the system's operating efficiency. The function modules introduced above form the basis of the model layer, which is mainly used for model checking. The class diagram of the entire model layer is shown in Figure 7; it includes three parts: the basic process entities, the verification part and the IA handling part. The basic process entities include all the entity types necessary in a process; we add an IA entity to this part. The class Verification checks the workflow process. XPDLBase is responsible for the import and export of XPDL documents. IAHandle extends IAHandleAPI and is responsible for the invocation of operation primitives.

B. Implementation of Control Layer

We have mentioned GraphController and Handler in the introduction of the visual modeling tool's framework. GraphController in the control layer is responsible for handling simple events, such as exceptions. Handler is responsible for dealing with process events, such as node creation, deletion, and modification of parameters; it encapsulates all data operations and realizes part of the model functionality. The workflow modeling tool implements many Handler functions; the most important are adding, deleting and moving process entities on the canvas, and the storage and modification of data. Table II shows some handlers and their functions.

<table>
<thead>
<tr><th>Entity</th><th>Handler</th><th>Function</th><th>Entity</th><th>Handler</th><th>Function</th></tr>
</thead>
<tbody>
<tr><td>Process</td><td>ProgressChangeId</td><td>Change process id</td><td>Activity</td><td>SetStartMode</td><td>Set start mode</td></tr>
<tr><td></td><td>ProgressChangeName</td><td>Change process name</td><td></td><td>SetFinishMode</td><td>Set finish mode</td></tr>
<tr><td></td><td>ProgressChangeLevel</td><td>Change process level</td><td></td><td>CopyActivity</td><td>Copy activity</td></tr>
<tr><td></td><td>AddNewProgress</td><td>Add new process</td><td></td><td>DeleteActivity</td><td>Delete activity</td></tr>
<tr><td></td><td>DeleteProgress</td><td>Delete process</td><td></td><td>PasteActivity</td><td>Paste activity</td></tr>
<tr><td></td><td>CopyProgress</td><td>Copy process</td><td></td><td></td><td></td></tr>
<tr><td></td><td>PasteProgress</td><td>Paste process</td><td></td><td></td><td></td></tr>
<tr><td>Activity</td><td>AddActivity</td><td>Add activity</td><td>Transition</td><td>AddTransition</td><td>Add transition</td></tr>
<tr><td></td><td>ChangeActivityId</td><td>Change activity id</td><td></td><td>ChangeTransitionId</td><td>Change transition id</td></tr>
<tr><td></td><td>ChangeActivityName</td><td>Change activity name</td><td></td><td>ChangeTransitionName</td><td>Change transition name</td></tr>
<tr><td></td><td>SetActivityPerformer</td><td>Change performer</td><td></td><td>DeleteTransition</td><td>Delete transition</td></tr>
<tr><td></td><td></td><td></td><td></td><td>SetTransitionFrom</td><td>Set transition source</td></tr>
<tr><td></td><td></td><td></td><td></td><td>SetTransitionTo</td><td>Set transition target</td></tr>
</tbody>
</table>

Table II. Handlers and their functions

C. Implementation of IA Functions

In the implementation of the workflow visual modeling tool, the main work is to implement the IA node. The IA node must be well defined and given a corresponding XPDL encapsulation, so that the system generates the XPDL file automatically after the IA node has been invoked; the structure of the IA functions is shown in Figure 8. The class IAHandle is the core of IA processing. It extends IAHandleAPI and implements the control of IA processing. Different instance activities call different operation primitives; each type of IA activity extends the basic class ElementXPDL, and all encapsulate the corresponding XPDL labels, so that a visual operation is able to generate the corresponding XPDL elements.

VI. CASE STUDY

We use the visual modeling tool to define the workflow process with IA shown in Figure 3. The GUI definition is shown in Figure 9. The steps for defining the workflow process are as follows:

- a) Start the modeling tool.
- b) Create a new process package, create a new process in the package, and set a name for this process.
- c) Draw the process in the canvas area.
- d) Set the IA parameters; in this case, cooking is the IA node.
- e) Save the XPDL document generated by the system.
Figure 8. Architecture of the IA functions

Figure 9. Configuration of properties for an IA node

VII. CONCLUSION

In our previous work, we studied IA modeling, especially batch processing, in workflow. A GUI tool is needed to define workflow models with IA features. In this paper, we investigate the principles, methods and implementation of a workflow visual modeling tool for defining IA in workflow. We first extend XPDL to represent IA and then design and implement the visual modeling tool, which is an extension of the open source product Together Workflow Editor. The general framework of the tool, the extension to Together Workflow Editor and the implementation are described in detail. Finally, a case study is analyzed.

ACKNOWLEDGEMENT

This work was supported by the National Basic Research Program of China (973 Plan) under grant no. 2003CB317007, and the Natural Science Foundation of China under grants no. 60673119 and 90818004.

REFERENCES

1. K. Bill, D. Panagiotakis, F. George. Workflow Requirements Modeling Using XML. Requirements Engineering, 2002: 124-138.
{"Source-Url": "https://researchbank.swinburne.edu.au/file/019bc004-b906-4436-886a-eda0c133a665/1/PDF%20(Published%20version).pdf", "len_cl100k_base": 4098, "olmocr-version": "0.1.48", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17215, "total-output-tokens": 5285, "length": "2e12", "weborganizer": {"__label__adult": 0.00022268295288085935, "__label__art_design": 0.0003314018249511719, "__label__crime_law": 0.00026226043701171875, "__label__education_jobs": 0.00098419189453125, "__label__entertainment": 5.048513412475586e-05, "__label__fashion_beauty": 0.00011748075485229492, "__label__finance_business": 0.00034427642822265625, "__label__food_dining": 0.00023233890533447263, "__label__games": 0.0003209114074707031, "__label__hardware": 0.0006923675537109375, "__label__health": 0.0003528594970703125, "__label__history": 0.0001951456069946289, "__label__home_hobbies": 7.051229476928711e-05, "__label__industrial": 0.0003924369812011719, "__label__literature": 0.00017511844635009766, "__label__politics": 0.00017535686492919922, "__label__religion": 0.0002837181091308594, "__label__science_tech": 0.034027099609375, "__label__social_life": 8.285045623779297e-05, "__label__software": 0.0181121826171875, "__label__software_dev": 0.94189453125, "__label__sports_fitness": 0.00018608570098876953, "__label__transportation": 0.0003387928009033203, "__label__travel": 0.0001474618911743164}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23029, 0.01869]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23029, 0.31746]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23029, 0.87873]], "google_gemma-3-12b-it_contains_pii": [[0, 5281, false], [5281, 7798, null], [7798, 11724, null], [11724, 14664, null], [14664, 19274, null], [19274, 23029, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5281, true], [5281, 7798, null], [7798, 11724, null], [11724, 14664, null], [14664, 19274, null], [19274, 23029, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23029, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23029, null]], "pdf_page_numbers": [[0, 5281, 1], [5281, 7798, 2], [7798, 11724, 3], [11724, 14664, 4], [14664, 19274, 5], [19274, 23029, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23029, 0.17073]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
0ee391efcf1c8144ff68c66db3c05fb2a7eb1937
Extended Scaled Neural Predictor for Improved Branch Prediction

Zihao Zhou, Mayank Kejriwal, and Risto Miikkulainen

Abstract—A perceptron-based scaled neural predictor (SNP) was implemented to emphasize the most recent branch histories via the following three approaches: (1) expanding the size of the tables that correspond to recent branch histories, (2) scaling the branch histories so as to increase the weights of the most recent histories and decrease those of the old histories, and (3) expanding the most recent branch histories over the whole history path. Furthermore, the hash mechanisms and the saturating value for adjusting the threshold were tuned to achieve the best prediction accuracy in each case. The resulting extended SNP was tested on well-known floating point and integer benchmarks using the SimpleScalar 3.0 simulator. While different features have different impact depending on whether the benchmark is floating point or integer, overall such a well-tuned predictor achieves an improved prediction rate compared to prior approaches.

I. INTRODUCTION

Dynamic branch prediction is a fundamental component of modern computer architecture design for achieving high performance. A branch in machine code is essentially analogous to an if-statement in high-level code. When a branch is first encountered, it may not be possible to decide whether or not it should be taken: the code is executed in multiple pipelined stages, and the needed information may not be available yet. Instead of stalling, however, a pipelined processor uses branch prediction to predict the outcome of a branch, and pre-fetches and executes instructions on the path of the predicted decision. The more accurate such branch prediction is, the more likely such speculation is to be useful. Accurate branch prediction is therefore essential to facilitate instruction-level parallelism and better performance [1].

Most research in the 1990s focused on branch predictors based on the two-level adaptive scheme [2]. Two-level predictors make predictions from previous branch histories stored in a pattern history table (PHT) of two-bit saturating counters. The table is indexed by a global history shift register that stores the outcomes of previous branches. This scheme led to a series of subsequent works focused on eliminating aliases [3]-[5]. However, all such improvements were within the framework of the existing prediction mechanism. Machine learning techniques, on the other hand, offer the possibility of further improvement in the prediction mechanism itself. Jimenez and Lin [6] proposed to use fast perceptrons [7] instead of a PHT. Compared to other artificial neural networks that are able to fit high-dimensional non-linear data, perceptrons are easier to understand and implement, faster to train, and computationally economical. In particular, they work well with linearly separable branches, which cover a significant number of branches of practical interest. Furthermore, perceptron-based predictors can take advantage of longer histories than traditional saturating counters. Improvements in perceptron-based predictors have thus become an active area of research. For example, more complicated mechanisms such as expanded branch histories, path histories, separate storage of weights, and different training methods were introduced [8]-[12]. In this paper, several different approaches proposed in the literature are combined to improve the prediction rates.
In particular, the effects of using different saturating numbers to change the threshold [10] and different ways of using the history of branch addresses to form a path history [11] are considered, along with the use of expanded history [12] and the scaling of branch histories by coefficients [12]. Although previous works proposed the models and methods of perceptron-based branch predictors in sufficient detail, they did not address how different parameter values or mechanism choices influence the prediction rates, and to date this issue has not been investigated empirically. This paper aims at bridging this gap by using a wide range of both integer and floating point benchmarks. The above techniques are implemented in a Scaled Neural Predictor (SNP) [12], and thoroughly evaluated and tested on a well-known open-source simulator, SimpleScalar 3.0 [13]. Conclusions are then drawn on the recommended choices for each mechanism for both floating point and integer tasks, in order to optimize the performance of the SNP. The paper shows that adopting the recommendations can lead to significant improvements in branch prediction rates. The SNP offers a crucial advantage over other digital neural predictors in that many of its crucial components can be implemented using analog circuitry. Since branch prediction is primarily hardware oriented, practical branch predictors have to be competitive in a hardware implementation. A reasonable implementation and the efficiency gains are discussed in detail in the original SNP paper [12].

The remainder of the paper is organized as follows. Section II briefly introduces the mechanism of perceptron-based branch predictors and techniques for their improvement. Section III presents the design and implementation of the predictors based on SimpleScalar [13]. Section IV presents a comparison between the different choices of parameters, as well as an analysis of the results on integer and floating point benchmarks.

II. BACKGROUND ON NEURAL BRANCH PREDICTORS

A perceptron is a vector of $h + 1$ small integer weights, where $h$ is the history length of the predictor. A branch history is a list of 1 (taken) and -1 (not taken) values of length $h$, referring to the most recent $h$ branches. The first $h$ weights, called the correlation weights, are in one-to-one correspondence with the $h$ branches; the $i$th weight is understood as the influence of the $i$th branch on the next prediction, and the last weight is the bias. A table of $n$ rows is used to store the weights of $n$ perceptrons, with each row containing the $h + 1$ weights of one perceptron. Given the branch program counter (PC), the address is mapped to one row of the table by a hash function, for example modulo $n$, and the weights of that perceptron are dot-multiplied with the branch history. The resulting value plus the bias is the prediction output, $y = w_0 + \sum_{i=1}^{h} w_i x_i$ with $x_i \in \{-1, 1\}$: if the value is no less than zero, the branch is predicted taken, and not taken otherwise. This process is illustrated by Fig. 1.

Whenever a misprediction is made, an update procedure is triggered, changing the weights of the perceptron. The bias is incremented if the branch was actually taken and decremented if it was not, and each correlation weight is increased if the corresponding value in the branch history agrees with the actual outcome and decreased otherwise, i.e., $w_i \leftarrow w_i + t x_i$, where $t$ is 1 for taken and -1 for not taken. As an example, if the result of the branch prediction in Fig. 1 (taken) is incorrect, the weight update is triggered once the true result is known.
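A minimal sketch of this mechanism follows (Java; the table size, history length and names are illustrative):

```java
/** Basic perceptron branch predictor of Section II. History bits are
 *  +1 (taken) / -1 (not taken); one row of weights per perceptron. */
final class PerceptronPredictor {
    private final int n;        // number of perceptrons (table rows)
    private final int h;        // history length
    private final int[][] w;    // w[row][0] is the bias; w[row][1..h] are correlation weights

    PerceptronPredictor(int n, int h) {
        this.n = n;
        this.h = h;
        this.w = new int[n][h + 1];
    }

    /** Perceptron output y; the branch is predicted taken iff y >= 0. */
    int output(int pc, int[] hist) {
        int[] row = w[Math.floorMod(pc, n)];   // hash: simple modulo
        int y = row[0];                        // bias
        for (int i = 0; i < h; i++) y += row[i + 1] * hist[i];
        return y;
    }

    /** Update on a misprediction: t = +1 if the branch was taken, -1 if not. */
    void train(int pc, int[] hist, int t) {
        int[] row = w[Math.floorMod(pc, n)];
        row[0] += t;                           // bias moves toward the actual outcome
        for (int i = 0; i < h; i++) row[i + 1] += t * hist[i];
    }
}
```

A single such update is enough to flip the example prediction of Fig. 1, which is exactly the behavior shown in Fig. 2.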
As shown in Fig. 2, the same branch will now be predicted as not taken after this single update.

Recent works have focused on improving the basic perceptron predictor for better accuracy. Jimenez [11] suggested using the path, i.e. the history of branch addresses, and proposed a path-based predictor in which the weights are accessed as a function of the PC and the path. The benefit is that the predictor can correlate not only with the pattern history, but also with the path history. Seznec [9] suggested that breaking the weights into a number of independently accessible tables, rather than keeping them in a single table with $h$ columns, correlates the branch history with the perceptron weights better. St. Amant, Jimenez and Burger [12] proposed using coefficients to scale the weights, motivated by the practical observation that more recent branching behavior should have more influence on predictions in the near future. They also suggested using an expanded history, which contains a history of 128 bits repeatedly selected from a branch history of 40 bits. For training perceptrons, Seznec [10] proposed an adaptive threshold training algorithm in which the weight update is triggered not only when the prediction is incorrect, but also when the perceptron output is less than a threshold.

III. DESIGN AND IMPLEMENTATION

In this paper, the original scaled perceptron-based branch predictor, SNP [12], is implemented in SimpleScalar 3.0 [13]. The techniques discussed in Section II and incorporated in the original SNP were implemented, including the update procedure invoked on mispredictions, the adaptive threshold, the coefficient scaling of weights that gives greater preference to recent branches, and the use of both the path and the history of branch addresses. The relative effects of each of these techniques were then investigated to determine which of them yield the most significant improvements, and under what parameter settings. SimpleScalar 3.0 is a system software infrastructure that is widely deployed commercially and in academic research for program performance analysis, detailed micro-architectural modeling, and hardware-software co-verification [13].

Each perceptron in the implementation has 128 correlation weights, stored in 16 tables, each having eight columns containing eight weights ranging from -64 to 63. The first table has 512 rows, because the most recent weights are the most important, and all the others have 256 rows. Bias weights are stored in a vector of length 2048. All correlation weights in the perceptron tables are initialized to zero. Seznec [10] showed that good performance was achieved with bias weights initialized to $2.14 \times (\text{expanded history} + 1) + 20.58$. Here, expanded history refers to the number of the most recent branch addresses that are considered by the predictor. The bias weights in this paper's implementation were initialized in accordance with the above expression, with the expanded history set to 128, for all experiments that were conducted. To index each table, the branch PC is hashed to 11 bits. Eight of the 11 bits are XORed with a portion of the path array $A$, resulting in an eight- or nine-bit index into one of the 256 rows (or 512 rows for the first table). The whole 11 bits are also used to index one of the 2048 bias weights. The indexed correlation weight vector is dot-multiplied with the expanded history and subsequently added to the indexed bias weight; this final value then yields the prediction.
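The indexing scheme can be sketched as follows (Java; the constants come from the text, but the hash details, in particular how the ninth index bit for the 512-row table is obtained, are simplifying assumptions):

```java
/** Sketch of the SNP table layout and indexing: 16 weight tables
 *  (the first with 512 rows, the rest with 256, eight weights per row)
 *  plus a vector of 2048 bias weights. */
final class SnpIndexing {
    static final int TABLES = 16;
    final int[][] table = new int[TABLES][];   // table[k][row * 8 + col]
    final int[] bias = new int[2048];

    SnpIndexing() {
        table[0] = new int[512 * 8];
        for (int k = 1; k < TABLES; k++) table[k] = new int[256 * 8];
    }

    /** expandedHist has 128 entries of +1/-1; pathA[k] is the eight-bit
     *  path segment A_k for table k. Predict taken iff the result >= 0. */
    int predictValue(int pc, int[] expandedHist, int[] pathA) {
        int pcBits = pc & 0x7FF;                   // branch PC hashed to 11 bits
        int y = bias[pcBits];                      // all 11 bits index the bias vector
        for (int k = 0; k < TABLES; k++) {
            int rows = table[k].length / 8;
            int idx = (pcBits ^ pathA[k]) % rows;  // PC bits XORed with A_k
            for (int col = 0; col < 8; col++) {
                y += table[k][idx * 8 + col] * expandedHist[k * 8 + col];
            }
        }
        return y;
    }
}
```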
As described in Section II, an adaptive threshold training algorithm is used: the weights are updated not just on mispredictions, but also if the calculated dot product is below a threshold that changes dynamically as the program executes. The change in the threshold is controlled by a saturating counter, which records the number of consecutive correct or consecutive incorrect predictions. This counter increments every time a correct prediction is made and decrements otherwise. Unlike the correlation weight update, the threshold is only changed once the magnitude of this counter reaches a pre-set maximum value. The algorithm that triggers the modification of the threshold is illustrated in Fig. 3; the pre-set maximum value shown in the figure is 32. Experiments with different pre-set maximum values were conducted to find the best frequency for changing the threshold; the results are discussed in Section IV-A.

In the implementation discussed in this paper, an expanded path history is used. The addresses of recent branches are hashed into 128 bits, forming the so-called path history, stored in an array $A[128]$. The algorithm uses an eight-bit sliding window to obtain the segments $A[0...7]$, $A[8...15]$, ..., $A[120...127]$ by moving the sliding window 16 times over the recent branch histories. Only one bit of each branch address is selected. The moving distance of the sliding window can be one bit, two bits, four bits, or eight bits, thus allowing different numbers of branch addresses to be used in generating the path array. The algorithm for generating the eight-bit segment $A_k$ for the $k$th table is illustrated in Fig. 4; the coefficient before $k$ controls the moving distance of the sliding window. If the moving distance is one bit each time, then only the most recent branch addresses are used. If the moving distance is eight bits, however, no repeated history addresses are used, and a total of 128 different branch addresses are taken into account. The influence of different moving distances on prediction accuracy is discussed in Section IV-B.
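The sliding-window generation of the path segments can be sketched as follows (Java; the bit-selection and packing details are assumptions consistent with the description of Fig. 4):

```java
/** Sketch of the sliding-window path generation of Fig. 4. pathBits[j]
 *  is one selected bit, e.g. (addr >> 2) & 1 (the third lowest bit), of
 *  the j-th most recent branch address; d is the moving distance. */
final class PathWindow {
    /** Packs the eight bits pathBits[d*k .. d*k+7] into the segment A_k. */
    static int segment(int[] pathBits, int d, int k) {
        int a = 0;
        for (int j = 0; j < 8; j++) {
            a = (a << 1) | pathBits[d * k + j];
        }
        return a;   // XORed with the PC bits to index table k
    }

    /** All 16 segments. With d = 8 they cover 128 distinct branch
     *  addresses; with d = 1 only the 23 most recent ones, reused. */
    static int[] allSegments(int[] pathBits, int d) {
        int[] a = new int[16];
        for (int k = 0; k < 16; k++) {
            a[k] = segment(pathBits, d, k);
        }
        return a;
    }
}
```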
Renee, Jimenez and Burger [12] proposed that the branch history be stored in a 40-bit global branch history register $H$ and then expanded into 128 bits by repeatedly selecting bits from this register. They reported that this expanded history, hashed from the most recent 40 branches to 128 bits, leads to better prediction rates. The claim is evaluated in this paper by comparing a 128-bit expanded history with a 128-bit ordinary history; the results are discussed in Section IV-D.

More recent history bits have a larger correlation with the current prediction. To emphasize them, the recent history bits are scaled to larger values while the effect of older bits is reduced: each bit in the expanded history is multiplied by a coefficient that stresses the stronger influence of more recent branches on future predictions. In particular, Renee, Jimenez and Burger [12] obtained from experiments the relationship between the coefficient and the $i^{th}$ bit of the expanded history: $1/(0.1111+0.037i)$. In their implementation, they placed an upper bound of one on the coefficients, i.e. the coefficient is $1/(0.1111 + 0.037i)$ when this value is smaller than one, and one otherwise. In this paper, experiments are conducted to investigate the effect of scaling coefficients by assigning coefficients without bound, assigning coefficients with an upper bound of one, and not assigning any coefficients. The results are discussed in Section IV-E.

It should be noted that these experiments do not use the traditional training-then-prediction paradigm of machine learning, in which the perceptron is first trained off-line on a set of training samples until the weights converge and is then used for static prediction. Instead, an on-line training and prediction approach is used, wherein the weight update procedure is invoked every time the prediction is wrong or the calculated dot product falls below the adaptive threshold, as detailed in Section II. This is in contrast to a two-stage system, where the first stage only involves training and the second testing or evaluation. The branch predictor is therefore not static and can change continuously as the program runs. During training and prediction, an out-of-order simulator is employed, meaning that the simulated CPU may execute instructions ahead of program order. One hundred million instructions are used for each round of testing.

IV. EXPERIMENTAL RESULTS

In this section, comparisons of prediction rates under different configurations of the predictor are presented and the influence of the parameters on prediction accuracy is studied. All figures in this section have prediction accuracy on the $y$-axis, scaled to a range between zero and one. Prediction accuracy is defined as the ratio of the number of successful predictions to the total number of predictions made by the predictor.

A. Saturating Counter for Changing Threshold

Based on the algorithm in Fig. 3, saturating numbers for the saturating counter are chosen to be 0 (the basic training method), 1, 16, 32, or 64. The third lowest bit of all 128 branch addresses is used to form the path array $A$, and scaling coefficients are applied to the branch histories. Both integer and floating point benchmarks are tested, as shown in Fig. 5. The results show that using a saturating number larger than zero generally leads to better results for integer benchmarks and for most floating point benchmarks. For integer benchmarks, there is no significant difference as long as the saturating number is larger than zero. For floating point benchmarks, a saturating number of 32 generally yields better results according to the geometric mean over the benchmarks, indicating that the threshold should be adapted after every 32 consecutive correct or consecutive incorrect predictions. For general predictor design, therefore, setting the saturation value to 32 is recommended, since this value maximizes branch prediction accuracy.

Fig. 5. Integer benchmark results (a) and floating point benchmark results (b) for different saturating numbers.
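A minimal sketch of the saturating-counter mechanism of Fig. 3, under one plausible reading: the text above specifies only the counter behavior and the trigger condition, so the direction of each threshold adjustment (sustained correct streaks lower the threshold, sustained incorrect streaks raise it) and the initial threshold value are assumptions of this sketch.

```python
SAT_MAX = 32                        # pre-set maximum value from Fig. 3

class ThresholdAdapter:
    """Adaptive training threshold driven by a saturating counter."""
    def __init__(self, theta=20):
        self.theta = theta          # current training threshold (assumed start)
        self.counter = 0            # +1 on correct, -1 on incorrect

    def record(self, correct):
        self.counter += 1 if correct else -1
        if self.counter >= SAT_MAX:        # long streak of correct predictions
            self.theta = max(0, self.theta - 1)
            self.counter = 0
        elif self.counter <= -SAT_MAX:     # long streak of incorrect predictions
            self.theta += 1
            self.counter = 0
```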
B. Moving Distance of the Sliding Window

The second experiment varies the moving distance $d$ of the sliding window used to generate the path array. The results empirically confirm the proposal of Jimenez [11] that taking address path patterns into account, along with branch history patterns, leads to better prediction rates: using paths is always much better than not using paths. The bars in the corresponding results figure represent different values of $d$, the moving distance. For integer benchmarks, maximum accuracy (on average) is achieved when $d$ equals two bits; for floating point benchmarks, it is achieved when $d$ equals four bits. However, the results also show that taking more distinct branch addresses into account does not necessarily improve prediction rates. For integer benchmarks a moving distance of two bits yields the best results, and for floating point benchmarks a moving distance of four bits proves superior to the suggestion of Renee, Jimenez and Burger [12], who use a moving distance of eight bits and hence 128 different branch addresses. This result may be explained as follows: the most recent branches carry enough information for future predictions, and branches from further in the past are not only unhelpful but may even increase future mispredictions. In conclusion, recent branches have the more positive influence and should be emphasized when making future predictions.

C. Different Hashing Schemes

As mentioned earlier, when generating the path array $A$, only one bit of each branch address is chosen, so it is crucial to choose the most representative bit from each address. Different low-order bits, or combinations of them, are used to form the path. In particular, the third lowest bit, the combination of the fourth and third lowest bits, and the second lowest bit are used, as shown in Fig. 7. The results clearly indicate that some bits or bit combinations are more representative than others: hashing the second lowest bit performs worst, as expected when instructions are stored by word. Hashing the third lowest bit is roughly similar to combining the third and fourth lowest bits for integer benchmarks, but leads to a 2% improvement for floating point benchmarks. It is therefore not necessarily the case that a combination provides more information than a single bit, and the architectural cost of such combinations could be non-trivial. For general design, hashing the third lowest bit is recommended.

D. Expanded Branch History vs. Ordinary History

The fourth experiment used an expanded branch history of 128 bits hashed repeatedly from the outcomes (taken or not taken) of the most recent 40 branches, and compared it with an ordinary history containing the outcomes of the most recent 128 branches. The results are illustrated in Fig. 8. They show that the expanded history achieves roughly the same accuracy as the ordinary history for floating point benchmarks, and slightly worse accuracy for integer benchmarks. The expanded history (40 bits hashed to 128 bits) thus forms a good approximation to a real history of 128 bits. In a general design with a sufficient hardware budget, an ordinary history of 128 bits is recommended, but under a limited budget the expanded history is a good approximation.

E. Scaling Coefficients

Three ways to assign scaling coefficients are considered: (1) $1/(0.1111+0.037i)$, (2) $1/(0.1111+0.037i)$ with an upper bound of one, and (3) no scaling, where all coefficients are set to one. The testing results are shown in Fig. 9.
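For reference, a small sketch computing the three coefficient schemes compared here; the convention that $i = 0$ denotes the most recent history bit is our assumption.

```python
def coefficients(n=128, scheme="unbounded"):
    """Scaling coefficients for the n-bit expanded history.
    i = 0 is taken to be the most recent history bit (an assumption)."""
    if scheme == "none":
        return [1.0] * n                        # scheme (3): no scaling
    coeffs = [1.0 / (0.1111 + 0.037 * i) for i in range(n)]
    if scheme == "capped":
        coeffs = [min(c, 1.0) for c in coeffs]  # scheme (2): upper bound of one
    return coeffs

unbounded = coefficients(scheme="unbounded")    # about 9.0 at i = 0
capped = coefficients(scheme="capped")          # 1.0 up to i = 24, decaying after
flat = coefficients(scheme="none")
```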
The results confirm the suggestion of Renee, Jimenez and Burger [12] that scaling places more emphasis on recent branches and thus generally leads to better prediction rates. No scaling is far worse than the two scaling methods. However, placing an upper bound performs slightly worse than leaving the coefficients unbounded; this is intuitive, because an upper bound limits how strongly the "recent influence" of branches can be expressed. For general design, therefore, placing no upper bound on the coefficients is recommended.

V. CONCLUSION

The SNP had already been shown in prior work to achieve state-of-the-art performance compared with other competitive schemes. In this paper, its original design was implemented in SimpleScalar 3.0, a powerful system software infrastructure that is widely deployed for program performance analysis and detailed microarchitectural modeling. In addition, several modifications to the SNP were implemented based on proposals in prior work. Extensive tests were performed on a wide range of integer and floating point benchmarks, both with and without these modifications. Based on these empirical data, recommendations were made on how the original SNP could be further improved to make branch prediction more accurate; the benchmark results show that adopting these recommendations can lead to significant improvements in performance. A range of further experiments was then performed to verify the SNP's design choices empirically on both integer and floating point benchmarks and to suggest further refinements for better accuracy. Although many of these design choices are justified, better prediction rates can be attained by modifying some of the originally proposed parameters, and some of the modifications depend on whether they are applied to integer or floating point benchmarks. Overall, the extended SNP proved to be a powerful approach and should lead to significant practical applications of neural networks in the future.

REFERENCES
{"Source-Url": "http://www.cs.utexas.edu/users/ai-lab/downloadPublication.php?filename=http%3A%2F%2Fnn.cs.utexas.edu%2Fdownloads%2Fpapers%2Fzhou.ijcnn13.pdf&pubid=127420", "len_cl100k_base": 4481, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 19765, "total-output-tokens": 5545, "length": "2e12", "weborganizer": {"__label__adult": 0.0007395744323730469, "__label__art_design": 0.0012121200561523438, "__label__crime_law": 0.0006060600280761719, "__label__education_jobs": 0.0006656646728515625, "__label__entertainment": 0.00020039081573486328, "__label__fashion_beauty": 0.000362396240234375, "__label__finance_business": 0.0003635883331298828, "__label__food_dining": 0.0004725456237792969, "__label__games": 0.0010995864868164062, "__label__hardware": 0.021697998046875, "__label__health": 0.000926494598388672, "__label__history": 0.0006551742553710938, "__label__home_hobbies": 0.0002765655517578125, "__label__industrial": 0.0015125274658203125, "__label__literature": 0.0003509521484375, "__label__politics": 0.00044035911560058594, "__label__religion": 0.0007781982421875, "__label__science_tech": 0.460205078125, "__label__social_life": 9.363889694213869e-05, "__label__software": 0.00933074951171875, "__label__software_dev": 0.49560546875, "__label__sports_fitness": 0.000591278076171875, "__label__transportation": 0.001560211181640625, "__label__travel": 0.00031185150146484375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24936, 0.01856]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24936, 0.49783]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24936, 0.92206]], "google_gemma-3-12b-it_contains_pii": [[0, 5306, false], [5306, 9132, null], [9132, 13938, null], [13938, 17567, null], [17567, 20580, null], [20580, 23817, null], [23817, 24936, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5306, true], [5306, 9132, null], [9132, 13938, null], [13938, 17567, null], [17567, 20580, null], [20580, 23817, null], [23817, 24936, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24936, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24936, null]], "pdf_page_numbers": [[0, 5306, 1], [5306, 9132, 2], [9132, 13938, 3], [13938, 17567, 4], [17567, 20580, 5], [20580, 23817, 6], [23817, 24936, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24936, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
2bff9e17365f8a1f3fc518098e8e5175432c0bc5
[REMOVED]
{"Source-Url": "http://repositorium.sdum.uminho.pt/bitstream/1822/14931/1/INFORUM-11-a.pdf", "len_cl100k_base": 6698, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 30367, "total-output-tokens": 8027, "length": "2e12", "weborganizer": {"__label__adult": 0.00033164024353027344, "__label__art_design": 0.0002570152282714844, "__label__crime_law": 0.0003767013549804687, "__label__education_jobs": 0.0004274845123291016, "__label__entertainment": 5.257129669189453e-05, "__label__fashion_beauty": 0.00013530254364013672, "__label__finance_business": 0.00014197826385498047, "__label__food_dining": 0.00033092498779296875, "__label__games": 0.0005369186401367188, "__label__hardware": 0.0006709098815917969, "__label__health": 0.0004372596740722656, "__label__history": 0.0001709461212158203, "__label__home_hobbies": 7.236003875732422e-05, "__label__industrial": 0.000335693359375, "__label__literature": 0.0002065896987915039, "__label__politics": 0.00023877620697021484, "__label__religion": 0.0004124641418457031, "__label__science_tech": 0.01248931884765625, "__label__social_life": 7.200241088867188e-05, "__label__software": 0.004604339599609375, "__label__software_dev": 0.9765625, "__label__sports_fitness": 0.0003044605255126953, "__label__transportation": 0.0004429817199707031, "__label__travel": 0.00017881393432617188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33424, 0.01773]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33424, 0.591]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33424, 0.88028]], "google_gemma-3-12b-it_contains_pii": [[0, 2790, false], [2790, 5773, null], [5773, 7699, null], [7699, 10323, null], [10323, 13290, null], [13290, 16454, null], [16454, 19324, null], [19324, 22271, null], [22271, 24499, null], [24499, 27192, null], [27192, 29895, null], [29895, 32890, null], [32890, 33424, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2790, true], [2790, 5773, null], [5773, 7699, null], [7699, 10323, null], [10323, 13290, null], [13290, 16454, null], [16454, 19324, null], [19324, 22271, null], [22271, 24499, null], [24499, 27192, null], [27192, 29895, null], [29895, 32890, null], [32890, 33424, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33424, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33424, null]], "pdf_page_numbers": [[0, 2790, 1], [2790, 5773, 2], [5773, 7699, 3], [7699, 10323, 4], [10323, 13290, 5], [13290, 16454, 6], [16454, 19324, 7], [19324, 22271, 8], [22271, 24499, 9], [24499, 27192, 10], [27192, 29895, 11], [29895, 32890, 12], [32890, 33424, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33424, 0.15111]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
239efeafb777bdf7ccc937ed5addb1c3bc1ff4a1
[REMOVED]
{"Source-Url": "http://repositori.uji.es/xmlui/bitstream/handle/10234/41682/48767.pdf;jsessionid=D190D2493E8356A8BC32A32753B22B1C?sequence=1", "len_cl100k_base": 7254, "olmocr-version": "0.1.53", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 41229, "total-output-tokens": 9441, "length": "2e12", "weborganizer": {"__label__adult": 0.0014247894287109375, "__label__art_design": 0.003406524658203125, "__label__crime_law": 0.0017061233520507812, "__label__education_jobs": 0.003589630126953125, "__label__entertainment": 0.0017032623291015625, "__label__fashion_beauty": 0.0009021759033203124, "__label__finance_business": 0.015350341796875, "__label__food_dining": 0.0013637542724609375, "__label__games": 0.1927490234375, "__label__hardware": 0.004070281982421875, "__label__health": 0.0009436607360839844, "__label__history": 0.001018524169921875, "__label__home_hobbies": 0.00035500526428222656, "__label__industrial": 0.0018930435180664065, "__label__literature": 0.0006613731384277344, "__label__politics": 0.0008873939514160156, "__label__religion": 0.0011920928955078125, "__label__science_tech": 0.07080078125, "__label__social_life": 0.0002696514129638672, "__label__software": 0.0860595703125, "__label__software_dev": 0.60546875, "__label__sports_fitness": 0.0014209747314453125, "__label__transportation": 0.0013751983642578125, "__label__travel": 0.0014047622680664062}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39913, 0.03019]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39913, 0.48181]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39913, 0.92026]], "google_gemma-3-12b-it_contains_pii": [[0, 2114, false], [2114, 5565, null], [5565, 7500, null], [7500, 9527, null], [9527, 11693, null], [11693, 13476, null], [13476, 13608, null], [13608, 15271, null], [15271, 15391, null], [15391, 17496, null], [17496, 18504, null], [18504, 20745, null], [20745, 22433, null], [22433, 25609, null], [25609, 27000, null], [27000, 28680, null], [28680, 29757, null], [29757, 32018, null], [32018, 32525, null], [32525, 34077, null], [34077, 37047, null], [37047, 39913, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2114, true], [2114, 5565, null], [5565, 7500, null], [7500, 9527, null], [9527, 11693, null], [11693, 13476, null], [13476, 13608, null], [13608, 15271, null], [15271, 15391, null], [15391, 17496, null], [17496, 18504, null], [18504, 20745, null], [20745, 22433, null], [22433, 25609, null], [25609, 27000, null], [27000, 28680, null], [28680, 29757, null], [29757, 32018, null], [32018, 32525, null], [32525, 34077, null], [34077, 37047, null], [37047, 39913, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, 
false], [5000, 39913, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39913, null]], "pdf_page_numbers": [[0, 2114, 1], [2114, 5565, 2], [5565, 7500, 3], [7500, 9527, 4], [9527, 11693, 5], [11693, 13476, 6], [13476, 13608, 7], [13608, 15271, 8], [15271, 15391, 9], [15391, 17496, 10], [17496, 18504, 11], [18504, 20745, 12], [20745, 22433, 13], [22433, 25609, 14], [25609, 27000, 15], [27000, 28680, 16], [28680, 29757, 17], [29757, 32018, 18], [32018, 32525, 19], [32525, 34077, 20], [34077, 37047, 21], [37047, 39913, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39913, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
0e986b870ad69df5f8f1d71330e5a41ba911241e
Dynamic Pull-Down Menu Organization in Run-Time by Analyzing Menu Selection Behaviors

YoungJin Ro, Ho-Jin Choi, Dan Hyung Lee, and In-Young Ko
School of Engineering, Information and Communications University, Daejeon, Korea
{ryj8516, hjchoi, danlee, iko}@icu.ac.kr

Abstract

Event-based self-adaptation of the user interface (UI) is introduced in this paper as an approach to solving problems in current static and dynamic UI approaches. Our approach does not require extra information about the user or the context, but uses events which can be gathered easily. It adopts the concept of an agent to externalize UI adaptation, so that no intensive modification of the internal logic of an application is needed. At the current stage of research, this paper focuses only on the menu organization aspect of UI design, where menu categorization and ordering can be accomplished dynamically based on the user's menu selection behaviors. The outcome of the approach is enhanced job performance, achieved by reducing menu search time and the occurrence of false selections. It is expected that many menu-driven systems can improve their usability by adopting our approach.

1. Introduction

Nowadays most software organizations recognize the importance of user interface (UI) design in making software products. No matter how good a 'zero defect' software product is, it should be regarded as a failure if users do not wish to use it because of bad design. There are several ways to improve the UI of a software product, and evolutionary prototyping has been one of the popular traditional methods for arriving at a suitable UI. However, this approach has limitations due to its "static" nature, and a more "dynamic" approach to UI design has been introduced recently to overcome them. Dynamic UI represents a new paradigm of UI design: a UI that can be changed (semi-)automatically while the software is in use, based on the analysis of the user's pattern of actions. In short, different users may see different UIs, each adapted to the individual. This paper investigates an approach to dynamic UI based on events. In our approach, events are extracted to recognize the user's behaviors and, based on the information carried by these events, the UI adapts itself to the user dynamically. This research is in its infancy, hence in this paper we focus only on the menu organization aspect of dynamic UI. The paper is organized as follows. The motivation for and the basic concept of dynamic UI are described in sections 2 and 3, respectively. Our approach is introduced in section 4, where the aim is to provide a guideline towards dynamic menu organization. Section 5 evaluates the potential usefulness of this approach, and section 6 concludes.

2. Motivation

There are many dialog styles, such as form filling, question and answer, and command languages. Each dialog style is used differently and each has its own strengths and weaknesses; for example, menu-driven interfaces are easy to learn, but inefficient and inflexible. In this paper we begin our research towards dynamic UI by focusing on menu organization, since menus provide a convenient UI even for novice users. By allowing the menu organization to enhance itself dynamically, this approach can substantially improve a menu-driven system's usability. When using pull-down menus, menu selection errors can degrade the performance of a job.
Menu selection errors are known to be caused by problems such as improper categorization, overlapping categories, and vague category titles. Some research reports that correcting selection errors in the UI saved 10% to 37% of search time and reduced task errors by up to 53%. Figure 1 shows examples of improper categorization in Microsoft Word 2003 and Microsoft PowerPoint 2003. Three examples appear in the figure. First, users must select Header and Footer from the View category rather than the Insert category, even though they open this menu in order to insert a header and footer. The second and third examples are similar: it is difficult to decide whether Page Setup belongs under File or Format, and whether Master belongs under View or Format. Since users cannot easily recognize the correct categorization, they make mistakes, and this degrades performance. However, it is not appropriate to claim that improper categorization errors affect every user, because some people agree with the original categorization.

Figure 1. Examples of improper categorization errors

Categorization is not a simple task for a UI designer, who should provide a reliable menu categorization for many different users. One popular approach to menu categorization is Hayhoe's: at least 50 users group items into self-defined categories, and the frequency with which users place pairs of items together in a group is then calculated, resulting in similarity measures. However, this approach has weaknesses. First, too many users are needed; Hayhoe argued that at least 50 subjects are required, and a study with 50 subjects is not easy for open-source software developers. Second, it does not guarantee optimal menu categories; in fact, it is not possible to produce a categorization that satisfies everyone. For these reasons, it seems difficult to reach a good solution with static categorization of menus, hence we seek solutions using a dynamic UI approach.

3. Dynamic UI

Dynamic UI is a new paradigm. It was introduced to overcome the limitations of static UI, but it is still not used in many applications. In a dynamic UI, the system tries to predict the user's immediate actions and frequently changes the dynamic part of the UI in anticipation of the next action. It is also expected that less effort will be required to communicate with users compared with a static UI, since the system automatically changes the UI based on the user's patterns of actions. Users can get a specialized UI without continually complaining to designers about an uncomfortable UI, because the system automatically reflects their patterns of actions. For example, the system can discover why a user has difficulty accomplishing a specific task and keep fixing the UI until the user no longer shows any problem. It is therefore possible to achieve user-centered design without the high cost of the static approach. The approach of dynamic adaptation based on user information takes information from users and then tailors the UI with it. An example of this approach is dynamic interaction generation (DIG), which was introduced to make user interaction dynamic. The DIG research prototype, called DIGBE (DIG for building environments), was also implemented. DIGBE converts a building management configuration database into a dynamic, adaptive user interface. The framework takes the user privileges, the roles of the interaction participants, the specific task, the objects of interest, and the values and types of data into account when making dynamic UI adaptations.
While Penner and Steinmetz [2] focused on adaptation based on user information such as operator role or task model, Menkhaus and Pree [3] were concerned with multi-platform access and contexts. With the appearance of new devices that can access Web applications, such as mobile phones and handheld computers, building interactive services becomes more difficult. As an example of dynamic adaptation based on contexts, the MUSA (multiple user interfaces, single application) prototype was introduced to support multiple platforms and tailor the UI properly. First, an abstract description of a UI makes it easier to develop UIs for a variety of computing devices. Also, MUSA tries to adapt to the context in which it is used. Tailoring of the interface design includes the capability of adapting content delivery to various devices while preserving the consistency and usability of the service. Although the approaches above seem convincing in special circumstances, they have weaknesses that prevent wide adoption. First, they usually support only pre-defined user types or contexts, which limits support for diverse, anonymous users in the Web environment. Second, adaptation cannot be made flexibly, because these approaches provide multiple UIs for a single application. Third, they adapt poorly to existing applications, since the UI must be built from scratch. Finally, they require the user's intervention to provide specific information, such as the user's role and access level, to make dynamic UI adaptation possible.

4. Event-based dynamic pull-down menu organization

4.1. Event-based self-adaptation

Our approach uses an agent. An agent perceives its environment using sensors and acts on the environment using actuators. Figure 2 shows the schematic diagram of the simple reflex agent used in our approach.

Figure 2. Schematic diagram of the system

The concept of an agent is useful for self-adaptation of the UI because this structure is appropriate for monitoring UI events and acting to change UI element properties externally. The agent is placed outside the application, so adaptation is possible in any application, since the internal logic of the application does not depend on the agent. With this externalization of UI evolution, there is no need to build an application-specific agent, and adapting existing applications costs less. The agent approach can therefore overcome the weaknesses of current dynamic UI approaches. Using UI events for self-adaptation has further strong points. First, in most applications, UI events can be extracted easily. Several kinds of applications already log events (e.g., Web applications), and UI events can be gathered without additional equipment. Moreover, gathering UI events does not depend on user types or contexts. In particular, menu selection events can be gathered easily when the system has hierarchical menus.

4.2. Preparation of designers

First, designers make the basic menu design. For menu organization, for example, designers should allocate areas for the pull-down menu and the menu bar using common sense; at a minimum, designers can make a basic menu design with random or alphabetic order. Second, designers set up the conditions and rules for dynamic adaptation of the UI. In the case of menu organization, designers can decide parameters such as the following (a hypothetical configuration sketch is given below):

- When does a sub-menu need to be moved to modify the categorization?
- When is a sub-menu hidden?
- Which is more important: frequency of use or order of use?
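One way such designer-supplied rules could be captured is as a plain configuration object; all keys and values here are invented for illustration and are not taken from the paper.

```python
ADAPTATION_RULES = {
    # move a sub-menu when another category's adjusted score exceeds
    # its current category's score by this margin (hypothetical value)
    "recategorize_score_margin": 0.2,
    # hide a sub-menu that has not been selected for this many sessions
    "hide_after_idle_sessions": 20,
    # relative importance of order of use (p) vs. frequency of use (q)
    "order_weight_p": 0.5,
    "frequency_weight_q": 0.5,
}
```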
4.3. Sensing events

The event-based self-adaptation approach allows us to collect usability information by analyzing UI events. Although the agent only perceives low-level events, such as shifts in input focus and key or mouse events, these events can be abstracted and filtered. By analyzing each event, it is possible to compute counts and statistics that provide measurable data to the system; these counts and statistics can later be used for usability improvement. For example, if Nielsen's model is used, which provides usability factors such as learnability, efficiency, memorability, errors, and satisfaction, the usability factors can be measured as in Table 1. Although satisfaction is one of the most important factors of usability, there is no way to record it automatically without user intervention; without users scoring their satisfaction, the system cannot derive satisfaction information from events. The table shows one example of deriving usability information from counts and statistics, but the method should be tailored to the characteristics of each system.

Table 1. Gathering usability from counts and statistics

<table>
<thead>
<tr>
<th>Usability factor</th>
<th>Counts and statistics</th>
</tr>
</thead>
<tbody>
<tr>
<td>Learnability</td>
<td>Current Total Time / Past Total Time; Current Time per Task / Past Time per Task</td>
</tr>
<tr>
<td>Efficiency</td>
<td>Total Time; Time per Task</td>
</tr>
<tr>
<td>Memorability</td>
<td>Current Idle Time / Past Idle Time; Current Time for Filtered Events / Past Time for Filtered Events</td>
</tr>
<tr>
<td>Errors</td>
<td>Number of Errors</td>
</tr>
</tbody>
</table>

These steps of event extraction and interpretation help the system gather usability information automatically, as above. When a system has complex functionality and a complex UI, however, it is not simple to extract events correctly. Accordingly, it is not easy to determine how lower-level events should be abstracted or which events should be treated as filtered events, since the system cannot know the user's objectives. Because this paper focuses on menus, the challenge becomes much easier: the user's objective is clear, since selecting the appropriate menu is always the user's final goal. For example, users do not select the font menu for printing, inserting a picture, or drawing a table; they click the font menu only to adjust the font format. It is also easy to determine filtered events, since menu selection events are simple enough. Assume that Font is categorized under Format. If a user selects the Help category first and later selects Format in order to select Font, then selecting the Help category can be classified as a filtered event. By recording which menus are finally selected, the system can recognize filtered events. In this paper, mainly three kinds of data are extracted by the system; the first two can be classified as menu selection events. First, the framework records selection events, such as moving the mouse pointer to a specific menu or category; when a user clicks a menu or category, this is also recorded as a selection event. Second, search time can be used meaningfully: search time is the time spent selecting a menu. These events can be sensed whenever a menu search is made. Third, the last kind of data is the frequency of use of a menu, i.e. how often a user uses it. It can be calculated from the delay between two identical menu search events. For example, if user A attempts to select menus every minute while user B attempts to select menus every 30 seconds, A is evaluated as having the lower frequency of use. The usage of the different counts and statistics is explained in the next sections.
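A minimal sketch of the frequency-of-use statistic just described, computed as the delay between two searches for the same menu; the log format is invented for illustration.

```python
def use_delays(click_log):
    """click_log: list of (timestamp_seconds, menu_name) final clicks.
    Returns, per menu, the delays between successive uses."""
    last_seen, delays = {}, {}
    for t, menu in click_log:
        if menu in last_seen:
            delays.setdefault(menu, []).append(t - last_seen[menu])
        last_seen[menu] = t
    return delays

log = [(0, "Font"), (60, "Font"), (90, "Copy"), (120, "Font")]
print(use_delays(log))   # {'Font': [60, 60]}: Font is used about once a minute
```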
4.4. Dynamic menu categorization

Menu categorization can affect performance. Since this paper deals with menus that can be classified as 'sets of lists', the next section also describes dynamic menu ordering: in a set the order of elements matters little, whereas in a list a specific order is often meaningful, so dynamic menu ordering is necessary for the lists. The basic approach to dynamic categorization is as follows: if, while searching for a menu, a user selects a category more frequently and spends more time in it than in other categories, the system modifies the menu categorization. For dynamic menu categorization, the system should keep data such as category selection information, time spent, and the number of menus per category. As an example, each category can be scored as follows:

\[ C[i] \times Time[i] / \text{NumMenu}[j] \]

C: category selections
Time: time spent
NumMenu: number of menus

After the system scores every category as above, if one score exceeds the other categories' scores by at least a defined threshold, the menu categorization is modified. This initial approach has a problem: the order of the selection path does not affect the categorization. Among the selected categories, the category selected first should receive a larger score than categories selected later. Therefore, the scoring is adjusted as follows:

- Original score = \( C[i] \times Time[i] / \text{NumMenu}[j] \)
- Adjusted score = Original score \( \times (\text{TotalPath} - \text{PathOrder}[i] + 1) / \text{TotalPath} \)

With this adjustment, each category receives a different weight even when several categories are selected. But further problems remain. First, the approach assumes that the user makes blind moves to find a menu. Second, the approach ignores the user's ability to learn: after a user has learned the menus and categories, changing the categorization can be confusing. In fact, users do not make blind moves when they cannot find an appropriate menu; they find the next category by reading the category titles. Dynamic categorization should therefore consider the influence of each title, i.e. a certain weight should be multiplied into the score. Here, the designer's common sense could be used to establish the influence of each title, but this is not recommended, because dynamic menu organization focuses on user-oriented menu design. Instead of the designer's common sense, information from user grouping studies can be used to set up the weights, since a grouping made by a single individual is often inferior, no matter who makes it. Table 2 shows an example of weighting for each category.

<table>
<thead>
<tr>
<th></th>
<th>File</th>
<th>Format</th>
<th>Tools</th>
</tr>
</thead>
<tbody>
<tr>
<td>No. of users selecting this grouping</td>
<td>50</td>
<td>30</td>
<td>20</td>
</tr>
<tr>
<td>Weight</td>
<td>50 / 100 = 0.5</td>
<td>30 / 100 = 0.3</td>
<td>20 / 100 = 0.2</td>
</tr>
</tbody>
</table>

The second problem was ignoring user learning. In fact, users can learn where a menu is located after several trials and errors. Users can also disable the adjustment of categorization: if a user disables dynamic menu categorization, there is no more confusion due to unwanted changes. However, when dynamic categorization is disabled, no menus can be re-categorized at all, which is a problem if the user wants to freeze only a few menus. The approach should therefore also consider the frequency of use of each menu, which can be gathered automatically. The system records the previous and current selections of a menu, and when the frequency of use of a certain menu is high, that menu gets a higher threshold for adjusting its categorization. In particular, clear information about the categorization of a menu is remembered for about 30 seconds, since human short-term memory lasts less than 30 seconds; hence, if a menu is selected every few seconds, modifying its categorization is not effective.
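A minimal sketch of the adjusted category scoring defined above; the example numbers and the title-weight value are invented for illustration.

```python
def adjusted_score(C, time, num_menu, path_order, total_path, title_weight):
    """Adjusted score for one category, following the formulas above."""
    original = C * time / num_menu
    order_factor = (total_path - path_order + 1) / total_path
    return original * order_factor * title_weight

# Example: a category selected first (path_order=1) out of 2 categories
# opened, with 4 selections, 6 s spent, 10 menus, and title weight 0.3.
score = adjusted_score(C=4, time=6.0, num_menu=10,
                       path_order=1, total_path=2, title_weight=0.3)
print(round(score, 3))   # 2.4 * 1.0 * 0.3 = 0.72
```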
4.5. Dynamic menu ordering

Dynamic menu ordering is simpler than dynamic menu categorization. It uses relative scoring for the order of use and the frequency of use. The remaining challenge is how to combine order of use and frequency of use. To consider the two together, the scores are adjusted for a fair comparison: in this paper, all values are rescaled into the range from 0 to 1. For example, 3 is adjusted to 0.875, since the original scoring range is from -4 to 4. Figure 3 shows an example of adjusting scores. In this figure, the left table is used for scoring order of use and the right table for scoring frequency of use. In the order-of-use table, a positive number means that the column menu is executed before the row menu. In the frequency-of-use table, a positive number means that the column menu is executed more often than the row menu, while a negative number indicates that it is executed less often.

<table>
<thead>
<tr>
<th>Menu1</th>
<th>Menu2</th>
<th>Menu3</th>
</tr>
</thead>
<tbody>
<tr>
<td>+2</td>
<td>+4</td>
<td></td>
</tr>
<tr>
<td>-3</td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>-4</td>
<td>0</td>
<td></td>
</tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th>Menu1</th>
<th>Menu2</th>
<th>Menu3</th>
</tr>
</thead>
<tbody>
<tr>
<td>+1</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>-1</td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td></td>
</tr>
</tbody>
</table>

Figure 3. Example of adjusting scores

Finally, the order of use and the frequency of use are combined with two weights, p and q: p is the order-of-use weight given by the user or designer, and q is the frequency-of-use weight. The default values of p and q are 0.5 and 0.5.
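A minimal sketch of the rescaling and weighted combination just described; the example scores are invented.

```python
def rescale(x, lo=-4, hi=4):
    """Map a raw score in [lo, hi] into [0, 1]; rescale(3) == 0.875."""
    return (x - lo) / (hi - lo)

def combined_score(order_score, freq_score, p=0.5, q=0.5):
    """Weighted combination of order-of-use and frequency-of-use scores."""
    return p * rescale(order_score) + q * rescale(freq_score)

# A menu with order-of-use score +2 and frequency-of-use score +1:
print(combined_score(2, 1))   # 0.5 * 0.75 + 0.5 * 0.625 = 0.6875
```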
5. Evaluation

To evaluate the proposed approach, 5 subjects were chosen and a set of 25 menus and 5 categories was established. Each subject grouped the menus into the categories, and a final categorization was produced by summarizing the five subjects' groupings. The hit ratio of each user's categorization against the generated categorization was then calculated, as shown in Figure 4.

Figure 4. Hit ratio of each user

In this figure, the average hit ratio is about 73%: when a user tries to select a menu, there is a 27% probability of mis-selection, and frequent mis-selections can degrade performance severely. The hit ratio of the categorization would be much higher in general cases, since the menus used in this experiment were deliberately ambiguous ones, such as Page Setup. Since the objective of the evaluation was to confirm the performance improvement when users make selection errors, ambiguous menus make the improvement easier to observe. Next, 25, 50, and 100 random menu selections were generated. An expert who knows the exact locations of all menus needs only 50 selection events to accomplish 25 menu selections, because each menu selection requires one category selection plus one menu selection (a sub-menu cannot be selected without first selecting its category). In the best case, then, only 100 and 200 selection events are needed to execute 50 and 100 random menu selections. A user who makes blind moves cannot directly find the category containing a menu; in the worst case, the user selects a menu only after opening all 5 categories. The number of selection events was therefore much larger than in the best case, and the gap between the best case and the observed result grew as more random menu selections were made. Even with title-based moves, a gap remains between the best case and the result. By adopting dynamic menu categorization, the number of false selection events is reduced dynamically and menu-search performance increases. Figures 5 and 6 show the average number of selection events with blind moves and title-based moves, in both static and dynamic menus.

Figure 5. Average number of selection events with blind moves

Figure 6. Average number of selection events with title-based moves

6. Conclusion

This paper proposed event-based self-adaptation of the UI as an approach to solving problems in existing dynamic UI approaches. At the current stage of research, our work focuses only on the menu organization aspect of UI design, where menu categorization and ordering can be accomplished dynamically based on the user's menu selection behaviors. Our approach does not require extra information about the user or the context, but uses events which can be gathered easily. It adopts the concept of an agent to externalize UI adaptation, so that no intensive modification of the internal logic of an application is needed. To show the validity of the proposed approach, we conducted a preliminary experiment demonstrating enhanced job performance through reduced menu search time and fewer false selections. It is expected that many menu-driven systems can improve their usability by adopting our approach.

7. Acknowledgement

This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Advancement). (IITA-2008-C1090-0801-0032)

8. References
{"Source-Url": "http://koasas.kaist.ac.kr/bitstream/10203/17230/1/Dynamic%20Pull-Down%20Menu%20Organization%20in%20Run-Time%20by%20Analyzing%20Menu%20Selection%20Behaviors.pdf", "len_cl100k_base": 4948, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17874, "total-output-tokens": 5375, "length": "2e12", "weborganizer": {"__label__adult": 0.00039577484130859375, "__label__art_design": 0.004550933837890625, "__label__crime_law": 0.0003690719604492187, "__label__education_jobs": 0.0021343231201171875, "__label__entertainment": 0.0001399517059326172, "__label__fashion_beauty": 0.0002332925796508789, "__label__finance_business": 0.0002574920654296875, "__label__food_dining": 0.0004036426544189453, "__label__games": 0.0006051063537597656, "__label__hardware": 0.0018644332885742188, "__label__health": 0.0005540847778320312, "__label__history": 0.0004122257232666016, "__label__home_hobbies": 0.00012612342834472656, "__label__industrial": 0.0004529953002929687, "__label__literature": 0.0003809928894042969, "__label__politics": 0.0002269744873046875, "__label__religion": 0.0005202293395996094, "__label__science_tech": 0.0748291015625, "__label__social_life": 9.09566879272461e-05, "__label__software": 0.0235443115234375, "__label__software_dev": 0.88720703125, "__label__sports_fitness": 0.00023281574249267575, "__label__transportation": 0.00045418739318847656, "__label__travel": 0.00021851062774658203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24666, 0.02653]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24666, 0.29959]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24666, 0.92515]], "google_gemma-3-12b-it_contains_pii": [[0, 3965, false], [3965, 8568, null], [8568, 12811, null], [12811, 17729, null], [17729, 22017, null], [22017, 24666, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3965, true], [3965, 8568, null], [8568, 12811, null], [12811, 17729, null], [17729, 22017, null], [22017, 24666, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24666, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24666, null]], "pdf_page_numbers": [[0, 3965, 1], [3965, 8568, 2], [8568, 12811, 3], [12811, 17729, 4], [17729, 22017, 5], [22017, 24666, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24666, 0.21359]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
9de49cb6d6a47ebee4fb223a925b17917b0bbc1c
[REMOVED]
{"len_cl100k_base": 6363, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25621, "total-output-tokens": 7606, "length": "2e12", "weborganizer": {"__label__adult": 0.0003325939178466797, "__label__art_design": 0.00041866302490234375, "__label__crime_law": 0.00035953521728515625, "__label__education_jobs": 0.0007791519165039062, "__label__entertainment": 6.16908073425293e-05, "__label__fashion_beauty": 0.00015497207641601562, "__label__finance_business": 0.0002157688140869141, "__label__food_dining": 0.0003204345703125, "__label__games": 0.0004086494445800781, "__label__hardware": 0.0005908012390136719, "__label__health": 0.0004868507385253906, "__label__history": 0.00023031234741210935, "__label__home_hobbies": 7.581710815429688e-05, "__label__industrial": 0.00044035911560058594, "__label__literature": 0.0003495216369628906, "__label__politics": 0.00025773048400878906, "__label__religion": 0.0004901885986328125, "__label__science_tech": 0.0296630859375, "__label__social_life": 8.881092071533203e-05, "__label__software": 0.00751495361328125, "__label__software_dev": 0.9560546875, "__label__sports_fitness": 0.0002448558807373047, "__label__transportation": 0.0004343986511230469, "__label__travel": 0.00018358230590820312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33389, 0.02718]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33389, 0.57069]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33389, 0.87007]], "google_gemma-3-12b-it_contains_pii": [[0, 2375, false], [2375, 5635, null], [5635, 8922, null], [8922, 12185, null], [12185, 14664, null], [14664, 18064, null], [18064, 20731, null], [20731, 22999, null], [22999, 25700, null], [25700, 29238, null], [29238, 32542, null], [32542, 33389, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2375, true], [2375, 5635, null], [5635, 8922, null], [8922, 12185, null], [12185, 14664, null], [14664, 18064, null], [18064, 20731, null], [20731, 22999, null], [22999, 25700, null], [25700, 29238, null], [29238, 32542, null], [32542, 33389, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33389, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33389, null]], "pdf_page_numbers": [[0, 2375, 1], [2375, 5635, 2], [5635, 8922, 3], [8922, 12185, 4], [12185, 14664, 5], [14664, 18064, 6], [18064, 20731, 7], [20731, 22999, 8], [22999, 25700, 9], [25700, 29238, 10], [29238, 32542, 11], [32542, 33389, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33389, 0.09877]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
89f772ae5c22f6e4e3c2d8a6751af9b81b00db1f
Integration of Interactive and Automatic Provers

Jia Meng
Computer Laboratory, University of Cambridge
Jia.Meng@cl.cam.ac.uk

Abstract. Interactive and resolution-based automatic provers have both been used widely. Interactive provers offer users expressive formalisms and flexibility and are suitable for proving theorems of any user-defined logic. However, they provide limited automation. In comparison, resolution-based automatic provers provide automation, but only allow first-order logic with equality. I am investigating combining the two types of provers by integrating the interactive prover Isabelle and the resolution-based automatic prover Vampire. This paper gives an overview of the integration and describes the techniques used in it. It also lists some experimental results.

1 Introduction

Interactive theorem provers support expressive formalisms which allow users to define functions and recursive types and to embed complicated logics and theories. However, users must guide each step of a proof, which requires significant human effort. Resolution-based provers are automatic but only allow first-order logic with equality. We investigate combining the interactive provers' flexibility and the resolution provers' automation through integration. In this paper I describe the progress of integrating the generic interactive theorem prover Isabelle [6] and the resolution prover Vampire [8].

1.1 Isabelle and Vampire

Isabelle is a powerful interactive theorem prover based on the typed λ-calculus. It is generic: users can embed various logics inside it. The currently supported logics include HOL (Higher Order Logic), HOLCF (the definitional extension of Church's Higher-Order Logic with Scott's Logic for Computable Functions), FOL (First Order Logic), and ZF (ZF Set Theory based on FOL). A proof goal in Isabelle is a statement expressed in some logic. Users need to specify what tactics and rules to use in order to prove a goal or decompose it into several smaller subgoals. A rule is a previously proved lemma or theorem; a tactic tells Isabelle how the rules should be applied. There are several built-in classical reasoning tactics, such as blast and fast (based on tableau methods), as well as equality reasoning tactics, such as auto and simp (based on equality rewriting). They take a default set of classical rules and equalities in the current context of the goal and attempt to find a proof automatically. However, users still need to choose the right tactics, and this requires proof skills. Vampire is one of the leading resolution theorem provers for first-order logic with equality; its paramodulation rule deals with equality. In each of the past three years it has won a category of the CADE ATP System Competition.

1.2 Aim of This Integration and Its Novelties

The aim [7] of integrating Isabelle and Vampire is that when a user attempts to prove an Isabelle goal, instead of asking the user to specify which tactics or rules to use, the goal is translated to first-order logic clauses along with a suitable set of rules. These rules are taken from the default configuration of blast and auto; Isabelle's existing provers use these rules already, since they are already-proved theorems. The clauses are then sent to Vampire, which runs in the background, attempting to prove each unsolved subgoal. When a proof is found, it is sent back to Isabelle for verification.
Vampire will combine classical and equality reasoning through resolution and paramodulation, and will replace many of Isabelle's tools such as blast and auto. Furthermore, our integration also aims to improve automation by automatically proving goals that cannot be proved by Isabelle's built-in tools. There have been several previous attempts at integrating interactive and automatic theorem provers. KIV has been integrated with 3TAP by Ahrendt et al. [1]. We hope to improve upon their results through the use of Vampire, one of the world's best automatic provers. The HOL system [2] has been integrated with a model elimination prover developed by John Harrison [3], and was later integrated with the prover Gandalf by Joe Hurd [4]. However, the users must collect relevant lemmas manually. Two major novelties of our integration are:

- Users are less likely to have to collect lemmas manually. They do not have to specify proof tactics either.
- As Vampire runs in the background, users do not have to wait for a response from Vampire before continuing with the rest of the proof.

I have experimented with the link-up between Isabelle/ZF (with equality) and Vampire. Isabelle/ZF is based on first-order set theory and is untyped. The rest of this paper discusses the following aspects of the work involved:

1. Translation between the Isabelle/ZF formalism and first-order logic.
2. Minimisation of Vampire's proof search space.
3. Vampire's weight and precedence assignments to function and predicate symbols.
4. Settings of Vampire.
5. A collection of results.
6. Conclusion and future work.

2 Translation Between Isabelle/ZF and First-Order Logic

A practical way of translating between the two formalisms is essential for efficient proving by Vampire. Therefore I have carried out many experiments which consisted of taking `blast`, `fast`, `clarify`, `auto`, and `simp` invocations from existing proofs and attempting to reproduce the proofs using Vampire. During a proof, a set of classical rules and equality rewriting rules in the current Isabelle context is translated into first-order axiom clauses. The goals are negated, converted to clauses, and sent to Vampire. As this was experimentation, the translation between the formalisms was done manually. Isabelle/ZF formulas (rules and goals) that are already in first-order logic (FOL) form can be translated directly to clauses using the Clause Normal Form (CNF) transformation. Other Isabelle/ZF formulas must be reformulated before the CNF transformation.

2.1 Transformation of Formulas Not in FOL Form

Isabelle/ZF rules, such as the elimination rule for set intersection (IntE),

\[ c \in A \cap B \implies ((c \in A \land c \in B) \implies P) \implies P \]

need to be translated into a FOL formula (\(\cap E\)),

\[ \forall c\, (c \in A \cap B \rightarrow (c \in A \land c \in B)) \]

since the predicate variable \(P\) is not allowed in FOL. This translation is correct: the first formula is the elimination rule written in natural deduction form, and we need to unfold it in order to eliminate the predicate variable \(P\). Although the predicate \(P\) disappears, we lose no information during a proof.
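As a worked illustration (our own derivation, not taken from the paper), clausifying the translated formula yields two axiom clauses:

\[
\forall c\,\bigl(c \in A \cap B \rightarrow (c \in A \land c \in B)\bigr)
\;\;\leadsto\;\;
\neg(c \in A \cap B) \lor c \in A, \qquad
\neg(c \in A \cap B) \lor c \in B
\]

where \(c\), \(A\) and \(B\) are implicitly universally quantified variables of the clauses.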
Moreover, rules such as domainE,
\[ [\![\, a \in \mathrm{domain}(r);\; \bigwedge y.\; \langle a, y\rangle \in r \Longrightarrow P \,]\!] \Longrightarrow P \]
should be translated into
$$\forall a\, r\; [\, a \in \mathrm{domain}(r) \rightarrow \exists y\, (\langle a, y \rangle \in r) \,]$$
We also need to remove Isabelle terms such as $\bigcap_{x \in A} B(x)$, since they are not present in first-order logic. A formula $y \in \bigcap_{x \in A} B(x)$ is translated into
$$\forall x\, [x \in A \rightarrow y \in B(x)] \land \exists a\, [a \in A]$$
In addition, a formula $\phi(\bigcap_{x \in A} B(x))$ is equivalent to $\exists v\, [\phi(v) \land \forall u\, (u \in v \leftrightarrow u \in \bigcap_{x \in A} B(x))]$.

2.2 Some Other Issues of Translation

A subset relation $R \subseteq S$ is equivalent to $\forall x\, (x \in R \rightarrow x \in S)$; this reduces the subset relation to the membership relation. Experiments show that Vampire can find a proof much more quickly if the subset relation is replaced by its equivalent membership relation. This is probably because during most complex proofs, subset relations have to be reduced to equivalent membership relations anyway. The experimental results are shown in Section 5.1.

Some formulas involve set equalities such as $A = B$. In many proofs, set equality predicates should be reduced by resolution to two subset predicates: $A \subseteq B$ and $B \subseteq A$. However, Vampire usually gives a positive equality literal (equal) a low priority relative to the other literals in the same clause, so the positive equality literal is likely to be selected and resolved last during resolution. Experiments show that better performance can be achieved when set equality predicates are replaced by subset predicates, which are then further reduced to formulas involving membership predicates as shown above. The experimental results are shown in Section 5.2.

2.3 Minimising Search Space

The size of the search space (in terms of the number of clauses) plays a significant role in the performance of resolution provers. Formula renaming [5] has been used to reduce the number of clauses generated during CNF transformation: selected subformulas are replaced by fresh predicate symbols (together with clauses defining them), which avoids the multiplicative blow-up caused by distributing disjunctions over conjunctions. For instance, the rule Ap_contractE generates 135 clauses using standard CNF transformation; after applying formula renaming to this rule, the CNF transformation generates 15 clauses. In the proof-reproduction attempts with Vampire, four proof goals required the use of Ap_contractE. None of the four goals was proved when standard CNF transformation was used; however, three of them were proved after applying the formula renaming method.

3 Weight and Precedence Assignment

During a resolution proof, if we want a literal to be eliminated sooner, it should have a higher weight relative to the other literals in the same clause. This difference in weights gives an ordering on literals. Vampire uses the Knuth-Bendix Ordering (KBO) [9] to compute this ordering. Furthermore, KBO is parameterized by the weights and precedences of function and predicate symbols, which can be assigned explicitly by users.
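To make the role of symbol weights concrete, the following minimal sketch compares total term weights, which is the first criterion KBO checks (full KBO also takes variable occurrences and symbol precedence into account). The term encoding is illustrative, not Vampire's internal representation; it anticipates the combinator example discussed next.

```python
# Sketch of the weight component of the Knuth-Bendix Ordering.
# Terms are nested tuples ('f', arg1, ...); variables are strings.

def weight(term, w, default=1):
    if isinstance(term, str):          # a variable: minimal weight
        return default
    head, *args = term
    return w.get(head, default) + sum(weight(a, w, default) for a in args)

I   = ('I',)
SKK = ('app', ('app', ('S',), ('K',)), ('K',))    # SKK as applications

# With all symbols at the default weight, the defined constant is lighter
# than its definition, so ordered paramodulation rewrites SKK towards I
# and never unfolds I.
print(weight(I, {}), weight(SKK, {}))             # 1 5
# Giving I a large weight reverses the orientation: I unfolds to SKK.
print(weight(I, {'I': 10}), weight(SKK, {}))      # 10 5
```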
Correct weight and precedence assignment is important for several reasons. An Isabelle proof usually requires definitions of constants, functions, etc. to be unfolded before tactics can be applied. These definitions can instead be sent to Vampire as equality clauses, so that users do not have to specify which definition should be unfolded. In order to have Vampire replace the definiendum by the definiens through ordered paramodulation, we need to assign greater weights to the function symbols that occur in the definiendum. An example was a proof of the lemma \texttt{I\_contract\_E} in the Isabelle/ZF theory file \texttt{Comb.thy}. It proves that the combinator \texttt{I} (the identity combinator) has no possible contraction. An axiom clause for the definition \texttt{I = SKK} was included in the Vampire axiom set. Without assigning a higher weight to \texttt{I} relative to \texttt{K} and \texttt{S}, ordered paramodulation will never replace \texttt{I} by \texttt{SKK}, as the latter is much heavier than the former (all function symbols have the same weight by default). After this assignment was made, Vampire proved the goal quickly.

More importantly, Isabelle rules carry information indicating whether a rule should be used as an elimination rule (forward chaining) or an introduction rule (backward chaining). This information is lost after the rules are translated into clauses. Weight and precedence assignment in Vampire is probably the only way to preserve it. However, the resulting KBO is only a partial ordering on terms with variables, so in some situations weight assignment may not produce any effect.

4 Settings of Vampire

Vampire allows users to specify various settings of the prover. Considerable experimentation with numerous settings was carried out in order to find the best combinations. This was done by attempting to prove around 250 lemmas taken from the Isabelle/ZF theory files \texttt{equalities.thy} and \texttt{Comb.thy}; each lemma presents between one and four separate goals. The experiments show that Vampire's default settings are usually good. Moreover, the literal selection mode is the most important factor in determining the speed of proofs; four selection modes (\texttt{selection4}, \texttt{selection5}, \texttt{selection6} and \texttt{selection7}) are better than the others. Vampire also supports the Set of Support strategy (SOS), and most of the goals require SOS to be turned on in order to find proofs. Overall, the experiments show that five combinations of settings are better than the rest. They were written to five separate setting files so that five Vampire processes can run in parallel, as sketched below. The results of some tests are shown in the next section.
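As an illustration of this parallel set-up, here is a minimal driver sketch. The contents of the setting files, the command line, and the success marker are placeholders, not the options of any particular Vampire release; consult the documentation of your version for the real ones.

```python
# Run one Vampire process per setting file in parallel and report the
# first proof found. All file names and options are illustrative.
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

SETTING_FILES = ["defaultSetting", "setting1", "setting2", "setting3", "setting4"]

def run_vampire(setting_file, problem="goal.tptp", limit=60):
    opts = open(setting_file).read().split()    # one option token per entry
    try:
        proc = subprocess.run(["vampire", *opts, problem],
                              capture_output=True, text=True, timeout=limit)
    except subprocess.TimeoutExpired:
        return setting_file, ""
    return setting_file, proc.stdout

with ThreadPoolExecutor(max_workers=len(SETTING_FILES)) as pool:
    jobs = [pool.submit(run_vampire, s) for s in SETTING_FILES]
    for job in as_completed(jobs):
        setting, out = job.result()
        if "Refutation found" in out:           # placeholder success marker
            print("proved using settings from", setting)
            break
```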
5 Results

5.1 Comparisons of Subset and Membership Relations

In order to investigate whether we should reduce subset relations to the equivalent membership relations, proofs of 32 goals were attempted. These goals involve positive subset predicates, negative subset predicates, or both. Without replacing any subset predicate, only 17 goals were proved; after I removed all subset predicates, 30 goals were proved. A more detailed comparison is shown in Table 1 below.

<table> <thead> <tr> <th>Number of Goals Involving</th> <th>Positive Subset</th> <th>Negative Subset</th> <th>Both</th> </tr> </thead> <tbody> <tr> <td>14</td> <td>11</td> <td>18</td> <td>6</td> </tr> </tbody> </table>

Some explanation of Table 1 may be useful. The intersection of the row "Number of Goals Involving" and the column "Positive Subset" indicates the total number of goals in which positive subset predicates occur; these goals may involve negative subset predicates as well. The intersection of the row "Number of Goals Proved if Removing Both Positive and Negative Subset" and the column "Positive Subset" indicates that 13 goals were proved (out of the 14 goals involving positive predicates) once all subset predicates, both positive and negative, were removed. The other rows and columns read similarly.

5.2 Tests on Equality Literals

I tried to prove twelve goals involving equality literals. Eleven of these goals have negative equalities, and one has a unit clause with a positive equality literal. Vampire quickly found a proof for the goal containing the positive equality, as I had expected: in a unit clause, a positive equality is definitely selected and is resolved with some negative equality literal (a negative equality literal receives a higher weight than the other literals occurring in the same clause). In comparison, only one of the goals involving a negative equality was proved. Once I replaced all negative equalities by subset and then membership literals, ten of the goals were proved.

5.3 First Comparison of Isabelle/ZF and Isabelle/ZF-Vampire

The aim of the first comparison was to find out how well the Isabelle/ZF-Vampire integration can prove goals that were originally proved by Isabelle's built-in tools, such as blast, auto, simp and clarify. This experiment is also important to demonstrate whether the integration can improve automation by combining the built-in tools, which must otherwise be invoked separately in Isabelle. The initial set of tests involved proving around 250 lemmas taken from equalities.thy and Comb.thy, with around 70 axiom clauses included in the axiom set. Most of the goals were proved with this axiom set. As the ultimate aim is to give Vampire a large set of axioms, tests with a larger axiom set were necessary. During this second run of tests, around 129 to 160 axiom clauses were used (as explained in Section 1, these axiom clauses are generated from the rules taken from the default configurations of blast and auto). Vampire tried to prove 37 lemmas (63 separate goals), drawn from the previous 250 lemmas. Each lemma was attempted five times using the five combinations of settings: defaultSetting uses the default settings; setting1 to setting3 use Vampire's literal selection modes selection5 to selection7, respectively; setting4 turns on dynamic splitting. The lemmas from equalities.thy were mainly proved by the blast tactic (together with tactics such as clarify and simp), while the lemmas from Comb.thy are more complicated and many of them also used the auto tactic. The results are shown in Table 2 below. The time limit for each proof attempt was 60 seconds.
<table> <thead> <tr> <th>Setting File</th> <th>Number of Goals Proved from Comb.thy</th> <th>Number of Goals Proved from equalities.thy</th> <th>Total Number of Goals Proved</th> </tr> </thead> <tbody> <tr> <td>defaultSetting</td> <td>21</td> <td>28</td> <td>49</td> </tr> <tr> <td>setting1</td> <td>22</td> <td>28</td> <td>50</td> </tr> <tr> <td>setting2</td> <td>21</td> <td>27</td> <td>48</td> </tr> <tr> <td>setting3</td> <td>21</td> <td>28</td> <td>49</td> </tr> <tr> <td>setting4</td> <td>18</td> <td>27</td> <td>45</td> </tr> <tr> <td>Combination of all settings</td> <td>24</td> <td>28</td> <td>52</td> </tr> </tbody> </table>

Fifty-two goals out of 63 were proved by the combination of all five settings within the time limit. Many tactics, such as blast and auto, can indeed be replaced by Vampire. The tests also indicate the potential for more goals to be proved by running several Vampire processes with different settings in parallel, although the results so far are not dramatic. In addition, there are eight relatively complex goals for which the time each setting takes to find a proof varies significantly. This shows that goals could be proved more quickly by running several Vampire processes in parallel. The performance variance between settings is more significant when proving more complicated lemmas (here, the lemmas drawn from Comb.thy).

5.4 Second Comparison of Isabelle/ZF and Isabelle/ZF-Vampire

A more important aim of this integration is to automatically prove goals that cannot be proved by Isabelle's built-in tools, and hence to reduce user interaction. This second comparison examined whether the integration can prove goals that were not proved by blast, auto, simp, etc. The Isabelle proofs of these goals consist of several proof steps carried out by users; if the goals can be proved automatically by Vampire, then Isabelle users will not have to specify these proof steps. This set of experiments took 15 lemmas from the Isabelle/ZF theory files Comb.thy and PropLog.thy. One point to consider is at which stage of a proof the current goal or subgoals should be sent to Vampire for an automatic proof. Induction is sometimes necessary to prove a goal, and we are not aiming to automate the induction step; therefore, for those lemmas that were proved by induction in Isabelle, I sent to Vampire the subgoals that remained after induction was performed. The results are shown in Table 3. Some lemmas present more than one subgoal to Vampire. Ten lemmas were proved by Vampire; for the lemmas that could not be proved automatically, proofs of one or more subgoals were not found. The combination of the five Vampire settings was used during the tests. Lemma IDs marked with asterisks indicate that I attempted to eliminate all Isabelle proof steps for those lemmas.
<table> <thead> <tr> <th>Lemma ID</th> <th>Number of Isabelle Proof Steps Eliminated</th> <th>Number of Subgoals Sent to Vampire</th> <th>Number of Subgoals Proved by Vampire</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>4</td> <td>2</td> <td>2</td> </tr> <tr> <td>2</td> <td>2</td> <td>2</td> <td>1</td> </tr> <tr> <td>3</td> <td>3</td> <td>2</td> <td>2</td> </tr> <tr> <td>4</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>5*</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>6*</td> <td>4</td> <td>1</td> <td>1</td> </tr> <tr> <td>7*</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>8*</td> <td>3</td> <td>1</td> <td>1</td> </tr> <tr> <td>9*</td> <td>4</td> <td>1</td> <td>1</td> </tr> <tr> <td>10*</td> <td>6</td> <td>1</td> <td>1</td> </tr> <tr> <td>11</td> <td>3</td> <td>3</td> <td>1</td> </tr> <tr> <td>12*</td> <td>3</td> <td>1</td> <td>0</td> </tr> <tr> <td>13*</td> <td>2</td> <td>1</td> <td>0</td> </tr> <tr> <td>14</td> <td>10</td> <td>2</td> <td>1</td> </tr> <tr> <td>15</td> <td>4</td> <td>2</td> <td>2</td> </tr> </tbody> </table>

6 Conclusion and Future Work

The current experimentation clearly demonstrates the potential of integrating Isabelle and Vampire: user interaction with Isabelle during a proof is significantly reduced. We will next implement automatic communication between the two provers. This implementation will consist of several parts:

- Automatic translation of formulas between Isabelle/ZF and Vampire's first-order logic. This should not be difficult once a practical way of performing the translation has been found.
- An interface between the two provers. At some point during an Isabelle proof, a goal or some subgoals may be sent to the automatic provers. One or more automatic provers may run on servers across the network, executing several processes (with different settings) in parallel and attempting to solve any unsolved subgoal. Since it might take a while for the automatic provers to find a proof, users should be able to carry on with the rest of the proof. The way these automatic and interactive provers interact may be important to the overall performance of the integration, and as there is much communication between the provers, we need to ensure that it is carried out quickly and efficiently.
- Proof reconstruction in Isabelle. Once a proof is found by Vampire, it must be sent back to Isabelle for verification. However, the proofs of resolution-based provers are hard to read, especially if Skolemization is used, so we need to implement this proof reconstruction efficiently in Isabelle.

Furthermore, future work will include investigating how to use Vampire's weight and precedence assignments more effectively. Trials of other Vampire settings will help fine-tune its performance. In addition, integration of Isabelle's other supported logics with Vampire will take place.

References

4. Joe Hurd. Integrating Gandalf and HOL. In Yves Bertot, Gilles Dowek, André Hirschowitz, Christine Paulin, and Laurent Théry, editors, Theorem Proving in Higher Order Logics, 12th International Conference, TPHOLs '99, volume 1690 of Lecture Notes in Computer Science. Springer, 1999.
{"Source-Url": "http://cliplab.org/Conferences/Colognet/ITCLS-2003/AcceptedPapers/Jia.pdf", "len_cl100k_base": 5294, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 23684, "total-output-tokens": 5986, "length": "2e12", "weborganizer": {"__label__adult": 0.0004193782806396485, "__label__art_design": 0.0005125999450683594, "__label__crime_law": 0.0007467269897460938, "__label__education_jobs": 0.0014772415161132812, "__label__entertainment": 0.00012803077697753906, "__label__fashion_beauty": 0.00022041797637939453, "__label__finance_business": 0.0003552436828613281, "__label__food_dining": 0.000614166259765625, "__label__games": 0.0009441375732421876, "__label__hardware": 0.0011663436889648438, "__label__health": 0.0009660720825195312, "__label__history": 0.00034689903259277344, "__label__home_hobbies": 0.0001430511474609375, "__label__industrial": 0.0008497238159179688, "__label__literature": 0.00046896934509277344, "__label__politics": 0.0005412101745605469, "__label__religion": 0.0007371902465820312, "__label__science_tech": 0.1859130859375, "__label__social_life": 0.00016129016876220703, "__label__software": 0.010101318359375, "__label__software_dev": 0.7919921875, "__label__sports_fitness": 0.0004112720489501953, "__label__transportation": 0.0008082389831542969, "__label__travel": 0.00023484230041503904}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25977, 0.02093]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25977, 0.43826]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25977, 0.92064]], "google_gemma-3-12b-it_contains_pii": [[0, 2363, false], [2363, 5092, null], [5092, 7598, null], [7598, 10070, null], [10070, 13237, null], [13237, 15418, null], [15418, 18618, null], [18618, 22307, null], [22307, 25224, null], [25224, 25977, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2363, true], [2363, 5092, null], [5092, 7598, null], [7598, 10070, null], [10070, 13237, null], [13237, 15418, null], [15418, 18618, null], [18618, 22307, null], [22307, 25224, null], [25224, 25977, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25977, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25977, null]], "pdf_page_numbers": [[0, 2363, 1], [2363, 5092, 2], [5092, 7598, 3], [7598, 10070, 4], [10070, 13237, 5], [13237, 15418, 6], [15418, 18618, 7], [18618, 22307, 8], [22307, 25224, 9], [25224, 25977, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25977, 0.21875]]}
Contents

1 Introduction
1.1 Goals, Target Audience, and Workflows
1.2 Rocs in a Nutshell
1.2.1 Graph Documents
1.2.2 Edge Types
1.2.3 Node Types
1.2.4 Properties
1.3 Tutorial
1.3.1 Generating the Graph
1.3.2 Creating the Element Types
1.3.3 The Algorithm
1.3.4 Execute the Algorithm
2 The Rocs User Interface
2.1 Main Elements of the User Interface
2.2 Graph Editor Toolbar
3 Scripting
3.1 Executing Algorithms in Rocs
3.1.1 Control Script Execution
3.1.2 Script Output
3.1.3 Scripting Engine API
4 Import and Export
4.1 Exchange Rocs Projects
4.1.1 Import and Export of Graph Documents
4.1.1.1 Trivial Graph File Format
4.1.1.1.1 Format Specification
4.1.1.1.2 Example
4.1.1.2 DOT Language / Graphviz Graph File Format
4.1.1.2.1 Unsupported Features
4.1.1.2.2 Example
5 Graph Layout
5.1 Laying out graphs automatically in Rocs
5.1.1 Force Based Layout
5.1.2 Radial Tree Layout
6 Credits and License

Abstract

Rocs is a graph theory tool by KDE.

Chapter 1
Introduction

This chapter provides an overview of the core features and the typical workflows. The most important parts are Section 1.2 and Chapter 3, which together should allow every new user to start using Rocs directly.

1.1 Goals, Target Audience, and Workflows

Rocs is a graph theory tool for everybody interested in designing and studying graph algorithms. In particular, these are

- lecturers, who want to demonstrate algorithms to their students,
- students and researchers, who want to see how their algorithms perform, and
- everybody who is interested in data structures and algorithms.

For all of them, Rocs provides an easy-to-use graphical editor for creating graphs, a powerful scripting engine to execute algorithms, and several helper tools for simulations, experiments, and graph exports.
The typical way of using Rocs is to create a graph, either by hand (i.e., by dragging nodes and edges onto the whiteboard) or by using one of the graph generators. Graph algorithms can then be implemented and executed on the created graph, and all changes that an algorithm performs are immediately visible in the graph editor.

1.2 Rocs in a Nutshell

Every Rocs session is a project: when opening Rocs, an empty project is created; when loading a project, it becomes the current project. A project consists of graph documents, scripts/algorithms, and a journal.

1.2.1 Graph Documents

A graph document represents the content of a whiteboard in the graph editor. It contains information about the user-defined node and edge types, their properties, and the already created nodes and edges. That is, Rocs understands the set of all nodes and edges of a graph document to form a (not necessarily connected) graph. Everything belonging to a graph document is accessible to the script engine via the global object `Document`.

1.2.2 Edge Types

In some scenarios, graphs consist of different types of edges (e.g., an undirected graph plus the tree edges computed by a breadth-first-search algorithm) that shall be handled and displayed differently. For this, besides the default edge type, you can define arbitrary further edge types. Each edge type has its own visual representation and dynamic properties, and can be set to be either undirected or directed. The scripting interface provides convenience methods for accessing only the edges of a specific type.

1.2.3 Node Types

Analogously to edge types, you can define different types of nodes in a graph (e.g., to give some nodes special roles). Each node type has its own visual representation and dynamic properties.

1.2.4 Properties

Every node or edge element can have properties. Those properties must be set up at the corresponding node or edge type. Properties are identified and accessed by their names and can contain any value. To create new properties or change existing ones, use the Element Types sidebar and its Properties button to open the property dialog. You can also use the scripting engine to access registered properties and change their values. In the following example we assume that the property "weight" is registered for the default node type.

```javascript
var nodes = Document.nodes();
for (var i = 0; i < nodes.length; ++i){
    nodes[i].weight = i;
}
for (var i = 0; i < nodes.length; ++i){
    Console.log("weight of node " + i + ": " + nodes[i].weight);
}
```

1.3 Tutorial

In this section we create an example project to explore some of the most important functions of Rocs. The goal is to create a graph and a script that illustrate a simple 2-approximation algorithm for the minimum vertex cover problem. The minimum vertex cover problem is the problem of finding a subset C of graph nodes of minimal size such that each graph edge is connected to at least one node in C. This problem is known to be NP-hard, and we want to illustrate how to find an approximation of factor 2 by computing a matching in the given graph. (The factor-2 bound holds because the matching edges are pairwise disjoint, so any vertex cover must contain at least one endpoint of each matching edge, while C contains exactly two.) Our goal is to visualize the relationship between the matching and the minimum vertex cover. For this, we specify two edge types, one to display matching edges and one to display "ordinary" edges, as well as two node types that we use to distinguish nodes contained in C from those not contained in C.

1.3.1 Generating the Graph

For creating the graph, we use a default graph generator provided by Rocs.
The generator can be found in the main menu at Graph Document → Tools → Generate Graph. There, we select a Random Graph with 30 nodes, 90 edges, and seed 1 (the seed is the starting seed for the random graph generator; using the same seed multiple times results in the same, reproducible graphs).

1.3.2 Creating the Element Types

We use the Element Types sidebar and create a second node type as well as a second edge type. For both new types we open the properties dialog via the respective Properties buttons and set their IDs to 2. Furthermore, we change the colors of elements of these two new types (to distinguish them from the default types). Finally, we set all edge types to be bidirectional and the IDs of the default types to 1.

1.3.3 The Algorithm

Finally, we have to implement the approximation algorithm. For this we use the following implementation:

```javascript
// reset all nodes and edges to the default types
var nodes = Document.nodes();
for (var i = 0; i < nodes.length; i++) {
    nodes[i].type = 1;
}
var edges = Document.edges();
for (var i = 0; i < edges.length; i++) {
    edges[i].type = 1;
}

var E = Document.edges(); // set of unprocessed edges
var C = new Array();      // nodes of the vertex cover
while (E.length > 0) {
    var e = E[0];         // take the first edge e = {u,v}
    var u = e.from();
    var v = e.to();
    e.type = 2;           // set e to be a matching edge
    E.shift();            // remove e (i.e., E[0]) from the edge list
    C.push(u);            // add u to C
    C.push(v);            // add v to C

    // mark u and v as nodes in C
    u.type = 2;
    v.type = 2;

    // remove from E all edges incident to u or v
    var adjacent = u.edges();
    for (var i = 0; i < adjacent.length; i++) {
        var index = E.indexOf(adjacent[i]); // find the index
        if (index != -1) {
            E.splice(index, 1);             // remove it if found
        }
    }
    adjacent = v.edges();
    for (var i = 0; i < adjacent.length; i++) {
        var index = E.indexOf(adjacent[i]);
        if (index != -1) {
            E.splice(index, 1);
        }
    }
}
Console.log("Vertex Cover contains " + C.length + " nodes.");
```

1.3.4 Execute the Algorithm

The algorithm can be executed via the Run button in the script control panel.

Chapter 2
The Rocs User Interface

2.1 Main Elements of the User Interface

The user interface is divided into several logical parts, described below.

Graph Editor: The editor provides a whiteboard on which nodes and edges can be placed. Double-clicking any of its elements opens a corresponding property menu. You can use the tools from the graph editor toolbar to create and modify graphs.

Graph Editor Toolbar: The toolbar provides the Create Node and Create Edge tools for creating new elements on the whiteboard. Note the extra toolbar for selecting the respective node or edge type, which becomes visible when one of these tools is selected. Tools for selecting, moving, and deleting elements are also available here. For details see Section 2.2.

Side Bar: At the right, you can find the side bar, which provides several tools for your workflow:

- **Element Types**: This widget gives you direct access to the available edge and node types.
- **Journal**: Each project has its own journal that can be used to, e.g., note tasks, results, or observations.
- **Scripting API**: To get direct access to the script documentation, you can open this widget.

Script Editor: In this text editor you can write algorithms, as explained in detail in Chapter 3. You can work on several script documents simultaneously by using several tabs.

Script Output: This text area shows either debug information or the script output of your algorithm, depending on the setting toggled at the top of the widget.
If the script throws an error, the debug output is presented automatically.

Controls: Here you can find the controls for executing scripts. You can execute the script that is currently open in the script editor by pressing **Run**. While the script is executing, it can be stopped by pressing the **Stop** button.

2.2 Graph Editor Toolbar

This toolbar consists of the following actions. Clicking an action means that your mouse pointer applies this action on the graph editor whiteboard:

- **Select and Move**: To select elements, either click on unused space on the whiteboard, keep the mouse pressed, and draw a rectangle containing some data elements and/or edges, or directly click on an unselected element to select it. If you click on a selected element, or a set of selected elements, and keep the mouse pressed while moving around, you move those elements. Moving selected elements is also possible with the arrow keys.
- **Create Node**: Click at an arbitrary position on the whiteboard to create a new data element belonging to the currently selected data structure. Keeping the mouse pointer pressed on the button opens a menu in which the data type of the newly created data elements can be selected (only if different data types exist).
- **Create Edge**: Click on one data element, keep the mouse pressed, and draw a line to the data element the edge shall point to. This action only succeeds if the current graph allows this edge to be added (e.g., in an undirected graph you are not allowed to add multiple edges between two data elements). Keeping the mouse pointer pressed on the button opens a menu in which the edge type of the newly created edges can be selected (only if different edge types exist).
- **Delete**: Click on an element to delete it. If you delete a node, all adjacent edges are also deleted.

Chapter 3
Scripting

3.1 Executing Algorithms in Rocs

Rocs internally uses the QtScript JavaScript engine. This means that all algorithms you implement must use JavaScript. In the following, we explain how to access and change elements of a graph document from the scripting engine. It is important to note that changes made by the scripting engine are directly reflected in the properties of the graph editor elements.

3.1.1 Control Script Execution

There are different execution modes for your algorithms:

- Run: Execute the script until it finishes.
- Stop: Stop script execution (only available while a script is executing).

3.1.2 Script Output

During the execution of an algorithm, debug and program output is displayed in the Debug & Script Output. If the scripting engine detects a syntax error in your script, the error is also displayed as a debug message. Note that all program messages are also displayed in the debug output (as bold text). You can control the text that is displayed in the script output with the following functions:

```javascript
Console.log(string message);   // displays the message as script output
Console.debug(string message); // displays the message as debug output
Console.error(string message); // displays the message as error output
```

3.1.3 Scripting Engine API

The different parts of Rocs each provide a static element that can be accessed by the scripting engine. These are:

- **Document** for the graph document
- **Console** for the console log output

For the explicit API and a method reference, please see the inline help in the Rocs side bar.
Chapter 4
Import and Export

4.1 Exchange Rocs Projects

Rocs projects can be imported and exported as archived .tar.gz files. These archives can be used to exchange projects. Import and export are done with Graph Document → Import Graph and Graph Document → Export Graph As, respectively.

4.1.1 Import and Export of Graph Documents

Rocs currently supports import and export of the following file formats:

- DOT files, also known as Graphviz files
- GML files
- Trivial Graph Format files
- Keyhole Markup Language Format

4.1.1.1 Trivial Graph File Format

The Trivial Graph Format (TGF) is a simple text-based file format for describing graphs. A TGF file consists of a list of node definitions, which map node IDs to labels, followed by a list of edges. In this format it is only possible to have one label per node and one value per edge. Rocs interprets imported graphs as undirected graphs. Exported graphs will contain two edges per connection if connections are bidirectional.

4.1.1.1.1 Format Specification

- The file starts with a list of nodes (one node per line), followed by a line containing only the character "#", followed by a list of edges (one edge per line).
- A node consists of an integer (identifier), followed by a space, followed by an arbitrary string.
- An edge consists of two integers (identifiers) separated by a space, followed by a space, followed by an arbitrary string. It is assumed that the directed edge points from the first identifier to the second.

4.1.1.1.2 Example

```
1 starting node
2 transmitter
3 sink
#
1 2 blue
2 1 red
2 3 green
```

4.1.1.2 DOT Language / Graphviz Graph File Format

The DOT language is a plain-text graph description language that allows both a good human-readable representation of graphs and efficient processing by graph layout programs. DOT is the default file format of the Graphviz graph visualization suite, but it is also widely used by other graph tools. The usual file extensions for DOT are .gv and .dot.

4.1.1.2.1 Unsupported Features

Rocs can parse every graph file that contains a graph specified according to the DOT language specification¹. Support for the language features is complete, with the following exceptions:

- subgraph: Due to the lack of a subgraph concept in Rocs, subgraphs are imported only as a set of data elements and connections. In particular, connections to or from subgraphs are not imported.
- HTML and XML attributes: Attributes (like labels) that contain HTML or XML syntax are read unchanged. In particular, no adjustments of fonts and styles are derived from those attributes.

4.1.1.2.2 Example

```
digraph myGraph {
    a -> b -> c;
    b -> d;
}
```

¹http://www.graphviz.org/content/dot-language
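To make the TGF specification of Section 4.1.1.1 concrete, here is a minimal reader sketch in Python. Rocs itself does not expose such an API; the function name and the returned representation are purely illustrative.

```python
# Minimal Trivial Graph Format (TGF) reader: node lines, a "#" line,
# then edge lines, as specified in Section 4.1.1.1.1.
def read_tgf(text):
    nodes, edges = {}, []
    lines = iter(text.splitlines())
    for line in lines:                    # node section
        if line.strip() == "#":
            break
        ident, _, label = line.partition(" ")
        nodes[ident] = label
    for line in lines:                    # edge section
        if not line.strip():
            continue
        src, dst, *rest = line.split(" ", 2)
        edges.append((src, dst, rest[0] if rest else ""))
    return nodes, edges

sample = "1 starting node\n2 transmitter\n3 sink\n#\n1 2 blue\n2 1 red\n2 3 green"
print(read_tgf(sample))
# ({'1': 'starting node', '2': 'transmitter', '3': 'sink'},
#  [('1', '2', 'blue'), ('2', '1', 'red'), ('2', '3', 'green')])
```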
Chapter 5
Graph Layout

5.1 Laying out graphs automatically in Rocs

Rocs can lay out graphs automatically. The graph layout tool can be found in the main menu at Graph Document → Tools → Graph Layout. Two layout algorithms can be applied: Force Based Layout and Radial Tree Layout. To apply one of them, select the corresponding tab of the graph layout tool, choose the desired parameters, and execute the algorithm by clicking the OK button. Details specific to each layout algorithm are provided in the next sections.

5.1.1 Force Based Layout

The Force Based Layout can be applied to any graph. Intuitively, this algorithm simulates forces acting on each node: there are repelling forces between pairs of nodes and attraction forces between pairs of nodes that are neighbours. The magnitude of these forces can be specified by moving the corresponding sliders in the user interface. Another parameter that can be controlled is the Area Factor, which controls how the nodes are spread out. Layouts generated with high values of the Area Factor tend to have large distances between nodes.

5.1.2 Radial Tree Layout

The Radial Tree Layout can only be applied to trees; any attempt to apply this algorithm to other kinds of graphs produces an error message. Parameters for the Radial Tree Layout can be selected in the provided user interface. The tree type parameter selects between a free tree layout and a rooted tree layout. In a free tree layout, nodes are placed freely without any apparent hierarchy between them. In a rooted tree layout, the root node is placed at the top and sub-trees are laid out below it, giving an idea of the hierarchy between nodes. The center/root parameter defines which node is used as the root of a rooted tree layout or as the center of a free tree layout. The center of a free tree layout is the first node placed by the algorithm; all other nodes are placed on circles centered at this node. A center/root can also be selected automatically by the layout algorithm. The node separation parameter controls the distance between nodes: increasing its value increases the distance between nodes, and decreasing it decreases the distance.

Chapter 6
Credits and License

Rocs

Program copyright:

• Copyright 2008 Ugo Sangiori (ugorox AT gmail.com)
• Copyright 2008-2012 Tomaz Canabrava (tcanabrava AT kde.org)
• Copyright 2008-2012 Wagner Reck (wagner.reck AT gmail.com)
• Copyright 2011-2015 Andreas Cord-Landwehr (cordlandwehr AT kde.org)

Documentation copyright:

• Documentation copyright 2009 Anne-Marie Mahfouf (annma AT kde.org)
• Documentation copyright 2009 Tomaz Canabrava (tcanabrava AT kde.org)
• Documentation copyright 2011-2015 Andreas Cord-Landwehr (cordlandwehr AT kde.org)

This documentation is licensed under the terms of the GNU Free Documentation License. This program is licensed under the terms of the GNU General Public License.
{"Source-Url": "https://docs.kde.org/stable5/en/rocs/rocs/rocs.pdf", "len_cl100k_base": 4352, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 29848, "total-output-tokens": 5146, "length": "2e12", "weborganizer": {"__label__adult": 0.00026416778564453125, "__label__art_design": 0.0010824203491210938, "__label__crime_law": 0.00027561187744140625, "__label__education_jobs": 0.002307891845703125, "__label__entertainment": 0.00015401840209960938, "__label__fashion_beauty": 0.0001367330551147461, "__label__finance_business": 0.00018405914306640625, "__label__food_dining": 0.00023317337036132812, "__label__games": 0.0008530616760253906, "__label__hardware": 0.0008916854858398438, "__label__health": 0.00024271011352539065, "__label__history": 0.00029206275939941406, "__label__home_hobbies": 0.00012218952178955078, "__label__industrial": 0.0003256797790527344, "__label__literature": 0.00032639503479003906, "__label__politics": 0.00017881393432617188, "__label__religion": 0.00045013427734375, "__label__science_tech": 0.037353515625, "__label__social_life": 0.00015842914581298828, "__label__software": 0.0518798828125, "__label__software_dev": 0.90185546875, "__label__sports_fitness": 0.0002211332321166992, "__label__transportation": 0.00023806095123291016, "__label__travel": 0.0001552104949951172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20226, 0.04016]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20226, 0.47267]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20226, 0.81282]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 2601, false], [2601, 2741, null], [2741, 2787, null], [2787, 3933, null], [3933, 5397, null], [5397, 8013, null], [8013, 9320, null], [9320, 10103, null], [10103, 12795, null], [12795, 14089, null], [14089, 14415, null], [14415, 15925, null], [15925, 17146, null], [17146, 18088, null], [18088, 19516, null], [19516, 20226, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 2601, true], [2601, 2741, null], [2741, 2787, null], [2787, 3933, null], [3933, 5397, null], [5397, 8013, null], [8013, 9320, null], [9320, 10103, null], [10103, 12795, null], [12795, 14089, null], [14089, 14415, null], [14415, 15925, null], [15925, 17146, null], [17146, 18088, null], [18088, 19516, null], [19516, 20226, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20226, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20226, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 2601, 3], [2601, 2741, 4], [2741, 2787, 5], [2787, 3933, 6], [3933, 5397, 7], [5397, 8013, 8], [8013, 9320, 9], 
[9320, 10103, 10], [10103, 12795, 11], [12795, 14089, 12], [14089, 14415, 13], [14415, 15925, 14], [15925, 17146, 15], [17146, 18088, 16], [18088, 19516, 17], [19516, 20226, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20226, 0.0]]}
Requirements Traceability

prepared by Michael Morckos
ID: 20363329
Electrical and Computer Engineering
April 4, 2011

1.0 Introduction

In software engineering, a requirement is a singular documented need of what a particular software system should be or perform. Requirements constitute the most important part of the software engineering process of a system, since they are the statements that identify the necessary attributes, capabilities, characteristics, and qualities of the system in order for it to have value and utility to a user [1]. A properly stated and formatted requirement is of vital importance to the development process of a software system, since it ensures an efficient, timely, and cost-effective engineering and development process. Some of the attributes of a good requirement are:

- Unitary. The requirement addresses one thing specifically.
- Complete. The requirement is fully stated with no missing information.
- Consistent. The requirement does not contradict any other requirement and is fully consistent with all authoritative external documentation.
- Current. The requirement has not been made obsolete by the passage of time.
- Feasible. The requirement can be implemented within the constraints of the project.
- Unambiguous. The requirement is concisely stated.
- Traceable. The requirement is authoritatively documented and meets all or part of a business need as stated by stakeholders.

Requirements traceability is an important sub-discipline of requirements management within software development and systems engineering, concerned with documenting the life of a requirement and providing bi-directional traceability between associated requirements and other artifacts in the development process. Tracing a requirement means identifying all parts of the software product that were conceived from it, as well as being able to trace back from the products to the requirements [2].

This report provides a brief overview of requirements traceability. The second section presents definitions of requirements traceability and the motivations for having traceability in the software development process. The third section reviews the methods of requirements traceability and how traceability is implemented in the software development process. The fourth section provides a brief overview of the benefits and feasibility of requirements traceability. The fifth and final section concludes the report.

2.0 Requirements Traceability

2.1 Definition

According to [3], traceability as a general term is the "ability to chronologically interrelate the uniquely identifiable entities in a way that matters." The word "chronologically" here reflects the use of the term in the context of tracking an object from its source of creation until it reaches its destination. The IEEE Standard Glossary of Software Engineering Terminology defines traceability as "the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another." [4] In software engineering, different sources define requirements traceability from different points of view.
According to [1], requirements traceability is "the ability to describe and follow the life of a requirement, in both forward and backward directions." In addition, [5] states that requirements traceability is "the ability to define, capture and follow the traces left by requirements on other elements of the software development process and the trace left by those elements on requirements." Both [1] and [5] give conceptual definitions of requirements traceability. While [1] emphasizes tracking requirements through all phases of development, [5] states that traceability involves other development artifacts as well, emphasizing the relationship between requirements and artifacts such as specification statements, designs, models, and developed components. A practical definition is supplied by [6], which states that requirements traceability is an intensive, iterative process that involves labeling each requirement in a unique and persistent manner so it can be unambiguously referred to throughout the development cycle.

Requirements traceability is a vital task in the design and development of software systems: it allows tracing a requirement from the instant it is conceived until it is implemented in running code as a specification. Requirements traceability is one of the characteristics of excellent requirements specifications [6].

2.2 Motivation for tracing requirements

The motivation for tracing requirements comes from the rapidly growing complexity of software systems and the constant need for rapid evolution and upgrade. Based on a real-life situation mentioned in [6], missing a requirement in the final product can consume both time and money, especially if the requirement concerns safety-critical functionality of the system or if the customer is not satisfied with the final system. Basically, requirements tracing provides a way to demonstrate compliance with a contract, specification, or regulation. At a more advanced level, requirements tracing can improve the quality of products, reduce maintenance costs, and facilitate reuse. Requirements traceability is intended to ensure continued alignment between stakeholder requirements and system evolution [7]. The main purposes of requirements traceability are to facilitate the following:

- Understanding the software under development and its artifacts.
- The ability to manage changes effectively.
- Maintaining consistency between the software and the environment in which the product operates [2].

3.0 Methods of Requirements Traceability

3.1 Traceability links

Traceability links are used to track the relationship between each unique requirement and its source. For instance, a requirement might trace from a business need, a stakeholder, a business rule, an external interface specification, an industry standard or regulation, or some other source. In addition, traceability links are used to track the relationship between each unique requirement and the work products to which that requirement is allocated. For example, a single requirement might trace to one or more architectural elements, detailed design elements, objects/classes, code units, tests, user documentation, and even to people or manual processes that implement the requirement [8]. Good traceability practice allows for bi-directional traceability. According to [6, 7, 9], and as illustrated in Figure 1, there are four types of traceability links that constitute bi-directional traceability.
Figure 1: Bi-directional traceability links.

- **Forward to requirements.** Maps requirement sources/stakeholder needs to the requirements, which helps to directly track down the requirements affected by potential changes in sources or needs. This also ensures that the requirements cover all stated needs.
- **Backward from requirements.** Helps to identify the origin of each requirement and verify that the system meets the needs of the stakeholders.
- **Forward from requirements.** As requirements develop and evolve into products, a product can be traced from its requirements. Forward traceability ensures the proper direction of the evolving product and indicates the completeness of the subsequent implementation. For example, if a requirement cannot be traced forward to one or more products, then the product requirements specification is incomplete and the resulting product may not meet the needs of the business [8].
- **Backward to requirements.** This link traces specific product elements backward to requirements. Backward traceability can justify the need for and existence of a component and verify that the requirements have been kept current with the design, code, and tests [8]. Moreover, it helps to verify that no "gold plating" has been done, i.e., the addition of features for which no requirements exist.

Bi-directional traceability links give the ability to analyze the impact of changes: all products affected by a change in requirements, and all requirements affected by a change or defect in products, can be identified. Moreover, they provide a continuous assessment of the current status of the requirements and products by identifying missing requirements. In addition, Figure 2 summarizes many kinds of direct traceability relationships that can be defined in a project. A project may not implement all kinds of traceability relationships; however, the choice of traceability relationships is a major contributor to the success and efficient maintainability of the system under development.

Figure 2: Some possible traceability relationships [6].

3.2 Traceability matrices

A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents requiring a many-to-many relationship, in order to determine the completeness of the relationship. In software engineering, the Requirements Traceability Matrix (RTM) is a classical tool that helps ensure that the project's scope, requirements, and deliverables remain "as is" when compared to the baseline [10]. Thus, it "traces" the deliverables by establishing a thread for each requirement, from the project's initiation to the final implementation. A traceability matrix summarizes, in table form, the traceability from the originally identified stakeholder needs to their associated product requirements, and then on to other work-product elements. In a traceability matrix, each requirement source, each requirement, and each work product must have a unique identifier that can be used as a reference.
<table> <thead> <tr> <th>User Requirement</th> <th>Functional Requirement</th> <th>Design Element</th> <th>Code Module</th> <th>Test Case</th> </tr> </thead> <tbody> <tr> <td>UC-28</td> <td>catalog.query.sort</td> <td>Class catalog</td> <td>catalog.sort()</td> <td>search.7</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td>search.9</td> </tr> <tr> <td>UC-29</td> <td>catalog.query.import</td> <td>Class catalog</td> <td>catalog.import()</td> <td>search.12</td> </tr> <tr> <td></td> <td></td> <td></td> <td>catalog.validate()</td> <td>search.13</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td>search.14</td> </tr> </tbody> </table>

Table 1: Portion of a single traceability matrix [6].

Table 1 illustrates a portion of a traceability matrix. Each functional requirement is linked backward to a specific use case and forward to one or more design, code, and test modules. As the project gets bigger and more complex, more columns are added to extend the links to other work products. Including more traceability detail takes more work, but it pinpoints the precise locations of the related software elements, which can save time during change impact analysis and maintenance. An important feature of traceability matrices is that information is added as the work gets done, not as it gets planned. For instance, "catalog.import()" in Table 1 is added only once the code in that function has been written, tested, and integrated into the project's code base. This type of traceability matrix can accommodate one-to-one, one-to-many, or many-to-many relationships between system elements by having several items in a single table cell.

It is impossible to perform requirements tracing manually for large and complex projects. Table 2 illustrates a two-way traceability matrix, which, unlike Table 1, can easily be managed by automated traceability tools. In the table, each cell at the intersection of two linked components is marked to indicate the connection. Different symbols can be used in the cells to explicitly indicate "traced-to" and "traced-from" or other relationships. In addition, some tools automatically flag a link as suspect (visually, using a red flag or a diagonal red line) whenever the entity on either end of the link is modified. For instance, as shown in Table 2, a change in UC 2.1 will trigger the flags, indicating that requirements 1.1.3 and 1.1.4 need to be inspected.

<table> <thead> <tr> <th>Requirement Identifier</th> <th>Reqs Tested</th> <th>REQ UC 1.1</th> <th>REQ UC 1.2</th> <th>REQ UC 1.2.1</th> <th>REQ UC 2.1</th> <th>..</th> <th>REQ UC 4.3.2</th> <th>..</th> </tr> </thead> <tbody> <tr> <td>Test Cases</td> <td>321</td> <td>3</td> <td>2</td> <td>3</td> <td>1</td> <td>..</td> <td>4</td> <td>..</td> </tr> <tr> <td>Tested Implicitly</td> <td>77</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>1.1.1</td> <td>1</td> <td>x</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>1.1.2</td> <td>2</td> <td>x</td> <td>x</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>1.1.3</td> <td>2</td> <td>x</td> <td>x</td> <td>?</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>1.1.4</td> <td>1</td> <td>x</td> <td>?</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>..</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>6.2.1</td> <td>2</td> <td>x</td> <td>x</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>..</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table>

Table 2: Portion of a two-way traceability matrix.
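As a minimal illustration of how a tool might store such a matrix and flag suspect links, here is a short sketch. The data structures are hypothetical (not taken from any particular tool), and the IDs are examples borrowed from Table 1.

```python
# Sketch of a tiny two-way traceability store with suspect-link flagging
# and simple coverage analysis.
from collections import defaultdict

class TraceMatrix:
    def __init__(self):
        self.forward = defaultdict(set)    # requirement -> work products
        self.backward = defaultdict(set)   # work product -> requirements
        self.suspect = set()               # links needing re-inspection

    def link(self, req, artifact):
        self.forward[req].add(artifact)
        self.backward[artifact].add(req)

    def touch(self, item):
        """A change to 'item' marks every link on either end as suspect."""
        for artifact in self.forward.get(item, ()):
            self.suspect.add((item, artifact))
        for req in self.backward.get(item, ()):
            self.suspect.add((req, item))

    def uncovered(self, requirements):
        """Coverage analysis: requirements with no forward links at all."""
        return [r for r in requirements if not self.forward.get(r)]

m = TraceMatrix()
m.link("UC-28", "catalog.sort()")
m.link("UC-28", "search.7")
m.link("UC-29", "catalog.import()")
m.touch("UC-29")                                  # UC-29 was modified
print(m.suspect)                                  # {('UC-29', 'catalog.import()')}
print(m.uncovered(["UC-28", "UC-29", "UC-30"]))   # ['UC-30']
```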
3.3 Traceability and non-functional requirements

Non-functional requirements, such as performance goals and quality attributes, do not always trace directly into code [5]. For instance, a portability requirement could restrict the language features that programmers use but might not result in specific code segments that enable portability. Another example is integrity requirements for user authentication, which lead to derived functional requirements implemented through password or biometrics functionality. In such cases, the functional requirements are traced backward to their parent non-functional requirements and forward to products as usual.

3.4 Requirements traceability procedure

Requirements traceability for a project can be implemented sequentially as follows [6]:

- Define the required relationships, depending on the project, from the possibilities illustrated in Figure 2. As mentioned earlier, the choice of relationships is crucial to the project's success and maintainability.
- Identify the parts of the product for which traceability information is to be maintained. These parts could be critical core functionality and/or parts expected to undergo the most maintenance and evolution over the product's life.
- Choose the type of traceability matrix to use.
- Define the tagging conventions that will be used to uniquely identify all requirements and system elements so that they can be linked together consistently.
- Identify the key individuals who will supply each type of link information, and the personnel who will coordinate the traceability activities and manage the data.
- Educate the team about the concepts and importance of requirements tracing, enriching the sense of importance and responsibility among team members and stressing the need for ongoing creation of detailed traceability data.
- Audit the traceability information periodically to make sure it is being kept current.

4.0 Advantages and Necessity of Requirements Traceability

4.1 Advantages of requirements traceability

There are many benefits to establishing requirements tracing within a software development lifecycle.
Coverage analysis becomes easier to perform: since all requirements are traced to higher-level sources, it can be verified that every requirement is satisfied [7]. Better designs result from requirements tracing, as the best designs are possible when complete information is available to the architect. Less code rework is another benefit, as tracing enables the team to catch potential problems earlier in the process. Moreover, change management is improved: when a requirement is changed, the entire "trace" can be reviewed for its impact on the application. [12] states that all of these benefits ultimately result in a shorter development cycle and reduced costs. Regarding the maintenance of a system, [13] states that requirements traceability is a prerequisite for effective system maintenance and consistent change integration.

Not implementing traceability can have negative effects. It can lead to a decrease in system quality, cause revisions, and thereby increase project costs and time. It can result in a loss of knowledge if key individuals leave the project, and can lead to wrong decisions, misunderstandings, and miscommunication. The benefits of implementing requirements traceability can be summarized as follows.

- Certification. Traceability information can be used in product certification to demonstrate that all requirements were implemented.
- Tracking. Recording traceability data during development allows for an accurate record of the implementation status of planned functionality.
- Maintenance. Accurate traceability information facilitates making changes correctly and completely during maintenance, thus improving productivity.
- Re-engineering. Traceability information can be vital when requirements from an old system are reused in a new system.
- Reuse. Traceability information facilitates reusing product components by identifying components of related requirements and designs.
- Risk reduction. Documenting the component interconnections reduces the risk if a key team member with essential knowledge about the system leaves the project.
- Testing. In the testing phase, links between tests, requirements, and actual code can be used to track and identify components producing errors or unexpected behaviors. This can eliminate redundancy and save time.

4.2 Is requirements traceability always useful?

Despite the many benefits of requirements traceability, many project managers nowadays do not see a real necessity for traceability, for a number of reasons. Obvious disadvantages are that it takes extra time and effort during the project to keep up with the tracing of requirements, as well as the increased maintenance effort, especially in an evolving system that requires numerous changes for future releases [12]. However, perceiving this as a disadvantage is relative: some consider traceability worthwhile if the information captured is used in the future to save time and effort. Moreover, according to [11], "Establishing and maintaining requirements traceability is an expensive and politically sensitive endeavor. Developers are not exactly known for their love of documentation. Traceability should come as a side effect of their daily productive work rather than imposing additional bureaucracy." Therefore, when the traceability process is established, a certain amount of change management needs to be employed, especially with regard to developers, who need to be convinced that the minor daily effort of requirements tracing will pay off in the future.
A pitfall of traceability is the tendency not to know when to stop tracing. Some tracing may go so deep that the effort becomes counterproductive. However, setting a clear plan for the traceability process, and having the ability to scale back if the project timeline is cut, can counter this drawback.

5.0 Summary

This report gave a brief overview of requirements traceability, one of the milestones of requirements management in the software development process. Conceptual and practical definitions of requirements traceability were presented, as well as the motivation for using traceability in projects. Moreover, the report described the methods and tools of traceability, such as bi-directional traceability links and the traceability matrix. Finally, the report briefly stated the procedure for implementing traceability in software projects and discussed the advantages and necessity of requirements traceability.

References
{"Source-Url": "http://mikeymorckos.googlecode.com/files/Requirements-Traceability_Report_en.pdf", "len_cl100k_base": 4124, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 24477, "total-output-tokens": 4957, "length": "2e12", "weborganizer": {"__label__adult": 0.00026702880859375, "__label__art_design": 0.00020492076873779297, "__label__crime_law": 0.00029015541076660156, "__label__education_jobs": 0.0007348060607910156, "__label__entertainment": 3.36766242980957e-05, "__label__fashion_beauty": 0.00010854005813598631, "__label__finance_business": 0.0002313852310180664, "__label__food_dining": 0.00024056434631347656, "__label__games": 0.00037550926208496094, "__label__hardware": 0.0004627704620361328, "__label__health": 0.000278472900390625, "__label__history": 0.00012433528900146484, "__label__home_hobbies": 5.072355270385742e-05, "__label__industrial": 0.000247955322265625, "__label__literature": 0.00017392635345458984, "__label__politics": 0.00014400482177734375, "__label__religion": 0.0002868175506591797, "__label__science_tech": 0.005527496337890625, "__label__social_life": 6.61015510559082e-05, "__label__software": 0.005573272705078125, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.00020599365234375, "__label__transportation": 0.000278472900390625, "__label__travel": 0.00013399124145507812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22173, 0.03404]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22173, 0.46348]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22173, 0.92022]], "google_gemma-3-12b-it_contains_pii": [[0, 120, false], [120, 120, null], [120, 2480, null], [2480, 4630, null], [4630, 5771, null], [5771, 7693, null], [7693, 8855, null], [8855, 11485, null], [11485, 14273, null], [14273, 15578, null], [15578, 18107, null], [18107, 19649, null], [19649, 20263, null], [20263, 22173, null]], "google_gemma-3-12b-it_is_public_document": [[0, 120, true], [120, 120, null], [120, 2480, null], [2480, 4630, null], [4630, 5771, null], [5771, 7693, null], [7693, 8855, null], [8855, 11485, null], [11485, 14273, null], [14273, 15578, null], [15578, 18107, null], [18107, 19649, null], [19649, 20263, null], [20263, 22173, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22173, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22173, null]], "pdf_page_numbers": [[0, 120, 1], [120, 120, 2], [120, 2480, 3], [2480, 4630, 4], [4630, 5771, 5], [5771, 7693, 6], [7693, 8855, 7], [8855, 11485, 8], [11485, 14273, 9], [14273, 15578, 10], [15578, 18107, 11], [18107, 19649, 12], [19649, 20263, 13], [20263, 22173, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22173, 0.16822]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
c6c7c64ffbeac82dde0102e981c3024b056c4afd
Effectively Updatable Conjunctive Views (Extended Abstract)

Enrico Franconi and Paolo Guagliardo

KRDB Research Centre, Free University of Bozen-Bolzano, Italy {franconi,guagliardo}@inf.unibz.it

1 Introduction

The view update problem [1] consists in finding suitable ways of consistently and univocally propagating the changes introduced into a set of view relations to the underlying database relations over which the view relations are defined. This can be formalised within a general framework [1,7] where a view is a function that associates instances of a database schema with instances of a view schema, and which has a constructive characterisation when each view symbol is defined in terms of the database symbols in a concrete query language. A view is updatable if the changes introduced into the view relations by updates can be unambiguously propagated back to the underlying database relations over which the view relations are defined. This is possible whenever the view is invertible, but invertibility in itself is not enough for practical purposes, as it merely indicates that the database instances functionally depend on the corresponding view instances. What is actually needed is a constructive characterisation of the inverse, obtained by finding an exact rewriting of each database symbol in terms of the view symbols, expressed in a query language that is not necessarily the same as the one used for defining the view symbols.

In the context of relational databases, the study of the invertibility of views has focused on very restricted settings. Cosmadakis and Papadimitriou [2] limit their investigation to only two view relations defined by projections over a single database relation; this was recently generalised in [7] to an arbitrary number of view relations defined by acyclic projections, but still over a database consisting of a single relation. Moreover, the set of integrity constraints considered on the database schema has been limited to functional and join dependencies in [2] and full embedded dependencies in [7].

In this paper, we consider a setting where the view symbols are defined by conjunctive queries (CQs) and we show that, when the integrity constraints on the database schema are stratified embedded dependencies [3], a view is invertible precisely if each database symbol has an exact rewriting given by a CQ over the view schema. We then discuss how such rewritings can be effectively found using the Chase & Backchase algorithm [4].
As a special case, in combination with the general criterion for the translatability of view updates we introduced in [7], our results settle the long-standing open issue pointed out in [2] of how to solve the view update problem for a multi-relational database with view relations that are projections of joins of the database relations.

Outline. The rest of the paper is organised as follows: after some preliminaries in Sec. 2, in Sec. 3 we recall and summarise the framework introduced in [7]; in Sec. 4 we show how view updatability can be checked, and rewritings effectively found, when the view symbols are defined by CQs in a multi-relational database under stratified embedded dependencies; we conclude in Sec. 5 by pointing out future research directions.

2 Preliminaries

A schema is a finite set of relation symbols. Let dom be an arbitrary (possibly infinite) set of domain values. An instance \( I \) of a schema \( \mathcal{S} \) maps each relation symbol \( S \) in \( \mathcal{S} \) to a relation \( S^I \) on dom of appropriate arity, called the extension of \( S \) under \( I \). The set of elements of dom that occur in an instance \( I \) is called the active domain of \( I \) and is denoted by \( \mathrm{adom}(I) \). An instance is finite when its active domain is, and we always assume instances to be finite.

We consider a database schema \( \mathcal{R} \) of database symbols and a view schema \( \mathcal{V} \) of view symbols not occurring in \( \mathcal{R} \). A database state is an instance \( I_R \) of \( \mathcal{R} \) and a view state is an instance \( I_V \) of \( \mathcal{V} \). The set of all database states (resp., view states) is denoted by \( \mathbf{R} \) (resp., \( \mathbf{V} \)). The disjoint union \( I_R \uplus I_V \) of a database state \( I_R \) and a view state \( I_V \) is the instance of the global schema \( \mathcal{R} \cup \mathcal{V} \), called a global state, whose active domain is \( \mathrm{adom}(I_R) \cup \mathrm{adom}(I_V) \) and which associates each relation symbol \( S \) in \( \mathcal{R} \cup \mathcal{V} \) with \( S^{I_R} \) if \( S \in \mathcal{R} \) and with \( S^{I_V} \) otherwise.

We consider a satisfiable (finite) set \( \Sigma \) of global constraints over \( \mathcal{R} \cup \mathcal{V} \), consisting of a set \( \Sigma_\mathcal{R} \) of database constraints over \( \mathcal{R} \) and a set \( \Sigma_{\mathcal{RV}} \) of interschema constraints over \( \mathcal{R} \cup \mathcal{V} \). The set \( \Sigma_{\mathcal{RV}} \) consists of exactly one formula of the form \( \forall \bar{x}\, ( V(\bar{x}) \leftrightarrow \phi(\bar{x}) ) \) for each \( V \in \mathcal{V} \), where \( \phi(\bar{x}) \) mentions only database symbols and is called a definition of \( V \) in terms of \( \mathcal{R} \). Note that, as \( \mathcal{R} \) and \( \mathcal{V} \) are disjoint, every instance of \( \mathcal{R} \cup \mathcal{V} \) satisfying \( \Sigma \) has the form \( I_R \uplus I_V \), where \( I_R \) and \( I_V \) are a database state and a view state, respectively. A view state \( I_V \) (resp., database state \( I_R \)) is \( \Sigma \)-consistent (or globally consistent, or consistent with the global constraints) if there is a database state \( I_R \) (resp., view state \( I_V \)) such that \( I_R \uplus I_V \) satisfies the global constraints \( \Sigma \). Observe that every legal database state (i.e., one that satisfies the database constraints \( \Sigma_\mathcal{R} \)) is globally consistent. We denote the set of \( \Sigma \)-consistent view states (resp., database states) by \( \mathbf{V}_\Sigma \) (resp., \( \mathbf{R}_\Sigma \)).

We say that \( \mathcal{V} \) determines \( \mathcal{R} \) under \( \Sigma \) (written \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \)) if, for every \( I_V \) and all \( I_R, I'_R \), it is the case that \( I_R = I'_R \) whenever \( I_R \uplus I_V \models \Sigma \) and \( I'_R \uplus I_V \models \Sigma \).
In other words, models of \( \Sigma \) that agree on the extension of the view symbols also agree on the extension of the database symbols, which means that in every model of \( \Sigma \) the latter functionally depends on the former. Clearly, under the assumptions we made on the global constraints \( \Sigma \), it is always the case that \( \mathcal{R} \twoheadrightarrow_\Sigma \mathcal{V} \), and we refer to the corresponding functional mapping \( f : \mathbf{R}_\Sigma \to \mathbf{V}_\Sigma \), associating each \( \Sigma \)-consistent database state with a \( \Sigma \)-consistent view state, as the view from \( \mathcal{R} \) to \( \mathcal{V} \) induced by \( \Sigma \). In the rest of the paper, unless specified otherwise, whenever we say "a view" we refer to the (one and only) view from \( \mathcal{R} \) to \( \mathcal{V} \) induced by a set of global constraints \( \Sigma \) as above.

3 The View Update Framework

In this section we briefly recapitulate the general view update framework previously introduced in [7]. A view update is a function \( u : \mathbf{V} \to \mathbf{V} \) associating each view state with another, possibly the same. Given a view update that modifies the current view state, we want to modify the database state accordingly, so as to reflect exactly the changes introduced into the view state. For this to be possible in an unambiguous way, the view must be updatable, and the view update translatable, as we formally define next.

**Definition 1 (Updatability).** A view is updatable if \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \).

Note that a view in our sense is always surjective, and \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \) further implies injectivity [7], hence the notion of updatability coincides with invertibility.

**Definition 2 (Translatability).** Let \( u \) be a view update and let \( I_V \in \mathbf{V}_\Sigma \). We say that \( u \) is translatable on \( I_V \) if \( u(I_V) \in \mathbf{V}_\Sigma \).

A translatable view update leads to view states in the image of the view \( f \), which are therefore reachable from some database state by means of \( f \). In such a case, when the view is updatable, the changes introduced into the view state \( I_V \) by the view update \( u \), resulting in the updated view state \( I'_V = u(I_V) \), can be univocally pushed back by updating the database to the new state \( f^{-1}(I'_V) \).

However, the fact that \( \mathcal{V} \) determines \( \mathcal{R} \) under \( \Sigma \), even though it ensures that a view is invertible, does not actually provide a constructive characterisation of its inverse. In other words, \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \) guarantees that the extension of the database symbols functionally depends on that of the view symbols, but it says nothing about how the former is to be obtained from the latter. In order to effectively compute the inverse of a view, we must be able to explicitly express each database symbol \( R \in \mathcal{R} \) in terms of the view symbols \( \mathcal{V} \) by means of a formula \( \psi \), called an exact rewriting of \( R \) in terms of \( \mathcal{V} \) under \( \Sigma \), mentioning only view symbols and such that \( \Sigma \models \forall \bar{x}\, ( R(\bar{x}) \leftrightarrow \psi(\bar{x}) ) \).

**Definition 3.** A view is effectively updatable if each database symbol has an exact rewriting in terms of the view symbols under \( \Sigma \).

Whenever a view update results in a view state \( I_V \) in \( \mathbf{V}_\Sigma \), if a view is effectively updatable the changes can be propagated to the database state \( f^{-1}(I_V) \) by computing the extension of each database symbol from its rewriting in terms of the view symbols.
But at this point, the question is: how do we ascertain whether a view state belongs in fact to \( \mathbf{V}_\Sigma \)? The solution consists in constructing what we call the \( \mathcal{V} \)-embedding \( \widetilde{\Sigma}_\mathcal{V} \) of \( \Sigma \), which is obtained from \( \Sigma \) by replacing every occurrence of each \( R \in \mathcal{R} \) with its rewriting in terms of \( \mathcal{V} \). The resulting set of constraints mentions only view symbols and, as it turns out, is satisfied by all and only the view states in \( \mathbf{V}_\Sigma \). Thus, checking for the translatability of a view update amounts to checking whether the updated view state satisfies \( \widetilde{\Sigma}_\mathcal{V} \).

**Theorem 1 ([7]).** Let \( f \) be an effectively updatable view, let \( u \) be a view update, and let \( I_V \in \mathbf{V}_\Sigma \). Then, \( u \) is translatable on \( I_V \) if and only if \( u(I_V) \models \widetilde{\Sigma}_\mathcal{V} \).

Whether a view update \( u \) is translatable on a view state \( I_V \) satisfying \( \widetilde{\Sigma}_\mathcal{V} \) can be checked in polynomial time in the size of \( u(I_V) \), which is the data complexity of testing whether a finite relational structure is a model of a FOL theory. Summing up, for the above machinery to work it is essential to find a rewriting of each database symbol in terms of the view symbols, both in order to build the \( \mathcal{V} \)-embedding of \( \Sigma \) (and thus check for the translatability of updates) and to propagate the changes introduced by translatable view updates (by computing the extension of the database symbols from that of the view symbols). Note that checking whether a view is invertible, and eventually computing its inverse, is an operation performed once and for all offline.

4 Conjunctive Views

In this section, we settle the long-standing open issue pointed out in [2], namely how to solve the view update problem in a multi-relational database with views that are projections of joins of relations, and we do so in a more general setting where the view symbols are defined by CQs and the constraints on the database schema are embedded dependencies satisfying appropriate restrictions.

**Example 1.** Let \( \mathcal{R} = \{ R_1, R_2 \} \) and let \( \Sigma_\mathcal{R} \) consist of the following embedded dependencies (in this case, inclusion and functional dependencies; universal quantifiers are omitted):

\[
\begin{align*}
R_1(x, y, z) &\rightarrow \exists v, w\; R_2(z, v, w), \tag{1a} \\
R_2(z, v, w) &\rightarrow \exists x, y\; R_1(x, y, z), \tag{1b} \\
R_1(x, y, z) \land R_1(x', y, z') &\rightarrow z = z', \tag{1c} \\
R_2(z, v, w) \land R_2(z, v', w') &\rightarrow v = v'. \tag{1d}
\end{align*}
\]

Let \( \mathcal{V} = \{ V_1, V_2, V_3 \} \) and let \( \Sigma_{\mathcal{RV}} \) consist of the following view definitions:

\[
\begin{align*}
V_1(x, y) &\leftrightarrow \exists z\; R_1(x, y, z), \tag{2a} \\
V_2(y, z, v) &\leftrightarrow \exists x, w\; R_1(x, y, z) \land R_2(z, v, w), \tag{2b} \\
V_3(z, w) &\leftrightarrow \exists v\; R_2(z, v, w). \tag{2c}
\end{align*}
\]

The embedded dependencies we consider are required to satisfy the condition known as stratification [3], which is based on the notion of chase graph: the chase graph of a set of embedded dependencies \( \Sigma \) has the dependencies in \( \Sigma \) as nodes and, for \( \alpha, \beta \in \Sigma \), has an edge from \( \alpha \) to \( \beta \) if and only if, intuitively, firing \( \alpha \) may cause \( \beta \) to fire as well (refer to [3] for the formal definition). Then, \( \Sigma \) is stratified if the set of dependencies in every cycle of its chase graph is weakly acyclic [6,5]. Note that every weakly acyclic set of dependencies is also stratified.
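To make the notion of weak acyclicity concrete, the following Python sketch implements the standard position-graph test; the tuple encoding of TGDs is our own illustrative choice, and EGDs such as (1c) and (1d), which contribute no edges, are left out:

```python
def is_weakly_acyclic(tgds):
    """Weak acyclicity via the position graph: nodes are positions
    (relation, argument index). Each TGD adds a regular edge from every
    body position of a universal variable to each of its head positions,
    and a special edge from those body positions to the positions of the
    existential variables. Weakly acyclic iff no cycle uses a special edge."""
    regular, special = set(), set()
    for body, head, exist in tgds:  # atoms are (relation, (vars, ...)) pairs
        body_pos = {}
        for rel, args in body:
            for i, v in enumerate(args):
                body_pos.setdefault(v, set()).add((rel, i))
        head_univ = {v for _, args in head for v in args if v not in exist}
        for rel, args in head:
            for i, v in enumerate(args):
                if v in exist:
                    for u in head_univ:
                        for p in body_pos.get(u, ()):
                            special.add((p, (rel, i)))
                else:
                    for p in body_pos.get(v, ()):
                        regular.add((p, (rel, i)))
    closure = set(regular | special)
    while True:  # naive transitive closure, fine for small inputs
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return not any(a == b or (b, a) in closure for (a, b) in special)
        closure |= new

# Dependencies (1a) and (1b) from Example 1:
tgds = [
    ([("R1", ("x", "y", "z"))], [("R2", ("z", "v", "w"))], {"v", "w"}),  # (1a)
    ([("R2", ("z", "v", "w"))], [("R1", ("x", "y", "z"))], {"x", "y"}),  # (1b)
]
print(is_weakly_acyclic(tgds))  # True: (1a) and (1b) form a weakly acyclic set
```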
Indeed, \( \Sigma_\mathcal{R} \) of Example 1 is weakly acyclic and, in turn, stratified. Our main result establishes that, when the embedded dependencies over the database schema are stratified, the view is invertible precisely if each database symbol has an exact rewriting as a conjunctive query over the view schema.

**Theorem 2.** Let \( \Sigma = \Sigma_\mathcal{R} \cup \Sigma_{\mathcal{RV}} \), where \( \Sigma_\mathcal{R} \) consists of stratified embedded dependencies and each \( V \in \mathcal{V} \) is defined in \( \Sigma_{\mathcal{RV}} \) by a CQ. Then, \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \) if and only if each \( R \in \mathcal{R} \) has an exact CQ rewriting in terms of \( \mathcal{V} \) under \( \Sigma \).

The Chase & Backchase (C&B) [4] is an algorithm that enumerates the exact CQ rewritings of a CQ under constraints. More precisely, given two schemas \( \mathcal{S} \) and \( \mathcal{T} \), a set of embedded dependencies \( \Gamma \) over \( \mathcal{S} \cup \mathcal{T} \), and an input CQ \( q \) over \( \mathcal{S} \), the C&B outputs all the CQs over \( \mathcal{T} \) which are equivalent to \( q \) under \( \Gamma \). The C&B is sound and complete, in the sense that it returns all and only the CQs into which the input CQ can be rewritten (up to homomorphic equivalence) under the given constraints, whenever the chase is guaranteed to terminate, which is the case, e.g., for stratified sets of dependencies. Obviously, the fact that the output of the C&B is empty for \( q \) does not mean that \( q \) has no rewriting in terms of \( \mathcal{T} \) under \( \Gamma \), but simply that its rewriting, if any, is not a CQ.

We can use the C&B to look for the rewritings we are interested in. For each \( R \in \mathcal{R} \), consider the atomic query \( q(\bar{x}) = R(\bar{x}) \) and proceed as follows:

Chase. Chase \( q \) with \( \Sigma \) until no further chase step applies. The resulting query is the so-called universal plan \( U \).

Backchase. Every subquery of \( U \) over \( \mathcal{V} \) (i.e., a set of \( \mathcal{V} \)-atoms from \( U \) mentioning all of \( q \)'s free variables) is a candidate rewriting of \( q \). Chase each candidate \( q' \) with \( \Sigma \) step by step until no further chase step applies, and at each new step in the chase sequence check whether a containment mapping from the original query \( q \) can be found. If that is the case, then \( q' \) is a rewriting of \( q \).

The above is described in more detail in Algorithm 1 and illustrated in our running example.

Algorithm 1
INPUT: an atomic query \( q \) over \( \mathcal{R} \), a view schema \( \mathcal{V} \), a set of embedded dependencies \( \Sigma \) over \( \mathcal{R} \cup \mathcal{V} \).
OUTPUT: an exact CQ rewriting of \( q \) in terms of \( \mathcal{V} \) under \( \Sigma \), if any, or \( \perp \) otherwise.
1: function Rewrite(\( q, \mathcal{V}, \Sigma \))
2:   for each subquery \( q' \) of chase(\( q, \Sigma \)) over \( \mathcal{V} \) do
3:     repeat
4:       if there is a containment mapping from \( q \) to \( q' \), return \( q' \)
5:       set \( q' \) to chase-step(\( q', \Sigma \))
6:     until no further chase step applies
7:   return \( \perp \)
8: end function
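Line 4 of Algorithm 1 hinges on the test for a containment mapping, i.e., a variable mapping that is the identity on the free variables and sends every atom of the first query onto an atom of the second. A minimal, self-contained Python sketch of this test follows (the tuple-based CQ representation is our own choosing; the chase-step machinery is omitted):

```python
def containment_mapping(q1, q2):
    """Search for a containment mapping from CQ q1 to CQ q2: a variable
    mapping h that is the identity on the free (head) variables and sends
    every atom of q1 onto some atom of q2.
    A CQ is (free_vars, atoms); an atom is (relation, args)."""
    free1, atoms1 = q1
    _, atoms2 = q2

    def extend(h, pending):
        if not pending:
            return h
        (rel, args), *rest = pending
        for rel2, args2 in atoms2:
            if rel2 != rel or len(args2) != len(args):
                continue
            h2 = dict(h)
            ok = True
            for a, b in zip(args, args2):
                # map a to b unless a is already mapped to something else
                if h2.setdefault(a, b) != b:
                    ok = False
                    break
            if ok:
                result = extend(h2, rest)
                if result is not None:
                    return result
        return None  # backtrack

    return extend({v: v for v in free1}, list(atoms1))

# After the chase step that sets z' = z in Example 2 below, the candidate
# contains the atom R1(x, y, z), so the identity mapping is found:
q = (("x", "y", "z"), [("R1", ("x", "y", "z"))])
q2 = (("x", "y", "z"), [("V1", ("x", "y")), ("V2", ("y", "z", "v")),
                        ("R1", ("x", "y", "z")), ("R2", ("z", "v", "w"))])
print(containment_mapping(q, q2))  # {'x': 'x', 'y': 'y', 'z': 'z'}
```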
**Example 2.** Chasing the query \( q(x, y, z) = R_1(x, y, z) \) with \( \Sigma \) of Example 1 gives the following universal plan:

\[
U(x, y, z) = \exists v, w\; R_1(x, y, z) \land R_2(z, v, w) \land V_1(x, y) \land V_2(y, z, v) \land V_3(z, w).
\]

A candidate rewriting of \( R_1(x, y, z) \) in terms of \( \mathcal{V} \) is the subquery \( q'(x, y, z) = \exists v\; V_1(x, y) \land V_2(y, z, v) \), which, chased with the left-to-right TGDs from (2a) and (2b), yields the following:

\[
q''(x, y, z) = \exists v, z', x', w\; V_1(x, y) \land V_2(y, z, v) \land R_1(x, y, z') \land R_1(x', y, z) \land R_2(z, v, w).
\]

A further chase step with (1c) gives \( z' = z \), and therefore we can find a containment mapping (the identity) from the original query \( q \) to \( q'' \). Thus, the rewriting of \( R_1(x, y, z) \) is \( \exists v\; V_1(x, y) \land V_2(y, z, v) \). Similarly, \( R_2(z, v, w) \) can be rewritten in terms of \( \mathcal{V} \) as \( \exists y\; V_2(y, z, v) \land V_3(z, w) \).

As it turns out, the constraints \( \Sigma \) considered in Theorem 2 are stratified, and this ensures that if an exact CQ rewriting of \( R(\bar{x}) \) cannot be found by means of Algorithm 1, then \( R \) cannot be expressed at all in terms of the view symbols.

**Theorem 3.** Let \( \Sigma \) be as in Theorem 2. Then, the procedure Rewrite of Algorithm 1 is sound and complete for finding the exact rewriting of each database symbol in terms of the view symbols under \( \Sigma \). Moreover, \( \mathcal{V} \twoheadrightarrow_\Sigma \mathcal{R} \) if and only if \( \text{Rewrite}(q, \mathcal{V}, \Sigma) \neq \perp \) for every atomic query \( q \) over \( \mathcal{R} \).

The results presented above extend the setting of [7], consisting of only one database symbol, view symbols defined by acyclic projections and database constraints given by full dependencies, which is a special case where the rewriting is known to be the join, rather than a generic CQ.

5 Outlook

We conclude by pointing out and briefly discussing possible research directions that would be interesting to pursue and investigate further.

- Consider views defined by queries expressed in languages beyond CQs. A first natural candidate is the class of unions of conjunctive queries.
- Consider different constraints on the database schema. Several sufficient conditions for chase termination have been proposed, e.g., super-weak acyclicity [8], safety and inductive restriction [9], some of which extend stratification, while others are incomparable with it. The question is whether the global constraints satisfy such conditions, as is the case for stratification when views are defined by CQs and database constraints are stratified embedded dependencies.
- Consider constraints also on the view schema. We can allow a set of view constraints \( \Sigma_\mathcal{V} \) for which \( \Sigma_\mathcal{R} \cup \Sigma'_\mathcal{V} \) is a set of stratified embedded dependencies, where \( \Sigma'_\mathcal{V} \) is the set of constraints over \( \mathcal{R} \) obtained from \( \Sigma_\mathcal{V} \) by replacing every occurrence of each view symbol with its CQ definition (given in \( \Sigma_{\mathcal{RV}} \)) in terms of the database symbols. What is interesting to understand is the shape that such view constraints must have in order to satisfy the above condition.

References
{"Source-Url": "http://www.research.ed.ac.uk/portal/files/19428972/Franconi_Guagliardo_2013_Effectively_Updatable_Conjunctive_Views.pdf", "len_cl100k_base": 5171, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26597, "total-output-tokens": 6340, "length": "2e12", "weborganizer": {"__label__adult": 0.0005207061767578125, "__label__art_design": 0.0004582405090332031, "__label__crime_law": 0.0006895065307617188, "__label__education_jobs": 0.0021076202392578125, "__label__entertainment": 0.00011610984802246094, "__label__fashion_beauty": 0.00026917457580566406, "__label__finance_business": 0.0008230209350585938, "__label__food_dining": 0.0006213188171386719, "__label__games": 0.0008263587951660156, "__label__hardware": 0.0007543563842773438, "__label__health": 0.0015659332275390625, "__label__history": 0.0004940032958984375, "__label__home_hobbies": 0.0001971721649169922, "__label__industrial": 0.000743865966796875, "__label__literature": 0.0008950233459472656, "__label__politics": 0.0003418922424316406, "__label__religion": 0.0005869865417480469, "__label__science_tech": 0.181396484375, "__label__social_life": 0.0001823902130126953, "__label__software": 0.0170745849609375, "__label__software_dev": 0.78759765625, "__label__sports_fitness": 0.0003731250762939453, "__label__transportation": 0.0008740425109863281, "__label__travel": 0.00031948089599609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21657, 0.02292]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21657, 0.30455]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21657, 0.85171]], "google_gemma-3-12b-it_contains_pii": [[0, 1216, false], [1216, 4061, null], [4061, 7831, null], [7831, 11420, null], [11420, 14474, null], [14474, 17400, null], [17400, 20385, null], [20385, 21657, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1216, true], [1216, 4061, null], [4061, 7831, null], [7831, 11420, null], [11420, 14474, null], [14474, 17400, null], [17400, 20385, null], [20385, 21657, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21657, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21657, null]], "pdf_page_numbers": [[0, 1216, 1], [1216, 4061, 2], [4061, 7831, 3], [7831, 11420, 4], [11420, 14474, 5], [14474, 17400, 6], [17400, 20385, 7], [20385, 21657, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21657, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
a85167a8799aa1f705630b7ee259fb0d8195ec47
DEVELOPMENT OF LOCATION-AWARE APPLICATIONS
The Nidaros framework

Alf Inge Wang¹, Carl-Fredrik Sørensen¹, Steinar Brede², Hege Servold³, and Sigurd Gimre⁴

¹Dept. of Computer and Information Science, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway, ²Telenor R&D, NO-7004 Trondheim, Norway, ³Bekk Consulting AS, NO-0102 Oslo, Norway, ⁴CIBER Norge AS, NO-0103 Oslo, Norway

Abstract: This paper presents the Nidaros framework for developing location-aware applications that provide location-dependent functionality based on the current location of the user. The framework can be used to develop location-dependent advertisement, city guides, guides for tourist attractions, etc. The framework consists of three main components: a runtime system that manages user locations and the interaction with the user clients; a creator tool that is used to map information and multimedia content to locations; and a logging tool that is used to log the movement of users to monitor the interest in certain locations. The paper also describes an implementation of a location-aware tour guide for the Nidaros Cathedral in Trondheim that can run on different mobile devices. Further, the paper describes experiences from installing, configuring, and running a location-aware tour guide in a real environment. A demonstration of the tour guide was tested on PDAs and mobile phones.

Keywords: Context/location, case studies and experience, applications, tour guide

1. INTRODUCTION

In 2005, it is estimated that there will be more than 1.5 billion wireless subscribers worldwide. Although mobile computing poses challenges to the application developer, such as handling wireless networks, heterogeneity, limited screen size, input devices, CPU, memory and battery (Satyanarayanan, 1996), it opens possibilities for new types of applications. One such type is location-aware applications.

Location-aware applications are useful for several reasons. Firstly, the limited screen size of mobile devices can be better utilized by providing user interfaces that are related to the context of the user: by using the location, the parts that are not relevant to the current location can be left out. Secondly, the user experience can be improved by providing the user with information and interaction that are relevant to the current context. Here the context is not necessarily only location, but can also be time, weather, temperature, altitude, etc. Thirdly, a system can collect context information from users to further improve the location-aware system. For instance, in a tour guide system for an art gallery, the system can log which paintings the visitors spend the most time in front of. This information can be used to publish additional information about the most popular paintings in the location-aware guide system.

Several location-aware systems for tour guiding have been developed, such as the Lancaster GUIDE (Cheverst et al., 2000), CyberGuide (Abowd et al., 1997) and MUSE (Garzotto et al., 2003), but most of these systems are tailored for a specific location-aware application. In 2003, the Norwegian University of Science and Technology (NTNU), together with Telenor, the largest telecom company in Norway, started to develop a general framework for creating, running, and analysing mobile location-aware systems. The motivation for this work was to enable rapid development of location-aware systems that can provide the user with information or multimedia content dependent on the user location.
Another important aspect of the framework was to support different mobile clients with different characteristics from the same system. From similar projects, we have found that location-aware systems use various client devices, from rather big portable PCs down to small PDAs (Sørensen et al., 2003). We also noticed that some location-aware systems use customized hardware to get the required characteristics. In addition, the evolution of mobile devices makes it necessary to be able to adapt to future devices with new and useful capabilities. From talking with people managing a PDA-based tour guide (not location-aware) at the Nidaros Cathedral, we understood that theft was a serious challenge. Letting people use their own mobile equipment (like mobile phones) for such services was therefore found to be very interesting.

Another shortcoming of many existing systems is that they are tailored to support only one type of positioning technology, like GPS, GSM, Bluetooth, IR or WLAN positioning. In the Nidaros framework, we used a location server to fetch the user positions from various sources and to send this information back to the system when needed. The location server examines the position technologies available in order to return the most accurate position.

Figure 1 shows an overview of the Nidaros framework for development of location-aware applications. The framework covers development and operation of a location-aware system. The development phase is supported in the framework by a creator tool to add and map location-dependent content, an XML client-server interface, and templates for creating clients. A runtime system and a statistics tool support the operation phase. In addition, the framework makes use of a location system database and a location server.

2. THE FRAMEWORK

This section presents the main components in the Nidaros framework.

2.1 The Development Phase

The development phase of the framework is supported by three components: a creator tool, client templates, and the XML interface between client and server.

The Creator Tool

The creator tool is a system for managing information and multimedia content related to locations that are to be part of a location-aware application. The tool provides a simple user interface that facilitates describing areas hierarchically as a tree structure of maps, zones and objects. A map represents the whole area of the location-aware application. A map can be divided into one or more zones that represent specific areas in the map. Within a zone, you can add several objects. These data objects typically represent physical objects and may contain information, audio clips, video clips and the like.

The maps, zones and objects are mapped to a local Cartesian coordinate system where the x- and y-axes are represented with 32-bit integers. The coordinate system can be mapped to various geographical positioning systems and makes the framework independent of the actual coordinate systems. By using Cartesian coordinates in the application, it is easy to visualize maps and objects on the screen of the client device. In the local coordinate system, a map is represented as a rectangle. A zone is represented as a polygon of four coordinates. An object is represented by a specific coordinate and by a hotspot area represented by a polygon similar to a zone. The hotspot area is used to determine if a user is close enough to an object to trigger a location-aware event.
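The framework's own hotspot-detection code is not published; purely as an illustration (the function, the object name and the coordinates below are invented), a hotspot test over such polygons can be as simple as a ray-casting point-in-polygon check:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon given as a
    list of (x, y) vertices in the local Cartesian coordinate system?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# A hypothetical object with a four-vertex hotspot polygon, as produced by
# the creator tool; a location-aware event fires when the user enters it.
pipe_organ_hotspot = [(100, 100), (180, 100), (180, 160), (100, 160)]
print(point_in_polygon(140, 130, pipe_organ_hotspot))  # True: trigger event
print(point_in_polygon(50, 50, pipe_organ_hotspot))    # False
```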
In the tour guide application we developed for the Nidaros Cathedral (described in Section 3), the map represented the whole cathedral, the zones represented the main parts of the cathedral, and the objects represented physical objects like altars, the pipe organ, paintings, etc.

The creator tool offers a user interface for specifying how various devices like laptops, PDAs and mobile phones are mapped to the local coordinate system. The mapping identifies the type of device by name, the size of the device, an offset coordinate \((X_{\text{offset}}, Y_{\text{offset}})\), a rotation coordinate \((X_{\text{rotation}}, Y_{\text{rotation}})\) and a rotation angle \(R_{\text{angle}}\). These data are used to compute the transformation from real-world coordinates to the local coordinate system. The creator tool can also graphically visualize the results of using the tool. The visualization shows the map you have created with named zones and objects. The zones and the objects are shown in different colours, and hotspot areas are shown for the objects.

The Client Templates

The framework provides a template to reduce the time needed to implement clients for location-aware applications. The template is intended for clients that run Macromedia Flash applications. Flash makes it possible to build advanced and dynamic interfaces, and can run on various operating systems like MS Windows, Mac OS, Linux, Unix and Pocket PC. The Flash player is also capable of audio and video playback. The template offers the basic functionality needed to implement location-aware clients, including server communication using XML, graphical highlighting of zones and objects, management of hotspots (event-triggered actions), and initialization of multimedia playback. When the template is used, the developer only has to design the user interfaces and possibly additional functionality if wanted. The template makes the implementation of clients easier, and possible misunderstandings of the XML interface with the server are avoided.

The XML Interfaces

The Nidaros framework provides an open XML interface to the runtime system that makes it possible to create clients for different devices using different technologies like Flash, Java 2 Micro Edition, HTML, the .NET Compact Framework, etc. The only requirement is that the client is capable of managing XML communication and adheres to the specified XML interface. The client application can send a request to the runtime system in several different ways, but it has to follow a predefined XML syntax. The response will likewise follow a predefined syntax. If the client application requires downloading of multimedia content, this is done by an HTTP request to a file server (see Section 2.2).

The root element in every XML request is a locationRequest. This root element can contain several different elements depending on what kind of information is wanted. Every locationRequest must contain an element called mac that holds the MAC address of the client device. The runtime server needs the MAC address to identify the user and find the user's position.
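Purely as an illustration, such a request might be assembled as follows; the locationRequest root and the mandatory mac element are specified above, while the remaining element spellings and the MAC value are our assumptions based on the request types described next:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical locationRequest asking for the user's current
# position and for the objects whose hotspot area the user is inside.
request = ET.Element("locationRequest")
ET.SubElement(request, "mac").text = "00:11:22:33:44:55"  # invented MAC
ET.SubElement(request, "position")
ET.SubElement(request, "dynamicInfo")

print(ET.tostring(request, encoding="unicode"))
# <locationRequest><mac>00:11:22:33:44:55</mac><position /><dynamicInfo /></locationRequest>
```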
The following information can be requested from the server: position returns the current position of the user; simulatedPosition returns a simulated position of the user (useful for demonstration and testing); friendsPosition returns the positions of other users registered as friends; dynamicInfo returns the objects whose hotspot area the user is within; tracks returns predefined routes that the user might want to follow; and messages is used for sending and retrieving messages between users. When a request is sent to the server, the client will get a response from the server with the necessary information, depending on the request type.

2.2 The Operation Phase and Runtime System

The main components used in the operation phase are the user clients, the runtime system and the location system database. The user clients are developed using the client template and the XML interface described in Section 2.1. The runtime system is the heart of the Nidaros framework that brings location-aware applications alive. The runtime system manages the information in the database, including maps, zones, objects, users, etc. The main task of the runtime server is to feed the clients with the correct information according to the client position.

Figure 2 shows the physical view of the runtime system. The runtime system supports several types of clients, and the figure identifies the three client types we have implemented. The figure shows wireless LAN used between the server and the clients, but other types of wireless networks have been used as well. Currently, our mobile phone client uses GPRS for communication between the client and the server. The runtime system itself consists of four main components, described below.

The file server stores media files accessible to mobile clients. A file server is used because mobile devices are not likely to store all media files locally, owing to their limited memory. How much the file server is used depends on the amount of multimedia content and the storage capacity of the mobile device. Frequently used multimedia elements should be cached on the device.

The runtime server is responsible for handling requests from the clients and providing the services requested. A more detailed description of this component is given later in this section.

The location system database stores all information used by the clients and the server. This database is also used by the creator tool when creating location-aware applications, and by the statistical tool for analysis of user patterns.

The location server gets the current positions of the clients through various interfaces to different positioning technologies like WLAN positioning, GPS, IrDA, Bluetooth, etc. A more detailed description can be found in Section 2.4.

From early scalability tests, we have found that the wireless network between the clients and the server will be the main bottleneck of the system. If the GSM network is used, only audio streaming is supported. If WLANs like IEEE 802.11b are used, video streaming is supported. The number of simultaneous users that can be served depends on how well the physical network is implemented. Another possible bottleneck can be the file server. However, such servers can be duplicated to achieve better performance.

Figure 3 shows the logical view of the runtime server architecture. The system communicates with client applications through the GlocServlet class.
Data is exchanged as XML, and the XMLTransformer class interprets and transforms the information sent between clients and the server. For the runtime server, we decided to use an architecture based on a centralized control model. This means that all information flows through the MainController class. By using centralized control, it is easy to analyse control flows and get the correct responses to the given client requests. It also makes it easy to substitute the servlet class with another class for handling the client communication, and to add new interfaces to the system as needed. The UserManager class is responsible for maintaining information about the users. This task includes storing the user's last position and deciding whether a person is allowed to communicate with another person (defined as a friend). The PositionManager class is responsible for returning the user position, adjusted to the type of mobile device used. The TrackManager class is responsible for keeping information about the available predefined tracks. Each track has a unique name, so the client application can request either all tracks or one particular track. The DbManager class is responsible for all communication with the database.

Figure 3. The logical view of the runtime system.

2.3 The Analysis Phase

The statistical tool is useful for analysing the use of location-aware applications. The tool uses data logged by the runtime system to study user behaviour and which objects are most popular. Four classes implement the statistical tool: GUI, LogTool, DbManager and DbLog. The GUI class presents the different services. The LogTool class is responsible for calculation and manipulation of the data stored in the database. The DbManager and DbLog classes handle database issues.

2.4 The Location Server

The location server (see Figure 4), developed by Telenor R&D, is a framework for uniform, geometry-based management of a wide variety of location sensor technologies. The goal of this framework is to have one server that can get positions of mobile devices through multiple location-sensing technologies. By using the location server in the Nidaros framework, we do not need to tailor our system to a specific positioning technology. We can also use different positioning technologies within the same application.

Figure 4. The architecture of the location server.

The server includes a middleware protocol specification and a specification of quality-of-service parameters. Further, the server has support for event-driven position reporting (i.e., on change of position) and support for methods for merging and enhancing positions. Figure 4 shows an overview of the location server architecture. The architecture is layered and shields the application from the details of collecting and merging location information from a variety of sources. The devices positioned by the server are identified by their MAC address. The location server also manages authorisation for accessing the position of a device. Further, the architecture provides support for monitoring and tracking of mobile devices. The architecture provides several interfaces to various positioning technologies. The location server currently supports positioning using GPS, GSM, and WLAN.
One advantage of using the location server is that it handles the complexity of merging locations, including partly overlapping positioning systems and seamless transitions between different positioning systems. This is especially useful for location-aware applications that cover both indoor and outdoor areas.

2.5 The Location System Database

The location system database stores location data, log data, and user data. The location data is represented in five tables describing maps, zones, objects, preferences, and mapping. The motivation for these tables is that one location can have several maps covering different territories, each map can cover several zones, each zone can contain several objects, and each object can be connected to several preferences. The preferences of an object are used to provide the dynamic menu service: the user can state his preferences for the attraction, and only objects matching those preferences will be displayed in the client menu. In addition, there must be a table with mapping information to transform locations from world coordinates to coordinates adjusted to fit the client map.

The user data is represented in three tables describing users, user groups and user preferences. The user table contains the MAC id of the user device and other information. The user group table is used to group users that are friends, to allow services like tracking friends and messaging. The user preferences table stores information about the user's main interests.

The log data is represented in two tables: one table for storing user movements and one table for storing what kinds of objects the user is interested in. The statistical tool uses the log data.

3. IMPLEMENTING A TOUR GUIDE USING THE FRAMEWORK

We created a location-aware tour guide for the Nidaros Cathedral in Trondheim to test the Nidaros framework in a real setting. The cathedral had an existing PDA-based tour guide, called the Nidaros Pocket Guide (NPG), that was not location-aware. To show the flexibility of the Nidaros framework, we decided to implement support for two different types of clients: a PDA and a mobile phone.

3.1 The PDA Client

We decided to develop our location-aware PDA client in Flash MX. This made it possible to reuse code from the NPG and the client template. Our PDA application provided functionality for selecting language, enabling a trace of one's own movement, showing friends on a map, showing zones and objects on a map (with highlighting of the current zone and objects), displaying text about objects, and playback of audio or video about an object. The PDA used to run the client was an iPaq with Windows CE and integrated IEEE 802.11b wireless LAN. This made it possible to get the position of the device using WLAN positioning. The location-awareness was presented in the client application in two ways. The first way was to show a map with the position of the user, the positions of possible friends, and highlighting of the current zone and nearby objects. The second way was optional for the user, and made the application able to initiate presentation of objects (text, audio or video) when the user entered the hotspot area of an object. A PDA client is shown to the left in Figure 5.

Figure 5. The system running on an iPaq and a Sony Ericsson P900, and the WLAN tag.

3.2 The Mobile Phone Client

A mobile phone client could be implemented either in J2ME or using HTML and the web browser installed in the phone.
We decided to do the latter by implementing an additional servlet component that communicates with the runtime server and produces HTML for the mobile phone. We used WAP push to send the appropriate web pages when the user was within a specific area, to enable the client to react to the user location. This made it possible to, e.g., get the phone to initiate playback of a video when the user was close to a specific altar. We used a Sony Ericsson P900 in the test because of its support for WAP push and its built-in multimedia player. In the centre of Figure 5, the mobile phone client is shown running on a P900.

The most common way to position mobile phones is to use GSM positioning. This method is too coarse-grained to be used for positioning inside a cathedral. To solve this problem, we came up with the idea of letting the user wear WLAN tags that could be positioned using WLAN positioning. WLAN tags are produced by RadioNor Communications, and are small WLAN radios that transmit a MAC id. A picture of a WLAN tag is shown to the right in Figure 5. The tag is about 4 cm wide and 3 cm high. For the mobile phone client, it was necessary to register the MAC id of the WLAN tag together with the mobile phone number, to link the phone to the tag.

3.3 The Position Technology Used

We used the Cordis RadioEye WLAN positioning system produced by RadioNor Communications to position the client devices used in our location-aware application. The RadioEye is a small box with advanced antenna technology and a Linux server that can determine the physical coordinates of every WLAN terminal that is active within its coverage area. The sensors decode the MAC addresses of the network devices and determine their geographical position with a typical accuracy of 1-2 meters. A RadioEye covers about 60 degrees from its centre and can, e.g., be placed in the ceiling of a building.

3.4 Setting up and Running the Location-aware Application

The demonstration of the location-aware tour guide for the Nidaros Cathedral was performed on May 14th, 2004. Before we could start running the demonstration, we had to install the infrastructure needed for running the system. Four RadioEyes were installed in the gallery of the cathedral to cover four different zones of the building. After the RadioEyes were in place, the coordinates from the RadioEye server had to be aligned with the coordinates used by the system. This was done by taking measurements from different locations in the cathedral. These measurements were made in a pre-test before the real demonstration.

Approximately 20 people were present at the demonstration, representing various technology-oriented companies as well as the media (television, magazines and newspapers). The demonstration of the system was very well received, and especially the mobile phone application attracted much attention. As far as we know, there are no similar location-aware applications running on mobile phones.

4. EXPERIENCES

This section describes experiences we gained from developing a location-aware application using the Nidaros framework. We have found the framework very useful for several reasons. Firstly, the server side of the system can be used directly without any modifications. The only thing missing is the information to be used in the location-aware application. This information can easily be added by using the creator tool. In addition, the creator tool can be used to make changes to the location information before and during the operation phase.
By using the available client template, the time to implement a client is rather short. Currently, client templates are only available for HTML and Flash; it would have been useful to provide templates for Java 2 Micro Edition and the .NET Compact Framework as well. It does not require much work to write a client from scratch using the defined XML interface. XML parsing might pose a challenge for a client implemented in J2ME because of the memory limitations of J2ME. However, the XML messages used in the Nidaros framework are relatively small and simple, and do not contain several levels of nesting. A possible extension of the Nidaros framework could be a client generator to ease the implementation of new clients for all client technologies.

The simulated movement of users was found to be a very useful feature of the runtime system and was invaluable for testing client applications. It takes several hours to set up a real environment with real sensors, and it would be time-consuming to engage real users to debug the application.

The database used in the framework was found to be general enough for the tour guide application. However, there are limitations on what types of information can be stored. This means that the database schema might have to be extended to fit other location-aware applications.

We introduced a file server that could feed the clients with multimedia content because of the limited storage available on client devices. In an ideal system, all the media files would be stored locally on the device for quick and responsive presentations. This was impossible for the tour guide for the Nidaros Cathedral if more than one language was to be supported on the same device. Most of the multimedia files were audio files, but there was not enough storage space for more than one language; introducing more videos would make this an even bigger problem. We found from the demonstration in the cathedral that the user had to wait from 5 to 7 seconds before the audio or video was played. We stored the most used media files on the device to avoid such long waiting times. An extension of the Nidaros framework could be smarter management of media file communication. This means, e.g., that the mobile client could start to pre-fetch files that are likely to be played in the near future, based on the location and movement of the user.

The mobile phone client got tremendous attention when we demonstrated the location-aware tour guide in the Nidaros Cathedral. The main reason was that such an advanced application was demonstrated running on devices that many people own. The current solution requires a WLAN tag to make the system work. For a commercial tour guide for mobile phones, the WLAN tag could be made available at the entrance of a sight upon paying the entrance fee. Sending an SMS that includes the ID of the tag to the tour guide server could initialize the setup of the mobile location-aware tour guide. The combination of a WLAN tag and a mobile phone opens other new and exciting opportunities for location-aware applications that can be used both indoors and outdoors. WLAN-positioning technologies like the RadioEye can be installed in all kinds of public buildings like airports, hospitals, shopping centres, etc.

The use of a location server makes it easy to adapt to new positioning technologies when they become available. The only required change to the system is to implement a new interface for the new positioning technology in the location server.

The choice of using XML for data exchange between client and server has many benefits.
The main benefit is the extensibility of the interface and the provision of an open interface to other systems. The main disadvantage of using XML is the overhead of sending messages between client and server. For a system with many users, this can cause scalability problems because of the limited bandwidth in wireless networks. A problem we experienced when running the location-aware tour guide was the high demand on CPU and memory on the mobile clients for parsing the XML data. Generally, the clients spent more time parsing XML data than they spent sending requests and receiving responses. A high demand on the CPU will also make the battery run out faster. We foresee that this problem will be less dominant for future mobile devices with improved performance and battery technology.

5. RELATED WORK

This section describes work on similar frameworks for location-aware and context-aware applications. The NEXUS Project (Volz and Sester, 2000) has developed a generic infrastructure that can be used to implement all kinds of context-aware applications, both for indoor and outdoor services. The NEXUS clients access the server via a standardized user interface running on the mobile device carried by the user. The interface has to be adjusted to the different kinds of clients, especially concerning the different levels of computing power, amounts of memory and sizes of displays. A NEXUS station can manage sensor systems that measure both global indicators (like temperature) and object-related information (like location). The NEXUS framework uses separate components for sensor management and communication, and uses spatial models with multiple representations, stored in distributed databases, to model the physical world.

The Framework for Location-Aware ModEling (FLAME) is a configurable, generic software framework for the development of location-aware applications (Coulouris et al., 2004). FLAME provides support for multiple sensor technologies and a simple spatial model for the representation of locatable entities. In addition, FLAME provides a simple event architecture for the presentation of location information to applications, and a queryable location database. The framework and its applications are largely event-driven in order to accommodate the real-time nature of the location information that they handle. The database holds the initial states (like static regions), and it also holds a synopsis of the real-time location information. A region manager stores and retrieves regions from the database. A spatial relation manager generates application-related events to satisfy currently active subscriptions. The event adapters generate events; e.g., when a person has moved, a "Person Movement Event" is generated.

Dey and Abowd (Dey and Abowd, 2001) present requirements and a conceptual framework for handling context information. The requirements to be fulfilled by the framework are separation of concerns; context interpretation; transparent, distributed communications; constant availability of context acquisition; context storage and history; and resource discovery. The conceptual framework for handling context by Dey and Abowd suggests the use of context widgets to provide applications with access to context information from their operating environment. The context widget is regarded as a mediator between applications and the operating environment, insulating applications from context acquisition concerns.
Context-specific operations are addressed in the framework by four categories of components: interpreters, aggregators, services and discoverers. The framework defined by Dey and Abowd focuses more on the management of various context sources and on representing these context sources in an application.

6. CONCLUSION

Although different types of location-aware applications exist, there are still many location-aware services that have not been explored. Many of the existing location-aware applications are tailored to just one purpose. In this paper, we have presented the Nidaros framework for implementing location-aware applications. Our framework provides support for the development phase, the operation phase and the analysis phase. In the development phase, the creator tool is used to add the information needed by the final application into the database. Further, the client templates can be used for faster development of mobile clients with some basic functionality. In the operation phase, the runtime system manages all interaction between clients and the server, including communication of multimedia files and determination of clients' positions using a location server. For the analysis phase, a statistics tool can be used to analyse the usage of the location-aware application, detect possible bottlenecks in the system, and see which objects are most popular.
{"Source-Url": "http://dl.ifip.org/db/conf/ifip8/mobis2005/WangSBSG05.pdf", "len_cl100k_base": 6249, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 14925, "total-output-tokens": 7396, "length": "2e12", "weborganizer": {"__label__adult": 0.0004725456237792969, "__label__art_design": 0.0007615089416503906, "__label__crime_law": 0.0005497932434082031, "__label__education_jobs": 0.0007691383361816406, "__label__entertainment": 0.00013446807861328125, "__label__fashion_beauty": 0.00025272369384765625, "__label__finance_business": 0.00034117698669433594, "__label__food_dining": 0.0005364418029785156, "__label__games": 0.001163482666015625, "__label__hardware": 0.00518798828125, "__label__health": 0.0008563995361328125, "__label__history": 0.0012254714965820312, "__label__home_hobbies": 9.626150131225586e-05, "__label__industrial": 0.0005345344543457031, "__label__literature": 0.00029659271240234375, "__label__politics": 0.0003590583801269531, "__label__religion": 0.00048279762268066406, "__label__science_tech": 0.1719970703125, "__label__social_life": 7.641315460205078e-05, "__label__software": 0.043243408203125, "__label__software_dev": 0.76806640625, "__label__sports_fitness": 0.0004274845123291016, "__label__transportation": 0.0012426376342773438, "__label__travel": 0.0011320114135742188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34153, 0.01388]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34153, 0.38962]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34153, 0.91114]], "google_gemma-3-12b-it_contains_pii": [[0, 1943, false], [1943, 4776, null], [4776, 6010, null], [6010, 8812, null], [8812, 11377, null], [11377, 13411, null], [13411, 15544, null], [15544, 16671, null], [16671, 18975, null], [18975, 20702, null], [20702, 23237, null], [23237, 25920, null], [25920, 28615, null], [28615, 31279, null], [31279, 34153, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1943, true], [1943, 4776, null], [4776, 6010, null], [6010, 8812, null], [8812, 11377, null], [11377, 13411, null], [13411, 15544, null], [15544, 16671, null], [16671, 18975, null], [18975, 20702, null], [20702, 23237, null], [23237, 25920, null], [25920, 28615, null], [28615, 31279, null], [31279, 34153, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34153, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34153, null]], "pdf_page_numbers": [[0, 1943, 1], [1943, 4776, 2], [4776, 6010, 3], [6010, 8812, 4], [8812, 11377, 5], [11377, 13411, 6], [13411, 15544, 7], [15544, 16671, 8], [16671, 18975, 9], [18975, 20702, 10], [20702, 23237, 11], [23237, 25920, 12], [25920, 
28615, 13], [28615, 31279, 14], [31279, 34153, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34153, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
9af97f73b2c028f2be18098281c647afc09cbaa9
Comparative presentation of Eclipse frameworks for Fog/Edge computing: February 2020

Speaker: Franck Roudet – Orange Labs Grenoble
Authors: Yanjun CHEN – Orange Labs China Beijing; Rui ZHOU; Jingyang CHEN

Fog Computing Conceptual Model: NIST SP 500-325 & 500-291. Generic fog application software architecture ➔ projection on HW/network.

Outline
• Edge Computing in the Eclipse Foundation
• ioFog Introduction
• fog05 Introduction
• Comparisons
• Summary

Edge Computing in the Eclipse Foundation

Eclipse projects related to Edge Computing and belonging to Eclipse IoT:
- **Eclipse Kura**: Eclipse Kura™ is an extensible open source IoT Edge Framework based on Java/OSGi.
- **Eclipse ioFog**: Eclipse ioFog is a complete edge computing platform that provides all of the pieces needed to build and run applications at the edge at enterprise scale.
- **Eclipse fog05**: the end-to-end compute, storage and networking virtualisation solution.

**Eclipse Edge Native** working group
- Public announcement in December 2019.
- A separation from the IoT initiatives, to address the particular challenges associated with edge computing in a very focused way.
- ioFog and fog05 are its two flagship projects.
- Collaboration between, or merging of, the two projects is considered in the roadmap, but how and what to merge had not been discussed as of January 2020.

ioFog - Project Overview

ioFog is an open source project in the Eclipse Foundation whose parent project is the Eclipse IoT project.

License: Eclipse Public License 2.0
Source code repositories: [https://github.com/eclipse-iofog](https://github.com/eclipse-iofog)
Latest releases:
- v1.3.0 2019-10-21
- v1.2.0 2019-08-08
- v1.1.0 2019-06-19
Source code languages: Java, JavaScript, Go
Project leading contributor: Edgeworx [https://edgeworx.io/](https://edgeworx.io/)

ioFog Overview

ioFog is an edge computing platform providing components to deploy and distribute microservices in an Edge Compute Network (ECN) using a container-based approach.
- **Controller**: management and orchestration of the different Agents
- **Connector**: communication between microservices on different nodes
- **Agent**: a daemon service running on each (edge) node – monitoring & supervision
- **Microservice**: a Docker container carrying out the business logic

Other related tools or projects:
- **Command line tool**: iofogctl
- **ECN viewer**

ioFog Architecture

The Controller is used for orchestration of the different Agents.
- The Controller should be deployed on a node that is network-accessible to the edge nodes (fixed IP address or DNS name).
- One or multiple Controllers could be used ➔ grouping of Agents.

The Connector enables communication between microservices on different nodes.
- The Connector assists in providing automatic discovery and NAT traversal, brokering communication when possible.
- The Connector offers two connectivity types:
  - a public pipe punches through firewalls and NATed networks to perform automatic internetworking of the Fog;
  - a private pipe consumes bandwidth on the Connector but stabilizes connectivity.

ioFog API swagger: https://iofog.org/docs/1.3.0/controllers/rest-api.html
Community: https://discuss.iofog.org/

ioFog Components Main Features - Agent

The Agent handles starting, stopping, and managing the microservices running on its node.
- The Agent offers a local API with REST-like endpoints as well as WebSockets; example interfaces:
  - Get container configuration, next unread message
  - Get messages from publishers within a timeframe
  - Get control/message WebSocket connection
  - Get service status/info/version
  - Attach/detach the ioFog agent to/from the configured ioFog controller
  - Change the ioFog agent configuration
- All messages passing through the Local API must be in the ioMessage format (embodied in JSON, XML and binary).

Note: Agents can be remotely managed by the Controller once successfully connected, which enables deploying and maintaining microservices without contacting each edge node over SSH.

# ioFog Deployment Requirements

## Agent
- Processor: x86-64 or ARM Dual Core or better
- RAM: 256 MB minimum
- Hard Disk: 100 MB minimum
- Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.)
- Java Runtime v8.0.0 or higher
- Docker v1.10 or higher

## Controller
- Processor: x86-64 or ARM Dual Core or better
- RAM: 1 GB minimum
- Hard Disk: 5 GB minimum
- Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.), macOS 10.12+, or Windows 7+
- Node.js v8+ and NPM

## Connector
- Processor: x86-64 or ARM Dual Core or better
- RAM: 1 GB minimum
- Hard Disk: 5 GB minimum
- Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.)
- Java Runtime v8.0.0 or higher

**Note:** While ioFog fully supports using Raspberry Pis as workers in the edge environment, they are not meant to be used as the Controller and Connector infrastructure (i.e., use the Raspberry Pi only as an Agent in the edge infrastructure).

ioFog Experimental Deployment

ioFog version: 1.3.0

Deployment environment:
- VM x86: 4 GB RAM, 2 vCPU, 40 GB disk
- Raspberry Pi 3B+
- Nvidia Jetson Nano

Deployment logic:
- deploy 1 controller, 3 agents and 1 connector
- deploy the example application https://iofog.org/docs/1.3.0/getting-started/quick-start.html

[Diagram: target hardware/network architecture. Public network: a Linux VM with an HTTP proxy and mirrored repositories (JFrog Artifactory, NPM, Docker). Private network: a Jetson Nano and two Raspberry Pi 3 boards hosting the fog app components.]

[Diagram: ioFog software deployment on the target infrastructure. The ioFog Controller and Connector run on the Linux VM; ioFog Agents run on the Jetson Nano and the two Raspberry Pi 3 boards.]

[Screenshot: ECN viewer. Controller: g-debian9-franck-1.rd.francetelecom.fr. Active resources: Flows: 0, Agents: 4, Microservices: 0. The four agent nodes (meylan1-agent, g-fogcpp-33-agent, g-fogcpp-notconf-agent, g-jetsonnano-01-agent) each run 0 microservices.]

[Diagram: 3-tier Hello World App (ioFog freeboard app). Freeboard and the mirrored Artifactory sit on the public network behind the HTTP proxy; the ioFog Controller, Connector and Agents, with per-microservice networks, sit on the private network; the Freeboard API and sensor microservices run on the edge nodes.]

ioFog Experimental Deployment: Hello World App deployment on the target infrastructure

Install the ioFog command line tools:
- mkdir iofog
- curl https://packagecloud.io/install/repositories/iofog/iofogctl/script.deb.sh | sudo bash
- sudo apt-get install iofogctl=1.3.0

One can also get the ioFog demo resources:
- wget http://www.edgeworx.io/downloads/demo-sdk/edgeworx-iofog-sdk_1.3.0-beta.tar.gz
- tar xvfz edgeworx-iofog-sdk_1.3.0-beta.tar.gz
- ./bootstrap.sh

Check the version of iofogctl:
- iofogctl version ➔ latest release (2019.Nov.03): 1.3.0
fog05 - Project Overview

- "Eclipse fog05 aims at providing a full Management and Orchestration stack for the Fog Computing environment."
- "The End-to-End Compute, Storage and Networking Virtualisation solution."
- One of the main goals of fog05 is the unification of the different frameworks coming from telcos and industry, in particular ETSI MEC, ETSI NFV and the OpenFog Consortium (now merged into one stream of the Industrial Internet Consortium). These three frameworks all deal with applications, but from different perspectives and focusing on different aspects of the application.
  - It is not clear whether fog05 will keep following the OpenFog specification now that the OFC has merged with the IIC.
- Source code repositories: https://github.com/eclipse-fog05
- Licenses: Eclipse Public License 2.0 or Apache 2.0
  - Note: Yaks and Zenoh have the Apache 2.0 license.
- Source code languages: OCaml, Python, Go
- Project leading contributor: ADLINK

fog05 Architecture

- fog05 features a decentralized, plugin-based architecture. It is designed to run either as a process on a traditional OS or as a trusted service inside a Trusted Execution Environment (TEE).
- fog05 is composed of:
  - Zenoh: a zero-network-overhead protocol for eXtremely Resource Constrained Environments (XRCE), offering zero-overhead pub/sub, store/query and compute;
  - Agent: the core logic of fog05; it takes care of managing, monitoring and orchestrating entities through plugins;
  - Plugins: provide support for atomic entities, OSes, networks, etc.
- The agent represents a manageable resource. An agent has two stores, one representing the actual state of the node and the other representing the desired state of the node.

fog05 Plugins

- **OS plugins**: abstract the underlying operating system and provide primitives to the agent and other plugins.
  - Linux plugin
  - Windows plugin
- **Network manager plugin**: creates all the network-related components (networks, routers and ports).
  - Linux bridge with VXLAN virtual networks
- **Runtime plugins**: life-cycle management for Fog Deployment Units (FDUs).
  - KVM virtual machines
  - Linux containers (LXD)
  - Native applications
  - containerd (Docker) containers (on the roadmap)
  - ROS2 robotic framework (on the roadmap)
- **Device plugin**: designed to allow the deployment and discovery of micro-controllers and IoT boards that do not have an operating system.
  - An example implementation for a micro-controller that can be used as a base for other implementations (on the roadmap).

fog05 Core Concepts: FDU, Atomic Entity & Entity

- A Fog Deployment Unit (FDU) is an indivisible unit of deployment, such as a binary executable, a unikernel, a container or a virtual machine. An FDU requires certain kinds of resources as a precondition to its execution. The life-cycle of an FDU is defined by a Finite State Machine (FSM).
- An Atomic Entity is a functional unit of deployment: it is composed of one or more FDUs, which can be of the same type or of different types.
- An Entity is a service/application deployment, composed of one or more Atomic Entities.
- Unit manifests:
  - fog05 entities are described through JSON manifests.
  - These manifests are compatible with ETSI MEC/NFV manifests as well as with OpenMANO.
- Entity & atomic entity life-cycles (6 states for an entity and 9 for an atomic entity).

fog05 Experimental Deployment

Deployment environment:
- 2 virtual machines on top of OpenStack: 4 GB RAM, 1 vCPU, 40 GB disk
- Ubuntu 16.04, Python 3.6, libev4, libssl1.0, openssl...
Deployment logic:
- deploy 2 fog05 nodes:
  - Yaks server, Eclipse fog05 agent, Linux plugin, Linux bridge plugin and LXD plugin
- deploy the example application

Reference: https://github.com/eclipse-fog05/fog05/blob/master/INSTALL.md
Issues encountered during the experimental deployment have been raised on GitHub for support: https://github.com/eclipse/fog05/issues

fog05 Experimental Deployment

Experimental deployment architecture:
- **Node1:** { IP: 192.168.12.196; LXD bridge: 196lxdb0; NodeID: f6717a4f-65b8-4f11-a822-96a13764cc6e }
- **Node2:** { IP: 192.168.12.202; LXD bridge: 202lxdb0; NodeID: dccd6c65-b371-4bef-a631-eda7853a1ca9 }
- **Example FDU UUID:** d790a23d-ee56-5645-8c8d-fb014d9c4776
- **YAKS server:** 192.168.12.196

Outline
• Edge Computing in the Eclipse Foundation
• ioFog Introduction
• fog05 Introduction
• Comparisons
• Summary & Next steps

Project Activity Level

fog05:
- **GitHub**: stars: 17, forks: 15 (as of 01/17/2020)
- **Releases**:
  - v0.1, Feb 14 2020
  - (v0.1.3) in Oct. 2018
- Commit activity over the last 12 months: [chart]
- https://github.com/eclipse/fog05

ioFog:
- **GitHub**: stars: 249, forks: 30 (ioFog Agent) (as of 01/17/2020)
- **Releases**:
  - v1.3.0, Oct. 21 2019
  - v1.2.0, Oct. 07 2019
  - v1.1.0, Jun. 19 2019
- Commit activity over the last 12 months: [chart]
- https://github.com/eclipse/ioFog

Community & Ecosystem (fog05)

**Foundation:** Eclipse Foundation
**Parent project:** Eclipse IoT
**Project leader:** ADLINK
**Related parties:**
- **OpenFog Consortium**: the OpenFog Consortium (merged with the IIC) is currently considering adopting fog05 as the reference implementation of its reference architecture and as one of the fog platforms to use in fog testbeds.
- **ITRI**: ITRI has adopted fog05 for all fog computing, MEC and 5G.
- **UC3M**: the networking and edge computing group at the Universidad Carlos III de Madrid, led by professors Arturo Azcorra and Antonio Oliva, is actively collaborating with us and contributing to fog05.
- **5G-CORAL EU Project**: the 5G-CORAL EU project has adopted fog05 as the infrastructure for MEC and is actively contributing to its R&D.
- **Huawei**: fog05, along with some of the services it provides such as its decentralized key/value store, is of great interest to Huawei Corporate R&D. Huawei has been actively discussing requirements and directions for these technologies.

Community & Ecosystem (ioFog)

**Foundation:** Eclipse Foundation
**Parent project:** Eclipse IoT
**Project leader:** Edgeworx
**Related parties:**
- **Edgeworx**: launched out of stealth on October 30, 2018, announcing funding from Samsung NEXT, Sequoia Seed and CloudScale Capital Partners.
- **Collaboration with other Eclipse IoT projects**:
  - **hawkBit**: by leveraging microservices running on ioFog, remote updates can be achieved for devices that have complex update processes or connectivity that requires intelligent interfacing.
  - **Vorto**: the meta definitions of real-world objects can be represented in the authoring interface, with the implementation happening on a per-object basis through microservices running on the edge near the actual physical object.
  - **Kura**: ioFog (along with ComSat) can be used to add remote connectivity and serviceability for Kura gateways that remain safely behind a firewall or NATed network. Microservices can be instantiated on ioFog to work with Kura gateways and add layers of behavior for large groups of them.

Licensing Info (fog05)

- **Licenses**: Eclipse Public License 2.0 or Apache 2.0
- fog05 also has dependencies on other open source projects.
Some of those are not yet hosted by Eclipse but are planned to be contributed as fog05 sub-projects. These projects may have a licensing impact:
- Eclipse Cyclone
- Eclipse Cyclone Python API http://github.com/atolab/python-cdds/
- Eclipse Cyclone OCaml API http://github.com/atolab/ocaml-cdds
- DStore http://github.com/atolab/python-dstore

Licensing Info (ioFog)

- **License**: Eclipse Public License 2.0
- ioFog also includes third-party libraries:
  - Netty (Apache 2 license)
  - HornetQ (Apache 2 license)
  - Docker-Java (Apache 2 license)

Supported Deployment Environments

fog05 general requirements:
- Eclipse fog05 is designed to be deployed on anything from big servers to micro-controllers.
- Eclipse fog05 can run on different systems, including Windows, Linux or unikernels, with the appropriate OS plugin.

ioFog general requirements:
- Linux kernel 3.10+ (Ubuntu, CentOS, etc.), macOS 10.12+, or Windows 7+
- Docker 1.10+

Minimum requirements of the main ioFog components:

<table>
<thead><tr><th>Component</th><th>Minimum requirements</th></tr></thead>
<tbody>
<tr><td>Agent</td><td>Processor: x86-64 or ARM Dual Core or better; RAM: 256 MB; Hard Disk: 100 MB; Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.); Java Runtime v8.0.0 or higher; Docker v1.10 or higher</td></tr>
<tr><td>Controller</td><td>Processor: x86-64 or ARM Dual Core or better; RAM: 1 GB; Hard Disk: 5 GB; Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.), macOS 10.12+, or Windows 7+; Node.js v8+ and NPM</td></tr>
<tr><td>Connector</td><td>Processor: x86-64 or ARM Dual Core or better; RAM: 1 GB; Hard Disk: 5 GB; Linux kernel v3.10+ (Ubuntu, CentOS, Raspbian, etc.); Java Runtime v8.0.0 or higher</td></tr>
</tbody>
</table>

Development Tools

fog05:
- **Tech doc**
  - fog05 Wiki
  - API documents (FIMAPI modules)
  - SDK documents
- **API & SDK**
  - Languages: OCaml, Python3, Go
  - The REST API is still under development
- **Command line tools**
  - "fos" command line tool (limited functions so far)

ioFog:
- **Tech doc**
  - online docs (multiple versions)
  - API references
  - SDK document
- **API & SDK**
  - Controller REST API reference: /swagger/1.3.0/controller/swagger.yaml
  - The Connector exposes its own API.
  - The Agent daemon supports a local API with REST-like endpoints as well as WebSockets.
  - The ioFog SDK is an optional library. Together with the Connector, it provides an easy way for edge nodes to communicate with each other: C#, C/C++, Go, Java, JavaScript (Node.js), Python
- **Command line tools**
  - iofogctl (works as the control plane of an ECN)

Compliance & Interoperability

- **Compliance with MEC**
  - fog05 entities are described through JSON manifests, and these manifests are compatible with ETSI MEC/NFV.
- **Compliance with ETSI OSM**
  - fog05 is used as a VIM for extreme-edge devices (lamppost), enabling MEC and LXD container instantiation through ETSI OSM (5GPPP 5GCity).
- fog05 is one of the infrastructures identified as compliant with the 5G principles and requirements by the EU 5GPPP working group.
- **Compliance with Kubernetes**
  - The ioFog engine has been integrated with Kubernetes, enabling Kubernetes to orchestrate microservices down to the edge.

### Verification test

fog05:

**Is it easy to test?**
- During installation and verification, several bugs and issues were encountered that cannot easily be solved relying only on the available official documentation.
- Demo examples have been provided: [https://github.com/atolab/fog05_demo](https://github.com/atolab/fog05_demo)

**Any issues?**
- Some issue links on the fog05 GitHub:
  [https://github.com/eclipse/fog05/issues/143](https://github.com/eclipse/fog05/issues/143)
  [https://github.com/eclipse/fog05/issues/146](https://github.com/eclipse/fog05/issues/146)
- The REST API is still under development.
- The information in the tech docs is not always up to date.
- Lack of full guidance on installation and deployment.
- The command line tool is not fully functional.

**Ways to get support:**
- on the Eclipse fog05 repository: [https://github.com/eclipse/fog05/issues](https://github.com/eclipse/fog05/issues)
- on its gitter channel: [https://gitter.im/atolab/fog05](https://gitter.im/atolab/fog05)
- by e-mail

ioFog:

**Is it easy to test?**
- Creating and managing ECNs (Edge Compute Networks) on a local machine by following the guide did not raise many issues.
- The API reference and SDK description are clear and detailed.
- The command line tools work well for testing.
- Hard to set up behind an enterprise proxy.

**Any issues?**
- Some questions have been raised, mainly related to ioFog usage.

**Ways to get support:**
- ioFog discussion forum: [https://discuss.iofog.org/](https://discuss.iofog.org/)
- To report security issues: [https://www.eclipse.org/security/](https://www.eclipse.org/security/)
- Join Slack to get updates and ask questions: [iofog.slack.com](https://iofog.slack.com)
- By email

Fog Functions for Service Life Cycle Management

**Fog Infrastructure**
- Node discovery
- Node resource abstraction
- Node configuration
- Node monitoring

**Fog Application Deployment**
- Fog orchestration
- Runtime engines

**Fog Application Monitoring**
- Application service discovery
- Application monitoring
- Analytics service
- Application data management

**Security**
- Security considerations

Fog Infrastructure Related Functions

<table>
<thead><tr><th>Items</th><th>fog05</th><th>ioFog</th></tr></thead>
<tbody>
<tr><td>Node resource discovery</td><td>Computational resources such as CPU, FPGA, GPU; storage resources such as RAM, block storage, object storage; networking resources such as 802.11 devices and tunnels; and I/O resources such as COM, CAN, GPIO and I2C (some of them are not ready yet). In order to expose its resources in the system, each node runs either a fog05 agent or a device plugin.</td><td>Hardware resource abstraction and monitoring; real-time updates of node status, e.g. memory and CPU usage; location information is automatically obtained and recorded.</td></tr>
<tr><td>Node discovery</td><td>The agent is in charge of advertising node resources and functionalities to the whole system.</td><td>The ioFog agent connects to the controller for reporting node status.</td></tr>
<tr><td>Node configuration</td><td>APIs capable of updating the desired status of a node entity.</td><td>API exposed for node configuration; control plane command line tools.</td></tr>
<tr><td>Node monitoring</td><td>The node agent provides the entry point for node management and exposes the capabilities via APIs.</td><td>Node status monitoring; control plane command line tools; web portal for node visualization.</td></tr>
</tbody>
</table>

- An agent is the **common solution** in both projects for node resource exposure, remote configuration, etc.
- The abstraction in ioFog includes more **low-level network resources** such as network bridges and OS interfaces.
- ioFog maintains dynamic, **real-time information** on node status, such as CPU usage, RAM usage and last-active timestamp.
- ioFog provides more ways to monitor and visualize the infrastructure, including querying the API/SDK, using command line tools, or viewing the web portal.

# Fog Deployment Related Functions

<table>
<thead><tr><th>Items</th><th>fog05</th><th>ioFog</th></tr></thead>
<tbody>
<tr><td>Fog orchestration</td><td>A framework and basic abstractions are defined in fog05 for supporting service orchestration; this component is still in the design phase.</td><td>Orchestration functions are evolving toward native Kubernetes.</td></tr>
<tr><td>Runtime engines</td><td>fog05 can manage an open-ended set of hypervisors and container technologies for which a runtime plugin has been implemented: KVM virtual machines, Linux containers (LXD), native applications (Linux and Windows); planned: containerd, ROS2 robotic framework.</td><td>Docker engine; Linux containers (lxc); a more lightweight runtime engine is planned on the roadmap.</td></tr>
</tbody>
</table>

- fog05's main orchestration feature is still in the **design phase**; the plan is to deploy entities in suitable places with respect to their constraints: I/O latency, geofencing and computing power.
- ioFog is evolving towards **native Kubernetes orchestration**.
- fog05 supports more types of runtime engines.
- Both projects state that more **lightweight runtime engines are planned** on the roadmap.
### Fog Application Monitoring Related Functions

<table>
<thead><tr><th>Items</th><th>fog05</th><th>ioFog</th></tr></thead>
<tbody>
<tr><td>Application service discovery</td><td>A service can be deployed as an FDU; a global view of an FDU is shared via the YAKS server.</td><td>Microservices can be deployed in different edge clusters and discovered by the control plane.</td></tr>
<tr><td>Application data management</td><td>Supports application status monitoring; no additional storage for temporary data.</td><td>Supports management per microservice; no additional storage for temporary data.</td></tr>
<tr><td>Application monitoring</td><td>API capabilities for application status description, on-boarding and off-loading.</td><td>Control plane command line tool for service description; API capabilities for application status description; visualization in the web portal with mapping to the hosting agent (ECN viewer).</td></tr>
<tr><td>Analytics services</td><td>No; could be implemented as an FDU.</td><td>No; could be implemented as microservices.</td></tr>
</tbody>
</table>

- Both projects **support the fundamental functions** of application/service discovery and monitoring, either via distributed servers or through a centralized server.
- Neither project supports service **data management and analytics**, which can be regarded as advanced features of this type of platform.

## Fog Security Related Functions

<table>
<thead><tr><th>Items</th><th>fog05</th><th>ioFog</th></tr></thead>
<tbody>
<tr><td>Security management</td><td>Token-based security measures; security plugins in fog05's distributed storage model YAKS, including access control, authentication and crypto.</td><td>Each node constantly validates a comprehensive set of security rules with all the other nodes, looking for minor deviations or signals of rogue nodes; secure delivery of short-lived secrets; full control over data flow and policy-based geofencing.</td></tr>
</tbody>
</table>

- Both projects have basic security measures for authorization, data protection during transmission, and policy support in their different sub-models.
- ioFog has policy control for **geofencing**, since node and application geolocations are identified in the platform.

## App Component Development

<table>
<thead><tr><th>Items</th><th>fog05</th><th>ioFog</th></tr></thead>
<tbody>
<tr><td>App shop</td><td>Need to manage an FDU repository.</td><td>Docker Hub; private in-house Docker repository.</td></tr>
<tr><td>Prerequisite infra</td><td>The Zenoh and YAKS infrastructure and network must exist.</td><td>Debian OS family; SSH key exchange, a user with sudoers privilege (without password); auto deployment (hard behind a proxy).</td></tr>
<tr><td>Component packaging</td><td>Docker; unikernel; anything else, but you have to manage the deployment, starting, stopping and removal of your component yourself.</td><td>Docker images only.</td></tr>
<tr><td>Messaging model</td><td>YAKS – distributed key/value store.</td><td>ioMessage.</td></tr>
</tbody>
</table>

Outline
- Edge Computing in the Eclipse Foundation
- ioFog Introduction
- fog05 Introduction
- Comparisons
- Summary

Summary

- Both fog05 & ioFog address the **key elements** from cloud to edge, including cloud and edge infrastructure management, dynamic service deployment, communication between cloud and edge, etc.
- ioFog and fog05 take different approaches:
  - fog05's design tries to create a unified compute fabric, taking into account compatibility with industry standards and resource-constrained environments.
    - *Whether the development of this project can keep up with the evolution of the industry standards should be considered.*
  - ioFog focuses on container approaches and embraces container-based ecosystems to create a distributed edge compute network (ECN) for running and deploying services.
    - *Advanced features already available in existing orchestration solutions such as k8s may currently still lack support in ioFog.*
- So far, based on our verification test experience, ioFog performs better than fog05 in terms of code maintenance, documentation quality and project activity.

Thanks

# fog05 Plugins

<table>
<thead><tr><th></th><th>Linux plugin</th><th>Bridge utils plugin</th><th>LXD plugin</th><th>KVM Libvirt plugin</th><th>Native applications plugin</th></tr></thead>
<tbody>
<tr><td>Description</td><td>Allows fog05 to run on top of Linux</td><td>Allows fog05 to manage networks with bridge-utils</td><td>Allows fog05 to manage LXD containers</td><td>Allows fog05 to manage VMs</td><td>Allows fog05 to manage native applications</td></tr>
<tr><td>Supported operations</td><td>execute command; check that a file exists; save to file; read from file; get HW information (cpu, ram, network, disks); get uuid from motherboard; get hostname; send signal; check if a pid exists; install packages; remove packages</td><td>create virtual bridge; create virtual network; add interface to network; delete virtual interface; delete virtual bridge; delete virtual network</td><td>deploy; destroy; scale of vm; migrate</td><td>deploy; destroy; stop; pause; resume</td><td>deploy; destroy; {{ pid_file }} parameter when starting native applications</td></tr>
<tr><td>Todo</td><td>get detailed I/O information; get HW accelerator information; get pid from process name; get monitoring information about the network; get GPS information about the node</td><td>create virtual interface; remove interface from network</td><td></td><td>scale of vm; migrate</td><td>configure application with parameters</td></tr>
</tbody>
</table>

fog05 uses YAKS to maintain the actual and desired state for global and node-specific information.

**Note:** In this version the nodes have to be in the *same* network and connected to the *same* YAKS server.
{"Source-Url": "https://wiki.eclipse.org/images/5/57/Eclipse-IoTDay2020Grenoble-roudet.pdf", "len_cl100k_base": 7936, "olmocr-version": "0.1.53", "pdf-total-pages": 45, "total-fallback-pages": 0, "total-input-tokens": 63435, "total-output-tokens": 8955, "length": "2e12", "weborganizer": {"__label__adult": 0.0002734661102294922, "__label__art_design": 0.000354766845703125, "__label__crime_law": 0.00023794174194335935, "__label__education_jobs": 0.00044655799865722656, "__label__entertainment": 8.022785186767578e-05, "__label__fashion_beauty": 0.00013625621795654297, "__label__finance_business": 0.0004582405090332031, "__label__food_dining": 0.00027108192443847656, "__label__games": 0.0005450248718261719, "__label__hardware": 0.0034465789794921875, "__label__health": 0.00032591819763183594, "__label__history": 0.0002808570861816406, "__label__home_hobbies": 9.125471115112303e-05, "__label__industrial": 0.0005450248718261719, "__label__literature": 0.00014841556549072266, "__label__politics": 0.0002123117446899414, "__label__religion": 0.0003173351287841797, "__label__science_tech": 0.06097412109375, "__label__social_life": 7.027387619018555e-05, "__label__software": 0.0196380615234375, "__label__software_dev": 0.91015625, "__label__sports_fitness": 0.00022745132446289065, "__label__transportation": 0.0005135536193847656, "__label__travel": 0.0001857280731201172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32456, 0.02986]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32456, 0.17944]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32456, 0.7845]], "google_gemma-3-12b-it_contains_pii": [[0, 206, false], [206, 264, null], [264, 337, null], [337, 455, null], [455, 1404, null], [1404, 1404, null], [1404, 1883, null], [1883, 2422, null], [2422, 2441, null], [2441, 3215, null], [3215, 4036, null], [4036, 4932, null], [4932, 5230, null], [5230, 5457, null], [5457, 5676, null], [5676, 6088, null], [6088, 6459, null], [6459, 6545, null], [6545, 6996, null], [6996, 6996, null], [6996, 7887, null], [7887, 8657, null], [8657, 9461, null], [9461, 10301, null], [10301, 10833, null], [10833, 11207, null], [11207, 11338, null], [11338, 11744, null], [11744, 13877, null], [13877, 14554, null], [14554, 16073, null], [16073, 16970, null], [16970, 17591, null], [17591, 19261, null], [19261, 19665, null], [19665, 22409, null], [22409, 24335, null], [24335, 26491, null], [26491, 27785, null], [27785, 29559, null], [29559, 29677, null], [29677, 30659, null], [30659, 30666, null], [30666, 32248, null], [32248, 32456, null]], "google_gemma-3-12b-it_is_public_document": [[0, 206, true], [206, 264, null], [264, 337, null], [337, 455, null], [455, 1404, null], [1404, 1404, null], [1404, 1883, null], [1883, 2422, null], [2422, 2441, null], [2441, 3215, null], [3215, 4036, null], [4036, 4932, null], [4932, 5230, null], [5230, 5457, null], [5457, 5676, null], [5676, 6088, null], [6088, 6459, null], [6459, 6545, null], [6545, 6996, null], [6996, 6996, null], [6996, 7887, null], [7887, 8657, null], [8657, 9461, null], [9461, 10301, null], [10301, 10833, null], [10833, 11207, null], [11207, 11338, null], [11338, 11744, null], [11744, 13877, null], [13877, 14554, null], [14554, 16073, null], [16073, 16970, null], [16970, 17591, null], [17591, 19261, null], [19261, 19665, null], [19665, 22409, null], [22409, 24335, null], [24335, 26491, null], [26491, 27785, null], [27785, 29559, null], 
[29559, 29677, null], [29677, 30659, null], [30659, 30666, null], [30666, 32248, null], [32248, 32456, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32456, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32456, null]], "pdf_page_numbers": [[0, 206, 1], [206, 264, 2], [264, 337, 3], [337, 455, 4], [455, 1404, 5], [1404, 1404, 6], [1404, 1883, 7], [1883, 2422, 8], [2422, 2441, 9], [2441, 3215, 10], [3215, 4036, 11], [4036, 4932, 12], [4932, 5230, 13], [5230, 5457, 14], [5457, 5676, 15], [5676, 6088, 16], [6088, 6459, 17], [6459, 6545, 18], [6545, 6996, 19], [6996, 6996, 20], [6996, 7887, 21], [7887, 8657, 22], [8657, 9461, 23], [9461, 10301, 24], [10301, 10833, 25], [10833, 11207, 26], [11207, 11338, 27], [11338, 11744, 28], [11744, 13877, 29], [13877, 14554, 30], [14554, 16073, 31], [16073, 16970, 32], [16970, 17591, 33], [17591, 19261, 34], [19261, 19665, 35], [19665, 22409, 36], [22409, 24335, 37], [24335, 26491, 38], [26491, 27785, 39], [27785, 29559, 40], [29559, 29677, 41], [29677, 30659, 42], [30659, 30666, 43], [30666, 32248, 44], [32248, 32456, 45]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32456, 0.18145]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
86f700f08cada313e98d1cc2175bb13f47a92bf2
1 Background

As in the previous lecture, we refer to:

datatype order = LESS | EQUAL | GREATER

(* compare : int * int -> order
   REQUIRES: true
   ENSURES:  compare(x,y) ==> LESS     if x<y
             compare(x,y) ==> EQUAL    if x=y
             compare(x,y) ==> GREATER  if x>y
*)
fun compare(x:int, y:int):order =
  if x<y then LESS
  else if y<x then GREATER
  else EQUAL

As previously, a list of integers is sorted if each item in the list is no greater than all items that occur later in the list. Here is an SML function that checks for this property:

(* sorted : int list -> bool
   REQUIRES: true
   ENSURES: sorted(L) evaluates to true if L is sorted and to false otherwise.
*)
fun sorted [] = true
  | sorted [x] = true
  | sorted (x::y::L) = (compare(x,y) <> GREATER) andalso sorted(y::L)

(Adapted from documents by Stephen Brookes and Dan Licata.)

We will also refer to the ins function, which we used as a helper function for sorting integer lists.

(* ins : int * int list -> int list
   REQUIRES: L is sorted
   ENSURES: ins(x, L) is a sorted permutation of x::L
*)
fun ins (x, [ ]) = [x]
  | ins (x, y::L) =
      case compare(x, y) of
        GREATER => y::ins(x, L)
      | _       => x::y::L

2 Integer Trees in SML

We will use the following integer tree type:

datatype tree = Empty | Node of tree * int * tree

We can draw pictures of trees by putting the root integer at the top, as usual, and we may omit drawing leaf nodes. For example, let t be the tree Node(Empty, 42, Node(Empty, 9, Empty)). This can be drawn as:

t =  42
       \
        9

And the tree Node(t, 0, t) looks like:

        0
      /   \
    42     42
      \      \
       9      9

depth and size

Let max : int*int -> int be the usual integer maximum function (predefined in SML as Int.max):

fun max(x:int, y:int):int = if x>y then x else y

We define the functions depth and size, of type tree -> int, by:

fun depth Empty = 0
  | depth (Node(t1, _, t2)) = max(depth t1, depth t2) + 1

fun size Empty = 0
  | size (Node(t1, _, t2)) = size t1 + size t2 + 1

Intuitively, size(t) computes the number of non-leaf nodes in t, and depth(t) computes the length of the longest path from the "root" of t to a leaf node. (We have also referred to depth as height in other settings.) We refer to size(t) as "the size of t" and to depth(t) as "the depth of t".

For all trees t, size(t) ≥ 0 and depth(t) ≥ 0; and if t' is a child of t, then depth t' < depth t and size t' < size t. So we can also use induction on tree depth, or induction on tree size, as techniques for proving properties of trees, or of functions operating on trees.

Aside: structural induction on trees, induction on tree size, and induction on tree depth, as well as simple and complete induction on non-negative integers, are all special cases of a general technique known as well-founded induction.

In-order Traversal

Here is a function that builds a list of integers from a tree, by making an in-order traversal of the tree, collecting data into a list. In-order traversal of a non-empty tree involves traversing the left child, then the root, and then the right child; we also use in-order traversal on the subtrees. This description suggests that we define a recursive function!
This function is used mainly in specifications, but serves as an example of how to define a function that operates on trees: use clauses, one for the empty tree and one for non-empty trees, using pattern matching to give names to the components of a tree. (You have probably seen or will see this function in other settings with other names, but the idea is the same throughout: in-order traversal of a tree.)

(* trav : tree -> int list
   REQUIRES: true
   ENSURES: trav(t) returns a list consisting of the integers in t, in the
            same order as seen during an in-order traversal of t.
*)
fun trav Empty = [ ]
  | trav (Node(t1, x, t2)) = trav t1 @ (x :: trav t2)

For example, for the tree t shown earlier, trav(t) ==> [42,9]. And

trav(Node(t, 0, t)) ==> trav t @ (0 :: trav t)
                    ==> [42,9] @ (0 :: [42,9])
                    ==> [42,9,0,42,9].

We say "x is in t" if x is a member of the list trav(t).

We prove the following theorem to build familiarity with the terms:

**Theorem 1** For all values T : tree, trav(T) evaluates to a list of length size(T).

**Proof:** By structural induction on T.

**BASE CASE:** T = Empty. We must show that trav(Empty) returns a list of length size(Empty). Observe that trav(Empty) ==> [ ] and that size(Empty) ==> 0, so the base case follows.

**INDUCTIVE STEP:** T = Node(t1, x, t2).

**Inductive Hypotheses:** trav(t1) returns a list of length size(t1), and trav(t2) returns a list of length size(t2).

**Need to show:** trav(T) returns a list of length size(T).

**Showing:** Let L1 be the list trav(t1) and L2 the list trav(t2). Let n1 be the integer size(t1) and n2 the integer size(t2). By the definition of size we have

size(T) = size(Node(t1, x, t2))
        = size(t1) + 1 + size(t2)
        = n1 + 1 + n2.

By the definition of trav we have

trav(T) = trav(Node(t1, x, t2))
        = trav(t1) @ (x :: trav(t2))
        = L1 @ (x :: L2),

which, by the inductive hypotheses, reduces to a list of length n1 + 1 + n2, as desired. □

**Sorted Trees**

Informally, we say that a value of type tree is sorted if the integers in the tree occur in sorted order. More precisely, we intend this to mean that the in-order traversal list of the tree is sorted. Equivalently: (i) an empty tree is sorted, and (ii) a non-empty tree is sorted if and only if its two subtrees are sorted, every integer in the left subtree is less-than-or-equal-to the integer at the root, and every integer in the right subtree is greater-than-or-equal-to the integer at the root.

We can implement an SML function for testing tree sortedness:

(* Sorted : tree -> bool
   REQUIRES: true
   ENSURES: Sorted(T) returns true if T is sorted and false otherwise.
*)
fun Sorted T = sorted(trav T)

Using trav like this is an easy way to turn a slightly vague assertion about the contents of a tree into a rigorous one. So, a tree is called "sorted" if and only if its traversal list is sorted in a sense with which we are already familiar.
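For concreteness, here is a small sanity check one can run in a REPL, assuming the definitions above are loaded. The two trees contain the same integers, but only the second is sorted:

val t  = Node(Empty, 42, Node(Empty, 9, Empty))
val t' = Node(Node(Empty, 9, Empty), 42, Empty)

(* Sorted t  ==> false, since trav t  ==> [42,9] and sorted [42,9] ==> false *)
(* Sorted t' ==> true,  since trav t' ==> [9,42] and sorted [9,42] ==> true  *)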
Similarly, we will say that one tree t1 is a "permutation" of another tree t2 if and only if the list trav(t1) is a permutation of the list trav(t2).

**Insertion for Trees**

Insertion sort is not well suited to parallel implementation, even if we are inserting into a tree. Nevertheless, the tree-based analogue of the insertion function on lists is still of interest. We use capitalization to distinguish this function from the ins function on lists used in the previous lecture.

(* Ins : int * tree -> tree
   REQUIRES: t is a sorted tree
   ENSURES: Ins(x,t) is a sorted tree such that trav(Ins(x,t)) is a
            sorted permutation of x::trav(t).
*)
fun Ins (x, Empty) = Node(Empty, x, Empty)
  | Ins (x, Node(t1, y, t2)) =
      case compare(x,y) of
        GREATER => Node(t1, y, Ins(x, t2))
      | _       => Node(Ins(x, t1), y, t2)

Compare this code with the code for the list function ins given earlier. In what follows we take as a fact that Ins is total. That is easy to prove, using structural induction.

**Theorem 2** For all x : int and all sorted T : tree, trav(Ins(x, T)) ≃ ins(x, trav T).

**Proof:** By structural induction on the tree T.

**BASE CASE:** T = Empty. We need to show that for every integer x,

trav(Ins(x, Empty)) ≃ ins(x, trav Empty).

By the definitions of trav, Ins and ins we see

trav(Ins(x, Empty)) ==> trav(Node(Empty, x, Empty))       [1st Ins clause]
                    ==> trav(Empty) @ (x :: trav(Empty))  [2nd trav clause]
                    ==> [ ] @ [x]                         [1st trav clause]
                    ==> [x]                               [see append def]

ins(x, trav Empty) ==> ins(x, [ ])                        [1st trav clause]
                   ==> [x]                                [1st ins clause]

That establishes the base case.

**INDUCTIVE STEP:** T = Node(t1, y, t2). (Observe that t1 and t2 are sorted since T is.)

**Inductive Hypotheses:** For every integer x,

trav(Ins(x, t1)) ≃ ins(x, trav t1)   and   trav(Ins(x, t2)) ≃ ins(x, trav t2).

**Need to show:** For every integer x,

trav(Ins(x, T)) ≃ ins(x, trav T).

**Showing:** Suppose x > y (the other cases are similar, as usual).
Then:

trav(Ins(x, Node(t1, y, t2)))
    ≃ trav(Node(t1, y, Ins(x, t2)))        [2nd Ins clause, x>y]
    ≃ trav t1 @ (y :: trav(Ins(x, t2)))    [2nd trav clause, Ins totality]
    ≃ trav t1 @ (y :: ins(x, trav t2))     [IH, RefTrans]
    ≃ trav t1 @ (ins(x, y :: trav t2))     [2nd ins clause, x>y, RefTrans]
    ≃ ins(x, trav t1 @ (y :: trav t2))     [T is sorted, x>y]¹
    ≃ ins(x, trav(Node(t1, y, t2)))        [2nd trav clause]

□

¹ Technically, this step requires its own proof. In an exam, we would probably supply it as a separate fact or lemma.

**Remark:** The proof methodology we employed to establish the base case was of the form: if expression e1 reduces to some value v and expression e2 also reduces to that same value v, then e1 ≃ e2. Alternatively, we could have shown that e1 ≃ e2 by establishing a string of equivalences, as we did in the inductive step. Often a proof can be done either way, though one must be careful to employ the correct reasoning for the particular methodology one uses. In particular, it is false to say that e1 ==> e2 if in fact no such reduction occurs and merely e1 ≃ e2 holds. And establishing that e1 ≃ e2 must itself be done with care, since one must ensure that the expressions of interest really do both have equivalent values, or both loop forever, or both raise the same exception. See the guide to extensional equivalence on the course's tools webpage for further advice.

3 Splitting a Tree

In adapting the mergesort algorithm to operate on trees, we need a suitable analog of the split function. It isn't easy to figure out a good way to hew a tree into two roughly equal-sized pieces based solely on the structure of the tree. Instead, we will start from a tree and an integer, and break the tree into two trees that consist of the items in the tree less-than-or-equal-to the integer and the items greater-than-or-equal-to the integer, respectively. We will only ever need to use this method on a sorted tree, as you will observe when we develop the code. Indeed, the design of this function takes advantage of the assumption that the tree is already sorted, a fact that we echo in the way we write the function's specification.

(* SplitAt : int * tree -> tree * tree
   REQUIRES: t is sorted
   ENSURES: SplitAt(x, t) returns a pair (t1, t2) of sorted trees such that:
     (a) for every Node(_, y, _) in t1, y <= x,
     (b) for every Node(_, y, _) in t2, y >= x,
     (c) trav(t1) @ trav(t2) is a sorted permutation of trav(t).
*)
fun SplitAt (x, Empty) = (Empty, Empty)
  | SplitAt (x, Node (left, y, right)) =
      (case compare (x, y) of
         LESS => let
                   val (t1, t2) = SplitAt (x, left)
                 in
                   (t1, Node (t2, y, right))
                 end
       | _ => let
                val (t1, t2) = SplitAt (x, right)
              in
                (Node (left, y, t1), t2)
              end)

4 Merging Two Trees

The tree-based analogue of merge is a function that takes a pair of sorted trees and combines their data into a single (also sorted) tree.

(* Merge : tree * tree -> tree
   REQUIRES: t1 and t2 are sorted
   ENSURES:  Merge(t1, t2) is a sorted tree t such that trav(t) is a
             sorted permutation of trav(t1) @ trav(t2).
*)
fun Merge (Empty, t2) = t2
  | Merge (Node (l1, x, r1), t2) =
      let
        val (l2, r2) = SplitAt (x, t2)
      in
        Node (Merge (l1, l2), x, Merge (r1, r2))
      end

5 Mergesort for Trees

Using Ins and Merge, and guided by their specs, we may now define a mergesorting function for integer trees.

(* Msort : tree -> tree
   REQUIRES: true
   ENSURES:  Msort(t) is a sorted tree such that trav(Msort(t)) is a
             sorted permutation of trav(t).
*)
fun Msort Empty = Empty
  | Msort (Node (t1, x, t2)) = Ins (x, Merge (Msort t1, Msort t2))

Compare and contrast this code with the mergesorting function \texttt{msort} for integer lists that we examined during the last lecture. This code no longer needs to do any significant work to split the input data before sorting recursively, since the tree structure already encapsulates a natural split (into left and right parts). However, it may be necessary now to perform some splitting when merging the results of the recursive calls.

6 Depth Analysis

There can be many different trees containing the same integers. Indeed, there can be many different sorted trees containing the same integers. So the specifications and proofs so far don't really tell us much about the shapes of the trees produced by sorting. We state some intuitive results about depth that will be helpful when we analyze the runtime behavior of our code. One may prove these results by induction.

1. For all trees \( t \) and integers \( x \),
\[
\text{depth}(\text{Ins}(x, t)) \leq \text{depth}(t) + 1.
\]

2. For all trees \( t \) and integers \( x \), if \( \text{SplitAt}(x, t) \Rightarrow (t_1, t_2) \) then
\[
\text{depth}(t_1) \leq \text{depth}(t), \quad \text{depth}(t_2) \leq \text{depth}(t).
\]

3. For all trees \( t_1 \) and \( t_2 \),
\[
\text{depth}(\text{Merge}(t_1, t_2)) \leq \text{depth}(t_1) + \text{depth}(t_2).
\]

4. For all trees \( t \),
\[
\text{depth}(\text{Msort } t) \leq 2 \times \text{depth}(t).
\]

7 Size Analysis

We further state some intuitive results about size. Again, one may use a suitable inductive method to prove these facts.

1. For all trees \( t \) and integers \( x \),
\[
\text{size}(\text{Ins}(x, t)) = \text{size}(t) + 1.
\]

2. For all trees \( t \) and integers \( x \), if \( \text{SplitAt}(x, t) \Rightarrow (t_1, t_2) \) then
\[
\text{size}(t_1) + \text{size}(t_2) = \text{size}(t).
\]

3. For all trees \( t_1 \) and \( t_2 \),
\[
\text{size}(\text{Merge}(t_1, t_2)) = \text{size}(t_1) + \text{size}(t_2).
\]

4. For all trees \( t \),
\[
\text{size}(\text{Msort } t) = \text{size}(t).
\]

8 Span Analysis

The overall work\(^2\) of our \texttt{Msort} implementation is \( O(n \log n) \), and the span is \( O((\log n)^3) \). We now analyze the span in detail.

\(^{2}\)Using the techniques from this course, you should be able to show that the work is \( O(n (\log n)^2) \). A finer analysis using concavity of the log function can then establish that the work is in fact \( O(n \log n) \). You will revisit this problem in 15-210, where you will see an algorithm with span \( O((\log n)^2) \), better than what we have here.
We focus on span, since the purpose of using trees was to enable parallel-friendly sorting.

- The span for \( \text{Ins}(x, t) \) is \( O(d) \), where \( d \) is the depth of \( t \). Reason: \( \text{Ins}(x, t) \) makes a single recursive call, on a subtree whose depth is \( d - 1 \) or less. This produces the recurrence
\[
S_{\text{Ins}}(0) = c_0, \quad S_{\text{Ins}}(d) \leq c_1 + S_{\text{Ins}}(d - 1) \quad \text{for } d > 0
\]
for some constants \( c_0 \), \( c_1 \). This recurrence has a solution that is \( O(d) \).

- \( \text{SplitAt}(x, t) \) has span \( O(d) \), where \( d \) is the depth of \( t \). Again the reason is that \( \text{SplitAt} \) makes a single recursive call, on a subtree whose depth is \( d - 1 \) or less.

- \( \text{Merge}(t_1, t_2) \) has span \( O(d_1 d_2) \), where \( d_1 \) and \( d_2 \) are the depths of \( t_1 \) and \( t_2 \), respectively. To see this, observe that \texttt{Merge} calls \texttt{SplitAt} on \( t_2 \) and makes two (independent, i.e., parallelizable) recursive calls. The first arguments in these recursive calls are subtrees of \( t_1 \) with depths no greater than \( d_1 - 1 \). The second arguments are subtrees of \( t_2 \), one of which might have depth \( d_2 \). Thus (plus a base case):
\[
\begin{align*}
S_{\text{Merge}}(d_1, d_2) & \leq k_0 + S_{\text{SplitAt}}(d_2) + S_{\text{Merge}}(d_1 - 1, d_2) \\
& \leq k_1 + k_2 \, d_2 + S_{\text{Merge}}(d_1 - 1, d_2)
\end{align*}
\]
for some constants \( k_0 \), \( k_1 \), and \( k_2 \). This recurrence has a solution that is \( O(d_1 d_2) \).

- Assuming that the trees produced by \texttt{Msort} are balanced, so that their depth is about the logarithm of their size, \( \text{Msort}(t) \) has span \( O(d^3) \), where \( d \) is the depth of \( t \). To see this, observe that \texttt{Msort} makes two (independent, i.e., parallelizable) recursive calls on subtrees of depth \( d - 1 \) or less, followed by a call to \texttt{Merge} and a call to \texttt{Ins}. Thus (plus a base case):
\[
\begin{align*}
S_{\text{Msort}}(d) & \leq k + S_{\text{Ins}}(2d) + S_{\text{Merge}}(d - 1, d - 1) + S_{\text{Msort}}(d - 1) \\
& \leq c_0 + c_1 \, d + c_2 \, (d - 1)^2 + S_{\text{Msort}}(d - 1)
\end{align*}
\]
for balanced trees of depth \( d > 1 \), for some constants \( k \), \( c_0 \), \( c_1 \), and \( c_2 \). Expanding out, and observing that the sum of the first \( d \) squares is proportional to \( d^3 \), we deduce that the span is \( O(d^3) \).

Since the size \( n \) of a balanced tree and its depth \( d \) satisfy \( d = O(\log n) \), our analysis shows that the span for \( \text{Msort}(t) \) on balanced trees of size \( n \) is \( O((\log n)^3) \). Thus (ignoring constants), when we sort a billion integers in a balanced tree, the length of the longest critical path is about 27000 operations, so we can exploit over a million processors!

**Caution:** This would be true, except that we haven't justified our balance assumption. Indeed, there is a bug in the previous analysis. Even if we assume that the original tree passed to \texttt{Msort} is balanced, we cannot guarantee that the recursive sorting within \texttt{Msort} will produce balanced trees or that calling \texttt{Merge} on balanced trees will maintain balance. In fact, as written, \texttt{Msort} need not maintain balance. There is a simple fix: rebalance within \texttt{Msort}.
This can be done in a way that does not affect the asymptotic work and span results, but we will not discuss the details here. More generally, later in the course we will discuss how to implement binary trees that maintain balance.

**Comment:** There is a way to define a version of \texttt{Msort} that avoids using \texttt{Ins}, instead calling \texttt{Merge}:

\begin{verbatim}
fun Msort' Empty = Empty
  | Msort' (Node(t1, x, t2)) =
      Merge(Node(Empty, x, Empty), Merge(Msort' t1, Msort' t2))
\end{verbatim}

Are \texttt{Msort} and \texttt{Msort'} extensionally equivalent? Which of \texttt{Msort} and \texttt{Msort'} is more efficient?
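As a quick way to experiment with these closing questions, here is a small test harness. This is our own sketch, not part of the original notes; it assumes the \texttt{tree} datatype with constructors \texttt{Empty} and \texttt{Node} and the traversal \texttt{trav} defined earlier in the lecture, and the names \texttt{t} and \texttt{check} are ours.

\begin{verbatim}
(* A small sample tree: unsorted, and containing a duplicate element. *)
val t = Node (Node (Empty, 5, Node (Empty, 2, Empty)),
              9,
              Node (Empty, 2, Empty))

(* Both functions should yield sorted trees with identical traversals. *)
val check = trav (Msort t) = trav (Msort' t)   (* expect: true *)
\end{verbatim}

Note that comparing the result trees directly with \texttt{=} could return \texttt{false} even when the traversals agree, since \texttt{Msort} and \texttt{Msort'} may build differently shaped sorted trees containing the same elements; that distinction is exactly what the extensional-equivalence question is probing.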
{"Source-Url": "http://www.cs.cmu.edu/~15150/resources/lectures/08/TreeSorting.pdf", "len_cl100k_base": 6890, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 25292, "total-output-tokens": 7681, "length": "2e12", "weborganizer": {"__label__adult": 0.0003228187561035156, "__label__art_design": 0.0003058910369873047, "__label__crime_law": 0.00032806396484375, "__label__education_jobs": 0.0007877349853515625, "__label__entertainment": 7.784366607666016e-05, "__label__fashion_beauty": 0.0001246929168701172, "__label__finance_business": 0.00016999244689941406, "__label__food_dining": 0.00055694580078125, "__label__games": 0.0006961822509765625, "__label__hardware": 0.00095367431640625, "__label__health": 0.0005311965942382812, "__label__history": 0.0002663135528564453, "__label__home_hobbies": 0.00013899803161621094, "__label__industrial": 0.00047850608825683594, "__label__literature": 0.00024437904357910156, "__label__politics": 0.00027179718017578125, "__label__religion": 0.0005168914794921875, "__label__science_tech": 0.024261474609375, "__label__social_life": 0.0001035928726196289, "__label__software": 0.004222869873046875, "__label__software_dev": 0.96337890625, "__label__sports_fitness": 0.00036263465881347656, "__label__transportation": 0.000560760498046875, "__label__travel": 0.00021636486053466797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20839, 0.02206]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20839, 0.44219]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20839, 0.75932]], "google_gemma-3-12b-it_contains_pii": [[0, 887, false], [887, 2300, null], [2300, 4632, null], [4632, 6919, null], [6919, 8490, null], [8490, 11534, null], [11534, 13741, null], [13741, 15554, null], [15554, 17738, null], [17738, 20839, null]], "google_gemma-3-12b-it_is_public_document": [[0, 887, true], [887, 2300, null], [2300, 4632, null], [4632, 6919, null], [6919, 8490, null], [8490, 11534, null], [11534, 13741, null], [13741, 15554, null], [15554, 17738, null], [17738, 20839, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20839, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20839, null]], "pdf_page_numbers": [[0, 887, 1], [887, 2300, 2], [2300, 4632, 3], [4632, 6919, 4], [6919, 8490, 5], [8490, 11534, 6], [11534, 13741, 7], [13741, 15554, 8], [15554, 17738, 9], [17738, 20839, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20839, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
637a7ff69133adbdc5cbfd53478463895b0f5529
Domain-specific transformation of the REA enterprise ontology

Zdenek Melis, University of Ostrava, Czech Republic, e-mail: Zdenek.Melis@Osu.cz
Jaroslav Zacek, University of Ostrava, Czech Republic, e-mail: Jaroslav.Zacek@Osu.cz
Frantisek Hunka, University of Ostrava, Czech Republic, e-mail: Frantisek.Hunka@Osu.cz

Abstract—This paper gives a general description of a methodology for transforming the basic concepts of the REA enterprise ontology (Resource, Event, Agent) from an initial visual modeling interface into model source code. The paper describes the structure of a domain-specific language (DSL) script and its subsequent transformation into executable source code, using abstract classes to define the structure of the template code. The aim is to create the basic structure of the template concept and the layout of the code generated from the created model. To keep the complexity manageable, the model is limited to the three basic concepts of the REA enterprise ontology.

Keywords—REA ontology; DSM; DSL; transformation

I. INTRODUCTION

As the complexity of information systems increases, so do the demands for clarity and simplicity of control. Technologies that use a visual environment are coming to the fore. One of the leading technologies based on visual programming is domain-specific modeling (DSM) [1]. It focuses on one particular domain, which provides the syntax and semantics of the visual language and the ability to transform created models into executable source code [1]. Business process modeling is one of the complex problems that can exploit visual modeling technology. With regard to the structure of DSM and its code generation possibilities, the most appropriate basis for describing business processes appears to be the REA ontology [2]. The object-oriented structure of the REA ontology allows models to be transformed into other structures, such as source code or a database schema [8, 9, 10].

II. DOMAIN-SPECIFIC MODELING

Domain-specific modeling is a software engineering methodology for software development [1]. The primary focus of DSM is the automated development of applications in a specific domain based on the principles of visual programming. Unlike traditional modeling approaches, which aim to be universal, DSM is built entirely around the specific domain being modeled. The DSM architecture has three layers, although the individual layers overlap. The highest layer, and the one most visible to the user, is the language. The narrow focus on a specific domain allows the whole structure of the language to correspond to domain terms. The syntactic aspects of the language are not limited to a textual form; they can take any visual form representing the concepts of the particular domain. Unlike generally focused traditional approaches, the semantic aspects of the language are contained within the domain. The second layer is the generator, which transforms the created model into the language of the target platform. The last layer is the domain framework. It creates a supporting environment for the generator, which includes many features, such as defining the interface for the generator, integrating the generated code into the system, removing duplicate parts, and many others. The main advantage of DSM is the automated transformation without the manual mapping that is a frequent source of errors. The use of a domain language brings simplicity and makes the tool easy to use even for non-technical users, who can operate it intuitively through their knowledge of the problem domain.
The toughest stage in the development process is creating the modeling tool. Once the tool is developed, using it to create software is very simple and fast, and can increase development productivity up to ten times [5, 6].

III. THE REA ONTOLOGY

The REA enterprise ontology is a concept for designing and creating models of enterprise infrastructures. It is based on resource ownership and exchange. The aim of most businesses is to generate profit, and they must therefore ensure the effectiveness, efficiency, and adaptability of their business processes; this cannot be done without modeling the processes and subsequently analyzing them [4]. There are many business process modeling tools, but due to an inadequate level of abstraction and the use of overly general concepts they are not usable enough for business process modeling. Rather than general modeling techniques, companies use expensive software created directly to the specific requirements of a particular enterprise. The REA ontology does not use general concepts but specific ones. They increase the amount of represented data while maintaining the simplicity of the model.

The REA ontology offers four levels of abstraction [2, 7]. The highest is the Value System Level, which represents the flow of resources between the company and its business partners. The second is the Value Chain Level, describing the links between business processes within a company. The third, the REA model level, describes a specific business process and represents the change in the value of resources. The concepts at this level can be divided into two groups: the operational level and the policy level. The operational level includes the basic concepts describing specific facts and events that have already happened. It comprises the concepts that give the ontology its name: economic resource, event, and agent. The policy level extends the operational level with concepts that specify rules or allow planning. The lowest level of abstraction is the Task level, which describes the model at the instance level, making it implementation-dependent.

This paper deals only with the transformation of the operational level of the REA model level, which has three basic concepts: economic resource, event, and agent. The resource represents an economic material that the business wants to plan, manage, and monitor. Resources can include products, money, services, or, for example, workmen. The economic agent is a person, a group of people, or the whole company that has control over resources. The economic event describes an incremental or decremental change of a resource. From the model perspective the economic event is key to preserving information, because it determines who changed the resource, as well as when, why, and how [3].

One of the reasons for choosing the REA ontology is its support for an object-oriented structure, which is necessary for a successful transformation. The REA ontology model also includes internal rules for verifying the consistency of the model, ensuring the correctness of the created links. At the same time, the models are simple and understandable for the ordinary users who will work with them, yet sufficiently precise for automation [2].

IV. MODEL TRANSFORMATION

The transformation itself consists of several steps. At the beginning, the user creates a model of a business process in the visual interface. This interface contains methods that ensure basic model validation, either by preventing the creation of incorrect links or, failing that, by preventing execution of the second phase, generation.
During the second phase, a DSL script containing the basic elements of the model's structure is generated from the visual model, and the source code is generated on that basis.

A. Visual interface

The user creates business process models in the visual environment, which provides the basic user interface and performs validation and partial verification of the model. It ensures the basic semantic correctness of the model and prevents the generation of incomplete structures. The visual representation of an entity contains the common label of the entity type, its name, and the basic data attributes defining its properties. For simplicity and clarity, only the basic entities representing resources, events, and agents are used.

Figure 1: Visual representation of the model

The header of each element contains the name and the type of the entity. Under the header are attributes with a predefined structure determined by the type of the entity. If necessary, the user can extend or completely change this structure by adding new attributes or changing existing ones. The entity Car can serve as an example. In addition to the basic attributes Name and Amount, the entity is extended with an attribute representing the specific serial number of the car and with Value, which represents the value the resource has for the company (for example, total production costs). The value on the link between the resource and the event indicates the amount of the resource that is changed within the event.

B. DSL script

Once the model is completed and the validation criteria are met, a domain-specific language (DSL) script constituting the prescription for the generator is created. The script is essentially an export of the created model into XML format, omitting the data related to the visual interface of the model (such as the placement of elements, their size, color, and so on). The following code fragment represents the part of the DSL script corresponding to the resource entities from the model shown in Figure 1:

```
<resource title="Car">
  <id type="int">1</id>
  <name type="String">Audi A4</name>
  <sn type="String">35A76C38</sn>
  <amount type="int">1</amount>
  <value type="int" currency="USD">7200</value>
</resource>
<resource title="Money">
  <id type="int">2</id>
  <name type="String">Money</name>
```

The fragment contains new items added by the user: value, which represents the value of the economic resource, and currency, which defines the currency of the funds handled by the business. Individual links are merged into the appropriate entities. An example is the Stockflow link (the link between the Resource and the Event), which contains information on the amount of the increase or decrease of the resource within a single event; this attribute is moved into the Event entity. As mentioned before, the main carrier of information is the Event entity, which has a significant role in obtaining data from the model. For this reason, most of the links in this model are moved into this entity. Each entity has a unique ID, which is used for the unique identification of the element and for replacing individual links with references to that ID.
The following code fragment shows one of the event entities:

```xml
<event title="Sale">
  <id type="int">1</id>
  <name type="String">Sale</name>
  <eventType type="EventType">decrement</eventType>
  <agentReceiveID type="int">8</agentReceiveID>
  <agentProvideID type="int">2</agentProvideID>
  <resourceID type="int">1</resourceID>
  <date type="String">8.12.2002</date>
  <amount type="int">1</amount>
</event>
```

The EventType attribute is determined from the provide/receive links: it records whether the event is, from the business point of view, an increment or a decrement, and the participating agents are determined from the same links. The exchange duality element stores references to all events connected by the duality:

```xml
<exchangeDuality>
  <id type="int">1</id>
  <eventId type="int">1</eventId>
  <eventId type="int">2</eventId>
</exchangeDuality>
```

C. Source code generator

The last phase of the transformation is generating the source code from the created DSL script. Creating a completely general generator would be inefficient and difficult to implement, and the unchanging structure of REA ontology elements makes it unnecessary. The fixed domain structure ensures the durability and stability of domain terms, which allows the general structure of the basic elements to be predefined. Because the number of entities is limited, the part of the code that is the same in every use of an entity can be predefined as a template. For the above model, which is restricted to the three basic entities, two types of templates are used: abstract classes and class templates.

An abstract class contains the basic code structure, the layout of methods, and the predefined basic attributes that the given entity must contain. For the transformation of a model restricted to three entities, the same number of abstract classes is fully sufficient. When the model is extended with other semantic abstractions, the number of abstract classes is not necessarily linear in the number of entities used, because some entities may have several abstract classes depending on the specific uses of the entity in the model. The basic abstract classes are therefore AbstractAgent, AbstractEvent, and AbstractResource.

The abstract class defining the agent entity contains the basic attributes id, name, and company and their access methods, as shown in Figure 2.

Figure 2: Abstract class for the agent entity

These attributes are common to all agents and can be individually extended with additional parameters specifying a particular agent. Using the default parameters is not mandatory, but it is recommended. The creator of the model can ignore these parameters and create new ones. The only restriction is that variable names defined in the parent class cannot be reused with a different data type. The abstract class for a resource entity is defined in a similar way (Figure 3). The amount attribute indicates the quantity of the resource from the business perspective. The other attributes (id and name) are the same as the attributes in the AbstractAgent class.
AbstractResource
- amount: int
- id: int
- name: String
+ getAmount(): int
+ getId(): int
+ getName(): String
+ setAmount(int): void
+ setId(int): void
+ setName(String): void
+ toString(): String

Figure 3: Abstract class for the resource entity

The last abstract class is AbstractEvent, which provides the draft for the event entity (see Figure 4). Unlike the other two abstract classes, it contains the most methods, because the event is the carrier of the basic properties of the model. The attributes agentProvideID, agentReciveID, and resourceID store the links to the individual agents and to the resource.

AbstractEvent
- agentProvideID: int
- agentReciveID: int
- amount: int
- date: java.util.Calendar
- eventType: EventType
- id: int
- name: String
- resourceID: int
+ getAgentProvideID(): int
+ getAgentReciveID(): int
+ getAmount(): int
+ getDate(): String
+ getEventType(): EventType
+ getId(): int
+ getName(): String
+ getResourceID(): int
+ setAgentProvideID(int): void
+ setAgentReciveID(int): void
+ setAmount(int): void
+ setDate(String): void
+ setEventType(EventType): void
+ setId(int): void
+ setName(String): void
+ setResourceID(int): void
+ toString(): String

Figure 4: Abstract class for the event entity

In addition to the basic attributes id and name, the attribute amount also figures here; it indicates the amount of the resource that is changed within the event. Another necessary attribute is date, recording the date of the past event. The Calendar class is intended for storing it, but the output of the visual interface returns the date as a string, so the date must be converted inside the access methods. The last parameter is eventType, which specifies whether the event is, from the business perspective, an increment or a decrement. It is determined by a constant of the enumeration EventType, which is part of this class.

Abstract classes are used to define the basic parameters of the generated classes. The classes themselves are generated using class templates. These operate on a simple principle: instead of generating the whole structure of the code (class headers, methods, and so on), the appropriate template containing all of these structures is applied, and the missing data are added on the basis of the script. Part of the data can be added simply by replacing a non-terminal symbol with the specific name from the script; other data are produced by a simple automaton according to a specified grammar. Non-terminal symbols are written in a template as the non-terminal's name between two percent signs.
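To make this replace-or-expand step concrete before looking at the template itself, the following is a minimal, hypothetical Java sketch of such a generator loop. It is not the authors' implementation: the names TemplateExpander and expand, and the use of a map from keywords to expansion routines standing in for the automata, are our own assumptions for illustration.

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch (not the authors' code): a template is split at
// '%' so that non-terminal names become separate tokens; plain tokens
// are copied to the output, while tokens that name a known keyword are
// replaced by the output of the corresponding expansion routine.
public class TemplateExpander {

    // Maps keywords such as "className" or "attributesDeclaration" to
    // routines that produce their expansion from the DSL script.
    private final Map<String, Supplier<String>> automata;

    public TemplateExpander(Map<String, Supplier<String>> automata) {
        this.automata = automata;
    }

    public String expand(String template) {
        StringBuilder out = new StringBuilder();
        for (String token : template.split("%")) {
            Supplier<String> automaton = automata.get(token);
            if (automaton == null) {
                out.append(token);           // ordinary template text
            } else {
                out.append(automaton.get()); // expansion of the keyword
            }
        }
        return out.toString();
    }
}
```

For example, new TemplateExpander(Map.of("className", () -> "Money")).expand("public class %className%") would yield "public class Money".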
The following code shows the template for a resource entity:

```java
public class %className% extends AbstractResource{

    %attributesDeclaration%

    /*
     * Default constructor
     */
    public %className%(int id, String name, int amount){
        setId(id);
        setName(name);
        setAmount(amount);
    }

    /*
     * Empty constructor
     */
    public %className%(){}

    %fullConstructor%

    %attributesGetterSetter%

    public String toString(){
        return super.toString()%toString%;
    }
}
```

The generator processes the template by splitting it at the % character into an array of strings, the tokens. Each token is compared with a list of keywords; if the comparison finds no match, the token is written to the output file. If the token is recognized as a keyword, the output of the automaton corresponding to that keyword is written to the file instead.

The template above contains several non-terminals. The first of them is %className%, which is replaced by the name of the entity taken from the DSL script. The attribute %attributesDeclaration% is responsible for the declaration of new variables. It is processed by an automaton using the following grammar:

%attributesDeclaration%:
%attributesDeclaration% -> private %attribDataType% %attribName%; %n% %attributesDeclaration%
%attributesDeclaration% -> %e%

The grammar has two rules. As long as there are further undefined variables, the first rule is used; once all new variables from the script have been processed, the second rule is used. Known data types already defined in the abstract class are skipped by the generator at this stage. The non-terminal %attribDataType% is replaced by the data type given for the attribute in the DSL script, in the section behind the element name, and the element name itself replaces the %attribName% attribute. The non-terminal %n% instructs the automaton to emit a newline, and %e% is the empty non-terminal: when it is reached the automaton ends and the generator continues processing the other parameters.

Another non-terminal symbol in the template is %fullConstructor%, which is used to generate a constructor containing all used variables. Its structure is defined by the following grammar:

%fullConstructor%:
%fullConstructor% -> public %name%(int id, String name, String company %attributes%){ this(id, name, company); %setAttributes% }
%fullConstructor% -> %e%

%attributes%:
%attributes% -> , %attribDataType% %attribName% %attributes%
%attributes% -> %e%

%setAttributes%:
%setAttributes% -> this.set%attribName%(%attribName%); %setAttributes%
%setAttributes% -> %e%

This non-terminal is processed only when the DSL script contains new attributes; otherwise the output would be identical to the default constructor. That is why this structure is implemented with a non-terminal instead of being placed directly in the template. The first part generates the general structure of the constructor with the fixed set of variables from the abstract class. In the header of the method, the non-terminal %attributes% generates the list of all newly added variables, including their data types, separated by commas. The body of the method starts by calling the default constructor; then the %setAttributes% non-terminal generates the code that assigns the method's input values to the corresponding variables. The last non-terminal in the template is %toString%, which is used to complete the toString() method. This non-terminal is processed only if the element contains new attributes.
%toString%:
%toString% -> + "\n%attribName%: " + get%attribName%() %toString%
%toString% -> %e%

After the template processing is complete, the output is a class containing the complete source code corresponding to the particular element in the visual interface:

```java
public class Money extends AbstractResource{

    private String currency;

    /**
     * Default constructor
     */
    public Money(int id, String name, int amount){
        setId(id);
        setName(name);
        setAmount(amount);
    }

    /**
     * Empty constructor
     */
    public Money(){}

    public Money(int id, String name, int amount, String currency) {
        this(id, name, amount);
        setCurrency(currency);
    }

    public void setCurrency(String currency){this.currency = currency;}

    public String getCurrency(){return this.currency;}

    public String toString(){
        return super.toString() + "\ncurrency: " + getCurrency();
    }
}
```

In a similar way, templates for agents and events are applied. In addition to these classes, one further class is generated; it creates instances of the generated classes and fills them with data. Extending this class leaves room for instance-level visual modeling or for extending the model with simulation capability.

V. DISCUSSION OF THE USE OF ABSTRACT CLASSES

Using abstract classes can greatly simplify the process of generating the source code, because general and frequently used structures do not have to be generated repeatedly. The disadvantage of this solution is a loss of generality of the generated models. Globally, it is not possible to determine the exact structure of the abstract classes of the REA ontology, because it depends on the specific business process being modeled. It is only possible to determine expected parameters, not mandatory ones. For example, the agent has the expected parameter Name. The abstract class defines it as a String, but the model creator may require an instance of some object instead. Although it is possible to add a new attribute, a different label for the attribute must then be chosen.

The question is when it is appropriate to use an abstract class. If the use of complex data structures in attributes is not expected, the application of abstract classes will significantly simplify the creation of the generator. On the other hand, if generality of the models and greater modeling expressiveness are required, then the application of abstract classes is not recommended.

ACKNOWLEDGMENT

The paper is supported by grant no. 6141 provided by IGA, Faculty of Science, University of Ostrava.
{"Source-Url": "http://worldcomp-proceedings.com/proc/p2012/SER4436.pdf", "len_cl100k_base": 4598, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 28327, "total-output-tokens": 5614, "length": "2e12", "weborganizer": {"__label__adult": 0.0002944469451904297, "__label__art_design": 0.00035953521728515625, "__label__crime_law": 0.0002627372741699219, "__label__education_jobs": 0.0007557868957519531, "__label__entertainment": 4.9948692321777344e-05, "__label__fashion_beauty": 0.00011628866195678712, "__label__finance_business": 0.0003571510314941406, "__label__food_dining": 0.00026297569274902344, "__label__games": 0.00040841102600097656, "__label__hardware": 0.00051116943359375, "__label__health": 0.00036072731018066406, "__label__history": 0.00018346309661865232, "__label__home_hobbies": 6.54458999633789e-05, "__label__industrial": 0.000308990478515625, "__label__literature": 0.00026917457580566406, "__label__politics": 0.00017392635345458984, "__label__religion": 0.00036215782165527344, "__label__science_tech": 0.0146636962890625, "__label__social_life": 6.854534149169922e-05, "__label__software": 0.006244659423828125, "__label__software_dev": 0.97314453125, "__label__sports_fitness": 0.00019991397857666016, "__label__transportation": 0.0003733634948730469, "__label__travel": 0.00015485286712646484}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23887, 0.00824]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23887, 0.64383]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23887, 0.83198]], "google_gemma-3-12b-it_contains_pii": [[0, 4842, false], [4842, 9545, null], [9545, 13369, null], [13369, 17372, null], [17372, 20757, null], [20757, 23887, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4842, true], [4842, 9545, null], [9545, 13369, null], [13369, 17372, null], [17372, 20757, null], [20757, 23887, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23887, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23887, null]], "pdf_page_numbers": [[0, 4842, 1], [4842, 9545, 2], [9545, 13369, 3], [13369, 17372, 4], [17372, 20757, 5], [20757, 23887, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23887, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
ebd692591f7499a772b4a36805182e9b986c00f6
Abstract—Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the auto-generated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.

1. Introduction

Model-based development and automated code generation are increasingly used by NASA missions (e.g., Constellation uses MathWorks' Real-Time Workshop), not only for simulation and prototyping, but also for actual flight code generation, in particular in the Guidance, Navigation, and Control (GN&C) domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The V&V situation thus remains unsatisfactory:

- Code reviews are still necessary for mission-critical applications, but the generated code is often difficult to understand, and requires reviewers to match subtle details of textbook formulas and algorithms to model and/or code.
- Common modeling and programming languages do not allow important requirements to be represented explicitly (e.g., units, coordinate frames, quaternion handedness); consequently, such requirements are generally expressed informally and the generated code is not traced back to these requirements.
- Writing documentation is tedious and therefore often not completed or kept up to date.

In this paper, we describe a new tool that generates human-readable and traceable safety documentation from the results of an automated analysis of auto-generated code. It is based on the AUTOCERT code analysis tool [1], which takes a set of mission safety requirements and formally verifies that the auto-generated code satisfies these requirements. It can verify simple execution-safety requirements (e.g., variable initialization before use, absence of out-of-bounds array accesses), as well as domain- and mission-specific requirements such as the consistent use of Euler angle sequences and coordinate frames. The results of the code analysis are used to generate a natural language report that explains why and how the code complies with the specified requirements. The report makes the following information explicit: the assumptions on the environment (e.g., the physical units of, and constraints on, input signals) and on the intermediate variables in the computation (representing intermediate signals in the model); the algorithms, data structures, and conventions (e.g., quaternion handedness) used by the code generator to implement the model; the dependencies between variables; and the chain of reasoning which allows the requirements to be concluded from the assumptions.
The analysis tool matches candidate algorithms for various mathematical operations against the code, and then uses theorem proving to check that they really are correct implementations. The report is hyper-linked to both the program and the verification conditions, and gives traceability between verification artifacts, documentation, and code. In order to construct a justification that the code meets its requirements, a diligent code reviewer must "rediscover" all the information which is automatically generated by AUTOCERT, so the high-level structured argument provided by our tool can result in substantial savings in effort.

Our approach, both to the formal verification and to the construction of the review reports, is independent of the particular generator used, and we have applied it to code generated by several different in-house and commercial code generators, including MathWorks' Real-Time Workshop. In particular, we have applied our tool to several subsystems of the navigation software currently under development for the Constellation program, and used it to generate review reports for mission-specific requirements such as the consistent use of Euler angle sequences and coordinate frames.

2. Background

2.1. Automated Code Generation

Model-based design and automated code generation (or autocoding) promise many benefits, including higher productivity, reduced turn-around times, increased portability, and elimination of manual coding errors [2], [3]. There are now numerous successful applications of both in-house custom generators for specific projects and generic commercial generators. One of the most popular code generators within NASA is MathWorks' Real-Time Workshop (with the add-on product Embedded Coder), an automatic code generator that translates Simulink/Stateflow models into embeddable (and embedded) C code [4]. By some estimates, 50% of all NASA projects now use Simulink and Real-Time Workshop for at least some of their code development. Code generators have traditionally been used for rapid prototyping and design exploration, or the generation of certain kinds of code (user interfaces, stubs, header files, etc.), but there is a clear trend now to move beyond simulation and prototyping to the generation of production flight code, particularly in the GN&C domain.

2.2. Autocode Assurance

The main challenge in the adoption of code generators in safety-critical domains is the assurance of the generated code. Ideally, the code generator itself should be qualified or even formally verified, but this is rarely done: the direct V&V of code generators is generally too laborious and complicated due to their complex nature, while testing the generator itself can require detailed knowledge of the (often proprietary) transformations it applies [5], [6]. Moreover, the qualification is only specific to the use of the generator within a given project, and needs to be repeated for every project and for every version of the tool. Even worse, if the generator is upgraded during a project, any qualification effort which has been carried out on the previous working version is lost, the code must be re-certified, and the entire tool-chain must now essentially be upgraded. This can offset many of the advantages of using a generator.
Also, even if a code generator is generally trusted, it often requires user-specific modifications and configurations, which necessitate that V&V be carried out on the generated code [7]. In summary, the generated code still needs to be fully tested and certified.

Advocates of the model-driven development paradigm claim that by only needing to maintain models, and not code, the overall complexity of software development is reduced. While it is undoubtedly true that some of the burden of verification can be shifted from code to model, there are additional concerns and, indeed, more artifacts in a model-based development process than just models. Users not only need to be sure that the code implements the model, but also that the code generator is correctly used and configured, that the target adaptations are correct, that the generated code meets high-level safety requirements, that it is integrated with legacy code, and so on. There can also be concerns with the understandability of the generated code. Some explanation of why and how the code satisfies the requirements therefore helps the larger certification process. Automated support for V&V that is integrated with the generator can address some of these complexity concerns. Furthermore, certification requires more than black-box verification of selected properties, otherwise trust in one tool (the generator) is simply replaced with trust in another (the verifier). Automated code generation therefore presents a number of challenges to software processes and, in particular, to V&V, and this leads to risk. The documentation tool we describe here mitigates some of that risk.

2.3. Autocode Verification

In contrast to approaches based on directly qualifying the generator or on testing of the generated code, we have instead developed an independent autocode analysis tool which is nevertheless closely integrated with the code generator. Specifically, AUTOCERT supports certification by formally verifying that the generated code complies with a range of mathematically specified requirements and is free of certain safety violations. However, in an independent V&V (IV&V) context, we must consider the larger picture of certification, of which formal verification is a part, and therefore produce assurance evidence which can be checked either by machines (during proof checking) or by humans (during code reviews). Hence, the tool constructs an independently verifiable certificate, and explains its analysis in a textual form suitable for code reviews. If the tool does not detect any bugs, then it is guaranteed that the auto-generated source code meets the stated requirements. Moreover, the time taken to review and certify the auto-generated code by hand can be compared with the time taken to do it with support from AUTOCERT.

2.3.1. Code Analysis. In order to certify a system, AUTOCERT is given a set of assumptions and requirements. Assumptions are typically constraints on input signals to the system, while requirements are constraints on output signals. The tool then parses, analyzes, and verifies the generated source code with respect to the specified requirements. Note that only the code is analyzed, rather than the model or the generation process. In other words, the code generator is treated as a black box. The key technical idea of our approach is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations, that is, assertions of program properties at key locations in the code.
Annotations are crucial in order to allow the automatic formal verification of the requirements without requiring access to the internals of the code generator, as well as making a precise analysis possible. The annotations are used to generate verification conditions (VCs), which are then proved by an automated theorem prover. We omit further technical details of the verification process (see [15], [8]). During the course of verification, AUTOCERT records various facts, such as the locations of variable definitions and uses, which are later used to generate the review document (Section 4.2).

2.3.2. Customization. AUTOCERT is independent of the particular generator used, and need only be customized to a domain via an appropriate set of annotation schemas, which encapsulate certification cases for matching code fragments. We omit details of the schema language here (see [9]), but note that it is based on a generic pattern language for describing code idioms. Schemas also contain actions which construct the annotations needed to certify a code fragment, and can record other information associated with the code, such as the mathematical conventions it follows. A schema also has a number of different textual descriptions which can be parametrized by the variables in the pattern. This is used during the document generation process.

2.3.3. Certification Browser. The user can view the results of the verification via a certification browser that is integrated with Matlab. This displays the generated code along with the VCs and the review document (to be described below). By selecting a line in the generated code, the user can see the list of VCs that depend on that line. The user can also select a VC and navigate to its source in the code. This action highlights the lines in the RTW-generated code which contribute to the chosen VC (that is, they either carry an annotation from which the given VC was generated or contribute a safety obligation). A click on the source link associated with each VC prompts the certification browser to highlight all affected lines of code, and to display the annotations for the selected VC in the RTW-generated code. Conversely, a click on the line number link at each line of code or on an annotation link will display all VCs associated with that line or annotation. A further click on the verification condition link itself displays the formula, which can then be interpreted in the context of the relevant program fragments.

3. Mathematical Domain

We will illustrate the review document generation using excerpts that explain the verification of several requirements for an attitude module of a spacecraft GN&C system. In addition to being a necessary component of every spacecraft, the GN&C domain is challenging from a verification perspective due to its complex and mathematical nature. We describe the model only at the top level, sufficiently to understand typical requirements. The attitude sub-system takes several input signals, representing various physical quantities, and computes output signals representing other quantities, such as Mach number, angular velocity, position in the Earth-Centered Inertial frame, and so on. Signals are generally represented as floats or quaternions and have an associated physical unit and/or frame of reference. At the model level, transformations of coordinate frames are usually done by converting quaternions to direction cosine matrices (DCMs), applying some matrix algebra, and then converting back to quaternions.
Other computations are defined in terms of the relevant physical equations. Units and frames are usually not explicit in the model; instead, they are expressed informally in comments and identifier names. At the code level, equations and transformations are expressed in terms of the usual loops, function calls, and sequences of assignments. Depending on the optimization settings of the generator, the resemblance to the model can be tenuous. Variables can be renamed and reused, and structures can be merged (e.g., via loop fusion) or split (e.g., to carry out common sub-expression elimination). The challenge for AUTOCERT is to disentangle this complexity and provide a comprehensible explanation in terms of concepts from the model and domain (e.g., [10], [11], [12]). In effect, what the tool must do is reverse engineer the code.

In practice, this semantic abstraction can be seen as going up through several levels before reaching the high-level mathematical concepts appropriate for explanation. Fig. 1 shows the relationships between these levels. At the lowest level is the code itself, along with primitive arithmetic operators. This is, of course, the level at which V&V is actually carried out (we do not consider object code here). The purpose of comments in the code (and model) is generally to informally explain the code at a more abstract level, so AUTOCERT can be seen as formally checking these implicit conventions. At the next level are mathematical operations, such as matrix multiplication and transpose, while low-level datatypes such as floats correspond, at the more abstract level, to physical values of a given unit. These, in turn, are used to represent navigational information in terms of quaternions, DCMs, Euler angles, and so on, in various coordinate systems. This is the level at which we explain the verification. There is a further level of abstraction, at which domain experts think, namely the principles of guidance, navigation, and control themselves, but explanation at this level is currently beyond our scope.

4. Generating Review Documents

4.1. Document Purpose and Assumptions

The generated safety documents serve as structured reading guides for the code and the verification artifacts, showing why and how the code complies with the specified requirements. However, the documents do not simply associate source code locations with verification conditions; in fact, we delegate this to the existing complementary code browser [1] sketched in Section 2.3.3. Instead, the documents call out the high-level operations and conventions used by the generated code (which might be different from those originally specified in the model from which the code was generated, due to optimizations) and the relevant structures in the code (in particular, the paths between the locations where the requirements manifest themselves and where they are established), and associate the verification conditions with these. This provides a "natural" high-level grouping mechanism for the verification conditions, which helps reviewers to focus their attention on the artifacts and locations that are relevant for each safety requirement, and thus conforms to the typically requirements-driven safety certification process. The document construction is based on the assumption that all relevant information can be derived in the verification phase, starting with the variables occurring in the original requirements.
The applied schemas implicitly also indicate which high-level conventions and operations are used by the code (see Section 4.4), and a semantic labeling of the verification conditions [13] allows us to associate with each path only the small number of VCs that actually contribute to demonstrating how a given requirement holds along that path, as opposed to those that are just coincidentally related to it (see Section 4.5).

4.2. Technical Approach

The generated documents are heavily cross-referenced and hyper-linked, both internally and externally, so that HTML/JavaScript is a suitable technical platform. Cross-linking follows not only from the hierarchical document structure (e.g., the links from the requirements summary to the individual requirements sections, see Fig. 2), but also from the traceability links recovered by the analysis phase, primarily the chains of implications from the properties of one variable to the properties of one or more "dependent" variables. Hyper-links are mostly traceability links to other artifacts such as external documents, models, code, or verification conditions that were constructed by the analysis and verification phases. Further hyper-links can be introduced by the concept lexicalization; these usually refer to external documents such as RTW documentation or Wikipedia pages.

The actual document generation process is relatively lightweight and does not require the application of deep natural language generation (NLG) technology [14]. Currently, the document's overall structure is fixed, so that content determination and discourse planning are not necessary. Concept lexicalization, however, relies on text fragments provided by the annotation schemas (for the mathematical and data structures and the operations) or stored in a fact base (for the mathematical operations used in assumptions and other formulas). This step can thus be customized easily. The document generator contains canned text for the remaining fixed parts of the document, and constructs some additional "glue text" to improve legibility. The combined text is post-processed to ensure that the document is syntactically correct. The generator currently produces HTML directly, but changing the final output to, e.g., XML to simplify layout and rendering changes would be relatively straightforward.

4.3. Document Structure

The document consists of a general introduction and a section for each certified requirement. The introduction contains a natural language representation of the formalized requirements and certification assumptions; see Fig. 2 for an example.

This document describes the results of the safety certification for the code generated from the model Attitude.
It consists of sections establishing the following safety requirements:
- rty_7 is a value representing Mach at MSL altitude
- rty_2 is a value representing position in the ECI frame
- rty_1 is a value representing velocity in the ECI frame
- VelocityCompNed is a value representing velocity in the NED frame

The assumptions for the certification are that
- BitwiseOperator_c is positive
- VelocityNED_e is a value representing velocity in the NED frame
- DCMtoQuat_1 is a quaternion representing a transformation from the NED frame to the body fixed frame (Body)
- AtmScaleHt_MslAlt represents the altitude entries in a lookup table
- SpeedOfSound_Lookup represents the speed of sound entries in a lookup table
- GeodeticHeight_g is a value representing geodetic height
- Latitude_g is a value representing geodetic latitude
- Longitude_a is a value representing longitude
- rty_11 is a value representing altitude
- rty_12 is a value representing angular velocity

Figure 2. Requirements and Assumptions

Note that different representations are not necessarily unsafe or unwanted (in fact, DCMs and quaternions can represent the same information), but might nevertheless indicate deeper design problems.

The code relevant to this requirement uses the following data structures:
- DCMs
- Quaternions

The data structures are represented using the following mathematical conventions:
- DCMs are represented as 9-vectors.
- DCMs are represented as 3-vectors.
- The vectors eml_fv5, eml_fv6, and eml_fv7 together represent a DCM.
- Quaternions are right-handed.

In order to certify this requirement, we concentrate on the following operations used in the code:
- a coordinate transformation using a DCM from ECI to ECEF
- a coordinate transformation using a DCM from NED to ECEF
- a coordinate transformation using a DCM from NED to Nav
- conversion of a DCM to a quaternion
- conversion of a quaternion to a DCM
- matrix multiplication
- matrix transpose

Figure 3. High-level Conventions

The variable T_NED_to_body has a single relevant occurrence at line 235 in file Attitude.cpp. Frame safety for this occurrence requires that T_NED_to_body is a DCM representing a transformation from the NED frame to the body fixed frame (Body), or, formally, that

has_frame(T_NED_to_body, dcm(ned, body))

holds. Safety of this use gives rise to three verification conditions:
- Attitude_frame_016_0025 (i.e., establish the postcondition at line 235 (#1))
- Attitude_frame_016_0026 (i.e., establish the postcondition at line 235 (#2))
- Attitude_frame_016_0027 (i.e., establish the postcondition at line 235 (#3))

The frame safety is established at a single location, lines 177 to 189 in file Attitude.cpp, by definition as a DCM matrix from NED to Nav. The correctness of the definition gives rise to two verification conditions:
- Attitude_frame_006_0009 (i.e., establish the postcondition at line 189 (#1))
- Attitude_frame_007_0010 (i.e., establish the precondition at line 177 (#1))

Figure 5. Definitions

The chain starts at those key variables which appear in the requirement, and continues to variables in the assumptions or input signals. Fig. 4 shows one step in this chain.
At this step in the justification, we need to show that the variable T_NED_to_body is a DCM from NED to the Body frame. First, we show that the information which has been inferred at this point in the code does indeed give the variable the required properties. Three VCs establish this (cf. "safety of this use"). Second, the location where the variable is defined is given, and the correctness of that definition is established, i.e., that it does define the relevant form of DCM. In this case, it turns out that that particular definition has been explained earlier in the document, so a link is given to the relevant section (cf. "as above"). We give an example of a definition below. Third, we observe that this definition – a matrix multiplication – depends, in turn, on properties of other variables, i.e., the multiplicands, with which the explanation continues later in the document. Fourth, we show that the properties of the definition are sufficient to imply the properties of the use, and that these properties are preserved along the path connecting the two locations.

**Explaining the definitions.** Fig. 5 gives an example where a DCM has been identified and verified. It gives links to the appropriate lines in the code and links to the VCs that demonstrate the correctness of the definition. In this case there are two VCs: a pre-condition (omitted here), which states that there exist heading and azimuth variables, and a post-condition, which states that the constructed matrix does indeed satisfy the textbook definition of a DCM from NED to Nav, with entries equivalent to the appropriate trigonometric expressions. Structures that involve loops generally have considerably more correctness conditions, with VCs for inner and outer invariants, as well as pre- and post-conditions.

4.6. Tracing

The provision of traceability links between artifacts is crucial to providing certification support, since things cannot be understood in isolation. Indeed, the code review document generated by AUTOCERT can be seen as a structured high-level overview of the traceability links inferred during verification. There are both internal links, where items within the document are linked to each other, and external links to other artifacts. The internal links have been described above, and include links from requirements to safety policies, variables, and concepts. Fig. 6 illustrates the different kinds of external tracing provided by AUTOCERT within the larger Matlab environment. Matlab/RTW already provides bidirectional linking between models and code. To this, the AUTOCERT certification browser adds bidirectional linking between code and VCs. The review documents provide a further layer of tracing, linking code, VCs, and external documents such as Matlab block documentation and Wikipedia articles on domain concepts.
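To illustrate the kind of hyper-linked output this produces (a hand-written sketch; AUTOCERT's actual markup may differ), the use of a variable might be rendered as follows, with the line number and VC names linking to the corresponding external artifacts:

```html
<p>The variable <code>T_NED_to_body</code> has a single relevant
occurrence at <a href="Attitude.cpp.html#line235">line 235</a> in file
<code>Attitude.cpp</code>. Safety of this use gives rise to three
verification conditions:</p>
<ul>
  <li><a href="vcs/Attitude_frame_016_0025.html">Attitude_frame_016_0025</a></li>
  <li><a href="vcs/Attitude_frame_016_0026.html">Attitude_frame_016_0026</a></li>
  <li><a href="vcs/Attitude_frame_016_0027.html">Attitude_frame_016_0027</a></li>
</ul>
```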
5. Conclusion

We have described the review documentation feature of AUTOCERT, an autocode certification tool which has been customized (but is not limited) to the GN&C domain, and have illustrated its use on code generated by Real-Time Workshop from a Matlab model of an attitude sub-system. AUTOCERT automatically generates a high-level narrative explanation of why the specified requirements follow from the assumptions and a background domain theory, and provides hyperlinks between steps of the explanation and the relevant lines of code, as well as the generated verification conditions.

The tool is aimed at facilitating code reviews, thus increasing trust in an otherwise opaque code generator without excessive manual V&V effort, and better enabling the use of automated code generation in safety-critical contexts. We are currently working to automate the linking of inferred concepts to a mission ontology database. The idea is that by automatically annotating the code with inferred concepts, engineers are relieved of this documentation chore. We also plan to provide links to mission requirements documents and other relevant project documentation. Much more can be done to improve the review documents themselves, such as adding more hierarchy and top-level summaries, and listing the formulas and equations that are used in the code. In particular, more information could be gleaned from the proofs, such as the use of constants and lookup tables, as well as the specific assumptions and axioms used by individual requirements, and whether there are any unused assumptions. We are already working on such a proof analysis, and foresee no particular problems in extending the document generator accordingly. We also continue to extend the underlying domain theory that is used to verify the code.

Acknowledgments. Thanks to Allen Dutra for help with the graphics.
{"Source-Url": "https://ti.arc.nasa.gov/publications/606/download/", "len_cl100k_base": 5572, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23909, "total-output-tokens": 6726, "length": "2e12", "weborganizer": {"__label__adult": 0.00037217140197753906, "__label__art_design": 0.00030231475830078125, "__label__crime_law": 0.0003740787506103515, "__label__education_jobs": 0.0006480216979980469, "__label__entertainment": 6.99758529663086e-05, "__label__fashion_beauty": 0.00017201900482177734, "__label__finance_business": 0.00025653839111328125, "__label__food_dining": 0.0003650188446044922, "__label__games": 0.0006589889526367188, "__label__hardware": 0.0015277862548828125, "__label__health": 0.0004673004150390625, "__label__history": 0.0002799034118652344, "__label__home_hobbies": 0.00013971328735351562, "__label__industrial": 0.0007023811340332031, "__label__literature": 0.00022423267364501953, "__label__politics": 0.0002675056457519531, "__label__religion": 0.0004513263702392578, "__label__science_tech": 0.061431884765625, "__label__social_life": 9.822845458984376e-05, "__label__software": 0.0083160400390625, "__label__software_dev": 0.9208984375, "__label__sports_fitness": 0.00046944618225097656, "__label__transportation": 0.0014619827270507812, "__label__travel": 0.00027060508728027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30863, 0.0223]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30863, 0.70445]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30863, 0.9087]], "google_gemma-3-12b-it_contains_pii": [[0, 4258, false], [4258, 9728, null], [9728, 15522, null], [15522, 20038, null], [20038, 21359, null], [21359, 25829, null], [25829, 30115, null], [30115, 30863, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4258, true], [4258, 9728, null], [9728, 15522, null], [15522, 20038, null], [20038, 21359, null], [21359, 25829, null], [25829, 30115, null], [30115, 30863, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30863, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30863, null]], "pdf_page_numbers": [[0, 4258, 1], [4258, 9728, 2], [9728, 15522, 3], [15522, 20038, 4], [20038, 21359, 5], [21359, 25829, 6], [25829, 30115, 7], [30115, 30863, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30863, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
f86ace0c47f61f997cb713c645f457ab620c8996
IODEF/RID over SOAP

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

Documents intended to be shared among multiple constituencies must share a common format and transport mechanism. The Incident Object Description Exchange Format (IODEF) defines a common XML format for document exchange. This draft outlines the SOAP wrapper for all IODEF documents and extensions to facilitate an interoperable and secure communication of documents. The SOAP wrapper allows for flexibility in the selection of a transport protocol. The transport protocols will be provided through existing standards and SOAP bindings, such as SOAP over HTTP(S) and SOAP over BEEP.

1. Introduction

The Incident Object Description Exchange Format (IODEF) [RFCXXXX] describes an XML document format for the purpose of exchanging data between CSIRTs or those responsible for security incident handling for network providers (NPs). The defined document format provides an easy way for CSIRTs to exchange data in a way which can be easily parsed. In order for IODEF documents to be shared between entities, a uniform method for transport is necessary. SOAP will provide a layer of abstraction and enable the use of multiple transport protocol bindings. IODEF documents and extensions will be contained in an XML Real-time Inter-network Defense (RID) [RFCXXXX] envelope inside the body of a SOAP message. The RIDPolicy class of RID (e.g., policy information that may affect message routing) will appear in the SOAP message header.

HTTPS, or HTTP over TLS, has been selected as the REQUIRED SOAP binding for exchanging IODEF/RID messages. The primary reason for selecting HTTPS is the existence of multiple successful implementations of SOAP over HTTP, and it is widely understood. Excellent tool support exists to ease the development of applications using SOAP over HTTP. BEEP may actually be better suited as a transport for RID messages containing IODEF documents, but does not yet have wide adoption. Standards exist for the HTTPS or HTTP/TLS binding for SOAP. A standard is in development for SOAP over BEEP [RFC****]. Standards MUST be followed when implementing transport bindings for RID communications.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

2. SOAP Wrapper

IODEF/RID documents, including all supported extensions, intended to be shared between CSIRTs or NPs MUST use a SOAP wrapper for transport along with a supported transport protocol binding. The transport is independent of the wrapper.
SOAP will be used to provide the messaging framework and can make distinctions as to how messages should be handled by each participating system. SOAP has been selected because of the flexibility it provides for binding with transport protocols, which can be independent of the IODEF/RID messaging system. As defined by the SOAP messaging specifications [18], the IODEF document plus any extensions will be in the SOAP body of the message. The SOAP header contains information that is pertinent to all participating systems that receive the message, including the ultimate destination, any intermediate hosts, and message processing policy information.

Depending on the message or document being transported, there may be a case, such as with RID messages, where a host only needs to view the SOAP header and not the SOAP body, and is therefore acting as a SOAP intermediary node. For example, if one RID system sends a communication to a RID system with which it has no direct trust relationship, an intermediate RID system may be used to provide the trusted path between the communicating systems. This intermediate system may not need to see the contents of the SOAP body. Therefore, the elements or classes needed by all participating systems MUST be in the SOAP header, specifically the RIDPolicy class. Each participating system receiving an incident handling IODEF document is an ultimate destination and has to parse and view the entire IODEF document to make the necessary decisions. The SOAP specifications for intermediate and ultimate nodes MUST be followed; for example, a message destined for an intermediate node would contain the attribute env:role with the value http://www.w3.org/2003/05/soap-envelope/role/next. Also in accordance with the SOAP specifications, the attribute env:mustUnderstand has a value of "true" to ensure each node processes the header blocks consistent with the specifications for IODEF.

SOAP messages are typically a one-way conversation. Incident information that may be transmitted to another RID host in the form of a Report message is the single case within RID where a one-way communication is specified. The arrival of an IODEF/RID Report document in a SOAP message is simply an assertion that a described incident occurred. In the case of other RID message types to support incident handling, two SOAP messages may be exchanged to enable bi-directional communication. Request/response pairs defined by RID include: TraceRequest/TraceAuthorization/Result, Investigation/Result, and IncidentQuery/Report.
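The layout described above can be sketched as follows (an illustrative, non-normative skeleton only; the complete examples appear in Section 4):

```xml
<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope">
 <SOAP-ENV:Header>
  <!-- RIDPolicy: visible to every node on the path, including SOAP
       intermediary nodes that never process the body -->
  <iodef-rid:RIDPolicy
   xmlns:iodef-rid="http://www.ietf.org/iodef/iodef-rid.html"
   SOAP-ENV:role="http://www.w3.org/2003/05/soap-envelope/role/next"
   SOAP-ENV:mustUnderstand="true">
   ...
  </iodef-rid:RIDPolicy>
 </SOAP-ENV:Header>
 <SOAP-ENV:Body>
  <!-- Full RID message plus IODEF-Document: parsed only by the
       ultimate destination -->
  ...
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```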
3. Transport Protocol Bindings

The SOAP binding will be used for message transport. One agreed-upon protocol, HTTPS, MUST be implemented by all RID systems, and other protocols may be used. The use of a single transport binding supported by all systems sending and receiving RID messages and extensions of IODEF will enable interoperability between participating CSIRTs and NPs. Other protocol bindings may be defined in separate documents.

3.1 SOAP over HTTP/TLS

SOAP over HTTP/TLS is widely supported, as are ancillary tools such as the Web Services Description Language (WSDL); hence the selection of SOAP over HTTP 1.1 over TLS as mandatory for transport of RID communications. Security is provided through the RID specification, and all REQUIRED RID security MUST be implemented. TLS offers additional security at the transport layer; this security SHOULD be used.

BCP 56 contains a number of important considerations when using HTTP for application protocols. These include the size of the payload for the application, whether the application will use a web browser, whether the protocol should be defined on a port other than 80, and whether the security provided through HTTPS suits the needs of the new application. It is acknowledged within the scope of these concerns that HTTPS is not ideally suited for IODEF/RID transport, as the former is a client-server protocol and the latter a message-exchange protocol; however, the ease of implementation for services based on SOAP over HTTP outweighs these concerns. Consistent with BCP 56, IODEF over SOAP over HTTP/TLS will require its own TCP port assignment from IANA.

Every RID system participating in a consortium MUST listen for HTTPS connections on the assigned port, as the requests and responses in a RID message exchange transaction may be significantly separated in time. If a RID message is answered immediately, or within a generally accepted HTTP client timeout (about thirty seconds), the listening system SHOULD return the reply message in the HTTP response body; otherwise, it must initiate a connection to the requesting system and send its reply in an HTTP request. If the HTTP response body sent in reply to a RID message does not contain a RID message itself, the response body SHOULD be empty, and RID clients MUST ignore any response body that is not an expected RID message. This provision applies ONLY to HTTP response bodies; unsolicited HTTP requests (such as Reports not preceded by an IncidentQuery) are handled according to the message exchange pattern described in RID.

RID systems SHOULD use HTTP/1.1 persistent connections as described in RFC 2616 to minimize TCP connection setup overhead. RID systems SHOULD support chunked transfer encoding on the HTTP server side to allow the implementation of clients that do not need to pre-calculate message sizes before constructing HTTP headers.
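As a non-normative illustration, an immediately answered exchange might look like the following (the request path and host are placeholders, and the port is omitted from the URL because it is yet to be assigned by IANA):

```
POST /rid HTTP/1.1
Host: rid.example.net
Content-Type: application/soap+xml; charset=utf-8
Content-Length: ...

<SOAP-ENV:Envelope> ... TraceRequest ... </SOAP-ENV:Envelope>

HTTP/1.1 200 OK
Content-Type: application/soap+xml; charset=utf-8
Content-Length: ...

<SOAP-ENV:Envelope> ... TraceAuthorization ... </SOAP-ENV:Envelope>
```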
3.2 SOAP over BEEP

SOAP over BEEP is an optional transport binding for IODEF/RID. A RID system supporting BEEP MAY attempt to use SOAP over BEEP on first connection with a peer; if the peer does not support SOAP over BEEP, the initiating peer MUST fall back to SOAP over HTTPS, and SHOULD note that the peer does not support BEEP, to avoid attempting to use BEEP in future communications.

BEEP has certain implementation advantages over HTTP/TLS for this application; however, the protocol has not been widely implemented. Communications between participating RID systems are on a server-to-server basis, for which BEEP is better suited than HTTP. Incident handling may at times require immediate action, thus a protocol with low overhead and minimum bandwidth requirements is desirable. To provide equivalent transport-layer security to HTTP/TLS, the BEEP TLS profile MUST be supported and SHOULD be used.

4. Examples

The examples below, parallel to the examples in section 4.5 of the RID document, demonstrate how the structure of RID messages fits into SOAP containers as outlined in this document for each message type. Of particular note in each is the duplication of the RID policy information in both the SOAP header and SOAP body. The first example includes the full RID message and associated IODEF-Document; following examples omit the IODEF-Document and refer to it in a comment. The IODEF-Document must be present, and the elements required are outlined in the RID specification.

4.1. Example Trace Request Message

This TraceRequest is identical to the TraceRequest example in Section 4.5.1.1 of RID, and would be sent from a CSIRT reporting a denial-of-service attack in progress to its upstream NP. This request asks the upstream to continue the trace, and to rate-limit traffic closer to the source.

```xml
<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://www.w3.org/2001/12/soap-envelope">
 <SOAP-ENV:Header>
  <iodef-rid:RID
   xmlns:iodef-rid="http://www.ietf.org/iodef/iodef-rid.html"
   xmlns:iodef="http://www.ietf.org/iodef/iodef.html">
   <iodef-rid:RIDPolicy>
    <iodef-rid:MsgType>TraceRequest</iodef-rid:MsgType>
    <iodef-rid:PolicyRegion>Inter-Consortium</iodef-rid:PolicyRegion>
    <iodef-rid:MsgDestination>RIDSystem</iodef-rid:MsgDestination>
    <iodef:Node>
     <iodef:Address category="ipv4-addr">172.20.1.2</iodef:Address>
    </iodef:Node>
    <iodef-rid:TrafficType>RIDSystem</iodef-rid:TrafficType>
    <iodef:IncidentID name="CERT-FOR-OUR-DOMAIN">
     CERT-FOR-OUR-DOMAIN#207-1
    </iodef:IncidentID>
   </iodef-rid:RIDPolicy>
  </iodef-rid:RID>
 </SOAP-ENV:Header>
 <SOAP-ENV:Body>
  <iodef-rid:RID
   xmlns:iodef-rid="http://www.ietf.org/iodef/iodef-rid.html"
   xmlns:iodef="http://www.ietf.org/iodef/iodef.html">
   <iodef-rid:RIDPolicy>
    <iodef-rid:MsgType>TraceRequest</iodef-rid:MsgType>
    <iodef-rid:PolicyRegion>Inter-Consortium</iodef-rid:PolicyRegion>
    <iodef-rid:MsgDestination>RIDSystem</iodef-rid:MsgDestination>
    <iodef:Node>
     <iodef:Address category="ipv4-addr">172.20.1.2</iodef:Address>
    </iodef:Node>
    <iodef-rid:TrafficType>RIDSystem</iodef-rid:TrafficType>
    <iodef:IncidentID name="CERT-FOR-OUR-DOMAIN">
     CERT-FOR-OUR-DOMAIN#207-1
    </iodef:IncidentID>
   </iodef-rid:RIDPolicy>
   <iodef-rid:NPPath>
    <iodef:Name>CSIRT-FOR-OUR-DOMAIN</iodef:Name>
    <iodef:RegistryHandle>CSIRT123</iodef:RegistryHandle>
    <iodef:Email>csirt-for-our-domain@ourdomain</iodef:Email>
    <iodef:Node>
     <iodef:Address category="ipv4-addr">172.17.1.2</iodef:Address>
    </iodef:Node>
   </iodef-rid:NPPath>
   <iodef-rid:NPPath>
    <iodef:Name>CSIRT-FOR-UPSTREAM-NP</iodef:Name>
    <iodef:RegistryHandle>CSIRT345</iodef:RegistryHandle>
    <iodef:Email>csirt-for-upstream-np@ourdomain</iodef:Email>
    <iodef:Node>
     <iodef:Address category="ipv4-addr">172.20.1.2</iodef:Address>
    </iodef:Node>
   </iodef-rid:NPPath>
   <iodef:IODEF-Document version="1.0">
    <iodef:Incident restriction="need-to-know" purpose="handling">
     <iodef:IncidentID name="CERT-FOR-OUR-DOMAIN">
      CERT-FOR-OUR-DOMAIN#207-1
     </iodef:IncidentID>
     <iodef:IncidentData>
      <iodef:Description>Host involved in DOS attack</iodef:Description>
      <iodef:StartTime>2004-02-02T22:19:24+00:00</iodef:StartTime>
      <iodef:DetectTime>2004-02-02T22:49:24+00:00</iodef:DetectTime>
      <iodef:ReportTime>2004-02-02T23:20:24+00:00</iodef:ReportTime>
      <iodef:Assessment>
       <iodef:Impact severity="low" completion="failed" type="none"/>
      </iodef:Assessment>
      <iodef:Contact role="creator" type="organization">
       <iodef:name>CSIRT-FOR-OUR-DOMAIN</iodef:name>
       <iodef:Email>csirt-for-our-domain@ourdomain</iodef:Email>
      </iodef:Contact>
type="organization"> <iodef:name>Constituency-contact for 10.1.1.2</iodef:name> <iodef:Email>Constituency-contact@10.1.1.2</iodef:Email> </iodef:Contact> <iodef:History> <iodef:HistoryItem type="notification"> <iodef:IncidentID name="CSIRT-FOR-OUR-DOMAIN"> CSIRT-FOR-OUR-DOMAIN#207-1 </iodef:IncidentID> <iodef:Description> Notification sent to next upstream NP closer to 10.1.1.2 </iodef:Description> <iodef:DateTime>2001-09-14T08:19:01+00:00</iodef:DateTime> </iodef:HistoryItem> </iodef:History> <iodef:EventData> <iodef:System category="source"> <iodef:Service> <iodef:port>38765</iodef:port> </iodef:Service> <iodef:Node> <iodef:Address category="ipv4-addr">10.1.1.2</iodef:Address> </iodef:Node> </iodef:System> <iodef:System category="target"> <iodef:Service> <iodef:port>80</iodef:port> </iodef:Service> <iodef:Node> <iodef:Address category="ipv4-addr">192.168.1.2</iodef:Address> </iodef:Node> </iodef:System> <iodef:Expectation priority="high" category="rate-limit-host"> <iodef:Description>Rate limit traffic close to source</iodef:Description> </iodef:Expectation> </iodef:EventData> <iodef:Record> <iodef:RecordData> <iodef:RecordItem type="ipv4-packet"> 45000052ad90000ff06c41fc0a801020a010102976d0050103e020810...72206361726566756c6c792072656e64696720466432e0a </iodef:RecordItem> <iodef:Description>"The IPv4 packet included"</iodef:Description> </iodef:RecordData> </iodef:Record> was used in the described attack" </iodef:Description> </iodef:RecordData> </iodef:Record> </iodef:EventData> </iodef:IncidentData> </iodef:Incident> </iodef:IODEF-Document> </SOAP-ENV:Body> </SOAP-ENV:Envelope> 4.2 Example Investigation Request Message This InvestigationRequest is identical to the InvestigationRequest example in section 4.5.2.1 of the RID specification, and would be sent in a situation similar to that of example 4.1, when the source of the attack is known. <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2001/12/soap-envelope"> <SOAP-ENV:Header> <iodef-rid:RID xmlns:iodef-rid="http://www.ietf.org/iodef/iodef-rid.html" xmlns:iodef="http://www.ietf.org/iodef/iodef.html"> <iodef-rid:RIDPolicy> <iodef-rid:MsgType>Investigation</iodef-rid:MsgType> <iodef-rid:MsgDesination>SourceOfIncident</iodef-rid:MsgDestination> <iodef:Node> <iodef:Address category="ipv4-addr">172.25.1.33</iodef:Address> </iodef:Node> </iodef-rid:RIDPolicy> </iodef-rid:RID> </SOAP-ENV:Header> <SOAP-ENV:Body> <iodef-rid:RID xmlns:iodef-rid="http://www.ietf.org/iodef/iodef-rid.html" xmlns:iodef="http://www.ietf.org/iodef/iodef.html"> <iodef-rid:RIDPolicy> <iodef-rid:MsgType>Investigation</iodef-rid:MsgType> <iodef-rid:MsgDesination>SourceOfIncident</iodef-rid:MsgDestination> <iodef:Node> <iodef:Address category="ipv4-addr">172.25.1.33</iodef:Address> </iodef:Node> </iodef-rid:RIDPolicy> </iodef-rid:RID> </SOAP-ENV:Body> 4.3 Example Report This Report is identical to the Report example in section 4.5.3.1 of the RID specification. 4.4 Example Incident Query This IncidentQuery is identical to the IncidentQuery example in Section 4.5.4.1 of the RID specification. 5. Security Considerations All security considerations of related documents MUST be considered including those in the following documents: Requirements for the Format for INCident information Exchange (FINE) [RFCXXXX], the Incident Data Exchange Format Data Model and XML Implementation (IODEF), [RFCXXXX], and Incident Handling: Real-time Inter-network Defense (RID) [RFCXXXX]. 
The SOAP wrappers described herein are built upon the foundation of these documents; the security considerations contained therein are incorporated by reference.

5.1 Privacy and Confidentiality

For transport confidentiality, TLS (whether HTTP/TLS or the BEEP TLS profile) MUST be supported and SHOULD be used. Since multiple bindings for transport may be implemented, RID implementations MUST support XML encryption [4] to encrypt the SOAP body. Since XML encryption is performed at the IODEF document level, a document that is sensitive or contains sensitive elements is encrypted not only in transport but also in storage. Note that this encryption applies only to the SOAP body; policy information in the SOAP header should remain unencrypted if it is necessary for an intermediate node to dispatch the message. XML encryption protects the IODEF document in the SOAP body from being viewed by any involved SOAP intermediary node.

5.2 Authentication and Identification

For transport authentication and identification, TLS (whether HTTP/TLS or the BEEP TLS profile) MUST be supported and SHOULD be used. Each RID consortium SHOULD use a trusted public key infrastructure (PKI) to manage identities for TLS connections. Since multiple bindings for transport may be implemented, RID implementations MUST support XML digital signatures [5] to sign the SOAP body; the rationale and implementation here are parallel to those for XML encryption, in section 5.1 above.

6. IANA Considerations

The IANA is requested to assign TCP ports for RID over SOAP over HTTPS and for RID over SOAP over BEEP.

7. Summary

SOAP provides the wrapper to facilitate the exchange of RID messages. The IETF and W3C standards provide detailed implementation details for SOAP and SOAP protocol bindings. The use of existing standards assists with development and interoperability between RID systems exchanging IODEF documents for incident handling purposes. HTTPS is the mandatory transport binding for SOAP to be implemented, and BEEP is optional.

8. References

Acknowledgements

Funding for the RFC Editor function is currently provided by the Internet Society.

Author Information

Kathleen M. Moriarty
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
Phone: 781-981-5500
Email: Moriarty@ll.mit.edu

Brian H. Trammell
CERT Network Situational Awareness
4500 Fifth Avenue
Pittsburgh, PA 15213
Email: bht@cert.org

Copyright (C) The Internet Society (2006).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

This work was sponsored by the Air Force under Air Force Contract Number FA8721-05-C-0002. "Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government."
{"Source-Url": "https://tools.ietf.org/pdf/draft-ietf-inch-rid-soap-00.pdf", "len_cl100k_base": 5306, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 31265, "total-output-tokens": 7219, "length": "2e12", "weborganizer": {"__label__adult": 0.000431060791015625, "__label__art_design": 0.0004982948303222656, "__label__crime_law": 0.0032482147216796875, "__label__education_jobs": 0.0013685226440429688, "__label__entertainment": 0.0002455711364746094, "__label__fashion_beauty": 0.0002186298370361328, "__label__finance_business": 0.001399993896484375, "__label__food_dining": 0.00030350685119628906, "__label__games": 0.000911235809326172, "__label__hardware": 0.003406524658203125, "__label__health": 0.0004875659942626953, "__label__history": 0.0006237030029296875, "__label__home_hobbies": 9.65595245361328e-05, "__label__industrial": 0.0007281303405761719, "__label__literature": 0.0005731582641601562, "__label__politics": 0.0011205673217773438, "__label__religion": 0.0004868507385253906, "__label__science_tech": 0.318115234375, "__label__social_life": 0.00017011165618896484, "__label__software": 0.1295166015625, "__label__software_dev": 0.53466796875, "__label__sports_fitness": 0.0003254413604736328, "__label__transportation": 0.0009479522705078124, "__label__travel": 0.0002486705780029297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24170, 0.03481]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24170, 0.21465]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24170, 0.78519]], "google_gemma-3-12b-it_contains_pii": [[0, 1560, false], [1560, 1560, null], [1560, 4213, null], [4213, 6958, null], [6958, 9516, null], [9516, 12448, null], [12448, 14220, null], [14220, 15972, null], [15972, 17523, null], [17523, 17635, null], [17635, 17769, null], [17769, 18410, null], [18410, 20532, null], [20532, 22589, null], [22589, 24170, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1560, true], [1560, 1560, null], [1560, 4213, null], [4213, 6958, null], [6958, 9516, null], [9516, 12448, null], [12448, 14220, null], [14220, 15972, null], [15972, 17523, null], [17523, 17635, null], [17635, 17769, null], [17769, 18410, null], [18410, 20532, null], [20532, 22589, null], [22589, 24170, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24170, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24170, null]], "pdf_page_numbers": [[0, 1560, 1], [1560, 1560, 2], [1560, 4213, 3], [4213, 6958, 4], [6958, 9516, 5], [9516, 12448, 6], [12448, 14220, 7], [14220, 15972, 8], [15972, 17523, 9], [17523, 17635, 10], [17635, 17769, 11], [17769, 18410, 12], [18410, 20532, 
13], [20532, 22589, 14], [22589, 24170, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24170, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
5bda5476f66c516c46d6e7a4cebb83a9ed30aa60
[REMOVED]
{"Source-Url": "http://vstte.inf.ethz.ch/pdfs/vstte-hoare-misra.pdf", "len_cl100k_base": 6665, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 22851, "total-output-tokens": 7094, "length": "2e12", "weborganizer": {"__label__adult": 0.0003709793090820313, "__label__art_design": 0.0002608299255371094, "__label__crime_law": 0.00033402442932128906, "__label__education_jobs": 0.0009708404541015624, "__label__entertainment": 5.316734313964844e-05, "__label__fashion_beauty": 0.00015413761138916016, "__label__finance_business": 0.0002617835998535156, "__label__food_dining": 0.0003342628479003906, "__label__games": 0.0004172325134277344, "__label__hardware": 0.0009670257568359376, "__label__health": 0.0005841255187988281, "__label__history": 0.00021958351135253904, "__label__home_hobbies": 9.679794311523438e-05, "__label__industrial": 0.00030994415283203125, "__label__literature": 0.0002770423889160156, "__label__politics": 0.0002180337905883789, "__label__religion": 0.00045371055603027344, "__label__science_tech": 0.0175018310546875, "__label__social_life": 0.00011229515075683594, "__label__software": 0.004665374755859375, "__label__software_dev": 0.970703125, "__label__sports_fitness": 0.00027751922607421875, "__label__transportation": 0.0005183219909667969, "__label__travel": 0.00018012523651123047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37276, 0.013]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37276, 0.4238]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37276, 0.9511]], "google_gemma-3-12b-it_contains_pii": [[0, 3269, false], [3269, 7097, null], [7097, 10772, null], [10772, 14398, null], [14398, 18045, null], [18045, 21911, null], [21911, 25786, null], [25786, 29306, null], [29306, 32549, null], [32549, 36162, null], [36162, 37276, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3269, true], [3269, 7097, null], [7097, 10772, null], [10772, 14398, null], [14398, 18045, null], [18045, 21911, null], [21911, 25786, null], [25786, 29306, null], [29306, 32549, null], [32549, 36162, null], [36162, 37276, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37276, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37276, null]], "pdf_page_numbers": [[0, 3269, 1], [3269, 7097, 2], [7097, 10772, 3], [10772, 14398, 4], [14398, 18045, 5], [18045, 21911, 6], [21911, 25786, 7], [25786, 29306, 8], [29306, 32549, 9], [32549, 36162, 10], [36162, 37276, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37276, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
a93dab5fb0fad89455e980888bc6b5ede94bf130
A guide to inline assembly for C and C++

Basic, intermediate, and advanced concepts

Salma Elshatanoufy
William O'Farrell

November 01, 2011

First, the authors describe the basic usage syntax for inline assembly (inline asm) embedded within C and C++ programs. Then they explain intermediate concepts, such as addressing modes, the clobbers list, and branching stanzas, as well as more advanced topics, such as memory clobbers, the volatile attribute, and locks, which are discussed for those who want to use inline asm in multithreaded applications.

In this article, we discuss several use scenarios for inline assembly, also called inline asm. For beginners, we introduce basic syntax, operand referencing, constraints, and common pitfalls that new users need to be aware of. For intermediate users, we discuss the clobbers list, as well as branching topics that facilitate the use of branch instructions within inline asm stanzas in their C/C++ code. Lastly, we discuss memory clobbers and the volatile attribute for advanced users who use inline asm to optimize their code. We conclude with an example of multithreaded locking with inline asm.

Basic inline asm

In the asm block shown in code Listing 1, the addc. instruction is used to add two variables, op1 and op2. In any asm block, assembly instructions appear first, followed by the inputs and outputs, which are separated by colons. The assembly instructions can consist of one or more quoted strings. The first colon precedes the output operands; the second colon precedes the input operands. If there are clobbered registers, they are inserted after the third colon. If there are no clobbered registers for the asm block, the third colon can be omitted, as Listing 2 shows.

Listing 1. Opcodes, inputs, outputs, and clobbers

```c
int res=0;
int op1=20;
int op2=30;
asm ( " addc. %0,%1,%2 "
      : "=r"(res)
      : "b"(op1), "r"(op2)
      : "r0" );
```

Listing 2. No clobbered registers for the asm block, so third colon omitted

```c
asm ( " addc. %0,%1,%2 "
      : "=r"(res)
      : "b"(op1), "r"(op2) );
```

Note: The clobbers list is discussed later in this article.

Each instruction "expects" inputs and outputs to be passed in a certain format. In the previous example, the addc. instruction expects its operands to be passed through registers, hence op1 and op2 are passed into the asm block with the "b" and "r" constraints. For a complete listing of all legal asm constraints for the IBM XL C and C++ compiler, see the compiler language reference.

Register constraints on variable declarations

In some programs, you will want to tie variables to certain hardware registers. This is done at the variable declaration. The following example ties the variable res to GPR0 throughout the life of the program:

```c
int register res asm("r0")=0;
```

When the variable type is not matched with the type of the target hardware register, you will receive a compilation error. After a variable is tied to a specific register, it is not possible to use another register to hold the same variable. For example, the following code will cause a compilation error: the variable res is associated at declaration time with GPR0, but in the asm block the user attempts to use any register but GPR0 to pass in res.

Listing 3. Compilation error when conflicting constraints are used on a variable

```c
int register res asm("r0")=0;
asm ( " addc. %0,%1,%2 "
      : "=b"(res)
      : "b"(op1), "r"(op2)
      : "r0" );
```
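Before moving on to memory operands, here is a self-contained version of the basic add example (our addition, not from the original article). It assumes a POWER target; the cr0 clobber is our conservative addition, since the dot-form addc. updates condition register field 0:

```c
#include <stdio.h>

int main(void) {
    int res = 0;
    int op1 = 20;
    int op2 = 30;

    /* Add op1 and op2 with addc.; "=r" lets the compiler pick the
       result register, and "b" keeps op1 out of r0 (see the later
       discussion of the "b" constraint). */
    asm (" addc. %0,%1,%2 "
         : "=r"(res)
         : "b"(op1), "r"(op2)
         : "cr0");

    printf("res = %d\n", res);   /* prints: res = 50 */
    return 0;
}
```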
In the example in Listing 4, there is no output operand for the stw instruction, hence the outputs section of the asm is empty. None of the registers is modified, so they are all input operands, and the target address is passed in with the input operands. However, something is modified: the addressed memory location. But that location is not explicitly mentioned in the instruction, so the output of the instruction is implicit rather than explicit.

Listing 4. Instructions with no output operands

```c
int res [] = {0,0};
int a=45;
int *pointer = &res[0];
asm ( " stw %0,%1(%2) "
      :
      : "r"(a), "i"(sizeof(int)), "r"(pointer) );
```

Listing 5. Instructions with preserved operands

```c
int res [] = {0,0};
int a=45;
asm ( " stw %1,%2(%3) "
      : "+r"(res[0])
      : "r"(a), "i"(sizeof(int)), "r"(pointer) );
```

In Listing 5, if you want to preserve the initial value of a result variable that is not necessarily modified by the asm block, then you need to use the + (plus sign) constraint to preserve the initial value of that variable, as is shown with res[0]. (Note that adding res[0] as operand %0 shifts the numbering of the remaining operands by one.)

Target memory addresses in inline asm

If an instruction specifies two of its arguments in a form similar to D(RA), where D is a literal value and RA is a general register, then this is taken to mean that D+RA is an effective address. In this case, the appropriate constraints are "m" or "o". Both "m" and "o" refer to memory arguments. Constraint "o" is described as an offsettable memory location. But in the IBM® POWER® architecture, nearly all memory references require an offset, so "m" and "o" are equivalent. In this case, you can use a single constraint to refer to two operands in the instruction. Listing 6 is an example.

Listing 6. A single constraint to refer to two operands in the instruction

```c
int res [] = {0,0};
int a=45;
asm ( " stb %1,%0 "
      : "=m"(res[1])
      : "r"(a) );
```

The form of the instruction stb (from the assembly language reference) is: stb RS,D(RA). Although the stb instruction technically takes three operands (a source register, an address register, and an immediate displacement), the asm description of it uses only two constraints. The "=m" constraint is used to notify the compiler that the memory address of res[1] is to be used for the result of the store instruction; the "=m" indicates that the operand is a modified memory location. You do not need to know the address of the target location beforehand, because that task is left to the compiler. This allows the compiler to choose the right register (r1 for an automatic variable, for instance) and apply the right displacement automatically. This is necessary, because it would generally be impossible for an asm programmer to know what address register and what displacement to use. In other instances, you can also override this behavior by manually calculating the target address, as in the following example.

Listing 7. Manually calculating the target address

```c
int res[] = {0, 0};
int a=45;
asm ( " stb %0,%1(%2) \n"
      :
      : "r"(a), "i"(sizeof(int)), "r"(&res) );
```

In this code, the specification %1(%2) represents a base address and an offset: %2 is the base address (the address of res), and %1 is the offset, sizeof(int). As a result, the store is performed at the effective address &res plus sizeof(int), that is, at res[1].

Note: For some instructions, GPR0 cannot be used as a base address. Specifying GPR0 tells the assembler not to use a base register at all. To ensure that the compiler does not choose r0 for an operand, you can use the constraint "b" rather than "r".
Addressing modes for POWER and PowerPC instructions

The IBM POWER architecture is a RISC architecture. Instructions typically operate either with three register arguments (two registers for source arguments, one register to hold a result) or with two registers and an immediate value (one register and one immediate value for the source arguments, and one register to hold the result). There are exceptions to this pattern, but mostly it is true.

Among the instructions that take two registers and an immediate value, there are two special subclasses: load instructions and store instructions. These instructions use the immediate value as an offset to the value in the source register to form an "effective address." The offset value is typically an offset onto the stack (r1 is the stack pointer), or it is an offset to the TOC (Table of Contents -- r2 is the TOC pointer). The TOC is used to promote the construction of position-independent code, which enables efficient dynamic loading of shared libraries on these machines.

When using inline asm, you do not have to use specific registers nor manually construct effective addresses. The argument constraints are used to direct the compiler to choose registers or construct effective addresses appropriate to the requirements of the instructions. Thus, if a general register is required by the instruction, you could use either the "r" or "b" constraint. The "b" constraint is of interest, because many instructions treat a designation of register 0 specially -- a designation of register 0 does not mean that r0 is used, but instead a literal value of 0. For these instructions, it is wise to use "b" to denote the input operands to prevent the compiler from choosing r0. If the compiler chooses r0, and the instruction takes that to mean a literal 0, the instruction would produce incorrect results.

Listing 8. r0 and its special meaning in the stbx instruction

```c
char res[8]={'a','b','c','d','e','f','g','h'};
char a='y';
int index=7;
asm (" stbx %0,%1,%2 "
     :
     : "r"(a), "r"(index), "r"(res) );
```

Here, the expected result string is abcdefgy, but if the compiler chose r0 for %1, then the result would incorrectly be ybcdefgh. To prevent this from happening, use "b", as Listing 9 shows.

Listing 9. Using the "b" constraint to signify a non-zero GPR

```c
asm (" stbx %0,%1,%2 "
     :
     : "r"(a), "b"(index), "r"(res) );
```

Another example is in the following asm block. While it appears that the asm block below does res=res+4, that is not the actual functional behavior of the code.

Listing 10. Meaning of r0 in the second operand with the addi opcode

```c
int register res asm("r0")=5;
int b=4;
asm (" addi %0,%0,%1 "
     : "+r"(res)
     : "i"(b) );
```

where: addi %0 (result operand), %0 (input operand res), %1 (immediate operand b).

Because res is tied to r0, the asm code translates to the assembly instruction:

addi 0,0,4

The second operand does not translate to register zero. Instead, it is interpreted as the immediate number zero. In effect, the following is the result of the addi operation:

res=0+4

This case is special to the addi opcode. If, instead, res was tied to r1, then the original intended behavior would have been obtained:

res=res+4
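A hypothetical fix (our sketch, not from the original article) is to tie res to a register whose designation is not treated as a literal zero, for example r3:

```c
int register res asm("r3") = 5;   /* r3 instead of r0 */

/* With res in r3, the generated "addi 3,3,4" really reads the old
   value of res, so the operation computes res = res + 4 as intended. */
asm (" addi %0,%0,%1 "
     : "+r"(res)
     : "i"(4) );
/* res is now 9 */
```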
Clobbers list

Basic clobbers list

In cases when registers that are not directly tied to the inputs/outputs are used within the asm block, the user must specify such registers within the clobbers list. The clobbers list is used to notify the compiler that the registers contained within the list can potentially have their values altered. Hence, they should not be used to hold data other than that belonging to the instructions that use them. In the example in Listing 11, registers 8 and 7 are added to the clobbers list because they are used in the instructions but are not explicitly tied to any of the input/output operands. Also, condition register field zero is added to the clobbers list for the same reason: although it is not present in the input/output operands, the mfocrf instruction reads that field from the condition register and moves the value into register 8.

Listing 11. Clobbers list example

```c
asm (" addc. %0,%2,%3 \n"
     " mfocrf 8,0x1 \n"
     " andi. 7,8,0xF \n"
     " stw 7,%1 \n"
     : "=r"(res), "=m"(c_bit)
     : "b"(a), "r"(b)
     : "r0","r7","r8","cr0" );   /* clobbers list */
```

If, instead, the mfocrf instruction read from condition register field 1 (cr1), then that field would need to be added to the clobbers list instead. Also, the period [full stop] at the end of the addc. and andi. instructions means their results are compared to zero, and the result of the comparison is stored in condition register field 0.

When clobbered registers are omitted from the clobbers list, the results from the asm operations might not be correct. This is because such clobbered registers might be reused to hold intermediate values for other operations. Unless the compiler detects that those registers are clobbered, the intermediate data can be used to perform the programmer's instructions, with inaccurate results. Also, the user's asm instructions may clobber values used by the compiler.

Exceptions to the clobbers list

Nearly all registers can be clobbered, except for those listed in Table 1.

Table 1. Registers that cannot be clobbered

<table>
<thead>
<tr><th>Register</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>r1</td><td>stack pointer</td></tr>
<tr><td>r2</td><td>TOC pointer</td></tr>
<tr><td>r11</td><td>environment pointer</td></tr>
<tr><td>r13</td><td>64-bit mode thread local data pointer</td></tr>
<tr><td>r30</td><td>often used by the compiler as a stack frame pointer, pointer to constant area</td></tr>
<tr><td>r31</td><td>often used by the compiler as a stack frame pointer, pointer to constant area</td></tr>
</tbody>
</table>

Memory clobbers

A memory clobber implies a fence, and it also impacts how the compiler treats potential data aliases. A memory clobber says that the asm block modifies memory that is not otherwise mentioned in the asm instructions. For example, a correct use of memory clobbers would be when using an instruction that clears a cache line. The compiler will assume that virtually any data may be aliased with the memory changed by that instruction. As a result, all required data used after the asm block will be reloaded from memory after the asm completes. This is much more expensive than the simple fence implied by the "volatile" attribute (discussed later). Remember, because the memory clobber says anything might be aliased, everything that is used needs to be reloaded after the asm, regardless of whether it had anything to do with the asm. A memory clobber can be added to the clobbers list by simply using the word "memory" instead of a register name.
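As a concrete illustration of a legitimate memory clobber (our sketch, not from the original article): dcbf flushes the cache block containing a given address, modifying memory that none of the asm operands mention, so "memory" belongs in the clobbers list:

```c
void flush_block(void *p) {
    /* dcbf (data cache block flush) writes the block containing *p
       back to memory. The compiler cannot tell which locations are
       affected, so the "memory" clobber forces it to treat all
       register-cached values as stale after the asm block. */
    asm volatile (" dcbf 0,%0 "
                  :
                  : "r"(p)
                  : "memory");
}
```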
Branching

Basic branching

Branching can be tricky with inline asm, because you would need to know the address of the instruction to which to branch before compile time. Although this is not possible, you can use labels. Using labels, the branch-to address can be designated with a unique identifier that can be used as a target branch address. Within a single source file, labels cannot be repeated within an inline asm block, nor within neighboring asm blocks within the same source. In a given program, each label is unique. There is an exception to this rule, however: if you use relative branching (more on this later), more than one label with the same identifier can be found within the same program and within the same asm block.

Note: Labels cannot be used in asm to define macros because of possible namespace clashes.

In the example in Listing 12, the branch occurs when the LT bit, bit 0, of the condition register is set. If it is not set, then the branch is not taken.

Listing 12. Example of branch taken when LT bit of CR0 is set (0x80000000)

```c
asm (" addic. %0,%2,%4 \n"
     " bc 0xC,0,here \n"
     " there: add %1,%2,%3 \n"
     " here: mul %0,%2,%3 \n"
     : "=r"(res), "=r"(res2)
     : "r"(a), "r"(b), "r"(c)
     : "cr0" );
```

Likewise, a branch would occur if the GT bit (bit 1) of the condition register is set, as in the code in Listing 13.

Listing 13. Example of branch taken when GT bit of CR0 is set (0x40000000)

```c
asm (" addic. %0,%2,%4 \n"
     " bc 0xC,1,here \n"
     " there: add %1,%2,%3 \n"
     " here: mul %0,%2,%3 \n"
     : "=r"(res), "=r"(res2)
     : "r"(a), "r"(b), "r"(c)
     : "cr0" );
```

With inline asm, it is perfectly legal to branch within the same asm block; however, it is not possible to branch between different asm blocks, even if they are contained within the same source.

Relative branching

As discussed earlier, relative branching allows you to reuse the name of a label more than once within the same program. It is predominantly used, however, to dictate the position of the target address relative to the branch instruction. These are the relative branch codes that can be used:

- F - forward
- B - backward

Note: They must be suffixed to labels to be syntactically correct.

In the example in Listing 14, notice that the target address is referenced as "Hereb". In this case, we use the label of the target address appended with a suffix that dictates where this label is located relative to the branch instruction. The label "Here" is located before the branch instruction, hence the use of the "b" suffix in "Hereb".

Listing 14. Relative branch to a label that precedes the branch instruction

```c
asm (" Here: add %1,%2,%3 \n"
     " addic. %0,%2,%4 \n"
     " bc 0xC,0,Hereb \n"
     " mul %0,%2,%3 \n"
     : "=r"(res), "=r"(res2)
     : "r"(a), "r"(b), "r"(c)
     : "cr0" );
```

The condition register

The condition register is used to capture information on the results of certain instructions. For non-floating point instructions with period (.) suffixes that set the CR, the result of the operation is compared to zero:

- If the result is greater than zero, then bit 1 of the CR field is set (0x4).
- If it is less than zero, then bit 0 is set (0x8).
- If the result is equal to zero, then bit 2 is set (0x2).

For all compare instructions, the two values are compared, and any CR field can be set (not just CR0).
Table 2 lists the bits and their corresponding meanings (there are eight such sets of 4 bits in the condition register, called cr0, cr1, cr2 ... cr7).

Table 2. Bits of a CR field and the meanings of different settings

<table>
<thead>
<tr><th>Bit</th><th>Name</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>LT</td><td>RA &lt; 0</td></tr>
<tr><td>1</td><td>GT</td><td>RA &gt; 0</td></tr>
<tr><td>2</td><td>EQ</td><td>RA = 0</td></tr>
<tr><td>3</td><td>U</td><td>Overflow, for integer operations. Unordered, for floating point operations</td></tr>
</tbody>
</table>

Note: For floating point instructions with a period suffix, CR1 is set to the upper 4 bits of the FPSCR.

The volatile attribute

Making an inline asm block "volatile", as in this example, ensures that, as it optimizes, the compiler does not move any instructions above or below the block of asm statements:

```c
asm volatile (" addic. %0,%1,%2 "
              : "=r"(res)
              : "r"(a), "r"(a) );
```

This can be particularly important in cases when the code is accessing shared memory. This will be illustrated in the next section on multithreaded locking.

Multithreaded locking

One of the most common uses of inline asm is in writing short segments of instructions to manage multithreaded locks. Because of the loose memory model of the POWER architecture, constructing such locks requires careful use of a pair of instructions:

- One instruction that loads the lock word and creates a "reservation"
- Another that updates the lock word if the reservation hasn't been lost in the interim

Note: If the reservation has been lost, a loop can be used to retry repeatedly.

Listing 15 shows a basic inline function that attempts to acquire a lock (there are several problems with this code, which we discuss after these examples).

Listing 15. Example of acquire lock function coded in asm

```c
inline bool acquireLock(int *lock){
    bool returnvalue = false;
    int lockval;

    asm ("0: lwarx %0,0,%2 \n"   // load lock and reserve
         " cmpwi 0,%0,0 \n"      // compare the lock value to 0
         " bne 1f \n"            // not 0, then exit function
         " ori %0,%0,1 \n"       // set the lock to 1
         " stwcx. %0,0,%2 \n"    // try to acquire the lock
         " bne 0b \n"            // reservation lost, try again
         " ori %1,%1,1 \n"       // set the return value to true
         "1: \n"                 // didn't get lock, return false
         : "+r" (lockval), "+r" (returnvalue)
         : "r"(lock)             // parameter lock is an address
         : "cr0" );              // cmpwi, stwcx. both clobber cr0
    return returnvalue;
}
```

Listing 16 is an example of how this inline function could be used.

Listing 16. Example of how the acquireLock function can be used

```c
if (acquireLock(lockWord)){
    //begin to use the shared region
    temp = x + 1;
    . . .
}
```

Because the function is inline, the resulting code won't have an actual call in it. Instead, it will precede the use of the shared region x with the instructions to acquire the lock. The first problem to notice with this code is the lack of a synchronization instruction. One of the key performance enhancements enabled by the loose memory model of the POWER architecture is the ability of the machine to reorder loads and stores to make more efficient use of internal pipelines. However, there are times when the programmer needs to curtail this reordering to some degree to properly access shared storage. In the case of a lock, you would not want a load of data from the shared region ("x" in the case above) to be reordered so that it occurs before the lock on the region is acquired.
For this reason, a synchronization instruction should be inserted to tell the machine to limit reordering in this case. The sync instruction is often used for this purpose, but there are others available, as described in the POWER ISA (see Resources for a link). In the code example in Listing 17, we inserted a sync instruction to prevent reordering of loads of "x" (this is called an "import barrier"):

Listing 17. Sync example

```c
inline bool acquireLock(int *lock) {
    bool returnvalue = false;
    int lockval;

    asm ("0: lwarx %0,0,%2 \n"   // load lock and reserve
         " cmpwi 0,%0,0 \n"      // compare the lock value to 0
         " bne 1f \n"            // not 0, then exit function
         " ori %0,%0,1 \n"       // set the lock to 1
         " stwcx. %0,0,%2 \n"    // try to acquire the lock
         " bne 0b \n"            // reservation lost, try again
         " sync \n"              // import barrier
         " ori %1,%1,1 \n"       // set the return value to true
         "1: \n"                 // didn't get lock, return false
         : "+r" (lockval), "+r" (returnvalue)
         : "r"(lock)             // parameter lock is an address
         : "cr0" );              // cmpwi, stwcx. both clobber cr0
    return returnvalue;
}
```

In that asm block, the sync will prevent any subsequent loads from occurring until after it is known which way the preceding branch went. That way the variable x will not be loaded unless the branch was not taken and acquireLock returns true.

So, are we set now? Unfortunately not. We still have to worry about what the compiler might do. Modern optimizing compilers can be very aggressive in moving code around -- and even removing it completely -- if it appears that the changes might make the program run faster without changing the semantics of the code. However, compilers typically aren't aware of the complexities involved with accessing shared memory. For example, a compiler might move the statement temp = x + 1; to a place higher in the program if it determines that the result would be scheduled more efficiently (and it assumes that the "if" is usually taken). Of course, that would be disastrous from the viewpoint of accessing shared data. To prevent the movement of any loads (or any instructions at all) from below the inline asm to a location above it, you can use the keyword "volatile" (also known as the volatile attribute) to modify the asm block, as Listing 18 shows.

Listing 18. Volatile keyword example

```c
inline bool acquireLock(int *lock) {
    bool returnvalue = false;
    int lockval;

    asm volatile ("0: lwarx %0,0,%2 \n"   // load lock and reserve
         " cmpwi 0,%0,0 \n"      // compare the lock value to 0
         " bne 1f \n"            // not 0, then exit function
         " ori %0,%0,1 \n"       // set the lock to 1
         " stwcx. %0,0,%2 \n"    // try to acquire the lock
         " bne 0b \n"            // reservation lost, try again
         " sync \n"              // import barrier
         " ori %1,%1,1 \n"       // set the return value to true
         "1: \n"                 // didn't get lock, return false
         : "+r" (lockval), "+r" (returnvalue)
         : "r"(lock)             // parameter lock is an address
         : "cr0" );              // cmpwi, stwcx. both clobber cr0
    return returnvalue;
}
```

When you do this, an internal fence is placed before and after the asm block that prevents instructions from being moved past it. And remember that this asm block is inlined, so it will prevent the access to x from being moved above the asm-implemented lock.

Memory clobbers in multithreaded locking

The discussion of multithreaded locking would not be complete without a mention of memory clobbers. The keyword memory is often added to the clobbers list in such situations, although it is not always clear why it would be needed. The use of memory in the clobbers list means that memory is altered unpredictably by the asm block.
However, memory modifications in the locking example given are quite predictable. Although the variable lock is a pointer (that points to a lock location), that isn't any more unpredictable than the expression "*lock" in a C program. In that case, a well-behaved compiler would likely associate the expression "*lock" with all variables of the appropriate type, and so would correctly reload any affected variables after the pointer was used for modifying data. Nonetheless, the use of memory clobbers appears to be a pervasive practice, which is probably driven by an abundance of caution when dealing with shared regions. Programmers should be aware, though, of the performance penalties involved and of alternative approaches.

When an inline asm includes "memory" in the clobbers list, it means that any variable in the program might have been modified by the asm, so every variable must be reloaded before it is used. This requirement can take a sledgehammer to the compiler's optimization efforts.

A potentially lighter-weight approach would be to make the shared region volatile (in addition to the asm block itself). Making a variable volatile means its value must be reloaded before it is used in any given expression. If the shared region in question is a data structure, such as a list or queue, this will ensure that the updated structure is reloaded after the lock is acquired. However, all of the non-shared data accesses can enjoy the full complement of compiler optimizations.

Tip: If the shared data structure is accessed by a pointer (say *p), be sure to declare the pointer so that you indicate that it's the object pointed to that is volatile, not the pointer itself. For example, this declares that the list pointed to by p is volatile:

```c
volatile list *p;
```

Acknowledgments

Thank you Ian McIntosh, Christopher Lapkowski, Jim McInnes, and Jae Broadhurst. You've each played an important role in publishing this article.

© Copyright IBM Corporation 2011
{"Source-Url": "https://www.ibm.com/developerworks/rational/library/inline-assembly-c-cpp-guide/inline-assembly-c-cpp-guide-pdf.pdf", "len_cl100k_base": 6751, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 26335, "total-output-tokens": 7430, "length": "2e12", "weborganizer": {"__label__adult": 0.0002758502960205078, "__label__art_design": 0.0002038478851318359, "__label__crime_law": 0.0002211332321166992, "__label__education_jobs": 0.00023221969604492188, "__label__entertainment": 4.07099723815918e-05, "__label__fashion_beauty": 0.0001119375228881836, "__label__finance_business": 9.256601333618164e-05, "__label__food_dining": 0.0003082752227783203, "__label__games": 0.00048422813415527344, "__label__hardware": 0.0016088485717773438, "__label__health": 0.00024247169494628904, "__label__history": 0.00013256072998046875, "__label__home_hobbies": 7.855892181396484e-05, "__label__industrial": 0.00037789344787597656, "__label__literature": 0.00012189149856567384, "__label__politics": 0.0001633167266845703, "__label__religion": 0.00041961669921875, "__label__science_tech": 0.006557464599609375, "__label__social_life": 4.166364669799805e-05, "__label__software": 0.004650115966796875, "__label__software_dev": 0.98291015625, "__label__sports_fitness": 0.00024437904357910156, "__label__transportation": 0.0003757476806640625, "__label__travel": 0.00014603137969970703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26936, 0.03373]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26936, 0.62273]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26936, 0.89139]], "google_gemma-3-12b-it_contains_pii": [[0, 2053, false], [2053, 4146, null], [4146, 6469, null], [6469, 9248, null], [9248, 10793, null], [10793, 13114, null], [13114, 15524, null], [15524, 17268, null], [17268, 19264, null], [19264, 21476, null], [21476, 24488, null], [24488, 26936, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2053, true], [2053, 4146, null], [4146, 6469, null], [6469, 9248, null], [9248, 10793, null], [10793, 13114, null], [13114, 15524, null], [15524, 17268, null], [17268, 19264, null], [19264, 21476, null], [21476, 24488, null], [24488, 26936, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26936, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26936, null]], "pdf_page_numbers": [[0, 2053, 1], [2053, 4146, 2], [4146, 6469, 3], [6469, 9248, 4], [9248, 10793, 5], [10793, 13114, 6], [13114, 15524, 7], [15524, 17268, 8], [17268, 19264, 9], [19264, 21476, 10], [21476, 24488, 11], [24488, 26936, 12]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26936, 0.05068]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
9a09880f526928dceee48e5dbc7a51df67f8b8f0
[REMOVED]
{"len_cl100k_base": 5376, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 27330, "total-output-tokens": 7139, "length": "2e12", "weborganizer": {"__label__adult": 0.0003533363342285156, "__label__art_design": 0.0004863739013671875, "__label__crime_law": 0.0004968643188476562, "__label__education_jobs": 0.0009560585021972656, "__label__entertainment": 8.797645568847656e-05, "__label__fashion_beauty": 0.0002007484436035156, "__label__finance_business": 0.00033092498779296875, "__label__food_dining": 0.00033402442932128906, "__label__games": 0.0005931854248046875, "__label__hardware": 0.0010700225830078125, "__label__health": 0.00078582763671875, "__label__history": 0.0003116130828857422, "__label__home_hobbies": 8.380413055419922e-05, "__label__industrial": 0.000415802001953125, "__label__literature": 0.0003619194030761719, "__label__politics": 0.0003883838653564453, "__label__religion": 0.0004353523254394531, "__label__science_tech": 0.07861328125, "__label__social_life": 0.00010222196578979492, "__label__software": 0.00974273681640625, "__label__software_dev": 0.90283203125, "__label__sports_fitness": 0.00027942657470703125, "__label__transportation": 0.0005350112915039062, "__label__travel": 0.0002079010009765625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34744, 0.02633]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34744, 0.42107]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34744, 0.91209]], "google_gemma-3-12b-it_contains_pii": [[0, 3581, false], [3581, 7699, null], [7699, 11956, null], [11956, 15736, null], [15736, 19615, null], [19615, 24898, null], [24898, 27190, null], [27190, 29338, null], [29338, 34107, null], [34107, 34744, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3581, true], [3581, 7699, null], [7699, 11956, null], [11956, 15736, null], [15736, 19615, null], [19615, 24898, null], [24898, 27190, null], [27190, 29338, null], [29338, 34107, null], [34107, 34744, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34744, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34744, null]], "pdf_page_numbers": [[0, 3581, 1], [3581, 7699, 2], [7699, 11956, 3], [11956, 15736, 4], [15736, 19615, 5], [19615, 24898, 6], [24898, 27190, 7], [27190, 29338, 8], [29338, 34107, 9], [34107, 34744, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34744, 0.09735]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
52770581be99bd7a5b006ddc46e9dad05760a534
Trace replay with change propagation impact in client/server applications

Raafat Zarka\textsuperscript{1,2}, Amélie Cordier\textsuperscript{1,3}, Elöd Egyed-Zsigmond\textsuperscript{1,2}, Alain Mille\textsuperscript{1,3}

\textsuperscript{1}Université de Lyon, CNRS
\textsuperscript{2}INSA-Lyon, LIRIS, UMR5205, F-69621, France
\textsuperscript{3}Université Lyon 1, LIRIS, UMR5205, F-69622, France

{raafat.zarka, amelie.cordier, elod.egyed-zsigmond, alain.mille}@liris.cnrs.fr

Abstract: To help end-users master complex applications, it is often efficient to enable them to "replay" what they have done so far. In some cases, it is even more useful to enable them to modify some values of the actions they are replaying. However, while doing so, it is very important to deal with the consequences of these changes on the remainder of the replay process. In this paper, we describe our models for replaying users' interactions and for managing the impact propagation of changes during the replay process. These models are built upon traces, i.e. digital objects that enable us to record user interactions and to reuse them in different ways. We have implemented the replay process in a Web application called SAP-BO Explorer, an application helping business users to access large amounts of information. Our tool helps users to better understand the application.

Keywords: impact propagation, macro recording, bookmarks, replay traces, human computer interaction.

1. Introduction

With the multiplication and the rapid development of software systems and applications, we now have access to more and more tools, which are usually more and more complicated. While using these tools, we are often lost, usually because we lack the time to understand applications, to get used to them and to exploit them efficiently. In response to this problem, some application designers came up with solutions for helping users either to discover the application or to learn how to be more efficient while using it. Providing relevant assistance to users becomes a real challenge for application designers. Among the proposals for assistance strategies, we usually find tutorials, how-tos, videos, assistants, training courses, etc. However, all these assistance strategies rest upon a static description of the application, hard-coded a priori. They are proposed to users in an identical way and thus are not always well suited to the specific needs of specific users. To overcome this issue, we have proposed, in a previous work (Zarka et al. 2010), to use interaction traces in order to provide users with a personalized and contextualized assistance based on previous experiences.

Interaction traces are relatively new digital objects. An interaction trace is a record of the actions performed by a user on a system. In other words, a trace is a story of the user's actions, step by step. Hence, traces enable us to capture users' experiences. Traces are recorded according to a pre-established model, so that they can be reused in different ways: replay, exploration, modification, modification plus replay, etc. Working with traces raises numerous research issues. How to collect, represent, store, and visualize traces? What mechanisms have to be implemented in order to allow users to browse their personal traces? How to implement a replay mechanism in a pre-existing system?
How to take into account privacy issues when working with traces? Recent research provides solutions to some of these problems and enables us to work within an existing framework for manipulating traces (see (Champin et al. 2004), (Cordier et al. 2009) and (Settouti et al. 2009)). In this paper, we focus on a specific research question: how to replay a trace in a system, and which issues are raised by the replay when the initial situation has been modified?

To better understand this problem, let us consider the following example. A user makes a sequence of manipulations to improve a colored picture: transformation into gray-scale, selection of a scale of gray, luminosity attenuation for the selection, blur effect on the selection. Not satisfied with the result, he decides to go back to the initial state (the original picture) and to replay the whole set of actions, except for the transformation into gray-scale. The question is: "are the remaining actions still possible?" The issue we address in this paper is then: how to enable a trace replay while monitoring the impact of a modification in the trace on the remainder of the process?

In order to address this issue, we have first elaborated a mechanism for performing a simple replay of a trace (i.e. with no modification) from any point in the trace. Then, we have defined a model for impact analysis in order to manage impact propagation after a modification of the trace. Both models are described in this paper. The trace-replay mechanism has been implemented in the widely used SAP-BO Explorer application (SAP 2010), a web application enabling users to load, explore, visualize and export large quantities of data. SAP-BO needed a tool to help their users better understand the application, and this is the solution we designed for them. We have instrumented the initial application in order to collect interaction traces, and we have developed a graphical interface in order to display the traces according to an ad-hoc representation. We have also instrumented the application in order to enable replay of recorded traces. The application is operational, and a demo video of trace replay and visualization is available at: https://liris.cnrs.fr/~rzarka/ReplayTraceDemo/

This paper is organized as follows. In section 2, we survey related work. Then, in section 3, we show how we use traces in order to enable replay of users' interactions. In section 4, we discuss the consequences of a change during the replay, and we propose an impact propagation model. Section 5 gives implementation details. Evaluation and discussion of our proposal are made in section 6. The paper ends with a conclusion and a description of future research issues.

2. Related work

In most existing macro recording systems, users have to be proactive: they need to start and stop macro recording. Bookmark systems are one of the most common macro recording systems. They enable users to "replay" web pages. With Koala (Little et al. 2007), the user can record a sequence of actions and generate a script of keyword commands that can be replayed later. Recorded scripts are stored automatically on a wiki, which might be shared by a workgroup, allowing easy exchange and improvement of scripts. CoScripter (Leshed et al. 2008) is a Firefox plug-in created by IBM Research. It allows users to record and share interactions with websites. It records user actions and saves them in semi-natural language scripts.
The scripts are saved in a central wiki for sharing with other users. WebVCR (Anupam et al. 2000) and WebMacros (Safonov et al. 2001) record web browser actions as a low-level internal representation, which is not editable by the user or displayed in the interface. All these systems require planning to enable recording, while Smart Bookmarks (Hupp & Miller 2007) supports retroactive recording: it automatically captures users' interactions while they navigate the web and displays them through a graphical presentation. When users want to bookmark a webpage, the system automatically determines the sequence of commands needed to return to the page, and saves the sequence as a bookmark. While Smart Bookmarks lets users save or share actions from ongoing browsing sessions, ActionShot (Li et al. 2010) enables users to share actions they have performed before by providing them with a visual interface for browsing their entire history. The ActionShot system is built on top of the CoScripter platform. History data is reused through the re-execution of recorded steps. Sharing is also supported through Facebook, Twitter or via email. Both ActionShot and Smart Bookmarks are generic, but they are implemented as Firefox extensions, which is a limitation. Besides, they cannot work with dynamic pages (e.g. Ajax or Flash based). In Smart Bookmarks, users can modify parameter values before the bookmark starts running. However, these new values may affect commands and cause inconsistent states in the application. Hence, it seems relevant to study the impact propagation of these changes.

Impact propagation analysis is widely studied in the software engineering and database domains. In (Briand et al. 2003), the authors propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. Most techniques to predict the effects of schema changes upon applications that use the database can be expensive and error-prone, making the change process expensive and difficult. In (Maule 2010), the authors present a novel analysis for extracting potential database queries from a program, called query analysis. The impacts of a schema change can be predicted by analyzing the results of query analysis, using a process they call impact calculation. Many systems also support impact analysis. One of them is the Sybase Power Designer Modeling Tool, which provides powerful methods for analyzing the dependencies between object models (Sybase 2010).
**Table 1.** Comparison table of related work

<table> <thead> <tr> <th>System</th> <th>Representation</th> <th>Simple Replay</th> <th>Replay with change</th> <th>Adaptation</th> </tr> </thead> <tbody> <tr> <td>WebMacros</td> <td>No</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>WebVCR</td> <td>Wiki Scripts</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>Koala</td> <td>Text, Firefox Extension</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>CoScripter</td> <td>Graphical (screenshots), Firefox extension</td> <td>Retroactive</td> <td>Yes, without impact propagation</td> <td>Classify buttons for side-effecting</td> </tr> <tr> <td>Smart bookmarks</td> <td>Graphical text explanations, Firefox extension</td> <td>Retroactive</td> <td>Yes, without impact propagation</td> <td>No</td> </tr> <tr> <td>ActionShot</td> <td>Actions list</td> <td>Macro and undo command</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>Photoshop</td> <td>Does not trace</td> <td>Undo command</td> <td>No</td> <td>Impact rules</td> </tr> <tr> <td>Power Designer</td> <td>M-Trace with text explanations</td> <td>Retroactive</td> <td>Yes</td> <td>Impact rules and adapted values</td> </tr> </tbody> </table>

Some applications allow users to replay their actions, like Photoshop (Harrington 2009), by using undo or playback commands. In Photoshop, graphics designers and photographers have a number of processes they frequently perform on their images. By creating macros called "actions", they can automate many routine tasks using simple text files that are recorded in a macro style. Whether the goal is to convert an image for the Web or to transform a color photo into a black-and-white photo, designers can reduce several steps to a click on a single button. Users can create their own macro scripts, which are mini recordings of commands. This is also what we would like to provide, but in our case we need to apply macro recording to systems that do not support undo commands, like most client-server applications. In addition, we do not want to ask the user to start or stop recording his actions. Table 1 shows a comparison between all the presented works according to the way they allow visualization of past actions and whether they support replay with or without change of values.

3. Simple trace replay (go back to a previous state)

In client-server applications, simple undo commands imply data interchange between client and server. This may take a lot of time (especially if the undo has to be repeated many times) and can cause server overload. Besides, such an approach may raise data-loss issues. Last, it is not a scalable solution for situations where a lot of users access the server at the same time. For all these reasons, undo commands are hard to implement. Instead, to enable users to go back to a previous state, we propose to implement a "trace replay mechanism". This mechanism enables users to replay their interactions until they reach the expected state of the application. In order to implement this mechanism, we have defined a trace model (see Fig. 1).

![Fig. 1. Modified trace model to support trace replay](image)

Each user's session is represented by an M-Trace, which consists of a set of observed elements (obsels). Each obsel has a type and two timestamps representing its beginning and ending instants. Each obsel type has a domain of attributes, and each obsel indicates the values of its attributes, respecting the range of each attribute type. An obsel can affect many elements at the same time. For example, pressing a "delete all" button can erase the values of many elements together. By using the obsel attribute values, we can calculate the new values for the related elements, where each obsel attribute concerns only one element. Using this model, we can get all the obsels that can modify every element and all the elements that can be affected by an obsel.
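The following Python sketch (ours, not the paper's implementation) illustrates one possible encoding of this model; for simplicity, it assumes that each attribute name identifies the single element it affects.

```python
from dataclasses import dataclass, field

@dataclass
class Obsel:
    type: str                # e.g. "SelectMeasure"
    begin: float             # beginning timestamp
    end: float               # ending timestamp
    # attribute name -> value; each attribute concerns exactly one element
    attributes: dict = field(default_factory=dict)

@dataclass
class MTrace:
    user_id: str
    obsels: list = field(default_factory=list)   # chronological order

    def obsels_affecting(self, element: str) -> list:
        """All obsels that can modify the given element."""
        return [o for o in self.obsels if element in o.attributes]
```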
When capturing the traces, we do not need to store the values of the elements at each point in time. We only store the attributes and values of each obsel. For example, if a user selects a chart, the value of the obsel will be the ID of the chart and not the whole information about the chart; so we need an element called "selected chart" that contains all the information about the selected chart.

3.1. Playback trace process

Our solution to go back to a previous state of the system is to play back users' actions from a starting point (session start), and not to undo the last ones. When a user chooses to go back to a past state, he can choose the obsel that he wants to return to. The system will automatically go back to this state by replaying all the obsels that happened from the beginning of the session until the selected obsel; let's call it the triggered obsel (the obsel where we want the system to play back to). Fig. 2 [A] shows a simple trace replay: a list of obsels starting from A to R, where R is the replay obsel and C is the triggered obsel. At R, the user asked to replay the trace to bring the system back to its state at the click on C. We can see that all the obsels that happened between C and R will be ignored (E, D, A). This replay is done by one command, which means one call from the client to the server. After replaying the traces, the system will go back to the past state, the user will continue his usage of the system, and new obsels will be collected. An obsel R means that at this point a replay action happened.

Fig. 2 [A] Simple Trace Replay, [B] Trace Replay with change

By replaying the obsels, we can calculate the values of the elements at the replaying point. The **SimpleReplay** algorithm gets the M-Trace and the triggered obsel as input and goes back to a previous state. Firstly, it gets the subset of the trace that should be replayed, from the first obsel to the triggered one, in chronological order. Then this trace is optimized by using the optimization algorithm to delete extra obsels. Each element gets its default values, and then a loop runs over all the obsels, where at each step the element values are updated according to the attributes of the current obsel. At the end, the new element values are applied, making the system go back to this state. The replay event is also captured as a new obsel and taken into consideration during the analysis.

Program `simpleReplay (M-Trace, TriggerredObsel)`

```plaintext
ReplayedTrace := getSubTrace(0, position(TriggerredObsel))
optimize(ReplayedTrace)
Elements := getDefaultValues()
For pos := 0 to getObselCount(ReplayedTrace)-1
    Obsel := ReplayedTrace[pos]
    Attributes := getAttributes(Obsel.Type)
    For each attribute in Attributes
        Value := getAttributeValue(Obsel, attribute)
        Elem := getAffectedElement(attribute)
        Elements[elem] := GenerateElementValue(value)
    End For each
End For
update(Elements)
End Program
```
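Using the model sketched earlier, a runnable form of this pseudocode might look as follows (our sketch; `optimize` is shown in the next subsection, and the default values and value generation are assumed to be supplied by the application).

```python
def simple_replay(trace: MTrace, triggered: Obsel, defaults: dict) -> dict:
    pos = trace.obsels.index(triggered)
    replayed = optimize(trace.obsels[: pos + 1])  # chronological sub-trace
    elements = dict(defaults)                     # start from default values
    for obsel in replayed:
        for element, value in obsel.attributes.items():
            elements[element] = value             # GenerateElementValue stand-in
    return elements                               # caller pushes these to the UI
```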
3.2. Optimized trace replay process

As not all the obsels play a role in changing the state of the system, the replay process can be optimized by reducing the number of replayed obsels. In addition, in some cases many obsels can be ignored, either because they have been canceled by other obsels or because of reset values. Accordingly, we do not need to go through all the obsels in order to go back to the triggered one. Analyzing the previous obsels to get the right values of the elements enables us to optimize the replay process. We can get an optimized chronological list of obsels from the beginning of the session to the triggered obsel; this list is then used to generate the values for each element. The Optimize algorithm tries to delete all unnecessary obsels that induce loops in the trace. For example, for a replay obsel, the sub-trace from the triggered obsel to the replay obsel should be deleted. The same is done for a reset obsel, which means deleting all the obsels from the beginning to the reset obsel. So we consider that there is a list of unnecessary loop obsels in the trace, and in this algorithm all these loops are deleted, as shown in Fig. 3 and in the sketch below.
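A minimal Python sketch of this optimization (ours; it assumes a replay obsel stores a reference to its triggered obsel under the attribute name "triggered"):

```python
def optimize(obsels: list) -> list:
    out = []
    for o in obsels:
        if o.type == "Reset":
            out = []                          # everything before a reset is dead
        elif o.type == "Replay":
            target = o.attributes["triggered"]
            while out and out[-1] is not target:
                out.pop()                     # drop the skipped sub-trace
        else:
            out.append(o)
    return out
```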
4. Replay traces with impact propagation

In this section, we describe how we can replay traces after modifying an obsel, by handling the consequences of the changes on elements before actually performing these changes. This is illustrated in Fig. 2 [B]. R is a replay obsel that triggers a replay of the trace after a change on the values of the triggered obsel C. Because of a change in one of the attribute values of C, the values of some other obsels could become inconsistent, like E and A, while other obsels may remain consistent, like D. We need to calculate the new values in order to take this modification into account. Then the trace can be replayed with these new values, after which the user can continue to use the system. We face many questions: how can we determine the elements affected by a change? Can we be proactive and specify the appropriate new values, without asking the user to enter them? How can we replay the next obsels after applying this change? To answer these questions, we propose to define impact rules describing the dependencies between the elements, for handling the consequences of a change.

4.1. Impact rules for element dependencies

Impact rules define the dependencies between the elements in the system, in order to be able to identify the elements that are affected by a change in another element, and to specify the modifications that should be applied to the affected elements so that they stay consistent and valid. Each rule includes a source element and a condition on its values, which together specify the dependence with a destination element and a condition on its values. A rule says that if specific conditions on the values of the source element are fulfilled, then some of the values of the destination element, determined by the destination condition, cannot exist, which requires replacing these values by an adapted value.

**Definition:** Impact rule. Let $E$ be a set of elements; each element has a name and some values. Let $O$ be a set of operations and $F$ be a set of functions. We can define an impact rule $R$ as an implication of the form:

$$R = (E_S, C_S) \rightarrow (E_D, C_D) : E_A, \quad \text{where } E_S, E_D, E_A \in E,$$

$E_S$ is the source element, $C_S$ is the source condition, $E_D$ is the destination element, $C_D$ is the destination condition, and $E_A$ is the adapted element. $C_S$ and $C_D$ are conditions composed of operations ($O$) and functions ($F$) on the values of the elements. Operations can be logical (and, or, not, etc.), mathematical (+, -, *, /, etc.) or others. Functions can be grouping functions (max, sum, min, count, avg) or custom functions (isNumber, isHoliday, etc.).

For each application, the system's experts define impact rules for the dependencies between the elements, to determine the consequences of modifying a past obsel. We can get all the impacted obsels for each rule from the entity storing the relations between elements and obsels. If we find impact rules having the elements of the modified obsel as source elements, with their values satisfying the source conditions, then, for each destination element whose value satisfies the destination condition, we need to replace the destination element's value by the adapted one. Adapted values can be specified manually as default values, or can be generated automatically using past traces.

For example, in SAP-BO Explorer, we consider an impact rule like: if the number of selected measures is greater than one, the element "Chart" cannot be of type "Pie". If a user asks to replay a trace after modifying the number of selected measures in a way that activates this rule, and if there was a successor obsel changing the chart type to "Pie", then this obsel will not be valid anymore because of this rule, and the chart type will be automatically changed according to the adapted value, "Vertical Bars". The rule is as follows:

$$E_S = \text{Selected Measures} \quad C_S = (\text{Count}() > 1)$$
$$E_D = \text{Chart} \quad C_D = (\text{type} = \text{“Pie”})$$
$$E_A = (\text{Type} = \text{“Vertical Bar”})$$

The user can replay a part of his session after modifying some of the obsel values. These modifications can be of many types: shifting an obsel by changing its timestamps, thus causing a change in the order between obsels; updating a value of an attribute of an obsel; or even deleting an obsel. By using impact rules, we can determine the consequences of a change and the adapted values. In case no adapted value is found for an element, or in the absence of an impact rule, the corresponding obsels become invalid. The user will then have to select a suitable value manually; otherwise, the replay process will fail.
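As an illustration, an impact rule can be represented as plain data. The following Python sketch (ours) encodes the Pie-chart rule from the text; conditions are ordinary callables here, and element values are assumed to be stored as dictionaries.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ImpactRule:
    source: str                          # E_S
    source_cond: Callable[[dict], bool]  # C_S over the source element values
    dest: str                            # E_D
    dest_cond: Callable[[dict], bool]    # C_D over the destination values
    adapted: dict                        # E_A, the replacement values

pie_rule = ImpactRule(
    source="SelectedMeasures",
    source_cond=lambda v: v["count"] > 1,
    dest="Chart",
    dest_cond=lambda v: v["type"] == "Pie",
    adapted={"type": "Vertical Bar"},
)

def apply_rule(rule: ImpactRule, elements: dict) -> None:
    """Replace the destination values by the adapted ones when both
    conditions hold; elements maps element names to value dicts."""
    if rule.source_cond(elements[rule.source]) and \
       rule.dest_cond(elements[rule.dest]):
        elements[rule.dest].update(rule.adapted)
```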
4.2. Retrieving adapted values from past traces

When a user adds a new impact rule, the system asks him to choose the adapted value from a list of possible values, or to let the system calculate it automatically using past traces. For this purpose, we propose to use a retrieval algorithm similar to the algorithm we presented in (Zarka et al. 2010). In the original algorithm, we tried to retrieve episodes similar to the current one without taking obsel values into consideration, because we just wanted to know the next recommended obsels. So, in order to make this algorithm useful for finding the adapted values, we need to make a comparison between the values of the obsels. In addition, we want to retrieve the adapted value for the destination element, and not the next recommended obsels.

The GetAdaptedValue algorithm starts by selecting a subset of the trace from its beginning to the modified triggered obsel. Then it retrieves all the past episodes similar to the current one. Similarity includes value comparison. For each similar episode, it calculates the final value of the corresponding element (the destination element in the impact rule) as we did in the simple replay, without updating the system. If there is more than one value, we take the one that occurs most often and consider it the adapted one. If we are not able to retrieve any episode, we keep this element as an invalid element until another obsel modifies its value; otherwise the replay process fails and the system asks the user to choose the value manually.

5. Implementation

In the previous sections, we have described the models that we have defined to support the replay of user interactions by exploiting traces. In this section, we show how we have implemented our trace replay model in the SAP-BO Explorer application.

5.1. Trace collecting and visualization

Firstly, we modified SAP-BO Explorer to be able to collect obsels. SAP-BO Explorer is divided into two parts. The server part is implemented in Java. The management of users' sessions is done in this part, thus enabling many users to work on the system at the same time. The client part is a Flex application; each user has a web application where he can do his exploration. The traces are collected on the client side. Fig. 4 shows a snapshot of the user interface.

Each time a user starts to use the system, a new session is opened. Each session contains many obsels, and each action of the user is collected as an obsel represented in an XML format specifying the obsel type, timestamps, and the values of this obsel. We consider that the interface of SAP-BO Explorer is divided into task-oriented blocks, where each block contains obsels dedicated to similar kinds of tasks. The interface consists of blocks for measures, categories, visualization, export, search, etc. For example, the measures block contains many types of obsels, like select measure, add calculation, edit calculation, etc. When a user selects a measure, we capture this action as an obsel of the type "Select measure" from the "Measures" block. The obsel has the value "Trade USD" and is time-stamped with the current timestamp.

Each session is represented as an M-Trace stored in XML; it has a unique ID, contains the ID of the user who performed the session, and holds the temporal list of obsels that happened in this session. When a user logs into SAP-BO Explorer, a request is sent to the server side in order to open a new session. This triggers the creation of a new XML output file for this session. Each time a new obsel is collected, it is formatted in XML and sent to the server in order to be added to the session file. Each user can open and manipulate many Information Spaces at the same time. An Information Space is a collection of objects mapped to data for specific business operations or activities. All the obsels of a session, whatever the Information Spaces they belong to, are stored in the same file.
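As an illustration, an obsel entry in a session file might look like the following. This is a hypothetical shape; the paper does not give the exact schema used by the instrumented application.

```xml
<!-- Hypothetical obsel entry: type, block, timestamps, attribute values -->
<obsel type="SelectMeasure" block="Measures"
       begin="2011-03-03T10:27:48" end="2011-03-03T10:27:48">
  <attribute name="measure" value="Trade USD"/>
</obsel>
```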
We have developed a new interface to visualize users' traces, displaying a graphical representation of what they have done so far (see Fig. 5). Each obsel is captured according to our model, classified according to the available types, and represented as colored bullets. Obsels appear on the left side of the interface as a chronologically ordered list from the beginning of the session to the most recent obsel. By clicking on an obsel, we can see its description on the right side of the interface. Obsel values are visualized in the form of a tree of attributes and their values.

![Fig. 5 Trace Visualization Interface](image)

5.2. Trace replay implementation

If a user wants to go back to a previous state, he can at any time select the triggered obsel from the list of captured obsels and click on the replay button (see Fig. 5). The system will automatically replay the traces to go back to this state. A new obsel of type "Replay" will be added to the obsel list. Its values are set according to the values of the triggered obsel. This new obsel indicates that a replay action has occurred here and has targeted a previous obsel. As explained before, the optimization algorithm uses replay obsels to minimize the number of replayed obsels by deleting the obsels that are skipped by the replay action.

Each element has a different type and number of values from the other elements. To avoid analyzing each element in a different way, we need to handle them generically. By using introspection, we can determine the type of an object at runtime. Introspection refers to the ability to examine something to determine what it is, what it knows, and what it is capable of doing. Introspection gives us a great deal of flexibility and control. To achieve this, we used Object as the type of the values attribute of an obsel, which means that this attribute can hold values of any type. We perform introspection on this attribute in order to determine its content and then to manipulate it in a generic way.

6. Evaluation and Discussion

We have implemented our replay method within the SAP-BO Explorer application. However, this method can be applied to any system. To enable trace replay, the first step is to collect traces. For this purpose, we use a model, the M-Trace, that enables us to collect all the traces according to the same abstract model. We have experimented with our system by using many types of datasets and by considering all obsel types, opening many sessions together and trying to go back to previous states many times in the same session. We even succeeded in going back in all sessions at the same time with one single go-back command. The execution time of the replay process is very short; it is like any other action in the application, which means the time of message exchange between the client and the server.

Systems like ours face a number of challenges, like replaying traces for already closed sessions, optimizing replay after modifying past obsels, and rechecking impact rules after modifying element values. But they also face more general problems, as mentioned in the discussion section. For example, in (Hupp & Miller 2007), the following issues are raised: privacy of the user and his permission to be traced, security of the system while collecting and visualizing traces, protection of users from undesirable side-effects triggered by the replay, and the robustness of the replay after some changes in the system. When implementing our system, we also faced specific problems. For example, in SAP-BO Explorer the same user can open many sessions at the same time. We had to deal with the problem of replaying the trace of a closed session. Our replay process can handle this case by reopening the session with default values and by applying all the replayed obsels until the triggered one.
As we have not yet implemented the replay with changes, we have not faced the problem of optimizing the replay after these modifications. The application of impact rules can be recursive: a modification of an obsel value can have an impact on other obsel values if the obsels are related. To deal with these problems, we need to develop a graph of impact propagation, to be able to solve loop problems and to know the dependencies between different obsels and elements. This will be one of our future works.

When the trace includes obsels that carry secure and sensitive information, like passwords and credit card numbers, our system detects and obscures the password when visualizing it. But it still needs many enhancements and rules to detect this information and secure it, by notifying the user about it or even asking him to re-enter it. Our system continuously collects and records users' interactions, which constitutes a potential risk to privacy and security. This problem is shared by all the systems that record rich history traces (web browsers, recommendation systems, etc.). Dealing with this issue is out of the scope of our study. However, we do notify our users that all their interactions are recorded.

Side-effects are another issue we have to deal with. Indeed, replaying a trace may have unexpected consequences and can damage the system or cause deletion of data. In our current implementation, we do not deal with this problem. However, we think that the proposition described in (Hupp & Miller 2007) is relevant to solve such a problem. The idea is to classify obsels into two classes: side-effecting and non side-effecting. This makes the annotation of critical obsels easier.

Last, we have to face robustness issues. Indeed, we have to make sure that the trace system is still usable after major changes either to the data or to the processes of the system. This question is also out of the scope of our study because it is mainly related to the trace collecting phase. We make the assumption that robustness issues are handled by the trace-based system, which is responsible for trace management.

7. Conclusion and future work

In this paper, we have described an approach using interaction traces to allow users to return to a particular state of an application. This approach is an alternative way of undoing actions in applications where undo commands are not available (such as client-server applications). For this purpose, we use playback of traces. Playback can be identical to the original trace or can introduce different action parameters. We analyze the impact propagation of changes performed on past actions. This work has been conducted in collaboration with SAP Business Objects, and the application we used to implement our approach is SAP-BO Explorer. The aim of our contribution within this project was to support the replay process in a client-server application, where classical undo commands cannot be implemented. The main contribution of this paper shows how we can play back interaction traces, in an optimized way, in order to go back to a particular state of the application. For that purpose, we have introduced the concept of predefined impact rules, and we have built an algorithm that discovers adapted values for obsels affected by changes. At the time of writing, the collect process and the simple replay process are implemented. In future work, we plan to address the issues mentioned in the discussion concerning side-effects, robustness, and security.
In addition, we are interested in studying how we can extract users' experiences in order to reuse them for assistance purposes.

ACKNOWLEDGMENTS

We thank Francoise Corvaisier, member of the SAP-BO enterprise, for her support and thoughts, and for giving us the opportunity to do this work at SAP-BO. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors.
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00746727/file/hal-00746727.pdf", "len_cl100k_base": 7319, "olmocr-version": "0.1.51", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 34149, "total-output-tokens": 9099, "length": "2e12", "weborganizer": {"__label__adult": 0.00025463104248046875, "__label__art_design": 0.00052642822265625, "__label__crime_law": 0.0002875328063964844, "__label__education_jobs": 0.001232147216796875, "__label__entertainment": 9.709596633911131e-05, "__label__fashion_beauty": 0.00014197826385498047, "__label__finance_business": 0.0003941059112548828, "__label__food_dining": 0.0002428293228149414, "__label__games": 0.0005478858947753906, "__label__hardware": 0.0007271766662597656, "__label__health": 0.00030231475830078125, "__label__history": 0.0002498626708984375, "__label__home_hobbies": 8.088350296020508e-05, "__label__industrial": 0.0003025531768798828, "__label__literature": 0.0003161430358886719, "__label__politics": 0.0001742839813232422, "__label__religion": 0.00027871131896972656, "__label__science_tech": 0.044830322265625, "__label__social_life": 0.00011086463928222656, "__label__software": 0.048095703125, "__label__software_dev": 0.900390625, "__label__sports_fitness": 0.00015294551849365234, "__label__transportation": 0.0002942085266113281, "__label__travel": 0.00015437602996826172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38576, 0.0209]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38576, 0.38587]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38576, 0.89599]], "google_gemma-3-12b-it_contains_pii": [[0, 430, false], [430, 3093, null], [3093, 6069, null], [6069, 8831, null], [8831, 12464, null], [12464, 14037, null], [14037, 16149, null], [16149, 18697, null], [18697, 20574, null], [20574, 23491, null], [23491, 26108, null], [26108, 27792, null], [27792, 29478, null], [29478, 32343, null], [32343, 34916, null], [34916, 37507, null], [37507, 38576, null]], "google_gemma-3-12b-it_is_public_document": [[0, 430, true], [430, 3093, null], [3093, 6069, null], [6069, 8831, null], [8831, 12464, null], [12464, 14037, null], [14037, 16149, null], [16149, 18697, null], [18697, 20574, null], [20574, 23491, null], [23491, 26108, null], [26108, 27792, null], [27792, 29478, null], [29478, 32343, null], [32343, 34916, null], [34916, 37507, null], [37507, 38576, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38576, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38576, null]], "pdf_page_numbers": [[0, 430, 1], [430, 3093, 2], [3093, 6069, 3], [6069, 8831, 4], [8831, 12464, 5], [12464, 14037, 6], [14037, 16149, 7], [16149, 
18697, 8], [18697, 20574, 9], [20574, 23491, 10], [23491, 26108, 11], [26108, 27792, 12], [27792, 29478, 13], [29478, 32343, 14], [32343, 34916, 15], [34916, 37507, 16], [37507, 38576, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38576, 0.08264]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
4e00ae5c21ca3bd1308b26db86a89732d7735580
Characterization of the Windows kernel version variability for accurate memory analysis

Michael I. Cohen
Google Inc., Brandschenkstrasse 110, Zurich, Switzerland

Abstract: Memory analysis is an established technique for malware analysis and is increasingly used for incident response. However, in most incident response situations, the responder often has no control over the precise version of the operating system that must be responded to. It is therefore critical to ensure that memory analysis tools are able to work with a wide range of OS kernel versions, as found in the wild. This paper characterizes the properties of different Windows kernel versions and their relevance to memory analysis. By collecting a large number of kernel binaries, we characterize how struct offsets change with versions. We find that although struct layout is mostly stable across major and minor kernel versions, kernel global offsets vary greatly with version. We develop a "profile indexing" technique to rapidly detect the exact kernel version present in a memory image. We can therefore directly use known kernel global offsets and do not need to guess those by scanning techniques. We demonstrate that struct offsets can be rapidly deduced from analysis of kernel pool allocations, as well as by automatic disassembly of binary functions. As an example of an undocumented kernel driver, we use the win32k.sys GUI subsystem driver and develop a robust technique for combining both profile constants and reversed struct offsets into accurate profiles, detected using a profile index.

Keywords: Memory analysis, Incident response, Binary classification, Memory forensics, Live forensics

Introduction

Memory analysis has become a powerful technique for the detection and identification of malware, and for digital forensic investigations (Ligh et al., 2010, 2014). Fundamentally, memory analysis is concerned with interpreting the seemingly unstructured raw memory data which can be collected from a live system into meaningful and actionable information. At first sight, the memory content of a live system might appear to be composed of nothing more than random bytes. However, those bytes are arranged in a predetermined order by the running software to represent a meaningful data structure. For example, consider the C struct:

```c
typedef struct _EPROCESS {
    unsigned long long CreateTime;
    char ImageFileName[16];
} EPROCESS;
```

The compiler will decide how to overlay the struct fields in memory depending on their size, alignment requirements and other considerations. So, for example, the CreateTime field might get 8 bytes, causing the ImageFileName field to begin 8 bytes after the start of the _EPROCESS struct. A memory analysis framework must have the same layout information in order to know where each field should be found in relation to the start of the struct. Early memory analysis systems hard-coded this layout information, which was derived by other means (e.g. reverse engineering, or simply counting the fields in the struct header file (Schuster, 2007)). This approach is not scalable, though, since struct definitions change routinely between versions of the operating system. For example, in the above simplified _EPROCESS struct, if additional fields are inserted, the layout of the field members will change to make room for the new elements. So, for example, if another 4-byte field is added before the CreateTime field, all other offsets will have to increase by 4 bytes to accommodate the new field. This will cause all the old layout information to be incorrect, and our interpretation of the struct in memory will be wrong.
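The following Python sketch (ours, using `ctypes` as a stand-in for the MSVC layout rules) makes the problem concrete; note that alignment padding can make the shift even larger than the size of the inserted field.

```python
import ctypes

class EPROCESS_v1(ctypes.Structure):
    _fields_ = [("CreateTime", ctypes.c_ulonglong),
                ("ImageFileName", ctypes.c_char * 16)]

class EPROCESS_v2(ctypes.Structure):
    # A hypothetical later build inserts a 4-byte field at the front.
    _fields_ = [("Flags", ctypes.c_uint32),
                ("CreateTime", ctypes.c_ulonglong),
                ("ImageFileName", ctypes.c_char * 16)]

print(EPROCESS_v1.ImageFileName.offset)  # 8
print(EPROCESS_v2.ImageFileName.offset)  # 16: CreateTime is realigned to 8,
                                         # so the shift is 8 bytes, not 4
```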
Modern memory analysis frameworks address the variations across different operating system versions by use of a version-specific memory layout template mechanism. For example, in Volatility (The Volatility Foundation, 2014) or Rekall (The Rekall Team, 2014a, b), this information is called a profile. The Volatility memory analysis framework is shipped with a number of Windows profiles embedded into the program. The user chooses the correct profile to use depending on their image. For example, if analyzing a Windows 7 image, the profile might be specified as Win7SP1x64. In Volatility, the profile name conveys major version information (i.e. Windows 7), minor version information (i.e. Service Pack 1) and architecture (i.e. x64). Volatility uses this information to select a profile from the set of built-in profiles.

**Deriving profile information**

The problem still remains how to derive this struct layout information automatically. The Windows kernel contains many struct definitions, and these change with each version, so a brute-force solution is not scalable (Okolica and Peterson, 2010). Memory analysis frameworks are not the only case where information about memory layout is required. Specifically, when debugging an application, the debugger needs to know how to interpret the memory of the debugged program in order to correctly display it to the user. Since the compiler is the one originally deciding on the memory layout, it makes sense that the compiler generates debugging information about memory layout for the debugger to use.

On Windows systems, the most common compiler used is the Microsoft Visual Studio compiler (MSVC). This compiler shares debugging information via a PDB file (Schreiber, 2001), generated during the build process for the executable. The PDB file format is unfortunately undocumented, but has been reverse engineered sufficiently to be able to extract accurate debugging information, such as struct memory layout, reliably (Schreiber, 2001; Dolan-Gavitt, 2007a). The PDB file for an executable is normally not shipped together with the executable. Instead, the executable contains a unique GUID referring to the PDB file that describes it. When the debugger wishes to debug a particular executable, it can request the correct PDB file from a symbol server. This design allows production binaries to be debugged without needing to ship bulky debug information with final release binaries.

The PDB file contains a number of useful pieces of information for a memory analysis framework:

- **Struct members and memory layout.** This contains information about memory offsets for struct members, and their types. This is useful in order to interpret the contents of memory.
- **Global constants.** The Windows kernel contains many important constants, which are required for analysis. For example, PsActiveProcessHead is a constant pointer to the beginning of the process linked list, and is required in order to list processes by walking that list.
- **Function addresses.** The location of functions in memory is also provided in the PDB file — even if these functions are not exported. This is important in order to resolve addresses back to functions (e.g. when viewing the Interrupt Descriptor Table — IDT).
- **Enumerations.** In C, an enumeration is a compact way to represent one of a set of choices using an integer. The mapping between the integer value and a human-meaningful string is stored in the PDB file, and it is useful for interpreting meaning from memory.
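For reference, the standard symbol-server layout makes fetching a PDB by GUID mechanical. The following sketch (ours; the GUID and age are assumed to come from the PE debug directory) constructs the download URL:

```python
def pdb_url(pdb_name: str, guid: str, age: int) -> str:
    """Build a Microsoft symbol server URL, e.g. for ntkrnlmp.pdb.
    guid: 32 hex digits (no dashes); age: the PDB age field."""
    return ("https://msdl.microsoft.com/download/symbols/"
            f"{pdb_name}/{guid.upper()}{age:X}/{pdb_name}")

# Example (hypothetical GUID):
# pdb_url("ntkrnlmp.pdb", "46684165eaa84af88d29d617e86a9598", 2)
```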
**Characterizing kernel version variability**

As described previously, the Volatility tool only contains a handful of profiles generated for different major releases of the Windows kernel. However, each time the kernel is rebuilt by Microsoft (e.g. for a security hotfix), the code could be changed, and the profile could be different. The assumption made by the Volatility tool is that these changes are not significant and that, therefore, a profile generated from a single version of a major release will work on all versions from that release. We wanted to validate this assumption.

We collected the Windows kernel binary (ntkrnlmp.exe, ntkrpamp.exe, ntoskrnl.exe) from several thousand machines in the wild using the GRR tool (Cohen et al., 2011). Each of these binaries has a unique GUID, and we were therefore able to download the corresponding PDB file from the public Microsoft symbol server. We then used Rekall's mspdb parser to extract debugging information from each PDB file. This resulted in 168 different binaries of the Windows kernel for various versions (e.g. Windows XP, Windows Vista, Windows 7 and Windows 8) and architectures (e.g. x86 and AMD64). Clearly, there are many more versions of the Windows kernel in the wild than exist in the Volatility tool. It is also very likely that we have not collected all the versions that were ever released by Microsoft, so our sample size, although large, is not exhaustive.

Fig. 1 shows sampled offsets of four critical struct members for memory analysis:

- The _EPROCESS.VadRoot is the location of the Vad within the process. This is used to enumerate process allocations (Dolan-Gavitt, 2007b).
- The _KPROCESS.DirectoryTableBase is the location of the Directory Table Base (i.e. the value loaded into the CR3 register), which is critical in constructing the Virtual Address Space abstraction.
- The _EPROCESS.ImageFileName is the file name of the running binary. For example, this field might contain "csrss.exe".

Microsoft Windows kernel versions contain four parts: the major and minor versions, the revision and the build number. The build number increases for each build (e.g., security hotfix). As can be seen in the figure, struct offsets do tend to remain stable across Windows versions. In most cases, with a single notable exception — version 5.2.3970.175 (GUID 46684165EAA84AF88D29D617E86A95982) — the struct offsets remain the same for all major Windows releases. Therefore, chances are good that the Volatility profile for a given Windows version would actually work most of the time for determining struct layout.

Fig. 1. Offsets for a few critical struct members across various versions of the Windows kernel. These offsets were derived by analyzing public debug information from the Microsoft debug server for the binaries in our collection.

**Kernel global constants variability**

It is generally not sufficient to determine only the struct memory layout for memory analysis. For example, consider listing the running processes. One technique is to follow the doubly linked list of _EPROCESS.ActiveProcessLinks in each process struct (Okolica and Peterson, 2010). This technique needs to find the start of the list, which begins at the global kernel constant *PsActiveProcessHead*.
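In outline, the walk looks like the following sketch (ours; the names are illustrative rather than Rekall's actual API, and the profile is assumed to supply the symbol address and field offsets):

```python
def list_processes(read, read_pointer, profile, kernel_base):
    """Yield process names by walking the _EPROCESS doubly linked list.
    read(addr, n) / read_pointer(addr) access the image's address space."""
    head = kernel_base + profile.constant("PsActiveProcessHead")
    links_off = profile.field_offset("_EPROCESS", "ActiveProcessLinks")
    name_off = profile.field_offset("_EPROCESS", "ImageFileName")
    link = read_pointer(head)             # first _LIST_ENTRY.Flink
    while link != head:
        eproc = link - links_off          # container of the list entry
        yield read(eproc + name_off, 16).rstrip(b"\x00")
        link = read_pointer(link)         # follow Flink to the next entry
```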
The location of this global constant in memory is determined statically by the compiler at compile time, and it is usually stored in one of the data sections of the PE file itself. Since this information is also required by the debugger, the PDB file contains information about global constants and functions as well (even if these are not actually exported via the Export Address Table). Rekall's mspdb plugin also extracts this information into the profile.

Fig. 2 illustrates the memory addresses of some important kernel constants for the kernels in our collection:

- NtBuildLab is the location of the NT version string (e.g. "7600.win7_rtm.090713-1255"). This is used to identify the running kernel.
- PsActiveProcessHead is the head of the active process list. This is required in order to list the running processes.
- NtCreateToken is an example of a kernel function. This will normally exist in the .text section of the PE file.
- str:FILE_VERSION is literally the string "FILE_VERSION". Usually the compiler will place all literal strings into their own string table in the .rdata section of the PE file. The compiler will then emit debugging symbols for the location of each string, indicating that they are literal strings. The importance of this symbol will be discussed in the following sections.

[Fig. 2. Offsets for a few global kernel constants across various versions of the Windows kernel. These offsets were derived by analyzing public debug information from the Microsoft debug server for the binaries in our collection. Offsets are provided relative to the kernel image base address.]

As can be seen, the offsets of global kernel constants change dramatically between builds, even for the same version. This makes sense: the compiler arranges global constants in their own PE section, so if any global constant is added or removed anywhere in the kernel, this shifts all other constants placed after it. It is therefore clear that it is unreliable to obtain the addresses of kernel globals by relying on the version alone. The Volatility tool resorts to a number of techniques to obtain these globals:

- Many globals are obtained from the KdDebuggerDataBlock, another global kernel struct which contains pointers to many other globals. This structure is usually scanned for.
- Scanning for kernel objects which refer to global constants (e.g. via pool tag scanning or other signatures).
- Examining the export tables of various PE binaries for exported functions.
- Dynamically disassembling code to detect calls to non-exported functions.

These techniques are complex and error prone. They are also susceptible to anti-forensics, as signature scanners can trivially be fooled by spurious signatures (Williams and Torres, 2014). Scanning for signatures over very large memory images is also slow and inefficient.

The Rekall memory forensic framework (The Rekall Team, 2014a, b), a fork of the Volatility framework, takes a different approach. Instead of guessing the location of various kernel constants, the framework relies on a public profile repository which contains a profile for every known build of the Windows kernel. This greatly simplifies memory analysis algorithms because the addresses of global kernel variables and functions are known directly from the public debugging information provided by Microsoft. There is no need to scan or guess at all.
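Under this approach, resolving a global reduces to a dictionary lookup plus an addition. The sketch below is illustrative only, with made-up RVA values; note that the same image-base-relative offset remains valid wherever the kernel happens to be loaded (e.g. under ASLR).

```python
# Illustrative: with per-build profiles, a global's address is just
# image base + its RVA from the PDB. The values below are invented.
PROFILE_CONSTANTS = {
    "PsActiveProcessHead": 0x2D9F30,   # hypothetical RVA
    "NtBuildLab": 0x1C5040,
}

def resolve(symbol, kernel_base):
    """No scanning, no guessing: look up the RVA and add the image base."""
    return kernel_base + PROFILE_CONSTANTS[symbol]

print(hex(resolve("PsActiveProcessHead", 0xFFFFF80002600000)))
```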
Locating these globals is very efficient since there is no need to scan for signatures, making the framework fast and reducing the ability of attackers to subvert the analysis.

**Identifying binary versions**

The Rekall profile repository contains, at the time of writing, 309 profiles for various Windows kernel versions (and this number is constantly increasing). Typically, users will simply report the GUID of the Windows kernel found in their image, but will not provide the actual kernel binary. Previously, Rekall employed a scanning technique to locate the GUID of the NT kernel running within the image. Once the GUID is known, the correct profile can be fetched from the repository and analysis can begin. However, this technique is susceptible to manipulation: it is easy for attackers to simply wipe or alter the GUID in memory. Sometimes the GUID is also paged out of memory, in which case it cannot be recovered at all. What we really need is a reliable way to identify the kernel version without relying on a single signature.

The problem of identifying kernel binaries in a memory image has been examined previously in the Linux memory analysis context (Roussev et al., 2014). In that paper, the authors used similarity hashing to match the kernel in a memory image against a corpus of known binaries. In our case, we do not always have the actual binaries, but we do have debugging symbols for them. We therefore need a way to deduce enough information about the kernel binary itself (which we may not have) from the debug symbols. Consider the following information present in the PDB file:

- String literals. As shown in the example above, the compiler places string literals in the PE binary itself, and these are then located via global debugging symbols. For example, from Fig. 2 we know the exact offsets in memory where we expect to find the string "FILE_VERSION".
- Function preambles. The PDB file also contains the locations of many functions. We note that each function is generally preceded by 5 NOP instructions in order to make room for hot patching (Chen, 2011). Thus, we can deduce that for each function in the PDB, the preceding byte contains the value 0x90 (the NOP instruction).

The problem therefore boils down to identifying which of a finite set of kernel profiles is present in the memory image, based on known data that must exist at known offsets:

1. Begin by selecting a number of function names or literal string names. We term these comparison points, since we only compare the binaries at these known offsets.
2. Examine all available profiles, and record the offset of these symbols as well as the data expected to appear at each offset (either a NOP instruction or the literal string itself).
3. Build a decision tree around the known comparison points to minimize the number of string comparisons required to narrow down the match. Note that at this stage it is possible to determine whether there is a sufficient number of comparison points to distinguish all profiles. If profile selection is ambiguous, further comparison points are added and the process starts again.
4. Scan the memory image for the data expected at one of the comparison points (for example, a distinctive literal string).
5. For each match, seek around the match to apply the decision tree calculated earlier. Within a few string comparisons, the correct profile is identified.
6. Load the profile from the profile repository and initialize the analysis.
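A minimal sketch of the matching step follows. For clarity it performs a linear check of each profile's comparison points rather than the decision tree described above, and all profile names, offsets and expected bytes are invented.

```python
# Illustrative sketch of comparison-point matching (not Rekall's actual
# index code). Offsets are relative to a candidate kernel base address.
PROFILES = {
    "ntkrnlmp.pdb/GUID-A": {0x10AF: b"\x90", 0x2F00: b"FILE_VERSION"},
    "ntkrnlmp.pdb/GUID-B": {0x10AF: b"\x90", 0x2F80: b"FILE_VERSION"},
}

def identify(read, kernel_base):
    """Return the profile whose expected data appears at every one of its
    comparison points, or None if no known profile matches."""
    for name, points in PROFILES.items():
        if all(read(kernel_base + off, len(want)) == want
               for off, want in points.items()):
            return name
    return None
```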
In practice it was found that fewer than a dozen comparison points are required to characterize all the profiles in the Rekall profile repository, leading to extremely quick matching times. Binary identification is also robust to manipulation, since the choice of comparison points is rather arbitrary and can be changed easily.

**Windows kernel binary identification**

Section 3 described an efficient algorithm for identifying a binary match from a set of known binaries. However, in the memory analysis context, this comparison must be made in the virtual address space. Modern CPUs operate in protected mode, and the memory accessible to the kernel is not necessarily contiguous in the physical memory image. Therefore, before we can apply the index classification algorithm, we must build a virtual address space, which requires us to identify the value of CR3, the kernel's Directory Table Base (DTB). The DTB can be captured during the acquisition process and stored in the image, but typically it must be scanned for; the Volatility memory forensic framework, for example, scans for the Idle process in order to recover the DTB.

Section 2 examined the variability of documented kernel structures across different kernel versions. The question we try to answer now is: what is the variability of undocumented kernel structures of significance to the memory analyst?

**Undocumented kernel structures**

One of the most interesting kernel drivers is the Win32 user-mode GUI subsystem (Mandt, 2011; Yuan, 2001), implemented as "win32k.sys". The data structures used in this subsystem are required to detect many common hooks placed by malware (e.g. SetWindowsHookEx() style keyloggers (Sikorski and Honig, 2012)). The Rekall profile repository currently contains profiles for 169 unique versions of this driver. However, only 33 versions include information about critical structures (e.g. tagDESKTOP and tagWINDOWSTATION). The remaining profiles contain information only about global constants and functions, with no structure information.

Our goal is to understand how various important structures evolved through the released versions. Since many of these versions are undocumented and have no debugging information, previous research has manually reverse engineered several samples from different versions. However, it is unclear whether there is internal variability within Windows versions and releases. Guided by our previous experience with the Windows kernel versions, we hypothesize that the win32k.sys struct layout does not vary much between minor release versions. Given our large corpus of binaries, we can directly examine this hypothesis and evaluate the best approach for determining struct layout when analyzing the Win32k GUI subsystem.

**Data driven reverse engineering**

The literature contains a number of published systems for automatically detecting kernel objects in memory images (Sun et al., 2012). For example, the SigGraph system (Lin et al., 2011) is capable of building scanners for Linux kernel structures by analyzing their internal pointer graphs. The SigGraph system specifically does not utilize incidental knowledge about the system to assist in the reversing task. On Windows systems, however, there are some helpful observations one can make to facilitate type analysis from memory dumps. In the Windows kernel, all allocations come from one of the kernel pools (e.g. Paged, Non-Paged or Session Pool).
Allocations smaller than a page are preceded by a POOL_HEADER object (Schuster, 2006, 2008). The pool header contains a known tag as well as indications of the previous and next pool allocation (within the page). Thus, small pool allocations form a doubly linked list. Due to this property it is possible to validate the pool header and locate it in memory. A typical Windows kernel allocation is illustrated in Fig. 3.

If we were to ask, "What kernel object exists at a given virtual offset?", we can simply scan backwards for a suitable POOL_HEADER structure and deduce the type of object from the pool tag. We can further scan forward from this location for other heuristics, such as pointers to certain other pool allocations, or doubly linked lists. We wrote a Rekall plugin called analyze_structs to perform this analysis on arbitrary memory locations.

For example, Fig. 4 shows the analysis of the global symbol grpWinStaList, which holds the head of the tagWINDOWSTATION list. We can see that at offset 0x10 there is a pointer to the tagDESKTOP object, at offset 0x18 there is a pointer to the global gTermIO object, etc. With Windows 7 we can find the complete struct information in the PDB file; this is also shown in Fig. 4. We can see that the detected pointers correspond with the rpdeskList, pTerm, spklList, pGlobalAtomTable and psidUser members.

An obvious limitation of this technique is that if a pointer in the struct is set to NULL, we are unable to say anything about it. Hence, to reveal as many fields as possible, we need to examine as many instances of the same object type as we can find (e.g. via pool scanning techniques).
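The following sketch illustrates the backward scan for a pool header. The POOL_HEADER field placement (x64-style, with the tag at offset 4) and the tag set are assumptions made for illustration, not a faithful rendition of the analyze_structs plugin.

```python
# Illustrative backward scan for a pool header. The header layout differs
# by architecture; the constants below are assumed x64 values.
POOL_ALIGN = 0x10
TAG_OFFSET = 0x4                                  # assumed PoolTag position
INTERESTING_TAGS = {b"Proc", b"Wind", b"Desk"}    # illustrative tag set

def find_pool_header(read, addr, max_scan=0x1000):
    """Walk backwards from addr in pool-aligned steps until a candidate
    POOL_HEADER with a known tag is found. Returns (offset, tag) or None.

    `read(offset, size)` returns raw bytes from the memory image.
    """
    candidate = addr - (addr % POOL_ALIGN) - POOL_ALIGN
    while candidate > addr - max_scan:
        tag = read(candidate + TAG_OFFSET, 4)
        if tag in INTERESTING_TAGS:
            return candidate, tag
        candidate -= POOL_ALIGN
    return None
```

A real implementation would additionally sanity-check the size and type fields of the candidate header before trusting the tag, since four arbitrary bytes can collide with a known tag by chance.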
**Code based reverse engineering**

The previous section demonstrated how we can deduce some struct layouts by observing allocations found in the kernel pools. However, these observations are not sufficient to deduce all types of members; only pointers are reliably deduced by this method. Additionally, we must observe allocated memory in a memory dump from a running system. Often we only have the executable binary (e.g. from disk) but not a full memory image. In these cases, we need to resort to the more traditional reverse engineering approach.

Previously, researchers have reverse engineered specific exemplars of the win32k.sys binary, each representative of a specific Windows version (The Volatility Foundation, 2014). However, manually reverse engineering every file in our large corpus of win32k.sys binaries is time consuming and error prone. Some forensic tools simply contain the reversed profile data as "magic numbers" embedded within their code (The Volatility Foundation, 2014) without an explanation of where these numbers came from, making forensic validation and cross checking difficult.

We wish to automatically extend this analysis to new binaries with minimal effort. We therefore want to express the required assembler pattern as a template which can be applied to the new file's disassembly. In practice, however, the compiler is free to vary its use of registers within functions, or to reorder branches. Often identical source code will generate assembler code using different registers and a different branching order. Fig. 5 shows the same code segment from two different versions of the xxxCreateWindowStation function (this function essentially checks the rpdesk pointer of the global variable gptiCurrent, a global tagTHREADINFO struct). As can be seen, although the general sequence of instructions is similar, the exact registers differ in each case. We therefore construct our pattern match in such a way that exact register names are not specified; we only require that the same register is used for $var1 throughout the pattern.

Additionally, the compiler may reorder assembler code fragments from version to version. When a branch is reordered, the pattern match may be split across different parts of the branching instruction. In order to normalize the effect of branching, we unroll all branches in the assembly output. This means we follow all branches until we reach code that is already disassembled, and then backtrack to resume disassembly from the branch onwards. This technique allows us to match our pattern against the complete code of each function.

For example, consider Fig. 6. This shows a very short function, win32k!SetGlobalCursorLevel, which dereferences many pointers to a number of structs. The function iterates over all desktops (tagDESKTOP) and all threads (tagTHREADINFO) and sets their cursor level. It is quite simple to infer the structs and fields involved when reading the assembly code (for Windows 7) in conjunction with the struct definitions exported in the PDB files for Windows 7. The same templates can then be applied to other versions of the binary for which there are no exported symbols.

Our template can now be published and independently cross validated for accuracy. For example, in the event that investigators find a different version of the binary in the wild, they are able to apply the templates and re-derive the struct offsets directly from the binary, cross validating the resulting profile. It must be noted that this technique does not work in every case, since the code does change from version to version, sometimes dramatically. We therefore offer a number of possible templates (to different functions) that can be applied in turn until a match is found.
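The sketch below illustrates the register-agnostic idea using a Python regular expression in place of the actual template syntax; the disassembly listing and the recovered offset are fabricated for the example.

```python
# Illustrative register-agnostic template. "$var1" becomes a named group,
# and the back-reference (?P=var1) enforces that the same register is
# reused, whichever register the compiler happened to choose.
import re

TEMPLATE = re.compile(
    r"mov (?P<var1>r\w+), \[rip\+gptiCurrent\]\s+"
    r"cmp qword \[(?P=var1)\+(?P<rpdesk>0x[0-9a-f]+)\], 0")

listing = "mov rbx, [rip+gptiCurrent]\ncmp qword [rbx+0xe8], 0\n"
m = TEMPLATE.search(listing)
if m:
    # The captured displacement is our candidate tagTHREADINFO.rpdesk offset.
    print("candidate rpdesk offset:", m.group("rpdesk"))
```

Running the same template over a build that uses rax instead of rbx would succeed unchanged, which is the point of leaving register names free.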
**Results**

We have collected 133 unique versions of the "win32k.sys" driver binary, and downloaded PDB files for these samples. We then generated assembler templates for many struct fields and ran these templates over the binaries in our collection. Fig. 7 shows a summary of struct offsets across different versions of the win32k driver. As can be seen, the struct offsets generally do not change within the same major and minor binary version, although they do vary between minor versions. Similarly, Fig. 8 shows that global constants vary wildly from build to build; hence the version number alone is insufficient to provide reliable offsets for these constants.

**Discussion**

This study's main goal was to characterize which factors change between binary versions, and how these are relevant to memory analysis. We found that struct layout generally does not change within the same minor version, but that global constants vary wildly with each build. In our quest to characterize this variation we have developed a number of very useful techniques:

1. We have developed a technique to build a "profile index": a mechanism to quickly detect which profile from a pre-calculated profile repository is applicable to a specific memory image. Our method is resilient to anti-forensic manipulation since it uses a random selection of comparison points chosen from the binary code and data segments themselves.
2. We have demonstrated a data analysis technique for rapidly determining struct offsets by analyzing kernel pool allocations.
3. We have created an assembler templating language which can be used to match sequences of assembler code in order to extract struct member offsets. This technique can be applied to static binaries as well as binaries found in memory images.

How should these techniques be applied in order to improve the accuracy of memory analysis software? As noted previously, some memory analysis frameworks currently use techniques such as pool scanning, disassembly and other heuristics to guess the locations of global kernel variables (The Volatility Foundation, 2014). This is especially problematic when trying to locate win32k.sys global parameters, since the GUI subsystem has a different pool area for each session. Without contextual information, pool scanning techniques cannot associate kernel structures with the correct session, leading to many erroneous results.

It is therefore desirable to rely on accurate profile information when locating global structures. This warrants the creation and maintenance of a public profile repository with accurate symbol information for each version observed in the wild (The Rekall Team, 2014a, b). The problem remains, however: how does one know which profile should be used for a specific memory image? By applying the profile indexing technique, one can reliably detect the correct profile to use for each memory image. The profiles can then contain exact offsets of global variables and functions. This improves analysis because a large amount of accurate information becomes available (for example, it is possible to resolve addresses to function names, which greatly helps with disassembly views).

Finally, we can address the problem of undocumented struct layouts. While the win32k.sys profiles do contain the addresses of global variables and functions, most do not contain struct layout. Although we can apply the assembler templates to deduce the struct layouts directly within the memory image, this is not a reliable technique since, in practice, many code pages will not be mapped into memory, causing the disassembly of the required functions to fail. Instead, we can collect win32k.sys binaries of all major and minor versions and apply the disassembly templates to the binaries themselves.

Although we can never be absolutely sure that struct layouts are the same in all builds of the same version, our analysis suggests that this is the case; that is, the struct layout for win32k.sys depends only on the major and minor version numbers of the win32k.sys binary itself. We therefore make the assumption that struct layout does not vary within a given major and minor version (an assumption that held well throughout this research). We then construct a profile for each win32k.sys binary by merging the global constants and functions found in the PDB files provided by Microsoft with the canonical struct layout for the specific major and minor version. We then similarly create a "profile index" for all known win32k.sys profiles and apply it to the memory image to detect the correct profile to use. Once the correct profile is found (containing both accurate constants and accurate struct layouts) we can use it to conduct analysis of the memory image without problems.
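A sketch of this merging step follows. Rekall's JSON profiles do contain $CONSTANTS and $STRUCTS sections, but the specific layout data and values here are invented (the 0x10 and 0x18 offsets echo the analyze_structs example earlier).

```python
# Illustrative sketch of building a win32k.sys profile by merging per-build
# constants (from the PDB) with the canonical struct layout for that
# major.minor version (reversed once per version, reused for every build).
CANONICAL_WIN32K_STRUCTS = {
    (6, 1): {"tagWINDOWSTATION": {"rpdeskList": 0x10, "pTerm": 0x18}},
}

def build_win32k_profile(major, minor, pdb_constants):
    """Combine per-build constants with the canonical struct layout."""
    return {
        "$CONSTANTS": pdb_constants,   # varies with every build
        "$STRUCTS": CANONICAL_WIN32K_STRUCTS[(major, minor)],
    }

# Hypothetical RVA for one global, as a PDB parser might supply it.
profile = build_win32k_profile(6, 1, {"grpWinStaList": 0x113F08})
```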
**Limitations of symbol based memory analysis**

In this paper we find that kernel constants vary greatly between kernel builds, and we advocate locating the kernel constants directly from the debugging symbols distributed by Microsoft. While this approach makes for an efficient analysis which is less susceptible to manipulation, it does have some shortcomings. The main problem is that the PDB files for the exact kernel version under analysis must be available. While Microsoft typically publishes PDB files for publicly released versions of the operating system, PDB files for private or development builds of the operating system may never be published.

When Rekall encounters a Windows kernel version which does not exist in the repository, the user may follow a procedure to add it by downloading the corresponding debug information from the Microsoft symbol server. However, if this is not possible (perhaps because the PDB file was never published), the user is unable to proceed at all: Rekall does not employ scanning or guessing techniques to locate kernel global constants without profile information (as, for example, Volatility does).

**Conclusions and future work**

Although this paper concentrates specifically on the Windows kernel binary and the win32k.sys GUI subsystem driver, the techniques presented are applicable to other drivers and binaries. Specifically, the tcpip.sys driver manages the network stack and is largely undocumented. The same techniques we developed for constructing profiles from a mixture of documented and undocumented (reversed) information can be applied to this case.

Identifying which of a set of known binaries matches the exact running binary in a memory image is a critical first step in memory analysis on all operating systems. For example, we have extended this method to auto-detect the exact kernel running on an OSX system. The ability to generate profiles with more accurate information allows one to abandon scanning and guessing techniques for deriving this information from the potentially compromised memory image itself. The less the framework relies on the memory image to derive analysis information, the more resilient it is to malicious manipulation. For example, the literature has noted that the Kernel Debugger Block can be overwritten by malware in such a way that memory analysis fails to find it (Haruyama and Suzuki, 2012).

Finally, this paper presents the groundwork for ultimately addressing the difficult problem of Linux memory analysis. Linux kernel struct layouts vary wildly based on kernel configuration as well as on kernel version. Only recently has it become possible to acquire memory on a Linux system in a kernel-version-agnostic manner (Stüttgen and Cohen, 2014), but there is a wide need to reliably determine the correct profile for unknown kernels, which are often encountered during incident response. Previously proposed systems attempted to derive all kernel struct offsets by examining specific assembly instructions; however, these systems failed to take register swapping and branch reordering into account (Case et al., 2010), making them less reliable for matching real kernels in practice. The assembler templates proposed in this paper are much more robust to these variations. Previous dynamic analysis platforms attempted to build a complete profile from the reversed parameters; as shown in this paper, however, we only need to gather just enough information to select the correct profile from a finite set of known profile variations. Future work can apply the techniques discussed in this paper to auto-detecting a Linux profile from an unknown kernel.
{"Source-Url": "http://old.dfrws.org/2015eu/proceedings/DFRWS-EU-2015-5.pdf", "len_cl100k_base": 6862, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 33460, "total-output-tokens": 8688, "length": "2e12", "weborganizer": {"__label__adult": 0.0006999969482421875, "__label__art_design": 0.0007281303405761719, "__label__crime_law": 0.006023406982421875, "__label__education_jobs": 0.0010986328125, "__label__entertainment": 0.0002256631851196289, "__label__fashion_beauty": 0.0002989768981933594, "__label__finance_business": 0.0003638267517089844, "__label__food_dining": 0.0003895759582519531, "__label__games": 0.001956939697265625, "__label__hardware": 0.006916046142578125, "__label__health": 0.0006341934204101562, "__label__history": 0.0006690025329589844, "__label__home_hobbies": 0.00016558170318603516, "__label__industrial": 0.0010232925415039062, "__label__literature": 0.0006732940673828125, "__label__politics": 0.0006709098815917969, "__label__religion": 0.00058746337890625, "__label__science_tech": 0.35546875, "__label__social_life": 0.00014102458953857422, "__label__software": 0.049652099609375, "__label__software_dev": 0.5703125, "__label__sports_fitness": 0.0003352165222167969, "__label__transportation": 0.0007123947143554688, "__label__travel": 0.0001962184906005859}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38448, 0.01746]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38448, 0.6263]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38448, 0.89561]], "google_gemma-3-12b-it_contains_pii": [[0, 2998, false], [2998, 9019, null], [9019, 10407, null], [10407, 12074, null], [12074, 17818, null], [17818, 22010, null], [22010, 24457, null], [24457, 28352, null], [28352, 30416, null], [30416, 31792, null], [31792, 33047, null], [33047, 38448, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2998, true], [2998, 9019, null], [9019, 10407, null], [10407, 12074, null], [12074, 17818, null], [17818, 22010, null], [22010, 24457, null], [24457, 28352, null], [28352, 30416, null], [30416, 31792, null], [31792, 33047, null], [33047, 38448, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38448, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38448, null]], "pdf_page_numbers": [[0, 2998, 1], [2998, 9019, 2], [9019, 10407, 3], [10407, 12074, 4], [12074, 17818, 5], [17818, 22010, 6], [22010, 24457, 7], [24457, 28352, 8], [28352, 30416, 9], [30416, 31792, 10], [31792, 33047, 11], [33047, 38448, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38448, 0.0]]}
EUPHORIA Reference Manual

T. Paul McCartney and Kenneth J. Goldman

WUCS-95-19, July 1995
Department of Computer Science, Washington University
Campus Box 1045, One Brookings Drive, Saint Louis, MO 63130-4899

1 Introduction

The Programmers' Playground is a software library and runtime system for creating distributed multimedia applications [1][2][3]. EUPHORIA² is the user interface management system for Playground, allowing end-users to create direct manipulation graphical user interfaces (GUIs) for distributed applications [5]. EUPHORIA has an intuitive graphics editor which allows end-users to simply draw GUIs (see Figure 1). The behavior of GUIs is established by forming logical connections between EUPHORIA and external Playground modules [4]. This document summarizes the features of EUPHORIA and how it can be used to create GUIs. It is meant as a companion to the Playground reference manual [4].

Figure 1: EUPHORIA graphics editor, displaying an interactive maple syrup factory GUI.

EUPHORIA is meant to be controlled by a three button mouse. Note that throughout this document, whenever the words "click" or "drag" are used, the left mouse button is implied.

1. This research was supported in part by National Science Foundation grants CCR-91-10029 and CCR-94-12711.
2. EUPHORIA is an acronym for End User Production of grapHical interfaces fOr Really Interactive distributed Applications.

1.1 Retractable panes

The EUPHORIA window is divided into a number of panes (see Figure 1): the tool palette, data boundary, alternatives interface, and main drawing area. The size of the panes can be adjusted by dragging the separator line of the pane. This is useful for hiding the tool palette and data boundary when a GUI is completed. Enlarging a pane can reveal additional information. For example, making the tool palette larger reveals a "background" label. Dragging a connection line from this label to a color entry with the middle mouse button changes the background color of the main drawing area.

2 Drawing

Drawing in EUPHORIA is much like drawing in other graphics editors such as MacDraw or FrameMaker. The EUPHORIA window contains a graphics tool palette and a main drawing area. With the tool palette, users can select shapes to be drawn, change the color of shapes, and perform other operations through the menus (see Figure 2).

Figure 2: Tool palette.

Once a drawing tool is selected, the shape corresponding to that tool may be drawn in the main drawing area. When drawing is complete, the drawing tool becomes unselected. Double clicking on a tool prevents the tool from becoming unselected after the first drawing; in this way, users can conveniently draw multiple shapes. When finished, clicking on a selected tool unselects it.

The color of a shape can be selected by clicking on the appropriate color entry below the shape palette. Clicking on a color entry changes the color of the selected graphics objects in the main drawing window and sets the color for all future drawings.

2.1 Selection & handles

In the main drawing area, clicking on a graphics object causes it to become selected or unselected.
Multiple graphics objects can be selected at the same time; selecting a graphics object does not unselect other graphics objects. When no drawing tools are selected, dragging a selection box over an area of the main drawing area will select all graphics objects within the box. Note that all previously selected graphics objects become unselected when dragging a selection box. Selected graphics objects can be deleted by pressing the backspace or delete key.

Figure 3 shows some of the basic shape types and their selection handles. As with other graphics editors, most of these handles can be dragged to change the attributes of their graphics object. The color of each handle represents the type of information that it represents. For example, real number values such as width and height appear in blue; "point" x,y coordinate values appear in green. These handles are used not only for direct manipulation, but also for forming constraints among graphics objects, as described in Section 4. Some handles are used exclusively for forming constraints. For example, the handle in the bottom middle of a text object is used to connect to its string attribute (see Section 4.3).

2.2 GIF images

GIF images may be loaded into the main drawing area by choosing Load GIF Image... from the "File" menu. A GIF image is treated much like a rectangle: users can move, resize, and form constraints with GIF images.

2.3 Layering

Graphics objects have an associated layer attribute which controls the order in which graphics objects are drawn (i.e. which shapes are in front of other shapes). When a graphics object is created, it is set to the front-most layer. The layer can be changed with the "Layer" menu, allowing one to Bring to Front or Send to Back selected graphics objects.

2.4 Coordinate system

The coordinate system of the main drawing area is oriented with the x-coordinate axis increasing to the right, and the y-coordinate axis increasing downward (see Figure 4). By default, the origin is at the top left corner of the main drawing area. The position of the origin (i.e. the (0, 0) coordinate) and the scaling factor of the x and y axes may be modified through the origin controller (see Figure 4). The origin controller is invoked by choosing Origin Controller from the "Edit" menu. Dragging the mouse within the drawing area sets the position of the origin to the mouse position. Clicking on an axis allows one to enter a new coordinate value; the scaling factor of the axis is then set by the system inferring that the distance from the origin to the selected position represents the entered coordinate value. For example, setting the value to a high number increases the scaling factor, making everything smaller. The origin controller provides a means to set the coordinate system of a drawing to convenient local coordinate units of an external application rather than raw pixel values.

3 Data Boundary

Perhaps the most apparent difference between EUPHORIA and other graphics editors is the ability to connect attributes of graphics objects to external applications. Changes to these attributes are sent between external Playground modules and EUPHORIA. These connections can be used to create animated visualizations and interactive, direct manipulation GUIs. As described in [1], each Playground module (including EUPHORIA) has a set of "published" externally readable/writable variables called the data boundary³. The data boundary is essentially an interface to the outside world.
Figure 5 shows the graphical representation of the data boundary, which appears as the left portion of the EUPHORIA window. The top portion of the data boundary contains an "external update control" button. This allows users to enable or disable communication between EUPHORIA and external modules. It is sometimes useful to turn off communication for a period of time so that animation can be suspended, allowing graphics objects to be modified⁴.

3.1 Published variables

A published variable represents a value which is shared with external Playground modules. When a variable is changed in an external module, Playground sends the change out to all connected modules, including EUPHORIA. Similarly, when a graphics object is changed (e.g. moved by the user), this change may also be sent out to external Playground modules, according to the published variables and the logical connections between variables [4].

Figure 6 shows the visual appearance of different types of published variables. As with handles, the color of a published variable represents its data type. Each variable has protections, represented as arrows, which control the read/write permissions of the variable for external modules [4]. Clicking on an arrow toggles its protection on or off. Note that having only write world protection is treated as a special case which allows external updates to the variable to be processed in the internal constraint network more efficiently.

Clicking on a published variable selects/unselects it. Pressing the backspace or delete key removes the selected variables from the data boundary⁵. Double clicking on the name of a variable allows the name to be edited. Double clicking on a variable opens a dialog box for viewing and changing its properties (see Figure 7). Users can change the strength of a variable, which affects how the system interprets updates from external applications. For example, by default, user actions such as dragging a graphics shape take precedence over updates from a published variable (which is connected to the graphics shape). If the strength of the variable is set high enough, updates from the variable take precedence over user actions. In the case when the variable is only write world, this means that the user cannot move the graphics shape; it can only be moved by external modules.

3.2 Publishing a graphics attribute

A graphics object handle can be published, meaning that the attribute which it controls is connected to an externally readable/writable published variable. This is achieved by dragging, with the middle mouse button, a connection line from a graphics object handle to the "new variable area" of the data boundary. This has the effect of creating a visual representation of a published variable, informing Playground's connection manager of the variable, and forming a constraint (see Section 4) between the handle's graphics object attribute and the published variable. Similarly, constraints can also be formed between graphics object attributes and variables already in the data boundary, as described in Section 4.2.

3. In other publications, the term "presentation" has been used. The term "data boundary" is used to avoid confusion with graphical presentations.
4. It is possible to edit animated graphics objects with communication activated. However, it is sometimes hard to grab a quickly moving object!
5. Be careful: selected graphics objects are also deleted. Note that it is not possible to directly reinsert a published variable into its former location because of how the connection manager operates.
3.3 Creating user defined types

Tuple data types can be created which consist of multiple fields of data. Also, other data types such as "character" and "boolean" can be published, allowing users to visualize and edit any Playground base type, tuple of base types, tuple of tuples, etc.

Figure 8: Add variables window.

Double clicking on the "new variable area" of the data boundary shows the "Add Variables Window" (see Figure 8). Any data type listed in this window can be published by selecting the type, entering a name, and pressing the Add button.

The set of variables in the data boundary can be "captured" as a group. Pressing the Capture button creates a new tuple type with the data boundary variables as the fields of the tuple and the entered name of the Add Variables Window as the type name. This new type is inserted into the list of available types. Variables of a user defined type can be published by pressing the "Add" button just as with other types. When published into the data boundary, a tuple appears in green with a small triangle to the left. This triangle is used to expose or hide the fields of the tuple (see Figure 6).

3.4 Declaring tuples in external applications

Point tuple variables and user defined tuples can be declared in external applications as described in [4]. For example, a point type can be declared in the following way:

```cpp
// Declare a published "Point" tuple type with two real fields,
// exposing both fields to the Playground connection manager.
PGtuple(Point) {
    PGreal x, y;
    PUBLIC_FIELDS(Point, field(x); field(y))
};
```

4 Constraints

Users can establish constraint relationships among graphics object attributes, published variables, and individual fields of a tuple. In addition to direct manipulation, the graphics object handles are also used in forming constraint relationships. A constraint is a persistent relationship between graphics object attributes. Once a constraint is formed, the system is responsible for maintaining the relationship when changes are made to graphics objects or published variables. Three types of constraints are currently supported: constant, equality, and conversion.

4.1 Constant constraints

Clicking on a handle with the right mouse button causes the corresponding attribute of the graphics object to become constant. So, for example, if the width and height handles of a rectangle are set to be constant, the size of the rectangle becomes fixed. A handle which is constant is colored gray. Clicking with the right mouse button on the handle a second time releases the constant constraint.

4.2 Equality constraints

An equality constraint can be established by dragging a connection line between two handles of the same type with the middle mouse button. For example, a rectangle can be constrained to be a square by forming a constraint between its width and height handles.

Equality constraints can also be formed between graphics object handles and published variables. Like equality constraints between handles, this is achieved by dragging a connection line, with the middle mouse button, between a handle and a published variable. Note that publishing variables as described in Section 3.2 automatically forms a constraint relationship. Constraints to published variables are a means for visualizing and interacting with the value of a published variable. For example, one can form an equality constraint between the top-left handle of a rectangle and a point type published variable.
Whenever the point variable is changed externally (i.e. from a separate module which is connected to the variable), the change is communicated to the rectangle, moving the rectangle to the appropriate position in the window. Similarly, whenever the rectangle is moved through direct manipulation, its updated position is sent out to the connected, external Playground modules.

4.3 Conversion constraints

Equality constraints can be made between handles or published variables of different types. These constraints are known as conversion constraints, since some kind of type conversion is usually necessary. For example, a real handle such as the width of a rectangle can be connected to an integer published variable. This results in a rounding operation when the real value is communicated to the integer published variable. Table 1 lists the supported connection types. Note that only a subset of these conversion operations is available in Playground's connection manager for making connections between modules.

Table 1: Supported equality (E) and conversion (C) constraints.

|           | real | integer | bool | character | string | tuple |
|-----------|------|---------|------|-----------|--------|-------|
| real      | E    | C       |      |           |        |       |
| integer   | C    | E       |      |           |        |       |
| bool      |      |         | E    |           |        |       |
| character |      |         |      | E         |        |       |
| string    | C    | C       | C    | C         | E      |       |
| tuple     |      |         |      |           |        | E/C   |

The conversion operation from string to boolean translates the following strings as having a boolean value of false: 0, f, F, false, False, FALSE. Every other string is interpreted as having a boolean value of true (this rule is restated as a small sketch at the end of this section).

Tuples are compatible based on the number and types of the tuple fields (recursively). For example, a tuple with two real fields is compatible with a tuple with two integer fields. A tuple with two real fields is not compatible with a tuple with three real fields.

4.4 Constraint visualization & editing

Constraints can be visualized and edited. In the "Constraint" menu, choosing Show Constraints enables constraint visualization. By default, constraints are shown as flashing lines between the handles of selected objects and/or published variables. A visualized constraint may be deleted by clicking on it with the right mouse button. Choosing Show Current Propagation visualizes the current computation direction [6] using an arrow head. Unsatisfied constraints are shown as dashed lines without an arrow head. Note that a visualization line or arrow can represent multiple constraints, in the case of tuples. For example, forming a constraint between two points actually forms two constraints: one between the x coordinates and one between the y coordinates. Double headed arrows are used to show mixed computation directions.

By default, certain constraints are not shown. This includes constraints to imaginary objects (see Section 5.1), and constraints in which at least one endpoint is not visible within the window. Choosing Show Hidden Constraints will show all constraints.
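As promised above, the string-to-boolean conversion rule from Section 4.3 restated as a small sketch, purely for illustration (EUPHORIA itself is not implemented in Python):

```python
# The documented string-to-boolean conversion rule, restated for clarity.
FALSE_STRINGS = {"0", "f", "F", "false", "False", "FALSE"}

def string_to_bool(s):
    # Every string outside the listed set is interpreted as true.
    return s not in FALSE_STRINGS

assert string_to_bool("yes") and not string_to_bool("FALSE")
```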
5 Advanced Drawing

EUPHORIA supports a number of high level mechanisms for constructing GUIs, including imaginary objects, alternatives, widgets, and aggregate mappings (under development).

5.1 Imaginary objects

Any graphics object can be made imaginary. An imaginary graphics object is not normally visible or selectable by the user. However, the underlying constraint relationships of an imaginary graphics object are maintained. In this way, imaginary objects are a convenient means for forming indirect constraint relationships among graphics objects.

For example, ovals in EUPHORIA do not have handles in their center. However, through the use of two imaginary rectangles and some constraints, it is possible to create an oval center handle (see Figure 9). The sizes of the two rectangles are constrained to be equal, and the bottom-right corner of one rectangle is constrained to be equal to the top-left of the other. These rectangles are then inscribed within the oval. The result is a handle which is always in the center of the oval, even if the oval is moved or resized.

Figure 9: Creating a center handle for an oval from imaginary graphics objects.

A selected graphics object can be made imaginary by choosing Set/Unset Imaginary from the "Edit" menu. Imaginary graphics objects can be shown (which is useful for editing) or hidden by selecting Imaginaries Shown from the "Edit" menu.

5.2 Alternatives

A user GUI can have multiple representations, which are called alternatives. For example, a simulation GUI might consist of one alternative which shows the simulation state graphically, allowing direct manipulation, and another alternative that shows expanded information in a more "text and button" style representation. Alternatives are often used in the development of widgets (see Section 5.3). For example, a widget may have standard and selected visual representations which are defined as separate alternatives.

Figure 10: Alternatives interface.

At the bottom of the EUPHORIA window is a hidden pane for specifying alternatives; this pane can be exposed by dragging up the bottom divider (see Figure 10). A table lists each alternative as a box with an associated alternative ID. Clicking on an alternative displays it in the main drawing area (only one alternative is displayed at a time). Any drawing in the main drawing area becomes incorporated into the current alternative. Initially, there is only one alternative, with ID 0. Pressing the New... button creates a new alternative based on a supplied alternative ID.

5.3 Widgets

A widget is a compound graphics object: a grouping of graphics objects with a subset of exposed attributes. One may think of a widget as a "module" of graphics shapes with a data boundary of externally readable/writable attributes. The values of the attributes in a widget's data boundary are the only means of controlling or viewing the state of the widget externally. Widgets are created visually by end-users. As with other graphics objects, the external attributes of a widget can be viewed, revealing handles which can be used in forming connections to the widget.

Users can create a widget in the following way. First, a drawing is made in the main drawing area as described in Section 2; this can also include constraint relationships among graphics objects. Second, the attributes of the graphics shapes in the drawing which are to be exposed are published as described in Section 3.2. All other graphics attributes will be essentially encapsulated within the widget.
Third, the drawing is saved to a file. Finally, choosing Load As Widget... from the "Widget" menu creates a widget from the saved specification. The graphics objects of the drawing are grouped within the widget; the published attributes appear as handles. The graphics objects within a widget are in a separate coordinate system (see Section 2.4).

In publishing attributes of a widget, users can specify how the exposed handles of a widget translate values coming into and going out of the widget. Values of the widget's handles can be in terms of the widget's local coordinate system, or the coordinate system of its container (i.e. the main drawing area, or another widget which the widget is contained within). This translation is set in the Variable Attributes Window described in Section 3.1.

6 Command Line Arguments

A number of optional command line arguments allow users to customize the execution of EUPHORIA:

Table 2: Command line arguments.

| argument      | default     | description                                         |
|---------------|-------------|-----------------------------------------------------|
| -buffer       | 290x400     | Size of offscreen buffer, used for screen updates.  |
| -colorDelta   | 200         | Maximum color approximation distance in RGB space.  |
| -display      | no default  | X windows display name.                             |
| -file         | no default  | Saved EUPHORIA file to load on start-up.            |
| -geometry     | 580x400-0+0 | Position and size of the EUPHORIA window.           |
| -invalidAreas | 3           | Number of invalid rectangles maintained.            |
| -pollDuration | 50          | Event polling time before drawing, in msecs.        |
| -pollSleep    | 10          | Sleep time while polling for events, in msecs.      |
| -title        | EUPHORIA    | Title of EUPHORIA window and module name.           |

For example, to start EUPHORIA with a specific display and a small buffer:

```
PGeuphoria -display seesaw.cs.wustl.edu:0.0 -buffer 200x200
```

6.1 Double buffering

Double buffering is used for smooth, flicker free graphics rendering. This means that a resource called a "pixmap" must be allocated to buffer intermediate drawing results. The size of the pixmap is determined by the -buffer argument. Setting this value to a large size can result in more efficient drawing. Unfortunately, large pixmaps use a lot of memory; setting this value too large can cause EUPHORIA not to start, giving an X windows warning.

6.2 Color allocation

Workstations which have a limited number of colors (e.g., 8 bit depth or 256 simultaneous colors) can have problems managing how colors are allocated. EUPHORIA controls how color is allocated, and can approximate a requested color to an already allocated color. The degree of color approximation is set by the -colorDelta option. Color delta is the maximum distance in RGB space at which two colors can be considered equivalent. Setting this value lower will tend to match the requested values more exactly (e.g., setting color delta to 0 disables color approximation).

6.3 Invalidation

Multiple "invalid areas" can be maintained for the EUPHORIA window. These areas determine which portions of the window need to be redrawn when the appearance of window items changes. Having more invalid areas is likely to make drawing more efficient if the buffer is small, or if many sparsely positioned, disconnected graphics items change sporadically.
On a workstation with fast graphics capabilities, fewer invalid areas may result in more efficient drawing.

6.4 Event loop

EUPHORIA's event loop is timed according to the -pollSleep and -pollDuration arguments. Before drawing is performed in an iteration of the event loop, the system first polls for events and updates from the Playground environment. The polling time is determined by -pollDuration. This allows the system to gather many changes to draw simultaneously, rather than drawing each change separately. The duration effectively determines the maximum "frames per second" update rate of the drawing. The default setting allows for at most 20 updates per second (1000 msec / 50 msec = 20); setting this value higher can result in more efficient, but "jumpier", drawing. During the polling loop, EUPHORIA repeatedly sleeps for a period of time (determined by -pollSleep) to wait for new events and to give other processes a chance to run. Setting this value lower can result in faster drawing; however, this can cause EUPHORIA to monopolize the CPU of the workstation on which it is running.

7 Common Questions

Q: Why do graphics objects sometimes change shape during dragging or external updates?

A: EUPHORIA uses a constraint solver not only for end-user constraints, but also for direct manipulation and external updates. A set of constraints can be underconstrained, causing these types of problems: the constraint solver may make arbitrary choices about how to satisfy the set of constraints. This can usually be solved by adding more constraints, for example, adding constant constraints to the width and height of a shape.

Q: Why does EUPHORIA occasionally ignore some of the constraints?

A: In general, it is not always possible to solve all constraint relationships. A set of constraints can be overconstrained if two or more constraints conflict with each other. In the event of conflicts, one or more constraints may be left unsatisfied. Also, cyclic relationships among constraints may cause constraints to be unsatisfied. Unsatisfied constraints are shown as dashed lines when visualized (see Section 4.4). The solution to an overconstrained set is to simplify the constraint relationships between graphics objects.

Q: Why is the EUPHORIA module sometimes not removed from the connection manager user interface after EUPHORIA is exited?

A: Playground's connection manager is unable to detect when a module has been terminated through the use of the UNIX "kill" command. If EUPHORIA is exited using "kill", then the module will continue to appear in the connection manager user interface. Note that this includes the window system methods of quitting a window, such as choosing Quit from the OpenWindows default window menu. The correct way to exit EUPHORIA is to choose Quit from EUPHORIA's file menu.

Q: How can I make EUPHORIA run faster?

A: Table 2 lists a number of options for fine tuning the execution of EUPHORIA. In designing a GUI, one should take into account the speed of the hardware on which it is run and the user's perception of change. That is, attempting to update a GUI at a faster rate than the hardware can handle, or faster than a user can perceive, can result in a GUI that runs slowly. It must be remembered that EUPHORIA is part of a distributed system; if the EUPHORIA module monopolizes the CPU, external modules will run slower, which, in turn, will make EUPHORIA run slowly.
Also, if other modules update their variables repeatedly at a rate faster than the update rate of EUPHORIA, time is still spent dealing with the intermediate values of the variables even though the values may be "skipped over". External modules should utilize Playground's `pgsleep` [4] to adjust their speed to a reasonable rate, and should avoid resending redundant information.

Q: I'm updating the position of an object on the screen from an external module. Why does the object "hop", first moving on the x-axis and then on the y-axis?

A: The external module should be using Playground's atomic step mechanism [4]. This ensures that changes to the x coordinate and the y coordinate occur together. Another way to make drawing more efficient is to place all changes for an iteration within an atomic step, eliminating redundant drawing.

Acknowledgments

We thank the EUPHORIA users of the Washington University CS333 class for their useful comments. We also thank David Saff, who is in the process of developing constraint visualization and editing for EUPHORIA. We thank Bala Swaminathan and Ram Sethuraman for their work in developing the Playground library.
{"Source-Url": "http://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1377&context=cse_research", "len_cl100k_base": 6151, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 17613, "total-output-tokens": 7045, "length": "2e12", "weborganizer": {"__label__adult": 0.0002046823501586914, "__label__art_design": 0.0007257461547851562, "__label__crime_law": 0.00017535686492919922, "__label__education_jobs": 0.0010890960693359375, "__label__entertainment": 8.445978164672852e-05, "__label__fashion_beauty": 8.90493392944336e-05, "__label__finance_business": 0.00013935565948486328, "__label__food_dining": 0.00016951560974121094, "__label__games": 0.0004374980926513672, "__label__hardware": 0.0007333755493164062, "__label__health": 0.00019228458404541016, "__label__history": 0.00016319751739501953, "__label__home_hobbies": 5.668401718139648e-05, "__label__industrial": 0.00021588802337646484, "__label__literature": 0.0001914501190185547, "__label__politics": 0.00011420249938964844, "__label__religion": 0.00030612945556640625, "__label__science_tech": 0.0120697021484375, "__label__social_life": 6.949901580810547e-05, "__label__software": 0.0311737060546875, "__label__software_dev": 0.951171875, "__label__sports_fitness": 0.0001252889633178711, "__label__transportation": 0.0001875162124633789, "__label__travel": 0.0001169443130493164}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30978, 0.03041]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30978, 0.74666]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30978, 0.8902]], "google_gemma-3-12b-it_contains_pii": [[0, 456, false], [456, 664, null], [664, 664, null], [664, 2020, null], [2020, 4941, null], [4941, 6666, null], [6666, 9021, null], [9021, 10869, null], [10869, 12962, null], [12962, 16078, null], [16078, 18906, null], [18906, 21855, null], [21855, 24620, null], [24620, 27943, null], [27943, 30978, null]], "google_gemma-3-12b-it_is_public_document": [[0, 456, true], [456, 664, null], [664, 664, null], [664, 2020, null], [2020, 4941, null], [4941, 6666, null], [6666, 9021, null], [9021, 10869, null], [10869, 12962, null], [12962, 16078, null], [16078, 18906, null], [18906, 21855, null], [21855, 24620, null], [24620, 27943, null], [27943, 30978, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30978, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30978, null]], "pdf_page_numbers": [[0, 456, 1], [456, 664, 2], [664, 664, 3], [664, 2020, 4], [2020, 4941, 5], [4941, 6666, 6], [6666, 9021, 7], [9021, 10869, 8], [10869, 12962, 9], [12962, 16078, 10], [16078, 18906, 11], [18906, 21855, 12], 
[21855, 24620, 13], [24620, 27943, 14], [27943, 30978, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30978, 0.11515]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5d9d039f918d652c3bd2539923fb27bc8fb49cd9
[REMOVED]
{"Source-Url": "https://brage.bibsys.no/xmlui/bitstream/handle/11250/2430606/SINTEF+S17392.pdf?isAllowed=y&sequence=2", "len_cl100k_base": 5195, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 22591, "total-output-tokens": 6248, "length": "2e12", "weborganizer": {"__label__adult": 0.0003139972686767578, "__label__art_design": 0.00029397010803222656, "__label__crime_law": 0.0002467632293701172, "__label__education_jobs": 0.0012149810791015625, "__label__entertainment": 4.696846008300781e-05, "__label__fashion_beauty": 0.00014269351959228516, "__label__finance_business": 0.000270843505859375, "__label__food_dining": 0.0002682209014892578, "__label__games": 0.0003795623779296875, "__label__hardware": 0.0005469322204589844, "__label__health": 0.0003921985626220703, "__label__history": 0.0002186298370361328, "__label__home_hobbies": 6.330013275146484e-05, "__label__industrial": 0.00031304359436035156, "__label__literature": 0.0002598762512207031, "__label__politics": 0.00020802021026611328, "__label__religion": 0.0003879070281982422, "__label__science_tech": 0.01136016845703125, "__label__social_life": 8.26716423034668e-05, "__label__software": 0.0046539306640625, "__label__software_dev": 0.9775390625, "__label__sports_fitness": 0.00023376941680908203, "__label__transportation": 0.0004177093505859375, "__label__travel": 0.00017142295837402344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28590, 0.01341]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28590, 0.47862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28590, 0.91876]], "google_gemma-3-12b-it_contains_pii": [[0, 2572, false], [2572, 5587, null], [5587, 8600, null], [8600, 10177, null], [10177, 13108, null], [13108, 16216, null], [16216, 19339, null], [19339, 22829, null], [22829, 25644, null], [25644, 28590, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2572, true], [2572, 5587, null], [5587, 8600, null], [8600, 10177, null], [10177, 13108, null], [13108, 16216, null], [16216, 19339, null], [19339, 22829, null], [22829, 25644, null], [25644, 28590, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28590, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28590, null]], "pdf_page_numbers": [[0, 2572, 1], [2572, 5587, 2], [5587, 8600, 3], [8600, 10177, 4], [10177, 13108, 5], [13108, 16216, 6], [16216, 19339, 7], [19339, 22829, 8], [22829, 25644, 9], [25644, 28590, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28590, 0.14474]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
f05e13ea883ddd3a0d77d04e71d35383fc21400a
Hex Map 16: Pathfinding

- Highlight cells.
- Pick a search destination.
- Find the shortest path.
- Create a priority queue.

This is part 16 of a tutorial series about hexagon maps. After figuring out the distances between cells, we move on to finding paths between them.

From now on, the Hex Map tutorials are made with Unity 5.6.0. Note that there is a bug in 5.6 that breaks texture arrays in builds on multiple platforms. The workaround is to enable *Is Readable* via the texture array's inspector.

![Planning a journey.](image)

1 Highlighting Cells

To search for a path between two cells, we first have to select those cells. It's no longer a matter of selecting a single cell and watching the search spread through the map. Instead, we first select a starting cell, followed by a destination cell. After making these selections, it would be handy to highlight them, so let's add that functionality. We're not going to create a fancy or efficient highlighting method now, just a quick one to aid development.

1.1 Outline Texture

A simple way to highlight cells is by adding an outline to them. The most straightforward way to do this is with a texture that contains a hexagon outline. Here is such a texture. It is transparent except for a white hexagon outline. By making it white, we can colorize it later as we see fit.

![Cell outline on black background.](image)

Import the texture and set its Texture Type to Sprite. Its Sprite Mode is Single, with the default settings. Because it's a pure white texture, we don't need sRGB conversion. The alpha channel represents transparency, so enable Alpha is Transparency. I also enabled mip maps and set the Filter Mode to Trilinear, because otherwise mip transitions can be obvious for outlines.

![Texture import settings.](image)

The quickest way to add a potential outline to each cell is to give each cell its own sprite. Create a new game object and add an image component to it via Component / UI / Image, then assign our outline sprite to it. Put a Hex Cell Label prefab instance in the scene, make the sprite object a child of it, and apply the changes to the prefab. Then get rid of the instance.

Now every cell has a sprite, but they will be much too large. To make the outlines fit around the cell centers, change the Width and Height of the sprite's transform component to 17.

![Highlight sprites, partially obscured by terrain.](image)

1.3 Drawing on Top of Everything

Because the outline overlaps the cell edge regions, it often ends up below the terrain geometry, which causes part of the outline to disappear. Changing the vertical position of the sprites can prevent this for small elevation changes, but not for cliffs. What we can do instead is always draw the outlines on top of everything else. We need to create a custom sprite shader for this. We can suffice by copying Unity's default sprite shader and making a few changes.
Shader "Custom/Highlight" { Properties { [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {} _Color ("Tint", Color) = (1,1,1,1) [MaterialToggle] PixelSnap ("Pixel snap", Float) = 0 [HideInInspector] _RendererColor ("RendererColor", Color) = (1,1,1,1) [HideInInspector] _Flip ("Flip", Vector) = (1,1,1,1) [PerRendererData] _AlphaTex ("External Alpha", 2D) = "white" {} [PerRendererData] _EnableExternalAlpha ("Enable External Alpha", Float) = 0 } SubShader { Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane" "CanUseSpriteAtlas"="True" } Cull Off ZWrite Off Blend One OneMinusSrcAlpha Pass { CGPROGRAM #pragma vertex SpriteVert #pragma fragment SpriteFrag #pragma target 2.0 #pragma multi_compile_instancing #pragma multi_compile _ PIXELSNAP_ON #pragma multi_compile _ ETC1_EXTERNAL_ALPHA #include "UnitySprites.cginc" ENDCG } } The first change is to ignore the depth buffer, by making the Z test always succeed. ZWrite Off ZTest Always The second change is to draw after all other transparent geometry. Adding 10 to the transparent queue should suffice. "Queue"="Transparent+10" Create a new material that uses this shader. We can ignore all its properties, sticking with the default values. Then have our sprite prefab use this material. Our highlights are now always visible. Even when a cell is hidden behind higher terrain, its outline will still be drawn on top of everything else. This might not be pretty, but it ensures that we can always spot the highlighted cells, which is useful. 1.4 Controlling Highlights We don’t want all cells to be highlighted all the time. In fact, we want to start out with none of them highlighted. We can do this by disabling the image component of the Highlight prefab object. ![Disabled image component.](Image (Script)) To enable the highlight of a cell, add an `EnableHighlight` method to `HexCell`. It has to grab the only child of its `uiRect` and enable its image component. Create a `DisableHighlight` method as well. ```csharp public void DisableHighlight () { Image highlight = uiRect.GetChild(0).GetComponent<Image>(); highlight.enabled = false; } public void EnableHighlight () { Image highlight = uiRect.GetChild(0).GetComponent<Image>(); highlight.enabled = true; } ``` Finally, we can also provide a color to tint the highlight when enabling it. ```csharp public void EnableHighlight (Color color) { Image highlight = uiRect.GetChild(0).GetComponent<Image>(); highlight.color = color; highlight.enabled = true; } ``` 2 Finding a Path Now that we can highlight cells, we can go ahead and select two cells, then search for a path between them. First we have to actually select the cells, then limit the search to finding a path, and finally show that path. 2.1 Search Start We have two different cells to select, the start and the end point of the search. Let's say that to select the cell to search from, you have to hold down the left shift key while clicking. Doing so will highlight that cell with a blue color. We have to keep a reference to this cell for later searching. Also, when a new starting cell is chosen, the highlight of the old one should be disabled. So add a `searchFromCell` field to `HexMapEditor`. ```csharp HexCell previousCell, searchFromCell; ``` Inside `HandleInput`, we can use `Input.GetKey(KeyCode.LeftShift)` to check whether the shift key is being held down. 
```csharp
		if (editMode) {
			EditCells(currentCell);
		}
		else if (Input.GetKey(KeyCode.LeftShift)) {
			if (searchFromCell) {
				searchFromCell.DisableHighlight();
			}
			searchFromCell = currentCell;
			searchFromCell.EnableHighlight(Color.blue);
		}
		else {
			hexGrid.FindDistancesTo(currentCell);
		}
```

![Where to search from.](image)

2.2 Search Destination

Instead of finding all distances to a cell, we're now looking for a path between two specific cells. So rename `HexGrid.FindDistancesTo` to `HexGrid.FindPath` and give it a second `HexCell` parameter. Adjust the `Search` method as well.

```csharp
	public void FindPath (HexCell fromCell, HexCell toCell) {
		StopAllCoroutines();
		StartCoroutine(Search(fromCell, toCell));
	}

	IEnumerator Search (HexCell fromCell, HexCell toCell) {
		for (int i = 0; i < cells.Length; i++) {
			cells[i].Distance = int.MaxValue;
		}

		WaitForSeconds delay = new WaitForSeconds(1 / 60f);
		List<HexCell> frontier = new List<HexCell>();
		fromCell.Distance = 0;
		frontier.Add(fromCell);
		...
	}
```

`HexMapEditor.HandleInput` now has to invoke the adjusted method, using `searchFromCell` and `currentCell` as arguments. Also, we can only look for a path when we know which cell to search from. And we don't have to bother looking for a path when the destination is the same as the start.

```csharp
		if (editMode) {
			EditCells(currentCell);
		}
		else if (Input.GetKey(KeyCode.LeftShift)) {
			...
		}
		else if (searchFromCell && searchFromCell != currentCell) {
			hexGrid.FindPath(searchFromCell, currentCell);
		}
```

When we begin a search, we should first get rid of all previous highlights. So have `HexGrid.Search` disable the highlights while it's resetting the distances. As this also disables the highlight of the starting cell, enable it again afterwards. At this point, we can also highlight the destination cell. Let's make it red.

```csharp
	IEnumerator Search (HexCell fromCell, HexCell toCell) {
		for (int i = 0; i < cells.Length; i++) {
			cells[i].Distance = int.MaxValue;
			cells[i].DisableHighlight();
		}
		fromCell.EnableHighlight(Color.blue);
		toCell.EnableHighlight(Color.red);
		...
	}
```

2.3 Limiting the Search

At this point, our search algorithm still computes the distances for all cells that are reachable from the starting cell. This is no longer necessary. We can stop as soon as we've found the final distance to the destination cell. So when the current cell is the destination, we can break out of the algorithm loop.

```csharp
		while (frontier.Count > 0) {
			yield return delay;
			HexCell current = frontier[0];
			frontier.RemoveAt(0);

			if (current == toCell) {
				break;
			}

			for (HexDirection d = HexDirection.NE; d <= HexDirection.NW; d++) {
				...
			}
		}
```

![Stopping at the destination.](image)

What happens if the destination cannot be reached? Then the algorithm continues until all reachable cells have been searched. Without the possibility of an early exit, it behaves like the old `FindDistancesTo` method.

2.4 Showing the Path

We can find the distance between the start and end of a path, but we do not yet know what the actual path is. To do so, we have to keep track of how each cell was reached. How can we do this?

When adding a cell to the frontier, we do so because it's a neighbor of the current cell. The only exception is the starting cell. All other cells were reached via the current cell. If we keep track of the cell from which each cell was reached, we end up with a cell network. Specifically, a tree network with the starting cell as its root. We can use this to construct a path once we reach the destination.
![A tree network describing paths to the center.](image)

We can store this information by adding another cell reference to `HexCell`. We don't need to serialize this data, so let's use a default property for it.

```csharp
	public HexCell PathFrom { get; set; }
```

Back in `HexGrid.Search`, set the neighbor's `PathFrom` to the current cell when adding it to the frontier. We also have to change this reference when we find a shorter route to a neighbor.

```csharp
				if (neighbor.Distance == int.MaxValue) {
					neighbor.Distance = distance;
					neighbor.PathFrom = current;
					frontier.Add(neighbor);
				}
				else if (distance < neighbor.Distance) {
					neighbor.Distance = distance;
					neighbor.PathFrom = current;
				}
```

After arriving at the destination, we can visualize the path by following these references back to the starting cell, highlighting the cells along the way.

```csharp
			if (current == toCell) {
				current = current.PathFrom;
				while (current != fromCell) {
					current.EnableHighlight(Color.white);
					current = current.PathFrom;
				}
				break;
			}
```

![A path has been found.](image)

Note that there are often multiple shortest paths. Which one is found depends on the order in which the cells are processed. Some paths might look good, others might look bad, but there's never a shorter path. We'll get back to this later.

2.5 Adjusting the Search Start

Once a starting cell has been chosen, changing the destination cell will trigger a new search. The same should happen when selecting a new starting cell. To make this possible, `HexMapEditor` also has to remember the destination cell.

```csharp
	HexCell previousCell, searchFromCell, searchToCell;
```

Using this field, we can also initiate a new search when selecting a new start.

```csharp
		else if (Input.GetKey(KeyCode.LeftShift)) {
			if (searchFromCell) {
				searchFromCell.DisableHighlight();
			}
			searchFromCell = currentCell;
			searchFromCell.EnableHighlight(Color.blue);
			if (searchToCell) {
				hexGrid.FindPath(searchFromCell, searchToCell);
			}
		}
		else if (searchFromCell && searchFromCell != currentCell) {
			searchToCell = currentCell;
			hexGrid.FindPath(searchFromCell, searchToCell);
		}
```

Also, we should avoid making the start cell equal to the destination cell.

```csharp
		if (editMode) {
			EditCells(currentCell);
		}
		else if (
			Input.GetKey(KeyCode.LeftShift) && searchToCell != currentCell
		) {
			...
		}
```

3 Smarter Searching

Although our search algorithm finds the shortest path, it spends a lot of time investigating cells that obviously won't be part of that path. Obvious to us, at least. The algorithm doesn't have a high-level view of the map. It cannot see that searching in some directions will be pointless. It prefers to follow roads, even if they lead away from the destination. Can we make it smarter?

Currently, we're only considering a cell's distance from the start when deciding which cell to process next. If we want to be smart about this, we also have to consider the distance to the destination. Unfortunately, we don't know this yet. But we can make an estimate of the remaining distance. Adding that to the cell distance gives us an indication of the total length of a path that goes through this cell. We can then use that to determine the search priorities of cells.

3.1 Search Heuristic

When we rely on an estimate or guess instead of exactly known data, we say that we use a search heuristic. This heuristic represents our best guess of the remaining distance. We have to determine this value for each cell that we search through, so add an integer property for it to `HexCell`.
We don't need to serialize it, so we can suffice with another default property.

```csharp
	public int SearchHeuristic { get; set; }
```

How do we guess the remaining distance? In the most ideal case, there would be a road leading straight to the destination. If so, the remaining distance is equal to the unmodified distance between the coordinates of this cell and the destination cell. Let's use that as our heuristic.

As the heuristic doesn't depend on the path traveled so far, it is constant during the search. So we only have to compute it once, when `HexGrid.Search` adds a cell to the frontier.

```csharp
				if (neighbor.Distance == int.MaxValue) {
					neighbor.Distance = distance;
					neighbor.PathFrom = current;
					neighbor.SearchHeuristic =
						neighbor.coordinates.DistanceTo(toCell.coordinates);
					frontier.Add(neighbor);
				}
```

3.2 Search Priority

From now on, we'll determine the search priority based on the cell distance plus its heuristic. Let's add a convenient property for this value to `HexCell`.

```csharp
	public int SearchPriority {
		get {
			return distance + SearchHeuristic;
		}
	}
```

To make this work, adjust `HexGrid.Search` so it uses this property to sort the frontier.

```csharp
			frontier.Sort(
				(x, y) => x.SearchPriority.CompareTo(y.SearchPriority)
			);
```

![Searching without vs. with heuristic.](image)

3.3 Admissible Heuristic

Using our new search priorities, we indeed end up visiting fewer cells. However, on a featureless map the algorithm still processes cells that lie in the wrong direction. This happens because our default movement cost is 5 per step, while the heuristic only adds 1 per step. So the influence of the heuristic isn't very strong.

If the movement costs were the same across the map, we could use the same cost when determining the heuristic. In our case, that would be our current heuristic times 5, which would drastically reduce the number of processed cells. However, if there are roads on the map, we might end up overestimating the remaining distance. As a result, the algorithm can make mistakes and produce a path that isn't actually the shortest.

To guarantee that we find a shortest path, we have to make sure that we never overestimate the remaining distance. This is known as an admissible heuristic. Because our minimum movement cost is 1, we have no choice but to use that cost when determining the heuristic. Technically, it's fine to use an even lower cost, but that would only make the heuristic weaker. The lowest possible heuristic is zero, which simply gives us Dijkstra's algorithm. When the heuristic is nonzero, this algorithm is known as A*, pronounced as A-star.

Why is it known as A*? The idea of adding a heuristic to Dijkstra's algorithm was first introduced by Nils Nilsson. He named his variant A1. Later, Bertram Raphael made a better version, which he named A2. After that, Peter Hart proved that A2 was optimal with a good heuristic, so there couldn't be a better version. That prompted him to name it A* to indicate that there would never be another improvement, like A3 or A4. So yes, the A* algorithm is the best you'll ever get, but it is only as good as its heuristic.

4 Priority Queue

Although A* is a good algorithm, our implementation of it is not that efficient. That's because we're using a list to store the frontier, which we have to sort each iteration. As mentioned in the previous tutorial, what we need is a priority queue, but there's no standard implementation of one. Let's now create one ourselves. Our queue must support an enqueue and a dequeue operation, based on priority.
It must also support changing the priority of a cell that's already in the queue. Ideally, we implement this while minimizing searching, sorting, and memory allocations. And we want to keep it simple too.

4.1 Creating a Custom Queue

Create a new `HexCellPriorityQueue` class with the required public methods. We'll use a simple list to keep track of the queue's contents. Also, give it a `Clear` method to reset the queue, so we can reuse it.

```csharp
using System.Collections.Generic;

public class HexCellPriorityQueue {

	List<HexCell> list = new List<HexCell>();

	public void Enqueue (HexCell cell) {
	}

	public HexCell Dequeue () {
		return null;
	}

	public void Change (HexCell cell) {
	}

	public void Clear () {
		list.Clear();
	}
}
```

We store the cell priorities in the cells themselves. So a cell's priority has to be set before it is added to the queue. But in case of a priority change, it is probably useful to know what the old priority was. So let's add that as a parameter to `Change`.

```csharp
	public void Change (HexCell cell, int oldPriority) {
	}
```

It's also useful to know how many cells are in the queue, so add a `Count` property for that. Simply use a field that is incremented and decremented appropriately.

```csharp
	int count = 0;

	public int Count {
		get {
			return count;
		}
	}

	public void Enqueue (HexCell cell) {
		count += 1;
	}

	public HexCell Dequeue () {
		count -= 1;
		return null;
	}

	...

	public void Clear () {
		list.Clear();
		count = 0;
	}
```

4.2 Adding to the Queue

When a cell is added to the queue, let's start by simply using its priority as its index, treating the list as a simple array.

```csharp
	public void Enqueue (HexCell cell) {
		count += 1;
		int priority = cell.SearchPriority;
		list[priority] = cell;
	}
```

However, that only works if the list is long enough, otherwise we go out of bounds. We can prevent that by adding dummy elements to the list until it has the required length. These empty elements don't reference a cell, so they're created by adding `null` to the list.

```csharp
		int priority = cell.SearchPriority;
		while (priority >= list.Count) {
			list.Add(null);
		}
		list[priority] = cell;
```

But this only stores a single cell per priority, while there will likely be multiple. To keep track of all the cells with the same priority, we have to use another list. While we could use an actual list per priority, we can also add a property to `HexCell` to link cells together. This allows us to create a chain of cells, known as a linked list.

```csharp
	public HexCell NextWithSamePriority { get; set; }
```

To create the chain, have `HexCellPriorityQueue.Enqueue` make the newly added cell reference the current value at the same priority, before replacing it.

```csharp
		cell.NextWithSamePriority = list[priority];
		list[priority] = cell;
```

4.3 Removing from the Queue

To retrieve a cell from the priority queue, we have to access the linked list at the lowest non-empty index. So loop through the list until we find it. If we don't, the queue is empty and we return `null`.

We could return any cell from the found chain, because they all have the same priority. It's simplest to return the cell at the start of the chain.

```csharp
	public HexCell Dequeue () {
		count -= 1;
		for (int i = 0; i < list.Count; i++) {
			HexCell cell = list[i];
			if (cell != null) {
				return cell;
			}
		}
		return null;
	}
```

To keep a reference to the rest of the chain, make the next cell with the same priority the new start. If there was only one cell at this priority level, the element becomes `null` and will be skipped in the future.
```csharp
			if (cell != null) {
				list[i] = cell.NextWithSamePriority;
				return cell;
			}
```

4.4 Keeping Track of the Minimum

This approach works, but requires us to iterate through the list each time we retrieve a cell. We cannot avoid searching for the lowest non-empty index, but we do not have to start from zero every time. Instead, we could keep track of the minimum priority and start the search from there. Initially, the minimum is effectively infinite.

```csharp
	int minimum = int.MaxValue;

	...

	public void Clear () {
		list.Clear();
		count = 0;
		minimum = int.MaxValue;
	}
```

When a cell is added to the queue, adjust the minimum if necessary.

```csharp
	public void Enqueue (HexCell cell) {
		count += 1;
		int priority = cell.SearchPriority;
		if (priority < minimum) {
			minimum = priority;
		}
		...
	}
```

And when dequeuing, use the minimum to iterate through the list, instead of starting at zero.

```csharp
	public HexCell Dequeue () {
		count -= 1;
		for (; minimum < list.Count; minimum++) {
			HexCell cell = list[minimum];
			if (cell != null) {
				list[minimum] = cell.NextWithSamePriority;
				return cell;
			}
		}
		return null;
	}
```

This drastically reduces the amount of time we have to spend looping through our priority list.

4.5 Changing Priorities

When a cell's priority changes, it has to be removed from the linked list that it's currently a part of. To do so, we have to follow the chain until we find it.

Begin by declaring the head of the old priority list to be the current cell, and also keep track of the next cell. We can directly grab the next cell, because we know that there is at least one cell at this index.

```csharp
	public void Change (HexCell cell, int oldPriority) {
		HexCell current = list[oldPriority];
		HexCell next = current.NextWithSamePriority;
	}
```

If the current cell is the changed cell, then it is the head cell and we can cut it away, as if we dequeued it.

```csharp
		HexCell current = list[oldPriority];
		HexCell next = current.NextWithSamePriority;
		if (current == cell) {
			list[oldPriority] = next;
		}
```

If not, we have to follow the chain until we end up at the cell in front of the changed cell. That one holds the reference to the cell that has been changed.

```csharp
		else {
			while (next != cell) {
				current = next;
				next = current.NextWithSamePriority;
			}
		}
```

At this point, we can remove the changed cell from the linked list, by skipping it.

```csharp
			while (next != cell) {
				current = next;
				next = current.NextWithSamePriority;
			}
			current.NextWithSamePriority = cell.NextWithSamePriority;
```

After the cell has been removed, it has to be added again so it ends up in the list for its new priority.

```csharp
	public void Change (HexCell cell, int oldPriority) {
		...
		Enqueue(cell);
	}
```

4.6 Using the Queue

Now we can use our custom priority queue in `HexGrid`. We can make do with a single instance that we reuse for all searches.

```csharp
	HexCellPriorityQueue searchFrontier;

	...

	IEnumerator Search (HexCell fromCell, HexCell toCell) {
		if (searchFrontier == null) {
			searchFrontier = new HexCellPriorityQueue();
		}
		else {
			searchFrontier.Clear();
		}
		...
	}
```

The `Search` method now has to enqueue `fromCell` before starting its loop, and each iteration begins by dequeuing a cell. This replaces the old frontier code.
```csharp
		WaitForSeconds delay = new WaitForSeconds(1 / 60f);
//		List<HexCell> frontier = new List<HexCell>();
		fromCell.Distance = 0;
//		frontier.Add(fromCell);
		searchFrontier.Enqueue(fromCell);
		while (searchFrontier.Count > 0) {
			yield return delay;
			HexCell current = searchFrontier.Dequeue();
//			frontier.RemoveAt(0);
			...
		}
```

Adjust the code for adding and changing a neighbor as well. Make sure to remember the old priority before changing it.

```csharp
				if (neighbor.Distance == int.MaxValue) {
					neighbor.Distance = distance;
					neighbor.PathFrom = current;
					neighbor.SearchHeuristic =
						neighbor.coordinates.DistanceTo(toCell.coordinates);
//					frontier.Add(neighbor);
					searchFrontier.Enqueue(neighbor);
				}
				else if (distance < neighbor.Distance) {
					int oldPriority = neighbor.SearchPriority;
					neighbor.Distance = distance;
					neighbor.PathFrom = current;
					searchFrontier.Change(neighbor, oldPriority);
				}
```

Finally, we no longer need to sort the frontier.

```csharp
//			frontier.Sort(
//				(x, y) => x.SearchPriority.CompareTo(y.SearchPriority)
//			);
```

![Searching with a priority queue.](image)

As mentioned earlier, which shortest path is found depends on the order in which the cells are processed. Our queue produces a different order than a sorted list, hence you can get different paths. Because we're both adding to and removing from the head of the linked list for each priority, the lists function as stacks rather than queues: the cells that were added last get processed first. A side effect of this approach is that the algorithm tends to zigzag, which makes it more likely to produce paths that zigzag as well. Fortunately, such paths tend to look better, so it's a nice side effect.

![Sorted list vs. priority queue.](image)

The next tutorial is Limited Movement.

made by Jasper Flick
{"Source-Url": "https://catlikecoding.com/unity/tutorials/hex-map/part-16/Hex-Map-16.pdf", "len_cl100k_base": 6219, "olmocr-version": "0.1.53", "pdf-total-pages": 28, "total-fallback-pages": 0, "total-input-tokens": 48130, "total-output-tokens": 7641, "length": "2e12", "weborganizer": {"__label__adult": 0.0004503726959228515, "__label__art_design": 0.000518798828125, "__label__crime_law": 0.0002601146697998047, "__label__education_jobs": 0.00030541419982910156, "__label__entertainment": 9.649991989135742e-05, "__label__fashion_beauty": 0.00017130374908447266, "__label__finance_business": 7.146596908569336e-05, "__label__food_dining": 0.0003533363342285156, "__label__games": 0.0029392242431640625, "__label__hardware": 0.0007252693176269531, "__label__health": 0.00018978118896484375, "__label__history": 0.0002086162567138672, "__label__home_hobbies": 9.238719940185548e-05, "__label__industrial": 0.000255584716796875, "__label__literature": 0.0001971721649169922, "__label__politics": 0.0001665353775024414, "__label__religion": 0.0004413127899169922, "__label__science_tech": 0.002384185791015625, "__label__social_life": 6.538629531860352e-05, "__label__software": 0.0052947998046875, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.00033593177795410156, "__label__transportation": 0.0003666877746582031, "__label__travel": 0.0002484321594238281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26762, 0.00676]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26762, 0.61395]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26762, 0.87611]], "google_gemma-3-12b-it_contains_pii": [[0, 527, false], [527, 1011, null], [1011, 1784, null], [1784, 2346, null], [2346, 2898, null], [2898, 4407, null], [4407, 4660, null], [4660, 5673, null], [5673, 6889, null], [6889, 8467, null], [8467, 8744, null], [8744, 9829, null], [9829, 10954, null], [10954, 11558, null], [11558, 12634, null], [12634, 14677, null], [14677, 15772, null], [15772, 16500, null], [16500, 17020, null], [17020, 18721, null], [18721, 19682, null], [19682, 20332, null], [20332, 21264, null], [21264, 22686, null], [22686, 24044, null], [24044, 25288, null], [25288, 26557, null], [26557, 26762, null]], "google_gemma-3-12b-it_is_public_document": [[0, 527, true], [527, 1011, null], [1011, 1784, null], [1784, 2346, null], [2346, 2898, null], [2898, 4407, null], [4407, 4660, null], [4660, 5673, null], [5673, 6889, null], [6889, 8467, null], [8467, 8744, null], [8744, 9829, null], [9829, 10954, null], [10954, 11558, null], [11558, 12634, null], [12634, 14677, null], [14677, 15772, null], [15772, 16500, null], [16500, 17020, null], [17020, 18721, null], [18721, 19682, null], [19682, 20332, null], [20332, 21264, null], [21264, 22686, null], [22686, 24044, null], [24044, 25288, null], [25288, 26557, null], [26557, 26762, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26762, null]], 
"google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26762, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26762, null]], "pdf_page_numbers": [[0, 527, 1], [527, 1011, 2], [1011, 1784, 3], [1784, 2346, 4], [2346, 2898, 5], [2898, 4407, 6], [4407, 4660, 7], [4660, 5673, 8], [5673, 6889, 9], [6889, 8467, 10], [8467, 8744, 11], [8744, 9829, 12], [9829, 10954, 13], [10954, 11558, 14], [11558, 12634, 15], [12634, 14677, 16], [14677, 15772, 17], [15772, 16500, 18], [16500, 17020, 19], [17020, 18721, 20], [18721, 19682, 21], [19682, 20332, 22], [20332, 21264, 23], [21264, 22686, 24], [22686, 24044, 25], [24044, 25288, 26], [25288, 26557, 27], [26557, 26762, 28]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26762, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
2b7d0d1643f7bb2f38ffebe510b38860c874f4af
Multilingual Question Answering Over Linked Data

QALD-3 Open Challenge

Document Version: December 18, 2013

The QALD-3 open challenge is the third instalment of the Question Answering over Linked Data benchmark and is organized as a half-day lab at CLEF 2013. QALD-3 offers two tasks: multilingual question answering, aimed at all kinds of question answering systems that mediate between a user, expressing his or her information need in natural language, and semantic data, and ontology lexicalization, aimed at all methods that (semi-)automatically create lexicalizations for ontology concepts. All relevant information for participating in the challenge is given in this document.

1 Introduction
2 Relevant information in a nutshell
3 Task 1: Multilingual question answering
  3.1 Datasets
    3.1.1 English DBpedia 3.8
    3.1.2 Spanish DBpedia
    3.1.3 MusicBrainz
    3.1.4 SPARQL endpoint
  3.2 Training and test phase
    3.2.1 Training questions
    3.2.2 Submitting results during test phase
    3.2.3 Evaluation measures
  3.3 Participant's challenge
4 Task 2: Ontology lexicalization
  4.1 Training data
  4.2 Submitting results during test phase
  4.3 Evaluation measures
5 Contact and trouble shooting

1 Introduction

Motivation and Goal

While more and more semantic data is published on the Web, the question of how typical Web users can access this body of knowledge becomes of crucial importance. Over the past years, there has been a growing amount of research on interaction paradigms that allow end users to profit from the expressive power of Semantic Web standards while at the same time hiding their complexity behind an intuitive and easy-to-use interface. Natural language interfaces in particular have received wide attention, as they allow users to express arbitrarily complex information needs in an intuitive fashion and, at least in principle, in their own language.

The key challenge lies in translating the users' information needs into a form such that they can be evaluated using standard Semantic Web query processing and inferencing techniques. To this end, systems have to deal with a heterogeneous, distributed and very large set of highly interconnected data. The availability of such an amount of open and structured data has no precedent in computer science, and approaches that can deal with the specific character of linked data are urgently needed.

In addition, multilinguality has become an issue of major interest for the Semantic Web community, as both the number of actors creating and publishing data in languages other than English and the number of users that access this data and speak native languages other than English are growing substantially. In order to achieve the goal that users from all countries have access to the same information, there is an impending need for systems that can help overcome language barriers by facilitating multilingual access to semantic data originally produced for a different culture and language.
Coordinators

- Philipp Cimiano (CITEC, Universität Bielefeld, Germany)
- Vanessa Lopez (IBM Research, Dublin, Ireland)
- Christina Unger (CITEC, Universität Bielefeld, Germany)
- Elena Cabrio (INRIA Sophia-Antipolis Méditerranée, Cedex, France)
- Axel-Cyrille Ngonga Ngomo (Universität Leipzig, Germany)
- Sebastian Walter (CITEC, Universität Bielefeld, Germany)

2 Relevant information in a nutshell

Workshop website: http://www.sc.cit-ec.uni-bielefeld.de/qald/

Task 1: Multilingual question answering

Datasets:

- DBpedia 3.8: http://downloads.dbpedia.org/3.8/en/
- Spanish DBpedia: http://es.dbpedia.org/DBpediaESdata/
- MusicBrainz: http://greententacle.techfak.uni-bielefeld.de/~cunger/qald2/musicbrainz.tar.gz (226.8 MB)

SPARQL endpoints:

- For English DBpedia and MusicBrainz: http://vtentacle.techfak.uni-bielefeld.de:443/sparql
- For Spanish DBpedia: http://es.dbpedia.org/sparql

Training questions: http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/

- dbpedia-train.xml and dbpedia-train-answers.xml
- esdbpedia-train.xml and esdbpedia-train-answers.xml
- musicbrainz-train.xml and musicbrainz-train-answers.xml

Test questions will be made available on May 1, 2013.

Participant's challenge: http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/participants-challenge.xml

Task 2: Ontology lexicalization

Training concepts (10 classes and 30 properties) from the DBpedia ontology:

- http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/dbpedia_train_classes_properties.txt

Corresponding lemon lexicon containing lexicalizations of those concepts:

- http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/dbpedia_train_lexicon_en.ttl

Test concepts will be made available on May 1, 2013.

Evaluation

Submission of results and evaluation is done by means of an online form: http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/index.php?x=evaltool&q=3

Results for the training phase can be uploaded at any time; results for the test phase can be uploaded from May 1 to May 17, 2013.

Resources

You are free to use all resources (e.g., WordNet, GeoNames, dictionary tools, and so on).

Contact

Updates on the open challenge will be published on the *Interacting with Linked Data* mailing list: https://lists.techfak.uni-bielefeld.de/cit-ec/mailman/listinfo/ild

In case of questions, problems or comments, please contact Christina Unger: cunger@cit-ec.uni-bielefeld.de

3 Task 1: Multilingual question answering

This task is aimed at all kinds of question answering systems that mediate between a user, expressing his or her information need in natural language, and semantic data.

3.1 Datasets

In order to evaluate and compare question answering systems, we provide three RDF datasets: English DBpedia 3.8 (http://dbpedia.org), Spanish DBpedia (http://es.dbpedia.org), and MusicBrainz (musicbrainz.org). In order to work with the datasets, you can either download them or use the provided SPARQL endpoints.

3.1.1 English DBpedia 3.8

DBpedia is a community effort to extract structured information from Wikipedia and to make this information available as RDF data. The RDF dataset provided for the challenge is the official DBpedia 3.8 dataset for English, including links, most importantly to YAGO\(^1\) categories and MusicBrainz.
This dataset comprises all files provided at:

- http://downloads.dbpedia.org/3.8/links/

Namespaces that are used in the provided training and test queries are the following:

- dbo: <http://dbpedia.org/ontology/>
- dbp: <http://dbpedia.org/property/>
- res: <http://dbpedia.org/resource/>
- yago: <http://dbpedia.org/class/yago/>
- foaf: <http://xmlns.com/foaf/0.1/>
- xsd: <http://www.w3.org/2001/XMLSchema#>
- rdfs: <http://www.w3.org/2000/01/rdf-schema#>
- rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

3.1.2 Spanish DBpedia

Since 2011, information from Wikipedia has also been extracted in 15 non-English languages, including Spanish. So far, the Spanish DBpedia contains almost 100 million RDF triples. The data can be downloaded at http://es.dbpedia.org/DBpediaESdata/.

\(^1\)For detailed information on the YAGO class hierarchy, please see http://www.mpi-inf.mpg.de/yago-naga/yago/.

Relevant namespaces that are used in the provided training and test queries are the following (in addition to the usual rdf, rdfs and xsd):

- esdbo: <http://es.dbpedia.org/ontology/>
- esdbp: <http://es.dbpedia.org/property/>
- esres: <http://es.dbpedia.org/resource/>

3.1.3 MusicBrainz

MusicBrainz is a collaborative effort to create an open content music database. The dataset provided for the challenge is an RDF export containing all classes (artists, albums and tracks) and the most important properties of the MusicBrainz database. A package containing all RDF data\(^2\) can be downloaded from the following location:

http://greententacle.techfak.uni-bielefeld.de/~cunger/qald2/musicbrainz.tar.gz (226.8 MB)

The following namespaces are used in the provided training and test queries:

- mo: <http://purl.org/ontology/mo/>
- bio: <http://purl.org/vocab/bio/0.1/>
- rel: <http://purl.org/vocab/relationship/>
- event: <http://purl.org/NET/c4dm/event.owl#>
- tl: <http://purl.org/NET/c4dm/timeline.owl#>
- foaf: <http://xmlns.com/foaf/0.1/>
- dc: <http://purl.org/dc/elements/1.1/>
- xsd: <http://www.w3.org/2001/XMLSchema#>
- rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

The RDF export builds on the Music Ontology, a specification of which can be found at http://musicontology.com. Examples for how to model data w.r.t. this specification are given in the Music Ontology Wiki: http://wiki.musicontology.com/index.php/Examples. In the following, we briefly describe the most important classes and relations relevant for the challenge.

There are three major classes:

- mo:MusicArtist and its subtypes mo:MusicGroup for bands and orchestras, and mo:SoloMusicArtist for persons (independent of whether they are solo artists or members of a group)
- mo:Record
- mo:Track

Artists have a birth and death date modeled by means of the BIO vocabulary\(^3\). For example, the following SPARQL query extracts the birth and death date of John Lennon:

```
SELECT ?birthdate ?deathdate WHERE {
  ?artist foaf:name 'John Lennon' .
  ?artist bio:event ?event1 .
  ?event1 rdf:type bio:Birth .
  ?event1 bio:date ?birthdate .
  ?artist bio:event ?event2 .
  ?event2 rdf:type bio:Death .
  ?event2 bio:date ?deathdate .
}
```

\(^2\)In fact it contains only a subset of all track information, due to performance problems.

\(^3\)http://vocab.org/bio/0.1/
In exactly the same way, the corresponding dates for music groups are expressed (where the birth date can be read as the founding date and the death date as the date the group broke up).

Artists are related among each other through relations like rel:spouseOf, rel:parentOf, rel:siblingOf and rel:collaboratesWith from the RELATIONSHIP vocabulary.

Membership in a group is expressed in two ways: by means of the simple relation mo:member_of, indicating that someone is or was a member of a group, and by means of the Event and Timeline Ontology. Using the former, the following triple expresses that John Lennon is or was a member of The Beatles:

```
<http://zitgist.com/music/artist/4d5447d7-c61c-4120-ba1b-d7f471d385b9>
    mo:member_of
<http://zitgist.com/music/artist/b10bbbfc-cf9e-42e0-be17-e2c3e1d2600d>
```

Or, in a more human-readable way:

```
?artist foaf:name 'John Lennon' .
?band foaf:name 'The Beatles' .
?artist mo:member_of ?band .
```

In order to also express time information, the more complex Event and Timeline Ontology representation has to be used. For example, the following triples express that Pete Best was a member of The Beatles from August 12, 1960 until August 16, 1962.

```
?artist foaf:name 'Pete Best' .
?event rdf:type mo:membership .
?band foaf:name 'The Beatles' .
?event event:time ?time .
?time tl:start '1960-08-12'^^xsd:date .
?time tl:end '1962-08-16'^^xsd:date .
```

Records are related to their creator through the property foaf:maker, and through mo:releaseType to the type of the record (mo:album, mo:single, mo:ep, mo:soundtrack, mo:live, mo:compilation, mo:remix, mo:interview, and mo:audiobook). For example, the following SPARQL query extracts all live albums by Slayer:

```
SELECT ?album WHERE {
  ?artist foaf:name 'Slayer' .
  ?album foaf:maker ?artist .
  ?album mo:releaseType mo:live .
}
```

The dataset also contains relations between records and artists, specifying their role during the record creation, for example mo:performer, mo:singer, mo:composer, mo:producer, and mo:lyricist.

Tracks are also related to their creator through the property foaf:maker, to their duration through tl:duration, and through mo:trackNum to their position in the track list of a record. For example, the following SPARQL query extracts the title of the first track of Abbey Road:

```
SELECT ?title WHERE {
  ?album dc:title 'Abbey Road' .
  ?album mo:track ?track .
  ?track mo:trackNum '1' .
  ?track dc:title ?title .
}
```

3.1.4 SPARQL endpoint

DBpedia provides official SPARQL endpoints:

- English DBpedia: http://dbpedia.org/sparql/
- Spanish DBpedia: http://es.dbpedia.org/sparql

We also provide a SPARQL endpoint for English DBpedia as well as the MusicBrainz dataset at the following location:

http://greententacle.techfak.uni-bielefeld.de:5171/sparql

Evaluation will take place with respect to this SPARQL endpoint for the dbpedia and musicbrainz question sets, and with respect to the Spanish DBpedia endpoint for the esdbpedia question set.

3.2 Training and test phase

The task is to extract correct answers for natural language questions or corresponding keywords from one of the given RDF repositories. Participating systems will be evaluated with respect to precision and recall. Moreover, participants are encouraged to report performance, i.e. the average time their system takes to answer a query.
3.2.1 Training questions

In order to get acquainted with the datasets and possible questions, a set of 100 training questions for each dataset can be downloaded at the following location:

http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/

- English DBpedia: dbpedia-train.xml (without answers) and dbpedia-train-answers.xml (with answers)
- Spanish DBpedia: esdbpedia-train.xml (without answers) and esdbpedia-train-answers.xml (with answers)
- MusicBrainz: musicbrainz-train.xml (without answers) and musicbrainz-train-answers.xml (with answers)

All training questions are annotated with keywords, corresponding SPARQL queries and, if indicated, answers retrieved from the provided SPARQL endpoint. Annotations are provided in the following XML format. The overall document is enclosed by a tag that specifies an ID for the dataset, indicating the domain and whether it is train or test (i.e. dbpedia-train, dbpedia-test, esdbpedia-train, esdbpedia-test, musicbrainz-train, musicbrainz-test).

```
<dataset id="dbpedia-train">
  <question id="1">...</question>
  ...
  <question id="100">...</question>
</dataset>
```

Each of the questions specifies an ID for the question (don't worry if they are not ordered) together with a range of other attributes explained below, the natural language string of the question in six languages (English, German, Spanish, Italian, French, and Dutch), keywords in the same six languages, a corresponding SPARQL query, as well as the answers this query returns. Here is an example:

```
<question id="36" answertype="resource" aggregation="false" onlydbo="false">
  <string lang="en">Through which countries does the Yenisei river flow?</string>
  <string lang="de">Durch welche Länder fließt der Yenisei?</string>
  <string lang="es">Por qué países fluye el río Yenisei?</string>
  <string lang="it">Attraverso quali stati scorre il fiume Yenisei?</string>
  <string lang="fr">...</string>
  <string lang="nl">Door welke landen stroomt de Jenisej?</string>
  <keywords lang="en">Yenisei river, flow through, country</keywords>
  <query>
    PREFIX res: <http://dbpedia.org/resource/>
    PREFIX dbp: <http://dbpedia.org/property/>
    SELECT DISTINCT ?uri WHERE {
      res:Yenisei_River dbp:country ?uri .
      OPTIONAL { ?uri rdfs:label ?string . FILTER (lang(?string) = "en") }
    }
  </query>
  <answers>
    <answer><uri>http://dbpedia.org/resource/Mongolia</uri></answer>
    <answer><uri>http://dbpedia.org/resource/Russia</uri></answer>
  </answers>
</question>
```

The following attributes are specified for each question along with its ID:

- answertype gives the answer type, which can be one of the following:
  - resource: One or many resources, for which the URI is provided.
  - string: A string value such as Valentina Tereshkova.
  - number: A numerical value such as 47 or 1.8.
  - date: A date provided in the format YYYY-MM-DD, e.g. 1983-11-02. This format is also required when you submit results containing a date as answer.
  - boolean: Either true or false.

  Answers of these types are required to be enclosed in the corresponding tag, i.e. <number>47</number>, <string>Valentina Tereshkova</string> and <boolean>true</boolean> (except for resources, for which a URI and/or a string should be provided, as in the example above).

- aggregation indicates whether any operations beyond triple pattern matching are required to answer the question (e.g., counting, filters, ordering, etc.).

- onlydbo is given only for DBpedia questions and reports whether the query relies solely on concepts from the DBpedia ontology.

As an additional challenge, a few of the training and test questions are out of scope, i.e. they cannot be answered with respect to the dataset.
The query is then specified as OUT OF SCOPE and the answer set is empty. For evaluation, your system should in these cases specify OUT OF SCOPE as the query and/or return an empty answer set.

3.2.2 Submitting results during test phase

During the test phase, a set of different questions for each dataset is provided without annotations at the following location:

http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/

- dbpedia-test-questions.xml
- musicbrainz-test-questions.xml

Note that there will be only one question set for DBpedia, applying to both the English and the Spanish DBpedia. Which one you worked on has to be indicated in the dataset ID tag of your submission.

Results can be submitted from May 1 to May 17, 2013, via the same online form used during the training phase (note the drop-down box that will then allow you to specify test instead of training):

http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/index.php?x=evaltool&q=3

The only difference is that evaluation results are not displayed. You can upload results as often as you like (e.g., trying different configurations of your system); in this case the file with the best results will count.

All submissions are required to comply with the XML format specified above. For all questions, the dataset ID and question IDs are obligatory. Beyond that, you are free to specify either a SPARQL query or the answers (or both), depending on which of them your system returns. You are also allowed to change the natural language question or keywords (insert quotes, reformulate, use some controlled language format, and the like). If you do so, please document these changes, i.e. replace the provided question string or keywords by the input you used.

Also, it is preferred if your submission leaves out all question strings and keywords except for the ones in the language your system worked on. So if you have a Dutch question answering system, please provide only the Dutch question string and/or keywords in your submission. Otherwise, please mark the language in either the system name or the configuration slot when uploading it. This way we can properly honour your multilinguality efforts.

3.2.3 Evaluation measures

For each of the questions, your specified answers, or the answers your specified SPARQL query retrieves, will be compared to the answers provided by the gold standard XML document. The evaluation tool computes precision, recall and F-measure for every question:\(^6\)

\[
\text{Recall} = \frac{\text{number of correct system answers}}{\text{number of gold standard answers}}
\]

\[
\text{Precision} = \frac{\text{number of correct system answers}}{\text{number of system answers}}
\]

\[
\text{F-measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}
\]

The tool then also computes the overall precision and recall as the arithmetic mean of all single precision and recall values, as well as the overall F-measure. All these results are printed in a simple HTML output; additionally, you get a list of all questions that your tool failed to answer correctly. You are allowed to submit results as often as you wish.
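To make the averaging concrete, here is a minimal sketch in Python of how the per-question measures and their overall means might be computed. It is not the official evaluation tool: the treatment of empty system answer sets and of the overall F-measure (taken here as the mean of per-question F-measures) are assumptions, while out-of-scope questions follow footnote 6 below.

```python
def question_scores(system, gold, out_of_scope=False):
    """Precision, recall and F-measure for a single question.

    system and gold are sets of answer strings (URIs, literals, ...).
    out_of_scope marks questions the gold standard deems unanswerable
    over the dataset (see footnote 6).
    """
    if out_of_scope:
        score = 1.0 if not system else 0.0
        return score, score, score
    correct = len(system & gold)
    # Empty answer sets are scored as 0 here; the official tool's
    # handling of this edge case may differ.
    precision = correct / len(system) if system else 0.0
    recall = correct / len(gold) if gold else 0.0
    f_measure = (
        2 * precision * recall / (precision + recall)
        if precision + recall > 0 else 0.0
    )
    return precision, recall, f_measure


def overall_scores(per_question):
    """Overall values as the arithmetic mean of the per-question values."""
    n = len(per_question)
    return tuple(sum(q[i] for q in per_question) / n for i in range(3))


# Example: one perfectly answered question, one with half the gold answers.
scores = [
    question_scores({"res:Mongolia", "res:Russia"},
                    {"res:Mongolia", "res:Russia"}),
    question_scores({"res:A"}, {"res:A", "res:B"}),
]
print(overall_scores(scores))  # approximately (1.0, 0.75, 0.83)
```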
3.3 Participant's challenge

Are there questions that your tool is very good at but that might prove difficult for others? Are there questions that are very interesting but are not among the training questions? Then send in these questions and challenge others!

In order to make a start, we provide a few questions that cannot be answered over DBpedia or MusicBrainz alone but require the combination of both datasets. You can access them at the following location:

http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/participants-challenge.xml

If there are any questions you would like to contribute, please send an email with the questions and, ideally, corresponding SPARQL queries to Christina Unger: cunger@cit-ec.uni-bielefeld.de. The questions will then be added to the document and published on the ILD mailing list.

---

\(^6\) In the case of out-of-scope questions, an empty answer set counts as precision and recall 1, while a non-empty answer set counts as precision and recall 0.

4 Task 2: Ontology lexicalization

Multilingual information access can be facilitated by the availability of lexica in different languages, for example allowing for an easy mapping of Spanish, German, and French natural language expressions to English ontology labels. The task consists of finding English lexicalizations of a set of classes and properties from the DBpedia ontology, for example in a Wikipedia corpus.

4.1 Training data

The training data consists of a set of 10 classes and 30 properties from the DBpedia ontology, as well as a lemon lexicon containing lexicalizations of those classes and properties:

- http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/dbpedia_train_classes_properties.txt
- http://greententacle.techfak.uni-bielefeld.de/~cunger/qald/3/dbpedia_train_lexicon_en.ttl

Detailed information on lemon can be found at http://lemon-model.net, including a cookbook that contains all you need to know in order to create lemon lexica: http://lemon-model.net/lemon-cookbook.pdf
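As a quick way to inspect the training lexicon, it can be loaded with rdflib and queried for each entry's written form and the DBpedia concept it refers to. This is our own sketch: the lemon properties used (canonicalForm, writtenRep, sense, reference) belong to the lemon core model, but verify the namespace and structure against the cookbook:

```python
from rdflib import Graph

g = Graph()
g.parse("dbpedia_train_lexicon_en.ttl", format="turtle")

# For each lexical entry: its canonical written form and the
# DBpedia class/property it lexicalizes.
rows = g.query("""
    PREFIX lemon: <http://lemon-model.net/lemon#>
    SELECT ?form ?ref WHERE {
        ?entry lemon:canonicalForm/lemon:writtenRep ?form .
        ?entry lemon:sense/lemon:reference ?ref .
    }
""")
for form, ref in rows:
    print(f"{form} -> {ref}")
```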
4.2 Submitting results during test phase

Submitted lexicalizations are expected to be in lemon format. The training lexicon uses LexInfo (lexinfo.net/ontology/2.0/lexinfo.owl) as its linguistic ontology, mainly for reasons of readability, but you may also use ISOcat (http://www.isocat.org), for example.

Lexica can be submitted from May 1 to May 17, 2013, via the same online form used for the question answering task. Evaluation results are displayed during the training phase but not during the test phase. You can upload results as often as you like (e.g., trying different configurations of your system); in this case the file with the best results will count.

4.3 Evaluation measures

For each class and property, the submitted lexical entries will be compared to the gold standard lexical entries along two dimensions: i) lexical precision, lexical recall and lexical F-measure, and ii) lexical accuracy.

In the first dimension, we evaluate how many of the gold standard entries for a property are in the submitted lexicon (recall), and how many of the submitted entries are among the gold standard entries (precision), where two entries count as the same lexicalization if their lemma, part of speech and sense coincide. Thus, lexical precision $P_{\text{lex}}$ and recall $R_{\text{lex}}$ for a class or property $\text{uri}$ are defined as follows:

\[
P_{\text{lex}}(\text{uri}) = \frac{|\text{entries}_{\text{submitted}}(\text{uri}) \cap \text{entries}_{\text{gold}}(\text{uri})|}{|\text{entries}_{\text{submitted}}(\text{uri})|}
\]

\[
R_{\text{lex}}(\text{uri}) = \frac{|\text{entries}_{\text{submitted}}(\text{uri}) \cap \text{entries}_{\text{gold}}(\text{uri})|}{|\text{entries}_{\text{gold}}(\text{uri})|}
\]

where $\text{entries}_{\text{submitted}}(\text{uri})$ is the set of entries for the class or property $\text{uri}$ in the submitted lexicon, while $\text{entries}_{\text{gold}}(\text{uri})$ is the set of entries for $\text{uri}$ in the gold standard lexicon. The F-measure $F_{\text{lex}}(\text{uri})$ is then computed as the harmonic mean of $P_{\text{lex}}(\text{uri})$ and $R_{\text{lex}}(\text{uri})$, as usual.

The second dimension, lexical accuracy, serves to evaluate whether the specified subcategorization frame and its arguments are correct, and whether these syntactic arguments have been mapped correctly to the semantic arguments (domain and range) of the property in question. The accuracy of a submitted lexical entry $l_{\text{submitted}}$ for a class or property $\text{uri}$ w.r.t. the corresponding gold standard entry $l_{\text{gold}}$ is therefore defined as:

\[
A_{\text{uri}}(l_{\text{submitted}}) = \text{frameEq}(l_{\text{submitted}}, l_{\text{gold}}) + \frac{|\text{args}(l_{\text{submitted}}) \cap \text{args}(l_{\text{gold}})|}{|\text{args}(l_{\text{gold}})|} + \sum_{a \in \text{args}(l_{\text{submitted}})} \frac{\text{map}(a)}{|\text{args}(l_{\text{submitted}})|}
\]

where $\text{frameEq}(l_1, l_2)$ is 1 if the subcategorization frame of $l_1$ is the same as the subcategorization frame of $l_2$, and 0 otherwise, where $\text{args}(l)$ returns the syntactic arguments of $l$'s frame, and where

\[
\text{map}(a) = \begin{cases} 1, & \text{if } a \text{ has been mapped to the correct semantic argument of the property} \\ 0, & \text{otherwise} \end{cases}
\]

When comparing the argument mapping of a submitted entry with that of a gold standard entry, we only consider the class of the argument, i.e. simply whether it is a subject or an object. This abstracts from the specific type of subject (e.g. copulative subject) and object (e.g. indirect object, prepositional object, etc.) and therefore evaluates the argument mappings independently of the correctness of the frame and frame arguments.

The lexical accuracy $A_{\text{lex}}(\text{uri})$ for a class or property $\text{uri}$ is then computed as the mean of the accuracy values of the generated lexicalizations. All measures are computed for each concept (class and property) and then averaged over all concepts.

5 Contact and troubleshooting

If you have any questions or comments, including worries about the training and test questions, trouble with the datasets, the SPARQL endpoint, or the online submission and evaluation form, please contact Christina Unger: cunger@cit-ec.uni-bielefeld.de.
{"Source-Url": "https://pub.uni-bielefeld.de/download/2643357/2643597", "len_cl100k_base": 6842, "olmocr-version": "0.1.49", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 32507, "total-output-tokens": 8158, "length": "2e12", "weborganizer": {"__label__adult": 0.0007252693176269531, "__label__art_design": 0.002635955810546875, "__label__crime_law": 0.0008916854858398438, "__label__education_jobs": 0.03497314453125, "__label__entertainment": 0.0019197463989257812, "__label__fashion_beauty": 0.0004119873046875, "__label__finance_business": 0.0011816024780273438, "__label__food_dining": 0.0006814002990722656, "__label__games": 0.0026531219482421875, "__label__hardware": 0.000950336456298828, "__label__health": 0.0008707046508789062, "__label__history": 0.0010519027709960938, "__label__home_hobbies": 0.0002968311309814453, "__label__industrial": 0.0006246566772460938, "__label__literature": 0.007373809814453125, "__label__politics": 0.0008792877197265625, "__label__religion": 0.00130462646484375, "__label__science_tech": 0.316650390625, "__label__social_life": 0.0011501312255859375, "__label__software": 0.13623046875, "__label__software_dev": 0.484619140625, "__label__sports_fitness": 0.0006761550903320312, "__label__transportation": 0.0007214546203613281, "__label__travel": 0.0004582405090332031}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27822, 0.02368]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27822, 0.37108]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27822, 0.77191]], "google_gemma-3-12b-it_contains_pii": [[0, 1198, false], [1198, 3344, null], [3344, 4733, null], [4733, 5601, null], [5601, 7399, null], [7399, 9784, null], [9784, 11646, null], [11646, 13435, null], [13435, 15586, null], [15586, 17412, null], [17412, 19542, null], [19542, 21537, null], [21537, 24186, null], [24186, 27436, null], [27436, 27536, null], [27536, 27822, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1198, true], [1198, 3344, null], [3344, 4733, null], [4733, 5601, null], [5601, 7399, null], [7399, 9784, null], [9784, 11646, null], [11646, 13435, null], [13435, 15586, null], [15586, 17412, null], [17412, 19542, null], [19542, 21537, null], [21537, 24186, null], [24186, 27436, null], [27436, 27536, null], [27536, 27822, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27822, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27822, null]], "pdf_page_numbers": [[0, 1198, 1], [1198, 3344, 2], [3344, 4733, 3], [4733, 5601, 4], [5601, 7399, 5], [7399, 9784, 6], [9784, 11646, 7], [11646, 13435, 8], [13435, 15586, 9], [15586, 17412, 10], [17412, 19542, 11], [19542, 21537, 
12], [21537, 24186, 13], [24186, 27436, 14], [27436, 27536, 15], [27536, 27822, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27822, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
8b7368839546e782ae251a6acd7c1b2e8b8b811d
FLEXIBLE SOFTWARE COMPONENT DESIGN USING A PRODUCT PLATFORM APPROACH*

Hemant Jain, Sheldon B. Lubar School of Business, University of Wisconsin - Milwaukee, Milwaukee, WI, jain@uwm.edu

Marcus A. Rothenberger, Department of MIS, University of Nevada Las Vegas, Las Vegas, NV, marcus.rothenberger@unlv.edu

Vijayan Sugumaran, Department of Decision and Information Sciences, Oakland University, Rochester, MI, sugumara@oakland.edu

**Abstract**

The concept of reusing software artifacts to improve development efficiency and software quality has been around for quite some time, yet it has focused mostly on in-house reuse, with limited success. Recently, developments in component-based software development, Web services, and service-oriented architecture have targeted inter-organizational reuse by promoting black-box reuse facilitated by standard Web service-based interface definitions. This reuse paradigm dramatically reduces the cost of component integration and maintenance, as it is no longer necessary to understand implementation details. However, it requires a very high level of standardization and clear functional definitions to facilitate retrieval through search engines. Because of this, its application has been limited to relatively small infrastructure items, such as user interfaces, printing components, and data access modules. The reuse potential of such components is high, as infrastructure functionality is needed across domains. However, without the ability to build the core of an application around reusable software components, the true value of component-based software development, which can offer lower cost, high quality, agility, and responsiveness, cannot be realized. Thus, in order to move this adoption philosophy forward, domain-specific components must be available. This research develops a method that promotes flexible component design based on a common product platform with derivative products. Following the design research method, the methodology artifact will be evaluated through an experimental evaluation and a formal assessment of its value to component-based development.

Keywords: Component design, software reuse, component-based development, design science

\* Authors contributed equally. Authorship is in alphabetical order.

**Introduction**

From the early years of information technology, software has predominantly been made to order to fit specific organizational needs. Custom-developed software is built to the requirements of the client organization. This option usually ensures the best fit of the software to the existing organizational processes; thus, a company is not required to change the way it does business. On the down side, custom software is characterized by high cost, low quality, long development times, and low agility.
Reuse of software artifacts has been frequently discussed over the past decade as a means of improving development efficiency and software quality, yet such discussion emphasized mostly in-house reuse, with the limitation of a comparably small repository (Basili et al. 1996; Poulin et al. 1993). It is only recently that the development of component standards, such as CORBA and JavaBeans, has promoted the emergence of component markets that support the notion of inter-organizational reuse. This movement has been further enhanced by the wide adoption of Web services standards and the movement toward service-oriented architecture. As opposed to the traditional reuse notion, which requires developers to modify and customize retrieved components to fit the requirements of new software projects, component-based software development limits the customization of modules to the selection of appropriate parameters. This black-box reuse approach reduces the cost of component integration and maintenance, as it is no longer necessary to understand implementation details to reuse a component (Ravichandran and Rothenberger 2003).

However, because of the need for high functionality standardization, the availability and use of black-box components have mostly been limited to infrastructure items, such as user interfaces, printing components, and data access modules (Taft 2000). The functionality of infrastructure components is highly standardized and can be clearly defined for retrieval through search engines. Further, the reuse potential of such components is very high, as infrastructure functionality is needed across domains. Nevertheless, without the ability to build the core of an application around reusable software components, the true value of component-based software development cannot be obtained. Thus, in order to move this adoption philosophy forward, domain-specific components must be available for various domains. This research investigates how domain-specific components can be structured to maximize their reuse potential.

Product platforms have achieved in the physical world what we propose in this research for software components. Platform-based components are domain-specific components that are customizable (without code changes) to various needs. Thus, this concept increases the requirements flexibility of component-based development while retaining the benefits of black-box component design. Motivated by the product platform literature, we have developed a component design methodology that draws from the areas of domain analysis, domain modeling, component-based software development, and software reuse. The design approach will be formalized and evaluated according to the design science paradigm (Hevner et al. 2004; Peffers et al. 2006).

**Background and Conceptual Development**

**Software Reuse**

Although software productivity has steadily increased, the demand for software development is still very high. To address these demands, research has been conducted to build reusable artifacts such as data objects, design patterns, and pseudocode. Reuse of such artifacts has been suggested as the only way to achieve the low-cost, high-quality, and low-development-time objectives of application development (Krueger 1992; Mili et al. 1995). Reuse involves generating new designs by combining high-level specifications and existing component artifacts (Setliff et al. 1993).
Tools and techniques are also being developed to provide support for reuse-based software design (Nierstrasz and Meijler 1995; Purao and Storey 1997). A number of reusable artifacts have been developed, such as class libraries, components (Szyperski 1998), frameworks, and patterns. Research has also been undertaken that attempts to realize the benefits of reuse for object-oriented conceptual design through the creation of tools to facilitate design and construction of new systems with reuse (Purao and Storey 1997; Sugumaran et al. 2000). Higher-level design fragments and models are also being developed (Han et al. 1999).

**Domain Analysis and Modeling**

Domain analysis is an activity similar to systems analysis, performed over an application domain. It involves analyzing existing systems in the application domain and creating a domain model that characterizes that application domain. An early definition of domain analysis and modeling provided by Neighbors is: "Domain analysis and modeling is an activity in which all systems in an application domain are generalized by means of a domain model that transcends specific applications" (Neighbors 1984). The primary objective of the domain modeling approach to software construction is to increase reuse, i.e., reuse not only of code modules but also of domain knowledge such as domain requirements, specifications, and designs. From the domain model, target systems can be generated either by tailoring the domain model, or by a combination of evolving the domain model and then tailoring it.

A domain model represents the common characteristics and variations among the existing and future members of a family of software systems in a particular application domain. A computer-based domain model captures both the static and dynamic aspects of the application domain. The static aspects include object types of the domain, attributes of those object types (Meyer 1988), and relationships among them. The dynamic properties include the operations associated with object types, the state changes of object types, and the messages passed between object types. The domain model may also include integrity constraints that govern the behavior of object types in the domain. Thus, it is a key ingredient for the creation of reusable core assets (Kang et al. 2002).

Recently, domain analysis methods such as the feature-oriented approach (Kang et al. 2002), Reuse-Driven Software Engineering Business (RSEB) (Jacobson et al. 1997), FODAcom (Vici et al. 1998), FeatuRSEB (Griss et al. 1998), and Product Line Analysis (PLA) (Chastek 2002) have been used to analyze commonality and variability among products within a product line. In particular, the feature-oriented approach has been used extensively in industry and academia since the FODA method was introduced in 1990 by the Software Engineering Institute (Clements et al. 2005). The feature-oriented approach attempts to analyze commonality and variability in terms of features, and thus a feature-based model provides a basis for developing, parameterizing, and configuring reusable assets (Kang et al. 2002). In the feature-oriented approach, during feature modeling, if there are no existing products or if the existing products do not have a specified set of features associated with them, then the features associated with each individual product must be identified and defined (Bosch 2000).
In mature and stable domains, the approach identifies the features of a given domain by analyzing the domain terminology and provides feature categories as a feature identification framework (Kang et al. 1998). Also, product features identified in the marketing and product plan (MPP) are organized into an initial feature model (Kang et al. 2002). Although the approach suggests standardizing domain terminology and clarifying domain scope before feature identification, this is difficult to accomplish in immature or emergent domains owing to the lack of domain experts and supporting material. Further, in a very new or envisioned market, such as an emerging market (Li 2002), it is practically impossible to identify any reference product that exactly matches the envisioned product line; hence, informational inputs such as domain knowledge will be very scarce for such domains. In addition, the approach is not appropriate when domain experts are not available, and features are not useful for exploring new or poorly understood system characteristics (Chastek 2002). Both FODAcom (Vici et al. 1998) and FeatuRSEB (Griss et al. 1998) have used use-case models together with feature models for commonality and variability (C&V) analysis. PLA (Chastek 2002) combines traditional object-based analysis with FODA for a product line analysis. However, these methods do not provide a systematic and concrete mechanism for feature identification, nor the rationale for features, as the feature-oriented approach does.

**Component-Based Software Development**

There is general agreement in the software industry that component-based software development (CBD) technology will bring profound changes in systems development and delivery. Numerous advantages of CBD are touted. An information system developed from reusable components is argued to be more reliable (Vitharana and Jain 2000), to increase developer productivity (Lim 1994), to reduce skill requirements (Kiely 1998), to shorten the development life cycle (Due 2000), to reduce time-to-market (Lim 1994), to increase the quality of the developed system (Lim 1994; Sprott 2000), and to cut down the development cost (Due 2000). Beyond these operational benefits, CBD has also been found to provide strategic benefits, such as the opportunity to enter new markets or the flexibility to respond to competitive forces and changing market conditions (Favaro et al. 1998). Component providers have the opportunity to enter new markets because of the potential to cross-sell components with associated functionalities. Similarly, end-user organizations have the flexibility to quickly replace components with newer ones containing additional features in order to respond to competitive forces and changing market conditions.

Component-based software development is changing the way applications are being developed and delivered to the end user. It is causing a shift in software development paradigms, particularly with the development of several component architecture standards such as CORBA, COM, and EJB (Szyperski 1998). A component is a well-defined unit of software that has a published interface and can be used in conjunction with other components to form larger units (Hopkins 2000). For example, in an auction application domain, one component captures the characteristics of a bid and its associated processes. Another component deals with transaction processing. These can be combined to form a larger component that would be a reusable artifact.
**Software Product Line**

The software product line approach is recognized as a successful method for improving reuse in software development. The idea behind the product line concept is for organizations to develop a product family from reusable core assets rather than from scratch. It differs from the platform-based component design that we are proposing in this research, as it does not address the nature of the reusable component itself. Instead, the software product line approach provides a means to select and integrate individual components to build a product line-based application. The product line approach is receiving increased attention, specifically as software engineers and developers face increasing pressure to produce software more quickly and economically.

Identifying and describing the right functionality to be encapsulated as reusable artifacts is a key challenge within the product line approach. In order to address this challenge, the key requirements for developing future products that drive the design of a product line need to be identified. Thus, a thorough requirements analysis for the product line, in which particular common and variant requirements are systematically identified and described, must be performed. Furthermore, the requirements and the commonality and variability (C&V) identified must satisfy an organization's high-level business goals, and thus requirements analysis and C&V analysis must be carried out to satisfy these goals and provide the rationale for them.

**Application Frameworks**

An application framework is a reusable design for implementing a software system that can be expressed as a set of abstract classes (Johnson and Foote 1998). These classes can be specialized to produce custom applications. An application framework captures the standard structure of an application by bundling large amounts of reusable code. Over the years, several frameworks have been developed in areas such as user interface design, graphical editors, networks, and financial applications (Fayad et al. 1999). Some examples are MacApp, Model View Controller (MVC), Apache Struts, Java Server Faces (JSF), Django, Symfony, Java Native Interface (JNI), Microsoft's Distributed Common Object Model (DCOM), and the Common Object Request Broker Architecture (CORBA).

Although many object-oriented application frameworks have been developed, they have had limited success for a number of reasons. One of the main reasons is that frameworks have a steep learning curve: developers have to understand the abstract designs of classes and how they interact with each other for different purposes. Class complexity and object collaboration complexity are often cited as the major obstacles to using application frameworks. In addition, large-scale reusable frameworks also fail due to a lack of integratability, maintainability, efficiency, and standards (Fayad and Schmidt 1997). Frameworks are generally tied to a particular language or technology, and hence components from multiple frameworks cannot be combined easily (Johnson 1997). Demeyer et al. (1997) specify the following design guidelines for "tailorable" frameworks: interoperability, distribution, and extensibility. Our product platform-based approach for component design is based on these principles and, compared to other frameworks, accommodates multiple levels of granularity. Also, our proposed approach is independent of any application framework and can be used to create a new framework in a particular application domain.
We also provide a formal specification for the platform design and plug-ins, which promotes interoperability and extensibility. Thus, our proposed methodology is flexible and agile.

**Domain-Specific Component Reuse**

In order to successfully sell domain-specific components, providers must ensure that the software components allow for sufficient flexibility to be integrated into a maximum number of applications. The most common approach to achieving this is the parameterization of components: components are developed with a set of optional functionalities and choices that can be triggered by their parameters, and the component user selects the options by setting the parameters accordingly (Barnes and Bollinger 1991). This method limits the possible applications of the component to what the component developers anticipate, as only those applications will be supported with appropriate parameters.

An alternative method may address the lack of extensibility. Reusing smaller components (with less functionality per component) can allow software developers to choose components more precisely based on the requirements (Apte et al. 1990). If a component representing a particular aspect of the desired functionality is not available, it can be developed with little effort and integrated with the remaining components that meet the requirements. Nevertheless, this approach is problematic: since the components each represent a low development effort, the leverage of each reuse instance is small. Activities such as retrieval and parameterization will use a proportionally larger share of development time, thus increasing development cost.

Component flexibility can be increased over traditional parameterized components by building a platform for each component that allows the creation of derivative components using different combinations of available plug-ins (plug-ins are lower-level component stubs that extend the functionality of a component platform or of a higher-level plug-in). Complex applications can then be built from platform-based components. This approach is an alternative to traditional component design, combining the retrievability and integration ease of parameterized components with the extensibility of very small components. Figure 1 contrasts the benefits of the three component design approaches.

**Product Platforms**

In the physical world, the modular design of product platforms has been a means to increase product variety to better meet varying customer requirements (Salvador et al. 2002). For example, Hewlett-Packard's OfficeJet platform combined the functions of previously distinct products, such as the computer printer, fax machine, scanner, and photocopier, thus meeting customer demand in a flexible manner (Meyer and Lehnerd 1997). The lessons learned from the modular development of physical products may also apply to the development of software components; as in manufacturing, modularity promotes an increased fit between software and customer specifications by enabling developers to combine and customize elements of the application, thus reducing the need for custom development based on each client's specifications. In this research, we are developing a platform-based component design method as an alternative to existing component design options.

**The Artifact**

**Platform Design through Functional Decomposition**

The major challenge in platform-based component development is to design a component platform that unifies the functional requirements of the common component user base.
As with platform development in manufacturing, the design of a platform must be guided by customer demand (Meyer and Seliger 1998). In terms of component-platform development, this means that the projected market for the component must drive its design. We use the Unified Modeling Language (UML) notation of the object-oriented design model to illustrate the development of a component platform. The proposed methodology consists of two phases: a) platform design and b) post-implementation modifications. The following subsections describe these phases in more detail.

**Platform Design**

For the design of a platform-based component, individual component models must first be created to meet the requirements of each application domain in which the platform component to be developed will be used. As a next step, commonalities between the individual designs must be transferred into the platform and plug-in design hierarchy of the platform-based component. Finally, the design hierarchy may be evaluated and fine-tuned if necessary. The overall process diagram depicting this methodology is shown in Figure 2.

Figure 2. Overall Process Diagram for the Methodology

The paper presents a scenario of a platform-based reservation component that demonstrates the transformation process from individual components to a component platform design hierarchy. Figure 3 shows the integrated reservation design hierarchy providing the functionality for reservations in different industries, developed from a set of individual component designs covering the domains of train, airline, show, and sports game reservations. A component platform can be associated with multiple levels of plug-ins; the actual number of levels for a specific design is determined by the transformation methodology and the nature of the individual designs it converts. In this example, plug-ins are provided on two levels: the first level incorporates industry segment-specific functionality for transportation and event reservations; the second level provides the lowest level of industry functionality for train, airline, show, and sports game reservations.

Figure 3. Reservation Component Platform with Industry Plug-Ins

When adding functionality by incorporating plug-ins into the overall platform design, the methodology differentiates between distinct functionalities and partially common functionalities. Distinct functionalities have no common elements across the derivative components; their implementation will result in separate classes for each plug-in. Thus, the functionality must be excluded from the platform; only the plug-ins will contain the class that relates to the desired functionality in each derivative (e.g., the Seat class in Figure 3). Partially common functionalities also vary across the derivative components; however, some aspects are common to all derivatives. The commonalities manifest themselves in the implementation of common attributes or common methods in the platform's parent class that represents the dimension. The plug-ins inherit the class properties from the platform and extend the class to match the functionalities required for each derivative component (e.g., the Transport Reservation class in Figure 3). Plug-ins connect to higher levels of the platform hierarchy using either inheritance (if the distinct class is a specialization of a common class) or association (if no commonalities with the distinct class exist).
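To make the two connection styles concrete, here is a minimal sketch of the reservation example in Python; the class and attribute names are our illustrative reading of Figure 3, not taken from the paper's UML models:

```python
class Reservation:
    """Platform class: functionality common to all derivatives."""
    def __init__(self, customer, date):
        self.customer = customer
        self.date = date

class TransportReservation(Reservation):
    """First-level plug-in: extends a partially common platform class
    via inheritance (a specialization of Reservation)."""
    def __init__(self, customer, date, origin, destination):
        super().__init__(customer, date)
        self.origin = origin
        self.destination = destination

class Seat:
    """Distinct functionality: lives only in a plug-in, not in the
    platform, and is attached by association rather than inheritance."""
    def __init__(self, number):
        self.number = number

class TrainReservation(TransportReservation):
    """Second-level plug-in for the train industry."""
    def __init__(self, customer, date, origin, destination, seat):
        super().__init__(customer, date, origin, destination)
        self.seat = seat  # association to a distinct class
```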
The transformation process from the individual component designs to the platform design hierarchy is driven by the identification and integration of commonalities that exist between the individual designs. The conversion follows a formal process that has been refined through multiple iterations of applying the methodology to different scenarios and evaluating the outcome; refinements were made after each iteration. Figure 4 depicts the formal specification of the resulting methodology. The development of a component platform and its plug-ins is a recursive process that we named "CreatePlugIn" (Figure 4). For better readability, we also provide an English language description of these steps (Figure 5).

1. \( S_C = \bigcup \{ C_{ji} \mid C_{ji} \in D_i, \; D_i \in S_D \} \)

2. \( \forall C_j' \subseteq C_j \mid C_j \in S_C : \left( \forall D_i \in S_D : C_j' \in D_i \right) \)
   where \( \lnot \exists \left( C_j'' \subseteq C_j \mid C_j'' \supset C_j' : \left( \forall D_i \in S_D : C_j'' \in D_i \right) \right) \)
   \( \Rightarrow C_j' \in P_k \land C_j' \notin S_C \land \forall D_i \in S_D : C_j' \notin D_i \)

3. Repeat until \( S_D = \{\} \)
   a. For the \( C_j' \subseteq C_j \mid C_j \in S_C \) contained in the most \( D_i \in S_D \), where \( \lnot \exists \left( C_j'' \subseteq C_j \mid C_j'' \supset C_j' \right) \) that is contained in the same \( D_i \):
      \( \Rightarrow S_D' \subseteq S_D : \left( \forall D_i \in S_D' : C_j' \in D_i \right) \land S_D := S_D \setminus S_D' \)
   b. The next Plug-In will be a child of \( P_k \)
   c. Execute CreatePlugIn(\( S_D' \))

where:

- \( S_D \): set of individual component designs
- \( D_i \): individual component design (\( D_i \in S_D \))
- \( S_C \): set of classes
- \( C_j \): class (\( C_j \in S_C \))
- \( C_{ji} \): class contained in design \( D_i \) (\( C_{ji} \in D_i \))
- \( P_k \): platform-based component plug-in (the highest-level plug-in is the component platform)

Figure 4. Formal Specification of CreatePlugIn(\( S_D \))

1. $S_C$ is the union of all classes contained in the individual designs $S_D$.
2. For all classes (or possible class generalizations) $C_j'$ that are part of the set of classes $S_C$ and occur in all individual designs, where no other class generalization exists in all individual designs that incorporates more functionality of the original class than $C_j'$ does: add $C_j'$ to the current Plug-In (or to the Platform if this is the highest level), remove $C_j'$ from the set of classes $S_C$, and remove $C_j'$ from all individual designs $D_i$.
3. Repeat until the set of designs $S_D$ is empty:
   a. Find the class (or possible class generalization) $C_j'$ that is part of the set of classes $S_C$ and contained in the largest number of individual designs $D_i$, where no other class generalization exists in the same set of individual designs that incorporates more functionality of the original class than $C_j'$ does; create a subset $S_D'$ of all designs that includes only the individual designs containing $C_j'$, and remove the designs $S_D'$ from $S_D$.
   b. The next Plug-In will be a child of the current Plug-In $P_k$.
   c. Execute the process recursively with $S_D'$ as the parameter.

Figure 5. English Language Description of $CreatePlugIn(S_D)$
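A simplified executable reading of CreatePlugIn, under the strong assumption that classes are atomic, hashable identifiers (i.e., ignoring the partial class generalizations the formal specification allows, so the "largest common generalization" check reduces to plain set intersection):

```python
def create_plugin(designs, level=0):
    """designs: dict mapping design name -> set of class names.
    Prints the platform/plug-in hierarchy; level 0 is the platform."""
    if not designs:
        return
    # Step 2: classes contained in all remaining designs go into this plug-in.
    common = set.intersection(*designs.values())
    print("  " * level + f"Plug-in level {level}: {sorted(common)}")
    remaining = {name: classes - common for name, classes in designs.items()}
    remaining = {n: c for n, c in remaining.items() if c}
    # Step 3: repeatedly pick the class shared by the most designs and
    # recurse over exactly those designs (they form the next child plug-in).
    while remaining:
        all_classes = set.union(*remaining.values())
        best = max(all_classes,
                   key=lambda c: sum(c in cls for cls in remaining.values()))
        subset = {n: cls for n, cls in remaining.items() if best in cls}
        for n in subset:
            del remaining[n]
        create_plugin(subset, level + 1)

# Toy input echoing the reservation scenario of Figure 3:
designs = {
    "train":   {"Reservation", "TransportReservation", "Seat"},
    "airline": {"Reservation", "TransportReservation", "Ticket"},
    "show":    {"Reservation", "EventReservation", "Stage"},
    "sports":  {"Reservation", "EventReservation", "Team"},
}
create_plugin(designs)
```

On this input the sketch recovers the two-level hierarchy of the example: Reservation forms the platform, TransportReservation and EventReservation form the first-level plug-ins, and the distinct classes end up in second-level plug-ins.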
**Post-Implementation Modifications**

Future derivatives of a platform can be added along the functional dimensions that were anticipated when the platform was designed. With regard to the Reservation platform example (Figure 3), this means that the set of supported industries can be expanded by adding a new industry plug-in. If the class that implements a new dimension is not part of the existing platform, functionality can also change along this dimension by adding the new class entirely to the plug-in that represents the new derivative, thus leaving the platform unchanged. Additions are only viable if the new derivative component utilizes most of the platform's functionality (in the printer context, it makes economic sense to add fax capabilities to a printer, but it does not make economic sense to add fax capabilities to a dishwasher). Further, additions should be relatively small in size compared to the existing platform (e.g., it does not make economic sense to add fax capabilities to a speaker: although the fax function could utilize a speaker, the relative size of the fax functionality compared to the speaker functionality justifies the development of a separate fax platform). In the context of the Reservation platform example, this means that different types of booking agents can be added as an afterthought if this addition does not affect the original component platform. Figure 6 shows this addition, which is possible because the different types of booking agents specialize an existing platform class without modifying the original platform component design.

**Evaluation of the Artifact**

In design research, the utility, quality, and efficacy of the artifact must be evaluated (Hevner et al. 2004). The researcher can select from a variety of evaluation methods; it is important to match the evaluation method appropriately to the artifact that is to be evaluated (Hevner et al. 2004). The design artifact in this research is the methodology, which facilitates a component design that, we claim, results in higher component flexibility and therefore agility. A formal analysis of the reservation platform scenario demonstrates the utility of the artifact. We formally evaluated whether the platform/plug-in component hierarchy that results from applying the proposed methodology to the individual component designs is equivalent to those designs. Only if the resulting platform component with all its plug-ins can be successfully decomposed into the original individual designs can the methodology be deemed lossless and correct. This formal evaluation was conducted as part of the iterative process that led to the final version of the methodology. For space considerations, the decomposition is not presented in detail; however, it is available from the authors upon request. As this functional equivalency can be formally demonstrated, the utility of the methodology is supported.

The objective of the platform component method was to improve the flexibility and agility of components. Since one platform component can incorporate a number of different domains (four domains in the scenario presented), we demonstrated that the platform component is more flexible than each of the individual components. Further, the platform component is extensible without requiring the developer to understand the implementation details of large parts of the platform. The study discussed extensions in the context of the agent scenario (Figure 6) and thereby supported the notion of agility. Thus, the scenarios presented in the study support the efficacy of the design approach. While the utility evaluation is based on a formal assessment that is rigorous, the evaluation of efficacy is descriptive, which is a valid, yet not the strongest, evaluation method (Hevner et al. 2004).
Thus, we will evaluate the methodology's efficacy and quality in an experiment involving a group of experts completing a task with and without the platform-based approach; subsequently, the experts will evaluate their experiences. By the time of the conference, we will have completed this research and will present the completed evaluations.

**Conclusion**

This study developed an innovative method for component design that promotes the development of more flexible and more agile reusable components. This approach may be able to move black-box component reuse forward, from an infrastructure-centered reuse paradigm to a method for large-scale domain-specific reuse. Thus, through the availability of more flexible and more agile components, the benefits that software developers can obtain from component-based software development may increase.

**References**
{"Source-Url": "http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1136&context=icis2006", "len_cl100k_base": 6275, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 34229, "total-output-tokens": 9281, "length": "2e12", "weborganizer": {"__label__adult": 0.0003633499145507813, "__label__art_design": 0.0004324913024902344, "__label__crime_law": 0.000270843505859375, "__label__education_jobs": 0.000896453857421875, "__label__entertainment": 5.048513412475586e-05, "__label__fashion_beauty": 0.0001424551010131836, "__label__finance_business": 0.0002186298370361328, "__label__food_dining": 0.0002751350402832031, "__label__games": 0.000423431396484375, "__label__hardware": 0.00047087669372558594, "__label__health": 0.0003266334533691406, "__label__history": 0.00017333030700683594, "__label__home_hobbies": 6.026029586791992e-05, "__label__industrial": 0.00021922588348388672, "__label__literature": 0.00020885467529296875, "__label__politics": 0.0002058744430541992, "__label__religion": 0.00034809112548828125, "__label__science_tech": 0.003124237060546875, "__label__social_life": 7.635354995727539e-05, "__label__software": 0.0034923553466796875, "__label__software_dev": 0.9873046875, "__label__sports_fitness": 0.0002601146697998047, "__label__transportation": 0.00037598609924316406, "__label__travel": 0.00016868114471435547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38508, 0.02745]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38508, 0.36927]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38508, 0.89319]], "google_gemma-3-12b-it_contains_pii": [[0, 915, false], [915, 3194, null], [3194, 7748, null], [7748, 13085, null], [13085, 17760, null], [17760, 20160, null], [20160, 22330, null], [22330, 23288, null], [23288, 23352, null], [23352, 26586, null], [26586, 29935, null], [29935, 31808, null], [31808, 35854, null], [35854, 38508, null], [38508, 38508, null]], "google_gemma-3-12b-it_is_public_document": [[0, 915, true], [915, 3194, null], [3194, 7748, null], [7748, 13085, null], [13085, 17760, null], [17760, 20160, null], [20160, 22330, null], [22330, 23288, null], [23288, 23352, null], [23352, 26586, null], [26586, 29935, null], [29935, 31808, null], [31808, 35854, null], [35854, 38508, null], [38508, 38508, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38508, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38508, null]], "pdf_page_numbers": [[0, 915, 1], [915, 3194, 2], [3194, 7748, 3], [7748, 13085, 4], [13085, 17760, 5], [17760, 20160, 6], [20160, 22330, 7], [22330, 23288, 8], [23288, 23352, 9], [23352, 26586, 10], [26586, 29935, 11], 
[29935, 31808, 12], [31808, 35854, 13], [35854, 38508, 14], [38508, 38508, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38508, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
693f1eabec5bc2900b4cf984bca6905b44967988
[REMOVED]
{"len_cl100k_base": 6097, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 28578, "total-output-tokens": 7724, "length": "2e12", "weborganizer": {"__label__adult": 0.0005121231079101562, "__label__art_design": 0.0005192756652832031, "__label__crime_law": 0.0006566047668457031, "__label__education_jobs": 0.0005946159362792969, "__label__entertainment": 0.00012993812561035156, "__label__fashion_beauty": 0.0002310276031494141, "__label__finance_business": 0.00033783912658691406, "__label__food_dining": 0.0005140304565429688, "__label__games": 0.0008106231689453125, "__label__hardware": 0.0092620849609375, "__label__health": 0.0007929801940917969, "__label__history": 0.0003952980041503906, "__label__home_hobbies": 0.000179290771484375, "__label__industrial": 0.0012493133544921875, "__label__literature": 0.0003402233123779297, "__label__politics": 0.0003888607025146485, "__label__religion": 0.0006856918334960938, "__label__science_tech": 0.269287109375, "__label__social_life": 9.918212890625e-05, "__label__software": 0.0097503662109375, "__label__software_dev": 0.701171875, "__label__sports_fitness": 0.0004677772521972656, "__label__transportation": 0.0016603469848632812, "__label__travel": 0.0002620220184326172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32692, 0.02974]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32692, 0.28001]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32692, 0.90931]], "google_gemma-3-12b-it_contains_pii": [[0, 2183, false], [2183, 5264, null], [5264, 7528, null], [7528, 9870, null], [9870, 12689, null], [12689, 14415, null], [14415, 16787, null], [16787, 18833, null], [18833, 20059, null], [20059, 21314, null], [21314, 24122, null], [24122, 27019, null], [27019, 29899, null], [29899, 32692, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2183, true], [2183, 5264, null], [5264, 7528, null], [7528, 9870, null], [9870, 12689, null], [12689, 14415, null], [14415, 16787, null], [16787, 18833, null], [18833, 20059, null], [20059, 21314, null], [21314, 24122, null], [24122, 27019, null], [27019, 29899, null], [29899, 32692, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32692, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32692, null]], "pdf_page_numbers": [[0, 2183, 1], [2183, 5264, 2], [5264, 7528, 3], [7528, 9870, 4], [9870, 12689, 5], [12689, 14415, 6], [14415, 16787, 7], [16787, 18833, 8], [18833, 20059, 9], [20059, 21314, 10], [21314, 24122, 11], [24122, 27019, 12], [27019, 29899, 13], [29899, 32692, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32692, 
0.09302]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
7209bec75ad61fb2f5c3a8f6820b88bb4880d0ae
[REMOVED]
{"Source-Url": "https://cea.hal.science/cea-03010533/file/article.pdf", "len_cl100k_base": 5489, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 29337, "total-output-tokens": 7171, "length": "2e12", "weborganizer": {"__label__adult": 0.0003731250762939453, "__label__art_design": 0.00033402442932128906, "__label__crime_law": 0.0003952980041503906, "__label__education_jobs": 0.0005464553833007812, "__label__entertainment": 9.888410568237303e-05, "__label__fashion_beauty": 0.00018930435180664065, "__label__finance_business": 0.0002601146697998047, "__label__food_dining": 0.0004279613494873047, "__label__games": 0.0006537437438964844, "__label__hardware": 0.001995086669921875, "__label__health": 0.0007038116455078125, "__label__history": 0.00034356117248535156, "__label__home_hobbies": 0.00012254714965820312, "__label__industrial": 0.0006694793701171875, "__label__literature": 0.00023543834686279297, "__label__politics": 0.0003767013549804687, "__label__religion": 0.00061798095703125, "__label__science_tech": 0.10400390625, "__label__social_life": 0.00010567903518676758, "__label__software": 0.0084381103515625, "__label__software_dev": 0.87744140625, "__label__sports_fitness": 0.00047898292541503906, "__label__transportation": 0.000820159912109375, "__label__travel": 0.0002696514129638672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31147, 0.02743]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31147, 0.16091]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31147, 0.88522]], "google_gemma-3-12b-it_contains_pii": [[0, 1157, false], [1157, 3860, null], [3860, 6869, null], [6869, 9883, null], [9883, 11919, null], [11919, 14237, null], [14237, 16644, null], [16644, 19709, null], [19709, 22575, null], [22575, 25594, null], [25594, 26666, null], [26666, 29778, null], [29778, 31147, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1157, true], [1157, 3860, null], [3860, 6869, null], [6869, 9883, null], [9883, 11919, null], [11919, 14237, null], [14237, 16644, null], [16644, 19709, null], [19709, 22575, null], [22575, 25594, null], [25594, 26666, null], [26666, 29778, null], [29778, 31147, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31147, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31147, null]], "pdf_page_numbers": [[0, 1157, 1], [1157, 3860, 2], [3860, 6869, 3], [6869, 9883, 4], [9883, 11919, 5], [11919, 14237, 6], [14237, 16644, 7], [16644, 19709, 8], [19709, 22575, 9], [22575, 25594, 10], [25594, 26666, 11], [26666, 29778, 12], [29778, 31147, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31147, 0.01935]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
41c3d1caa5a08bfbc99e7a96821b28d5a70416ab
A Generic Task Ontology for Scheduling Applications

Dnyanesh Rajpathak¹, Enrico Motta¹ and Rajkumar Roy²

¹Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK.
²Enterprise Integration, Cranfield University, Cranfield, Bedford, MK43 0AL, UK.

Abstract

An ontology can be seen as a reference model describing the entities which exist in a universe of discourse and their properties. These entities may be individuals, classes, relationships, and functions; in sum, anything that may be useful to describe specific models. In this paper we present a generic task ontology for scheduling problems. The ontology is generic in the sense that it is both domain and application independent. We refer to it as a 'task ontology' to emphasise that it describes the class of scheduling tasks, independently of the various ways by which these tasks can be solved. The proposed task ontology has been successfully validated to measure its knowledge-capturing capability. Our aim is to move beyond current brittle approaches to system development and to provide firm theoretical and engineering foundations for various classes of knowledge-based applications.

Keywords: Intelligent Scheduling, Ontologies, Knowledge Modelling, Knowledge Acquisition, Reuse.

1. Introduction

In generic terms, the scheduling task can be characterised as an assignment of time-constrained jobs to time-constrained resources within a pre-defined time framework, which represents the complete time horizon of the schedule. An admissible schedule has to satisfy a set of hard and/or soft constraints imposed on jobs or resources. The output of a scheduling task is a legal schedule in accordance with a given solution criterion (e.g. complete, admissible), as in [17,21]. If we look at scheduling as a constructive design process, its main building blocks are time-related activities [5]. These activities may differ according to the target domain and depend on the level of granularity of the application. Unfortunately, this changing nature of the target domain increases the overall cost and time required for developing the application system. In order to overcome this serious bottleneck, a need for reusable components arises in system development [12]. Ideally, we would like to have components that can efficiently be reused across wider domains and that can support both the acquisition of the relevant scheduling knowledge and the system development process.

An ontology can be viewed as a conceptual information model describing the various entities that exist in a particular domain of discourse, such as classes, relationships, and functions [19]. In this paper we present a generic task ontology for scheduling. We refer to it as a 'task ontology' to emphasise that it describes the class of scheduling tasks independently of the various ways by which these tasks can be solved. The task ontology is generic in the sense that it is both domain and application independent.
In addition to these engineering concerns, the role of the proposed task ontology is also to provide a clear specification of the class of scheduling applications. Although scheduling has been studied in detail by several authors [13,17,21], and there have been some attempts at developing ontologies for scheduling [1,10,18], these ontologies tend to be fairly coarse-grained and are sometimes committed to specific domains. Cost-related issues are usually not expressed, nor are the various preference criteria in the scheduling domain. More importantly, none of the aforementioned approaches provides the desired level of detail and formalisation. The aim of this paper is therefore to describe our initial work aimed at putting scheduling on firmer ontological foundations.

The paper is organised as follows. In the next section we present the scheduling problem. In 2.1 we describe the main concepts in the scheduling task ontology as classes along with their attributes. In 2.2 we define the main axioms in the task ontology. In section 3 we briefly compare our work with other approaches. In section 4 we describe the validation of the task ontology carried out in two domains. Finally, in section 5 we conclude the paper by reiterating the contribution of this work and highlighting some issues that require further investigation.

2. A Generic Specification of the Scheduling Task

A scheduling task can be represented as a mapping from a seven-dimensional space \( \{J, R, H_c, S_c, S_{tr}, P, C_f\} \) to the space of solutions for the schedule \( \{S_{sol_1}, \ldots, S_{sol_n}\} \). The components of the scheduling task are specified as follows.

- **J** = Jobs = a set of jobs that can be assigned to a set of resources, \( J = \{j_1, \ldots, j_n\} \);
- **R** = Resources = a set of available resources to which jobs can be assigned, \( R = \{r_1, \ldots, r_n\} \);
- **Hc** = Hard constraints = a set of hard constraints which must not be violated by the schedule solution, \( H_c = \{h_{c_1}, \ldots, h_{c_n}\} \);
- **Sc** = Soft constraints = a set of soft constraints which can be relaxed if necessary to reach the schedule solution, \( S_c = \{s_{c_1}, \ldots, s_{c_n}\} \);
- **Str** = Schedule time-range = the complete time horizon of the schedule, \( S_{tr} = \{s_{t_1}, \ldots, s_{t_n}\} \);
- **P** = Preferences = a set of preferences which can be used to define the criterion for choosing among competing schedule solutions, say \( S_1 \) and \( S_2 \); \( P = \{p_1, \ldots, p_n\} \);
- **Cf** = Cost function = a function that assigns a cost to the final schedule solution.

In accordance with the above-mentioned inputs, it is now possible to define the following types of criteria for the schedule solution.

- A schedule, say \( S_i \), is **complete** if all the jobs are assigned to the available resources by the completion of the schedule.
- A schedule is **minimally admissible** (min-a) if all the hard constraints are satisfied.
- A schedule is **maximally admissible** (max-a) if it is minimally admissible (it satisfies all the hard constraints) and satisfies all the soft constraints as well.
- A schedule is a legal **schedule-solution** \( (S_{sol}) \) if it is both **complete** and **maximally admissible**.
- A schedule solution, say \( S_{sol-d} \), is **optimal** if no other schedule solution has a lower cost, i.e. \( C_f(S_{sol-d}) \leq C_f(S_{sol-c}) \) for every other schedule solution \( S_{sol-c} \). In other words, the cost of the final schedule solution \( S_{sol-d} \), computed by applying the cost function, is at most the cost of any other schedule solution \( S_{sol-c} \).
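To make the formal specification concrete, the following is a minimal Python sketch of the task tuple and the four solution criteria. The encoding is our own illustration and not part of the ontology: names such as SchedulingTask and Assignment, and the choice of plain strings for jobs and resources, are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set, Tuple

# Illustrative encoding of the task tuple {J, R, Hc, Sc, Str, P, Cf};
# all names are ours, not part of the ontology's formal vocabulary.
Assignment = Dict[str, str]                 # maps each job to a resource
Constraint = Callable[[Assignment], bool]

@dataclass
class SchedulingTask:
    jobs: Set[str]                                      # J
    resources: Set[str]                                 # R
    hard: List[Constraint]                              # Hc
    soft: List[Constraint]                              # Sc
    time_range: Tuple[int, int]                         # Str (horizon)
    preferences: List[Callable[[Assignment], float]]    # P
    cost: Callable[[Assignment], float]                 # Cf

    def complete(self, s: Assignment) -> bool:
        # every job is assigned to one of the available resources
        return set(s) == self.jobs and set(s.values()) <= self.resources

    def min_admissible(self, s: Assignment) -> bool:
        return all(h(s) for h in self.hard)

    def max_admissible(self, s: Assignment) -> bool:
        return self.min_admissible(s) and all(c(s) for c in self.soft)

    def is_solution(self, s: Assignment) -> bool:
        # a legal schedule-solution is complete and maximally admissible
        return self.complete(s) and self.max_admissible(s)

    def is_optimal(self, s: Assignment, others: List[Assignment]) -> bool:
        # optimal: no other schedule solution has a strictly lower cost
        return self.is_solution(s) and all(
            self.cost(s) <= self.cost(o) for o in others if self.is_solution(o))
```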
Figures 1 and 2 depict the generic inputs of the scheduling task ontology, with a class diagram and a relation diagram respectively.

![Figure 1. Framework Model of the Scheduling Task Ontology.](image1.png)

![Figure 2. Relation diagram for the classes in the Task Ontology.](image2.png)

The boxes in Figure 1 indicate the classes in the task ontology, and the arrows between any two boxes represent the Subclass-of, Meta, Part-of and Is-A hierarchies accordingly. The concepts shown in bold in Figure 1 indicate the external viewpoint imposed by the task ontology. In Figures 1 and 2, the boxes with rounded corners indicate the relations and the rectangles represent the classes in the task ontology. In Figure 2, the arrows between the classes and relations represent the arguments of the relations. For example, a schedule is maximally admissible if it satisfies every hard constraint as well as every soft constraint. We take the point of view that scheduling is an assignment of jobs to resources, but the inverse of this assignment is not always true, i.e., resources may remain unassigned when a scheduling task is completed. Our task ontology is therefore based on a job-centred point of view [1,9,13,18].

2.1 The Scheduling Task Ontology

In this subsection we describe the major inputs of the task ontology depicted in Figures 1 and 2 from the knowledge modelling perspective, using the following structure. First, we give the definition of each class, and then its attributes in terms of slots (which represent binary relations) within those particular classes [3]. The slots are shown in italics in the definitions of the classes. It is important to keep in mind that all these slots are defined separately as a class or a relation in the task ontology, depending upon the requirement. Finally, we describe the main axioms developed in the task ontology. The scheduling task ontology comprises about 54 definitions, but the permitted space does not allow us to discuss all these concepts in detail. Here, we discuss the major modelling decisions taken while developing the task ontology. In doing so, we assume the existence of a time ontology and other main base ontologies [3]. The base ontologies provide the definitions for basic modelling concepts such as tasks, relations, methods, roles, numbers, sets, etc.

Class JOB
Definition. A job represents the most abstract class that involves various activity-ranges and can be assigned to a resource for its execution.
Activity-Range: this slot represents the fact that every job can have a range of activities that need to be performed in order to accomplish the job (see class activity).
Suitable-Resource-Range: this slot states that the job has a set of suitable resources to which it can be assigned for its execution.
Time-Range: this slot inherits the values of the class job-time-range, which represents the earliest and latest start and finish times of the job along with the unit of time.
In the task ontology a distinct relation is defined, called assigned-to-resource, which actually models the assignment of a job to a resource as an element of the class schedule (shown by the double arrows in Figure 2).

Class JOB-TYPE
Definition. All the instances of the class job-type are subclasses of the class job.
For instance, in a manufacturing environment, if the job is machining then the job-types are drilling, milling, etc.

Class ACTIVITY
Definition. An activity represents one of the various sets of operations for any given job. This offers the scope for a detailed breakdown of the job. It is defined as follows: \( J = \{j_1, \ldots, j_n\} \), where \( j_i = \{a_{i1}, \ldots, a_{in}\} \). For example, if the job is drilling then its set of activities could be the machine set-up, the loading, the actual drilling operation, the unloading of the job, etc.
Fixed-Duration: this slot indicates that every activity has a fixed duration for its execution. The cumulative sum of these durations represents the total duration of the job.

Class RESOURCE
Definition. A resource represents the most abstract class to which jobs can be assigned for their execution.
Handles-Job-Type: the purpose of this slot is twofold. First, it explicitly represents the type of job(s) that the resource can handle. Second, by giving the cardinality value, it states that the resource is capable of handling n jobs \( \{j_1, \ldots, j_n\} \), provided that these jobs adhere to the resource-availability axiom (see the resource-availability axiom at the end of this section).
Available-Duration: this indicates the duration for which the resource is available to perform the assigned job. In order to maintain the consistency of this duration, a relation is established between the job duration and the available duration of the resource. This relation imposes the constraint that the duration of the job must be less than the available duration of the resource.
Competence: a qualitative measure (yes/no) of the resource competence, which shows whether the resource is competent to execute the assigned job.

Class RESOURCE-TYPE
Definition. All the instances of the class resource-type are subclasses of the class resource, e.g. type of machine, type of transport vehicle, etc.

Class CONSTRAINT
Definition. The class constraint has the same definition for both hard and soft constraints, which are modelled as distinct subclasses of the class constraint. The hard constraints must not be violated under any circumstances, whereas the soft constraints should be satisfied, where possible, by the completion time of the schedule. For example, the due-date (soft) constraint often needs to be relaxed for a couple of jobs due to the limited capacity of the production activity [21]. Both kinds of constraints are applied to a job or a resource through the class schedule, which helps to satisfy both the minimal and the maximal admissibility conditions of a solution.
Has-Expression: such an expression has a number of advantages. First, it allows us to reason about the hard and soft constraints and to attach properties to them. It also specialises the constraints according to specific classes of scheduling applications. In particular, this expression is parameterised in terms of job-resource pairs, i.e. a job-assignment for a class schedule.
Applicability-Condition: this condition gives us the scope to maintain the truth status of the class schedule. It validates whether the hard and soft constraints associated with jobs or resources are satisfied by the schedule solution. The above-mentioned has-expression, which states that the job has to satisfy both kinds of constraints, is used for satisfying the legal schedule-solution condition.
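The job, activity and resource classes, together with the duration-consistency relation between a job and a resource, translate naturally into plain data structures. The following Python fragment is a sketch under our own assumptions; the slot names mirror those above, but the concrete types (integer durations, string identifiers) are illustrative.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Activity:
    name: str
    fixed_duration: int                 # slot Fixed-Duration

@dataclass
class Job:
    name: str
    activities: List[Activity]          # slot Activity-Range
    suitable_resources: Set[str]        # slot Suitable-Resource-Range

    def total_duration(self) -> int:
        # the cumulative duration of all activities is the job duration
        return sum(a.fixed_duration for a in self.activities)

@dataclass
class Resource:
    name: str
    handles_job_types: Set[str]         # slot Handles-Job-Type
    available_duration: int             # slot Available-Duration
    competent: bool = True              # slot Competence (yes/no)

def duration_consistent(job: Job, resource: Resource) -> bool:
    # the relation between job duration and resource availability:
    # the job must fit within the resource's available duration
    return job.total_duration() < resource.available_duration
```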
Class JOB-TIME-RANGE
Definition. This class indicates the complete time range of an individual job. It is specified as a start and end time for a particular job in terms of the following slots: the earliest start-time, earliest end-time, latest start-time and latest end-time of the job, along with the unit-of-time, which indicates the unit in which time is expressed.
Earliest-Start-Time: (a time-point) this shows how early a particular job can start.
Latest-Start-Time: (a time-point) this shows how late a particular job can start and still not violate the given time-range of the schedule.
Earliest-End-Time: (a time-point) this shows how early a particular job can finish.
Latest-End-Time: (a time-point) this shows how late a particular job can finish.
Unit-of-Time: this simply indicates the unit in which the time is specified, e.g. second, hour, etc.
We take the point of view that if the earliest and latest start-times and end-times are not mentioned explicitly in the problem, then the plain start-time and end-time are used for representing the allowed time-range of the job.

Class SCHEDULE-TIME-RANGE
Definition. This time-range represents the start and end time of the schedule, i.e. the total time horizon for which the schedule is constructed.
Start-Time: the start time of the schedule.
End-Time: the end time of the schedule.
Unit-of-Time: the unit in which the time is specified.
In the task ontology a separate relation is defined, called time-range-between-job-and-schedule. It is a binary relation between the time-range of a job and that of the schedule. This relation imposes the constraint that the start-time of the first job must be greater than or equal to the start-time of the schedule, and the end-time of the last job must be less than or equal to the end-time of the schedule. Hence, it prevents jobs from overshooting the complete time horizon of the schedule. For example, if the schedule starts at 9.00am, the start-time of the first job to be scheduled is 9.00am or later. Similarly, if the schedule finishes at 6.00pm, the end-time of the last job is 6.00pm or earlier.

Class PREFERENCE
Definition. This class is represented by a prefer-expression. Such an expression helps in ranking the various schedule solutions according to some criterion. It is important to keep in mind that the difference between soft constraints and preferences is conceptual rather than formal. A preference is expressed by means of a prefer relation: a binary relation that defines a partial-order preference over any two schedules, say \( S_1 \) and \( S_2 \), depending upon the real-life preferences. In our task ontology we choose the optimum schedule based on cost-specific preferences in scheduling. This allows us to reflect the impact of various real-life preferences from a scheduling environment, such as missing a deadline or resource usage, on the cost of the final schedule. The preference criteria contribute to the cost-related axioms when calculating the cost of the final schedule solution.

**Class COST-FUNCTION**
Definition. This function simply calculates the cost of a schedule in terms of the various preferences. It also provides the global criterion for ranking the different schedule solutions.
*Domain / Range:* as depicted in Figure 1, the domain of the cost-function is schedule and its range is cost. The cost is represented as a set of either real numbers or vectors. The cost-function is constructed by subsuming the preferences (see axiom definitions 3 and 4).
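The time-range classes and the time-range-between-job-and-schedule relation amount to an interval-containment check. Below is a minimal sketch, assuming integer time-points expressed in a common unit; the class and function names are ours, not the ontology's.

```python
from dataclasses import dataclass

@dataclass
class JobTimeRange:
    earliest_start: int
    latest_start: int
    earliest_end: int
    latest_end: int
    unit: str = "hour"

@dataclass
class ScheduleTimeRange:
    start: int
    end: int
    unit: str = "hour"

def within_schedule(job: JobTimeRange, sched: ScheduleTimeRange) -> bool:
    # a job may not start before the schedule starts,
    # nor end after the schedule ends
    return sched.start <= job.earliest_start and job.latest_end <= sched.end
```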
**Class SCHEDULE**
Definition. As indicated in Figure 1, a schedule is represented as a set of job-assignment pairs. The set job-assignment is represented in terms of a set-membership relation, which is true for any element of the set job-assignment and false for any other tuple. The set-membership relation is modelled by using the following membership test.
*Membership-Test:* the schedule-membership test of the class schedule is a binary relation between a class job and a resource, and it is true for pairs of the form \( (?job . ?resource) \), i.e. a job-assignment in a schedule. The domain of the schedule-membership relation is the job to be assigned and its range is the resource to which it can be assigned. A class job-assignment is used to model the pairs of the form \( (?job . ?resource) \). Finally, the class job-assignment is used for satisfying the sufficient and necessary (IFF) condition in the class schedule, as indicated in Figure 1 by the Is-A-set-of-schedule arrow.

**2.2 Axioms in the Task Ontology**

In the task ontology four axioms have been defined, which ensure the legality of the scheduling task specification under any circumstances.

1. *Resource-Availability:* this axiom states that the same resource cannot be assigned to two different sets of jobs if their time-ranges overlap with each other. As resources are assigned to a specific set of jobs in a schedule, they become unavailable for other sets of jobs, and other relevant time periods must be generated and associated with these resources before assigning the next set of jobs.
2. *Constraints-Are-Either-Hard-Or-Soft:* this axiom states that the hard and soft constraints are exhaustive subclasses of the class constraint. This gives us the scope to use both of these constraints more efficiently while satisfying the minimal and maximal admissibility conditions of a schedule solution.
3. *Cost-Subsumes-Preferences:* this axiom states that the cost-function that computes the cost of the schedule subsumes each of the preferences, say \( \{p_{r_1}, \ldots, p_{r_n}\} \), in order to give the cost of a schedule according to preference-specific criteria. In other words, the cost-function must be constructed by combining the preference-specific cost criteria. The main issue here is not so much knowledge acquisition as specifying the preference-specific criteria and translating their effect onto the cost of the final schedule.
4. *Cost-Preference-Consistency:* this axiom states that the cost-function should not contradict any partial order expressed by the preference class. Also, the order over any two schedules for selecting the preferred schedule solution must be consistent with that of the cost-function.
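The resource-availability axiom amounts to checking that no resource is given two jobs with overlapping time ranges. Below is a sketch under the same assumptions as before; the representation of assignments as (resource, start, end) triples is ours, introduced only for illustration.

```python
def overlaps(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    # two half-open intervals [start, end) overlap iff each one
    # starts before the other ends
    return a_start < b_end and b_start < a_end

def resource_availability(assignments) -> bool:
    """assignments: iterable of (resource, start, end) triples.
    Returns False as soon as one resource serves two jobs whose
    time ranges overlap, True otherwise."""
    booked = {}                                  # resource -> [(start, end)]
    for resource, start, end in assignments:
        for s, e in booked.get(resource, []):
            if overlaps(start, end, s, e):
                return False
        booked.setdefault(resource, []).append((start, end))
    return True
```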
3. Related Work

Here, we compare our work with three other scheduling task ontologies and explain the major differences between them and our work. The OZONE ontology [18] also provides a generic perspective for building scheduling systems. There are some major differences between our work and OZONE. OZONE takes into account external environmental factors in scheduling, such as Demand, Product, etc., whereas we are mainly interested in the 'core issues' involved in building scheduling systems. More importantly, OZONE gives no indication about the cost and preference related issues. Finally, ours is an operational task ontology, formally specified using a modelling language, in contrast to the OZONE one.

The CommonKADS ontology [1] and the Job Assignment ontology [10] describe the modelling behaviour of the scheduling task. The fundamental difference between these two approaches and ours is the level of granularity: their approaches are characterised at a very abstract level. All the main concepts in these ontologies are informally illustrated at some length, but their definitions are not detailed. For instance, in CommonKADS the job-assignment structure is simply characterised as a set of job-assignment tuples, which can obstruct the expressiveness available to the user. In ours, it is modelled through the class schedule, which provides better control over the behaviour of both jobs and resources. In CommonKADS there is no clear indication about cost, while in the Job Assignment ontology both the cost and the preference related issues are missing. In contrast to all three ontologies, the main purpose of our ontology is to provide not only a conceptual framework but also a practical reusable resource for modelling scheduling applications.

4. Validation

To evaluate the strength of our task ontology we have tested it on two different domains: the office-allocation and the satellite-scheduling problems. In the office-allocation problem there are a number of students (jobs) that need to be allocated to given rooms (resources), which was one of the main hard constraints in the problem. Each student has a number of activities along with the duration of each activity, and each room is available only for a certain period of time. The important preference governing the usage of specific rooms is that research students can share a double-size room if a single-size room is not available. Using preferences in this fashion gave us more flexibility in using the available resources efficiently, instead of under-using them. Also, the students could stay in a room only for a certain period of time without violating the available duration of the room. The final schedule produced by using the task ontology was of the form of pairs {Student, Room}, maintaining all the constraints, time-ranges and preferences.

The satellite-scheduling problem was chosen mainly because one application hardly confirms the generality of the task ontology. In this problem there were three satellites that communicate with the available antennas, such as a low-range antenna, a wide-range antenna, and a meteorological antenna. The main hard constraints take various forms: 1) no two satellites can communicate with the same type of antenna if their time ranges overlap with each other; 2) every satellite must have at least four communication slots, and the gap between any two communications with the same type of antenna must be two hours; 3) antennas are limited resources and can communicate with the assigned satellites for 15 minutes only. The final schedule produced by maintaining all the constraints and the time-ranges is of the form {Satellite . Antenna}, where the satellite is a job and the antenna is a resource to which it is assigned.

In the office-allocation problem, the main addition was the time element for the availability of the rooms, which is not considered by the existing ontologies. Additionally, in our task ontology the jobs are broken down to a more detailed level by specifying the number of activities that can be involved in a particular job. In contrast to our approach, the jobs are treated as an abstract class without further breakdown in [1], [10].
In our view, the task ontology provides the desired level of flexibility from a representational perspective. Even though these two application domains lie at opposite ends of the application spectrum, one being a resource-allocation problem and the other a space application, both were successfully modelled using the task ontology.

5. Conclusion and Future Work

The proposed task ontology can now be seen as a knowledge-capturing tool for various domains. This satisfies the important reusability aspect discussed in section 1; the reusability was empirically tested as discussed in section 4. The given ontological framework provides the fairly fine-grained structure that is needed to build scheduling systems. This can help users express their viewpoint on a particular scenario more clearly. The cost-related axioms ensure that an optimal solution is constructed by subsuming the various preferences in the scheduling task specification. The conflict between various jobs over the usage of the same resource, depending upon their time-range overlap, is tackled by the resource-availability axiom. As discussed in section 1, this task ontology provides firm theoretical and engineering foundations for various classes of knowledge-based applications. In the future we plan to use this task ontology as a major building block for constructing generic problem-solvers for understanding the space of scheduling behaviour. In order to accomplish this whole process successfully, the task ontology can be seen as an initial building block.

References
{"Source-Url": "http://oro.open.ac.uk/23737/1/generictaskontology.pdf", "len_cl100k_base": 5381, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23977, "total-output-tokens": 6971, "length": "2e12", "weborganizer": {"__label__adult": 0.00035381317138671875, "__label__art_design": 0.0008707046508789062, "__label__crime_law": 0.0004954338073730469, "__label__education_jobs": 0.00926971435546875, "__label__entertainment": 0.0001659393310546875, "__label__fashion_beauty": 0.00030159950256347656, "__label__finance_business": 0.003376007080078125, "__label__food_dining": 0.0004925727844238281, "__label__games": 0.000988006591796875, "__label__hardware": 0.0009975433349609375, "__label__health": 0.0008716583251953125, "__label__history": 0.0005168914794921875, "__label__home_hobbies": 0.0003223419189453125, "__label__industrial": 0.00244903564453125, "__label__literature": 0.000766754150390625, "__label__politics": 0.0004398822784423828, "__label__religion": 0.000530242919921875, "__label__science_tech": 0.376953125, "__label__social_life": 0.0003466606140136719, "__label__software": 0.092041015625, "__label__software_dev": 0.50537109375, "__label__sports_fitness": 0.0003209114074707031, "__label__transportation": 0.0012979507446289062, "__label__travel": 0.0002779960632324219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29727, 0.02119]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29727, 0.52183]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29727, 0.91575]], "google_gemma-3-12b-it_contains_pii": [[0, 748, false], [748, 5094, null], [5094, 7927, null], [7927, 11952, null], [11952, 16422, null], [16422, 20875, null], [20875, 25672, null], [25672, 29727, null]], "google_gemma-3-12b-it_is_public_document": [[0, 748, true], [748, 5094, null], [5094, 7927, null], [7927, 11952, null], [11952, 16422, null], [16422, 20875, null], [20875, 25672, null], [25672, 29727, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29727, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29727, null]], "pdf_page_numbers": [[0, 748, 1], [748, 5094, 2], [5094, 7927, 3], [7927, 11952, 4], [11952, 16422, 5], [16422, 20875, 6], [20875, 25672, 7], [25672, 29727, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29727, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
6016bd3b1a1783db29b71ec6cb7f14ed99a2b712
Cycles Assessment with CycleTable

Jannik Laval, Simon Denier, Stéphane Ducasse
RMoD Team, INRIA - Lille Nord Europe - USTL - CNRS UMR 8022, Lille, France
{jannik.laval, simon.denier, stephane.ducasse}@inria.fr

Abstract—Understanding the package organization of a large application is a challenging and critical task, since it allows developers to better maintain the application. Several approaches show software structure in different ways; fewer show modularity issues at the package level. We focus on modularity issues due to cyclic dependencies between packages. Most approaches detect Strongly Connected Components (SCC) in a graph of dependencies. However, SCC detection does not allow one to easily understand and remove cyclic dependencies in legacy software displaying dozens of packages all dependent on each other. This paper presents i) a heuristic to focus on shared dependencies between cycles in an SCC and ii) CycleTable, a visualization showing interesting dependencies for efficiently removing cycles in the system. This visualization is completed with enriched cells, small views displaying the internals of a dependency [LDDB09]. We performed i) a case study which shows that the shared-dependency heuristic highlights dependencies to be removed, and ii) a comparative study which shows that CycleTable is more useful for the task of breaking cycles in an SCC than a normal node-link representation.

Keywords—software visualization; reengineering; cycle; package; dependency

Note for the reader: this paper makes heavy use of colors in the figures. Please obtain and read a colored version of this paper to better understand the ideas presented here.

I. INTRODUCTION

It is frequent for large applications to be structured over many packages. Packages are units of reuse and deployment: a package is built, tested, and released as a whole as soon as one of its classes is changed or used elsewhere [Mar00]. Hence modularity is as important at the package level as at the class level (if not more). Modularity implies that releasing a new package will only impact the dependent packages in the building chain. Cycles between packages have a high impact on the modularity of the application. Indeed, a cycle in the package dependency graph requires all packages in the cycle to be built, tested, and released together, as they depend on each other. Martin [Mar00] proposes the Acyclic Dependencies Principle (ADP), which states that there should not be any cyclic dependency between packages. While it is easy to detect and correct cyclic dependencies as soon as they arise, the problem is more difficult in legacy software where no cycle assessment has been performed along the way.
In such software, dependencies often form one single Strongly Connected Component (SCC) where all packages depend on each other. It is cumbersome to understand the interweaving of dependencies and difficult to devise an efficient plan for breaking cycles.

In a previous work [LDDB09], we proposed eDSM, which enhances the Dependency Structural Matrix (DSM) for a better understanding of cycles at the package level. eDSM shows Strongly Connected Components and highlights cycles between two packages in the SCC. However, eDSM is not adapted when one cycle involves more than two packages. We devise a new approach based on the decomposition of one SCC into a set of "minimal cycles". Minimal cycles are simple cycles containing few nodes and are thus easy to understand and fix. Together, the set of minimal cycles covers all dependencies in the SCC and allows the engineer to devise a minimal plan to remove all cycles in the system.

In this paper, we present a new heuristic to break cycles, named "shared dependencies", and a new visualization, called CycleTable, entirely dedicated to cyclic dependency assessment. The heuristic of shared dependencies comes from the decomposition of the SCC into minimal cycles. Often, minimal cycles are intertwined in such a way that one dependency is shared by several cycles. Removing such a shared dependency breaks all the involved cycles. The CycleTable layout displays all cycles at once and shows how they are intertwined through one or more shared dependencies. CycleTable combines this layout with the enriched cells (eCells) of eDSM [LDDB09] to present the details of dependencies at class level, which allows the engineer to assess the weight of each dependency. This work is implemented on top of the Moose software analysis platform.

Section II introduces the background and the challenges of cycle analysis with the traditional node-link representation of graphs and with DSM. Sections III and IV explain the CycleTable layout and its usage. Section V presents enriched cells in CycleTable and discusses, on a sample case, the criteria for breaking cycles as highlighted by the visualization. Section VI presents validations based on a case study and a comparative study. Section VII lists related work and Section VIII concludes the paper.

II. CYCLE UNDERSTANDING PROBLEMS

In this section, we present important points related to cycle understanding and existing methods to address them.

A. Definitions

Definition 1: A Strongly Connected Component (SCC) is a maximal set of nodes (here, packages) that all depend on each other. In Figure 1, all nodes are in a single SCC.

Definition 2: A cycle is a circular dependency between two or more packages. We distinguish two types of cycles:
- **direct cycle**: a cycle between two packages. In Figure 1, C and D are in a direct cycle because there is one dependency from C to D, and one dependency from D to C.
- **indirect cycle**: a cycle between more than two packages. In Figure 1, A, B and C are in an indirect cycle. A, B and E are also in an indirect cycle.

Definition 3: A minimal cycle is a cycle with no node (here, no package) appearing more than once when listing the sequence of nodes in the cycle. In Figure 2, A-B-E and A-B-C are two different minimal cycles, but A-B-C-D-C is not, because C is present twice. A-B-C-D-C can be reconstructed from the two minimal cycles A-B-C and C-D.

Definition 4: A shared dependency is a dependency present in multiple minimal cycles. In Figure 2, the edge between A and B is shared by the two minimal cycles A-B-E and A-B-C.
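Definitions 3 and 4 correspond to what graph libraries call elementary cycles. As a hedged sketch, using the third-party networkx library and assuming the Figure 1 graph has exactly the edges listed below, minimal cycles can be enumerated with Johnson's algorithm and shared dependencies found by counting how many cycles each edge occurs in:

```python
from collections import Counter
import networkx as nx  # third-party; pip install networkx

def shared_dependencies(edges):
    """Map each edge to the number of minimal (elementary) cycles it
    appears in, keeping only edges shared by more than one cycle."""
    graph = nx.DiGraph(edges)
    counts = Counter()
    for cycle in nx.simple_cycles(graph):       # Johnson's algorithm
        # pair consecutive nodes, closing the cycle back to its start
        for u, v in zip(cycle, cycle[1:] + cycle[:1]):
            counts[(u, v)] += 1
    return {edge: n for edge, n in counts.items() if n > 1}

# Our reading of the Figure 1 sample graph (edge list assumed):
edges = [("A", "B"), ("B", "C"), ("C", "A"),
         ("B", "E"), ("E", "A"), ("C", "D"), ("D", "C")]
print(shared_dependencies(edges))   # {('A', 'B'): 2}, shared by A-B-C and A-B-E
```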
B. Feedback Arc Set

In graph theory, a feedback arc set is a collection of edges whose removal yields an acyclic graph. The minimum feedback arc set is the smallest such collection. This approach could in principle produce good results on package dependencies, because it does not break the structure much. In practice, the method is unusable for two important reasons:
- computing a minimum feedback arc set is an NP-complete problem (shown APX-hard by Kann [Kan92]);
- it does not take into account the semantics of the software structure: an edge set that is optimal for the graph is not necessarily a realistic change at the software level.

C. Cycle Visualization

1) Graph Visualization: Figure 1 shows a sample graph with five nodes and three minimal cycles. Notice that the cycles A-B-C and A-B-E share a common dependency (in bold) from A to B. This shared dependency is interesting to spot, since it joins two cycles and removing it would break both of them. Graph layouts offer an intuitive representation of graphs, and some handle cyclic graphs better than others. On large graphs, complex layouts may reduce the clutter, but this is often not simple to achieve.

2) DSM and eDSM Visualization: DSM (Dependency Structural Matrix) provides a good approach for viewing software dependencies [Ste81], [SGCH01], [LF05], [SGS+05], [SJSJ05] and particularly cycles [SJSJ05]. It provides a dependency-centric view, possibly associated with colors to perceive some values more rapidly [HBO10]. We use DSM to see direct cycles: a direct cycle is displayed in red, and the two cells representing the two dependencies of the direct cycle are symmetric along the diagonal [SJSJ05]. Seeing indirect cycles is more difficult, as the visualization is not adapted to them. The main reason is that it is hard to read an indirect cycle in the matrix, i.e., to follow the sequence of cells representing the sequence of dependencies in the cycle. The order can appear quite arbitrary, as one has to jump between different columns and rows (this problem does not exist with direct cycles, as there are only two cells involved, mirroring each other along the diagonal). The cycle A-B-E, composed of the three dependencies A>B, B>E and E>A, has been circled in Figure 3 to show the complexity of reading indirect cycles intertwined with direct cycles. In Figure 3, the whole matrix displays a pale blue background, indicating that A, B, C, D, and E are in the same SCC. We can see the direct cycle between C and D (in red) and, in yellow, the other dependencies in the SCC.

We proposed eDSM [LDDB09], an improvement of DSM which shows the relationships between classes in package dependencies. It shows all classes involved in a dependency and which types of dependency exist, providing a good understanding of the dependency and support for breaking it when necessary. While eDSM allows us to analyze direct cycles comfortably, it does not address the problem of the indirect cycles left over after the removal of direct cycles.

D. Lack of Solutions

In this section, we presented the problem of understanding and breaking cycles and explained why existing approaches are not up to the task. Solving cycles in legacy systems with several packages and large SCCs is difficult. Feedback arc sets are not necessarily adapted. Node-link representations become unreadable with a large number of packages and dependencies crossing each other. DSM does not provide enough information about indirect cycles.
Based on our experience, we propose to focus on shared dependencies in order to efficiently understand and break cycles. The next section shows how this heuristic is embodied and used in the CycleTable visualization.

III. CYCLETABLE

During our experiments with eDSM, we noted that a dependency can be part of multiple cycles. These "shared" dependencies should be highlighted because removing them makes all the involved cycles disappear. Our intuition is that the more shared a dependency is, the more likely it is unwanted in the architecture and should be removed. We propose a visualization to help reengineers identify dependencies involved in cycles and to highlight shared dependencies. This visualization shows all minimal cycles, ordered by shared dependencies.

A. CycleTable in a Nutshell

We designed CycleTable with the purpose of visualizing intertwined cycles. CycleTable is a rectangular matrix where packages are placed in rows and cycles in columns. CycleTable (i) shows each minimal cycle independently in a column; (ii) highlights shared dependencies between minimal cycles; and (iii) uses context-displaying cells to provide information about dependency internals, enabling small multiples and micro-macro reading [Tuf97], i.e., variations of the same design.

![Figure 3. DSM corresponding to the graph of Figure 1. Each cell represents a dependency.](image1)

![Figure 4. CycleTable for Figure 1 sample graph.](image2)

C. Cycle Sequence

The cycle sequence represents a relative order between dependencies. This number is sometimes necessary to retrieve the exact order of dependencies in a cycle. Let us take the example of cycle A-B-E. In Figure 4, the first relative dependency is A>B (number 1 in the top-left corner). The second dependency of the cycle is B>E (number 2 in the top-left corner). The third and last dependency of the cycle is E>A (number 3 in the top-left corner). In this particular case the sequence is not strictly needed, as it follows the top-down order. To understand the usefulness of this information, Figure 5 provides a real example: the 13th cycle cannot be read from top to bottom. As the dependencies are not in top-down order, the sequence numbers are needed; the cycle should be read from number 1 to number 5: 1: FxExtension>MsFinder, 2: MsFinder>MsWizard, 3: MsWizard>MsGene, 4: MsGene>MsCore and 5: MsCore>FxExtension.
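Since a package occurs at most once in a minimal cycle, each (package, cycle) pair holds at most one outgoing dependency, which is what makes the matrix layout work. The following simplified Python sketch builds such a matrix (ignoring the coloring of shared dependencies and the ordering heuristic, and using function and variable names of our own):

```python
def cycle_table(cycles):
    """cycles: list of minimal cycles, each a list of package names.
    Returns the row order and a row -> list-of-cells mapping, where a
    cell is (sequence, source, target) or None."""
    packages = sorted({p for cycle in cycles for p in cycle})
    table = {p: [None] * len(cycles) for p in packages}
    for col, cycle in enumerate(cycles):
        # consecutive pairs, closing the cycle back to its first node
        for seq, (u, v) in enumerate(zip(cycle, cycle[1:] + cycle[:1]), 1):
            table[u][col] = (seq, u, v)         # rendered as "seq: u>v"
    return packages, table

rows, table = cycle_table([["A", "B", "C"], ["A", "B", "E"], ["C", "D"]])
for package in rows:
    print(package, table[package])
```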
IV. READING A CYCLETABLE

Figure 5 shows a sample CycleTable with 9 packages involved in 15 minimal cycles.

A. Reading a Package

There are three visualization patterns for a package:
- There is a single color in the row: the package has one shared dependency to another package, but it is involved in multiple cycles. For example, in Figure 5 the package MsCore (row 2) has one shared dependency to FxExtension, and this dependency is shared by all cycles displayed in this CycleTable.
- There are white cells in the row: a white cell is a non-shared dependency. The package is involved in multiple cycles through several distinct dependencies. In Figure 5, the package MsWizard (last row) has three different dependencies involved in three different cycles; none of them is shared.
- There are multiple colors in the row: there are multiple cycles with multiple shared dependencies. This is the most common pattern. The goal is then to look for the most frequent color. For example, in Figure 5 the row MsMont has six cells: four cells are white and two cells are green. The two green cells are the same dependency, while the white cells represent four different dependencies. So MsMont has five different dependencies, involved in six cycles.

B. Reading a Cycle

A column represents a cycle. The cycle length is displayed at the bottom of the table. A cycle with colored cells has dependencies shared with other cycles. For example, in Figure 5 the 9th cycle, between MsCore, MsCore and FxExtensions, has two shared dependencies (red and blue) and one non-shared dependency.

C. Reading Colors

The more cells share the same color, the more cycles that dependency is involved in, and the more interesting it becomes for cycle removal. We do not claim that such a dependency must be removed, but when a shared dependency is removed, all the cycles involving it are broken. For example, in Figure 5, if the blue dependency from MsCore to FxExtension could be removed, all the presented cycles would be removed.

V. CYCLETABLE WITH ENRICHED CELLS

A package is a complex structure containing multiple classes in relation with other classes inside and outside the package. Showing the class-level details of a dependency from one package to another is also important for understanding the dependency and assessing its weight. We use a small view in each cell (named eCell, for enriched cell). A first version was proposed in [LDDB09] and has been adapted to CycleTable. Figure 7 shows how each eCell provides a closed context for understanding each dependency separately, yet allows one to compare the complexity of dependencies with each other. As eCells are not the subject of this paper, this section only summarizes the behavior available in cells.

A. Overall Structure of an Enriched Cell

An enriched cell displays all dependencies at class level from a source package to a target package. It is composed of three parts (see Figure 6):
- At the bottom is a colored frame, which identifies a shared dependency by its color, with a number for sequence identification.
- The top row gives an overview of the strength and nature of the dependencies between the classes of the two involved packages. It shows the total number of dependencies (Tot) in black, inheritance dependencies (Inh) in blue, references to classes (Ref) in red, invocations (Msg) in green, and class extensions (Ext) made by the source package to the target one in gray.¹
- The two large boxes in the middle detail the class dependencies going from the top box to the bottom box, i.e., from the source package to the target package. Each box contains squares that represent the involved classes: referencing classes in the source package and referenced classes in the target package. Dependencies between squares link each source class (in the top box) to its target classes (in the bottom box) (Figure 6).

¹ A class extension is a method defined in a package, for which the class is defined in a different package [BDN05]. Class extensions exist in Smalltalk, CLOS, Ruby, Python, Objective-C and C#. They offer a convenient way to incrementally modify existing classes when subclassing is inappropriate. They support the layering of applications by grouping a package with its extensions to other packages. AspectJ inter-type declarations offer a similar mechanism.

![Figure 5. A subset of Moose in CycleTable with simple cells.](image)

**Figure 6.** Enriched cell structural information.

B. Breaking Cycles with an Enriched CycleTable

Figure 7 shows a CycleTable with four packages of the Moose core: FxCore, MsCore, FxExtensions and MsFinder.
Six minimal cycles are immediately visible. It also appears that three dependencies are each involved in multiple cycles (with red, blue, and orange frames). An important asset of CycleTable is that it does not focus on a single solution for breaking cycles. Rather, it highlights different options, as there are many ways to resolve such cycles. Only the reengineer can select what he thinks is the best solution. We now discuss how CycleTable allows one to consider solutions for solving the cycles in Figure 7.

The first point to consider in CycleTable is the notion of shared dependencies, the number of cycles involved, and their weight. For example, the red cell linking FxCore to MsCore (first row) is in two indirect cycles and one direct cycle. It has a weight of two dependencies and involves four classes (two in each package). But one dependency is an inheritance, which can require some work to remove. Finally, from a semantic point of view, MsCore is at the root of many functions in the system, so it seems normal to have such dependencies from FxCore.

Instead, we turn our focus to the blue cells (named A in Figure 7), linking MsCore to FxExtensions. This dependency has a weight of five dependencies and involves two classes. From a semantic point of view, FxExtensions represents extended functionalities of the system, so the dependency from MsCore seems misplaced: it is just a single method referencing a class in FxExtensions. Moving the method to the package FxExtensions is enough in this case to remove the dependency. This single action breaks four cycles.

Two direct cycles remain: (FxCore - MsCore), named B in Figure 7, and (FxExtensions - MsFinder), named C in Figure 7. Cycle C has a dependency shared with previously fixed cycles (the yellow dependency) and is small (two internal dependencies). But the other dependency is also made of two internal dependencies, so the situation is balanced. In this case, the reengineer has to rely first on his knowledge of the system architecture to detect the improper dependency (FxExtensions>MsFinder). CycleTable is still useful for exploring the involved classes and methods. We assessed before that the dependency from FxCore to MsCore is acceptable. Hence, the dependency from MsCore to FxCore should be removed to resolve the cycle labeled B (Figure 7). As in the first case, a single method making a reference was misplaced in the package MsCore and should become a class extension.

VI. VALIDATION

We performed two studies to validate our approach. First, we show in a case study that unexpected dependencies in the architecture, which should be removed, often reveal themselves as shared dependencies and are given the primary focus in CycleTable. Second, we perform a comparative study of CycleTable against a normal node-link visualization to validate the efficiency of CycleTable for understanding and fixing large sets of intertwined cycles.

A. Case Study on Unexpected Dependencies

1) Protocol: The case study was realized on the core of Moose version 4beta4 (33 packages). The rest of Moose is not included in this case study because it does not have cycles. A developer from the Moose team evaluated all package dependencies of the system (106 dependencies), regardless of their involvement in cycles. The goal was to obtain an objective evaluation of each dependency.
The possible values the developer could give were: the dependency is expected in the system architecture; the purpose of the dependency requires deeper investigation; or the dependency is unexpected and should probably be removed. After this step, we matched all shared dependencies from CycleTable against the evaluation given by the developer. We assessed two hypotheses: that unexpected dependencies are often shared, and that unexpected dependencies are prominent in CycleTable, as given by their positions in the matrix.

2) Results—shared dependencies as primary targets for removal investigation: Table I summarizes the results of the case study. The first lines show some characteristics of the system: there are 14 packages involved in 42 minimal cycles, themselves including 17 different shared dependencies. The assessment performed by the Moose developer returned 11 unexpected dependencies which should be removed. Finally, we computed the intersection between unexpected and shared dependencies: 9 out of the 11 unexpected dependencies are also shared by various cycles. The two other unexpected but not shared dependencies are actually independent direct cycles, i.e., direct cycles each forming one SCC, with no intertwined cycles. These two dependencies are not critical in the system architecture. The 11 unexpected dependencies retrieved by the developer cover the 42 minimal cycles: in other words, fixing those 11 dependencies would break all cycles. It is remarkable that fixing the 9 shared dependencies alone breaks 40 out of the 42 minimal cycles (the two remaining cycles being the independent direct cycles). This case study shows that i) unexpected dependencies are often shared dependencies, and ii) removing shared dependencies can break multiple cycles with minimal effort, as only a handful of dependencies need to be assessed.

<table>
<thead>
<tr>
<th>Characteristics</th>
<th>Moose</th>
</tr>
</thead>
<tbody>
<tr>
<td>number of packages</td>
<td>33</td>
</tr>
<tr>
<td>number of packages in cycles</td>
<td>14</td>
</tr>
<tr>
<td>number of dependencies</td>
<td>106</td>
</tr>
<tr>
<td>number of minimal cycles</td>
<td>42</td>
</tr>
<tr>
<td>number of shared dependencies</td>
<td>17</td>
</tr>
<tr>
<td>number of unexpected dependencies</td>
<td>11</td>
</tr>
<tr>
<td>unexpected $\cap$ shared</td>
<td>9</td>
</tr>
<tr>
<td>cycles coverage by unexpected $\cap$ shared</td>
<td>40 / 42</td>
</tr>
</tbody>
</table>

Table I. Results of shared dependency validation.

3) Results—prominence of unexpected dependencies in CycleTable: CycleTable uses a heuristic to order packages and cycles in the matrix. This heuristic tries to place cycles sharing common dependencies next to each other. In this study, we show that the ordering given by the heuristic effectively puts forward unexpected dependencies, given that they are often shared. Starting with the set of unexpected dependencies retrieved by the developer, we looked up the position of each source package in CycleTable. This position corresponds to the row where the dependency is displayed. Table II shows that 9 out of the 11 unexpected dependencies (around 80%) appear in the first three rows (3 out of 15 packages, 20%). Thus issues with cyclic dependencies relate mostly to three packages. This result shows that, just by focusing on the first part of the visualization, a great deal of the cycle-breaking work can be done.
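The coverage figure in Table I can be recomputed mechanically: a minimal cycle is broken as soon as any one of its dependencies is removed. A small sketch, using the same list-of-nodes cycle representation as the earlier fragments (names are ours):

```python
def cycles_broken_by(cycles, removed_edges):
    """Return the cycles that disappear when the given dependencies
    (edges) are removed: a cycle breaks if it uses any removed edge."""
    removed = set(removed_edges)

    def edges_of(cycle):
        # consecutive node pairs, closing the cycle back to its start
        return set(zip(cycle, cycle[1:] + cycle[:1]))

    return [c for c in cycles if edges_of(c) & removed]

cycles = [["A", "B", "C"], ["A", "B", "E"], ["C", "D"]]
broken = cycles_broken_by(cycles, {("A", "B")})
print(len(broken), "of", len(cycles))   # 2 of 3: removing A>B breaks two cycles
```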
<table>
<thead>
<tr>
<th>Unexpected dependency</th>
<th>Position in CycleTable (line number)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Famix-Core » Famix-Implementation</td>
<td>1</td>
</tr>
<tr>
<td>Moose-Core » Famix-Core</td>
<td>2</td>
</tr>
<tr>
<td>Moose-Core » Moose-SmalltalkImporter</td>
<td>2</td>
</tr>
<tr>
<td>Moose-Core » Famix-Extensions</td>
<td>2</td>
</tr>
<tr>
<td>Moose-Core » Moose-GenericImporter</td>
<td>2</td>
</tr>
<tr>
<td>Moose-Core » Famix-Implementation</td>
<td>2</td>
</tr>
<tr>
<td>Famix-Extension » Famix-Smalltalk</td>
<td>3</td>
</tr>
<tr>
<td>Famix-Extensions » Moose-Finder</td>
<td>3</td>
</tr>
<tr>
<td>Famix-Extension » Famix-Java</td>
<td>3</td>
</tr>
<tr>
<td>Fame » Moose-Core</td>
<td>9</td>
</tr>
<tr>
<td>Moose-Wizard » Moose-Finder</td>
<td>10</td>
</tr>
</tbody>
</table>

Table II. Results of unexpected dependency positions.

B. Comparative Study with a Node-link Representation

1) Protocol: In this comparative study, we validate CycleTable as a useful visualization for understanding and breaking a large set of intertwined cycles. The precise goal of the study was to validate the effectiveness of the CycleTable matrix layout compared to a common node-link layout. We measured the time taken by participants to reason about cycles and the quality of their answers. The protocol is the following: first, the participant is given a tutorial about the task and the tool, with a small example, questions and correct answers to train on; second, he answers the same kinds of questions on the real case study. For the case study, we used a subset of Moose (the 14 packages in cycles, see Table I). Since we focus on assessing the tool, we replaced all package names by arbitrary letters from A to O and did not use enriched cells (Figure 9). Hence, participants could not use prior Moose background (some had already worked as developers in Moose) or package names to guide their intuition. The assessment of multiple intertwined cycles is impractical with a single node-link representation showing the full SCC (as shown in Figure 8). Instead, we chose to display a series of node-link representations, one for each minimal cycle, built with dotty/GraphViz [GN00]. This allows us to map the same data in CycleTable and in node-link form. In particular, a shared dependency was displayed with the same color across multiple node-link views. One group of seven participants answered questions using CycleTable; the other group of six participants answered questions using the node-link visualization.

Figure 9. Sample of CycleTable (left) and node-link visualization (right) used in the study.

2) List of questions: Here are the eight questions that the users had to answer. We also give the rating of each answer, based on the distance of the answer from the correct one. A 0 rating indicates a correct answer.

Q1: Give 2 packages that are in a direct cycle (a cycle between two packages). Rating: 0 when the answer represents a direct cycle, 1 otherwise.
Q2: Give a minimal cycle involving N and O (enumerate the packages in order). Rating: 0 when the answer is {O, G, N}, +1 for each false value.
Q3: How many minimal cycles go through package F? Rating: computed as the difference between the answer and 16, the correct answer.
Q4: How many shared dependencies exit package F? Rating: computed as the difference between the answer and 2, the correct answer.
Q5: How many dependencies should be removed to break all cycles involving package F? Rating: computed as the difference between the answer and 8, the correct answer.
Q6: What is the biggest shared dependency in the system? Rating: 0 when the answer is {G, M}, 1 otherwise.
Q7: How many minimal cycles are broken by removing the biggest shared dependency? Rating: computed as the difference between the answer and 24, the correct answer.
Q8: Give the minimum number of edges to remove in order to break all cycles in the system. Rating: computed as the difference between the answer and 10, the correct answer.

3) Results: 13 participants performed the study, ranging from bachelor's degree students to experienced researchers with various programming skills and experience with visualizations. We distinguish three parts in the results (Figure 10). The first part relates to questions Q1 and Q2. For these two easy questions, the user should identify cycles. The results show that it is faster to identify a cycle with a node-link visualization; we consider that it remains more intuitive than CycleTable. The second part relates to questions Q3 to Q7, where the user should recognize shared dependencies or packages involved in multiple cycles. Here, CycleTable performs better and faster. Q7 appears as an exception: participants in both groups actually confused two very similar colors, which was a mistake on our part in the choice of colors. These questions validate the design of CycleTable compared to node-link for the purpose of reasoning about shared dependencies. Finally, question Q8 evaluates the capacity to assess the full complexity of the graph. It builds upon the preparatory work done while answering the previous questions, as one needs to assess a minimal set of dependencies, mostly based on the impact of removing shared dependencies. The results show that it takes on average more than 90 seconds with CycleTable and more than 3 minutes with the node-link visualization. While both groups gave similar answers, this highlights the ease of reading CycleTable for this task.

C. Threats to Validity

Rating: We compute a rating based on the distance to the expected answer. We can see that CycleTable yields better answers than a node-link visualization. However, some false answers may be due to the visualization carrying no software meaning for the participants; in a real reengineering session, results could be different. Investigating this is part of future work.

Removable dependency: We assume for CycleTable that a critical dependency is often shared. In our case study, the results support this hypothesis, but more experiments are needed to confirm it.

Smalltalk software: Smalltalk applications have specific features, such as class extensions, which make it easier to modularize software but also easier to create cycles. Work is in progress to analyze Java software.

D. Conclusion of the Study

We created this visualization to assess cycles at the package level. To analyze its usefulness, we had no other dedicated visualization tool to compare against, so we built a node-link visualization which shows shared dependencies. The benefit of the node-link visualization is that there is no learning time.

Figure 10. Boxplots showing distance to the expected answer (absolute) and time in minutes for questions 1 to 8. Graph shown on the left and CycleTable on the right for each question.

The study shows that CycleTable is efficient for detecting cycles between packages and for helping the reengineer break them.
There are still some limits that we would like to overcome, with the goal of making CycleTable more effective for reengineers. The order of cycles in the matrix is based on the similarity they share with each other, i.e., on their common shared dependencies. The order of packages follows the cycle sequences as cycles are inserted into the matrix. This heuristic gives good results in the first rows and columns of the visualization; further on, it becomes difficult to arrange cycles and packages so that a shared dependency forms a single line of color in its row. When there are cycles without shared dependencies, CycleTable shows the cycles separately but without colors. Such systems are actually simple to understand; in this case, other visualizations such as node-link or DSM may be better suited.

VII. RELATED WORK

**Node-link visualization.** Node-link visualizations are often used to show dependencies among software entities; several tools, such as dotty/GraphViz, Walrus or Guess, can be used. Node-link visualization is intuitive and has a short learning curve. One problem is finding a layout that scales to large sets of nodes and dependencies: such a layout needs to preserve the readability of nodes and the ease of navigating along dependencies, while minimizing edge crossings. Even then, cycle identification is not trivial.

**Package Blueprint.** It shows how one package uses and is used by other packages [DPS+07], providing a fine-grained view. However, the package blueprint lacks (1) the identification of cycles at the system level and (2) a detailed focus on the classes actually involved in the cycles.

**Dependency Structural Matrix.** Contrary to node-link, a DSM visualization preserves the same structure whatever the size of the data. This enables the user to dive quickly into the representation using a uniform reading process. SCCs can be identified by colored cells. Moreover, eDSM [LDDB09] displays fine-grained information about dependencies between packages: classes in the source package as well as in the target package are shown in the cells of the DSM.

**Dependence Clusters.** Binkley and Harman proposed two visualizations for assessing program dependencies, from both a qualitative and a quantitative point of view [BH04]. They identify global variables and formal parameters in software source code and subsequently visualize the dependencies. Additionally, the MSG visualization [BH05] helps in finding dependence clusters and locating avoidable dependencies. Some aspects of their work are similar to ours, but the granularity and the methodology employed differ: they operate on source code and use slicing, while we focus on coarse-grained entities and use model analysis.

VIII. CONCLUSION

This paper proposes CycleTable, a visualization showing cycles between packages in order to help break cyclic dependencies. A fundamental heuristic of CycleTable is its focus on "shared dependencies", whose removal can impact many cycles at once. The visualization can be complemented with eCell, which has been integrated in eDSM [LDDB09]. We validated the heuristic of shared dependencies in a case study, and the efficiency of CycleTable over a node-link visualization in a comparative study. We plan to work on (i) pursuing the validation of shared dependencies, (ii) applying CycleTable to several software applications, in particular in other languages, and (iii) combining node-link visualization and CycleTable, as they are complementary.
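To make the role of shared dependencies concrete, the following sketch (ours, not the paper's Moose implementation) enumerates the minimal cycles of a small hypothetical package graph using the networkx library and counts how many cycles each dependency participates in; a dependency lying on many cycles is a good removal candidate, since deleting it breaks all of those cycles at once.

```python
# Minimal sketch (not the paper's implementation) of the "shared dependency"
# heuristic on a toy package graph. Package names A..D and the networkx
# dependency are assumptions of this sketch.
import networkx as nx

deps = [("A", "B"), ("B", "A"),   # direct cycle A <-> B
        ("B", "C"), ("C", "A"),   # cycle A -> B -> C -> A
        ("C", "D"), ("D", "B")]   # cycle B -> C -> D -> B
g = nx.DiGraph(deps)

# Count, for every edge, the number of minimal (simple) cycles it closes.
cycles_per_edge = {}
for cycle in nx.simple_cycles(g):            # yields one node list per cycle
    edges = zip(cycle, cycle[1:] + cycle[:1])
    for edge in edges:
        cycles_per_edge[edge] = cycles_per_edge.get(edge, 0) + 1

# Rank removal candidates: the most "shared" dependency comes first.
for edge, n in sorted(cycles_per_edge.items(), key=lambda kv: -kv[1]):
    print(edge, "is in", n, "minimal cycle(s)")
```

On this toy graph, the dependency B -> C belongs to two minimal cycles, so removing it breaks both at once, which is exactly the reasoning CycleTable supports visually.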
Acknowledgements

This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council, and FEDER through the 'Contrat de Projets Etat Region (CPER) 2007-2013'.

REFERENCES
{"Source-Url": "https://core.ac.uk/download/pdf/51216966.pdf", "len_cl100k_base": 7453, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 33974, "total-output-tokens": 8633, "length": "2e12", "weborganizer": {"__label__adult": 0.000244140625, "__label__art_design": 0.0005321502685546875, "__label__crime_law": 0.00022935867309570312, "__label__education_jobs": 0.0006136894226074219, "__label__entertainment": 5.0008296966552734e-05, "__label__fashion_beauty": 0.00010341405868530272, "__label__finance_business": 0.00012969970703125, "__label__food_dining": 0.00020170211791992188, "__label__games": 0.0004029273986816406, "__label__hardware": 0.0004549026489257813, "__label__health": 0.00018727779388427737, "__label__history": 0.00016307830810546875, "__label__home_hobbies": 5.97834587097168e-05, "__label__industrial": 0.00018966197967529297, "__label__literature": 0.0001823902130126953, "__label__politics": 0.00014400482177734375, "__label__religion": 0.0002684593200683594, "__label__science_tech": 0.006679534912109375, "__label__social_life": 7.462501525878906e-05, "__label__software": 0.009002685546875, "__label__software_dev": 0.9794921875, "__label__sports_fitness": 0.00017368793487548828, "__label__transportation": 0.0002732276916503906, "__label__travel": 0.0001354217529296875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37799, 0.01899]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37799, 0.51926]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37799, 0.91466]], "google_gemma-3-12b-it_contains_pii": [[0, 853, false], [853, 5803, null], [5803, 9914, null], [9914, 12056, null], [12056, 17057, null], [17057, 19190, null], [19190, 21470, null], [21470, 26688, null], [26688, 31367, null], [31367, 32815, null], [32815, 37799, null]], "google_gemma-3-12b-it_is_public_document": [[0, 853, true], [853, 5803, null], [5803, 9914, null], [9914, 12056, null], [12056, 17057, null], [17057, 19190, null], [19190, 21470, null], [21470, 26688, null], [26688, 31367, null], [31367, 32815, null], [32815, 37799, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37799, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37799, null]], "pdf_page_numbers": [[0, 853, 1], [853, 5803, 2], [5803, 9914, 3], [9914, 12056, 4], [12056, 17057, 5], [17057, 19190, 6], [19190, 21470, 7], [21470, 26688, 8], [26688, 31367, 9], [31367, 32815, 10], [32815, 37799, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37799, 0.12778]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
93cbfe93edd36a1ffff677f73a2181436320b359
ADA in Introductory Courses

Dr. Sara Stoecklin
Dr. Marion Harmon
Florida A&M University, Tallahassee, Florida 32307-2001

U.S. Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709-2211

The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or decision, unless so designated by other documentation. Approved for public release; distribution unlimited.

This project at Florida A & M University sponsored development of a new sequence of two lecture courses, entitled (1) Fundamentals of Programming with Ada and (2) Program, File and Data Structures with Ada, with two innovative supporting laboratory courses. These four courses replaced a sequence of courses previously taught with Pascal. The laboratory courses were developed to support the theory that reading programs, and experimenting with those programs prior to writing original programs, would be helpful in teaching complex programming languages such as Ada. The intent of this project was to allow students to learn a usable language during their first courses in programming, and for these same students to develop projects using the Ada language during project courses. Additionally, several industries needed Ada programmers, and these students were targeted for those industries.

Subject Terms: ADA, Introductory Courses, Programming Laboratory

17. Security Classification of Report: UNCLASSIFIED
18. Security Classification of This Page: UNCLASSIFIED
19. Security Classification of Abstract: UNCLASSIFIED
20. Limitation of Abstract: UL

ADA in Introductory Courses
FINAL REPORT
Dr. Sara Stoecklin
Dr. Marion Harmon
12/01/94
U.S. Army Research Office
BAA #92–25 ARO 30999–MA
DAAL03–92–G–0415
FLORIDA A & M UNIVERSITY (HBCU)
Approved for Public Release; Distribution Unlimited

PROGRESS REPORT

1. ARO PROPOSAL NUMBER: 300999–MA
2. PERIOD COVERED BY REPORT: 1 January 1993 – November 24, 1994
3. TITLE OF PROPOSAL: ADA in Introductory Courses
4. CONTRACT OR GRANT NUMBER: DAAL03–92–G–0415
5. NAME OF INSTITUTION: Florida A & M University
6. AUTHORS OF REPORT: Dr. Sara Stoecklin and Dr. Marion Harmon
7. LIST OF MANUSCRIPTS SUBMITTED OR PUBLISHED UNDER ARO SPONSORSHIP DURING THIS REPORTING PERIOD, INCLUDING JOURNAL REFERENCES: 3. Harmon, M. Course Material including Syllabus, Outlines, Course Notes, Assignments, Tests, and Quizzes
8. SCIENTIFIC PERSONNEL SUPPORTED BY THIS PROJECT AND DEGREES AWARDED DURING THIS REPORTING PERIOD:
Name / Degrees Awarded
1. Debra Snead, BS – CIS/FAMU
2. Tomeka Williams, BS – CIS/FAMU
3. Mary Clark
4. Dr. Sara Stoecklin
5. Dr. Marion Harmon
6. Dr. Usha Chandra
9. REPORT OF INVENTIONS (BY TITLE ONLY): (NONE)

Dr. Sara Stoecklin
Florida A&M University
Tallahassee, Florida 32307–2001

I. FOREWORD

This project at Florida A & M University sponsored development of a new sequence of two lecture courses, entitled (1) Fundamentals of Programming with Ada and (2) Program, File and Data Structures with Ada, with two innovative supporting laboratory courses. These four courses replaced a sequence of courses previously taught with Pascal. The laboratory courses were developed to support the theory that reading programs, and experimenting with those programs prior to writing original programs, would be helpful in teaching complex programming languages such as Ada.
The intent of this project was to allow students to learn a usable language during their first courses in programming, and for these same students to develop projects using the Ada language during project courses. Additionally, several industries in Florida needed Ada programmers, and these students were targeted for those industries.

II. TABLE OF CONTENTS

III. LIST OF APPENDIXES, ILLUSTRATIONS AND TABLES
IV. BODY OF REPORT
V. REPORT OF INVENTIONS (BY TITLE ONLY): NONE
VI. BIBLIOGRAPHY
VII. APPENDIXES

III. LIST OF APPENDIXES, ILLUSTRATIONS AND TABLES

A. Syllabus for Introductory Courses, including: Conceptual Objectives, Performance Objectives, General Course Topic Outline
B. Sample Test from Test Bank
C. Sample Programming Assignment
D. Sample of one Chapter of Laboratory Book, with: 1. Laboratory Assignments 2. Laboratory Scientific Questions 3. Automated Version of Laboratory Assignments

IV. BODY OF REPORT

A. STATEMENT OF THE PROBLEM STUDIED

The problem studied was to develop courses which would teach the Ada language in the first programming course for incoming students. Many students have trouble with problem solving, problem decomposition, data typing and data structures while writing those first programs. The theory tested here was that teaching students to read and experiment with code, prior to code development, aids the understanding of program development.

B. SUMMARY OF THE MOST IMPORTANT RESULTS

1. Information and Technology Sharing

The most significant result of the project was the sharing of information between universities trying various techniques with Ada. The results and the laboratory course developed as part of this project were shared with two other universities trying to teach Ada as their first programming course. East Tennessee University, one of those universities, used the laboratory course material in the development of their own programming courses to support a software engineering effort.

2. Improved Learning through Laboratory Experiences

A significant result of the project was the faculty observing that students clearly benefited from reading code prior to writing code. Our first programming course now makes such extensive use of these activities that this part of the course has grown in importance, and the integration of the extra laboratory that was developed is now being considered as a critical part of the actual introductory course.

3. Ada Environment Support

Another significant result of the project was the development of support for the Ada environment. Students who worked in this environment were significantly better prepared for internships and work using the Ada programming language.

4. Ada Students

The most valuable result of this project was the students who continue to be exposed to the Ada programming language and who have added to the number of Ada-trained personnel in the job market.

C. LIST OF ALL PUBLICATIONS AND TECHNICAL REPORTS

3. Harmon, M. Course Material including Syllabus, Outlines, Course Notes, Assignments, Tests, and Quizzes

D. LIST OF ALL PARTICIPATING SCIENTIFIC PERSONNEL

<table> <thead> <tr> <th>Name</th> <th>Degrees Awarded</th> </tr> </thead> <tbody> <tr> <td>1. Debra Snead</td> <td>BS – CIS/FAMU</td> </tr> <tr> <td>2. Tomeka Williams</td> <td>BS – CIS/FAMU</td> </tr> <tr> <td>3. Mary Clark</td> <td></td> </tr> <tr> <td>4. Dr. Sara Stoecklin</td> <td></td> </tr> <tr> <td>5. Dr. Marion Harmon</td> <td></td> </tr> <tr> <td>6. Dr. Usha Chandra</td> <td></td> </tr> </tbody> </table>
V. REPORT OF INVENTIONS (BY TITLE ONLY) – none

VI. BIBLIOGRAPHY

[PET85] Petroski, H., To Engineer is Human, St. Martin's Press, New York, 1985.

VII. APPENDIXES

COP1215 SYLLABUS: Fundamentals of Programming with ADA

Prerequisite: MAC 1104 and MAC 1133, OR MAC 1142, OR CALC I
Co-requisite: COP1215L AND MAD2102

OVERVIEW OF COURSE

This course is designed to prepare students for analyzing problems and for designing and implementing algorithms by understanding language concepts and the ADA programming language.

COURSE OBJECTIVES

1. Introduction to the programming process: problem analysis, algorithm development, verification of algorithm, coding, debugging, testing, documentation, and maintenance.
2. Introduction to the top-down methodology and the use of structure charts and pseudocode to develop algorithms.
3. Introduction to the syntactic and semantic rules of ADA.
4. Introduction to the stylistic issues in coding of programs.

CRITICAL PREREQUISITE KNOWLEDGE AND SKILLS

1. Students should have a working knowledge of Boolean algebra operations.
2. Students should have a working knowledge of relational operators and mathematical operators.
3. Students should be able to apply problem solving techniques to solve algebraic problems.

PERFORMANCE OBJECTIVES FOR STUDENTS

Upon completion of this course, students should be able to perform the following tasks:
1. Use the ADA Environment to create, edit, compile, and execute an ADA program.
2. Distinguish between computer hardware and software.
3. List the basic programming steps.
4. Apply the top-down methodology to solve simple to moderate problems.
5. Know the difference between batch and interactive processing.
7. Trace the execution of a program.
8. Select adequate data sets for testing of programs.
9. Develop algorithms which require the use of selection, repetition, and procedure control structures.
10. Given a design which requires sub-tasks: a. Determine what the formal and actual parameter list should be for each module. b. Determine the method of parameter passing, call by reference versus call by value. c. Determine the scope of variables, local versus global.
11. Manipulate an array of record data type and its operations.

CONCEPTUAL OBJECTIVES FOR STUDENTS

Upon the successful completion of this course, the student will understand:
1. The computer is an electronic device which only follows instructions.
2. There are several steps to the programming process.
3. One method of design is the top-down methodology.
4. Data types and their operations.
5. Design tools: structure charts, N-S charts, pseudocode.
6. Techniques for problem solving.
7. Data validation and testing strategies.

PROPOSED SCHEDULE

<table> <thead> <tr> <th>WEEK</th> <th>TOPIC</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Chapter 1</td> </tr> <tr> <td>2</td> <td>Chapter 2</td> </tr> <tr> <td>3</td> <td>Chapter 3: Input and Design Methodology. **First Program**</td> </tr> <tr> <td>4</td> <td>Chapter 4: Conditions, Boolean Expressions, and Selection Control Structures. **Second Program – Read format/use file**</td> </tr>
<tr> <td>5</td> <td>Chapter 5: Looping</td> </tr> <tr> <td>6</td> <td>Chapter 5/6: **Third Program – Selection/Loops**</td> </tr> <tr> <td>7</td> <td>Chapter 6: Procedures</td> </tr> <tr> <td>8</td> <td>Chapter 7: Value Parameters, Nesting, Procedures, and More on Interface Design. **Fourth Program – Procedures**</td> </tr> <tr> <td>9</td> <td>Chapter 8: Functions, Precision, and Recursion</td> </tr> <tr> <td>10</td> <td>Chapter 9: Sets and Additional Control Structures</td> </tr> <tr> <td>11</td> <td>Chapter 10: Simple Data Types. **Fifth Program – Procedures**</td> </tr> <tr> <td>12</td> <td>Chapter 11: One-Dimensional Arrays</td> </tr> <tr> <td>13</td> <td>Chapter 12: Applied Arrays: Lists and Strings</td> </tr> <tr> <td>14</td> <td>Chapter 13: Multi-dimensional Arrays. **Sixth Program – Arrays**</td> </tr> <tr> <td>15</td> <td>Chapter 14: Records and Data Abstractions</td> </tr> <tr> <td>16</td> <td>Chapter 15: Exams</td> </tr> </tbody> </table>

(1 point each) TRUE/FALSE.

_______ 1) The Ada programming language was developed on behalf of the U.S. Department of Defense to help control the cost of software for computers embedded in larger systems.
_______ 2) The parameter mode OUT specifies that the parameter will be used to transmit information from the calling program or subprogram.
_______ 3) The following statement would be ignored by the Ada compiler and assumed to be a comment: --- The Program computes ....
_______ 4) In Ada, identifiers may consist of letters, digits, and underscores (_).
_______ 5) When a new compilation unit uses entities defined in older, separately compiled units, the new unit must begin with an occurs clause naming the older units.
_______ 6) An Ada program is simply a subprogram with parameters.
_______ 7) In an Ada package, only those subprograms defined in the package specification can be called from outside the package.
_______ 8) To give more precise control over entities provided to the outside world, the Ada language allows a package specification to contain a private part, which begins with the word PRIVATE and extends to the end of the package specification.
_______ 9) Floating-point types are approximations of real numbers and are evenly spaced.
_______ 10) Declarations are processed one after the other at the time a program is compiled.
_______ 11) A discrete type is either an integer type or an enumeration type.
_______ 12) An allocator is an expression whose evaluation causes dynamic allocation of a variable.
_______ 13) An incomplete type declaration must be followed immediately by a full declaration for the same type.
_______ 14) The following FOR loop will print "HELLO" 4 times:

```ada
type DAY_OF_WEEK_TYPE IS (MON, WED, THUR, FRI);
FOR I IN FRI .. MON LOOP
   TEXT_IO.PUT("HELLO");
END LOOP;
```

_______ 15) Assume the following:

```ada
TYPE NUM_PTR is Access STRING(1..5);
ZIP_CODE_PTR : NUM_PTR;
BEGIN
   ZIP_CODE_PTR := NEW STRING;  -- is this statement legal?
   ...
```

(2 POINTS EACH) MULTIPLE CHOICE.

1) WHICH OF THE FOLLOWING IS NOT A RESERVED WORD IN ADA? A) OUT B) DIGIT C) LOOP D) DELTA E) ELSIF
2) Which of the following declarations is INVALID? a) LIST : CONSTANT STRING(1..5) := "string"; b) X, Y, Z : INTEGER := 0; c) A : INTEGER := 4; d) ch : CHARACTER range 'A' .. 'C'; e) none of these
3) Which of the following operators can be used to catenate two operands? a) ^ b) + c) / d) & e) none of these
4) For what value of X will the following condition evaluate to TRUE? ASSUME THAT Y IS 7.

NOT (X = 5) AND (Y > X) OR X < 10

A) 5 ONLY B) ANY NUMBER LESS THAN 10 C) 10 D) A NUMBER LESS THAN 7 BUT GREATER THAN 4 E) NONE OF THESE
5) Which of the following forms of an allocator is INVALID?
a) NEW typemark b) NEW typemark'(expression) c) NEW typemark index-constraint d) NEW typemark'(expression) e) NEW typemark '(expression)

SHORT ANSWERS. (5 points each)

1) Write an Ada subprogram to overload the "<" operator. The subprogram should return the smaller of two integer operands. For example, x := 5 < 7; would store 5 in x, since 5 is less than 7; but for x := 5 < 4;, x would get the value 4. Your subprogram should accept two integer parameters.

2) Declare a record with the following fields:

   record
      name : 20 characters
      age : 1..120
      eye color : blue, brown, green, gray, unknown
      batting average : 0.0 .. 1.0, to the nearest thousandth precision
      height, weight : float, 3 digits accuracy
   end record;

3) Declare an unconstrained array of the records declared above.

4) Given:

   TYPE DAY_TYPE is (MON, TUES, WED, THUR, FRI, SAT, SUN);
   X : DAY_TYPE;

   IF X = MON then
      TEXT_IO.PUT("MONDAY");
   ELSIF (X = FRI) and then (Y = SAT) then
      TEXT_IO.PUT("The Weekend is Near");
   ELSIF (X = TUES) then
      TEXT_IO.PUT("TUESDAY");
   END IF;

REWRITE THE IF STATEMENT ABOVE USING A CASE STRUCTURE.

5) WRITE A FUNCTION TO COMPUTE THE FOLLOWING SUM. THE FUNCTION SHOULD RETURN THE SUM OF THE FOLLOWING SERIES:

\[ 2 + 4 + 8 + 16 + 32 + \ldots + 512 + 1024 \]

(2 points each) GIVEN the type declarations

   TYPE ZIP_TYPE is RANGE 1 .. 99999;
   TYPE SPEED_TYPE is DIGITS 6;
   TYPE AVG_SPEED_TYPE is DELTA 2.5 RANGE 0.0 .. 200.0;
   TYPE ARRAY_TYPE is ARRAY (INTEGER RANGE <>) of CHARACTER;
   TYPE COLOR_TYPE is (RED, BLUE, GREEN, PINK, ORANGE);

indicate which of the following object declarations have appropriate constraints, and what is wrong with the declarations that do not have appropriate constraints.

a) table_of_char : ARRAY_TYPE(1 .. 50); ____________________________
b) SPEED_1 : SPEED_TYPE DIGITS 7 RANGE 0.3 .. 10.0; __________
c) COLOR : COLOR_TYPE := COLOR_TYPE'LAST; ______________________
d) SPEED_3 : AVG_SPEED_TYPE DELTA 1.5; __________________________

(2 points each) EVALUATE:

a) -5 REM 2 _______________
b) -8 MOD -4 _______________
c) 6 REM 4 _______________
d) 26 REM -5 _______________

(5 points) Rewrite the following sequence of statements without a GOTO statement.

   DIVISOR := 2;
   WHILE (DIVISOR ** 2 <= CANDIDATE) LOOP
      IF (CANDIDATE MOD DIVISOR = 0) THEN
         GOTO not_prime;
      END IF;
   END LOOP;
   -- if this point is reached then the candidate must be prime
   << not_prime >> CANDIDATE := CANDIDATE + 1;

CONSIDER the following declarations:

   TYPE HOUR_TYPE IS RANGE 1 .. 12;
   SUBTYPE LATE_HOUR_SUBTYPE IS HOUR_TYPE RANGE 5 .. 10;
   TYPE DAY_TYPE IS RANGE 1 .. 7;
   SUBTYPE WORK_DAY_SUBTYPE IS DAY_TYPE RANGE 1 .. 6;
   HOUR_NUM : HOUR_TYPE;
   LATE_HOUR_NUM : LATE_HOUR_SUBTYPE;
   DAY_NUM : CONSTANT DAY_TYPE;
   WORK_DAY : WORK_DAY_SUBTYPE;

YOU MAY ASSUME THAT LEGAL ASSIGNMENTS HAVE BEEN MADE TO THESE OBJECTS AT SOME POINT DURING EXECUTION. (2 points each) Which of the following assignments are not legal, and why?

a) HOUR_NUM := LATE_HOUR_NUM; ______________________
b) LATE_HOUR_NUM := HOUR_NUM; ______________________
c) DAY_NUM := 3; _________________________________
d) WORK_DAY := DAY_NUM; __________________________

True or False. (5 points)

____ Ada was developed on behalf of the U.S. Department of Defense strictly for the development of embedded computer systems.
____ Ada is a standardized high level programming language.
____ Ada provides for separate compilation for separate subprogram components.
____ In Ada it is possible to declare more than one subprogram with the same name if they do not have the same parameters and result types; this is called redefining names.
____ A data type that can be completely characterized by the way in which the values of the data are related to each other by the operations.
____ Ada draws a distinction between the external appearance of a package and its internal workings.
____ The following is not an Ada reserved word: WHEN.
____ The following is an illegal Ada identifier: FIVE_%.

(5 points) Locate the syntax errors in the following Ada program:

   With Basic_IO;
   PROCEDURE EXAMPLE is
      J, L : INTEGER
   BEGIN
      FOR I in 1..5 loop
         PUT("ENTER A NUMBER ");
         BASIC_IO.GET(J);
         NEW_LINE;
         PUT("ENTER ANOTHER NUMBER ");
         BASIC_IO.GET(L);
         NEW_LINE;
         J := J + L;
         PUT("THE RESULT IS ");
         NEW_LINE;
         PUT(J)
      end
   END EXAMPLE;

Number the instances of syntax errors and explain why, beside the number (there are more than 3):

1) ____________________________
2) ____________________________
3) ____________________________

(10 points) Write an Ada program that accepts a sentence from the screen and prints the number of vowels in the sentence. Example: "The cat chased the mouse across the room." The number of vowels is 13. The program will be graded on correctness and style.

THE COLOR PALETTE ASSIGNMENT

You are to write an Ada program which will mix colors and display their resultant color. In order to accomplish this task, you will probably need to use the following Ada concepts:

Overloading of an operator (use '+')
Enumeration of the colors (declare a type)
Enumeration I/O (instantiate enumeration I/O)
Loop, Case, If constructs
Put/Get to/from the terminal (implies use of TEXT_IO)

Your program should use the following color palette as a minimum, but you are free to add other colors and define their resultant colors as you deem appropriate.

<table> <thead> <tr> <th>First Color</th> <th>Second Color</th> <th>Resultant Color</th> </tr> </thead> <tbody> <tr> <td>RED</td> <td>YELLOW</td> <td>ORANGE</td> </tr> <tr> <td>BLUE</td> <td>RED</td> <td>PURPLE</td> </tr> <tr> <td>BLUE</td> <td>YELLOW</td> <td>GREEN</td> </tr> </tbody> </table>

Don't forget to consider the inverse cases; e.g., if YELLOW is the first color and RED is the second color, then their resultant mix is still ORANGE. Any other combination is not defined, unless you choose to define it.

Your program should follow this scenario. It should prompt the user for the first color, then read that color. Next, it should prompt for the second color and read that color. Then it should call a function which determines what the mix will be, if there is one. (You should strongly consider overloading the '+' operator as your function call; a sketch of this idea appears below.) The resultant color, if there is one, should then be printed; otherwise, a message should be printed telling the user that the mix is undefined. Finally, you should prompt the user to determine whether the user desires to mix other colors. If so, then loop back through this process; if not, then exit the loop. GOOD LUCK!
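As a hint for the overloading step, the fragment below sketches one possible shape for such a function. It is an illustration only, not the required solution; the UNDEFINED literal is an addition of this sketch.

```ada
-- Illustrative sketch only (not the full assignment solution): overloading
-- "+" so that mixing two primary colors yields their resultant color.
-- The UNDEFINED literal is an assumption of this sketch.
type COLOR_TYPE is (RED, YELLOW, BLUE, ORANGE, PURPLE, GREEN, UNDEFINED);

function "+" (FIRST, SECOND : COLOR_TYPE) return COLOR_TYPE is
begin
   if (FIRST = RED and SECOND = YELLOW) or (FIRST = YELLOW and SECOND = RED) then
      return ORANGE;
   elsif (FIRST = BLUE and SECOND = RED) or (FIRST = RED and SECOND = BLUE) then
      return PURPLE;
   elsif (FIRST = BLUE and SECOND = YELLOW) or (FIRST = YELLOW and SECOND = BLUE) then
      return GREEN;
   else
      return UNDEFINED;  -- any other mix is not defined
   end if;
end "+";
```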
Unit 2 Introduction to the Ada Language

Conceptual Objectives
1. Understand the Ada Compiler
2. Understand the Basic Commands of the Operating System
3. Understand the Basic Commands of the Compiler
4. Understand the Basic Commands of the Editor
5. Understand How to Run an Ada Program
6. Recognize Overloading

Performance Objectives
1. Be able to use DOS commands such as COPY, ERASE, TYPE, DIR, and PRINT for files
2. Be able to RETRIEVE, STORE, and EDIT an Ada Program
3. Be able to COMPILE and RUN an Ada Program
4. Be able to Traverse through the Ada Program
5. Be able to Run an Ada Program using Different Data Types

Ada Programs

The Ada programs used in this chapter are composed of two types of statements: declaration statements and execution statements. Declaration statements used in this chapter include the procedure statement, identifier declarations, and the statements that include other packages. The execution statements used in this chapter are the assignment statement and the output statements. A short description of each statement is included below, with examples.

Procedure Statement

Each Ada program is given a program name by the procedure statement. This name is used to link and compile the program. In addition, the program is stored in the DOS directory under a filename; these two names do not have to be the same. The procedure statement has the keyword procedure, the name of a valid identifier in the Ada language, and the keyword is. This statement is followed by the Ada statements necessary for the program and the end statement for the procedure. The syntax of the procedure statement to declare the program name is:

```ada
procedure IDENTIFIER is
begin
   -- Ada statements go here
end IDENTIFIER;
```

Declaration Statements: Identifiers

There are various declarations which are made through valid Ada identifiers. Such things as variables and constants are described using simple data types such as integer, float (real), fixed, character and Boolean. The declaration statements contain the name of the identifier and the data type for that identifier. The syntax of the variable statements to declare these identifiers to the Ada program is:

```ada
IDENTIFIER : INTEGER;
IDENTIFIER : FLOAT;
IDENTIFIER : CHARACTER;
```

The syntax of the constant statement is:

```ada
IDENTIFIER : CONSTANT INTEGER := 10;
IDENTIFIER : CONSTANT FLOAT := 10.0;
IDENTIFIER : CONSTANT CHARACTER := 'A';
```

I/O Packages

During the first few weeks a simple input/output (I/O) package will be used to allow your Ada program to input values from the keyboard and output values to the screen. This package is called Simple_Ada_IO.

Output Statements

There are two statements in Simple_Ada_IO that allow the user to write information to an output device, such as the screen: PUT and PUT_LINE. The first statement, PUT, will display on the output device with no advancement to the next line (i.e. no carriage return [CR] and line feed [LF]). The second statement, PUT_LINE, will display on the output device with an advancement to the next line. Variables, constants, and literals are output using these two statements. Literals are strings of characters, such as those characters on the keyboard; variables and constants are those identifiers declared by the declaration statements. The syntax for expressing a variable, constant, or literal on an output device is:

SIMPLE_ADA_IO.PUT ( IDENTIFIER );    <--- displays the identifier's value
SIMPLE_ADA_IO.PUT_LINE ( "ABC" );    <--- displays the literal ABC

Assignment Statement

The assignment statement allows value assignments to occur to an identifier on the left hand side of the :=. If there is an identifier on the right hand side of the :=, such as IDENTIFIER2, then the value of IDENTIFIER2 is assigned to the identifier on the left hand side of the :=, such as IDENTIFIER1. If there is an arithmetic expression on the right side of the :=, then the value of the expression is calculated and assigned to the identifier on the left hand side of the :=, such as IDENTIFIERA. The syntax of the assignment statement is:

IDENTIFIER1 := IDENTIFIER2;
IDENTIFIERA := arithmetic expression;
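As a combined illustration of the statements above (our own example, not one of the original laboratory programs; the program name RECTANGLE_AREA is hypothetical), the following short program uses constant declarations, an assignment statement with an arithmetic expression, and Simple_Ada_IO output:

```ada
-- Illustrative example (not from the original labs): constants, an
-- assignment statement, and output through SIMPLE_ADA_IO.
with SIMPLE_ADA_IO;
procedure RECTANGLE_AREA is
   LENGTH : constant INTEGER := 4;
   WIDTH  : constant INTEGER := 3;
   AREA   : INTEGER;
begin -- RECTANGLE_AREA
   AREA := LENGTH * WIDTH;              -- assignment with an expression
   SIMPLE_ADA_IO.PUT ("The area is ");  -- literal, no line advance
   SIMPLE_ADA_IO.PUT (AREA);            -- overloaded PUT for INTEGER
   SIMPLE_ADA_IO.NEW_LINE;
end RECTANGLE_AREA;
```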
Use the program named TEMP_CONVERSION, printed below, for the following laboratories. The program is located in the file called UNIT2.A1. Review the temperature conversion program to understand how it converts a temperature from Fahrenheit to Celsius and from Celsius to Fahrenheit. NOTE: the two dashes -- located together identify comment statements embedded in the Ada program and have NO relevance to the execution of the program. They DO serve as documentation, allowing programmers to make comments within the program for better program understanding.

   -- UNIT2.A1
   with SIMPLE_ADA_IO;
   procedure TEMP_CONVERSION is
      -- This program allows the user to enter a temperature.
      -- The temperature can be either Fahrenheit or Celsius.
      -- A temperature in Fahrenheit is converted to Celsius.
      -- A temperature in Celsius is converted to Fahrenheit.
      TEMP_IN_FAHRENHEIT : constant INTEGER := 32;
      TEMP_IN_CELSIUS    : constant INTEGER := 0;
      FAHRENHEIT_TO_CELSIUS : INTEGER; -- RESULT
      CELSIUS_TO_FAHRENHEIT : INTEGER; -- RESULT
   begin -- TEMP_CONVERSION
      CELSIUS_TO_FAHRENHEIT := (9*TEMP_IN_CELSIUS + 160)/5;
      FAHRENHEIT_TO_CELSIUS := 5*(TEMP_IN_FAHRENHEIT - 32)/9;
      SIMPLE_ADA_IO.PUT (TEMP_IN_FAHRENHEIT);
      SIMPLE_ADA_IO.PUT (" in Fahrenheit is ");
      SIMPLE_ADA_IO.PUT (FAHRENHEIT_TO_CELSIUS);
      SIMPLE_ADA_IO.PUT (" in Celsius.");
   end TEMP_CONVERSION;

LABORATORY 1

The program named TEMP_CONVERSION is located in a file named UNIT2.A1.
1. Edit the program. Change the last PUT statement to a PUT_LINE statement. Assure yourself that you made the change correctly.
2. Compile and Link the program. IF THE PROGRAM DOES NOT COMPILE, then the statement you inserted must be incorrect.
3. Run the program (execute the program). What temperature was calculated for Celsius? __________
Laboratory 1 completed __________

LABORATORY 2

The program computes values for each of the formulas: one value is Celsius to Fahrenheit and the other is Fahrenheit to Celsius. There is a PUT statement for only one of these calculations, Fahrenheit to Celsius.
1. Edit the program. Add a PUT_LINE statement to PUT the other calculation. Assure yourself that you made the change correctly.
2. Compile, Link, and Run the program. What temperature was calculated for Celsius? __________ What temperature was calculated for Fahrenheit? __________
Laboratory 2 completed __________

LABORATORY 3

Two constants are declared in the Ada program TEMP_CONVERSION.
1. Edit the program. Change the value of the constant TEMP_IN_FAHRENHEIT to 212. Change the value of the constant TEMP_IN_CELSIUS to 100.
2. Compile and Link your program.
3. Run your program. What is the output for Fahrenheit to Celsius? ________ What is the output for Celsius to Fahrenheit? ________
Laboratory 3 completed ________

LABORATORY 4

Parentheses in the equations set the mathematical precedence of the calculations. Deleting or changing them can make a difference in the mathematical calculations.
1. Edit your program. Change the constant value TEMP_IN_FAHRENHEIT back to 32. Change the constant value TEMP_IN_CELSIUS back to 0.
2. Compile, Link, and Run your program. What is the output for Fahrenheit to Celsius? ________ What is the output for Celsius to Fahrenheit? ________
3. Edit your program. Delete the parentheses from the first assignment statement.
4. Compile, Link, and Rerun your program. What is the output for Fahrenheit to Celsius? ________ What is the output for Celsius to Fahrenheit? ________ Is the output the same as in question 3? Why or why not?
Laboratory 4 completed ________

Program UNIT2_EXERCISE (UNIT2.A2) is a segment of a program. A segment is an outline of a program with sections of the program intentionally omitted. Use this program segment for Laboratories 5 and 6.

========================================
-- UNIT2.A2
-- PLACE YOUR CODE HERE TO ACCESS Simple_Ada_IO.
procedure UNIT2_EXERCISE is
begin -- UNIT2_EXERCISE
   -- PLACE YOUR CODE HERE
end UNIT2_EXERCISE;
========================================

LABORATORY 5

Write a program to display the following information on the screen. Use literal constants to identify each of the data items written on the screen.
a. Your last name
b. Your birth date
Laboratory 5 completed

LABORATORY 6

Now change your solution for Laboratory 5 to use named constants for month, day, and year rather than the literal constants. You will need a constant definition in the declaration section of your Ada program.
Laboratory 6 completed

SIMPLEIO.ADA

You will note that each of the programs uses a package named SIMPLE_ADA_IO. This package, shown below, provides Get and Put operations for the screen and keyboard.

--- SIMPLEIO.ADA
with IO_EXCEPTIONS;
with TEXT_IO;
package SIMPLE_ADA_IO is
   procedure GET (ITEM : out INTEGER);
   procedure GET (ITEM : out CHARACTER);
   procedure GET (ITEM : out STRING);
   procedure GET (ITEM : out FLOAT);
   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out INTEGER);
   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out CHARACTER);
   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out STRING);
   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out FLOAT);
   procedure PUT (ITEM : in INTEGER);
   procedure PUT (ITEM : in CHARACTER);
   procedure PUT (ITEM : in STRING);
   procedure PUT (ITEM : in FLOAT);
   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in INTEGER);
   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in CHARACTER);
   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in STRING);
   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in FLOAT);
   procedure GET_LINE (ITEM : out STRING; LAST : out NATURAL);
   procedure PUT_LINE (ITEM : in STRING);
   procedure NEW_LINE;
   procedure NEW_PAGE;
   function END_OF_FILE return BOOLEAN;
   function END_OF_LINE return BOOLEAN;
   DEVICE_ERROR : exception renames IO_EXCEPTIONS.DEVICE_ERROR;
   END_ERROR    : exception renames IO_EXCEPTIONS.END_ERROR;
   DATA_ERROR   : exception renames IO_EXCEPTIONS.DATA_ERROR;
end SIMPLE_ADA_IO;

package body SIMPLE_ADA_IO is
   package TYPE_INTEGER_IO is new TEXT_IO.INTEGER_IO (INTEGER);
   package TYPE_FLOAT_IO is new TEXT_IO.FLOAT_IO (FLOAT);

   procedure GET (ITEM : out INTEGER) is
   begin TYPE_INTEGER_IO.GET (ITEM); end GET;

   procedure PUT (ITEM : in INTEGER) is
   begin TYPE_INTEGER_IO.PUT (ITEM); end PUT;

   procedure GET (ITEM : out CHARACTER) is
   begin TEXT_IO.GET (ITEM); end GET;

   procedure PUT (ITEM : in CHARACTER) is
   begin TEXT_IO.PUT (ITEM); end PUT;

   procedure GET (ITEM : out STRING) is
   begin TEXT_IO.GET (ITEM); end GET;

   procedure PUT (ITEM : in STRING) is
   begin TEXT_IO.PUT (ITEM); end PUT;

   procedure GET (ITEM : out FLOAT) is
   begin TYPE_FLOAT_IO.GET (ITEM); end GET;

   procedure PUT (ITEM : in FLOAT) is
   begin TYPE_FLOAT_IO.PUT (ITEM); end PUT;

   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out INTEGER) is
   begin TYPE_INTEGER_IO.GET (FILE, ITEM); end GET;

   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in INTEGER) is
   begin TYPE_INTEGER_IO.PUT (FILE, ITEM); end PUT;

   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out CHARACTER) is
   begin TEXT_IO.GET (FILE, ITEM); end GET;

   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in CHARACTER) is
   begin TEXT_IO.PUT (FILE, ITEM); end PUT;
   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out STRING) is
   begin TEXT_IO.GET (FILE, ITEM); end GET;

   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in STRING) is
   begin TEXT_IO.PUT (FILE, ITEM); end PUT;

   procedure GET (FILE : in TEXT_IO.FILE_TYPE; ITEM : out FLOAT) is
   begin TYPE_FLOAT_IO.GET (FILE, ITEM); end GET;

   procedure PUT (FILE : in TEXT_IO.FILE_TYPE; ITEM : in FLOAT) is
   begin TYPE_FLOAT_IO.PUT (FILE, ITEM); end PUT;

   procedure GET_LINE (ITEM : out STRING; LAST : out NATURAL) is
   begin TEXT_IO.GET_LINE (ITEM, LAST); end GET_LINE;

   procedure PUT_LINE (ITEM : in STRING) is
   begin TEXT_IO.PUT_LINE (ITEM); end PUT_LINE;

   procedure NEW_LINE is
   begin TEXT_IO.NEW_LINE; end NEW_LINE;

   procedure NEW_PAGE is
   begin TEXT_IO.NEW_PAGE; end NEW_PAGE;

   function END_OF_FILE return BOOLEAN is
   begin return TEXT_IO.END_OF_FILE; end END_OF_FILE;

   function END_OF_LINE return BOOLEAN is
   begin return TEXT_IO.END_OF_LINE; end END_OF_LINE;
end SIMPLE_ADA_IO;
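The following short driver (a hypothetical example of ours, not part of the original handout; the name ECHO_NUMBER is assumed) shows how a student program would exercise the overloaded GET and PUT operations of SIMPLE_ADA_IO:

```ada
-- Hypothetical usage sketch: exercising the overloaded GET and PUT
-- operations of SIMPLE_ADA_IO from the keyboard and screen.
with SIMPLE_ADA_IO;
procedure ECHO_NUMBER is
   VALUE : INTEGER;
begin -- ECHO_NUMBER
   SIMPLE_ADA_IO.PUT ("Enter a number: ");
   SIMPLE_ADA_IO.GET (VALUE);    -- resolves to GET (ITEM : out INTEGER)
   SIMPLE_ADA_IO.PUT ("You entered ");
   SIMPLE_ADA_IO.PUT (VALUE);    -- resolves to PUT (ITEM : in INTEGER)
   SIMPLE_ADA_IO.NEW_LINE;
end ECHO_NUMBER;
```

The compiler selects the correct GET and PUT at each call from the type of the argument, which is the overloading idea named in the unit's objectives.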
{"Source-Url": "http://www.dtic.mil/dtic/tr/fulltext/u2/a292219.pdf", "len_cl100k_base": 7966, "olmocr-version": "0.1.53", "pdf-total-pages": 26, "total-fallback-pages": 0, "total-input-tokens": 29723, "total-output-tokens": 9629, "length": "2e12", "weborganizer": {"__label__adult": 0.0006623268127441406, "__label__art_design": 0.0008268356323242188, "__label__crime_law": 0.0005545616149902344, "__label__education_jobs": 0.055755615234375, "__label__entertainment": 0.00016486644744873047, "__label__fashion_beauty": 0.0003666877746582031, "__label__finance_business": 0.0006442070007324219, "__label__food_dining": 0.0006852149963378906, "__label__games": 0.0010080337524414062, "__label__hardware": 0.0015344619750976562, "__label__health": 0.000766754150390625, "__label__history": 0.00048613548278808594, "__label__home_hobbies": 0.00031113624572753906, "__label__industrial": 0.0008254051208496094, "__label__literature": 0.0006394386291503906, "__label__politics": 0.0005764961242675781, "__label__religion": 0.000911712646484375, "__label__science_tech": 0.0135040283203125, "__label__social_life": 0.00034499168395996094, "__label__software": 0.00701141357421875, "__label__software_dev": 0.91015625, "__label__sports_fitness": 0.0005922317504882812, "__label__transportation": 0.0010805130004882812, "__label__travel": 0.00036406517028808594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33832, 0.02579]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33832, 0.45014]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33832, 0.81675]], "google_gemma-3-12b-it_contains_pii": [[0, 1699, false], [1699, 1942, null], [1942, 3199, null], [3199, 4932, null], [4932, 6953, null], [6953, 8461, null], [8461, 8472, null], [8472, 11050, null], [11050, 12147, null], [12147, 13688, null], [13688, 14662, null], [14662, 15819, null], [15819, 16386, null], [16386, 17560, null], [17560, 18285, null], [18285, 19569, null], [19569, 20019, null], [20019, 21737, null], [21737, 22367, null], [22367, 24194, null], [24194, 25941, null], [25941, 27330, null], [27330, 28436, null], [28436, 29682, null], [29682, 30602, null], [30602, 33832, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1699, true], [1699, 1942, null], [1942, 3199, null], [3199, 4932, null], [4932, 6953, null], [6953, 8461, null], [8461, 8472, null], [8472, 11050, null], [11050, 12147, null], [12147, 13688, null], [13688, 14662, null], [14662, 15819, null], [15819, 16386, null], [16386, 17560, null], [17560, 18285, null], [18285, 19569, null], [19569, 20019, null], [20019, 21737, null], [21737, 22367, null], [22367, 24194, null], [24194, 25941, null], [25941, 27330, null], [27330, 28436, null], [28436, 29682, null], [29682, 30602, null], [30602, 33832, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33832, null]], 
"google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33832, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33832, null]], "pdf_page_numbers": [[0, 1699, 1], [1699, 1942, 2], [1942, 3199, 3], [3199, 4932, 4], [4932, 6953, 5], [6953, 8461, 6], [8461, 8472, 7], [8472, 11050, 8], [11050, 12147, 9], [12147, 13688, 10], [13688, 14662, 11], [14662, 15819, 12], [15819, 16386, 13], [16386, 17560, 14], [17560, 18285, 15], [18285, 19569, 16], [19569, 20019, 17], [20019, 21737, 18], [21737, 22367, 19], [22367, 24194, 20], [24194, 25941, 21], [25941, 27330, 22], [27330, 28436, 23], [28436, 29682, 24], [29682, 30602, 25], [30602, 33832, 26]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33832, 0.04407]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
c14565788495f4063fdb347ebc854585abf09047
[REMOVED]
{"Source-Url": "https://research-repository.griffith.edu.au/bitstream/handle/10072/49227/81577_1.pdf?sequence=1", "len_cl100k_base": 5027, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 32477, "total-output-tokens": 6828, "length": "2e12", "weborganizer": {"__label__adult": 0.0002818107604980469, "__label__art_design": 0.0003812313079833984, "__label__crime_law": 0.0002963542938232422, "__label__education_jobs": 0.00203704833984375, "__label__entertainment": 5.6624412536621094e-05, "__label__fashion_beauty": 0.0001380443572998047, "__label__finance_business": 0.00045180320739746094, "__label__food_dining": 0.00028514862060546875, "__label__games": 0.0005025863647460938, "__label__hardware": 0.0004820823669433594, "__label__health": 0.00034332275390625, "__label__history": 0.00021660327911376953, "__label__home_hobbies": 8.07642936706543e-05, "__label__industrial": 0.00033664703369140625, "__label__literature": 0.00033354759216308594, "__label__politics": 0.00017631053924560547, "__label__religion": 0.0003216266632080078, "__label__science_tech": 0.0152435302734375, "__label__social_life": 9.614229202270508e-05, "__label__software": 0.007770538330078125, "__label__software_dev": 0.96923828125, "__label__sports_fitness": 0.0002357959747314453, "__label__transportation": 0.0003662109375, "__label__travel": 0.0001544952392578125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27895, 0.0312]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27895, 0.49693]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27895, 0.91111]], "google_gemma-3-12b-it_contains_pii": [[0, 2473, false], [2473, 4812, null], [4812, 6928, null], [6928, 9728, null], [9728, 11400, null], [11400, 13391, null], [13391, 14955, null], [14955, 17093, null], [17093, 19807, null], [19807, 22141, null], [22141, 25074, null], [25074, 27895, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2473, true], [2473, 4812, null], [4812, 6928, null], [6928, 9728, null], [9728, 11400, null], [11400, 13391, null], [13391, 14955, null], [14955, 17093, null], [17093, 19807, null], [19807, 22141, null], [22141, 25074, null], [25074, 27895, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27895, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27895, null]], "pdf_page_numbers": [[0, 2473, 1], [2473, 4812, 2], [4812, 6928, 3], [6928, 9728, 4], [9728, 11400, 5], [11400, 13391, 6], [13391, 14955, 7], [14955, 17093, 8], [17093, 19807, 9], [19807, 22141, 10], [22141, 25074, 11], [25074, 27895, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27895, 
0.0303]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
b9f6eda8eedb72fbf7d7fa91c96dfe06de8ccddc
Classes and Methods for Spatial Data: the sp Package

Edzer Pebesma* Roger S. Bivand† Feb 2005

Contents
1 Introduction
2 Spatial data classes
3 Manipulating spatial objects
  3.1 Standard methods
  3.2 Spatial methods
4 Spatial points
  4.1 Points without attributes
  4.2 Points with attributes
5 Grids
  5.1 Creating grids from topology
  5.2 Creating grids from points
  5.3 Gridded data with attributes
  5.4 Are grids stored as points or as matrix/array?
  5.5 Row and column selection of a region
6 Lines
  6.1 Building line objects from scratch
  6.2 Building line objects with attributes
7 Polygons
  7.1 Building from scratch
  7.2 Polygons with attributes

*Institute for Geoinformatics, University of Muenster, Heisenbergstraße 2, 48149 Münster, Germany; edzer.pebesma@uni-muenster.de
†Economic Geography Section, Department of Economics, Norwegian School of Economics and Business Administration, Breiviksveien 40, N-5045 Bergen, Norway; Roger.Bivand@nhh.no

1 Introduction

The sp package provides classes and methods for dealing with spatial data in R\(^1\). The spatial data structures implemented include points, lines, polygons and grids, each of them with or without attribute data. We have chosen to use the S4 classes and methods style (Chambers, 1998) to allow validation of objects created. Although we mainly aim at using spatial data in the geographical (two-dimensional) domain, the data structures that have a straightforward implementation in higher dimensions (points, grids) do allow this.

From the package home page on CRAN, https://cran.r-project.org/package=sp, links to a graph gallery with R code and the development source tree are found.

This vignette describes the classes, methods and functions provided by sp. Instead of manipulating the class slots (components) directly\(^2\), we provide methods and functions to create or modify the classes from elementary types such as matrices, data.frames or lists, and to convert them back to any of these types. Also, coercion (type casting) from one class to the other is provided where relevant.

Package sp is loaded by

> library(sp)

2 Spatial data classes

The spatial data classes implemented are points, grids, lines, rings and polygons. Package sp provides classes for the spatial-only information (the topology), e.g. SpatialPoints, and extensions for the case where attribute information stored in a data.frame is available for each spatial entity (e.g. for points, the SpatialPointsDataFrame). The available data classes are:

---
\(^1\) The motivation to write this package was born on a pre-conference spatial data workshop during DSC 2003. At that time, the advantage of having multiple R packages for spatial statistics seemed to be hindered by a lack of a uniform interface for handling spatial data. Each package had its own conventions on how spatial data were stored and returned. With this package, and packages supporting the classes provided here, we hope that R with its extension packages becomes more coherent for analyzing different types of spatial data.
\(^2\) which is possible, but not recommended because validity of resulting objects is no longer verified.
<table> <thead> <tr> <th>data type</th> <th>class</th> <th>attributes</th> <th>contains</th> </tr> </thead> <tbody> <tr> <td>points</td> <td>SpatialPoints</td> <td>No</td> <td>Spatial</td> </tr> <tr> <td>points</td> <td>SpatialPointsDataFrame</td> <td>data.frame</td> <td>SpatialPoints</td> </tr> <tr> <td>multipoints</td> <td>SpatialMultiPoints</td> <td>No</td> <td>Spatial</td> </tr> <tr> <td>multipoints</td> <td>SpatialMultiPointsDataFrame</td> <td>data.frame</td> <td>SpatialMultiPoints</td> </tr> <tr> <td>pixels</td> <td>SpatialPixels</td> <td>No</td> <td>SpatialPoints</td> </tr> <tr> <td>pixels</td> <td>SpatialPixelsDataFrame</td> <td>data.frame</td> <td>SpatialPixels</td> </tr> <tr> <td>full grid</td> <td>SpatialGrid</td> <td>No</td> <td>SpatialPixels</td> </tr> <tr> <td>full grid</td> <td>SpatialGridDataFrame</td> <td>data.frame</td> <td>SpatialGrid</td> </tr> <tr> <td>line</td> <td>Line</td> <td>No</td> <td></td> </tr> <tr> <td>lines</td> <td>Lines</td> <td>No</td> <td>Line list</td> </tr> <tr> <td>lines</td> <td>SpatialLines</td> <td>No</td> <td>Spatial, Lines list</td> </tr> <tr> <td>lines</td> <td>SpatialLinesDataFrame</td> <td>data.frame</td> <td>SpatialLines</td> </tr> <tr> <td>polygons</td> <td>Polygon</td> <td>No</td> <td>Line</td> </tr> <tr> <td>polygons</td> <td>Polygons</td> <td>No</td> <td>Polygon list</td> </tr> <tr> <td>polygons</td> <td>SpatialPolygons</td> <td>No</td> <td>Spatial, Polygons list</td> </tr> <tr> <td>polygons</td> <td>SpatialPolygonsDataFrame</td> <td>data.frame</td> <td>SpatialPolygons</td> </tr> </tbody> </table>

The class `Spatial` only holds metadata common to all derived classes (bounding box, coordinate reference system), and is convenient for defining methods that are common to all derived classes. In the following sections we will show how we can create objects of these classes from scratch or from other objects, and which methods and functions are available for them.

3 Manipulating spatial objects

Although entries in spatial objects are in principle accessible through their slot name, e.g. `x@coords` contains the coordinates of an object of class or extending `SpatialPoints`, we strongly encourage users to access the data by using functions and methods, in this case `coordinates(x)`, to retrieve the coordinates.

3.1 Standard methods

Selecting, retrieving or replacing certain attributes in spatial objects with attributes is done using standard methods:

- `[` selects "rows" (features) and/or columns in the data attribute table; e.g. `meuse[1:2, "zinc"]` returns a `SpatialPointsDataFrame` with the first two points and an attribute table with only the variable "zinc".
- `[[` extracts a column from the data attribute table.
- `[[<-` assigns or replaces a column in the data attribute table.

Other methods available are: plot, summary, print, dim and names (operate on the data.frame part), as.data.frame, as.matrix and image (for gridded data), lines (for line data), points (for point data), subset (points and grids), stack (point and grid data.frames), over for spatial joins, spplot, and length (number of features).

3.2 Spatial methods

A number of spatial methods are available for the classes in sp (a brief worked example follows the list):

- dimensions(x) returns the number of spatial dimensions.
- y = spTransform(x, CRS("+proj=longlat +datum=WGS84")) converts or transforms from one coordinate reference system (geographic projection) to another (requires package rgdal to be installed).
- bbox(x) returns a matrix with the coordinates bounding box; the dimensions form rows, min/max form the columns.
- coordinates(x) returns a matrix with the spatial coordinates.
- gridded(x) tells whether x derives from SpatialPixels, or when used in assignment, coerces a SpatialPoints object into a SpatialPixels object.
- spplot(x) plots attributes, possibly in combination with other types of data (points, lines, grids, polygons), and possibly as a conditioning plot for multiple attributes.
- over(x, y) retrieves the index or attributes of y corresponding to (intersecting with) the spatial locations of x.
- spsample(x) samples point coordinates in the continuous space of SpatialPolygons, a gridded area, or along SpatialLines. Subsetting and sample can be used to randomly subsample full spatial entities.
- geometry(x) strips the data.frame and returns the geometry-only object.
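The following short session (our own example, not part of the original text) uses the meuse data.frame shipped with sp to exercise a few of these methods:

```r
# Sketch using the meuse demo data.frame that ships with sp.
library(sp)
data(meuse)                 # a data.frame with x and y coordinate columns
coordinates(meuse) = ~x+y   # promote to SpatialPointsDataFrame
bbox(meuse)                 # bounding box: rows are dimensions, cols min/max
dimensions(meuse)           # 2 spatial dimensions
pts = geometry(meuse)       # strip the attributes: a SpatialPoints object
summary(pts)
```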
4 Spatial points

4.1 Points without attributes

We can generate a set of 10 points on the unit square \([0, 1] \times [0, 1]\) by

```r
> xc = round(runif(10), 2)
> yc = round(runif(10), 2)
> xy = cbind(xc, yc)
> xy
```

This $10 \times 2$ matrix can be converted into a SpatialPoints object by

```r
> xy.sp = SpatialPoints(xy)
> xy.sp
SpatialPoints:
        xc   yc
 [1,] 0.67 0.30
 [2,] 0.96 0.62
 [3,] 0.92 0.91
 [4,] 0.77 0.85
 [5,] 0.72 0.46
 [6,] 0.74 0.39
 [7,] 0.63 0.64
 [8,] 0.19 0.72
 [9,] 0.70 0.20
[10,] 0.37 0.28
Coordinate Reference System (CRS) arguments: NA
```

```r
> plot(xy.sp, pch = 2)
```

The plot is shown in Figure 1.

Figure 1: plot of SpatialPoints object; aspect ratio of x and y axis units is 1.

We can retrieve the coordinates from `xy.sp` by

```r
> xy.cc = coordinates(xy.sp)
> class(xy.cc)
[1] "matrix"
> dim(xy.cc)
[1] 10 2
```

and other methods retrieve the bounding box, the dimensions, select points (not dimensions or columns), coerce to a data.frame, or print a summary:

```r
> bbox(xy.sp)
    min  max
xc 0.19 0.96
yc 0.20 0.91
> dimensions(xy.sp)
[1] 2
> xy.sp[1:2]
SpatialPoints:
       xc   yc
[1,] 0.67 0.30
[2,] 0.96 0.62
Coordinate Reference System (CRS) arguments: NA
> xy.df = as.data.frame(xy.sp)
> class(xy.df)
[1] "data.frame"
> dim(xy.df)
[1] 10 2
> summary(xy.sp)
Object of class SpatialPoints
Coordinates:
    min  max
xc 0.19 0.96
yc 0.20 0.91
Is projected: NA
proj4string : [NA]
Number of points: 10
```

4.2 Points with attributes

One way of creating a SpatialPointsDataFrame object is by building it from a SpatialPoints object and a data.frame containing the attributes:

```r
> df = data.frame(z1 = round(5 + rnorm(10), 2), z2 = 20:29)
> df
     z1 z2
1  3.10 20
2  4.15 21
3  3.68 22
4  4.45 23
5  6.62 24
6  5.57 25
7  3.66 26
8  3.75 27
9  5.19 28
10 5.02 29
> xy.spdf = SpatialPointsDataFrame(xy.sp, df)
> xy.spdf
     coordinates   z1 z2
1    (0.67, 0.3) 3.10 20
2   (0.96, 0.62) 4.15 21
3   (0.92, 0.91) 3.68 22
4   (0.77, 0.85) 4.45 23
5   (0.72, 0.46) 6.62 24
6   (0.74, 0.39) 5.57 25
7   (0.63, 0.64) 3.66 26
8   (0.19, 0.72) 3.75 27
9     (0.7, 0.2) 5.19 28
10  (0.37, 0.28) 5.02 29
> summary(xy.spdf)
Object of class SpatialPointsDataFrame
Coordinates:
    min  max
xc 0.19 0.96
yc 0.20 0.91
Is projected: NA
proj4string : [NA]
Number of points: 10
Data attributes:
       z1              z2
 Min.   :3.100   Min.   :20.00
 1st Qu.:3.697   1st Qu.:22.25
 Median :4.300   Median :24.50
 Mean   :4.519   Mean   :24.50
 3rd Qu.:5.147   3rd Qu.:26.75
 Max.   :6.620   Max.   :29.00
```
```r
> dimensions(xy.spdf)
[1] 2
> xy.spdf[1:2, ] # selects rows 1 and 2
   coordinates   z1 z2
1  (0.67, 0.3) 3.10 20
2 (0.96, 0.62) 4.15 21
> xy.spdf[1] # selects attribute column 1, along with the coordinates
    coordinates   z1
1   (0.67, 0.3) 3.10
2  (0.96, 0.62) 4.15
3  (0.92, 0.91) 3.68
4  (0.77, 0.85) 4.45
5  (0.72, 0.46) 6.62
6  (0.74, 0.39) 5.57
7  (0.63, 0.64) 3.66
8  (0.19, 0.72) 3.75
9    (0.7, 0.2) 5.19
10 (0.37, 0.28) 5.02
> xy.spdf[1:2, "z2"] # selects rows 1 and 2, and attribute "z2"
   coordinates z2
1  (0.67, 0.3) 20
2 (0.96, 0.62) 21
> xy.df = as.data.frame(xy.spdf)
> xy.df[1:2, ]
    z1 z2   xc   yc
1 3.10 20 0.67 0.30
2 4.15 21 0.96 0.62
> xy.cc = coordinates(xy.spdf)
> class(xy.cc)
[1] "matrix"
> dim(xy.cc)
[1] 10 2
```

A note on selection with `[`: the behaviour is as much as possible copied from that of data.frames, but coordinates are always sticky and a `SpatialPointsDataFrame` is always returned; `drop=FALSE` is not allowed. If coordinates should be dropped, use the `as.data.frame` method and select the non-coordinate data, or use `[[` to select a single attribute column (example below).

`SpatialPointsDataFrame` objects can be created directly from data.frames by specifying which columns contain the coordinates:

```r
> df1 = data.frame(xy, df)
> coordinates(df1) = c("xc", "yc")
> df1
    coordinates   z1 z2
1   (0.67, 0.3) 3.10 20
2  (0.96, 0.62) 4.15 21
3  (0.92, 0.91) 3.68 22
4  (0.77, 0.85) 4.45 23
5  (0.72, 0.46) 6.62 24
6  (0.74, 0.39) 5.57 25
7  (0.63, 0.64) 3.66 26
8  (0.19, 0.72) 3.75 27
9    (0.7, 0.2) 5.19 28
10 (0.37, 0.28) 5.02 29
```

or

```r
> df2 = data.frame(xy, df)
> coordinates(df2) = ~xc+yc
> df2[1:2, ]
   coordinates   z1 z2
1  (0.67, 0.3) 3.10 20
2 (0.96, 0.62) 4.15 21
> as.data.frame(df2)[1:2, ]
    xc   yc   z1 z2
1 0.67 0.30 3.10 20
2 0.96 0.62 4.15 21
```

Note that in this form `coordinates`, by setting (specifying) the coordinates, promotes its argument, an object of class data.frame, to an object of class `SpatialPointsDataFrame`. The method `as.data.frame` coerces back to the original data.frame. When used on the right-hand side of an assignment, `coordinates` retrieves the matrix with coordinates:

```r
> coordinates(df2)[1:2, ]
    xc   yc
1 0.67 0.30
2 0.96 0.62
```

Elements (columns) in the data.frame part of an object can be manipulated (retrieved, assigned) directly; the summary below reflects the two assignments (intermediate print output omitted):

```r
> df2["z2"]            # returns a SpatialPointsDataFrame with the single attribute z2
> df2[["z2"]]          # retrieves column z2 as a vector
> df2[["z2"]][10] = 20 # replaces a single element
> df2[["z3"]] = 1:10   # assigns a new column
> summary(df2)
Object of class SpatialPointsDataFrame
Coordinates:
    min  max
xc 0.19 0.96
yc 0.20 0.91
Is projected: NA
proj4string : [NA]
Number of points: 10
Data attributes:
       z1              z2              z3       
 Min.   :3.100   Min.   :20.00   Min.   : 1.00  
 1st Qu.:3.697   1st Qu.:21.25   1st Qu.: 3.25  
 Median :4.300   Median :23.50   Median : 5.50  
 Mean   :4.519   Mean   :23.60   Mean   : 5.50  
 3rd Qu.:5.147   3rd Qu.:25.75   3rd Qu.: 7.75  
 Max.   :6.620   Max.   :28.00   Max.   :10.00  
```

Plotting attribute data can be done by using either `spplot` to colour symbols, or `bubble` which uses symbol size:

```r
> bubble(df2, "z1", key.space = "bottom")
> spplot(df2, "z1", key.space = "bottom")
```

The resulting plots are shown in figure 2.

Figure 2: plot of SpatialPointsDataFrame object, using symbol size (bubble, top) or colour (spplot, bottom)
## 5 Grids

Package `sp` has two classes for grid topology: `SpatialPixels` and `SpatialGrid`. The pixels form stores the coordinates and is for partial grids or unordered points; the `SpatialGrid` form does not store coordinates but holds full grids (i.e., `SpatialGridDataFrame` holds attribute values for each grid cell). Objects can be coerced from one representation to the other.

### 5.1 Creating grids from topology

When we know the offset, the cell sizes and the dimensions of a grid, we can specify this by using the function `GridTopology`:

```r
> gt = GridTopology(cellcentre.offset = c(1, 1, 2), cellsize = c(1, 1, 1), cells.dim = c(3, 4, 6))
> grd = SpatialGrid(gt)
> summary(grd)
Object of class SpatialGrid
Coordinates:
     min max
[1,] 0.5 3.5
[2,] 0.5 4.5
[3,] 1.5 7.5
Is projected: NA
proj4string : [NA]
Grid attributes:
  cellcentre.offset cellsize cells.dim
1                 1        1         3
2                 1        1         4
3                 2        1         6
```

The grid parameters can be retrieved by the function

```r
> gridparameters(grd)
  cellcentre.offset cellsize cells.dim
1                 1        1         3
2                 1        1         4
3                 2        1         6
```

### 5.2 Creating grids from points

In the following example a three-dimensional grid is constructed from a set of point coordinates:

```r
> pts = expand.grid(x = 1:3, y = 1:4, z = 2:7)
> grd.pts = SpatialPixels(SpatialPoints(pts))
> summary(grd.pts)
Object of class SpatialPixels
Coordinates:
  min max
x 0.5 3.5
y 0.5 4.5
z 1.5 7.5
Is projected: NA
proj4string : [NA]
Number of points: 72
Grid attributes:
  cellcentre.offset cellsize cells.dim
x                 1        1         3
y                 1        1         4
z                 2        1         6
> grd = as(grd.pts, "SpatialGrid")
> summary(grd)
Object of class SpatialGrid
Coordinates:
  min max
x 0.5 3.5
y 0.5 4.5
z 1.5 7.5
Is projected: NA
proj4string : [NA]
Grid attributes:
  cellcentre.offset cellsize cells.dim
x                 1        1         3
y                 1        1         4
z                 2        1         6
```

Note that when passed a points argument, `SpatialPixels` accepts a tolerance (default `10 * .Machine$double.eps`) to specify how close the points have to be to lying exactly on a grid. For very large coordinates, this value may have to be increased. A warning is issued if full rows and/or columns are missing.
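Bridging the two subsections above: once a topology is known, per-cell attribute values can be attached in one step with `SpatialGridDataFrame`. A minimal sketch (the cell values are made up for illustration):

```r
> gt2 = GridTopology(cellcentre.offset = c(1, 1), cellsize = c(1, 1), cells.dim = c(3, 4))
> sgdf = SpatialGridDataFrame(SpatialGrid(gt2), data.frame(z = 1:12))  # 3 x 4 = 12 cells
> summary(sgdf)
```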
### 5.3 Gridded data with attributes

Spatial, gridded data are data with coordinates on a regular lattice. To form such a grid we can go from coordinates:

```r
> attr = expand.grid(xc = 1:3, yc = 1:3)
> grd.attr = data.frame(attr, z1 = 1:9, z2 = 9:1)
> coordinates(grd.attr) = ~xc+yc
> gridded(grd.attr)
[1] FALSE
> gridded(grd.attr) = TRUE
> gridded(grd.attr)
[1] TRUE
> summary(grd.attr)
Object of class SpatialPixelsDataFrame
Coordinates:
   min max
xc 0.5 3.5
yc 0.5 3.5
Is projected: NA
proj4string : [NA]
Number of points: 9
Grid attributes:
   cellcentre.offset cellsize cells.dim
xc                 1        1         3
yc                 1        1         3
Data attributes:
       z1          z2   
 Min.   :1   Min.   :1  
 1st Qu.:3   1st Qu.:3  
 Median :5   Median :5  
 Mean   :5   Mean   :5  
 3rd Qu.:7   3rd Qu.:7  
 Max.   :9   Max.   :9  
```

Package `raster` provides dedicated methods to deal with raster data, and can deal with grids that are too large to be stored in memory.

### 5.4 Are grids stored as points or as matrix/array?

The form in which gridded data come depends on whether the grid was created from a set of points or from a matrix or an external grid format (e.g. read through `rgdal`). Retrieving the form, or converting to another, can be done by `as(x, "class")` or by using the function `fullgrid`:

```r
> fullgrid(grd)
[1] TRUE
> fullgrid(grd.pts)
[1] FALSE
> fullgrid(grd.attr)
[1] FALSE
> fullgrid(grd.pts) = TRUE
> fullgrid(grd.attr) = TRUE
> fullgrid(grd.pts)
[1] TRUE
> fullgrid(grd.attr)
[1] TRUE
```

The advantage of having grids in cell form is that when a large part of the grid contains missing values, these cells are not stored. In addition, no ordering of grid cells is required. For plotting a grid with `levelplot`, this form is required, and `spplot` (for grids a front-end to `levelplot`) will convert grids that are not in this form. In contrast, `image` requires a slightly altered version of the full grid form. A disadvantage of the cell form is that the coordinates of each point have to be stored, which may be prohibitive for large grids. Grids in cell form do have an index to allow for fast transformation to the full grid form.

Besides `print`, `summary` and `plot`, objects of class `SpatialGridDataFrame` have methods for

- `[` select rows (points) and/or columns (variables)
- `[[` extract a column from the attribute table
- `[[<-` assign or replace a column in the attribute table
- `coordinates` retrieve the coordinates of grid cells
- `as.matrix`, `as.array` retrieve the data as a matrix or array. The first index (rows) is the x-column, the second index (columns) the y-coordinate, different attributes the third index. Row index 1 is the smallest x-coordinate; column index 1 is the largest y-coordinate (top-to-bottom).
- `as` coercion methods for `data.frame`, `SpatialPointsDataFrame`
- `image` plot an image of the grid

Finally, `spplot`, a front-end to `levelplot`, allows the plotting of a single grid plot or a lattice of grid plots.
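A minimal sketch of a few of the methods just listed, applied to `grd.attr` (now in full-grid form); the `attr = 1` argument of `image` is assumed here to select the first attribute:

```r
> grd.attr[["z1"]]              # extract column z1 as a vector
> coordinates(grd.attr)[1:3, ]  # coordinates of the first three grid cells
> image(grd.attr, attr = 1)     # image plot of the first attribute
```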
### 5.5 Row and column selection of a region

Rows/columns selection can be done when gridded data are in the full grid form (as `SpatialGridDataFrame`). In this form, rows and/or columns can also be de-selected (in which case a warning is issued):

```r
> fullgrid(grd.attr) = FALSE
> grd.attr[1:5, "z1"]
Object of class SpatialPixelsDataFrame
Object of class SpatialPixels
Grid topology:
   cellcentre.offset cellsize cells.dim
xc                 1        1         3
yc                 1        1         3
SpatialPoints:
     xc yc
[1,]  1  3
[2,]  2  3
[3,]  3  3
Data summary:
       z1     
 Min.   :4.0  
 1st Qu.:5.0  
 Median :7.0  
 Mean   :6.6  
 3rd Qu.:8.0  
 Max.   :9.0  
> fullgrid(grd.attr) = TRUE
> grd.attr[1:2, -2, c("z2", "z1")]
Object of class SpatialGridDataFrame
Object of class SpatialGrid
Grid topology:
   cellcentre.offset cellsize cells.dim
xc                 1        2         2
yc                 2        1         2
SpatialPoints:
     xc yc
[1,]  1  3
[2,]  3  3
[3,]  1  2
[4,]  3  2
Data summary:
       z2            z1      
 Min.   :1.0   Min.   :4.0  
 1st Qu.:2.5   1st Qu.:5.5  
 Median :3.5   Median :6.5  
 Mean   :3.5   Mean   :6.5  
 3rd Qu.:4.5   3rd Qu.:7.5  
 Max.   :6.0   Max.   :9.0  
```

## 6 Lines

### 6.1 Building line objects from scratch

In many instances, line coordinates will be retrieved from external sources. The following example shows how to build an object of class `SpatialLines` from scratch. Note that the `Lines` objects have to be given character ID values, and that these values must be unique for `Lines` objects combined in a `SpatialLines` object.

```r
> l1 = cbind(c(1, 2, 3), c(3, 2, 2))
> l1a = cbind(l1[, 1] + 0.05, l1[, 2] + 0.05)
> l2 = cbind(c(1, 2, 3), c(1, 1.5, 1))
> Sl1 = Line(l1)
> Sl1a = Line(l1a)
> Sl2 = Line(l2)
> S1 = Lines(list(Sl1, Sl1a), ID = "a")
> S2 = Lines(list(Sl2), ID = "b")
> Sl = SpatialLines(list(S1, S2))
> summary(Sl)
Object of class SpatialLines
Coordinates:
  min  max
x   1 3.05
y   1 3.05
Is projected: NA
proj4string : [NA]
> plot(Sl, col = c("red", "blue"))
```

![plot of the SpatialLines object](image)

### 6.2 Building line objects with attributes

The class `SpatialLinesDataFrame` is designed for holding lines data that have an attribute table (data.frame) attached to them:

```r
> df = data.frame(z = c(1, 2), row.names = sapply(slot(Sl, "lines"), function(x) slot(x, "ID")))
> Sldf = SpatialLinesDataFrame(Sl, data = df)
> summary(Sldf)
Object of class SpatialLinesDataFrame
Coordinates:
  min  max
x   1 3.05
y   1 3.05
Is projected: NA
proj4string : [NA]
Data attributes:
       z        
 Min.   :1.00  
 1st Qu.:1.25  
 Median :1.50  
 Mean   :1.50  
 3rd Qu.:1.75  
 Max.   :2.00  
```

Not many useful methods are available for it yet. The `plot` method only plots the lines, ignoring attribute table values. Suggestions for useful methods are welcome.
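One method that is available for line geometries is `SpatialLinesLengths`; a minimal sketch, applied to the object built above:

```r
> SpatialLinesLengths(Sl)  # total length of each Lines object, in coordinate units
```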
## 7 Polygons

### 7.1 Building from scratch

The following example shows how a set of polygons is built from scratch. Note that `Sr4` has the opposite direction (anti-clockwise) to the other three (clockwise); it is meant to represent a hole in the `Sr3` polygon. The default value for the hole colour `pbg` is "transparent", which will not show, but which often does not matter, because another polygon fills the hole; here it is set to "white". Note that the `Polygons` objects have to be given character ID values, and that these values must be unique for `Polygons` objects combined in a `SpatialPolygons` object.

```r
> Sr1 = Polygon(cbind(c(2, 4, 4, 1, 2), c(2, 3, 5, 4, 2)))
> Sr2 = Polygon(cbind(c(5, 4, 2, 5), c(2, 3, 2, 2)))
> Sr3 = Polygon(cbind(c(4, 4, 5, 10, 4), c(5, 3, 2, 5, 5)))
> Sr4 = Polygon(cbind(c(5, 6, 6, 5, 5), c(4, 4, 3, 3, 4)), hole = TRUE)
> Srs1 = Polygons(list(Sr1), "s1")
> Srs2 = Polygons(list(Sr2), "s2")
> Srs3 = Polygons(list(Sr3, Sr4), "s3/4")
> SpP = SpatialPolygons(list(Srs1, Srs2, Srs3), 1:3)
> plot(SpP, col = 1:3, pbg = "white")
> # plot(SpP)
```

### 7.2 Polygons with attributes

Polygons with attributes, objects of class `SpatialPolygonsDataFrame`, are built from the `SpatialPolygons` object (topology) and the attributes (data.frame). The row.names of the attributes data.frame are matched with the ID slots of the `SpatialPolygons` object, and the rows of the data.frame will be re-ordered if necessary:

```r
> attr = data.frame(a = 1:3, b = 3:1, row.names = c("s3/4", "s2", "s1"))
> SrDf = SpatialPolygonsDataFrame(SpP, attr)
> as(SrDf, "data.frame")
     a b
s1   3 1
s2   2 2
s3/4 1 3
```

or, as another way to create the `SpatialPolygonsDataFrame` object:

```r
> SrDf = attr
> polygons(SrDf) = SpP
```

## 8 Importing and exporting data

Data import and export from external data sources and file formats is handled in the `rgdal` package in the first instance, using the available OGR/GDAL drivers for vector and raster data. This keeps the range of drivers up to date, and secures code maintenance through working closely with the open source geospatial community. Mac OS X users unable or unwilling to install `rgdal` from source after installing its external dependencies will find some functions in the `maptools` package to import and export a limited range of formats.
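As a concrete, hypothetical illustration of the `rgdal` route (the layer name `myshape` is a placeholder, not a file shipped with `sp`):

```r
> library(rgdal)
> x = readOGR(dsn = ".", layer = "myshape")  # reads myshape.shp from the working directory
> writeOGR(x, dsn = ".", layer = "myshape_copy", driver = "ESRI Shapefile")
```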
{"Source-Url": "https://cran.r-project.org/web/packages/sp/vignettes/intro_sp.pdf", "len_cl100k_base": 8088, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 41096, "total-output-tokens": 8981, "length": "2e12", "weborganizer": {"__label__adult": 0.0002732276916503906, "__label__art_design": 0.0010280609130859375, "__label__crime_law": 0.0003185272216796875, "__label__education_jobs": 0.0009360313415527344, "__label__entertainment": 0.00010478496551513672, "__label__fashion_beauty": 0.0001424551010131836, "__label__finance_business": 0.00022470951080322263, "__label__food_dining": 0.0003273487091064453, "__label__games": 0.0006237030029296875, "__label__hardware": 0.0010700225830078125, "__label__health": 0.0003104209899902344, "__label__history": 0.000743865966796875, "__label__home_hobbies": 0.00017845630645751953, "__label__industrial": 0.0005655288696289062, "__label__literature": 0.0001962184906005859, "__label__politics": 0.0003294944763183594, "__label__religion": 0.000392913818359375, "__label__science_tech": 0.0880126953125, "__label__social_life": 0.0001354217529296875, "__label__software": 0.054443359375, "__label__software_dev": 0.8486328125, "__label__sports_fitness": 0.0002777576446533203, "__label__transportation": 0.0005054473876953125, "__label__travel": 0.0003969669342041016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23577, 0.13522]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23577, 0.21679]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23577, 0.71372]], "google_gemma-3-12b-it_contains_pii": [[0, 1278, false], [1278, 3513, null], [3513, 5561, null], [5561, 7321, null], [7321, 8079, null], [8079, 8159, null], [8159, 8870, null], [8870, 9682, null], [9682, 10829, null], [10829, 11935, null], [11935, 13386, null], [13386, 13494, null], [13494, 14755, null], [14755, 15855, null], [15855, 17302, null], [17302, 19056, null], [19056, 19856, null], [19856, 20453, null], [20453, 22075, null], [22075, 22781, null], [22781, 23577, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1278, true], [1278, 3513, null], [3513, 5561, null], [5561, 7321, null], [7321, 8079, null], [8079, 8159, null], [8159, 8870, null], [8870, 9682, null], [9682, 10829, null], [10829, 11935, null], [11935, 13386, null], [13386, 13494, null], [13494, 14755, null], [14755, 15855, null], [15855, 17302, null], [17302, 19056, null], [19056, 19856, null], [19856, 20453, null], [20453, 22075, null], [22075, 22781, null], [22781, 23577, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23577, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23577, 
null]], "pdf_page_numbers": [[0, 1278, 1], [1278, 3513, 2], [3513, 5561, 3], [5561, 7321, 4], [7321, 8079, 5], [8079, 8159, 6], [8159, 8870, 7], [8870, 9682, 8], [9682, 10829, 9], [10829, 11935, 10], [11935, 13386, 11], [13386, 13494, 12], [13494, 14755, 13], [14755, 15855, 14], [15855, 17302, 15], [17302, 19056, 16], [19056, 19856, 17], [19856, 20453, 18], [20453, 22075, 19], [22075, 22781, 20], [22781, 23577, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23577, 0.06526]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
6374dc28b5521ac512657997858f703270ca27c3
Design Of Web-Based School Library Information System Using YII Framework In SMA Karya Pembangunan Ciwidey

Johni S Pasaribu*¹, Po Abas Sunarya*²

¹Politeknik Piksi Ganesha Informatics Engineering, ²Program Studi Magister Teknik Informatika Fakultas Sains dan Teknologi Universitas Raharja

E-mail: *¹johni_0106@yahoo.com, ²abas@raharja.info

Abstract

The rapid development of the computing world means that every individual or group of people uses information and communication technology in their activities. The same holds in the world of education, including libraries, which use information and communication technology to manage the process of borrowing and returning library books. The benefits of building a web-based library information system include saving time (no manual recording is needed), making it easier to find book data or library member data, and simplifying the production of book loan reports and other reports. The research method used is Research and Development (R&D). The software is built with a waterfall model consisting of: (1) needs analysis, (2) design, (3) implementation, and (4) testing. UML visual modelling is used, which is the standard modelling language for object-oriented software development. The expected result of this study is a web-based library information system using the Yii Framework with the MVC (Model View Controller) method at SMA Karya Pembangunan Ciwidey that can solve the existing problems. The problem addressed by this research is how to build a library information system at SMA Karya Pembangunan Ciwidey that presents accurate and efficient information. The purpose of this research is to produce such a library information system.

Keywords — School Library, Information System, Web-Based.

1. INTRODUCTION

With the rapid development of information and communication technology in this era of globalization, library management is affected: a school that applies information and communication technology can improve services to library users. With information and communication technology in the library, the library is expected to provide maximum service to its users and to minimize errors in data processing. The library was established to meet the information needs of the community, so it is a place that provides facilities and sources of information and becomes a learning centre [1]. In practice, libraries as managers of information and knowledge must be able to use information technology to the fullest. There are several reasons why libraries should utilize information and communication technology: (1) libraries must be able to provide services in quantity, (2) libraries must be able to support the shared use of collections, (3) libraries must make effective use of human resources, (4) libraries must use time efficiently, and (5) libraries must be able to manage the diversity of information they hold [2]. In an effort to improve service performance, the use of information systems is an appropriate alternative or solution. Reasons for using information systems include: 1) greater processing speed, 2) better accuracy and consistency, 3) faster access to information, 4) reduced costs, and 5) better security [3]. The school library functions as a place to read, but libraries are now less and less popular with students as places to read and to look for book literature.
This is due to the presence of other information media, such as the internet, which make it easier to search for various kinds of information and reading sources. The impact of internet media is also felt by the SMA Karya Pembangunan Ciwidey library, where students are less interested in reading and borrowing books unless a teacher sets an assignment that requires them to borrow books, even though the library has a large collection. Another problem is that the SMA Karya Pembangunan Ciwidey library, in serving borrowers and readers, experiences problems in managing the library's administrative data, namely errors in recording serial numbers in the master book and errors in managing borrowing and return data. In addition, students have difficulty finding books to borrow: they must search the shelves carefully themselves, and there is no information on whether a book is available or currently on loan to someone else.

Based on the background described above, the causes of the problems can be identified:

1. Recording and managing data takes a long time and is not effective because it still uses simple media.
2. It is difficult to find member data, book data, and transactions because the data is poorly organized.
3. Producing a report takes a long time because the user must do everything manually, from data collection to processing.
4. Data security is poorly guaranteed because records are physical, so data can be lost or manipulated.

The objectives to be achieved are as follows:

1. To facilitate the recording and management of data so as to improve time efficiency.
2. To facilitate the search for book, member, and library transaction data.
3. To assist users in producing reports.

An existing similar study is the research conducted by Minarni and Fazril Hadi Saputra entitled Web-Based Library Information System at the Padang Health Polytechnic [4]. The system designed there covers book borrowing, book returns, member data input, circulation, and book search. The research aims to let the Padang Health Polytechnic library serve its students and visitors, whether they come to the library directly or access it via the internet, with fast and accurate information. Similar research was also carried out by Dani Eko Hendrianto, entitled Making a Website-Based Library Information System at SMPN 1 Donorojo, Pacitan Regency [5]. That research aimed to produce a library information system that improves the services and performance of library staff in managing library administration data and speeds up student loan and book return transactions. From the research conducted by Dani Eko Hendrianto, it can be concluded that it produced a web-based information system that can be used by the library staff of SMPN 1 Donorojo to manage and input book data, to speed up the process of searching and compiling data on book collections, magazines, and research journals, to manage member data and loan and repayment data, and to speed up the borrowing and return transactions conducted by students of SMPN 1 Donorojo, Pacitan Regency. Another relevant study is the Web-Based School Library Information System with the CodeIgniter Framework and PostgreSQL at SMA Negeri 1 Ngaglik by Punky Indra Permana, Yogyakarta State University [6].
The purpose of that research was to create a school library information system and determine its level of eligibility in terms of functionality, security, usability, maintainability, portability, and efficiency. From its results it can be concluded that using the CodeIgniter framework produces good-quality software in terms of functionality, security, usability, maintainability, portability, and efficiency. What these relevant studies have in common with this research is the software testing methods used when building information systems. The first study focuses on creating and testing an information system for polytechnic library services. The second relevant study focuses on creating and testing an information system for a junior high school library. The third study focuses on creating and testing an information system for high school library services. The differences lie in the PHP framework and the quality standards used. This study uses the Yii Framework. Yii is a high-performance, component-based PHP framework for large-scale web application development. The resulting software can be used in the SMA Karya Pembangunan Ciwidey library, so that the new system facilitates the recording and management of data and improves time efficiency. With this information system, library staff and managers can process data and report to the school leadership easily and more accurately. The authors also try to create an information system that makes it easier for students to find book data and book borrowing information, and to submit library membership registrations over the internet.

2. RESEARCH METHODS

The research method used in this study is Research and Development (R&D), a method used to build products and to test their effectiveness. The software development approach used in this study is the classic life cycle model, or waterfall model. The classic life cycle model uses a systematic and sequential approach through analysis, design, coding, testing, and maintenance. This study uses a waterfall development model combined with the prototyping paradigm, to help the researchers define user needs easily and anticipate changing needs in the software development process. Prototyping is used here as a technique implemented in the context of another process model, although the prototyping paradigm can also be used as a stand-alone process model. The prototyping paradigm can help developers and users better understand what needs to be built when the requirements are still general. The phases in the waterfall model, following Sommerville, are as follows:

1. Requirements analysis and definition. The complete requirements are gathered, then analysed, and the needs that must be met by the program to be built are defined. This phase must be done in full in order to produce a complete design. The needs analysis conducted by the researchers took the form of field studies (observations), interviews, and searches of relevant research (literature). The results obtained are user requirements, i.e. data related to user wishes, and software requirements.

2. System and software design. The design is done after the complete needs analysis has been collected.
The system and software design stage is actually a multi-step process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation, and procedural detail (algorithms). This stage produces the software design.

3. Implementation and unit testing. The completed system and software design is then translated into code using a specific programming language that can be understood by computers. The program built is tested directly to find the errors that occur and to ensure that the input given to the system produces the actual output desired by the user.

4. Integration and system testing. The program units are then brought together and tested as a whole (system testing). The system testing that the researchers use for the software covers usability, functionality, and correctness.

5. Operation and maintenance. The program is operated in its environment and maintained, with adjustments or changes made to adapt to the actual situation.

Figure 1. Waterfall model (Sommerville)

A software framework is a basic design that can be used and developed for an application system or subsystem. A software framework provides a collection of basic code that can help in the process of developing and combining the different components of a piece of software [12]. A programming framework can simplify the process of writing program functions by reducing repetitive code [13]. Because the purpose of a framework is to help with common activities, many frameworks provide libraries for database access, session data management, and so on [14]. A web programming framework based on PHP (PHP: Hypertext Preprocessor) simplifies the application development process, helping to structure the functions of a system faster because they do not have to be written from scratch. This can also improve the quality and stability of the code structure [15]. A framework significantly reduces the time, resources, and effort needed to develop and manage web applications. In addition, a framework is an open architecture based on common standards [16]. PHP is a server-side programming language specifically designed for web-based application development. The PHP programming language has many advantages, among others in performance, scalability, portability, and open source, and especially in connecting to and manipulating a database [17]. Database management is done with the Structured Query Language (SQL). Some studies state that traditional database query languages are not easy for inexperienced users of database technology, because the interaction is based on textual languages such as SQL [18]. Among system design patterns, one of the best known is the Model-View-Controller approach [19], which can ease the process of developing and maintaining an application, because [20]: (1) the display (output) of the application can change drastically without changing the data structure and business logic, and (2) the application can easily be used with different interfaces, for example multiple languages or different user access rights. The Model-View-Controller design pattern approach is an easy way to develop interactive software system architectures [21]. Also known as the Presentation/Abstraction/Control (PAC) design pattern, its main idea is to separate the interface from the data below it [22].
The Model-View-Controller pattern has proven to be effective for creating and organizing modular applications [23]. Yii is a high-performance, component-based PHP framework for large-scale web application development. It provides maximum reusability in web programming and can accelerate the development process significantly [24]. The Yii Framework follows the object-oriented view that a problem is no longer solved in terms of procedures, but in terms of the objects involved in solving it [25]. The Yii Framework implements the model-view-controller (MVC) design pattern that is widely adopted in web programming. MVC aims to separate business logic from user interface considerations, so that developers can more easily change each part without affecting the others [26]. In MVC, the model describes the information (data) and business rules; the view contains user interface elements such as text and form inputs; and the controller manages the communication between models and views. In addition to the MVC implementation, Yii also introduces a front controller, called Application, which encapsulates the execution context for processing a request. The Application collects information about the user's request and then dispatches it to the appropriate controller for further handling. The following diagram shows the static structure of a Yii application according to the official Yii Framework website.

3. RESULTS AND DISCUSSION

The development of the library information system begins with an analysis of the system requirements. This analysis is needed so that the development process is on target and the result functions properly as a library information system for SMA Karya Pembangunan Ciwidey. The analysis and design of the web-based library information system at SMA Karya Pembangunan comprises use case diagrams, class diagrams, and activity diagrams. The implementation of the system that has been built is also presented.

3.1. Analysis and Design

The minimum requirements, according to the results of observations and interviews, are listed in Table 1 [27]. The functional requirements (Table 1) and non-functional requirements (Table 2) of the web-based library information system software at SMA Karya Pembangunan are as follows:

![Figure 2. The static structure of the Yii application (Sharive, 2013).](image)

### Table 1. Functional Needs

<table> <thead> <tr> <th>No</th> <th>Functional Needs</th> <th>What the actor does</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>The system must log in before it can be accessed</td> <td>Library staff enter the system</td> </tr> <tr> <td>2</td> <td>The system must be able to manage books</td> <td>Library staff manage book data</td> </tr> <tr> <td>3</td> <td>The system must be able to manage members</td> <td>Library staff manage member data</td> </tr> <tr> <td>4</td> <td>The system must be able to manage transactions</td> <td>Library staff manage transaction data</td> </tr> <tr> <td>5</td> <td>The system must be able to provide reports</td> <td>Library staff make a report</td> </tr> </tbody> </table>

### Table 2.
Non-Functional Needs

<table> <thead> <tr> <th>No</th> <th>Non-Functional Needs</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Operational</td> <td>Windows 7 32/64-bit operating system; computer specifications: processor (Pentium 4 / Dual-core 1.6 GHz), RAM: 512 MB, VGA: 256 MB, monitor: 14 inch, keyboard: standard 101/102-key USB, mouse: optical USB; web browsers: Google Chrome, Internet Explorer &amp; Mozilla Firefox; web server: Apache; database server: MySQL; Yii Framework version 1.1; Sublime Text</td> </tr> <tr> <td>2</td> <td>Security</td> <td>The application system and database are protected by passwords; the reading room and bag storage are equipped with CCTV</td> </tr> <tr> <td>3</td> <td>Information</td> <td>Used to display the procedure for registering new members; used to display information when a user enters an incorrect password; provides a complete library member identity report</td> </tr> </tbody> </table>

3.2. Use Case Diagram

The use case diagram is used to find out what functions exist in a system and who has the right to use those functions. Use case diagrams consist of actors and the interactions they perform with a system. In software development, use case diagrams are used to explain the relationships between actors and the system's inputs and outputs. The use case diagram depicts the actors and their relationships to the system's functions. In the information system developed here there is one actor, the library staff. The library staff's functions include managing book data, managing member data, managing transaction data, and producing reports. The use case diagram for the web-based library information system at SMA Karya Pembangunan follows.

![Use Case Diagram](image)

**Figure 3. Use Case Diagram**

3.3. Class Diagram

A class diagram describes the structure of, and explains, the classes, packages, and objects of a system, along with their relationships to each other, such as associations and inheritance. The class diagram of the library information system can be seen in Figure 4:

![Class Diagram](image)

**Figure 4. Class Diagram**

3.4. Activity Diagram

An activity diagram illustrates the workflow or activities of a system or business process. Based on the use case diagram, the activity diagrams are as follows:

**Figure 5.** Activity Diagram: Manage Books

**Figure 6.** Activity Diagram: Manage Members

3.5. Implementation

The design of the application makes the system easy to use. The Login Screen Dialog (Figure 9) lets users (library staff) authenticate themselves: enter the username and the password, then click LOGIN to go to the start page. After logging in, the user enters the Home Screen Dialog, the start page from which the Book Master Page, Member Master Page, Transaction Master Page, and Report Master Page can be reached. The Book Master Screen Dialog displays the list of available books. Several functions are provided: Create New Book Data (add a book), Book Details (see details about a book), Edit Book (change details about a book), and Delete Book (delete the book's data from the information system). Figures 11 through 15 show the Book Master Screen Dialog and its functions.

Figure 13. The Book Details Screen Dialog

Figure 14. The Edit Book Screen Dialog

Figure 15.
The Delete Book Screen Dialog

The Member Master Screen Dialog works like the Book Master Screen Dialog; the functions provided are: Create New Member (add a member), Member Details (see details about a member), Edit Member (change details about a member), and Delete Member (delete the member's data from the information system).

Figure 16. The Member Master Screen Dialog

The Transaction Screen Dialog contains the Borrowing Transaction Screen Dialog and the Return Transaction Screen Dialog. The Borrowing Transaction Screen Dialog records book loan data: the title of the book, the borrower, the loan date, the return date, lateness (if there is a penalty), and the status of the book (returned or renewed). Likewise, the Return Transaction Screen Dialog records a book return transaction: book title, borrower, loan date, return date, and status (returned or cancelled). The Report & Print Screen Dialog reports book and member data and can print the reports. It contains a Book Report Screen Dialog and a Transaction Report Screen Dialog. The Transaction Report Screen Dialog contains a report on borrowing or returning books (book title, borrower's name, loan date, return date, and status). This transaction report can be printed, and either the Loan Transaction Report or the Return Transaction Report can be selected specifically.

4. CONCLUSION

Based on the results of the research and the discussion, the authors draw the following conclusions:

1. With this software, SMA Karya Pembangunan Ciwidey can record and manage data more easily, making time use more efficient.
2. The software also makes it easier to search book data, member data, and transaction data (loans and returns) in the library.
3. Reports such as book reports, member reports, and transaction reports (borrowing and returning books) can be produced easily, and the reports can be printed.
4. Data can be stored and maintained securely because access to it is protected in stages.

5. SUGGESTIONS

Based on the research that has been done, the following suggestions can be given:

1. The software needs to be developed further so that library member data is integrated with the school's student data, making it possible to know which students are still active and which have graduated.
2. Features should be added to obtain a more complete application and provide solutions to every problem encountered; one example for future development is a barcode feature to read a book's identity.
3. Access rights for members of the SMA Karya Pembangunan Ciwidey library should also be added.
{"Source-Url": "http://ejournal.raharja.ac.id/index.php/ccit/article/download/1001/863", "len_cl100k_base": 4615, "olmocr-version": "0.1.48", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 33310, "total-output-tokens": 6507, "length": "2e12", "weborganizer": {"__label__adult": 0.0003974437713623047, "__label__art_design": 0.0007090568542480469, "__label__crime_law": 0.0004527568817138672, "__label__education_jobs": 0.0149383544921875, "__label__entertainment": 7.730722427368164e-05, "__label__fashion_beauty": 0.00019478797912597656, "__label__finance_business": 0.0003523826599121094, "__label__food_dining": 0.0004274845123291016, "__label__games": 0.0004892349243164062, "__label__hardware": 0.000827789306640625, "__label__health": 0.0004622936248779297, "__label__history": 0.0003304481506347656, "__label__home_hobbies": 0.0001341104507446289, "__label__industrial": 0.0003714561462402344, "__label__literature": 0.00043845176696777344, "__label__politics": 0.0002388954162597656, "__label__religion": 0.0004744529724121094, "__label__science_tech": 0.01024627685546875, "__label__social_life": 0.0001804828643798828, "__label__software": 0.00998687744140625, "__label__software_dev": 0.95703125, "__label__sports_fitness": 0.00022721290588378904, "__label__transportation": 0.0005478858947753906, "__label__travel": 0.00023496150970458984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27397, 0.03047]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27397, 0.58007]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27397, 0.87459]], "google_gemma-3-12b-it_contains_pii": [[0, 3970, false], [3970, 8561, null], [8561, 11425, null], [11425, 15495, null], [15495, 17511, null], [17511, 19016, null], [19016, 19329, null], [19329, 19641, null], [19641, 20222, null], [20222, 21076, null], [21076, 21929, null], [21929, 24645, null], [24645, 27397, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3970, true], [3970, 8561, null], [8561, 11425, null], [11425, 15495, null], [15495, 17511, null], [17511, 19016, null], [19016, 19329, null], [19329, 19641, null], [19641, 20222, null], [20222, 21076, null], [21076, 21929, null], [21929, 24645, null], [24645, 27397, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27397, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27397, null]], "pdf_page_numbers": [[0, 3970, 1], [3970, 8561, 2], [8561, 11425, 3], [11425, 15495, 4], [15495, 17511, 5], [17511, 19016, 6], [19016, 19329, 7], [19329, 19641, 8], [19641, 20222, 9], [20222, 21076, 10], [21076, 21929, 11], [21929, 24645, 12], [24645, 27397, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27397, 0.07752]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
3896e53afb817d41c3b1f5c48b7ab280014bf3c8
Graphical Tool to integrate the Prometheus AEOlus methodology and Jason Platform

Rafhael R. Cunha\textsuperscript{1}, Diana F. Adamatti\textsuperscript{2}, Cléo Z. Billa\textsuperscript{2}

\textsuperscript{1}Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Sul (IFRS) – Campus Vacaria Rua Eng. João Viterbo de Oliveira, 3061 – Vacaria – RS – Brasil

\textsuperscript{2}Centro de Ciências Computacionais (C3) – Universidade Federal do Rio Grande (FURG) Avenida Itália km 8, Bairro Carreiros – Rio Grande – RS – Brazil

rafhael.cunha@vacaria.ifrs.edu.br, \{dianaada, cleo.billa\}@gmail.com

Abstract. Software Engineering (SE) is an area that intends to build high-quality software in a systematic way. However, traditional software engineering techniques and methods do not support the demands of developing Multiagent Systems. Therefore a new subarea has been studied, called Agent Oriented Software Engineering (AOSE). The AOSE area proposes solutions to issues related to the development of agent-oriented systems. There is still no standardization in this subarea, resulting in several methodologies. Another predominant factor in the instability of this subarea is the limited number of supporting tools available. The main purpose of this paper is to present an Eclipse plug-in to support the Prometheus AEOlus methodology. Additionally, we have created a mechanism that is able to automatically generate code in the AgentSpeak language, which is the base language of the Jason platform.

1. Introduction

In artificial intelligence, the agent-oriented paradigm has been researched and used to minimize the complexity and increase the efficiency of distributed software. This practice is widely used in the development of software with these characteristics, boosting the Multiagent Systems (MAS) area. In this context, several methodologies have been proposed in order to meet the demand for agent-oriented software [Padgham and Winikoff 2002] [Bresciani et al. 2004] [DeLoach 1999] [Henderson-Sellers and Giorgini 2005] [Caire et al. 2002] [Cossentino and Potts 2002] [Wooldridge et al. 2000]. Such methodologies have been created for several reasons; however, none of them is able to address all dimensions of a MAS, as stated by [Demazeau 1995]. The Prometheus AEOlus methodology [Uez 2013] was proposed as an alternative for modelling agents based on the BDI architecture (Beliefs, Desires and Intentions) [Dennett 1989] [Bratman 1987]. Furthermore, this methodology supports the modelling of the organization and environment dimensions, unlike the others. As a final goal, the Prometheus AEOlus methodology promises to be compatible with the JaCaMo framework [Boissier et al. 2013]. The JaCaMo framework integrates three different but complementary development platforms for developing a MAS. The "Ja" comes from Jason, a platform for developing and running software agents based on the BDI architecture and the AgentSpeak language. The "Ca" represents the CArtAgO platform [Ricci et al. 2011], which is responsible for programming environment artifacts, and "Mo" symbolizes Moise+ [Hübner et al. 2002], for programming multi-agent organisations. The Prometheus AEOlus methodology adds diagrams and notations to model the other dimensions present in a MAS, and it aims to provide meta-models that allow code generation for the three development platforms that compose the JaCaMo framework. However, until now there has been no tool to support this methodology.
Therefore, the goal of this paper is to present a graphical tool that supports the Prometheus AEOlus methodology, as well as a mechanism to automatically generate code for the Jason development platform. We first developed the interconnection with the Jason platform; as future work we intend to develop the association with the other platforms that compose the JaCaMo framework. This paper is structured as follows: Section 2 presents the theoretical and technological concepts needed to understand this work. Section 3 presents the related methodologies, as well as a comparative study that motivates this work. Section 4 discusses the general architecture of the developed plug-in, as well as the steps taken to build it. Section 5 presents a case study that demonstrates the use of the developed platform. Finally, Section 6 presents the conclusions of the paper.

2. Theoretical-technological foundation

This section presents concepts on agents and multi-agent systems, points out the origin and objectives of AOSE, and reports on plug-in development for the Eclipse IDE.

2.1. Agents and Multiagent Systems

According to [Weiss 1999], an agent is commonly described as a computer system that is situated in an environment, which can be virtual or not, and that interacts with it via sensors and actuators. Additionally, this agent is able to make decisions and take action autonomously in order to reach its goals. [Russell and Norvig 2009] explain that an agent is anything that can perceive its environment through sensors and act upon it through actuators. According to [Wooldridge and Jennings 1995], an agent is a computer system situated in an environment that is capable of autonomous action in that environment in order to achieve its stated objectives. [Briot et al. 2001] define a Multi-Agent System (MAS) as an organized set of agents, where the organization is responsible for setting rules so that agents can coexist in a common environment, sharing resources and working collectively. [Demazeau 1995] proposes the division of a MAS into four main components: Agent, Environment, Interaction, and Organization. The vowels A, E, I, O are used to represent these dimensions, respectively, making this division known as the vowels paradigm.

2.2. Agent Oriented Software Engineering

Agent Oriented Software Engineering (AOSE) has been developed to help the development of complex systems. This subarea merges the Artificial Intelligence and Software Engineering areas to support the development of agent-oriented systems [Guedes 2012]. According to [Gleizes and Gomez-Sanz 2011], the AOSE area has been rising for two main reasons: (i) the conceptual structures have reached a maturity level at which it is viable to spend effort on finding a consensus between modeling languages; (ii) the influence of model-driven engineering emphasizes the potential value of having models at the centre of the development process.

2.3. Eclipse Plug-In Development

The Eclipse platform is based on plug-ins that are used to extend the functionality of the Eclipse IDE [Foundation 2012]. These plug-ins are coded in the Java programming language and can offer several kinds of services, such as code libraries, documentation guides, and extensions of the platform itself [desRivieres and Wiegand 2004]. According to [Foundation 2014], the Graphical Modeling Framework (GMF) is a framework for developing graphical editors for domain models within the Eclipse platform.
It was based on two frameworks: the Graphical Editing Framework (GEF), used for creating generic graphical editors, and the Eclipse Modeling Framework (EMF), which enables developers to build metamodels and generate the corresponding Java code. The graphics of this work have been developed using this framework.

3. Related Work

According to [Padgham and Winikoff 2002], Prometheus is a methodology for the development of intelligent agents that differs from the others in being detailed and comprehensive, covering all activities necessary for the development of intelligent agent systems. The tool currently available for Prometheus is called the Prometheus Design Tool (PDT) [Thangarajah et al. 2005]. PDT enables users to create and modify projects developed with the Prometheus methodology. The tool ensures that certain inconsistencies, such as wrong diagram notations, do not occur; in addition, it can export individual diagrams and generate a report of the complete project. The methodology does not take the development platform into account until the detailed design phase. However, it allows code to be generated for the JACK language.

According to [Bresciani et al. 2004], the Tropos methodology [Mylopoulos et al. 2001] [Bergenti et al. 2004] [Cossentino et al. 2005] aims to support all the activities of analysis and design in the development of agent-oriented software. It was developed based on the i* framework, which allows modelling the multiagent system based on the concepts of actor, objective, and dependency among actors. According to Cossentino et al. [Cossentino et al. 2005], the Tropos methodology allows a direct mapping of the models to the JACK language. However, using the TAOM4E tool, code can also be automatically generated for the JADEX platform [Morandini et al. 2008].

The Multiagent Systems Engineering (MaSE) methodology was proposed by DeLoach [DeLoach 1999]. It is a methodology for the analysis and design of multiagent systems. MaSE uses the abstraction provided by MAS to help designers develop distributed intelligent software systems [Henderson-Sellers and Giorgini 2005]. According to [Henderson-Sellers and Giorgini 2005], MaSE is built on existing object-oriented techniques, but is specialized for MAS projects. To help designers use the methodology, a tool called agentTool¹ was built, which serves as a validation platform for MaSE. The tool is graphical and fully interactive [DeLoach et al. 2001]. However, it does not support code generation for any development platform.

¹http://agenttool.cis.ksu.edu/
According to [Henderson-Sellers and Giorgini 2005], the Ingenias methodology provides a notation for MAS modeling through a well-defined collection of activities that guide the development process, specifically the tasks of analysis, design, verification, and implementation. These steps are supported by an integrated set of tools called the Ingenias Development Kit (IDK).

According to [Caire et al. 2002], MESSAGE takes UML as a starting point and adds agent, role, and task concepts to meet the needs of multiagent systems. According to [Uez 2013], this methodology proposes the analysis and design of a MAS based on five points of view: organization, objectives/tasks, agents/roles, interaction, and domain. However, this methodology does not have any supporting tool.

According to Cossentino and Potts [Cossentino and Potts 2002], the Process for Agent Societies Specification and Implementation (PASSI) is a methodology for designing and developing multiagent societies, integrating design models and concepts from software engineering and artificial intelligence, and using the UML notation. Furthermore, PASSI is based on the FIPA architecture for modeling agents. To support the modelling, there is the PTK tool (PASSI Toolkit). The tool also provides standard libraries that can be used for code generation, and it allows system testing before deploying the software. The method allows the generation of code for the JADE framework.

According to Uez [Uez 2013], the Prometheus AEOlus methodology was developed based on two technologies: the Prometheus methodology [Padgham and Winikoff 2005] and the JaCaMo framework [Boissier et al. 2013]. The main purpose of this methodology is to create an extension of the Prometheus methodology that includes the Environment and Organization dimensions. The development process defined in this methodology is divided into four phases: system specification, architectural design, detailed design, and implementation [Uez 2013]. The first phase aims to specify the settings and objectives of the system. In the architectural design phase, the elements of the system are defined. Next, the detailed design phase defines the internal structure of the agents through their beliefs, plans, and capabilities [Uez 2013]. The last phase is the implementation phase, whose goal is to generate code for the JaCaMo framework.

The Prometheus AEOlus methodology [Uez 2013] was based on the MAS division proposed by Demazeau [Demazeau 1995], which divides a MAS into four dimensions: agents, interactions, environment, and organization. According to [Uez 2014], Prometheus AEOlus comprises the following specification activities: 1) scenarios; 2) objectives; 3) actions; 4) perceptions; 5) roles; 6) organizational structure; 7) missions; 8) norms; 9) environment; 10) interactions; 11) agents; 12) plans. In each of these activities, which can be developed iteratively during the four phases of development, at least one diagram or descriptor is created. However, the methodology does not have a graphical tool to support its use.

3.1. Analysis of related work

Table 1. Methodologies vs. development platforms that automatically generate code from a methodology tool.
<table> <thead> <tr> <th>Methodology</th> <th>Support Tool</th> <th>Development Platform</th> <th>MAS Dimensions</th> </tr> </thead> <tbody> <tr> <td>Prometheus</td> <td>PDTools</td> <td>Jack</td> <td>Agents</td> </tr> <tr> <td>Tropos</td> <td>TAOM4E</td> <td>Jadex and Jack</td> <td>Agents</td> </tr> <tr> <td>MaSE</td> <td>AgentTool</td> <td>-</td> <td>-</td> </tr> <tr> <td>Ingenias</td> <td>IDK</td> <td>Jade</td> <td>Agents</td> </tr> <tr> <td>MESSAGE</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>PASSI</td> <td>PTK</td> <td>Jade</td> <td>Agents</td> </tr> <tr> <td>GAIA</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Prometheus AEOlus</td> <td>-</td> <td>JaCaMo</td> <td>Agents, Environment, Organization</td> </tr> </tbody> </table>

In Table 1, the Methodology and Support Tool columns show the methodologies studied in this work and the graphical tools that support their use, respectively. As the table shows, there are many graphical tools for modeling MAS; although each technique has its own peculiarities, they follow a similar pattern centered on the agents dimension of a MAS. This comparison therefore suggests that the main problem in the specification/implementation of a MAS lies in processing the work products³ automatically: code generation aims to transform the specifications captured in the work products into source code.

Table 1 also shows, through the Development Platform and MAS Dimensions columns, the development platforms for which automatic code generation is possible and the MAS dimensions that this generation supports. The table reveals a preponderance of code generation for the Jade and Jack frameworks, with the exception of Tropos, which also generates code for the JADEX platform. However, all of these tools focus their implementation on the agents dimension, losing other information present in the previously designed work products.

³A work product is the result of completing a piece of work that serves as a resource for achieving some goal.

Based on the above, the only methodology that addresses the agent, environment, and organization dimensions at its core is Prometheus AEOlus, through the JaCaMo framework [Boissier et al. 2013]. Because of this distinctive characteristic, we decided to develop a graphical tool to support it.

### 4. A Platform to Support the Prometheus AEOlus Methodology

This section presents the general architecture used to develop the plug-in, as well as the steps taken to implement it.

#### 4.1. General Architecture

The general architecture of the Prometheus AEOlus tool is illustrated in Figure 1. Each package is a plug-in that provides part of the functionality incorporated into the tool. The package called generation is responsible for the tool's code generation; the classes that carry out this task are grouped within it.

![Figure 1. General Architecture of Prometheus AEOlus Tool.]

The diagram package holds the classes responsible for the graphical editor. It contains all the configuration files and the code generated by the Eclipse GMF development framework. Unlike the other plug-ins that make up the tool, the graphical editor has no initializer and therefore depends on the wizard plug-in to start.

The wizard package is responsible for connecting the other two plug-ins that make up the tool. It combines the features described for the other plug-ins and hence depends on them.
In this plug-in there is no source code, only extension points declared in eXtensible Markup Language (XML) configuration files, where the dependencies on the other plug-ins are stated.

#### 4.2. Development of the Agent Overview Diagram

As described in Section 3, the Prometheus AEOlus methodology specifies at least twelve diagrams for the development process. Here, however, we explain the development of only one of them, the agent overview diagram. According to Uez [Uez 2014], the agent overview diagram describes an agent internally, i.e., it details the plans, skills, and beliefs of an agent, as well as its capabilities. A plan indicates the actions that the agent must perform to achieve a goal. Each plan must have a trigger event that starts it; trigger events can be messages or environment perceptions. An event is linked to a plan by a dotted arrow. As mentioned by Uez [Uez 2014], plans may include messages, actions, beliefs, or perceptions. All of these elements are connected to a plan through simple, solid arrows, and they are connected to one another by dotted arrows to indicate the order in which the actions should occur.

Figure 2 shows the six steps needed to design the graphical editor for the agent overview diagram. This editor was built using the Eclipse GMF plug-in described in Section 2.3, and the development process shown in Figure 2 follows the flow specified by the Eclipse Foundation [Foundation 2014]. In the first step, the diagram's metamodel is developed, i.e., the classes and attributes that represent the notation contained in the diagram are defined; examples of such classes are Action, Message, and Plan, among others. The second step is to generate the skeleton code corresponding to the metamodel developed previously. The third step is to create a file derived from the metamodel that defines the drawing field of the graphical editor for the diagram in question; all the figures used in this field are specified by geometric points and represent the classes specified earlier in the metamodel. These steps belong to the EMF part of the Eclipse GMF plug-in. The fourth step of the development process is the specification of the graphical palette used to choose the components to be drawn in the previously produced drawing field. The palette is also described based on the metamodel, and the images for each metamodel class are developed externally. The fifth step is the combination of all the previously created files. Finally, the sixth step is the generation of a file that contains all the information specified above, plus some configuration parameters needed to add extra functionality to the editor. This file can then be used to generate the code needed to implement the graphical editor for the diagram. These steps are related to GEF, part of the Eclipse GMF plug-in.

#### 4.3. Development of the Code Generator Plug-in

The Eclipse EMF plug-in uses an XML-derived file format, called XML Metadata Interchange (XMI), for data persistence, as described by the Eclipse Foundation [Foundation 2014]. This file contains several tags that make the information-extraction process more accessible. In the code generation process for the Jason platform, we used the XMI files generated by GMF. These files contain information about the General Model of the Prometheus AEOlus methodology, i.e., the diagram that combines all entities and relationships of the methodology.
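To make this extraction step concrete, the sketch below shows one way such an XMI file could be read in a standalone script. It is a hedged illustration rather than the plug-in's actual code: the `xsi:type` and `name` attributes are assumptions about how the GMF editor persists the model, and the real files may differ.

```python
# Hypothetical sketch of the XMI-extraction step; the tag and attribute
# names are assumptions, not the plug-in's actual persisted format.
import xml.etree.ElementTree as ET

# ElementTree exposes namespaced attributes in Clark notation.
XSI_TYPE = "{http://www.w3.org/2001/XMLSchema-instance}type"

def load_entities(xmi_path):
    """Collect the named entities of the general model, keyed by type."""
    tree = ET.parse(xmi_path)
    entities = {}  # e.g. {"aeolus:Plan": ["buildHouse", ...], ...}
    for element in tree.getroot().iter():
        kind = element.get(XSI_TYPE, element.tag)
        name = element.get("name")
        if name is not None:
            entities.setdefault(kind, []).append(name)
    return entities
```

The Service layer described below plays an analogous role inside the plug-in, returning the extracted objects as a hashmap.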
For implementation reasons, we developed a general diagram that enables the modeling of all entities of the methodology at once, so that the code generation process would be more tractable in this first prototype. The Code Generator plug-in is divided into four layers, shown in Figure 3: Model, Generator, View, and Service.

![Figure 3. Architecture of the code generator plug-in.]

The Model layer is responsible for representing, as objects, the entities used by the diagrams of the Prometheus AEOlus methodology. This layer also contains classes that help the plug-in locate the information presented in the XMI file. The Generator layer includes the classes that centralize the plug-in's tasks in the Eclipse environment, along with the classes that write the string of code to be stored in the file the plug-in will generate later. The Service layer is responsible for reading the XMI file generated by the graphical editor plug-in, turning it into objects, and returning them as a hashmap. The View layer manages the use of the services and the code generation: through the methods of its classes, the developer can enable the plug-in's functionality, and activation occurs through events triggered by the user.

5. Case Study - Build a House

This section demonstrates the use of the developed solution. The tool was validated in two different ways. The first consisted of unit tests that validated the architecture of the code generator plug-in and evaluated its efficiency; this form of validation is not addressed in this paper, because it is not the focus of this work. The second demonstrates the use of the graphical editor plug-in, in order to show its effectiveness in enabling users to fully transcribe the diagrams described in [Uez 2013]. For this reason, we used the work products produced in that work, transcribed in our computational tool.

In [Boissier et al. 2013], a case study called “Build a House” was presented. In it, a character called Giacomo wants to build a house. He needs to hire companies that can perform the tasks related to the construction and to coordinate the work of these companies. Hiring is done through a bidding process in which the firm submitting the lowest price for the job is hired. The case study was originally designed to demonstrate the use of the JaCaMo framework, and for that reason it is also used in this work.

To demonstrate the use of the graphical editor, we chose to computationally model the Giacomo agent, the main character of the case study. Figure 4 shows the overview diagram of the Giacomo agent, one of the diagrams of the scenario. We modeled other diagrams as well, but they are not included in this paper for reasons of space. In the figure, the Giacomo entity is modeled as an agent. The entities buildHouse, notifyWinner, hire, and auctionStart represent the plans that make up the agent. The entities buildHouse, build, hire, auctionWinner, notifiesWinners, and startAuction represent messages sent or received by the plans, or perceptions of them. The entities wait, lookArt, createArt, and nameArt represent the actions to be performed during plan execution. Finally, the entities task and winner represent the beliefs related to their plans. Figure 5 exemplifies the code generated for the Giacomo agent; this code corresponds to the diagram shown in Figure 4.
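As a rough illustration of the Generator layer's string-writing role, the hypothetical sketch below assembles an AgentSpeak plan in the trigger : context <- body form explained next. The plan and step names echo the case study but are simplified; this helper is not part of the actual plug-in, and the real Figure 5 output may differ.

```python
# Hedged sketch: a generator-style helper that renders one AgentSpeak plan.
def emit_plan(trigger, body_steps, context="true"):
    """Render a plan as the string a code generator would write to file."""
    body = ";\n    ".join(body_steps)
    return f"+!{trigger} : {context} <-\n    {body}."

# Simplified stand-in for part of the Giacomo agent's generated code.
print(emit_plan("buildHouse", ["!hire", "!auctionStart"]))
```

Running the example prints a plan of the form `+!buildHouse : true <- !hire; !auctionStart.`, matching the plan syntax discussed below.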
The syntax of Jason recommends that the agent's initial goals be stated first; then the beliefs should be declared, and finally the agent's plans. The properties of each element were based on the case study presented in Uez [Uez 2013]. In Figure 5, line 1 is a message to the plug-in user, indicating where the individual goals of each agent can be stated. After that, the plans of the agent are described, as mentioned previously. It is worth remembering that in the AgentSpeak language, the syntax of a plan is as follows:

\[ \text{trigger} : \text{context} \leftarrow \text{body} \]

Therefore, lines 3, 8, 14, and 20 of Figure 5 describe the beginnings of the plans. Line 4 contains the declaration of the `buildHouse` plan, with the trigger `buildHouse`. The trigger watches for an event that starts the plan. As Uez [Uez 2013] describes, only two things can trigger a plan: entities of the message type, or perceptions linked to a plan by dotted arrows. The `True` statement after the `:` indicates the context of the plan, in this case stipulated through parameters by the designer. Lines 5 and 6 represent the inclusion of new goals.

Also in Figure 5, lines 9, 10, 11, and 12 represent the `hire` plan. After the \( \leftarrow \) symbol comes the concatenation of all the entities that make up the body of the plan. Lines 15, 16, 17, and 18 represent the `notifyWinner` plan. The word broadcast refers to a message sent to all the agents of the system. In addition, Figure 4 shows that the links between message entities and plans represent the messages that will be sent in the system; however, each message must have a connection with both its sender and its receiver. Lines 17 and 18 are queries of the agent's beliefs. Finally, lines 21, 22, 23, and 24 represent the `auctionStart` plan; line 22 describes the actions that should be executed by the plan.

In addition to the Giacomo agent, this case study comprises at least three more agents, representing companies that can participate in the bidding process. We demonstrated only the work product produced for the Giacomo agent and the result of its code generation. The process is triggered by the user clicking a button provided for this purpose in the Eclipse IDE. The XMI produced from the diagrammed work product is then analyzed, generating the entire code skeleton required to reproduce the agent on the Jason development platform. The generated code follows the BDI architecture, which is the basis of the platform in question. The plug-in is still in the testing phase and is therefore not yet available for download.

**Figure 4. General Model Diagram for Giacomo Agent in the “Build a House” Case Study**

6. Conclusion

This paper proposes an Eclipse plug-in that allows the graphical design of MAS using the Prometheus AEOlus methodology. This methodology was developed by Uez [Uez 2013] and aims to enable the integrated modeling of the three dimensions involved in MAS design. This integration intends to simplify the implementation of a MAS in the framework called JaCaMo. To aid this process, our plug-in scans information about the MAS in the available diagrams and automatically generates code. This code is generated in the AgentSpeak programming language, which is interpreted by the Jason platform, part of the JaCaMo framework. The dimensions environment (CartAgO) and organization (Moise+), which make up the rest of the JaCaMo framework, were not addressed in this work.
Separating the Prometheus AEOlus support platform into several plug-ins is one of the strengths of this work: it simplifies the tool maintenance process and decouples responsibilities into small, focused components. This separation also encourages the participation of collaborators, making it easier to maintain different parts of the source code and facilitating version control.

The choice of a consolidated graphical modeling plug-in, Eclipse GMF [Foundation 2014], for building the graphical editor was also an important factor, since it is well established in the development of graphical editors, has good documentation available, and is supported by various forums where experienced users answer questions about the plug-in. Eclipse GMF divides the development process into several steps, following the concept of Model-Driven Engineering (MDE), a model-centered approach to building systems that separates the specification of functionality, or business logic, from its implementation.

The automatic code generation plug-in is an aspect to be improved in future work. Although the solution presents an architecture with well-divided responsibilities, it could later be replaced by an alternative that explores the MDE concepts present in Eclipse GMF, becoming a fully independent and easily supported solution.

Finally, the platform is an alternative for modeling MAS in which work products related to the environment and organization dimensions can be specified, an innovation in the AOSE area. In addition, this solution demonstrates that Prometheus AEOlus [Uez 2013] is compatible with the Jason development platform, although new solutions are needed to connect it with the CartAgO and Moise+ platforms. Once all these connections are complete, the solution will be unprecedented in the MAS modeling area, as no development platform currently generates code for all the dimensions proposed by [Demazeau 1995] for a MAS.

Acknowledgment

The authors thank the Universidade Federal do Rio Grande (FURG) and the Fundacao de Amparo a Pesquisa do Rio Grande do Sul (FAPERGS) for supporting this research.

References
{"Source-Url": "http://www.lbd.dcc.ufmg.br/colecoes/eniac/2016/044.pdf", "len_cl100k_base": 6401, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 28118, "total-output-tokens": 8511, "length": "2e12", "weborganizer": {"__label__adult": 0.0003097057342529297, "__label__art_design": 0.0003614425659179687, "__label__crime_law": 0.00027060508728027344, "__label__education_jobs": 0.0007271766662597656, "__label__entertainment": 4.863739013671875e-05, "__label__fashion_beauty": 0.00014126300811767578, "__label__finance_business": 0.00018274784088134768, "__label__food_dining": 0.00025463104248046875, "__label__games": 0.0004320144653320313, "__label__hardware": 0.0005445480346679688, "__label__health": 0.0003421306610107422, "__label__history": 0.00019979476928710935, "__label__home_hobbies": 7.31348991394043e-05, "__label__industrial": 0.0002892017364501953, "__label__literature": 0.00018513202667236328, "__label__politics": 0.0002058744430541992, "__label__religion": 0.0003566741943359375, "__label__science_tech": 0.007049560546875, "__label__social_life": 7.832050323486328e-05, "__label__software": 0.00431060791015625, "__label__software_dev": 0.98291015625, "__label__sports_fitness": 0.00023949146270751953, "__label__transportation": 0.00036716461181640625, "__label__travel": 0.0001786947250366211}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34805, 0.02765]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34805, 0.53984]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34805, 0.89659]], "google_gemma-3-12b-it_contains_pii": [[0, 2996, false], [2996, 6483, null], [6483, 10214, null], [10214, 12239, null], [12239, 15751, null], [15751, 18564, null], [18564, 21153, null], [21153, 24341, null], [24341, 26876, null], [26876, 29197, null], [29197, 32108, null], [32108, 34805, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2996, true], [2996, 6483, null], [6483, 10214, null], [10214, 12239, null], [12239, 15751, null], [15751, 18564, null], [18564, 21153, null], [21153, 24341, null], [24341, 26876, null], [26876, 29197, null], [29197, 32108, null], [32108, 34805, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34805, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34805, null]], "pdf_page_numbers": [[0, 2996, 1], [2996, 6483, 2], [6483, 10214, 3], [10214, 12239, 4], [12239, 15751, 5], [15751, 18564, 6], [18564, 21153, 7], [21153, 24341, 8], [24341, 26876, 9], [26876, 29197, 10], [29197, 32108, 11], [32108, 34805, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34805, 0.07752]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
678571aba7ff81e0f1106984f76506494e00db9f
Investigating Novices’ In Situ Reflections on Their Programming Process

Dastyni Loksa, Benjamin Xie, Harrison Kwik, and Amy J. Ko
University of Washington, Seattle, Washington, USA
dloksa@uw.edu, bxie@uw.edu, kwikh@uw.edu, ajko@uw.edu

ABSTRACT

Prior work on novice programmers’ self-regulation has shown it to be inconsistent and shallow, but trainable through direct instruction. However, prior work has primarily studied self-regulation retrospectively, which relies on students to remember how they regulated their process, or in laboratory settings, limiting the ecological validity of findings. To address these limitations, we investigated 31 novice programmers’ self-regulation in situ over 10 weeks. We had them keep journals about their work and later had them reflect on their journaling. Through a series of qualitative analyses of journals and survey responses, we found that all participants monitored their process and evaluated their work, but that few interpreted the problems they were solving or adapted prior solutions. We also found that some students self-regulated their programming in many ways, while others did so in almost none. Students reported many difficulties integrating reflection into their work; some were completely unaware of their process, some struggled to integrate reflection into their process, and others found that reflection conflicted with their work. These results suggest that self-regulation during programming is highly variable in practice, and that teaching self-regulation skills to improve programming outcomes may require differentiated instruction based on students’ self-awareness and existing programming practices.

KEYWORDS

Programming, Metacognition, Self-regulation.

1 INTRODUCTION

Self-regulation—the ability to be aware of one’s thoughts and actions, exert control over them, and evaluate how well they are moving one closer towards a goal [17]—is important to successful programming. In an analysis of teaching and learning programming, Sheard et al. highlighted self-regulation as one of a vital set of skills students need to achieve success at programming [19]. Further, prior work has identified that successful learners self-regulate, generating self-explanations of material and using them to monitor for misconceptions [4]. Many studies have also shown that scaffolding aspects of self-regulation benefits novice programmers, further demonstrating the importance of self-regulation for successful programming. For instance, Loksa et al. found that explicitly teaching and scaffolding programming process can improve student productivity, independence, and self-efficacy [12]. Similarly, work by Bielaczyc, Pirolli, and Brown found that training in self-regulation strategies leads to significantly greater programming performance [3]. When it comes to expert software engineers, their systematic and self-reflective methods are a large part of what makes them experts [9], and these self-regulation skills manifest as the deliberate, systematic practices which expert programmers and teams use to structure their work [15].

While self-regulation is important for programming, among novices it is infrequent and shallow. Prior work has used retrospective survey data to identify self-regulated learning behaviors such as time management, goal setting, and planning, showing that novices rarely engage in them, and when they do, they often do so in shallow, unsuccessful ways [6].
Other studies have used interviews, finding that low-performing students use few metacognitive or resource management strategies overall [2, 10], and that this is often the case for half of CS1 students. A think-aloud laboratory study on students’ self-regulation found that learners may never engage in many self-regulation behaviors at all, and when they do, it is often shallow, ineffective, and does not help avoid critical errors [11].

A key limitation of this prior work is that it is all done retrospectively, asking students to reflect on their work after it has occurred. Therefore, we know little about how novices self-regulate while they program, and how this self-regulation might differ from what they recall about their self-regulation. There are many reasons to suspect that retrospective data might not generalize to in situ settings. In situ programming happens in a variety of environments (office, classroom, at home), where distractions might be more abundant. It happens in settings that are not time-regulated, often unfolding over many hours or days across multiple contexts. Similarly, in situ programming may involve help from teaching assistants, peers, and the Internet. All of these factors may be difficult to recall retrospectively, missing nuances about self-regulation that learners might engage in, but not remember.

To investigate this generalizability gap in prior work, we studied novice students’ programming self-regulation in situ across a 10-week series of four 2-week programming assignments. We specifically investigated the following questions:

RQ1 When prompted to reflect on process in situ, what degree of in situ self-regulation do learners engage in?

RQ2 What challenges do students report encountering when attempting to reflect on their programming process in situ?

2 METHOD

To answer our research questions, we asked learners in a 10-week course to write in journals during programming sessions, reflecting on their problem solving and self-regulation. Using reflective journals is one of a few methods of measuring self-regulation [21], and they have been used in CS to enhance programming skills [5]. For this study, journals served as both a prompt to self-regulate during problem solving and a record of the self-regulation the participants engaged in. Therefore, rather than just exposing how students work in the absence of being observed, our data reflects more of a best-case scenario for programming, where there is a scaffolded prompt to think about and write about their programming to support their problem solving.

2.1 Course and participants

We partnered with an instructor of one section of a required front-end web development course in an information science department of a large public research university. Participants consisted of all 31 undergraduate students enrolled in the course, all of whom had passed at least one prerequisite programming course covering Java and basic data structures. Of the 31 students, 25 identified as men and 6 as women. One reported being in their 2nd year of undergraduate study, 13 in their 3rd year, and 17 in their 4th year. The course required students to complete 4 projects over 10 weeks. The first project required students to create a personal website using HTML and CSS. The second project required students to create a web-based game written in JavaScript. For this project, students selected the game they wanted to create and were given a variety of suggestions, from classic arcade games like Pong or Breakout to casual mobile games like Threes or Bejeweled.
The third project of the course required students to create a data explorer using the React framework [8] that allowed a user to interactively explore a data set, such as that exposed by a public web API. What API to use, what data to present, and how to present that data was up to the student, but the project required that the app was both responsive and accessible (perceivable to screen readers). The fourth project for the course required students to create a messaging application using the React framework and a Firebase back-end. This project could be done individually, or in pairs, and required user accounts, authentication, and client-side routing to create a single-page application. 2.2 Data collection To gather data about in situ self-regulation for RQ1, the instructor required students to submit a programming journal for each of their four projects. To ensure that the students had the language to describe their self-regulation in their journals, the first author taught students definitions of self-regulatory behaviors by giving a 20 minute lecture on the first day of class. This lecture introduced a framework of problem solving that depicts programming as a series of iterative problem solving behaviors drawn from prior work [11]. These behaviors, shown at the top of Table 1, included 1) reinterpreting the problem prompt, 2) searching for analogous solutions, 3) adapting previous solutions, 4) implementation, and 5) the evaluation of implemented solutions. We also described several self-regulation behaviors from this framework, shown at the bottom of Table 1: 1) planning, 2) comprehension monitoring, 3) process monitoring, 4) self-explanation, and 5) reflecting on cognition. (Hereafter, we refer to all behaviors in Table 1 as self-regulation behaviors.) We provided the definitions along with the journaling instructions on the course website for later reference. We instructed students to journal about the start and stop times of each coding session, their progress through the problem solving activities in Table 1, and use the journal as a place to self-regulate in the six different ways described in Table 1. 
To ensure some consistency in journaling and help students understand the expectations, we provided an example journal that demonstrated how a journal might cover all of the behaviors and that demonstrated the level of detail we expected.

<table> <thead> <tr> <th>Programming Behaviors</th> <th>Definition</th> </tr> </thead> <tbody> <tr> <td>Interpret prompt</td> <td>Statements about or demonstrating interpreting or questioning the prompt, reconsidering actions in reference to the prompt, or decomposing the problem into goals, requirements, and/or sub-problems.</td> </tr> <tr> <td>Search for analogous problems</td> <td>Statements about or demonstrating intent to use code they have previously written, or use of examples from outside sources.</td> </tr> <tr> <td>Adapt a solution</td> <td>Statements of or demonstrating changing or refining code.</td> </tr> <tr> <td>Evaluate</td> <td>Statements of or demonstrating testing or evaluating outcomes, intent to test a solution, or identifying why code was not meeting expectations.</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Self-Regulation</th> <th>Definition</th> </tr> </thead> <tbody> <tr> <td>Planning</td> <td>Statements of intended work, goals or requirements, or an intended order of work.</td> </tr> <tr> <td>Process Monitoring</td> <td>Statements of start times, stop times, or duration of a coding session; statements about work being started; identifying work currently in progress or when a task is complete; or statements that identify actions as part of their process.</td> </tr> <tr> <td>Comprehension monitoring</td> <td>Statements identifying known or unknown concepts or solutions.</td> </tr> <tr> <td>Self-explanation</td> <td>Statements of code explanation for increased understanding.</td> </tr> <tr> <td>Reflection</td> <td>Statements reflecting on prior thoughts or behaviors.</td> </tr> <tr> <td>Rationale</td> <td>Statements that provided rationale for decisions or behaviors.</td> </tr> </tbody> </table>

Table 1: Programming and self-regulation behaviors (from prior work [11]) coded in the journals, with definitions.

Because the emotions, struggles, and successes of programming can be very personal, students were assured that their journals would be kept anonymous, and we did not enforce a structure, nor did we require that journals contain specific content. This allowed students to journal authentically about their programming and encouraged them to include what they valued and expected to be useful. To provide additional scaffolding for reflecting on their process, we provided feedback on the first journal each student submitted, identifying where they could expand on the content, clarity, and depth of their journaling for future journals.

To understand the challenges students encountered when trying to reflect on their programming process (RQ2), we required students to fill out a survey when submitting each of their assignments and corresponding journals. It asked two open-response questions:

- Your journal is to help you reflect on your process. Review your journal and briefly describe all points where you had trouble reflecting on your process, and/or writing parts of your journal.
- Think back to any time in the last two weeks where you stopped and reflected on your programming process when you were not programming or writing your journal and found it difficult. Describe why it was difficult.

Our primary source of data was the student journals. Despite the journals being part of their grade, some students failed to submit journals.
In total, we collected 106 journals with a combined total of 4,227 statements of students reflecting on their programming. Students’ journals varied in level of detail, with some being quite extensive, as in this example entry: "Initially I thought I was going to pseudo code a bunch of stuff, but instead I settled for repurposing a bunch of code from a previous exercise that takes user input and posts messages (Chirper from a previous exercise.)" Others, in contrast, were quite terse: "changed how I had BrowserRouter set up."

3 RESULTS

3.1 RQ1: What degree of in situ self-regulation do learners engage in?

To answer this question, we performed three analyses:

- We coded and computed the frequency of the behaviors in Table 1.
- We investigated whether there were distinct patterns of student behavior through clustering.
- We analyzed students’ survey responses about their journaling process for the "maturity" of reflection.

3.1.1 Frequency of behaviors. To understand the frequency of self-reflective behaviors, we developed a coding scheme based on a framework from prior work on programming and self-regulation [11]. We coded each statement of each journal entry to identify whether it demonstrated one or more of these behaviors. To ensure the codes we applied were well defined and consistent, we iteratively refined the code definitions by having two authors apply the codes to a set of 430 (10% of the total 4,227 statements) randomly selected journal statements, using adjacent journal statements for context if necessary. To drive refinements, we discussed disagreements, refined definitions, and coded a new set of randomly selected journal statements. After four rounds of iteration, the authors reached 83% agreement on this sample data set. One author then coded all remaining statements. Table 1 shows the final code definitions for each of the self-regulation behaviors. Table 2 shows an excerpt of one student’s journal, showing a session of writing, testing, and refining some CSS.

Based on these codes, we computed the frequency of each type of code in each student journal. Table 3 shows the range of the number of codes found in journals for each behavior. The "Total" column in Table 4 lists the total percentage of students who journaled about each behavior at least once across all four journals. These two tables show that most participants (>80%) journaled about the following behaviors at least once across their four journals:

- Process monitoring (e.g. "Day Two - Start Time - 1:30 PM | End Time 8 PM")
- Evaluating solutions (see Table 2.A, D, F, and H for examples)
- Searching for analogous problems (see Table 2.E for an example)
- Reflecting on their cognition (see Table 2.B and I for examples)
- Self-explanations (e.g. "ALL THAT WAS WRONG WAS THAT I DIDN’T LINK IN THE RIGHT VERSION OF JQUERY ASD-FGHJKL:[sic]")
- Rationale (e.g. "I think before I do that I should make the page look prettier and better organized, because I am reusing a lot of code for API requests")

The least common behaviors included:

- Interpreting the prompt (see Table 2.J for an example)
- Adapting previous solutions (e.g. "The majority of this chat application will come from exercise sets so I’m taking code from those assignments and picking the components that apply to the project")

3.1.2 Clusters of behaviors. Prior work suggests that there is large variation in self-regulation behaviors [11]; to better understand this variation, we attempted to cluster students based on which behaviors they did and did not exhibit in their journals.
We began by computing a binary variable for each student and each behavior in Table 1 that was true if at least one journal exhibited that behavior, and false otherwise. Then we performed a visual inspection of this binary data and observed that there were potentially 3 patterns of self-regulation behavior. To verify this interpretation, we applied the K-modes unsupervised clustering algorithm [7] to the student data, using the binary variables as features and specifying K=3 to separate the students into 3 clusters. The resulting clusters, shown in Table 4, aligned with our visual inspection.

One cluster, which we will refer to as the high coverage cluster, contained 12 participants whose journals exhibited at least 9 of the 10 behaviors in Table 1. These high coverage students were typically missing only entries exhibiting the two least common behaviors (Interpreting the prompt and Adapting solutions). Table 2 shows an excerpt from a student in the high coverage cluster evaluating, reflecting on, and interpreting a series of CSS problems. The second cluster, which we will call the moderate coverage cluster, had 12 participants who journaled about the most common behaviors (Process monitoring, Evaluating solutions, Searching for analogous problems, Reflecting on their cognition, creating Self-explanations, and Rationale), but not the less common behaviors. The final cluster, which we will call the low coverage cluster, had 7 participants who journaled about only the most common behaviors, chiefly Process monitoring, Evaluating solutions, and Searching for analogous problems (see Table 4).

Table 2: The second half of one student's journal for the first homework, showing the codes applied from Table 1. This participant was in the high coverage cluster.

<table> <thead> <tr> <th>Behavior</th> <th>#1</th> <th>#2</th> <th>#3</th> <th>#4</th> </tr> </thead> <tbody> <tr> <td>Interpret</td> <td>0-1</td> <td>0-1</td> <td>0-1</td> <td>0-1</td> </tr> <tr> <td>Search</td> <td>0-7</td> <td>0-2</td> <td>0-2</td> <td>0-3</td> </tr> <tr> <td>Adapt</td> <td>0-6</td> <td>0-4</td> <td>0-1</td> <td>0-1</td> </tr> <tr> <td>Evaluate</td> <td>0-11</td> <td>0-6</td> <td>0-8</td> <td>0-14</td> </tr> <tr> <td>Planning</td> <td>0-19</td> <td>0-9</td> <td>0-19</td> <td>0-7</td> </tr> <tr> <td>Process Monitoring</td> <td>0-25</td> <td>0-25</td> <td>0-54</td> <td>0-43</td> </tr> <tr> <td>Comprehension monitoring</td> <td>0-6</td> <td>0-2</td> <td>0-4</td> <td>0-2</td> </tr> <tr> <td>Self-explanation</td> <td>0-24</td> <td>0-17</td> <td>0-13</td> <td>0-9</td> </tr> <tr> <td>Reflection</td> <td>0-13</td> <td>0-7</td> <td>0-7</td> <td>0-8</td> </tr> <tr> <td>Rationale</td> <td>0-17</td> <td>0-8</td> <td>0-8</td> <td>0-7</td> </tr> </tbody> </table>

Table 3: The range of student entries exhibiting each self-regulation or programming behavior, by assignment, showing higher frequencies of evaluation, planning, process monitoring, and self-explanation than other behaviors, and Interpret occurring at most once per assignment.
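The sketch below illustrates this clustering step under stated assumptions: the per-student code counts are already available as a dictionary, and the third-party `kmodes` Python package stands in for whichever K-modes implementation [7] was actually used.

```python
# Hedged sketch of the cluster analysis: one binary feature per behavior
# per student, clustered with K-modes, K=3. The `kmodes` package and the
# input format are assumptions, not the study's actual tooling.
from kmodes.kmodes import KModes

BEHAVIORS = ["Interpret", "Search", "Adapt", "Evaluate", "Planning",
             "Process", "Comprehension", "Explanation", "Reflection",
             "Rationale"]

def cluster_students(code_counts, k=3, seed=0):
    """code_counts: {student: {behavior: count across all journals}}."""
    students = sorted(code_counts)
    # Binary feature: did this student ever exhibit the behavior?
    X = [[1 if code_counts[s].get(b, 0) > 0 else 0 for b in BEHAVIORS]
         for s in students]
    km = KModes(n_clusters=k, init="Huang", n_init=10, random_state=seed)
    labels = km.fit_predict(X)
    return dict(zip(students, labels))
```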
<table> <thead> <tr> <th>Code</th> <th>High (12)</th> <th>Moderate (12)</th> <th>Low (7)</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Process</td> <td>100%</td> <td>100%</td> <td>100%</td> <td>100%</td> </tr> <tr> <td>Evaluate</td> <td>100%</td> <td>100%</td> <td>100%</td> <td>100%</td> </tr> <tr> <td>Search</td> <td>100%</td> <td>100%</td> <td>86%</td> <td>97%</td> </tr> <tr> <td>Reflection</td> <td>100%</td> <td>100%</td> <td>43%</td> <td>87%</td> </tr> <tr> <td>Explanation</td> <td>100%</td> <td>100%</td> <td>29%</td> <td>84%</td> </tr> <tr> <td>Rationale</td> <td>92%</td> <td>100%</td> <td>29%</td> <td>81%</td> </tr> <tr> <td>Planning</td> <td>92%</td> <td>75%</td> <td>29%</td> <td>71%</td> </tr> <tr> <td>Comprehension</td> <td>100%</td> <td>33%</td> <td>29%</td> <td>58%</td> </tr> <tr> <td>Interpret</td> <td>75%</td> <td>8%</td> <td>0%</td> <td>32%</td> </tr> <tr> <td>Adapt</td> <td>75%</td> <td>8%</td> <td>0%</td> <td>32%</td> </tr> </tbody> </table>

Table 4: The three clusters of behavior (and the size of each cluster). Each percentage indicates the proportion of students in the cluster who exhibited the behavior at least once in a journal.

3.1.3 Maturity of reflection. The participants who demonstrated mature reflection were those who indicated that they were struggling to reflect while programming because the reflection conflicted with their process. For example, one participant with mature reflection stated, “When I hit really difficult bugs, I don’t want to reflect on them or journal, I just want to look at my code and chase them down.” These participants demonstrated a high awareness of their current process and of how reflection interfered with it. They often stated that they opted not to engage in journaling during programming, choosing to journal after the fact to meet the journaling requirement.

Another set of participants we identified was those who were actively integrating reflection into their process. This group provided indications that they were still developing a programming process, often reporting that reflecting was a struggle because the task was difficult rather than because it conflicted with any current process. For instance, one integrating participant expressed, “It is [difficult] because that I might not remember all the details all the time.”

The final set of participants we identified as being process-unaware. These participants stated that they did not reflect on their process, that they had no difficulties reflecting (while providing no additional details), or responded to the question by describing their process or a segment of code that was difficult rather than an aspect of their process. For example, one process-unaware participant responded, “I never really stopped and thought about something being hard, I just started looking through google/documentation."

To understand the distribution of these different levels of maturity, we assigned participants to one of these categories based on which code they received most frequently across all 8 survey responses (2 questions per survey, 4 surveys). Codes across all responses for each participant were fairly stable, with most participants receiving only one or two distinct codes across all 8 responses. In cases of a conflict, we assigned participants to the group with the higher level of reflection maturity. In total, 10 participants were mature, 17 were integrating, and 2 were process-unaware.
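A minimal sketch of this assignment rule follows, under the assumption that each participant's eight responses have already been coded with one maturity label each; the tie-break toward the higher maturity level mirrors the rule just described.

```python
# Hedged sketch of the maturity-assignment rule described above.
from collections import Counter

# Ordered from lowest to highest reflection maturity.
MATURITY = ["process-unaware", "integrating", "mature"]

def assign_maturity(response_codes):
    """Pick the most frequent code among a participant's 8 coded survey
    responses; break ties in favor of the higher maturity level."""
    counts = Counter(response_codes)
    return max(counts, key=lambda c: (counts[c], MATURITY.index(c)))

# Example: mostly 'integrating' responses yield an 'integrating' label.
print(assign_maturity(["integrating"] * 5 + ["mature"] * 3))
```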
Finally, we hypothesized that there would be a relationship between students’ maturity of self-regulation and the patterns of journaling behavior indicated by our clusters. To test this hypothesis, we performed a Pearson’s chi-squared analysis and did not find a significant association (p=0.2664).

### 3.2 RQ2: What did students report was challenging about reflecting?

To analyze the challenges that students reported when writing their journals, we inductively identified themes in students’ survey responses. Two authors used an inductive coding approach, independently identifying themes in the open-ended survey responses to the questions about what difficulties students found in reflecting. The researchers used the themes to define a code book, which they then used to independently code the entire set of survey responses, finding 100% agreement. Our results identified three ways that participants struggled to reflect on their programming process.

One struggle students reported while trying to reflect was that the concept of a programming process was too abstract. Participants appeared not to be practiced in thinking about their own thinking. For instance, one participant responded, “It was difficult to think of how I was actually trying to solve things.” Another challenge that many participants encountered was that it was difficult to recall details about their mental work. For example, one participant said, “Many thoughts that I have about coding come by very quickly, and it’s difficult for me to recall small but important influencers that cause me to change how I build something.” The final challenge we identified was that reflecting on their process actually conflicted with their process. As one participant explained, “It was really difficult to remove myself from my workflow and constantly having to switch between my journal and my code; it broke my workflow and made me work slower.”

### 4 LIMITATIONS

As with any empirical study, ours had many limitations. First, all studies of this kind could benefit from more data. Our findings are limited to what we could qualitatively see from a single programming course with 31 students. While appropriate for identifying patterns among students in this course, this is not enough data to identify all of the significant patterns of human self-regulation. Another limitation is the type of data itself. There exists no way to directly measure cognitive processes. Attempting to expose and understand the mental work of programmers through journals and self-reported reflections is one of the few available mechanisms for understanding self-regulation [21]. However, the act of presenting participants with the framework of behaviors and asking them to self-report certainly interacted with their behavior. Also, the robustness of journals as a mechanism for observing self-regulation varies with participants’ ability and willingness to express their inner thoughts. Conducting this study in an authentic classroom setting also means our data is entangled with many uncontrolled variables. These include, but are not limited to, programming task properties (e.g. difficulty), student prior experience, time pressures, distractions, amount of guidance received on tasks, and access to a computer.

### 5 DISCUSSION

Answering RQ1, we found the following: All participants monitored their Process and Evaluated their solutions. About one third of the participants engaged in only those two behaviors, while another third engaged in nearly all of the self-regulation behaviors.
Students almost never engaged in Interpreting the prompt and Adapting previous solutions, and when they did, they typically engaged in all the self-regulation behaviors. We also found that students were either process-unaware, were struggling to integrate reflection into their process, or had a mature programming process that conflicted with reflection. Finally, we found that participants struggled to reflect on their process because it was too abstract to think about, because they had problems recalling details about their process, or because reflecting conflicted with their current process.

#### 5.1 Interpretation of results

One of the challenges participants reported when reflecting was that the concept of a programming process was too abstract for some participants to meaningfully reflect on. This could be because the training we provided, which included definitions and examples for all of the behaviors, was simply inadequate for some students. Another interpretation is that participants were simply not very aware of their process and thus had problems identifying and understanding the mental behaviors they were engaging in. Because metacognitive awareness is something that must be developed over time [18], we suspect that it was likely a combination of these two interpretations. We suspect that the challenge of recalling details of their mental work was also due to a lack of experience thinking about their own thinking. As for the challenge that reflecting conflicted with their process, however, we believe the self-reports: these participants already had a programming process they had found to be successful, and reflecting hindered it. Together, these challenges suggest that reflecting on programming process may be a method best used for rank novices, before they have begun to develop a process of their own, and that reflection of this sort requires careful training and scaffolded practice.

While we were able to identify and rank behaviors by how frequently students engaged in them, this says little about what the ranking actually means. One interpretation is that the rank reflects the order in which behaviors are developed or relied upon meaningfully to solve programming problems. It could be that students must first develop their skills of Process monitoring and Evaluating their solutions, which all participants engaged in, in order to gain enough awareness of their other process behaviors. Alternatively, just because a behavior is engaged in does not mean it is done so meaningfully. Prior work on self-regulation identified that the Planning and Comprehension monitoring behaviors may be the first self-regulation behaviors to help students avoid errors [11]. In our data, however, Planning and Comprehension monitoring were not among the most engaged-in behaviors. This could mean that, while these behaviors are undoubtedly important, students often do not rely upon them; they may need to develop other skills first.

Another interpretation of the behavior rankings is that they simply highlight the behaviors that are easiest to be aware of engaging in. This would mean that novices are simply not aware of many behaviors, or do not engage in them at all. For instance, few participants reflected on Interpreting or Adapting. From this we can take that novices simply do not take the time to understand the problem they are attempting to solve, and do not adapt, or are not aware of adapting, prior knowledge or examples to help craft a solution.
This interpretation aligns with prior work finding that novices often begin to code before understanding the problem [1] and that they have a difficult time leveraging solutions to similar problems to solve their current problem [4, 13].

There are many reasons why the maturity levels (mature, integrating, and process-unaware) might not have been associated with the clusters. One explanation is instrument failure, since the survey questions from which we derived this measure were not originally intended to measure process awareness. However, another explanation is that one’s ability to reflect on, and journal about, programming process is highly contingent on a second mechanism: metacognitive awareness of process. Following this interpretation, we might expect that it could require little to no process awareness to simply monitor mental work and identify when a behavior is being engaged in. A participant who is skilled in monitoring mental work in situ, but lacking a greater awareness of their process, might then be classified as a high coverage participant and yet be process-unaware. Conversely, a participant who has mature process awareness, even if that awareness concludes that they have a poor process, may lack practice monitoring mental work, or may not be able to expend the mental effort to monitor their process in situ, and thus may exhibit low coverage of self-regulation behaviors.

Our data confirms prior work on novice programmers’ self-regulation. Prior work conceptualizes developing programming expertise as a series of “levels” [14]. This prior work argues that “level 2” students should value decomposing program goals (which we identify as Interpreting the prompt) but that their process is insufficient for larger programs and that their primary focus is getting a program to work. The authors indicate that “level 3” is when students develop some appreciation for the process of designing a successful program, and thus more robust process awareness. In the context of this prior work, our participants would be categorized as attempting to move from level 2 (code generators) to level 3 (program generators). This is a fairly appropriate characterization of our participants, who may have taken only one previous programming course and are now learning to develop larger front-end web applications. Our results, too, support this classification: the type and variation of self-regulation and awareness of programming process in our results are appropriate for students transitioning between level 2 and level 3.

Our results also provide further insights into prior work. Liao et al. identified that high-performing CS students use more metacognitive strategies [10]. While this work identifies strategy differences between high- and low-performing students, such as exam studying and help-seeking strategies, our results suggest which self-regulation behaviors might be driving those strategies.

5.2 Implications and future work

Our results have important implications for future self-regulation interventions and research. Our results suggest that educators seeking to scaffold the development of self-regulation skills should strongly consider the robustness of students’ current self-regulation skills. The challenges reported by our participants demonstrate that interventions intended to help build self-regulation skills may act to needlessly slow down and hinder students’ ability to be productive.
Alternatively, students that would fall into the high coverage or moderate coverage clusters may disregard the intervention in favor of their current workflows, resulting in wasted effort with no benefits. Additionally, without careful training and scaffolded practice, low coverage students may struggle to begin to develop the necessary self-regulation skills at all, remaining in a state of low coverage. Instead, educators may want to use tiered, faded scaffolding systems that first carefully train low coverage students in self-regulation skills and the ability to reflect on them, helping them to achieve moderate awareness. Additionally, educators might want to take into account the self-regulation behaviors that students are more and less apt to engage in. Some work is already attempting to emphasize the importance of developing Interpreting the prompt from the beginning [16], and similar efforts might be needed for Adapting to help students be more aware of their process and more quickly achieve high coverage levels of self-regulation.

We believe further research into understanding the development of novice programmers’ self-regulation is warranted. First, future work should replicate our findings in other authentic programming settings, using other programming languages, and in other cultures. Future work should refine the training and instruments used in our study to more accurately measure self-regulation in situ. Future work should also leverage our awareness cluster findings as a basis for further investigations into the development of self-regulation skills in programming, and should explore the order in which novices develop particular self-regulation behaviors. Similarly, future work should investigate any connection between the categories of reflection and the clusters of behavior coverage. Despite the limitations on the validity and generalizability of our results, our findings are an important step in understanding the in situ self-regulation of novice programmers. With further research, improved instruments, and refined theories, we hope for a future where educators understand self-regulation development and leverage that understanding to support the development of robust self-regulation skills for all students.

6 ACKNOWLEDGEMENTS

This work was supported in part by the National Science Foundation (NSF) under grants 1703304, 1735123, and 12566082. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the views of the NSF.

REFERENCES
{"Source-Url": "https://faculty.washington.edu/ajko/papers/Loksa2020SelfRegulationJournals.pdf", "len_cl100k_base": 7369, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23114, "total-output-tokens": 8728, "length": "2e12", "weborganizer": {"__label__adult": 0.0011615753173828125, "__label__art_design": 0.001529693603515625, "__label__crime_law": 0.0010004043579101562, "__label__education_jobs": 0.403564453125, "__label__entertainment": 0.0002548694610595703, "__label__fashion_beauty": 0.0006513595581054688, "__label__finance_business": 0.0011501312255859375, "__label__food_dining": 0.0013322830200195312, "__label__games": 0.0016613006591796875, "__label__hardware": 0.0017108917236328125, "__label__health": 0.0016202926635742188, "__label__history": 0.0009484291076660156, "__label__home_hobbies": 0.0005784034729003906, "__label__industrial": 0.0011615753173828125, "__label__literature": 0.0015840530395507812, "__label__politics": 0.000957965850830078, "__label__religion": 0.0013093948364257812, "__label__science_tech": 0.0198516845703125, "__label__social_life": 0.0007753372192382812, "__label__software": 0.006351470947265625, "__label__software_dev": 0.54736328125, "__label__sports_fitness": 0.00130462646484375, "__label__transportation": 0.0018482208251953125, "__label__travel": 0.0006537437438964844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39240, 0.04502]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39240, 0.59727]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39240, 0.94728]], "google_gemma-3-12b-it_contains_pii": [[0, 4886, false], [4886, 10909, null], [10909, 17431, null], [17431, 21111, null], [21111, 27866, null], [27866, 34851, null], [34851, 39240, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4886, true], [4886, 10909, null], [10909, 17431, null], [17431, 21111, null], [21111, 27866, null], [27866, 34851, null], [34851, 39240, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39240, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39240, null]], "pdf_page_numbers": [[0, 4886, 1], [4886, 10909, 2], [10909, 17431, 3], [17431, 21111, 4], [21111, 27866, 5], [27866, 34851, 6], [34851, 39240, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39240, 0.27737]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
d1adfb4d1b0cea1d60b535fb89ff6bdc66e0183a
User-Centered Design and Evaluation of Virtual Environments

The ever-increasing power of computers and hardware rendering systems enables the creation of visually rich and perceptually realistic virtual environment (VE) applications. At the same time, comparatively little effort has gone into the user interaction components of VEs. Although usability engineering is a newly emerging facet of VE development, user-centered design and evaluation in VEs as a practice still lags far behind what's needed. In this article we present a structured, iterative methodology for user-centered design and evaluation of VE user interaction. Figure 1 illustrates our basic technique. We recommend performing (1) user task analysis, followed by (2) expert guidelines-based evaluation, (3) formative user-centered evaluation, and finally (4) summative comparative evaluation. We first give some motivation and background for our methodology, then describe each technique in some detail. We also explain how we applied these techniques to a real-world battlefield visualization VE. Finally, we discuss why this approach provides a cost-effective strategy for assessing and iteratively improving user interaction in VEs.

**Motivation**

The user interaction components of VE applications are often poorly designed and rarely evaluated with users. The vast majority of VE research and design effort has gone into the development of visual quality and rendering efficiency. As a result, many visually compelling VEs are difficult to use and thus unproductive. While these VEs might make good entertainment applications, their usability problems prevent them from being useful for efficiently solving real-world problems. Usability engineering and user-centered design and evaluation are newly emerging facets of VE development. VE designers and developers are becoming aware of traditional human-computer interface (HCI) usability efforts and beginning to apply and expand upon those methods for VEs. A few efforts have been reported to date; however, user-centered design and usability evaluation in VEs as a practice still lags. We have reached the point in VE development when we should shift from largely open-ended explorations of new technologies to more scientific studies of the benefits and impact of VEs on their users. We present a methodology for ensuring the usability of virtual environments through user-centered design and evaluation.

**The two development domains**

Two distinct domains make up interactive system development—behavioral and constructional. The behavioral domain represents the view of the user and the user interaction with the application, while the constructional domain represents the view of the software developer and the overall system. The user interaction component is developed in the behavioral domain—the look, feel, and behavior as a user interacts with an application. User interaction components include all icons, text, graphics, audio, video, and devices through which a user communicates with an interactive system, as well as locomotion, layout, content, and so on. The software component is developed in the constructional domain, including code for both the user interface and the rest of the application. Roles that support each of these domains require different training, skills, and attitudes.
While these roles are relatively well defined and the people holding them well trained for software development in the constructional domain—mainly for software and systems engineers—they’re much less well defined and have far fewer well-trained practitioners for user interaction development in the behavioral domain. This holds especially true for usability engineering of VEs—very few experts exist in user interaction design and evaluation of VEs. Thus, interaction designers and evaluators do their work in the behavioral domain, while software and systems engineers and related roles do their work in the constructional domain. Well-known techniques from software engineering suit developing and evaluating the user interface software component. This kind of software evaluation can have many objectives, such as determining fidelity of a design to its implementation, reliability, reusability, and so on. Usability, however, is not one of these objectives, and usability engineering employs a very different set of methods. It isn’t the user interface software component that’s engineered for usability, but rather the user interaction component (which happens to be instantiated in software). Cooperation between usability engineers and software engineers is essential for VEs to mature toward a truly user-centric work and entertainment experience. Thus, producing any interactive system, including a VE, requires both the behavioral and the constructional domains. Nonetheless, the domain that ensures usability, and in which usability engineering is applied, is the behavioral domain. **Our methodology** VE researchers interested in applying proven usability design and evaluation methods discover few documented, well-tested methods for VE usability engineering. They often consider employing existing GUI-based evaluation and design methods, but limitations and incompatibilities between GUIs and VEs may render these methods inapplicable at best. Methods for usability engineering of VEs need to consider a broad variety of issues not addressed in current methods for evaluating usability of GUIs. For example, how does an evaluator collect verbal protocol and interact with a user immersed in a virtual world that frequently generates its own sound and possibly even uses voice input to control the system? How can evaluators observe both users and visual scenes in a Cave Automated Virtual Environment (CAVE) without altering the users’ sense of presence or situational awareness? How can we study, for example, the best way to represent a virtual person in a meeting (someone physically located elsewhere) to others in that meeting? How do preconceived notions and expectations of VE interfaces manifest themselves in subjective data, and how can we account for this manifestation? Other issues include, for example, how limits on observable data imposed by special VE equipment could impact usability engineering methods, by not allowing an evaluator to see a user’s facial expression. Or a user’s ability to move around, especially in a 3D VE, may make it more difficult for an evaluator to follow the user’s actions to determine if that user is performing a specific task correctly. To support rich and dynamic user-centered design and evaluation of VEs, we must forge new usability engineering methods that merge well-established techniques for evaluation and design of human activity with new, innovative methods capable of analyzing emerging VE-based interaction components. 
We've found a successful, cost-effective progression of methods for VE usability engineering that lets researchers not only improve VE usability, but address some of the pragmatic usability engineering questions presented above. Our methodology, illustrated in Figure 1, is based on sequentially performing

1. user task analysis,
2. expert guidelines-based evaluation,
3. formative user-centered evaluation, and
4. summative comparative evaluations.

We describe each of these tasks in more detail below. While similar methodologies have been applied to traditional (GUI-based) computer systems, this particular methodology is novel because we specifically designed it for and applied it to VEs, and it leverages a set of heuristic guidelines specifically designed for VEs.

[Figure 1. Methodology for the user-centered design and evaluation of VE user interaction: (1) user task analysis yields task descriptions, sequences, and dependencies; (2) expert guidelines-based evaluation applies guidelines and heuristics to produce streamlined user interface designs; (3) formative user-centered evaluation uses representative user task scenarios to produce iteratively refined user interface designs; (4) summative comparative evaluation yields a usable and useful user interface prototype.]

**User task analysis**

A user task analysis is the process of identifying a complete description of tasks, subtasks, and methods required to use a system, as well as other resources necessary for user(s) and the system to cooperatively perform tasks. It follows a formal methodology, described in detail elsewhere. As depicted in Figure 2, a user task analysis represents insights gained through an understanding of user, organizational, and social workflow; needs analysis; and user modeling. A user task analysis generates critical information used throughout all stages of the application development life cycle (and subsequently, all stages of the usability design and evaluation life cycle). A major result is a top-down decomposition of detailed user task descriptions for use by designers and evaluators. Equally revealing results include an understanding of required task sequences as well as sequence semantics. Thus, the results include not only the identification and description of tasks, but also information about the ordering, relationships, and interdependencies among user tasks. Unfortunately, this critical step of user interaction development is often overlooked or poorly done. Without a clear understanding of user task requirements, both evaluators and developers must "best guess" or interpret desired functionality, which inevitably leads to poor user interaction design. Indeed, user interaction developers as well as user interface software developers claim that poor, incomplete, or missing user task analysis is one of the most common causes of poor user interaction design.

**Expert guidelines-based evaluation**

Expert guidelines-based evaluation (heuristic evaluation or usability inspection) aims to identify potential usability problems by comparing a user interaction design—either existing or evolving—to established usability design guidelines. In this analytical evaluation, an expert in user interaction design assesses a particular interface prototype by determining what usability guidelines it violates and supports. Then, based on these findings, especially the violations, the expert makes recommendations to improve the design.
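As the next paragraphs explain, such inspections typically involve several independent evaluators whose findings are then combined and assessed. One way to picture that bookkeeping is as a merge of per-evaluator findings. The following is a minimal Python sketch; the guideline IDs, severity scale, and evaluator names are all invented for illustration, and this is not code from any published tool:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One usability problem noted by one evaluator."""
    guideline_id: str   # e.g., "locomotion-03" (illustrative IDs)
    severity: int       # 1 (cosmetic) .. 4 (catastrophic), an invented scale
    note: str

def merge_findings(per_evaluator: dict[str, list[Finding]]) -> dict[str, dict]:
    """Combine independent inspection passes: count how many evaluators
    flagged each guideline and keep the highest severity assigned to it."""
    merged: dict[str, dict] = defaultdict(
        lambda: {"evaluators": set(), "severity": 0, "notes": []})
    for evaluator, findings in per_evaluator.items():
        for f in findings:
            entry = merged[f.guideline_id]
            entry["evaluators"].add(evaluator)
            entry["severity"] = max(entry["severity"], f.severity)
            entry["notes"].append(f"{evaluator}: {f.note}")
    return dict(merged)

# Each expert inspects the design independently; results are then combined.
sessions = {
    "evaluator_a": [Finding("locomotion-03", 3,
                            "zoom mapped to an unintuitive button")],
    "evaluator_b": [Finding("locomotion-03", 4,
                            "could not recover after over-zooming"),
                    Finding("feedback-01", 2,
                            "no textual cue for current mode")],
}

for gid, entry in sorted(merge_findings(sessions).items(),
                         key=lambda kv: -kv[1]["severity"]):
    print(gid, f"severity={entry['severity']}",
          f"flagged_by={len(entry['evaluators'])}")
```

Problems flagged by several evaluators, or at high severity, would be the natural candidates for discussion and redesign.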
In the case of VEs, guidelines-based evaluation proves particularly challenging because so few guidelines exist specific to VE user interaction. Typically more than one person performs guidelines-based evaluations, since it's unlikely that any one person could identify most if not all of an interaction design's usability problems. Nielsen recommends three to five evaluators for a GUI heuristic evaluation, since fewer evaluators generally cannot identify enough problems to warrant the expense, while more evaluators produce diminishing results at higher costs. It's not clear whether this recommendation is cost effective for VEs, since more complex VE interaction designs may require more evaluators than do GUIs. Each evaluator first inspects the design independently of other evaluators' findings. Results are then combined, documented, and assessed as evaluators communicate and analyze both common and conflicting usability findings. Further, Nielsen suggests a two-pass approach. During the first pass, evaluators gain an understanding of the general flow of interaction. During the second pass, evaluators identify specific interaction components and conflicts as they relate to both task flow and the larger-scope interaction paradigm. This method is best applied early in the development cycle so that design issues can be addressed as part of the iterative design and development process.

Expert guidelines-based evaluations rely on established usability guidelines to determine whether a user interaction design supports intuitive user task performance. While these heuristics are considered the de facto standard for GUIs, we have found them too general, ambiguous, and high level for effective and practical heuristic evaluation of VEs. Recently, we produced a set of usability design guidelines specifically for VEs, contained within a framework of usability characteristics. This framework document (available on the Web at http://www.vpst.org/jgabbard/ve/framework/) provides a reasonable starting point for heuristic evaluation of VEs. The complete document contains several associated usability resources, including specific usability guidelines and detailed context-driven discussion of the numerous usability issues the guidelines address.

[Figure: Formative user-centered evaluation cycle: (1) designers/evaluators develop user task scenarios; (2) representative users perform scenarios using a "think out loud" protocol; (3) evaluators collect qualitative and quantitative usability data; (4) designers/evaluators suggest improvements for user interaction; (5) designers/evaluators refine user task scenarios.]

The framework organizes VE user interaction design guidelines and the related context-driven discussion into four major areas:

1. users and user tasks,
2. input mechanisms,
3. virtual models, and
4. presentation mechanisms.

The framework categorizes 195 guidelines covering many aspects of VEs that affect usability, including locomotion, object selection and manipulation, user goals, fidelity of imagery, input device modes and usage, interaction metaphors, and more. The guidelines presented within the framework document suit performing guidelines-based evaluation of VE user interfaces and interaction, since they provide broad coverage of VE interaction and interfaces yet are specific enough for practical application. For example, with respect to navigation within VEs, one guideline reads "provide information so that users can always answer the questions: Where am I now? What is my current attitude and orientation? Where do I want to go?
How do I travel there?" Another guideline addresses methods to aid in usable object selection techniques, stating "use transparency to avoid occlusion during selection."

**Summative comparative evaluation**

In contrast to formative user-centered evaluation, summative comparative evaluation is an empirical assessment of an interaction design in comparison with other maturing interaction designs for performing the same user tasks. Summative evaluation is typically performed with some more-or-less final versions of interaction designs, and it yields primarily quantitative results. The purpose of summative comparative evaluation is to statistically compare user performance with different interaction designs, for example, to determine which one is better, where "better" is defined in advance. When used to assess user interfaces, summative evaluation can be thought of as experimental evaluation with users comparing two or more configurations of user interface components, interaction paradigms, interaction devices, and so forth. Comparing devices and interaction techniques employs a consistent set of user task scenarios (developed during formative evaluation and refined for summative evaluation), resulting in primarily quantitative data that compare (on a task-by-task basis) the designs' ability to support user task performance.

**An effective progression**

Through our recent work, we found that the progression of methods we present suits cost-effective, efficient design and evaluation of VEs particularly well. Refer to Figure 1 throughout the following discussion.

A user task analysis provides the basis for design and evaluation in terms of what types of tasks and task sequences users will need to perform within a specific VE. This analysis generates (among other outputs) a list of detailed task descriptions, sequences, and relationships, as well as user work and information flow. It provides a basis for design and application of subsequent evaluation methods. For example, the user task analysis may help eliminate or identify specific guidelines or sets of guidelines during expert guidelines-based evaluation. In a similar fashion, a user task analysis serves both as a basis for user evaluation scenario development and as a checklist for evaluation coverage. That is, a well-developed task analysis provides evaluators with a complete list of end-use functionality detailing not only which tasks are to be performed but also likely task sequences and dependencies. Ordering and dependencies of user tasks are critical to powerful user evaluation scenario development. The closer the match between user task analysis and actual end user tasking, the better and more effective the final user interaction design.

An expert guidelines-based evaluation is the first assessment of an interaction design based on the user task analysis and application of guidelines for VE interaction design. This extremely useful evaluation removes many obvious usability problems from an interaction design. A VE interaction design expert will find both subtle and major usability problems through a guidelines-based evaluation. Once problems are identified, experts perform further assessment to understand how particular interaction components, devices, and so on affect user performance. Results of expert guidelines-based evaluations are critical to effective formative and summative evaluations. For example, these results (coupled with results of user task analysis) serve as a basis for user scenario development.
That is, if expert guidelines-based evaluation identifies a possible mismatch between implementation of a wireless 3D input device and manipulation of user viewpoint, then scenarios requiring users to manipulate the viewpoint should be included in formative evaluations. Results of expert guidelines-based evaluations are also used to streamline subsequent evaluations. Further, critical usability problems identified during expert guidelines-based evaluation are corrected prior to performing formative evaluations, affording formative evaluations that don't waste time exposing obvious usability problems addressed by the guidelines-based evaluation.

Because formative evaluation involves typical users, it most effectively uncovers issues (such as missing user tasks) that an expert performing a guidelines-based evaluation might be unaware of. A formative evaluation following a guidelines-based evaluation can focus not on major, obvious usability issues, but rather on those more subtle and more difficult to recognize issues. This becomes especially important because of the cost of VE development. Coupling expert guidelines-based evaluations with formative user-centered evaluation helps successfully refine GUIs. Nielsen recommends alternating expert guidelines-based evaluations and formative evaluation. The rationale is that no single method can reliably identify any and all usability problems. Indeed, guidelines-based evaluation and formative evaluation complement each other, often revealing usability problems that the other may have missed.

Finally, a summative comparative evaluation following the preceding activities compares good apples to good oranges rather than comparing possibly rotten apples to good oranges. That is, summative studies comparing VEs whose interaction design has had little or no task analysis, guidelines-based evaluation, and/or formative evaluation may really be comparing one VE interaction design that is (for whatever reasons) inherently better—in terms of usability—to a different (and worse) VE interaction design. The first three methods produce a set of well-developed, iteratively refined, user interface designs. Subsequently, the designs compared in the summative study should be as usable, and comparably usable, as feasible. This means that any differences found in a summative comparison are much more likely the result of differences in the designs' basic nature rather than true differences in usability. Again, because of the cost of VE development, this confidence in results proves especially consequential.

The progression of methods is structured at a high level for application to any VE, regardless of the hardware, software, or interaction style used. Employing case-specific task analysis, guidelines, and user task scenarios facilitates broad applicability. As such, each specific method is flexible enough to support evaluation of any VE subsystem (visual, auditory, or haptic, for example) or combination thereof.

Figure 4 shows additional properties of the three types of evaluation. The solid arrows underscore the methods' application sequence. We recommend applying expert guidelines-based evaluation first, perhaps iterating several times. The least expensive evaluation to perform and very general, it can cover large portions (if not all) of the user interface. However, expert guidelines-based evaluation isn't very precise: it gives only general indications of what might be wrong and doesn't address how to fix usability problems.
We next apply formative usability evaluation, which is more expensive (it requires users and task scenarios) and less general (a smaller portion of the user interface can be covered per session). However, the results are more precise, often revealing where problems occur and suggesting ways to fix them. Typically iterated several times, formative usability evaluation may lead to additional expert guidelines-based evaluation of modified or missed portions of the user interface. Finally, summative evaluations are very expensive (requiring many more subjects than formative usability evaluations) and also extremely specific—they can answer only very narrowly defined questions. However, summative evaluations answer these questions with a high degree of precision: it's the only type of evaluation that can statistically quantify how much better one design is than another.

**The Dragon battlefield visualization VE**

Collaborating with researchers from Virginia Tech, personnel at the Naval Research Laboratory's Virtual Reality Lab developed a VE for battlefield visualization called Dragon (Figure 5). We applied a slightly less refined version of our usability engineering methodology to the design and evaluation of Dragon's user interaction component. In this section we briefly describe Dragon and the application domain of battlefield visualization. In the next section we discuss how we applied the methodology to Dragon.

For decades, battlefield visualization has relied on paper maps of the battlespace placed under sheets of acetate. As intelligence reports arrive from the field, technicians use grease pencils to mark new information on the acetate. Commanders then draw on the acetate to plan and direct various battlefield situations. Thus, the map and acetate together present a visualization of the battlespace. Using maps and overlays can take several hours to print, distribute, and update. Historically (before high-quality paper maps), these same operations were performed on a sandtable (a box filled with sand shaped to replicate the battlespace terrain). Commanders moved small physical replicas of battlefield objects to direct battlefield situations. Currently, the fast-changing modern battlefield produces so much time-critical information that these cumbersome, time-consuming methods are inadequate for effectively visualizing the battlespace.

In Dragon, a Responsive Workbench provides a 3D display for observing and managing battlespace information shared among commanders and other battle planners. Visualized information includes a high-resolution terrain map; entities representing friendly, enemy, unknown, and neutral units; and symbology representing other features such as obstructions or key battle objectives. Dragon receives electronic intelligence feeds that provide constantly updated, displayable information about each entity's status, including position, speed, heading, damage condition, and so forth. Users can navigate to observe the map and entities from any angle and orientation, and can query and manipulate entities. A user interacts with Dragon using a three-button game flightstick (removed from its base) fitted with a six-degrees-of-freedom position sensor. Dragon tracks the flightstick's position and orientation relative to an emitter located on the front center of the Workbench. A virtual laser pointer metaphor is used: a laser beam appears to come out of the flightstick, allowing interaction with the terrain or object that the beam intersects.
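The virtual laser pointer metaphor reduces, mathematically, to ray casting from the tracked flightstick pose: the sensor's position and orientation define a ray, and the first surface the ray hits is the interaction target. Below is a deliberately simplified Python illustration, not Dragon's actual code; it assumes a flat terrain plane and a unit-length beam direction:

```python
import math

def laser_hit_on_plane(origin, direction, plane_z=0.0):
    """Cast a ray from the tracked flightstick pose and return where it
    meets the horizontal plane z == plane_z, or None if it points away.

    origin:    (x, y, z) position of the flightstick sensor
    direction: unit vector along the virtual laser beam
    """
    dz = direction[2]
    if abs(dz) < 1e-9:          # beam parallel to the terrain plane
        return None
    t = (plane_z - origin[2]) / dz
    if t <= 0:                  # plane lies behind the flightstick
        return None
    return tuple(origin[i] + t * direction[i] for i in range(3))

# Flightstick held half a meter above the Workbench, tilted 30 degrees
# downward toward the map; the returned point is the laser "hit" on the map.
pitch = math.radians(-30)
direction = (0.0, math.cos(pitch), math.sin(pitch))
print(laser_hit_on_plane((0.0, 0.0, 0.5), direction))
```

A real system would intersect the ray against the terrain mesh and entity bounding volumes rather than a single plane, but the selection logic is the same.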
**Applying the methodology to Dragon**

We used the basic Dragon VE application as an instrumentable testbed, modified as needed for our expert guidelines-based and formative user-centered evaluation purposes. We performed extensive evaluations over a nine-month period, using anywhere from one to three users for each cycle of evaluation, and using two to three evaluators per session. From a single evaluation session, we often uncovered design problems so serious that it was pointless to have different users attempt to perform the scenarios with the same design. So we would iterate the design, based on our observations, and begin a new cycle of evaluation. We went through four major cycles of iteration during our evaluation of Dragon,9 each cycle using the progression of usability methods described previously.

**User task analysis**

Early Dragon developers performed a user task analysis by interviewing several US Navy personnel who use the current system of battlespace visualization (acetate, paper maps, and grease pens). This included both commanders and lower-level technicians. Important Dragon-specific tasks identified included planning and shaping a battlefield, comprehending situational awareness in a changing battlespace, performing engagement and execution exercises, and carrying out "what if" (contingency planning) exercises. The user task analysis also examined how personnel perform their current battlespace visualization tasks. This task analysis took place before we joined the project. However, we revisited the task analysis several times during the course of our own early work and enhanced it with our own observations and interviews.

During our early work, we observed that locomotion—how users manipulate their viewpoint to move from place to place in a virtual world (in this case, the map for battlefield visualization)—profoundly affects all other user tasks. If a user cannot successfully locomote in a virtual world, then other user tasks (involving specific objects or groups of objects, for example) become impossible. A user cannot query an object if the user cannot navigate through the virtual world to get to that object. Locomotion is a generic (as opposed to Dragon-specific) task that users of almost any VE will have to perform. Thus, we chose locomotion as a major focus of our subsequent work with Dragon.

**Expert guidelines-based evaluations**

During our expert guidelines-based evaluations, various user interaction design experts worked alone or collectively to assess the evolving user interaction design for Dragon. In our earliest heuristic evaluations, the experts didn't follow specific user task scenarios per se, but simply engaged with the user interface. All experts knew enough about the purpose of Dragon as a battlefield visualization VE to explore the kinds of tasks most important for users. During each heuristic evaluation session, one person typically "drove," holding the flightstick and generally deciding what and how to explore in the application. One and sometimes two other experts observed, commented, and collected data. Much discussion occurred during each session. We were often, but not always, the experts assessing the current design. Our assessment and discussions were guided largely by our own knowledge of interaction design for VEs and, more formally, by the framework for usability characteristics discussed above. This framework provided a more structured means of evaluation than merely wandering around in the application.
It also provided guidance on how to make modifications to improve discovered design guideline violations. Major design problems uncovered by the expert guidelines-based evaluation included poor mapping of locomotion tasks (pan, zoom, pitch, heading) to flightstick buttons, missing user tasks (egocentric rotate, terrain following), problems with damping of map movement in response to flightstick movement, and inadequate graphical and textual feedback to the user about the current locomotion task (pan, zoom, and so forth). We discuss these problems, and how we addressed them, elsewhere.9 After our cycles of expert guidelines-based evaluation had revealed and remedied as many design flaws as possible, we moved on to formative evaluations. **Formative user-centered evaluations** Based on our user task analysis and early expert guidelines-based evaluations, we created a set of user task scenarios consisting of benchmark user tasks, carefully considered for coverage of specific issues related to locomotion. For example, some of the tasks exploited an egocentric (users move themselves) locomotion metaphor, while others exploited an exocentric (users move the world) locomotion metaphor. Some scenarios exercised various locomotion tasks (degrees of freedom: pan, zoom, rotate, heading, pitch, roll) throughout the virtual map world. Other scenarios served as primed exploration or nonprimed searches, while still others were designed to evaluate rate control versus position control in the virtual world. We thoroughly pretested and debugged all scenarios before presenting them to users during an evaluation session. During each of six formative evaluation sessions, we followed a formal protocol of welcoming the user, giving an overview of the evaluation about to be performed, and then explaining the Responsive Workbench and the Dragon application. We carefully avoided explaining details of the Dragon interaction design, since that was what we were evaluating. Then we asked the user to play with the flightstick to figure out which button activated which locomotion task (pan, zoom, and so on). We timed each user as they attempted to determine this and took notes on comments they made and any critical incidents that occurred. Once a user had successfully figured out how to use the flightstick, we began having them perform the scenarios. If about 15 minutes passed without a user figuring out the flightstick and its buttons (this happened in only one case), we filled in details that they had not yet determined and moved on to scenarios. Time to perform the set of scenarios ranged from about 20 minutes to more than an hour. We timed user performance of individual tasks and scenarios, and counted errors they made during task performance (quantitative data). A typical error was moving the flightstick in the wrong direction for the particular locomotion metaphor (exocentric or egocentric) currently in use. Other errors involved simply not being able to maneuver the map (to rotate it, for example) and persistent problems with mapping locomotion tasks to flightstick buttons. (Again, we discuss these further elsewhere.) We also carefully noted critical incidents, especially related to errors, and constructive comments users made about the design (qualitative data). During each session, we had at least two and often three evaluators present. The leader ran the session and interacted with the user; the other one or two evaluators recorded timings, counted errors, and collected qualitative data. 
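The kind of record keeping described above can be made concrete with a small data structure. This Python sketch is illustrative only (class names and fields are invented); it times each benchmark task, counts errors, and stores critical-incident notes, mirroring the roles of the leader and the supporting evaluators:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ScenarioRecord:
    """Quantitative and qualitative data for one benchmark task."""
    task: str
    seconds: float = 0.0
    errors: int = 0
    incidents: list[str] = field(default_factory=list)  # critical incidents

class SessionLog:
    """One formative evaluation session: time each task, count errors,
    and note critical incidents as they happen."""
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.records: list[ScenarioRecord] = []
        self._start = 0.0

    def start_task(self, task: str) -> None:
        self.records.append(ScenarioRecord(task))
        self._start = time.monotonic()

    def end_task(self) -> None:
        self.records[-1].seconds = time.monotonic() - self._start

    def error(self, note: str) -> None:
        rec = self.records[-1]
        rec.errors += 1
        rec.incidents.append(note)

log = SessionLog("user_01")
log.start_task("rotate map to view objective from the south")
log.error("moved flightstick in wrong direction for exocentric metaphor")
log.end_task()
for r in log.records:
    print(r.task, f"{r.seconds:.1f}s", f"errors={r.errors}")
```

Aggregating such logs across users and iterations is what lets each design cycle be grounded in both the quantitative and qualitative data.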
While both the expert guidelines-based evaluation sessions and the formative evaluation sessions were personnel-intensive (with two or three evaluators involved), we found that the quality and amount of data collected by multiple evaluators greatly outweighed the cost of those evaluators. After each session, we analyzed both the quantitative and qualitative data, and based the next design iteration on our results.

**Summative comparative evaluations**

Our current work aims to summatively evaluate the mature locomotion design. During our expert guidelines-based and formative evaluations, we discovered many different variables affecting locomotion usability in VEs. We narrowed this (initially large) list to five variables, drawing on the framework of usability characteristics, our observations during heuristic and formative evaluations, and our expertise in VE interaction design. We feel these five variables have the greatest effect on locomotion and are therefore the most important candidates for summative evaluations (the sketch at the end of this section enumerates the design space they define):

1. locomotion metaphor (ego- versus exocentric),
2. gesture control (controls rate versus controls position),
3. visual presentation device (workbench, desktop, CAVE),
4. head tracking (present versus not present), and
5. stereopsis (present versus not present).

**Lessons learned**

As explained, we found that our usability engineering methodology had a major impact: Results from formative usability evaluations inform the design of summative studies by helping determine appropriate usability characteristics to evaluate and compare in summative studies. Invariably, numerous alternatives can be considered as factors in a summative evaluation. Formative evaluations typically point out the most important usability characteristics and issues (such as those that recur most frequently, those that have the largest impact on user performance and satisfaction, and so on). These issues then become strong candidates for inclusion in a summative evaluation. For example, if formative evaluation shows that users have a problem with format or placement of textual information in a heavily graphical display, a summative evaluation could explore alternative ways of presenting such textual information. Further, if users want different display modes (for example, stereoscopic and monoscopic, head-tracked and static, landscape view and overhead view of a map), these various configurations can also be the basis of rich comparative studies related to usability. As yet another example of a potential usability problem, users might have difficulty moving around in an immersive 3D version of a VE, but not in a 2D, non-immersive version. A summative study could investigate which parameter(s) of the 3D version cause the problem yet don't appear in the 2D version.

An important advantage of applying the complete progression of methods is the timeliness of assessment efforts, aligning each component's strengths (such as level of detail or breadth of focus) with concurrent efforts in the software development process. For example, a user task analysis typically is performed at the onset of interaction design, prior to any prototype development. As prototype designs (paper and pencil prototypes, for example) start to emerge, expert guidelines-based evaluation can begin. As computer-based prototypes are developed, they take on a richer set of functionality, perfect for iterative formative user-centered evaluation. Finally, one or more candidate designs are available for summative comparative evaluation.
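The five variables above define a 2 × 2 × 3 × 2 × 2 factorial space. A minimal Python sketch (the per-condition subject count is purely illustrative) shows why such a design is expensive and why the variable list had to be narrowed first:

```python
from itertools import product

# The five locomotion variables identified for summative evaluation.
factors = {
    "metaphor": ["egocentric", "exocentric"],
    "gesture":  ["rate control", "position control"],
    "display":  ["workbench", "desktop", "CAVE"],
    "tracking": ["head tracking", "no head tracking"],
    "stereo":   ["stereopsis", "no stereopsis"],
}

conditions = list(product(*factors.values()))
print(f"{len(conditions)} conditions in the full factorial design")  # 48

# Even a between-subjects study with, say, 10 users per condition would
# need 480 participants -- one reason summative evaluation is costly and
# fractional or partial designs are often chosen instead.
for cond in conditions[:3]:
    print(dict(zip(factors, cond)))
```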
Once complete, results and documentation from evaluation efforts provide an effective means of persistent design rationale. In complex development environments, tracking—often months after the fact—why particular interaction design changes were made can be very difficult if not impossible.

To ensure accuracy and aid effectiveness, the design and development team should include one or more domain experts. These experts provide specific context-related information to help usability experts understand cognitive task and information flow requirements. Domain experts also help direct and rank analysis focus so that evaluation resources are allocated to the most important usability issues. Moreover, having a domain expert on-board early in the design, evaluation, and development cycles helps that expert understand the domain of usability evaluation. This enables the domain expert to become a much more effective resource during subsequent evaluation phases.

This concludes our presentation of a methodology for usability engineering of virtual environments. We hope this work provides a starting point for techniques that let practitioners engineer VE interaction that is both useful and usable.

**Acknowledgments**

Many people contributed to Dragon's development and therefore to this reported work. NRL personnel involved with major efforts include Jim Durbin, Tony King, Eddy Kuo, Brad Colbert, and Chris Scannell. Other developers included John Crowe, Josh Davies, Bob Doyle, Rob King, Greg Newton, and Josh Summers. At NRL, Larry Rosenblum and Dave Tate gave leadership, inspiration, and guidance to this project. Jim Templeman and Linda Sibert of NRL and Bob Williges of Virginia Tech provided valuable suggestions. This research was funded by the Office of Naval Research under program managers Helen Gigley and Paul Quinn. We would like to thank Helen Gigley for her continued support of ongoing synergistic collaboration in human-computer interaction research between Virginia Tech and NRL over the past several years.

Joseph L. Gabbard is the lead scientist at VPST, where he performs VE-based human-computer interaction research. He is currently interested in researching and developing usability engineering methods specifically for VEs. Other interests include developing innovative and intuitive interaction techniques employing ubiquitous input technology. He is currently pursuing his PhD in computer science at Virginia Polytechnic Institute in Blacksburg, Virginia. He received his MS in computer science, BS in computer science, and BA in sociology from Virginia Tech. He is a member of the IEEE, IEEE Computer Society, and Society for Computer Simulation International.

Deborah Hix is a Computer Science faculty member at Virginia Polytechnic Institute and State University in Blacksburg, Virginia, and a founder and principal investigator of the Virginia Tech Human-Computer Interaction (HCI) Project. Most recently, Hix has extended her HCI work into VE usability. She has done extensive consulting and training in the area of user interface development for nearly 20 years. She is co-author of Developing User Interfaces: Ensuring Usability through Product and Process (John Wiley and Sons, New York, 1993).

J. Edward Swan II is a scientist with the Virtual Reality Laboratory at the Naval Research Laboratory, where he conducts research in computer graphics and human-computer interaction. At the Naval Research Laboratory he is primarily motivated by the problem domain of battlefield visualization.
Currently he is studying effective VE locomotion techniques for battlefield visualization, as well as new techniques in terrain rendering. He received his BS from Auburn University, and his MS and PhD from Ohio State University in 1997. He is a member of ACM, Siggraph, Sigchi, IEEE, and the IEEE Computer Society. Readers may contact Gabbard at VPST, 2000 Kraft Dr., Suite 2600, Blacksburg, VA 24060-6354, e-mail jgabbard@vpst.org, http://www.vpst.org.
{"Source-Url": "http://ivizlab.sfu.ca/arya/Papers/IEEE/C%20G%20&%20A/1999/December/User-Centered%20Design%20of%20Virtual%20Environments.pdf", "len_cl100k_base": 7068, "olmocr-version": "0.1.49", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 28044, "total-output-tokens": 8155, "length": "2e12", "weborganizer": {"__label__adult": 0.000812530517578125, "__label__art_design": 0.01340484619140625, "__label__crime_law": 0.0006399154663085938, "__label__education_jobs": 0.0211944580078125, "__label__entertainment": 0.0005393028259277344, "__label__fashion_beauty": 0.0004868507385253906, "__label__finance_business": 0.0005116462707519531, "__label__food_dining": 0.0007328987121582031, "__label__games": 0.0061492919921875, "__label__hardware": 0.0036468505859375, "__label__health": 0.0018110275268554688, "__label__history": 0.001506805419921875, "__label__home_hobbies": 0.00029850006103515625, "__label__industrial": 0.0009202957153320312, "__label__literature": 0.0013141632080078125, "__label__politics": 0.0004367828369140625, "__label__religion": 0.0010366439819335938, "__label__science_tech": 0.4248046875, "__label__social_life": 0.00023353099822998047, "__label__software": 0.027801513671875, "__label__software_dev": 0.489013671875, "__label__sports_fitness": 0.0009565353393554688, "__label__transportation": 0.0012836456298828125, "__label__travel": 0.0005526542663574219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41152, 0.01134]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41152, 0.48445]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41152, 0.9263]], "google_gemma-3-12b-it_contains_pii": [[0, 3867, false], [3867, 8118, null], [8118, 12550, null], [12550, 15177, null], [15177, 20581, null], [20581, 24091, null], [24091, 30279, null], [30279, 36423, null], [36423, 41152, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3867, true], [3867, 8118, null], [8118, 12550, null], [12550, 15177, null], [15177, 20581, null], [20581, 24091, null], [24091, 30279, null], [30279, 36423, null], [36423, 41152, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41152, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41152, null]], "pdf_page_numbers": [[0, 3867, 1], [3867, 8118, 2], [8118, 12550, 3], [12550, 15177, 4], [15177, 20581, 5], [20581, 24091, 6], [24091, 30279, 7], [30279, 36423, 8], [36423, 41152, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41152, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
7887963eea820a67894d2afaf9b4f85c9356a000
This specification defines an XMPP protocol extension for communicating information about the current geographical or physical location of an entity.

Legal

Copyright

This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).

Permissions

Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.

Warranty

NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.

Conformance

This XMPP Extension Protocol has been contributed in full conformance with the XSF's Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).

Contents

1 Introduction
2 Requirements
3 Data Format
4 Recommended Transport
4.1 Entity publishes location via PEP
5 Implementation Notes
6 Mapping to Other Formats
7 Internationalization Considerations
8 Security Considerations
9 IANA Considerations
10 XMPP Registrar Considerations
10.1 Protocol Namespaces
11 XML Schema

### 1 Introduction

This document defines a format for capturing data about an entity's geographical location (geoloc).
The format defined herein can describe most earthbound geographical locations, especially locations that may change fairly frequently. Potential uses for this approach include:

• Publishing location information to a set of subscribers.
• Querying another entity for its location.
• Sending location information to another entity.
• Attaching location information to presence.

Geographical location is captured in terms of Global Positioning System (GPS) coordinates as well as civil location (city, street, building, etc.).

### 2 Requirements

The format defined herein was designed to address the following requirements:

• It shall be possible to encapsulate location in terms of Global Positioning System (GPS) coordinates as well as civil location (city, street, building, etc.).
• The GPS encoding mechanism shall have a single set of units, so that receivers do not need to use heuristics to determine an entity's position.
• It shall be possible to specify the known amount of error in the GPS coordinates.
• It shall be possible to include a natural-language description of the location.

### 3 Data Format

Information about the entity's location is provided by the entity and propagated on the network by the entity's associated application (usually a client). The information is structured by means of a <geoloc/> element that is qualified by the 'http://jabber.org/protocol/geoloc' namespace; the location information itself is provided as the XML character data of the following child elements:

| Element Name | Datatype | Definition | Example |
|---|---|---|---|
| `accuracy` | xs:decimal | Horizontal GPS error in meters; this element obsoletes the `<error/>` element | 10 |
| `alt` | xs:decimal | Altitude in meters above or below sea level | 1609 |
| `altaccuracy` | xs:decimal | Vertical GPS error in meters | 10 |
| `area` | xs:string | A named area such as a campus or neighborhood | Central Park |
| `bearing` | xs:decimal | GPS bearing (direction in which the entity is heading to reach its next waypoint), measured in decimal degrees relative to true north. It is the responsibility of the receiver to translate bearing into decimal degrees relative to magnetic north, if desired. | |
| `building` | xs:string | A specific building on a street or in an area | The Empire State Building |
| `country` | xs:string | The nation where the user is located | United States |
| `countrycode` | xs:string | The ISO 3166 two-letter country code | US |
| `datum` | xs:string | GPS datum. If datum is not included, the receiver MUST assume WGS84; receivers MUST implement WGS84; senders MAY use another datum, but it is not recommended. | |
| `description` | xs:string | A natural-language name for or description of the location | Bill's house |
| `error` | xs:decimal | Horizontal GPS error in arc minutes; this element is deprecated in favor of `<accuracy/>` | 290.8882087 |
| `floor` | xs:string | A particular floor in a building | 102 |
| `lat` | xs:decimal | Latitude in decimal degrees North | 39.75 |
| `locality` | xs:string | A locality within the administrative region, such as a town or city | New York City |
| `lon` | xs:decimal | Longitude in decimal degrees East | -104.99 |
| `postalcode` | xs:string | A code used for postal delivery | 10118 |
| `region` | xs:string | An administrative region of the nation, such as a state or province | New York |
| `room` | xs:string | A particular room in a building | Observatory |
| `speed` | xs:decimal | The speed at which the entity is moving, in meters per second | 52.69 |
| `street` | xs:string | A thoroughfare within the locality, or a crossing of two thoroughfares | 350 Fifth Avenue / 34th and Broadway |
| `text` | xs:string | A catch-all element that captures any other information about the location | Northwest corner of the lobby |
| `timestamp` | xs:dateTime | UTC timestamp specifying the moment when the reading was taken (MUST conform to the DateTime profile of XMPP Date and Time Profiles, XEP-0082: <https://xmpp.org/extensions/xep-0082.html>) | 2004-02-19T21:12Z |

### 4 Recommended Transport

Location information about human users SHOULD be communicated and transported by means of Publish-Subscribe (XEP-0060) or the subset thereof specified in Personal Eventing Protocol (XEP-0163) (the examples below assume that the user's XMPP server supports PEP, thus the publish request lacks a 'to' address and the notification message has a 'from' address of the user's bare JID).
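As a concrete illustration of this format, the following Python sketch (standard library only; the helper function is invented for illustration) serializes a few of the child elements from the table above into a namespaced <geoloc/> payload of the kind carried by the PEP publish shown next:

```python
import xml.etree.ElementTree as ET

NS = "http://jabber.org/protocol/geoloc"
XML_NS = "http://www.w3.org/XML/1998/namespace"  # predefined 'xml' prefix

def build_geoloc(**fields: object) -> ET.Element:
    """Serialize keyword arguments (child element name -> value) into a
    namespaced <geoloc/> element, as defined in the table above."""
    geoloc = ET.Element(f"{{{NS}}}geoloc")
    geoloc.set(f"{{{XML_NS}}}lang", "en")  # serialized as xml:lang='en'
    for name, value in fields.items():
        ET.SubElement(geoloc, f"{{{NS}}}{name}").text = str(value)
    return geoloc

payload = build_geoloc(accuracy=10, country="Italy", lat=45.44,
                       locality="Venice", lon=12.33)
ET.register_namespace("", NS)  # emit the geoloc namespace as the default
print(ET.tostring(payload, encoding="unicode"))
```

A real client would wrap this element in the `<item/>`/`<publish/>`/`<pubsub/>` structure shown in the listings below before sending it over the stream.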
Although the XMPP publish-subscribe extension is the preferred means for transporting location information about human users, applications that do not involve human users (e.g., device tracking) MAY use other transport methods; however, because location information is not pure presence information and can change independently of network availability, it SHOULD NOT be provided as an extension to <presence/>.

#### 4.1 Entity publishes location via PEP

Listing 1: Entity publishes location

```xml
<iq type='set' from='portia@merchantofvenice.lit/pda' id='publish1'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='http://jabber.org/protocol/geoloc'>
      <item>
        <geoloc xmlns='http://jabber.org/protocol/geoloc' xml:lang='en'>
          <accuracy>0</accuracy>
          <country>Italy</country>
          <lat>45.44</lat>
          <locality>Venice</locality>
          <lon>12.33</lon>
        </geoloc>
      </item>
    </publish>
  </pubsub>
</iq>
```

In order to indicate that the user is no longer publishing any location information, the user's client shall send an empty <geoloc/> element, which can be considered a "stop command" for geolocation:

Listing 3: User stops publishing geolocation information

```xml
<iq from='portia@merchantofvenice.lit/pda' id='publish2' type='set'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='http://jabber.org/protocol/geoloc'>
      <item>
        <geoloc xmlns='http://jabber.org/protocol/geoloc'/>
      </item>
    </publish>
  </pubsub>
</iq>
```

### 5 Implementation Notes

Avoid "Mars probe" issues: as specified in Table 1, the units for <lat/> and <lon/> MUST be decimal degrees (where South and West are negative, North and East are positive), the units for <alt/> MUST be meters above or below sea level, and the units for <accuracy/> MUST be meters. (The <accuracy/> element obsoletes the older <error/> element, which specified units of arc minutes instead of meters.)

In applications where updates are sent whenever there is a certain distance change in location, those applications SHOULD account for time as well, to avoid rate-limiting when the user is (for example) on a jet plane. One possible way to do this would be to send updates at most once per minute (that is, only after 60 seconds have elapsed since the previous update).

Inferences SHOULD NOT be made about accuracy from the number of digits specified in the location or altitude.

Why the datum madness? See <http://www.xmpp.org/extensions/gps_datum.html> for an example.

An entity can provide a GPS path by publishing a series of items (i.e., multiple pubsub events) with appropriate values of the <timestamp/> element.

### 6 Mapping to Other Formats

There are many XML data formats for physical location or address information. It is beyond the scope of this document to provide a mapping from the extension defined herein to every such format. However, it would be valuable to provide a mapping from the XMPP format to the formats used in other presence or extended presence protocols. The two main protocols of interest are:

1. The Wireless Village (now "IMPS") specifications for mobile instant messaging; these specifications define a presence attribute for address information as encapsulated in the IMPS "Address" element.
2. The SIP-based SIMPLE specifications; in particular, the IETF's GEOPRIV Working Group has defined an extension to the IETF's Presence Information Data Format (PIDF) for location information, as specified in RFC 4119 (also known as "PIDF-LO").

The following table also maps the format defined herein to the vCard XML format specified in vcard-temp (XEP-0054).
| XMPP | Wireless Village / IMPS | SIMPLE (PIDF-LO) | vCard XML |
|---|---|---|---|
| `<country/>` | `<Country/>` | -- | `<CTRY/>` (as noted in XEP-0054, the XML vCard format defined in draft-dawson-vcard-xml-dtd-01 specified a `<COUNTRY/>` element rather than a `<CTRY/>` element; refer to XEP-0054 for details) |
| `<region/>` | -- | `<A1/>` and/or `<A2/>` | -- |
| `<locality/>` | `<City/>` | `<A3/>` | `<LOCALITY/>` |
| `<area/>` | `<NamedArea/>` | `<A4/>` and/or `<A5/>` | -- |
| `<street/>` | (The IMPS specification enables one to define an intersection, e.g., "Broadway and 34th Street", as the combination of a `<Crossing1/>` element, e.g., "Broadway", and a `<Crossing2/>` element, e.g., "34th Street". To map from IMPS to XMPP, an application SHOULD map such a combination to one XMPP `<street/>` element.) | `<A6/>` (The PIDF-LO format provides information elements for much more granular control over a traditional street address; in PIDF-LO the `<A6/>` element is the street name only, and further information is provided in distinct elements for a leading street direction, e.g., "N", trailing street suffix, e.g., "SW", street suffix, e.g., "Avenue", house number, e.g., "909", and house number suffix, e.g., "1/2". To map from PIDF-LO to XMPP, an application SHOULD construct the complete street address from the PIDF-LO elements `<A6/>`, `<PRD/>`, `<POD/>`, `<STS/>`, `<HNO/>`, and `<HNS/>` and map the result to one XMPP `<street/>` element.) | `<STREET/>` |
| `<building/>` | `<Building/>` | `<LMK/>` | -- |
| `<floor/>` | -- | `<FLR/>` | -- |
| `<room/>` | -- | -- | -- |
| `<postalcode/>` | -- | -- | -- |
| `<text/>` | `<FreeTextLocation/>` | `<LOC/>` | `<EXTADR/>` |

### 7 Internationalization Considerations

Because the character data contained in <geoloc/> child elements of type xs:string is intended to be readable by humans, the <geoloc/> element SHOULD possess an xml:lang attribute specifying the natural language of such character data.

### 8 Security Considerations

It is imperative to control access to location information, at least by default. Imagine that a stalker got unauthorized access to this information, with enough accuracy and timeliness to be able to find the target person. This scenario could lead to loss of life, so please take access control checks seriously.

If an error is deliberately added to a location, the error SHOULD be the same for all receivers, to minimize the likelihood of triangulation. In the case of deliberate error, the <accuracy/> element SHOULD NOT be included. (A sketch illustrating this recommendation follows Section 9 below.)

### 9 IANA Considerations

This document requires no interaction with the Internet Assigned Numbers Authority (IANA). [9]
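The deliberate-error recommendation above can be illustrated as follows: derive the displacement deterministically from the publisher rather than per receiver, so every subscriber sees the same false position and comparing notifications across receivers reveals nothing. This Python sketch is illustrative only; the hashing scheme, offset bounds, and the idea of salting with a publisher-side secret are implementation choices, not part of this specification:

```python
import hashlib

def obfuscated_position(jid: str, secret: bytes, lat: float, lon: float,
                        max_offset_deg: float = 0.01) -> tuple[float, float]:
    """Displace the true coordinates by an error that depends only on the
    publisher (bare JID plus a publisher-side secret), never on the
    receiver. Every subscriber therefore sees the same displaced point,
    so cross-comparing notifications cannot triangulate the true position.
    Per the spec, <accuracy/> SHOULD NOT be sent alongside deliberate error."""
    digest = hashlib.sha256(secret + jid.encode("utf-8")).digest()
    # Map two digest bytes onto [-max_offset_deg, +max_offset_deg].
    d_lat = (digest[0] / 255.0 * 2.0 - 1.0) * max_offset_deg
    d_lon = (digest[1] / 255.0 * 2.0 - 1.0) * max_offset_deg
    return lat + d_lat, lon + d_lon

# All receivers of portia's geoloc get the same displaced coordinates.
print(obfuscated_position("portia@merchantofvenice.lit", b"server-secret",
                          45.44, 12.33))
```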
### 10 XMPP Registrar Considerations

#### 10.1 Protocol Namespaces

The XMPP Registrar[10] includes 'http://jabber.org/protocol/geoloc' in its registry of protocol namespaces.

[10] The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.

### 11 XML Schema

```xml
<?xml version='1.0' encoding='UTF-8'?>

<xs:schema
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    targetNamespace='http://jabber.org/protocol/geoloc'
    xmlns='http://jabber.org/protocol/geoloc'
    elementFormDefault='qualified'>

  <xs:annotation>
    <xs:documentation>
      The protocol documented by this schema is defined in
      XEP-0080: http://www.xmpp.org/extensions/xep-0080.html
    </xs:documentation>
  </xs:annotation>

  <xs:element name='geoloc'>
    <xs:complexType>
      <xs:sequence minOccurs='0'>
        <xs:element name='accuracy' minOccurs='0' type='xs:decimal'/>
        <xs:element name='alt' minOccurs='0' type='xs:decimal'/>
        <xs:element name='altaccuracy' minOccurs='0' type='xs:decimal'/>
        <xs:element name='area' minOccurs='0' type='xs:string'/>
        <xs:element name='bearing' minOccurs='0' type='xs:decimal'/>
        <xs:element name='building' minOccurs='0' type='xs:string'/>
        <xs:element name='country' minOccurs='0' type='xs:string'/>
        <xs:element name='countrycode' minOccurs='0' type='xs:string'/>
        <xs:element name='datum' minOccurs='0' type='xs:string'/>
        <xs:element name='description' minOccurs='0' type='xs:string'/>
        <xs:element name='error' minOccurs='0' type='xs:decimal'/>
        <xs:element name='floor' minOccurs='0' type='xs:string'/>
        <xs:element name='lat' minOccurs='0' type='xs:decimal'/>
        <xs:element name='locality' minOccurs='0' type='xs:string'/>
        <xs:element name='lon' minOccurs='0' type='xs:decimal'/>
        <xs:element name='postalcode' minOccurs='0' type='xs:string'/>
        <xs:element name='region' minOccurs='0' type='xs:string'/>
        <xs:element name='room' minOccurs='0' type='xs:string'/>
        <xs:element name='speed' minOccurs='0' type='xs:decimal'/>
        <xs:element name='street' minOccurs='0' type='xs:string'/>
        <xs:element name='text' minOccurs='0' type='xs:string'/>
        <xs:element name='timestamp' minOccurs='0' type='xs:dateTime'/>
        <xs:element name='tzo' minOccurs='0' type='xs:string'/>
        <xs:element name='uri' minOccurs='0' type='xs:anyURI'/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

</xs:schema>
```
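For illustration, a minimal Python sketch that assembles the pubsub <item/> payload from Listing 1 with the standard library. The helper name is ours, and routing the enclosing <iq/> stanza is left to whatever XMPP library is in use.

```python
# A minimal sketch that builds a geoloc pubsub item; element names follow
# the schema above. An empty `fields` dict yields the "stop" payload.
import xml.etree.ElementTree as ET

def build_geoloc_item(fields, lang="en"):
    """`fields` maps geoloc child-element names (e.g., 'lat', 'lon',
    'country') to values."""
    item = ET.Element("item")
    geoloc = ET.SubElement(item, "geoloc",
                           {"xmlns": "http://jabber.org/protocol/geoloc"})
    if fields:
        geoloc.set("xml:lang", lang)
        for name, value in sorted(fields.items()):
            ET.SubElement(geoloc, name).text = str(value)
    return ET.tostring(item, encoding="unicode")

print(build_geoloc_item({"lat": 45.44, "lon": 12.33, "locality": "Venice"}))
print(build_geoloc_item({}))   # the "stop command" payload
```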
{"Source-Url": "https://xmpp.org/extensions/xep-0080.pdf", "len_cl100k_base": 4765, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 32528, "total-output-tokens": 4919, "length": "2e12", "weborganizer": {"__label__adult": 0.0007157325744628906, "__label__art_design": 0.0005831718444824219, "__label__crime_law": 0.0120391845703125, "__label__education_jobs": 0.0010232925415039062, "__label__entertainment": 0.00018990039825439453, "__label__fashion_beauty": 0.0002980232238769531, "__label__finance_business": 0.0030517578125, "__label__food_dining": 0.00041747093200683594, "__label__games": 0.0014333724975585938, "__label__hardware": 0.013519287109375, "__label__health": 0.0005168914794921875, "__label__history": 0.0011281967163085938, "__label__home_hobbies": 0.0001854896545410156, "__label__industrial": 0.0013713836669921875, "__label__literature": 0.0008301734924316406, "__label__politics": 0.0012264251708984375, "__label__religion": 0.0006232261657714844, "__label__science_tech": 0.267333984375, "__label__social_life": 0.00012993812561035156, "__label__software": 0.179443359375, "__label__software_dev": 0.51123046875, "__label__sports_fitness": 0.0004749298095703125, "__label__transportation": 0.0016393661499023438, "__label__travel": 0.0004749298095703125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18991, 0.02044]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18991, 0.30071]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18991, 0.74915]], "google_gemma-3-12b-it_contains_pii": [[0, 150, false], [150, 2685, null], [2685, 3454, null], [3454, 5073, null], [5073, 6796, null], [6796, 8974, null], [8974, 9960, null], [9960, 10716, null], [10716, 12227, null], [12227, 13940, null], [13940, 15352, null], [15352, 16209, null], [16209, 17939, null], [17939, 18991, null]], "google_gemma-3-12b-it_is_public_document": [[0, 150, true], [150, 2685, null], [2685, 3454, null], [3454, 5073, null], [5073, 6796, null], [6796, 8974, null], [8974, 9960, null], [9960, 10716, null], [10716, 12227, null], [12227, 13940, null], [13940, 15352, null], [15352, 16209, null], [16209, 17939, null], [17939, 18991, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18991, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18991, null]], "pdf_page_numbers": [[0, 150, 1], [150, 2685, 2], [2685, 3454, 3], [3454, 5073, 4], [5073, 6796, 5], [6796, 8974, 6], [8974, 9960, 7], [9960, 10716, 8], [10716, 12227, 9], [12227, 13940, 10], [13940, 15352, 11], [15352, 16209, 12], [16209, 17939, 13], [17939, 18991, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18991, 0.21429]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
ecc05a4960dcab9984baf78ec293591f5cf5e918
[REMOVED]
{"Source-Url": "https://rd.springer.com/content/pdf/10.1007%2F978-3-319-07485-6_41.pdf", "len_cl100k_base": 5937, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25203, "total-output-tokens": 8408, "length": "2e12", "weborganizer": {"__label__adult": 0.000606536865234375, "__label__art_design": 0.00559234619140625, "__label__crime_law": 0.0004940032958984375, "__label__education_jobs": 0.1263427734375, "__label__entertainment": 0.00037288665771484375, "__label__fashion_beauty": 0.000370025634765625, "__label__finance_business": 0.0012226104736328125, "__label__food_dining": 0.000537872314453125, "__label__games": 0.0012388229370117188, "__label__hardware": 0.002147674560546875, "__label__health": 0.0007982254028320312, "__label__history": 0.00080108642578125, "__label__home_hobbies": 0.00038313865661621094, "__label__industrial": 0.0005884170532226562, "__label__literature": 0.0014638900756835938, "__label__politics": 0.0003437995910644531, "__label__religion": 0.00083160400390625, "__label__science_tech": 0.08660888671875, "__label__social_life": 0.0010023117065429688, "__label__software": 0.194091796875, "__label__software_dev": 0.57275390625, "__label__sports_fitness": 0.0003364086151123047, "__label__transportation": 0.0007319450378417969, "__label__travel": 0.0004444122314453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36264, 0.02395]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36264, 0.26378]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36264, 0.92349]], "google_gemma-3-12b-it_contains_pii": [[0, 2456, false], [2456, 5389, null], [5389, 8448, null], [8448, 12050, null], [12050, 15243, null], [15243, 18254, null], [18254, 21156, null], [21156, 24688, null], [24688, 26589, null], [26589, 29402, null], [29402, 32934, null], [32934, 36264, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2456, true], [2456, 5389, null], [5389, 8448, null], [8448, 12050, null], [12050, 15243, null], [15243, 18254, null], [18254, 21156, null], [21156, 24688, null], [24688, 26589, null], [26589, 29402, null], [29402, 32934, null], [32934, 36264, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36264, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36264, null]], "pdf_page_numbers": [[0, 2456, 1], [2456, 5389, 2], [5389, 8448, 3], [8448, 12050, 4], [12050, 15243, 5], [15243, 18254, 6], [18254, 21156, 7], [21156, 24688, 8], [24688, 26589, 9], [26589, 29402, 10], [29402, 32934, 11], [32934, 36264, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36264, 0.20886]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
401ca3f7cc12267552eff1925e943145c3cde65f
Decades ago, the increasing volume of data made manual analysis obsolete and prompted the use of computational tools with interactive user interfaces and a rich palette of data visualizations. Yet their classic, desktop-based architectures can no longer cope with the ever-growing size and complexity of data. Next-generation systems for explorative data analysis will be developed on client–server architectures, which already run concurrent software for data analytics but are not tailored to engaged, interactive analysis of data and models. In explorative data analysis, the key is the responsiveness of the system and the prompt construction of interactive visualizations that can guide the users to uncover interesting data patterns. In this study, we review the current software architectures for distributed data analysis and propose a list of features to be included in next-generation frameworks for exploratory data analysis. The new generation of tools for explorative data analysis will need to address integrated data storage and processing, fast prototyping of data analysis pipelines supported by machine-proposed analysis workflows, pre-emptive analysis of data, interactivity, and user interfaces for intelligent data visualizations. The systems will rely on a mixture of concurrent software architectures to meet the challenge of seamlessly integrating explorative data interfaces on the client side with the management of concurrent data mining procedures on the servers.

Visual programming environments rely strongly on data visualization and interactions. Users can interact with visualized data and select data subsets to explore them in the emerging analysis pipeline. Interactive data analysis also takes place in modern spreadsheet applications, where a few tricks suffice to construct powerful computational procedures and exceptional data visualizations.

Desktop data analysis tools work well if the data are small enough to fit into the working memory of a commodity computer. They fail, however, with data's increasing size and complexity. Explorative data analysis requires responsive systems. Clogging and substantial response delays due to computationally complex analysis procedures, resampling-based estimations, or rendering of visualizations of large data sets hinder interactivity and discourage the user from exploring.

Exploratory data analysis systems are at a crossroads. Desktop-based systems with interactive data analysis interfaces are failing to cope with larger data sets. Analysis of such data requires concurrent architectures for large-scale data mining, but these may lack the necessary front-ends for exploratory analysis. Data mining tools that will offer the best of the two worlds need to address some major software engineering challenges. Computational speed, communication of partial results, accurate reporting on progress, and adaptation to dynamic changes in the analysis pipeline are just a few aspects to consider.

In the next section, we review major state-of-the-art software architectures that can support concurrent data analysis. These architectures were designed to speed up the analysis processes through concurrent execution of computational procedures, but lack several features that are intrinsic to interactive and exploratory data analysis. For each of the software architectures, we provide a review, expose its main advantages, and highlight the shortcomings related to its utility in interactive data analysis.
Based on the review, we next present an abstract model of a software architecture for interactive data analytics. Components of this model are further discussed in the section that lists advanced features of next-generation exploratory analysis software environments. We summarize our review in the last section of the article and conclude that present software architectures will need a number of specific but gradual adaptations to support interactive, user-engaged data analysis.

ARCHITECTURES FOR CONCURRENT DATA ANALYSIS

Analysis of large data sets has gained vast interest from scientists and data miners in recent years. Developers of data mining tools have thus far focused on different aspects of the problem, such as designing algorithms that take advantage of specialized data storage, parallelization of analysis problems, and visualization of large data sets. Various software toolboxes have been crafted to support the data analysis process by allowing users to reuse or upgrade existing analytics components to create new analysis pipelines.

**Software Agents**

Agent-based architecture is among the oldest distributed architectures for data analysis. Software agents receive instructions from the user or from other agents, independently perform the analysis, and report the results. This approach scales well with the size of data if the modeling can use only small localized data subsets. Agents are employed concurrently, each reporting a model to be fused with others to gain understanding of the entire domain. Agents can also handle distributed data sets that cannot be moved due to privacy or regulatory concerns.\textsuperscript{9}

Ideally, agents would learn and adapt over time and produce meaningful results with very limited interaction with the user. This goal has never been achieved. Instead, agents in present systems are simple and specialized, and often depend on the user's interaction through graphical user interfaces (Figure 3)\textsuperscript{9} or query languages.\textsuperscript{10,11} Some systems are also able to graphically present the resulting data models and, e.g., provide visualizations of decision trees\textsuperscript{9} or interactive graphs.\textsuperscript{11} Some also report on the progress of the agent-based execution of the analysis.

Agent-based architectures take various approaches to the parallel execution of data mining tasks. JAM\textsuperscript{9} is designed to treat data as stationary, and considers data sets at different locations to be private and belonging to various independent organizations. Only the induced models are communicated between agents and fused into a final, resulting model. Papyrus,\textsuperscript{10} however, can move data from one location to another, and considers various trade-offs between local computation and transfer of the data to multiple computers to optimize computational load.

Each data analysis agent produces its own model. In principle, and when only predictions are important, the models may be combined through ensembles by, say, stacking\textsuperscript{12} or averaging. Fused models are very hard to interpret, however, and special procedures like data visualizations and feature scoring\textsuperscript{13} are needed for their appropriate inclusion into exploratory data analysis.

Agent-based architectures have been used in a wide range of data analysis applications that include text mining,\textsuperscript{11} medical data analysis,\textsuperscript{10} and credit card fraud detection.\textsuperscript{9} Agents were found to be of particular benefit when the data sets were spread across multiple sites and could not be moved or combined due to regulatory restrictions.
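The JAM-style pattern — induce models where the data reside and communicate only the models — can be sketched in a few lines of Python. This is a toy stand-in: real agents would train far richer models than a single threshold, and the fusion here is plain majority voting.

```python
# Each "agent" trains on its own private partition; only the trained models
# (decision thresholds) are communicated and fused into an ensemble.
from concurrent.futures import ProcessPoolExecutor
from statistics import mean

def train_agent(partition):
    """Train a trivial one-feature threshold classifier on local data.
    `partition` is a list of (x, label) pairs that never leaves the agent."""
    positives = [x for x, y in partition if y == 1]
    negatives = [x for x, y in partition if y == 0]
    return (mean(positives) + mean(negatives)) / 2.0

def ensemble_predict(thresholds, x):
    """Fuse agent models by majority vote over their individual predictions."""
    votes = sum(1 if x > t else 0 for t in thresholds)
    return 1 if votes > len(thresholds) / 2 else 0

if __name__ == "__main__":
    # Three sites with private data that cannot be pooled.
    sites = [
        [(0.1, 0), (0.2, 0), (0.9, 1), (1.1, 1)],
        [(0.0, 0), (0.3, 0), (1.0, 1), (1.2, 1)],
        [(0.2, 0), (0.4, 0), (0.8, 1), (0.9, 1)],
    ]
    with ProcessPoolExecutor() as pool:       # agents run concurrently
        models = list(pool.map(train_agent, sites))
    print(ensemble_predict(models, 0.95))     # -> 1
```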
**Web Services**

Web services are a popular technology that allows us to take advantage of remote computational resources.\textsuperscript{14} Services based on the Web Service Definition Language (WSDL) allow easy discovery by providing a description of the interface as part of the service. In contrast, Representational State Transfer (REST) services strive toward simplicity and rely on the features included in the HTTP protocol.

Several modern data analysis systems use web services to execute the analysis on remote computers. Weka4WS\textsuperscript{15} has the same graphical user interface as Weka,\textsuperscript{16} but uses web services to remotely analyze data. Taverna\textsuperscript{17} is a workflow management system for creating and executing scientific workflows. It contains references to more than 3500 services that can be added to the user-designed data analysis pipeline. Taverna consists of Taverna Workbench (Figure 4), a desktop application for workflow design, and Taverna Server for remote execution of workflows. Taverna workflows strongly depend on services, which are their sole means of data processing; there is no local, client-based data analysis. In this way, Taverna simplifies and elegantly unifies the analytics architecture. The main bottleneck of the approach is data communication: services need to exchange data, and if the data are large, their transfer can take much longer than the actual computations.

Orange4WS, in contrast, can construct workflows from components that are executed either locally or through web-service calls to remote components. Based on the workflow manager in Orange, each component of the workflow executes its tasks as soon as it has all the necessary inputs, which enables the user to get preliminary results during construction of the analysis pipeline. Another distinctive feature of Orange4WS is automatic construction of workflows. The analytic components are annotated with terms from a Knowledge Discovery Ontology and can be assembled into an analysis workflow from the user's specification of the desired inputs and outputs.

Service-oriented architectures for data analysis also have several shortcomings. In an analysis composed of multiple services, each of them has its own processing queue, which is usually hidden from the user. As a result, execution times are hard to predict when some of the services are experiencing heavy load. Service descriptions are sufficient to establish a connection, but their readability depends on the service author and is not subject to a standard. Service inputs with names such as 'arg1' or parameters of a type such as 'string' suffice for generating a valid request, but are not informative. Orange4WS tackles this problem with additional annotations constructed manually and added independently of the service provider. Problems also arise when combining services from different providers: the input and output data types of services may not match, and the user has to manually provide the necessary conversions. Services also need to transfer all the data related to requests, which is infeasible for large data sets. The Web Service Resource Framework (WSRF, http://www.globus.org/wsrf/specs/ws-wsrf.pdf) partially solves this by storing the data as a resource and using resource identifiers in the requests. Weka4WS supports WSRF and further provides a WSRF-based service that can be executed on grids.

Web services have found their utility in various application areas, including astronomy, biology, chemistry, and text mining. BioCatalogue (https://www.biocatalogue.org), e.g., includes over 1000 different web services from different areas of the life sciences, where a substantial number of them deal with data access, analysis, and visualization. These and similar web services can be composed into workflows using open source tools such as Taverna (http://www.taverna.org.uk) or Triana (http://www.trianacode.org/). Using the WSRF standard, web services can take advantage of grids to spread the work onto multiple computation nodes.
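A REST-based analysis service of the kind described above might be invoked as in the following standard-library sketch; the endpoint, payload fields, and response format are hypothetical, not those of any particular system.

```python
# Sketch of a REST-style analysis request; everything about the service
# (URL, JSON fields, response shape) is an assumption for illustration.
import json
from urllib import request

def submit_analysis(url, data, method="classification_tree"):
    payload = json.dumps({"method": method, "data": data}).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)   # e.g., {"job_id": "...", "status": "queued"}

# result = submit_analysis("http://analysis.example.org/jobs",
#                          [[5.1, 3.5, "setosa"], [6.2, 2.9, "virginica"]])
```

Note that the data travel with the request, which is exactly the bottleneck the WSRF resource-identifier mechanism described above is meant to avoid.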
**Grids**

Grids solve the problem of controlled and coordinated resource sharing and resource use in dynamic, scalable virtual organizations. Multiple organizations can share data, analysis procedures, and computational resources. Grids also implement various security and access policies. Grid software architectures for data analysis are developed on the existing grid services for data transfer and management, and for allocation, monitoring, and control of computational resources. These provide the framework to construct additional layers of services that implement various data mining procedures.

Grid services expose their interfaces and usage policies. A high-level client can discover and combine them into an analysis pipeline. Many current grid-based data analysis frameworks offer graphical interfaces to assemble analysis pipelines. In workflow editors, we can select different services, set their parameters, and establish communication, while services on the grid execute the data analysis tasks. The pipeline editor in Discovery Net can verify the validity of the workflow from service metadata. When multiple sites provide the same service, Discovery Net users can manually map tasks to specific computational resources. An alternative to designing implementation-specific graphical editors is to couple grid services with open source workflow managers, which fetch information on available services from a service catalog provided by the grid. The DataMiningGrid, e.g., uses the Triana workflow editor.

Job scheduling and parallel execution on grids are managed by a resource broker, which receives requests and delegates them to the underlying grid execution system, such as HTCondor. Independent services run concurrently on different machines, and data-aware scheduling minimizes data transfer.

Grids have been used for gene annotation, ecosystem modeling, and text analysis. Data analysis pipelines can be designed in workflow editors such as Taverna (http://www.taverna.org.uk/) or Triana (http://www.trianacode.org/) and executed concurrently on the grid.
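The broker idea — start each workflow component as soon as the outputs it depends on are available — can be sketched as follows. The components and their wiring are illustrative; a real grid broker would also handle placement, failures, and data movement.

```python
# Run workflow components concurrently, submitting each one as soon as its
# inputs are ready. `dependencies[name]` lists the components feeding `name`.
from concurrent.futures import ThreadPoolExecutor

def run_workflow(components, dependencies, pool):
    futures = {}

    def run(name):
        if name not in futures:
            dep_futures = [run(d) for d in dependencies.get(name, [])]
            dep_values = [f.result() for f in dep_futures]   # wait for inputs
            futures[name] = pool.submit(components[name], *dep_values)
        return futures[name]

    return {name: run(name).result() for name in components}

components = {
    "load": lambda: list(range(10)),
    "filter": lambda data: [x for x in data if x % 2 == 0],
    "stats": lambda data: sum(data) / len(data),
}
dependencies = {"filter": ["load"], "stats": ["filter"]}
with ThreadPoolExecutor() as pool:
    print(run_workflow(components, dependencies, pool))   # stats -> 4.0
```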
**MapReduce and Hadoop**

MapReduce is a popular approach for processing large amounts of data. By limiting the programmer to tasks that can be expressed as a series of map and reduce steps, MapReduce provides a high level of parallelism on a large number of commodity machines. The core of the MapReduce technology is a distributed file system where the data are redundantly stored on multiple machines. Data analysis algorithms are expressed as a series of map and reduce steps. The map operation is applied to the entire data set and yields intermediate results. The distributed nature of the underlying file system ensures that processing of different parts of the data can be performed on different machines without moving any data. Outputs of the map step are assembled and processed in the reduce step to yield the final result.

MapReduce clusters can contain thousands of machines, and the probability of hardware failure rises with the size of the cluster. The software architecture thus contains mechanisms to circumvent failures and slow or unresponsive workers by reassigning the corresponding jobs to another machine that holds duplicate data.

Apache Hadoop (http://hadoop.apache.org) is a popular open source implementation of the MapReduce architecture and a general-purpose framework. Its data analysis extensions are implemented within multiple libraries: Apache Mahout (http://mahout.apache.org) for data mining, BioPig and SeqPig for sequencing data, and Pegasus for mining graphs. Data analysis code can be written in the Java, Pig, SQL, or R programming languages. Commercial third-party tools such as Pentaho Business Analytics or IBM InfoSphere provide some limited support for graphical design of MapReduce analysis pipelines.

While the MapReduce approach works for batch processing of data, other approaches are used when data need to be queried with short response times. Apache HBase is a data store inspired by Google BigTable. It is built on top of the Hadoop File System and supports real-time read/write access to the data. Interactive queries of the data are possible with Dremel (marketed as Google BigQuery). It uses a combination of columnar storage and hierarchical processing to execute aggregate queries on petabytes of data in a few seconds. The Apache Drill project aims to provide an open source implementation of the functionality provided by Dremel. An alternative approach is the open source cluster computing system Spark. Its responsiveness is a result of keeping the data in memory between requests, which can benefit exploratory data analysis. Tools such as RABID\textsuperscript{38} can be used to execute R code on Spark clusters.

MapReduce can deal with large amounts of data, but is it ready for exploratory data analysis? Researchers perform exploratory studies on Hadoop,\textsuperscript{39} and implementations of interactive querying have shown that interactive analysis of larger data sets is possible given enough computers to parallelize the job. However, a survey of Hadoop usage in research workflows\textsuperscript{40} shows that high-level tools are still not in regular use and that the majority of analyses are performed by writing low-level MapReduce code. Optimization of Hadoop for small jobs\textsuperscript{41} and algorithms for data visualization on MapReduce architectures\textsuperscript{42} are two example research areas that are paving the way for exploratory analysis on MapReduce clusters.
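The canonical word-count example expresses an analysis with exactly the two primitives described above. In the sketch below the "cluster" is a local process pool; a real deployment would run the map tasks on the machines holding each data block and shuffle the pairs to reducers by key.

```python
# Word count expressed as map and reduce steps, parallelized locally.
from collections import defaultdict
from multiprocessing import Pool

def map_phase(line):
    """Emit (key, value) pairs for one input record."""
    return [(word, 1) for word in line.split()]

def reduce_phase(key_values):
    """Combine all values observed for one key."""
    key, values = key_values
    return key, sum(values)

if __name__ == "__main__":
    lines = ["big data", "big clusters", "data data"]
    with Pool() as pool:
        mapped = pool.map(map_phase, lines)       # map step, in parallel
    shuffled = defaultdict(list)                  # group pairs by key
    for pairs in mapped:
        for key, value in pairs:
            shuffled[key].append(value)
    print(dict(map(reduce_phase, shuffled.items())))  # {'big': 2, 'data': 3, 'clusters': 1}
```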
**Cloud Applications**

Cloud computing offers a number of opportunities for data analysis applications. Perhaps the most promising is horizontal scaling,\textsuperscript{43} where the number of rented workers changes dynamically according to the requirements of the analysis procedures. Several applications allow users to upload data and use cloud computing for data analysis. BigML (https://bigml.com) implements a set of basic data mining procedures that are accessed through a simple graphical interface (Figure 5), making this web application also suitable for users with no formal background in data analytics. BigML includes several advanced procedures that support interactive analysis of big data. For instance, data histograms are first rendered on a data sample and then updated when the entire data set is loaded and processed. Decision trees — a supervised machine-learning technique that starts with a root classification node, which is then iteratively refined — are rendered in the client application dynamically and evolve as their nodes are induced on the computational server.

FIGURE 5 | The web application BigML allows users to upload their data sets, build and visualize decision tree models, and use them on new data. Each of the application's tabs (top row) is associated with one of the available tasks carried out within a simple interface also suitable for users with little or no data mining experience.

Google Prediction API (https://developers.google.com/prediction) does not support interactivity, but provides an interface to supervised data analysis on the cloud. This is one of the earliest cloud-based data mining solutions, but it is not suitable for explorative data analysis as it returns only predictions and not models.

Cloud-based architectures are a low-cost option that enables large-scale data analysis for organizations that do not themselves own adequate computational resources. The downside is the need to transfer the data to and from the cloud. Data transfer takes time and may be partly restricted by corporate policies or government regulations. From the viewpoint of explorative data analysis, the principal bottleneck of cloud-based architectures is their lack of support for responsive visualizations. Existing JavaScript libraries, such as InfoVis (http://thejit.org), D3 (http://d3js.org), or Highcharts (http://www.highcharts.com), support beautiful visualizations, but all assume that the entire data set is locally available for rendering. The resulting visualizations can be interactive, but again only if the entire data set is already present on the client side. Visualizations with the multiple levels of detail required for exploration of big data are currently not available. Existing data visualization applications\textsuperscript{4} solve this problem with native clients that locally render the data they dynamically fetch from the cloud.

**Summary**

Table 1 summarizes the features of the reviewed architectures. We focus on the five aspects that are most important for exploratory data analysis. The first is support for fine-grained parallel execution: concurrent execution of a single task on multiple CPUs can speed up the execution to the degree where interactive analysis of large data becomes feasible. The second, interactive visualization, requires constant communication with the computational server. Even if computation is lengthy, the user has to be updated on the progress and, ideally, should see any intermediate results — our third aspect in Table 1 — to allow users to focus the analysis on the interesting subsets of the data. Notice that while grids and MapReduce provide speed-ups with concurrent execution, these architectures engage computational workers that do not directly communicate with the client and are less suitable for interactive exploration and communication of intermediate results. The fourth aspect examines whether a high-level execution environment can be coupled with the chosen architecture; such an environment should allow domain experts to analyze large amounts of data without specifically addressing the intricacies of parallel environments. The final, fifth aspect relates to data security, which is important in industrial or clinical settings and alike. We distinguish between architectures where the data stay on local premises and those where they are spread across a network not necessarily managed by the data owner.
Grids and MapReduce can support data locality only if the institution has the appropriate, often costly, computational clusters.

TABLE 1 | Five Features of Concurrent Software Architectures That Play an Important Role in Implementations of Exploratory Data Analysis Environments, and Their Support by a Set of Existing Architectures. A Partial Support Is Noted Where the Software Architecture Was Not Designed for the Task But Could in Principle Be Adapted for It

<table>
  <thead>
    <tr>
      <th>Architecture</th>
      <th>Fine-Grained Parallel Execution</th>
      <th>Interactive Visualization</th>
      <th>Communication of Intermediate Results</th>
      <th>High-Level Execution Environment</th>
      <th>Data Stays on Premise</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Agents</td><td>No</td><td>Yes</td><td>Yes</td><td>No</td><td>Yes</td></tr>
    <tr><td>Web services</td><td>No</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td></tr>
    <tr><td>Grids</td><td>Yes</td><td>No</td><td>Partially</td><td>Yes</td><td>Partially</td></tr>
    <tr><td>MapReduce</td><td>Yes</td><td>No</td><td>No</td><td>Partially</td><td>Partially</td></tr>
    <tr><td>Cloud</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td><td>No</td></tr>
  </tbody>
</table>

Obviously, none of the current software architectures was designed with exploratory data analysis in mind, and hence none is ideal for its implementation. Yet they all contain basic building blocks with which we could construct a suitable architecture. We briefly review these building blocks in the next section, and continue with a list of features that are specific to exploratory data analysis.

COMPONENTS OF DATA ANALYTICS SOFTWARE ARCHITECTURE

The five concurrent software architectures that can support data analysis, which we reviewed in the previous section, all decouple data processing from the user interface. Regardless of their substantial differences, we may abstract their architecture with a few interconnected components (Figure 6). The user, who may be a field expert and not necessarily proficient in computer science, uses a client to issue data analysis tasks to the rest of the system. The client is either a standalone program or a web application that converts analysis requests into a set of instructions and presents the results to the user. The most common ways to design an analysis are workflows, script-based programs, or sets of statements in a query language. The client should also be responsible for informing the user about the progress of the analysis and about any errors that occur during its execution.

The responsiveness of the analysis system mostly depends on the computational backend. Often referred to as computational nodes, servers, or workers, these do the heavy lifting required to execute the analysis. Depending on the variety of tasks executed, workers may be general-purpose or task-specialized. General-purpose workers are able to execute any of the requested data analysis tasks. Their utilization depends on data set locality and computational demands. In cloud-based setups, the number of general-purpose workers can be modified to meet the application load. Specialized workers can only perform a limited number of tasks, but with greater efficiency than general-purpose workers. For instance, machines equipped with graphical processing units can efficiently execute some data analysis algorithms in parallel.
Similarly, clusters equipped with a high-speed InfiniBand network can significantly speed up the execution of jobs using OpenMPI.

ADVANCED FEATURES OF NEXT-GENERATION EXPLORATORY ANALYSIS SOFTWARE ENVIRONMENTS

In the Introduction, we exposed the key components of exploratory data analysis systems: responsive user interfaces, interaction with the analytics engine, graphical data displays, and dynamic workflows. In the following, we list several core functions that should be present in next-generation explorative data analysis systems and that would distinguish them from current desktop-based systems. Some of this functionality has already been prototyped in existing data analysis software libraries, but it is the collective implementation within analytic systems that will present the biggest challenge in the evolution of concurrent data analysis architectures. From the user's viewpoint, we group the proposed features into those that address speed-ups of the algorithms and data analysis procedures, those that support the smooth use of data analysis workflows, those that deal with procedures for data visualization, and those that support collaborative data exploration.

**Concurrent Processing and Algorithms**

An obvious gain from concurrent architectures for data analysis is greater speed. Users of toolboxes for explorative data analysis expect responsive interfaces and short execution times of the underlying data analysis algorithms. These can be achieved by minimizing the data transfer between computational units, by parallelization of data mining algorithms, and by algorithmic prediction of the user's future requests to guide preemptive analysis and compute results for the most likely future queries.

**Integrated Storage and Processing**

Transmission of the data from the storage to the processing component of a data analysis architecture is time consuming. Modern approaches store the data on the machines that also perform the analysis, either by using a distributed file system\textsuperscript{50,51} or sharding.\textsuperscript{52} Such architectures scale well, as support for growing data volume is achieved by providing an increased number of computers. The downside of this architecture is that the location of the data files determines the nodes that process the data, which can be problematic when multiple users use the system simultaneously. Potential solutions should consider meta-information about the storage and processing needs of the computational components of the workflow, to be taken into account by the task scheduler and load balancing algorithms. Load balancing strategies have been developed within grid computing\textsuperscript{53} and agent-based systems,\textsuperscript{54} and would need to be adapted for data mining tasks in which only rough estimates of processing time are available.

**Parallelization of Standard Data Mining Tasks**

Procedures such as scoring of prediction systems (e.g., cross-validation), parameter estimation, or various optimization methods are standard data mining tasks.\textsuperscript{55} Their parallel implementation is often straightforward, and many of them are embarrassingly parallel. Some data mining software suites already support this kind of concurrency,\textsuperscript{56} and its implementation should be present in any modern explorative data analysis system. Algorithms based on iterative optimization are harder to parallelize, as each step of the procedure relies heavily on the previous steps. A parallel implementation of support vector machines\textsuperscript{57} decomposes the model into a hierarchy of submodels; the submodels are trained in parallel and later joined to produce a single prediction model. Training of deep neural networks is still done sequentially, but takes advantage of specialized hardware (GPUs) to speed up the execution of each iteration. Other algorithms, such as mixed effects models, are yet to be parallelized.
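Returning to the embarrassingly parallel case, the sketch below runs k-fold cross-validation with one fold per worker; the trivial majority-class "model" stands in for a real learner.

```python
# Parallel k-fold cross-validation: each fold is scored by its own worker.
from concurrent.futures import ProcessPoolExecutor
from statistics import mode

def fold_accuracy(args):
    train, test = args
    majority = mode(label for _, label in train)   # "train" the stand-in model
    hits = sum(1 for _, label in test if label == majority)
    return hits / len(test)

def cross_validate(data, k=5):
    folds = [data[i::k] for i in range(k)]
    jobs = []
    for i in range(k):
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        jobs.append((train, folds[i]))
    with ProcessPoolExecutor() as pool:            # one fold per worker
        scores = list(pool.map(fold_accuracy, jobs))
    return sum(scores) / k

# accuracy = cross_validate([(0.1 * i, int(i > 9)) for i in range(20)])
```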
**Preemptive Analysis**

Explorative data analysis is most often based on descriptive statistics and their appropriate rendering. We request the computation of averages, standard deviations, and box plots for almost any examined data set. If computational resources are available, such statistics can be computed in advance, even before the user explicitly requests them. Preemptive execution of data analysis procedures can largely improve the perceived responsiveness of the system. Several existing technologies already preprocess data to speed up the analysis. In online analytical processing\textsuperscript{58} (OLAP), transactional multidimensional data are transformed into OLAP cubes that are suitable for rapid filtering, grouping, and computation of sums, averages, and other descriptive statistics. With appropriate design and intelligent workflow engines, other, more complex data analysis procedures may be executed in advance. Choosing which procedures to run will require predictions that reveal the most likely data analysis tasks to be executed given the current pipeline, the data, and the user's choices when dealing with similar analysis problems in the past. For instance, loading a class-labeled data set could trigger a preemptive cross-validation with the user's favorite supervised data analysis methods previously applied to similar data sets. Preemptive analysis may have high computational costs and needs to be tailored to particular users. The main challenge in this field is the automatic identification of analysis tasks that require preemptive treatment, and the automatic selection of the types of preemptive analysis given the history of data analysis actions. The field could borrow predictive methods from recommender systems.\textsuperscript{59}
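A minimal sketch of the preemptive idea: the moment a data set is loaded, the statistics users request for almost any data set are scheduled in the background, so they are likely already cached by the time they are asked for. The class and the choice of statistics are illustrative.

```python
# Preemptive computation of descriptive statistics on data load.
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, stdev

class PreemptiveStats:
    def __init__(self, columns):
        self._pool = ThreadPoolExecutor()
        # Schedule, per column, the statistics almost every session needs.
        self._cache = {
            name: self._pool.submit(lambda v=values: (mean(v), stdev(v)))
            for name, values in columns.items()
        }

    def stats(self, name):
        return self._cache[name].result()   # instant if already computed

cols = {"age": [33, 41, 29, 52], "income": [48.0, 61.5, 39.9, 75.2]}
print(PreemptiveStats(cols).stats("age"))
```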
**Design and Use of Workflows**

Workflows and their construction through visual programming are a perfect match for explorative data analysis. Workflows consist of computational units in which users process, visualize, filter, and select data. The output of one computational unit is passed to another unit, in this way instructing the machine about the required analysis steps. Combined with interactive data visualizations, workflows can provide intuitive means to describe potentially complex data analysis procedures. Concurrent architectures can provide parallel implementations of workflows to speed up the analysis of larger data sets and data collections. Yet these will still require procedures that report on the progress of the analysis and inform the user by returning any intermediate results. For complex data analysis tasks, and where computational components for data analysis are abundant, users may benefit from techniques that assist in workflow construction and that propose likely workflow extensions or completions of analysis branches based on their current state and the type of the analyzed data.

**Support for Fast Prototyping**

Prototyping is an efficient way to assemble analysis pipelines and apply them to new data. It allows an expert analyst to try out different methods and adapt them to the data at hand. Prototyping can be done in visual programming interfaces or through scripting, possibly in an interactive console. Each approach has its own merits. Visual interfaces are easier to learn, yet they usually limit the choice of available components. Scripting, however, enables expert users to finely tune their queries and change advanced parameters. Pig complements the Java interface of Apache Hadoop with a high-level language that can be used to combine existing map and reduce jobs, while Dremel uses queries similar to SQL to query the underlying data.

Workflows define a collection and succession of data analysis tasks to be executed on a computational server. Workflow paths by their nature imply parallelism, making concurrent processing architectures ideal for this type of specification of tasks in explorative data analysis. Many current data analysis systems support workflow design, with no consensus on the 'right' granularity of the tasks they implement and combine. Taverna, e.g., can embed either a simple string-processing routine or a complex genome analysis procedure within a single component. An open challenge is what granularity is best for the user of such systems.

**Progress Tracking**

Even with fast implementations of data analysis algorithms and their concurrent execution, the results of data analysis may not be available instantaneously. Execution tracking and estimation of the remaining time are important for analyses that run for longer periods. Estimating the execution time is difficult, in particular for heuristic and iterative algorithms, which abound in data analysis; estimating the execution of parallel queries in a general setup is a challenging problem. The JAM architecture displays the status of execution on the agents in use. Pig/Hadoop's progress estimator shows a percentage-remaining estimate under the assumption that all operators take the same time to complete. Another solution for execution time estimation of MapReduce tasks is ParaTimer, which provides better estimates by considering distribution over different nodes, concurrent execution, failures, and data skew.

**Display of Intermediate Results**

From the user's viewpoint, rendering intermediate results of a data analysis procedure is primarily a feature of the client's interface. The main complexity of the implementation, however, lies within the architecture of the server, where the distribution of tasks has to include the handling of requests for intermediate results and of potential requests to abort the execution. Time-consuming approaches in data mining, such as induction of classification trees in a large random forest or inference of deep neural networks, are iterative. Partial or converging models may be meaningful even before the analysis is completed. For classification trees, the nodes close to the root that are inferred first may already hold potentially interesting information. In a random forest, a smaller subset of trees may already provide useful predictions. Early stages of development of a deep neural network may already be sufficiently accurate. Application clients should thus be able to receive and render partial results of the data analysis as they arrive. Exposing them to the user lets him reconsider the course of the analysis and potentially change it before the ongoing computation is completed. Display of intermediate results depends on the data analysis method, which would need to be appropriately adapted and implemented on the servers. Another challenge for the software architecture is to provide the means to request or push intermediate results and to report on the status of the analysis.
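One common way to surface intermediate results is to make the long-running procedure a generator that yields the partial model after every step, letting the client render it or abort early. The forest induction below is a placeholder, not a real learner.

```python
# Streaming intermediate results from a long-running analysis.
import random
import time

def grow_forest(data, n_trees=100):
    """Stand-in for random-forest induction; yields the partial ensemble
    after every tree so the caller can show converging results."""
    forest = []
    for i in range(n_trees):
        time.sleep(0.01)               # placeholder for real tree induction
        forest.append(("tree", i, random.random()))
        yield list(forest)             # intermediate result

for partial in grow_forest(range(1000)):
    if len(partial) >= 10:             # the client decides it has seen enough
        break
print(f"stopped after {len(partial)} trees")
```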
**Intelligent Data Analysis Interfaces**

Analysis of complex and heterogeneous data sources requires nontrivial analysis pipelines. Future-generation explorative data analysis frameworks will be able to suggest components or analysis pathways based on the data characteristics, previous actions of the analysts, and typical analysis patterns of the broader community. Technologies for such an endeavor may stem from meta-learning, social computing, and recommender systems. Prediction of workflow components (Figure 7) is a recent area of study that also borrows from techniques of network analysis.

FIGURE 7 | Machine prediction of workflow components. The user constructs a workflow with two modeling algorithms (Classification Tree and a rule-based classifier CN2), each followed by components for model visualization (Classification Tree Graph and CN2 Rules Viewer). After he adds another modeling algorithm (Logistic Regression), the framework can anticipate that it will be followed by a visualization component (Nomogram) as well.

**Data Visualization**

Data visualization provides an essential means of communication with the user of an explorative data analysis environment. For exploration, the visualizations need to support interactions, and to sustain speed, the server has to be aware of the limitations of the data transfer. Instead of serving entire data sets, data projections are computed and even partially rendered on the server. Concurrent algorithms on the server may also sift through the vast space of different projections to find those most informative and worthy of display to the user.

**Interactive Visualizations**

Visualizations are an efficient way to gain insight into large data sets; visual analytics thus plays a very important role in exploratory data analysis. Visualization of large data sets is computationally intensive and cannot be performed on the client alone. Visualizations can be rendered on the server and transferred to the client in the form of a video stream. In such a hybrid architecture, the server transforms the data into a representation of manageable size, which is transferred to the client and rendered there. Visualizations on the client may be updated continuously as the stream of processed data arrives from the server. Depending on the type of visualization and its processing needs, the data projection methods need to be adapted for parallel computation. An example of the latter is an adaptation of multidimensional scaling and its parallel implementation.

**Intelligent Data Visualizations**

When data include many variables, the number of possible projections and data views is vast. Intelligent data visualization copes with this by engaging computational algorithms to propose the most informative visualizations. Various data mining methods already address this problem, although on a smaller scale. These include a dynamic walk through interesting visualizations (GrandTour), or techniques such as VizRank that find and rank the most interesting projections (Figure 8). Enumeration of interesting visualizations and the search for paths to explore them are computationally expensive; different visualizations have to be explored through concurrent search and optimization. Processes that run on the server may depend on the actions of the users through, say, the choice of visualizations or the choice of a data subset. This again requires a responsive software architecture that is able to run, abort, and reinitiate tasks on the server, based on events received from the client side.

Another problem that could be solved with intelligent approaches is scaling visual representations that work on small data sets to data of larger volume. The scatterplot, a popular visualization technique for small data sets, is useless for a huge number of data points. For larger data sets, processes on the server would need to estimate the distributions from an informative data sample, and communicate these to the client to render a visualization of the incomplete data set that potentially uncovers interesting patterns. It is the task of procedures for intelligent visualization to score the interestingness and, through concurrent processing, find the data subsets that maximize this score.
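A toy rendition of VizRank-style ranking: score every two-feature projection by how well a leave-one-out 1-nearest-neighbour classifier separates the classes, then offer the best projections first. The scoring in VizRank proper is more elaborate; this sketch only captures the idea.

```python
# Rank 2-D projections of a data set by class separation.
from itertools import combinations
from math import dist

def projection_score(rows, labels, i, j):
    """Leave-one-out 1-NN accuracy in the projection onto features i and j."""
    points = [(r[i], r[j]) for r in rows]
    hits = 0
    for a, (p, label) in enumerate(zip(points, labels)):
        nearest = min((b for b in range(len(points)) if b != a),
                      key=lambda b: dist(p, points[b]))
        hits += labels[nearest] == label
    return hits / len(points)

def rank_projections(rows, labels):
    n_features = len(rows[0])
    scored = [(projection_score(rows, labels, i, j), (i, j))
              for i, j in combinations(range(n_features), 2)]
    return sorted(scored, reverse=True)    # best projections first

rows = [(0.1, 1.0, 5.0), (0.2, 0.9, 3.0), (0.9, 0.1, 4.9), (1.0, 0.2, 3.1)]
labels = ["a", "a", "b", "b"]
print(rank_projections(rows, labels)[0])   # best (score, (i, j)) pair
```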
**Collaboration**

Analysis of real-life data is often a complex process spanning different areas of expertise, and data analysis projects are becoming collaborative efforts. Collaboration interfaces should borrow from the ideas of social and gaming platforms. Collaborative data exploration has to be mapped onto concurrent execution of data analytics tasks to avoid redundancies in data processing and to gain the speed that comes from sharing results rendered for one user with his partners. Collaborative explorative data analysis environments are in their infancy, and we have yet to gain enough experience in this field to propose efficient concurrent solutions.

CONCLUSIONS

Of the software architectures we have reviewed, it is their combination that will most likely be present in future systems for explorative data analysis. The key components will likely be implemented within computer grids, perhaps augmented with specialized hardware for large-scale parallel execution. Current architectures would need much improvement, however, to accommodate the advanced tasks that will be supported by the next generation of concurrent explorative data analysis architectures. They will need to address decreased latency, continuous streaming of results, and improvements on the client side and in user interfaces. Latency could be reduced by incremental analysis of the data that would offer intermediate results prior to completion of the analysis on the entire data set, or even prior to the user's request for the analysis. Software architectures should stream graphical results, as data visualization is one of the key components of exploratory data analysis. Workflow interfaces on the client side need to be adapted to dynamically changing environments, where design and execution are interwoven and intermediate results influence the analysis choices of the user.

FIGURE 8 | Intelligent data visualization by VizRank. The window on the right shows a Radviz visualization of gene expression profiles from Ref 71. VizRank can score Radviz data projections by the degree of separation between data items of different classes and offer them in a browsable ranked list (window on the left).

Future systems for data analytics will predict the course of the analysis and guide users in the design of the analytics pipeline. Computational servers will anticipate the next steps of the analysis and execute selected procedures in advance, balancing the cost of computational resources against response time requirements. Collaborative analysis will add an extra dimension to this problem and will require an additional optimization layer to minimize computation by jointly serving the needs of the community of users. The ever-growing volume of data will greatly impact the design of computer systems for exploratory data analysis.
Existing software architectures are no longer adequate; we need to adapt them and invent new ones. But as this review shows, the process can be gradual and can build on the excellent foundations of concurrent processing and software architectures developed throughout a wide scope of information technologies.

REFERENCES

… SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop. Bioinformatics 2014, 30:119–120.

32. Russell J. Getting Started with Impala: Interactive SQL for Apache Hadoop. Sebastopol, CA: O'Reilly Media; 2014.

33. Prajapati V. Big Data Analytics with R and Hadoop. Birmingham, UK: Packt Publishing; 2013.
{"Source-Url": "http://eprints.fri.uni-lj.si/3191/1/2015-Staric-WIREs-DMKD.pdf", "len_cl100k_base": 7916, "olmocr-version": "0.1.49", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 47932, "total-output-tokens": 12901, "length": "2e12", "weborganizer": {"__label__adult": 0.00034308433532714844, "__label__art_design": 0.0008106231689453125, "__label__crime_law": 0.00035953521728515625, "__label__education_jobs": 0.0012025833129882812, "__label__entertainment": 0.00016558170318603516, "__label__fashion_beauty": 0.00018417835235595703, "__label__finance_business": 0.00036025047302246094, "__label__food_dining": 0.0004482269287109375, "__label__games": 0.0008397102355957031, "__label__hardware": 0.0012722015380859375, "__label__health": 0.0006103515625, "__label__history": 0.0004165172576904297, "__label__home_hobbies": 0.0001252889633178711, "__label__industrial": 0.0005426406860351562, "__label__literature": 0.0003731250762939453, "__label__politics": 0.00024700164794921875, "__label__religion": 0.0004715919494628906, "__label__science_tech": 0.197265625, "__label__social_life": 0.00012636184692382812, "__label__software": 0.042755126953125, "__label__software_dev": 0.75, "__label__sports_fitness": 0.00025177001953125, "__label__transportation": 0.00039577484130859375, "__label__travel": 0.00022280216217041016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 56416, 0.03195]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 56416, 0.26949]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 56416, 0.87223]], "google_gemma-3-12b-it_contains_pii": [[0, 1516, false], [1516, 4025, null], [4025, 6681, null], [6681, 8278, null], [8278, 10691, null], [10691, 15754, null], [15754, 18375, null], [18375, 23869, null], [23869, 26284, null], [26284, 31370, null], [31370, 36473, null], [36473, 40973, null], [40973, 43111, null], [43111, 48156, null], [48156, 53395, null], [53395, 56416, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1516, true], [1516, 4025, null], [4025, 6681, null], [6681, 8278, null], [8278, 10691, null], [10691, 15754, null], [15754, 18375, null], [18375, 23869, null], [23869, 26284, null], [26284, 31370, null], [31370, 36473, null], [36473, 40973, null], [40973, 43111, null], [43111, 48156, null], [48156, 53395, null], [53395, 56416, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 56416, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 56416, null]], "pdf_page_numbers": [[0, 1516, 1], [1516, 4025, 2], [4025, 6681, 3], [6681, 8278, 4], [8278, 10691, 5], [10691, 15754, 6], [15754, 18375, 7], [18375, 23869, 8], [23869, 26284, 9], [26284, 31370, 10], [31370, 
36473, 11], [36473, 40973, 12], [40973, 43111, 13], [43111, 48156, 14], [48156, 53395, 15], [53395, 56416, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 56416, 0.03646]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
18ad9696ccae4f7c2b4a3ddeae37290a0e246d25
METHOD FOR GENERATING AN EXPLANATION OF A CSP SOLUTION

Inventors: Felix Geller, Adliswil (CH); Ronny Morad, Haifa (IL)
Assignee: International Business Machines Corporation, Armonk, NY (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 90 days.
Appl. No.: 11/954,895
Filed: Dec. 12, 2007
Int. Cl.: G06F 9/44 (2006.01); G06G 1/00 (2006.01)
U.S. Cl.: 717/126; 717/104; 706/19
Field of Classification Search: 717/104, 114, 117, 124, 126, 134, 135; 706/19. See application file for complete search history.
References Cited — U.S. Patent Documents: 6,031,984 A, 2/2000, Walser. Foreign Patent Documents: WO 0179993 A2, 10/2001.

ABSTRACT

The invention provides a computer-implemented method for generating a solution to a constraint satisfaction problem (CSP). The method operates to implement various steps that include defining the CSP by a set of variables having finite domains and constraints defined over the variables, solving the CSP by assigning values to said variables that are consistent with the constraints, and debugging the CSP solution. The debugging of the CSP solution is carried out by iteratively executing a propagator to reduce the variable domains. Augmenting the constraints is carried out to supply an explanation for particular values assigned to the variables and for constraints defined over the variables utilized in the solution.

2 Claims, 2 Drawing Sheets. (FIG. 3 annotation: V₁ does not contribute to the reduction of d₁.)

METHOD FOR GENERATING AN EXPLANATION OF A CSP SOLUTION

BACKGROUND OF THE INVENTION

The present invention relates to constraint programming, and more particularly to a system and method for automatically solving constraint satisfaction problems that provides explanations as to why specific constraints were chosen, or why they do not provide solutions, as the case may be.

Constraint programming may be characterized broadly as declarative programming, a technique by which a program essentially describes what is to be computed (processed). Declarative programming is thus distinguished from imperative programming, in which a program describes how an output must be computed. Constraint programs are more specifically referred to as constraint satisfaction problems, or CSPs. CSPs operate upon a set of variables, wherein each variable is "constrained" to a set of values. During operation, each constraint may be applied to a subset of the variables within the set of variables, by which the CSP application program restricts the values that the variables may assume.

Preparing a CSP to represent a combinatorial problem that occurs in practice is referred to as modeling, where the purpose of constraint solving is to generate solutions to the CSP model. Many problems from different domains, such as combinatorial optimization problems and test generation problems, are modeled as CSPs. An example of such a model for a combinatorial optimization problem is described in a paper by Eyal Bin, et al., entitled: Using Constraint Satisfaction Formulations and Solution Techniques for Random Test Program Generation, IBM Systems Journal, Special Issue on AI, August 2002.

Another common approach to solving CSPs includes the use of a "Maintaining Arc Consistency", or MAC, algorithm. MAC algorithms, and constraint problem solving based thereon, are described in detail in a paper by A. K.
Mackworth, entitled: Consistency in Networks of Relations, Artificial Intelligence, 8:99-118, 1977. In MAC operation, a constraint is specified by a propagator. Propagators are procedures that filter from the constraint variables' domains those values that cannot participate in a solution. An execution of a MAC algorithm consists of repeated invocations of constraint propagators, alternating with non-deterministic choices, i.e., assignments of a value or values to a variable. Often, given a solution to a CSP, a user needs to have some understanding as to why specific values or constraints were chosen, or why other values were removed by the constraint propagators. For that matter, where the CSP is determined to be unsatisfiable, the user should understand why. The ability to explain the specific values (or the lack of them) in a solution is a prerequisite for successful CSP modeling. Such an ability supports the task of debugging a CSP model, much as debugging support is a basic need in any successful software development. For that matter, known CSP debugging is normally performed off-line, and is therefore sometimes referred to by the skilled artisan as "post-mortem." During a MAC execution, a trace of events ("trace") is generated and analyzed afterwards, i.e., post mortem, upon completion of the process. The trace gives rise to a CSP execution graph. A CSP execution graph highlights two kinds of nodes: 1) variable domains and 2) solver events that transform variable domains. A directed edge from a variable domain to a solver event denotes an input of that solver event, and a directed edge from a solver event to a variable domain denotes an output. FIG. 1 herein is an exemplary embodiment of a CSP model. In this figure, x and y are two integer variables with initial domains of [1,10] and [3,15], respectively. Each oval node "x>y" is a propagation event, and the oval node "instantiate x" is an instantiation event. Finding explanations for the given solution for a CSP execution graph typically requires traversing the graph backwards. Such explanations correspond to a sub-graph of the CSP execution graph which, when viewed as a set of variable reductions and propagations, supplies evidence-based reasoning supporting why a specific variable was assigned a specific value. There may be numerous explanations for an observed effect, and a significant shortcoming of the conventional process is that the debugging typically must settle on a single explanation. While the whole of the CSP execution graph inherently provides what may be described as a global explanation, practical investigation and specific user needs tend to focus on a minimal sub-graph (or explanation). The term sub-graph as used herein is meant to represent a minimal explanation for an effect: a sub-graph is minimal if no event can be removed from it without the effect no longer being inferable from the remaining sub-graph. Finding a single minimal explanation requires tracking several paths in the execution graph. The task of tracking the several paths to identify a single minimal explanation, when performed manually, is tedious and labor-intensive. Even discovering the immediate causes of a variable reduction by analyzing a single constraint invocation is not trivial. An existing constraint programming environment with explanation support, "PaLM," is described by N. Jussien and V.
Barichard, in their paper: The PaLM System: Explanation-Based Constraint Programming, Proceedings of TRICS: Techniques for Implementing Constraint Programming Systems, a post-conference workshop of CP 2000. PaLM explicitly requires augmenting the constraint propagators for producing explanations. Similar ideas to those used in the PaLM system for producing explanations are discussed by R. J. Wallace and E. C. Freuder in their paper: Explanations for Whom?, in Proceedings of CP 2001. What would be desirable in the art of identifying explanations for given solutions for CSP execution graphs is a system and method that processes and generates explanations in at least two particular cases. The first of the two cases arises where the CSP solver returns no result for the selected variables, i.e., the case of a CSP failure. The second of the two cases arises where the CSP solver returns a specific domain for one of the variables. Moreover, the method preferably would operate offline, or post mortem, based on a trace generated by the CSP solver. SUMMARY OF THE INVENTION To that end, the inventors disclose a system and method for finding explanations for given solutions, or for the lack of a solution, for CSP execution graphs in the at least two particular cases. The system and method operate in a CSP model to identify explanations describing given solutions for CSP execution graphs that return no result, i.e., the case of a CSP failure, and in the case where the CSP solver returns a specific domain for one of the variables. The system and method preferably operate offline, or post mortem, based on a trace generated by the CSP solver. The system and method traverse the CSP execution graph backwards in time. During such reverse traversal, the system and method examine each propagation node to determine through which variable nodes to advance backwards. Doing so essentially determines which variables contributed to the reduction of a domain of a desired variable. An advantage of such a system and method is that, by this novel operation, no additional effort from a user is required to augment the constraint propagators to supply an explanation, as is required in the operation of prior art systems and methods. For example, prior art methods and systems, such as the PaLM explanation-based system, typically require that users provide additional effort in order to realize explanations. Developing a propagator that supplies an explanation is more complex than developing a propagator that does not supply one. In addition, there exist many constraint-based systems with non-explaining propagators. The present invention is constructed to operate within such known prior art systems seamlessly, without the manual or other adaptations known to be required by the prior art. BRIEF DESCRIPTION OF THE DRAWING FIGURES The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of embodiments of the invention, with reference to the drawings, in which: FIG. 1 is a diagram depicting a CSP execution graph that is representative of the process flow of a conventional constraint satisfaction problem (or model); FIG. 2 is a diagram depicting graphically a reduction of a variable domain in a specific propagation, as provided by the invention; and FIG.
3 is a diagram depicting graphically a reduction of a variable domain in a specific propagation, as provided by the invention, in which V1 is found not to contribute to the reduction of D1; and FIG. 4 is a schematic block diagram depicting a computer system in which the invention may be implemented. DETAILED DESCRIPTION OF THE INVENTION The inventive system and method for finding explanations for given solutions, or the lack of a solution, for CSP execution graphs is set forth and described herein for the purpose of conveying the broad inventive concepts. The drawings and descriptions provided are not meant to limit the scope and spirit of the invention in any way. To that end, reference will now be made in detail to the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In one embodiment, the novel method and algorithm begin operation with the "last" propagation. From that starting point, the method determines which variables contributed to the reduction of the domain of the desired variable. Once those variables are identified, the method, for each of the contributing variables, tracks the last propagation that reduced its domain. The method follows the same procedure for each of these propagations. By such novel operation, the system and method traverse the CSP execution graph backwards in time, producing a sub-graph of the CSP execution graph that is presented to the user as the desired explanation. In more detail, the step of the novel operation that determines which variables contributed to the reduction of a domain of one of the variables in a given propagation is carried out by iteratively executing the same propagator with the same input domains. FIG. 2 herein highlights a propagator (p) and the variables V1, V2, . . . , Vn, corresponding to domains D1, D2, . . . , Dn. The inventive operation requires that each iteration use the initial domain of one of the variables as its input domain instead of its original domain at the time of this particular propagation. Such operation assumes that it is possible to re-execute the propagators several times, preferably offline. The resulting output, as shown in FIG. 2, corresponds to d_out. FIG. 3 herein shows the FIG. 2 arrangement functioning whereby the first variable V1 is found not to have contributed to a reduction. In FIG. 3, ID1 is the initial domain of V1, and D_out is the output domain returned when the propagator is re-executed with ID1 for V1 and the same inputs for variables V2, . . . , Vn. When D_out = d_out, it means that V1 does not contribute to the reduction of the output domain in this particular propagation. Otherwise, V1 contributes to the reduction. The method may be implemented as a set of instructions that are executed by a processor to implement the novel, iterative operation. For that matter, the following is pseudo-code that, when turned into a set of fully executable, computer-readable instructions, carries out the novel operation of this algorithm. The pseudo-code for the novel procedure is set out in "findExplanation" and "findLocalExplanation" below. Instantiation nodes are treated as propagation nodes.
```
findExplanation(PropagationNode pn, OutputVariable ov):
    Graph eg = { }                     /* explanation sub-graph */
    VariableSet vs = findLocalExplanation(pn, ov)
    Foreach v in vs:
        Let lpn be the last propagation node that reduced the domain of v
        eg_v = findExplanation(lpn, v)
        eg = UNION(eg, eg_v)
    return eg

findLocalExplanation(PropagationNode pn, OutputVariable ov):
    /* handle case of instantiation node */
    if pn is an instantiation node then
        Let iv be the input node for pn
        return iv
    VariableSet vs = { }               /* affecting variable set */
    Propagator p = Propagator(pn)      /* let p be the propagator of pn */
    /* initialize used domains of the input variables */
    Foreach vi that is an input node of pn:
        UD(vi) = D(vi)                 /* D(vi) is the domain of variable node vi */
    Foreach vi that is an input node of pn:
        d_out = the output domain of ov returned by the activation of propagator p
                with inputs UD(v1), . . . , ID(vi), . . . , UD(vN)
        if d_out = the output domain of variable ov then
            /* variable vi does not affect the reduction of the domain of ov */
            UD(vi) = ID(vi)
        else
            vs = UNION(vs, vi)
    return vs

In case of a failed CSP (an empty domain is reached):
    Let fp be the failed propagation
    Let v be one of the variables of fp
    eg = findExplanation(fp, v)
    return eg
```
The various method embodiments of the invention will generally be implemented by a computer executing a sequence of program instructions for carrying out the steps of the method, assuming all required data for processing is accessible to the computer. The sequence of program instructions may be embodied in a computer program product comprising media storing the program instructions. As will be readily apparent to those skilled in the art, the present invention can be realized in hardware, software, or a combination of hardware and software. Any kind of computer/server system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software could be a general-purpose computer system with a computer program that, when loaded and executed, carries out the method, and variations on the method, as described herein. Alternatively, a specific-use computer, containing specialized hardware for carrying out one or more of the functional tasks of the invention, could be utilized. A computer-based system 400 is depicted in FIG. 4, by which the method of the present invention may be carried out. Computer system 400 includes a processing unit 441, which houses a processor, memory and other system components that implement a general-purpose processing system or computer that may execute a computer program product. The computer program product may comprise media, for example a compact storage medium such as a compact disc, which may be read by the processing unit 441 through a disc drive 442, or by any means known to the skilled artisan for providing the computer program product to the general-purpose processing system for execution thereby. The computer program product comprises all the respective features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods.
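As an illustration of the procedures set out in the pseudo-code above, the following is a minimal Python sketch of the backward traversal. The data structures, field names and the propagator signature are illustrative assumptions layered on top of the pseudo-code, not taken from the patent itself:

```python
# Illustrative sketch only: PropagationNode, trace_by_var and the propagator
# signature are assumptions; the patent prescribes no concrete implementation.

from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Optional

@dataclass
class PropagationNode:
    propagator: Callable[[Dict[str, FrozenSet]], FrozenSet]  # domains in, output domain out
    input_domains: Dict[str, FrozenSet]  # input domains at the time of this propagation
    output_var: str
    output_domain: FrozenSet             # output domain this propagation produced

def find_local_explanation(pn: PropagationNode,
                           initial_domains: Dict[str, FrozenSet]) -> List[str]:
    """Return the input variables that contributed to the output reduction."""
    affecting: List[str] = []
    used = dict(pn.input_domains)              # UD(vi) in the pseudo-code
    for vi in pn.input_domains:
        trial = dict(used)
        trial[vi] = initial_domains[vi]        # ID(vi): widen one input to its initial domain
        d_out = pn.propagator(trial)
        if d_out == pn.output_domain:
            used[vi] = initial_domains[vi]     # vi did not affect the reduction
        else:
            affecting.append(vi)               # vi contributed to the reduction
    return affecting

def find_explanation(pn: PropagationNode,
                     trace_by_var: Dict[str, Optional[PropagationNode]],
                     initial_domains: Dict[str, FrozenSet],
                     explanation: Optional[List[PropagationNode]] = None
                     ) -> List[PropagationNode]:
    """Walk the execution graph backwards in time, collecting contributing propagations."""
    if explanation is None:
        explanation = []
    explanation.append(pn)
    for v in find_local_explanation(pn, initial_domains):
        last = trace_by_var.get(v)             # last propagation that reduced v's domain
        if last is not None and last not in explanation:
            find_explanation(last, trace_by_var, initial_domains, explanation)
    return explanation
```

Here find_local_explanation mirrors the re-execution step of FIG. 2: only the variables whose widened (initial) domains change the recorded output are kept as contributors.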
Computer program, software program, program, or software, in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form. The computer program product may be stored on hard disk drives within processing unit 441 (as mentioned) or may be located on a remote system such as a server 443, coupled to processing unit 441, via a network interface such as an Ethernet interface. Monitor 444, mouse 445 and keyboard 446 are coupled to the processing unit 441 to provide user interaction. Scanner 447 and printer 448 are provided for document input and output. Printer 448 is shown coupled to the processing unit 441 via a network connection, but may be coupled directly to the processing unit. Scanner 447 is shown coupled to the processing unit 441 directly, but it should be understood that such peripherals may be network coupled, or direct coupled, without affecting the ability of the processing unit 441 to perform the method of the invention. Preferably, the computer-implemented method includes finding a minimal explanation for said solution, and the augmenting of the constraint propagators that is carried out to supply the explanation may include operating first on a last propagation. For that matter, the method preferably includes that the step of augmenting determines which variables contributed to a reduction of the domain of a particular variable. Moreover, the inventive method preferably includes iteratively executing the same propagator with the same input domains, such that each iteration uses the initial domain of one of the variables as its input domain in lieu of its original domain at the time of propagation. Although a few examples of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents. What is claimed is: 1.
A computer-implemented method for generating an explanation of a solution to a constraint satisfaction problem (CSP), said computer-implemented method comprising the steps of: - providing a CSP solution having assigned values to variables that are consistent with constraints defined for each variable; - determining which variables contributed to a reduction of a domain of a desired output variable in a given propagation, said determining comprising: - traversing said CSP solution and, at each propagation node traversed, - iteratively executing the same corresponding propagator at the propagation node used to reduce the variable domain, wherein, at each iteration: - executing the same corresponding propagator with the input domain being an initial domain of one of the input variables "vi" instead of its original domain at the time of the propagation; - determining a corresponding output domain "d_ov" returned by the activation of the same propagator p with the input domain of the input variable being said initial domain of one of the variables; and, - determining whether the returned "d_ov" is equal to the output domain of the desired output variable at the time of propagation; - if equal, then the input variable "vi" does not affect the reduction of the domain of the desired output variable; otherwise, - determining that the replaced variable "vi" does affect the reduction of the domain of the desired output variable; and, - augmenting the constraints to supply an explanation for particular values assigned to said variables, and constraints defined over said variables utilized in said solution. 2. A computer program product, the computer program product comprising: a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method as set forth in claim 1.
{"Source-Url": "https://www.lens.org/images/patent/US/7523445/B1/US_7523445_B1.pdf", "len_cl100k_base": 4254, "olmocr-version": "0.1.49", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 7490, "total-output-tokens": 6887, "length": "2e12", "weborganizer": {"__label__adult": 0.0003859996795654297, "__label__art_design": 0.0003905296325683594, "__label__crime_law": 0.0010290145874023438, "__label__education_jobs": 0.001270294189453125, "__label__entertainment": 9.375810623168944e-05, "__label__fashion_beauty": 0.00018680095672607425, "__label__finance_business": 0.0010442733764648438, "__label__food_dining": 0.0003387928009033203, "__label__games": 0.0007710456848144531, "__label__hardware": 0.002399444580078125, "__label__health": 0.0005879402160644531, "__label__history": 0.0002815723419189453, "__label__home_hobbies": 0.00012981891632080078, "__label__industrial": 0.0007572174072265625, "__label__literature": 0.0004169940948486328, "__label__politics": 0.00031685829162597656, "__label__religion": 0.0003304481506347656, "__label__science_tech": 0.12939453125, "__label__social_life": 6.598234176635742e-05, "__label__software": 0.018035888671875, "__label__software_dev": 0.8408203125, "__label__sports_fitness": 0.0002092123031616211, "__label__transportation": 0.0005793571472167969, "__label__travel": 0.00014352798461914062}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27848, 0.0744]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27848, 0.61464]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27848, 0.88938]], "google_gemma-3-12b-it_contains_pii": [[0, 2625, false], [2625, 8366, null], [8366, 8381, null], [8381, 8436, null], [8436, 15733, null], [15733, 22294, null], [22294, 27848, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2625, true], [2625, 8366, null], [8366, 8381, null], [8381, 8436, null], [8436, 15733, null], [15733, 22294, null], [22294, 27848, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27848, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27848, null]], "pdf_page_numbers": [[0, 2625, 1], [2625, 8366, 2], [8366, 8381, 3], [8381, 8436, 4], [8436, 15733, 5], [15733, 22294, 6], [22294, 27848, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27848, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
38e631410ac9a104bcb1cef6912c0736edaa89fb
Red Hat Ansible Automation Platform Ansible Linux Automation Workshop Introduction to Ansible for Red Hat Enterprise Linux Automation for System Administrators and Operators What you will learn - Overview of public cloud provisioning - Converting shell commands into Ansible commands - Retrieving information from hosts - Deploying applications at scale - Self-service IT via surveys - Overview of System Roles for Red Hat Enterprise Linux - Overview of Red Hat Insights integration Introduction Topics Covered: - What is the Ansible Automation Platform? - What can it do? Automation happens when one person meets a problem they never want to solve again. Many organizations share the same challenge: too many unintegrated, domain-specific tools. Why the Ansible Automation Platform? **Powerful** Orchestrate complex processes at enterprise scale. **Simple** Simplify automation creation and management across multiple domains. **Agentless** Easily integrate with hybrid environments. Automate the deployment and management of automation Your entire IT footprint Do this... Orchestrate | Manage configurations | Deploy applications | Provision / deprovision | Deliver continuously | Secure and comply On these... Firewalls | Load balancers | Applications | Containers | Virtualization platforms | Servers | Clouds | Storage | Network devices | And more... Break down silos Different teams, a single platform What makes a platform? Red Hat Ansible Automation Platform Content creators - Automation controller Operators - Automation hub Domain experts - Automation services catalog Users - Insights for Ansible Automation Platform Fueled by an open source community Ansible content domains - Infrastructure - Linux - Windows - Cloud - Network - Security Ansible command line Red Hat named a Leader in The Forrester Wave™: Infrastructure Automation Platforms, Q3 2020 Received the highest possible score in the criteria of: - Deployment functionality - Product vision - Partner ecosystem - Supporting products and services - Community support - Planned product enhancements ▸ "Ansible continues to grow quickly, particularly among enterprises that are automating networks. The solution excels at providing a variety of deployment options and acting as a service broker to a wide array of other automation tools." ▸ "Red Hat's solution is a good fit for customers that want a holistic automation platform that integrates with a wide array of other vendors' infrastructure." Source: DISCLAIMER: The Forrester Wave™ is copyrighted by Forrester Research, Inc. Forrester and Forrester Wave™ are trademarks of Forrester Research, Inc. The Forrester Wave™ is a graphical representation of Forrester's call on a market and is plotted using a detailed spreadsheet with exposed scores, weightings, and comments. Forrester does not endorse any vendor, product, or service depicted in the Forrester Wave™.
Ansible automates technologies you use Time to automate is measured in minutes Operating Systems - RHEL - Linux - Windows - +more Cloud - AWS - Azure - Digital Ocean - Google - OpenStack - Rackspace - +more Virt & Container - Docker - VMware - RHV - OpenStack - OpenShift - +more Windows - ACLs - Files - Packages - IIS - Regedits - Shares - Services - Configs - Users - Domains - +more Network - A10 - Arista - Aruba - Cumulus - Bigswitch - Cisco - Dell - Extreme - F5 - Lenovo - MikroTik - Juniper - OpenSwitch - +more Security - Checkpoint - Cisco - CyberArk - F5 - Fortinet - Juniper - IBM - Palo Alto - Snort - +more Storage - Netapp - Red Hat Storage - Infinidat - +more Monitoring - Dynatrace - Datadog - LogicMonitor - New Relic - Sensu - +more Devops - Jira - GitHub - Vagrant - Jenkins - Slack - +more Topics Covered: - Understanding the Ansible infrastructure - Check the prerequisites The lab environment today - **Drink our own champagne.** Provisioned by, configured by, and managed by Red Hat Ansible Automation Platform. https://github.com/ansible/workshops - **Learn with the real thing.** Every student will have their own fully licensed Red Hat Ansible Tower control node. No emulators or simulators here. - **Red Hat Enterprise Linux.** All four nodes are enterprise Linux, showcasing real-life use cases to help spark ideas for what you can automate today. How does it work?
**Provision** - **Resources** - Subnets, gateways, security groups, SSH keys - **Instances** - RHEL, Cisco, Arista, Checkpoint, Windows, etc. - **Inventory** - Load and sort newly created instances for further automation **Configure** - **Ansible environment** - Install Ansible Tower, SSH config, user accounts, etc. - **Code Server** - Configure in-browser text editor and terminal - **DNS** - Configure DNS names for all control nodes **Manage** - **Login Website** - Dynamically create login webpage for students - **Instructor Inventory** - Provide inventory and login information and master key - **Log Information** - Record student count and instructor for statistics Exercise 1 Topics Covered: - Understanding the Ansible infrastructure - Check the prerequisites Create [Diagram: the Ansible content experience - the automation lifecycle (create, discover, build, trust) involving content creators and domain experts, with Automation Hub (Red Hat cloud / on-premises) and the Ansible content domains: infrastructure, Linux, Windows, cloud, network, security]

```yaml
---
- name: install and start apache
  hosts: web
  become: yes

  tasks:
    - name: httpd package is present
      yum:
        name: httpd
        state: latest

    - name: latest index.html file is present
      template:
        src: files/index.html
        dest: /var/www/html/

    - name: httpd is started
      service:
        name: httpd
        state: started
```

What makes up an Ansible playbook? Plays Modules Plugins Ansible plays What am I automating? What are they? The top-level specification for a group of tasks. A play tells Ansible which hosts it will execute on and controls behavior such as fact gathering or privilege level. Building blocks for playbooks Multiple plays can exist within an Ansible playbook, each executing on different hosts.

```yaml
---
- name: install and start apache
  hosts: web
  become: yes
```

Ansible modules The "tools in the toolkit" What are they? Parametrized components with internal logic, representing a single step to be done. The modules "do" things in Ansible. Language Usually Python, or PowerShell for Windows setups, but they can be written in any language. Ansible plugins The "extra bits" What are they? Plugins are pieces of code that augment Ansible's core functionality. Ansible uses a plugin architecture to enable a rich, flexible, and expandable feature set. Example become plugin:

```yaml
- name: install and start apache
  hosts: web
  become: yes
```

Example filter plugins:

```yaml
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
```

Ansible Inventory The systems that a playbook runs against What are they? A list of systems in your infrastructure that automation is executed against

```
[web]
webserver1.example.com
webserver2.example.com

[db]
dbserver1.example.com

[switches]
leaf01.internal.com
leaf02.internal.com
```

Ansible roles Reusable automation actions What are they? Group the tasks and variables of your automation in a reusable structure. Write roles once, and share them with others who have similar challenges in front of them.

```yaml
---
- name: install and start apache
  hosts: web
  roles:
    - common
    - webservers
```

Collections Simplified and consistent content delivery What are they?
Collections are a data structure containing automation content: - Modules - Playbooks - Roles - Plugins - Docs - Tests

```yaml
---
- name: Install NGINX Plus
  hosts: all
  tasks:
    - name: Install NGINX
      include_role:
        name: nginxinc.nginx
      vars:
        nginx_type: plus

    - name: Install NGINX App Protect
      include_role:
        name: nginxinc.nginx_app_protect
      vars:
        nginx_app_protect_setup_license: false
        nginx_app_protect_remove_license: false
        nginx_app_protect_install_signatures: false
```

Why the Red Hat Ansible Automation Platform? 90+ certified platforms How Ansible Automation Works For network devices and API endpoints, module code is executed locally on the control node (local execution); for Linux and Windows hosts, module code is copied to the managed node, executed, then removed (remote execution). Verify Lab Access - Follow the steps provided to access the environment - Use the IP provided to you; the script only has an example IP - Which editor do you use on the command line? If you don't know, we have a short intro Lab Time Complete exercise **1-setup** now in your lab environment Exercise 2 Topics Covered: - Ansible inventories - Accessing Ansible docs - Modules and getting help Inventory - Ansible works against multiple systems in an inventory - Inventory is usually file based - Can have multiple groups - Can have variables for each group or even host Ansible Inventory The Basics An example of a static Ansible inventory including systems with IP addresses as well as fully qualified domain names (FQDNs):

```
[app1srv]
appserver01 ansible_host=10.42.0.2
appserver02 ansible_host=10.42.0.3

[web]
node-[1:30] ansible_host=10.42.0.[31:60]

[web:vars]
apache_listen_port=8080
apache_root_path=/var/www/mywebdocs/

[all:vars]
ansible_user=kev
ansible_ssh_private_key_file=/home/kev/.ssh/id_rsa
```

Groups can also be nested into parent groups:

```
[nashville]
bnaapp01
bnaapp02

[atlanta]
atlapp03
atlapp04

[south:children]
atlanta
nashville
```

Accessing the Ansible docs With the latest command line utility, ansible-navigator, you can list all the modules available to you, as well as view details on specific modules. A formal introduction to ansible-navigator, and how it can be used to run playbooks, follows in the next exercise. Accessing the Ansible docs Aside from providing a full list of all the modules, you can use ansible-navigator to show details about a specific module. In this example, we are getting information about the user module.
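For instance, both of the following invocations display the documentation for the user module; the subcommand and the -m stdout flag follow the ansible-navigator conventions shown later in this deck:

```
# Print the module documentation directly to stdout
$ ansible-navigator doc user -m stdout

# Or browse the same documentation in the TUI
$ ansible-navigator doc user
```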
Lab Time Complete exercise 2-thebasics now in your lab environment Exercise 3 Topics Covered: - Playbook basics - Running a playbook Ansible playbooks

```yaml
---
- name: install and start apache
  hosts: web
  become: yes

  tasks:
    - name: httpd package is present
      yum:
        name: httpd
        state: latest

    - name: latest index.html file is present
      template:
        src: files/index.html
        dest: /var/www/html/

    - name: httpd is started
      service:
        name: httpd
        state: started
```

Running Playbooks The most important colors of Ansible: green - a task executed as expected, no change was made; yellow - a task executed as expected, making a change; red - a task failed to execute successfully. A playbook run Where it all starts - A playbook is interpreted and run against one or multiple hosts, task by task. The order of the tasks defines the execution. - In each task, the module does the actual work. Running an Ansible Playbook Using the latest ansible-navigator command What is ansible-navigator? ansible-navigator is a command line utility and text-based user interface (TUI) for running and developing Ansible automation content. It replaces ansible-playbook, the previous command used to run playbooks. $ ansible-navigator run playbook.yml How do I use ansible-navigator? As previously mentioned, it replaces the ansible-playbook command.
As such it brings two methods of running playbooks: - Direct command-line interface - Text-based user interface (TUI)

```
# Direct command-line interface method
$ ansible-navigator run playbook.yml -m stdout

# Text-based User Interface method
$ ansible-navigator run playbook.yml
```

ansible-navigator Mapping to previous Ansible commands <table> <thead> <tr> <th>ansible command</th> <th>ansible-navigator command</th> </tr> </thead> <tbody> <tr> <td>ansible-config</td> <td>ansible-navigator config</td> </tr> <tr> <td>ansible-doc</td> <td>ansible-navigator doc</td> </tr> <tr> <td>ansible-inventory</td> <td>ansible-navigator inventory</td> </tr> <tr> <td>ansible-playbook</td> <td>ansible-navigator run</td> </tr> </tbody> </table> ## ansible-navigator ### Common subcommands <table> <thead> <tr> <th>Name</th> <th>Description</th> <th>CLI Example</th> <th>Colon command within TUI</th> </tr> </thead> <tbody> <tr> <td>collections</td> <td>Explore available collections</td> <td>ansible-navigator collections --help</td> <td>:collections</td> </tr> <tr> <td>config</td> <td>Explore the current ansible configuration</td> <td>ansible-navigator config --help</td> <td>:config</td> </tr> <tr> <td>doc</td> <td>Review documentation for a module or plugin</td> <td>ansible-navigator doc --help</td> <td>:doc</td> </tr> <tr> <td>images</td> <td>Explore execution environment images</td> <td>ansible-navigator images --help</td> <td>:images</td> </tr> <tr> <td>inventory</td> <td>Explore an inventory</td> <td>ansible-navigator inventory --help</td> <td>:inventory</td> </tr> <tr> <td>replay</td> <td>Explore a previous run using a playbook artifact</td> <td>ansible-navigator replay --help</td> <td>:replay</td> </tr> <tr> <td>run</td> <td>Run a playbook</td> <td>ansible-navigator run --help</td> <td>:run</td> </tr> <tr> <td>welcome</td> <td>Start at the welcome page</td> <td>ansible-navigator welcome --help</td> <td>:welcome</td> </tr> </tbody> </table> Lab Time Complete exercise 3-playbooks now in your lab environment Exercise 4 Topics Covered: - Working with variables - What are facts?

```yaml
---
- name: variable playbook test
  hosts: localhost
  vars:
    var_one: awesome
    var_two: ansible is
    var_three: "{{ var_two }} {{ var_one }}"
  tasks:
    - name: print out var_three
      debug:
        msg: "{{ var_three }}"
```

Running this playbook prints: ansible is awesome Ansible Facts - Just like variables, really... - ... but: coming from the host itself!
- Check them out with the setup module:

```yaml
tasks:
  - name: Collect all facts of host
    setup:
      gather_subset:
        - 'all'
```

A complete facts playbook:

```yaml
---
- name: facts playbook
  hosts: localhost
  tasks:
    - name: Collect all facts of host
      setup:
        gather_subset:
          - 'all'
```

$ ansible-navigator run playbook.yml ## Playbook: facts playbook ### Results: <table> <thead> <tr> <th>PLAY NAME</th> <th>OK</th> <th>CHANGED</th> <th>UNREACHABLE</th> <th>FAILED</th> <th>SKIPPED</th> <th>IGNORED</th> <th>IN PROGRESS</th> <th>TASK COUNT</th> <th>PROGRESS</th> </tr> </thead> <tbody> <tr> <td>facts playbook</td> <td>2</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>2</td> <td>COMPLETE</td> </tr> </tbody> </table> ### Details: - **RESULT**: - **HOST**: localhost - **NUMBER**: 0 - **CHANGED**: False - **TASK**: Gathering Facts - **TASK ACTION**: gather_facts - **DURATION**: 1s - **RESULT**: - **HOST**: localhost - **NUMBER**: 1 - **CHANGED**: False - **TASK**: Collect all facts of host - **TASK ACTION**: setup - **DURATION**: 1s ### Tasks: - **ansible_facts**: - ansible_all_ipv4_addresses: - 10.0.2.100 - ansible_all_ipv6_addresses: - fe80::1caa:f0ff:fe15:23c4 Ansible Inventory - Managing Variables in Files

```
$ tree ansible-files/
├── deploy_index_html.yml
├── files
│   ├── dev_web.html
│   └── prod_web.html
├── group_vars
│   └── web.yml
└── host_vars
    └── node2.yml
```

Ansible Inventory - Managing Variables in Files

```
$ cat group_vars/web.yml
---
stage: dev

$ cat host_vars/node2.yml
---
stage: prod
```

```yaml
- name: copy web.html
  copy:
    src: "{{ stage }}_web.html"
    dest: /var/www/html/index.html
```

Lab Time Complete exercise 4-variables now in your lab environment Exercise 5 Topics Covered: - Surveys Controller surveys allow you to configure how a job runs via a series of questions, making it simple to customize your jobs in a user-friendly way. An Ansible Controller survey is a simple question-and-answer form that allows users to customize their job runs. Combine that with Controller's role-based access control, and you can build simple, easy self-service for your users. Creating a Survey (1/2) Once a Job Template is saved, the Survey menu will show an **Add button**. Click the button to open the Add Survey window. Creating a Survey (2/2) The Add Survey window allows the Job Template to prompt users for one or more questions. The answers provided become variables for use in the Ansible Playbook. Using a Survey When launching a job, the user will now be prompted with the Survey. The user can be required to fill out the Survey before the Job Template will execute. Lab Time Complete exercise 5-surveys now in your lab environment Exercise 6 Topics Covered: - Red Hat Enterprise Linux System Roles [Diagram: Ansible content (roles & collections) distributed from Automation Hub and Ansible Galaxy to physical sites] Linux System Roles Collection - A consistent user interface to provide settings to a given subsystem that is abstracted from any particular implementation Examples: kdump, network, selinux, timesync

```yaml
---
- name: example system roles playbook
  hosts: web
  tasks:
    - name: Configure Firewall
      include_role:
        name: linux-system-roles.firewall

    - name: Configure Timesync
      include_role:
        name: redhat.rhel_system_roles.timesync
```

The timesync role is referenced from the RHEL System Roles Collection. Lab Time Complete exercise 6-system-roles now in your lab environment Reports: Provide executive summaries of automation across the organization Changes made by job template The total count of changes made by each job template in a specified time window.
You can use this report to ensure the correct number of changes are made per hostname, as well as to see which job templates are making the most changes to your infrastructure. Where to go next Learn more - Workshops - Documents - YouTube - Twitter Get started - Evals - cloud.redhat.com Get serious - Red Hat Automation Adoption Journey - Red Hat Training - Red Hat Consulting Thank you
{"Source-Url": "https://aap2.demoredhat.com/decks/ansible_rhel_90.pdf", "len_cl100k_base": 5605, "olmocr-version": "0.1.50", "pdf-total-pages": 72, "total-fallback-pages": 0, "total-input-tokens": 83000, "total-output-tokens": 7774, "length": "2e12", "weborganizer": {"__label__adult": 0.00033545494079589844, "__label__art_design": 0.0004794597625732422, "__label__crime_law": 0.0003571510314941406, "__label__education_jobs": 0.01474761962890625, "__label__entertainment": 0.00022745132446289065, "__label__fashion_beauty": 0.0001735687255859375, "__label__finance_business": 0.003154754638671875, "__label__food_dining": 0.0003037452697753906, "__label__games": 0.000728607177734375, "__label__hardware": 0.0013723373413085938, "__label__health": 0.00030422210693359375, "__label__history": 0.00032520294189453125, "__label__home_hobbies": 0.00020956993103027344, "__label__industrial": 0.0007758140563964844, "__label__literature": 0.0002963542938232422, "__label__politics": 0.0003643035888671875, "__label__religion": 0.0003731250762939453, "__label__science_tech": 0.031463623046875, "__label__social_life": 0.00035953521728515625, "__label__software": 0.264404296875, "__label__software_dev": 0.67822265625, "__label__sports_fitness": 0.00023984909057617188, "__label__transportation": 0.0004436969757080078, "__label__travel": 0.00034308433532714844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21120, 0.01167]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21120, 0.16656]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21120, 0.76971]], "google_gemma-3-12b-it_contains_pii": [[0, 176, false], [176, 487, null], [487, 579, null], [579, 662, null], [662, 752, null], [752, 993, null], [993, 1368, null], [1368, 1420, null], [1420, 1803, null], [1803, 3082, null], [3082, 5587, null], [5587, 5680, null], [5680, 6170, null], [6170, 6883, null], [6883, 6981, null], [6981, 7228, null], [7228, 7599, null], [7599, 7659, null], [7659, 8056, null], [8056, 8323, null], [8323, 8740, null], [8740, 9033, null], [9033, 9350, null], [9350, 9543, null], [9543, 9942, null], [9942, 10012, null], [10012, 10252, null], [10252, 10462, null], [10462, 10530, null], [10530, 10633, null], [10633, 10811, null], [10811, 10965, null], [10965, 11246, null], [11246, 11530, null], [11530, 11634, null], [11634, 11935, null], [11935, 12156, null], [12156, 12223, null], [12223, 12292, null], [12292, 12667, null], [12667, 13037, null], [13037, 13387, null], [13387, 13577, null], [13577, 13792, null], [13792, 14137, null], [14137, 14524, null], [14524, 14966, null], [14966, 16323, null], [16323, 16390, null], [16390, 16462, null], [16462, 16701, null], [16701, 16960, null], [16960, 17191, null], [17191, 17369, null], [17369, 18259, null], [18259, 18496, null], [18496, 18738, null], [18738, 18805, null], [18805, 18844, null], [18844, 19224, null], [19224, 19372, null], [19372, 19557, null], [19557, 19728, null], [19728, 19793, null], [19793, 19862, null], [19862, 19963, null], [19963, 20160, null], [20160, 20477, null], [20477, 20547, null], [20547, 20907, null], [20907, 21111, null], [21111, 21120, null]], "google_gemma-3-12b-it_is_public_document": [[0, 176, true], [176, 487, null], [487, 579, null], [579, 662, null], [662, 752, null], [752, 993, null], [993, 1368, null], [1368, 1420, null], [1420, 1803, null], [1803, 3082, null], [3082, 5587, null], [5587, 5680, null], [5680, 6170, null], [6170, 6883, 
null], [6883, 6981, null], [6981, 7228, null], [7228, 7599, null], [7599, 7659, null], [7659, 8056, null], [8056, 8323, null], [8323, 8740, null], [8740, 9033, null], [9033, 9350, null], [9350, 9543, null], [9543, 9942, null], [9942, 10012, null], [10012, 10252, null], [10252, 10462, null], [10462, 10530, null], [10530, 10633, null], [10633, 10811, null], [10811, 10965, null], [10965, 11246, null], [11246, 11530, null], [11530, 11634, null], [11634, 11935, null], [11935, 12156, null], [12156, 12223, null], [12223, 12292, null], [12292, 12667, null], [12667, 13037, null], [13037, 13387, null], [13387, 13577, null], [13577, 13792, null], [13792, 14137, null], [14137, 14524, null], [14524, 14966, null], [14966, 16323, null], [16323, 16390, null], [16390, 16462, null], [16462, 16701, null], [16701, 16960, null], [16960, 17191, null], [17191, 17369, null], [17369, 18259, null], [18259, 18496, null], [18496, 18738, null], [18738, 18805, null], [18805, 18844, null], [18844, 19224, null], [19224, 19372, null], [19372, 19557, null], [19557, 19728, null], [19728, 19793, null], [19793, 19862, null], [19862, 19963, null], [19963, 20160, null], [20160, 20477, null], [20477, 20547, null], [20547, 20907, null], [20907, 21111, null], [21111, 21120, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21120, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21120, null]], "pdf_page_numbers": [[0, 176, 1], [176, 487, 2], [487, 579, 3], [579, 662, 4], [662, 752, 5], [752, 993, 6], [993, 1368, 7], [1368, 1420, 8], [1420, 1803, 9], [1803, 3082, 10], [3082, 5587, 11], [5587, 5680, 12], [5680, 6170, 13], [6170, 6883, 14], [6883, 6981, 15], [6981, 7228, 16], [7228, 7599, 17], [7599, 7659, 18], [7659, 8056, 19], [8056, 8323, 20], [8323, 8740, 21], [8740, 9033, 22], [9033, 9350, 23], [9350, 9543, 24], [9543, 9942, 25], [9942, 10012, 26], [10012, 10252, 27], [10252, 10462, 28], [10462, 10530, 29], [10530, 10633, 30], [10633, 10811, 31], [10811, 10965, 32], [10965, 11246, 33], [11246, 11530, 34], [11530, 11634, 35], [11634, 11935, 36], [11935, 12156, 37], [12156, 12223, 38], [12223, 12292, 39], [12292, 12667, 40], [12667, 13037, 41], [13037, 13387, 42], [13387, 13577, 43], [13577, 13792, 44], [13792, 14137, 45], [14137, 14524, 46], [14524, 14966, 47], [14966, 16323, 48], [16323, 16390, 49], [16390, 16462, 50], [16462, 16701, 51], [16701, 16960, 52], [16960, 17191, 53], [17191, 17369, 54], [17369, 18259, 55], [18259, 18496, 56], [18496, 18738, 57], [18738, 18805, 58], [18805, 18844, 59], [18844, 19224, 60], [19224, 19372, 61], [19372, 19557, 62], [19557, 19728, 63], [19728, 19793, 64], [19793, 19862, 65], [19862, 19963, 66], [19963, 20160, 67], [20160, 20477, 68], [20477, 20547, 69], [20547, 20907, 70], [20907, 21111, 71], [21111, 21120, 72]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21120, 0.06019]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
d9c5244d1bcb6ccc5dd738416562fd481398d968
Towards a Language for Coherent Enterprise Architecture Descriptions Henk Jonkers¹, René van Buuren¹, Farhad Arbab³, Frank de Boer³, Marcello Bonsangue⁴, Hans Bosma³, Hugo ter Doest¹, Luuk Groenewegen⁴, Juan Guillen Scholten³, Stijn Hoppenbrouwers², Maria-Eugenia Iacob¹, Wil Janssen¹, Marc Lankhorst¹, Diederik van Leeuwen¹, Erik Proper⁷, Andries Stam⁴⁵, Leon van der Torre³, Gert Veldhuijzen van Zanten² ¹Telematica Instituut, P.O. Box 589, 7500 AN Enschede, the Netherlands Phone: +31 53 4850485, fax: +31 53 4850400, e-mail: Henk.Jonkers@telin.nl ²University of Nijmegen, Nijmegen, the Netherlands. ³Centrum voor Wiskunde en Informatica, Amsterdam, the Netherlands ⁴Leiden Institute for Advanced Computer Science, Leiden, the Netherlands ⁵Ordina Public Consulting, Rosmalen, the Netherlands Abstract A coherent description of architectures provides insight, enables communication among different stakeholders and guides complicated (business and ICT) change processes. Unfortunately, so far no architecture description language exists that fully enables integrated enterprise modelling. In this paper we focus on the requirements and design of such a language. This language defines generic, organisation-independent concepts that can be specialised or composed to obtain more specific concepts to be used within a particular organisation. It is not our intention to re-invent the wheel for each architectural domain: wherever possible we conform to existing languages or standards such as UML. We complement them with missing concepts, focussing on concepts to model the relationships among architectural domains. The concepts should also make it possible to define links between models in other languages. The relationship between architecture descriptions at the business layer and at the application layer (business-IT alignment) plays a central role. 1. Introduction Changes in a company's strategy and business goals have significant consequences for the organisation structure, processes, software systems, data management and technical infrastructures. Companies have to adjust processes to their environment, open up internal systems and make them transparent to both internal and external parties. Architectures are a way to chart the complexity involved. Many enterprises have recognised the value of architectures and to some extent make use of them during system evolution and development. Depending on the type of enterprise or the maturity of the architecture practice, in most cases a number of separate architectural domains are distinguished, such as the product, business, information and application domains. For each architectural domain, architects have their own concepts, modelling techniques, tool support, visualisation techniques and so on. Clearly, this way of working does not necessarily lead to a coherent view on the enterprise. Enterprises want to have insight into complex change processes. The development of coherent views of an enterprise and a disciplined architectural working practice significantly contribute to the solution of this complex puzzle. Coherent views provide insight and overview, enable communication among different stakeholders and guide complicated change processes. Unfortunately there is a downside to this euphoria. So far no architecture description language exists that fully enables integrated enterprise modelling. There is a need for an architecture language that enables coherent enterprise modelling. Architects need proper instruments to construct architectures in a uniform way.
Figure 1 illustrates the scope of such an integrated set of architecture instruments. Important elements of such an approach include: • The development of a coherent enterprise modelling language. • Development of specialised views and visualisation techniques in order to provide insight for different stakeholders. • Development of analysis techniques that aid in understanding the complex models. By using a uniform modelling language, architects can avoid a Babel-like confusion. At the same time, an architectural modelling language should allow the development of specialised visualisation techniques for different stakeholders, such as end-users, project managers and system developers. After all, architectures are the means by which architects communicate with the different stakeholders, and this communication works best if it is tailored towards the specific concerns and information needs that they have. Additionally, analysis techniques, for example impact-of-change analysis, provide ways to study the properties of an integrated model in more detail. In this way architecture provides the desired insight and overview, which allows a well-organised change process. We realise that multiple languages and dialects will always exist. Striving for one unique language would be like tilting at windmills. Therefore, the flexibility to use other languages is recognised, and is addressed by means of a specialisation and generalisation requirement of the language itself. In our view a well-defined enterprise architecture language forms the core of such an architecture approach. In this paper we focus on the requirements and a first design of such a language. It is not our intention to re-invent the wheel for each architectural domain. Where possible, we follow standards such as UML as closely as possible. The focus is on the identification of specific relationship concepts and the definition of cross-domain relations. In order to arrive at a coherent architectural description, several architectural domains and layers, as well as their relations, must be modelled. This paper describes the first steps towards a language to support this. The relations between the business and application layer, which play a central role in this version of the language, are a first contribution to the solution of the business-ICT alignment problem that we try to tackle. The structure of this paper is as follows. In Section 2 we give an overview of related work. Section 3 describes principles that provide requirements for our language. In Section 4 the actual metamodel is presented. Section 5 illustrates the use of the language with an example. Finally, in Section 6 we draw some conclusions and give some suggestions for future work. 2. Related work For the state of the art in enterprise modelling, we have to consider languages for organisation and process modelling and languages for application and technology modelling. Although there is a trend towards considering the relationship between the organisational processes and the information systems and applications that support them (often referred to as "business-IT alignment"), modelling techniques to really express this relationship hardly exist yet. A wide variety of organisation and process modelling languages are currently in use: there is no single standard for models in this domain. The conceptual domains that are covered differ from language to language. In many languages, the relations between domains are not clearly defined.
Also, most languages are not really suitable to describe architectures: they provide concepts to model, e.g., detailed business processes, but not the high-level relationships between different processes. Some of the most popular languages are proprietary to a specific software tool. Relevant languages in this category include: - The ebXML set of standards for XML-based electronic business, developed by OASIS and UN/CEFACT, specifies the Business Process Specification Schema [4]. It provides a standard framework by which business systems may be configured to support the execution of business collaborations consisting of business transactions. It is focussed on the external behaviour of processes for the sake of automating electronic commerce transactions. It is therefore less suited for general enterprise architecture modelling. - The Business Process Modeling Language BPML [1], of the Business Process Management Initiative, is an XML-based language for modelling business processes that has roots in the workflow management world. It can be used to describe the inner workings of, e.g., ebXML business processes. - IDEF [9], originating from the US Department of Defense, is a collection of 16 (unrelated) diagramming techniques, three of which are widely used: IDEF0 (function modelling), IDEF1/IDEF1x (information and data modelling) and IDEF3 (process description). - ARIS [16] is part of the widely used ARIS Toolset. Although ARIS also covers other conceptual domains, there is a clear focus on business process modelling and organisation modelling. - The Testbed language for business process modelling [5], developed by the Telematica Instituut, is used by a number of large Dutch organisations in the information sector. We have gained a lot of experience with both the definition and the practical use of this language, and it has provided important inspiration for the definition of business-layer concepts. In contrast to organisation and business process modelling, for which there is no single dominant language, in modelling applications and technology the Unified Modeling Language (UML) [3] has become a true world standard. UML is the mainstream modelling approach within ICT, and its use is expanding into other areas, e.g., in business modelling [6]. Another example is the UML profile for Enterprise Distributed Object Computing (EDOC), which provides an architecture and modelling support for collaborative or Internet computing, with technologies such as web services, Enterprise Java Beans, and CORBA components [15]. This makes UML an important language not only for modelling software systems, but also for business processes and for general business architecture. UML has either incorporated or superseded most of the older ICT modelling techniques still in use. However, UML is not easily accessible and understandable for managers and business specialists; therefore, special visualisations and views of UML models should be provided. Another important weakness of UML is the large number of diagram types, with poorly defined relations between them. Given the importance of UML, other modelling languages will likely provide an interface or mapping to it. Architecture description languages (ADLs) define high-level concepts for architecture description, such as components and connectors. A large number of ADLs have been proposed, some for specific application areas, some more generally applicable, but mostly with a focus on software architecture.
In [13] the basics of ADLs are described and the most important ADLs are compared with each other. Most have an academic background, and their application in practice is limited. However, they have a sound formal foundation, which makes them suitable for unambiguous specifications and amenable to different types of analysis. The ADL ACME [8] is widely accepted as a standard to exchange architectural information, also between other ADLs. There are initiatives to integrate ACME in UML, both by defining translations between the languages and by a collaboration with OMG to include ACME concepts in UML 2.0 [19]. In this way, the concepts will be made available to a large user base and be supported by a wide range of software tools. This obviates the need for a separate ADL for modelling software systems. The Architecture Description Markup Language (ADML) was originally developed as an XML encoding of ACME. The Open Group promotes ADML as a standard for enterprise architectures. The Reference Model for Open Distributed Processing (RM-ODP) is a joint ISO/ITU-T standard for the specification of open distributed systems. It defines five viewpoints on an ODP system, each of which has its own specification language. For example, for the enterprise viewpoint, which describes the purpose, scope and policies of a system, the RM-ODP Enterprise Language has been defined, in which, e.g., business objectives and business processes can be modelled [11]. Although the above overview shows that there is a fairly complete language coverage of the separate architectural domains, the integration between the languages for the different domains is weak. In this paper, therefore, we focus on a language that makes this integration possible. Within the architectural domains, we reuse elements from existing languages as much as possible.

3. Language requirements and principles

In this section we discuss the principles underlying our approach, which provide requirements for the architecture description language.

3.1 Metamodel flexibility

A key challenge in the development of a general metamodel for enterprise architecture is to strike a balance between the specificity of the concepts used in different organisations and a very general set of architecture concepts which reflects a view of systems as a mere set of interrelated entities. This effort is illustrated in Figure 2. At the base of the triangle, we find the metamodels of the architecture modelling concepts used by specific organisations, as well as a variety of existing modelling languages and standards. At the top of the triangle we find the “most general” metamodel for system architectures, essentially a metamodel merely comprising the notions of “thing” and “relationship”. The metamodel that we propose defines the concepts somewhere between these extremes, referred to as “enterprise architecture concepts”. These concepts are applicable to describe enterprise architectures of any information-intensive organisation and, if desired, they can be further specialised or composed to form concepts tailored towards a more specific context. Alternatively, as will be explained in the next subsection, the enterprise architecture concepts can be used to integrate more specific models described in other languages. The enterprise architecture concepts themselves can be defined as specialisations or compositions of the generic concepts at the top of the triangle.
Another way to look at this is to view the generic concepts as a general means to define the enterprise architecture concepts: they can be considered the concepts to describe the metamodel. This is a powerful tool to attain metamodel flexibility (see, e.g., [12]). This approach is very similar to OMG’s Meta Object Facility (MOF) [14], which, at the highest abstraction level, defines a hardwired meta-metamodel that is used to define metamodels for different languages. It is subject to further study within the project whether the MOF meta-metamodel can be used as the basis for our most generic language.

3.2 Integration of heterogeneous models

In current practice, architectural descriptions are heterogeneous in nature: each domain has its own description techniques, textual or graphical, informal or with a precise meaning. One of the most important goals of our metamodel is to bridge the gaps between these domains, by providing a common conceptual foundation for architectural descriptions. There are two ways in which the incorporation of different languages can be achieved:
- The concepts of other languages can be described in terms of our general concepts, for example, as specialisations or compositions of these concepts. In other words, the complete descriptions are translated into our model.
- Descriptions in other languages, or parts thereof, can be associated with objects in our model. This may be done in a ‘formal’ way, in which certain ‘main’ concepts from the original language are mapped onto our concepts. However, a simple link, for example to a text document, is also possible. The models in the original language remain intact. This solution is illustrated in Figure 3.
An advantage of the former solution is that analysis and visualisation techniques defined for the enterprise architecture concepts can be applied to the entire model. An advantage of the latter solution is that existing descriptions can be reused as a whole, in a form that is still recognisable by the original designer. Our metamodel should allow for both types of model integration.

3.3 Multiple views and visualisations

In accordance with IEEE standard 1471 [10], we assume that, given an architectural description, different views on this model can be created. These views only show selected aspects of the complete description that are relevant for a certain type of stakeholder. Views are described with the same concepts (or a subset of the concepts) used for a complete architectural description. Another important principle in our approach is that we separate the definition of the concepts and their representation. For the precise description of concepts it suffices to define their abstract syntax [7] and their semantics (depending on the application of the models, e.g., to perform certain types of analysis). The concrete syntax, i.e., the actual (graphical) notation that is used to represent the concepts and their relationships, can be chosen independently of their formal definition; this notation may depend on, for example, the selected view or the preferences of an organisation. Figure 4 illustrates the separation of the (input) model, views on this model and the representation of these views. A viewpoint definition, based on stakeholder concerns, determines the selection (or derivation) of view content and the way in which this content is presented (or visualised) to the stakeholder. In certain view presentations, it may be possible to modify the view content, which in turn may modify the original model. This is indicated by the ‘update’ arrows in the figure.

Figure 4. Separation of models, views and presentations
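To make this separation tangible, the following Python fragment is a toy sketch of our own (it is not part of the language or tooling described in this paper): a model is a set of tagged elements, a viewpoint is a selection predicate derived from stakeholder concerns, and the presentation is a rendering that can be chosen independently of the concepts.

```python
# A minimal sketch (our own, illustrative only) of the model/view/
# presentation separation of Figure 4. All names are assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Element:
    name: str
    layer: str   # e.g. "business" or "application"
    aspect: str  # "structure", "behaviour" or "information"

Model = List[Element]
Viewpoint = Callable[[Element], bool]  # stakeholder-driven selection

def view(model: Model, viewpoint: Viewpoint) -> Model:
    """Derive a view: the subset of the model relevant to a stakeholder."""
    return [e for e in model if viewpoint(e)]

def present(v: Model) -> str:
    """One possible concrete syntax; others can be chosen independently."""
    return "\n".join(f"[{e.layer}/{e.aspect}] {e.name}" for e in v)

model = [Element("Take out insurance", "business", "behaviour"),
         Element("Policy data", "application", "information")]
behaviour_only: Viewpoint = lambda e: e.aspect == "behaviour"
print(present(view(model, behaviour_only)))
```

The point of the sketch is that `view` and `present` are independent: the same selected content can be rendered in different concrete syntaxes without touching the model or the concepts.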
4. The metamodel

In the previous sections the requirements for an architecture language were discussed. In this section we further explore the design of such a language, resulting in a first version of a metamodel for coherent architecture descriptions.

4.1 Framework

When studying architecture methods like TOGAF (see http://www.togaf.org) or tools like ARIS [16], and taking into account our experience in actual organisations, it appears that roughly the following architectural domains can be distinguished:
- The Product domain, describing the products or services that an enterprise offers to its customers.
- The Organisation domain, describing the actors (employees, organisational units), and the roles they may fulfil, working together in processes to deliver products.
- The Process domain, describing business processes or business functions that offer products or services.
- The Information domain, describing information that is relevant from a business perspective.
- The Data domain, describing information suitable for automated processing.
- The Application domain, describing software applications that support business processes or functions.
- The Technical infrastructure domain, describing hardware platforms and the technical communication infrastructure needed to support applications.
As observed earlier, an important requirement for our language is to abstract from domain-specific concepts as much as possible. Revealing the similarities between the concepts used in the above domains yields a first abstraction that leads to a more generic language. In our view a ‘system’ in a broad sense, for example an organisation or software system, primarily consists of a set of actors (“active things”) that have at least three aspects that should be considered. Actors have structure, i.e., actors can be composed of other actors. In this sense, structure describes the static properties of an actor. Actors show behaviour (dynamics) and are likely to exchange information. Next to the identification of these three aspects, we take a common layered approach distinguishing a business, application and technology layer. These aspects together with the different layers constitute a framework (see Figure 5) consisting of nine cells. The cells in this framework show resemblance to the cells in the Zachman framework [18]. For further clarification the architectural domains mentioned earlier are projected into this framework. The goal of this paper is not to present a new framework: the framework is mainly intended to guide the design of the metamodel. We observe that, to identify relevant concepts that fill the cells in the framework, the framework does not have to be strictly applied. It is impossible and undesirable to define strict boundaries between layers or aspects. Especially considering the fact that we focus on the relations among architectural domains, it is likely that concepts are required to link the various aspects and layers. Typically, such concepts cross the boundaries indicated in the framework. For practical reasons, however, it may be useful to define a ‘basic representation’ of the concepts, as we will do to express our example in Section 5. In this paper we restrict ourselves to the ‘operational’ issues in an enterprise, i.e. the issues that directly contribute to the primary processes and business goals.
Our aim is to describe the relations between existing concepts or define specific relationship concepts in order to arrive at the desired coherence. Therefore, we draw inspiration from existing architecture languages or approaches such as UML, Testbed [5] and the RM-ODP Enterprise Language [11]. In addition to the concepts that are required to describe the various architectural domains, inter-domain metamodels are necessary to define the relation concepts between two or more domains. In this way, a hierarchy of domain and inter-domain metamodels can be constructed (see Figure 6).

Figure 6. Domain and inter-domain concepts

The order in which the aspects are presented is arbitrary: any two aspects may be related to each other. In contrast, the layers in the framework constitute a functional or system hierarchy. We do not model all inter-layer relations explicitly. Following a common layered approach (e.g., the OSI model), layers are directly related only to layers directly above or below them. In order to preserve the readability and clarity of models, we also do not model the ‘diagonal’ relations between cells explicitly. In our view these relations are not required for modelling the main coherency. These relations can be derived if necessary.

4.3 Concepts and metamodel

It is our assumption that, in principle, the same generic concepts can be used to describe the structure, behaviour and information aspects of systems in all three layers of the framework in Section 4.1. In spite of the general applicability of these generic concepts, it is still very useful to also define the concepts specific to each layer. These specific concepts are more easily recognised by the relevant stakeholders. Moreover, they are needed to make the relations between the layers explicit, which is an important goal of our approach. In most cases, the layer-specific concepts are straightforward specialisations of the generic concepts. In Table 1 we first summarise the most important generic concepts that we have identified, after which we discuss their main relationships. <table> <thead> <tr> <th>Concept</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Behaviour element</td> <td>Unit of behaviour. Services can be offered or used by a behaviour element</td> </tr> <tr> <td>Action</td> <td>Atomic behaviour element performed by a single actor</td> </tr> <tr> <td>Process</td> <td>Grouping of causally related actions</td> </tr> <tr> <td>Function</td> <td>Grouping of actions according to, e.g., required expertise, skill, resources, etc.</td> </tr> <tr> <td>Interaction</td> <td>Atomic behaviour element performed by more than one actor</td> </tr> <tr> <td>Service</td> <td>Behaviour made available to the environment.
A service is offered by a behaviour element and can be used by another behaviour element.</td> </tr> <tr> <td>Transaction</td> <td>Grouping of interactions with the environment, with a predefined result and with restrictions on the order in which the interactions may occur</td> </tr> <tr> <td>Event</td> <td>Something that happens and may influence behaviour (e.g., a trigger)</td> </tr> <tr> <td>Actor/component</td> <td>Entity that is capable of performing behaviour</td> </tr> <tr> <td>Interface</td> <td>The (logical) location where the behaviour of a component can be accessed</td> </tr> <tr> <td>Role</td> <td>Representation of a collection of responsibilities that may be fulfilled by one or more actors</td> </tr> <tr> <td>Collaboration/Connector</td> <td>Connects roles and interfaces with actors and components, respectively</td> </tr> <tr> <td>Data object</td> <td>Representation of information</td> </tr> <tr> <td>Message</td> <td>Data object intended to be exchanged by actors</td> </tr> <tr> <td>Document</td> <td>Persistent representation of data expressed by means of some medium</td> </tr> <tr> <td>Medium</td> <td>Physical entity or system substantiating data</td> </tr> <tr> <td>Information</td> <td>The interpretation of data as perceived by an actor</td> </tr> </tbody> </table>

Table 1. Overview of concepts

Figure 7 gives an overview of the overall generic metamodel using standard UML notation. It shows the main concepts for each of the aspects, as well as the main links between them. A distinction is made between the externally visible behaviour of an actor (services) and the internal behaviour that is required to realise these services. Services are accessible via the role/interface of an actor, whereas the actor or component itself performs the actual behaviour. A behaviour element can manipulate or use data elements in various ways. A message is exchanged between actors via services. One link between the information and the structure aspect that we distinguish is that information may pertain to a certain actor. Manipulation of data by an actor always involves behaviour. Clearly, there are more direct relations between the structure aspect and concerns such as governance and responsibility; however, these do not fall under the “operational” view that we consider in this paper. Up to now we considered the relations between aspects. The corresponding metamodel in Figure 7 is generic in the sense that it applies to both layers. In Table 2 we give a possible translation of the most important concepts to more specific terms for the business layer and the application layer. Figure 8 shows a condensed version of the metamodel worked out for the business and application layer, emphasising the relations between the layers.

---

**Table 2. Specialisations of the concepts at the business and application layer**

---

**Figure 7. Summary of metamodel**

---

**Figure 8. Core of the metamodel** (relating business objects, business behaviour elements, roles and actors at the business layer to external application services, software functions, interfaces and components at the application layer, through relations such as *manages*, *uses*, *offers*, *has*, *exposes*, *fulfils* and *performs*)
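The generic concepts of Table 1 and the main links of Figure 7 can be paraphrased in code. The following Python sketch is our own reading of those links, not the paper's formal metamodel; the instance names are reused from the insurance example of Section 5 where possible.

```python
# A minimal sketch (our own reading of Table 1/Figure 7) of the generic
# concepts: actors perform behaviour, roles are fulfilled by actors,
# behaviour elements offer services and use data objects.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataObject:
    name: str

@dataclass
class Service:
    name: str  # behaviour made available to the environment

@dataclass
class BehaviourElement:
    name: str
    offers: List[Service] = field(default_factory=list)
    uses: List[DataObject] = field(default_factory=list)

@dataclass
class Actor:
    name: str
    performs: List[BehaviourElement] = field(default_factory=list)

@dataclass
class Role:
    name: str
    fulfilled_by: List[Actor] = field(default_factory=list)

# External behaviour (a service) versus the internal behaviour realising it:
premium = Service("Premium collection")
collect = BehaviourElement("Collect premium", offers=[premium],
                           uses=[DataObject("Invoice")])
insurer = Actor("ArchiSurance", performs=[collect])
role = Role("Insurer", fulfilled_by=[insurer])
print(role)
```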
In our view the strongest relation between the layers lies within the behaviour aspect. In line with the trend towards ‘service orientation’, both at the business level (‘service organisation’) and the application level (e.g., web services), we relate the layers by means of services (see Figure 9). In each layer, internal and external services are defined. Internal services are offered and used within a layer. External services, on the other hand, are offered by a layer and used by the next higher layer. Moreover, external services of a higher layer may depend on services in the same architectural layer or one layer below. Examples are external business services (‘customer services’) or external application services that are used by ‘the business’.

Figure 9. Hierarchy of services

For the information aspect, we do not directly link the two layers: data objects in the application layer are available to the business layer only through services that are offered by applications. A business object is a unit of information that is relevant from a business perspective. It can be substantiated by a medium like a physical document or text on a computer screen. Without the active mediation of an application service, a business representation of application data cannot be achieved: for example, a printing service (realised by a printer and printing application) is required to transform a Word document into a hardcopy. As for the structure aspect, an (application) interface is the location where components in the application layer interact with business actors. Therefore, ‘interface’ can be considered a linking concept comparable to the service concept for the behaviour aspect. Summarising, we observe that behaviour is the central aspect: structure and information are linked through behaviour. We note that the current relation concepts capture the main relations between the concepts from different layers and aspects. It is likely that other relations exist or that further refinement of the relations results in more relation types.

5. Example

Let us illustrate the use of our concepts by means of a simple example. For this purpose, we first propose a basic representation of the concepts.

Figure 10. Representation of main concepts

Figure 11 provides an example of a model for the business layer, describing the three aspects and their relationships. It describes a situation where a client requests insurance and receives an invoice for the premium. The model is not complete but shows how business layer concepts can be used.

Figure 11. Example business layer model

The client and insurance company (ArchiSurance) are represented by the Insurance buyer and Insurer role, respectively. The request of the client results in a trigger (open arrow) for the ‘Take out insurance’ process, which consists of several sub-processes. Each sub-process generates a trigger for the next sub-process, indicated by the open arrow. After the request has been received and processed, the sub-process ‘Collect premium’ offers the ‘Premium collection’ transaction, in which the Insurance buyer and Insurer settle the agreement. The invoice is sent to the Insurance buyer as part of the collect premium transaction. Figure 12 provides an example of a model for the application layer. Numerous views on this model are possible, each emphasising different elements of the model. In order to emphasise the three aspects and their relationships, we create a layered view distinguishing the information, behaviour and structure aspects.
It describes an application consisting of two components, linked by means of a connector. Each component realises an application function, which in turn offers an application service that can be used by the ‘business’ (closed arrows). The two application functions are linked by means of an internal application service, which uses a message to transfer the required transaction data. The ‘Billing’ function also uses ‘pricing data’, which is internal to this function.

Figure 12. Example application layer model

Joining the two previous figures by relating the sub-processes ‘Process request’ and ‘Collect premium’ with the services ‘Transaction entry’ and ‘Premium collection’ yields a coherent model. In this way the business layer and application layer are linked, taking into account the three aspects and their relationships. From this model the link can be derived between, for example, the request for insurance and the application components required to support this request. Adding relevant attributes and assigning appropriate values may allow for more complex types of analysis. Note that it would be helpful to develop views (see Section 3.3) that may be used to select and visualise the relevant elements from this model (which for this small example already becomes fairly complex). For example, a simple view showing only how application services support the sub-processes at the business layer may provide useful insight for several stakeholders in the organisation (see Figure 13); a sketch of how such a view might be derived follows below.

Figure 13. View of the link between the layers
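The sketch below (our own construction, continuing the toy Python used earlier) derives the content of such a cross-layer view mechanically from the ‘uses’ links established when joining Figures 11 and 12. The two component names are assumed for illustration, since the text does not name the components of Figure 12.

```python
# A sketch (ours, not the paper's) of deriving the Figure 13 view:
# which application components support which business sub-processes.
from typing import Dict, List, Tuple

# Cross-layer links established when joining Figures 11 and 12:
uses_service: List[Tuple[str, str]] = [
    ("Process request", "Transaction entry"),
    ("Collect premium", "Premium collection"),
]
# Each service is offered by a function realised by a component.
# Both component names below are assumed, illustrative names.
offered_by: Dict[str, str] = {
    "Transaction entry": "Transaction entry component",
    "Premium collection": "Billing component",
}

def supporting_components(process: str) -> List[str]:
    """Derive the components a business sub-process depends on."""
    return [offered_by[s] for p, s in uses_service if p == process]

for process in ("Process request", "Collect premium"):
    print(process, "->", supporting_components(process))
```

This is essentially an impact-of-change question in miniature: the same traversal, run in reverse, answers which business processes are affected when a component changes.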
6. Conclusions and future work

In this paper we identified a number of principles and requirements for a language for coherent enterprise architecture descriptions, and we presented a first version of such a language. This language serves to bring the many separate architectural descriptions for specific domains closer together, as at present no architectural language exists that makes it possible to describe the coherence of an enterprise as a whole. Since separate languages and their corresponding approaches are deeply embedded in organisations, it is not advisable to develop an entirely new language. Therefore, our new language aims to embrace and complement successful and widely adopted languages. The concepts of our language for enterprise architecture description hold the middle ground between the detailed concepts used in various organisations and very general architecture concepts that view systems merely as entities and their interrelations. Proper generalisation and specialisation mechanisms to link concepts from a generic architecture language and specific modelling languages are still required for the practical application of the language. The language forms a basis for bridging the heterogeneity of existing languages. Although the details still need to be worked out, models originating from various tools can potentially be linked. This stimulates possible reuse in a form that is still recognisable for the original designer. In an architecture that encompasses several models, multiple views provide an essential instrument to handle the complexity. Based on the complex coherent model, relevant information can be selected depending on the stakeholder concerns. Likewise, it is possible to present this information in a way that suits the stakeholder. Concepts in our metamodel have been inspired by international standards and cover the business and application layers. For each layer the information, behaviour and structure aspects are described, as well as the main relations between these aspects. Moreover, the relations between the business and application layers are identified. We think that services are a suitable way to relate the layers with respect to the behaviour aspect. The relations between the layers with respect to the other aspects are weaker: both the structure and the information aspect of two layers are linked mainly through behaviour. By means of a simple example we showed that our concepts can be used to make a coherent description covering all aspects and layers within an enterprise. Even this limited example demonstrates that the complexity of the integrated models will be a problem. The development of views that select and visualise relevant elements from these models for specific stakeholders helps to fully exploit the models. The work described in this paper is part of an ongoing project called ArchiMate. Here we focus on the general requirements of an architecture language and the core concepts and their relations. Further work will involve, among other things:
- Further specification of the detailed relations between concepts, aspects and layers.
- Further specification of concepts, for example, by means of attributes.
- Extension of the metamodel to the technological infrastructure layer and the product domain.
- Formalisation of the metamodel to allow for analysis or automated visualisation.
- Identification of relevant viewpoints and related visualisations.
- Integration with other tool support environments.
- Further practical validation of the metamodel.

Acknowledgements

This paper results from the ArchiMate project (http://archimate.telin.nl), a research initiative that aims to provide concepts and techniques to support architects in the visualisation, communication and analysis of integrated architectures. The ArchiMate consortium consists of ABN AMRO, Stichting Pensioenfonds ABP, the Dutch Tax and Customs Administration, Ordina, Telematica Instituut, Centrum voor Wiskunde en Informatica, Katholieke Universiteit Nijmegen, and the Leiden Institute of Advanced Computer Science. We would like to thank Henk Eertink for his valuable comments to improve this paper.
{"Source-Url": "https://repository.ubn.ru.nl/bitstream/handle/2066/111114/111114.pdf?sequence=1", "len_cl100k_base": 7107, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 32653, "total-output-tokens": 8470, "length": "2e12", "weborganizer": {"__label__adult": 0.00042176246643066406, "__label__art_design": 0.00292205810546875, "__label__crime_law": 0.0005207061767578125, "__label__education_jobs": 0.001522064208984375, "__label__entertainment": 0.0001232624053955078, "__label__fashion_beauty": 0.00022721290588378904, "__label__finance_business": 0.0018482208251953125, "__label__food_dining": 0.0004811286926269531, "__label__games": 0.000576019287109375, "__label__hardware": 0.0010519027709960938, "__label__health": 0.0006814002990722656, "__label__history": 0.000507354736328125, "__label__home_hobbies": 0.00013363361358642578, "__label__industrial": 0.0009508132934570312, "__label__literature": 0.0006160736083984375, "__label__politics": 0.0004277229309082031, "__label__religion": 0.0006504058837890625, "__label__science_tech": 0.09796142578125, "__label__social_life": 9.322166442871094e-05, "__label__software": 0.01422119140625, "__label__software_dev": 0.873046875, "__label__sports_fitness": 0.0002605915069580078, "__label__transportation": 0.0006999969482421875, "__label__travel": 0.000244140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39540, 0.01644]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39540, 0.68785]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39540, 0.90956]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 3950, false], [3950, 8736, null], [8736, 13409, null], [13409, 17592, null], [17592, 20844, null], [20844, 25215, null], [25215, 27772, null], [27772, 30224, null], [30224, 34155, null], [34155, 39540, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 3950, true], [3950, 8736, null], [8736, 13409, null], [13409, 17592, null], [17592, 20844, null], [20844, 25215, null], [25215, 27772, null], [27772, 30224, null], [30224, 34155, null], [34155, 39540, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39540, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39540, null]], "pdf_page_numbers": [[0, 0, 1], [0, 3950, 2], [3950, 8736, 3], [8736, 13409, 4], [13409, 17592, 5], [17592, 20844, 6], [20844, 25215, 7], [25215, 27772, 8], [27772, 30224, 9], [30224, 34155, 10], [34155, 39540, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39540, 0.11962]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
e06faff77e97aff1eb95ec36ee4b1c1827114767
Complexity of the CFP, a Method for Classification Based on Feature Partitioning

H. Altay Güvenir and İzzet Şirin
Computer Engineering and Information Science Department, Bilkent University, Ankara 06533, TURKEY

Abstract. This paper presents a new methodology for learning from examples, called Classification by Feature Partitioning (CFP). Learning in CFP is accomplished by storing the objects separately in each feature dimension as disjoint partitions of values. A partition is expanded through generalization or specialized by subdividing it into sub-partitions. It is shown that the CFP algorithm has a low sample and training complexity.

1 Introduction

Several representation techniques have been used to describe concepts for supervised learning tasks. Exemplar-based learning techniques store only specific examples that are representatives of several other similar instances. Previous implementations of this approach usually extended the nearest neighbor algorithm, which uses some kind of similarity metric for classification. The classification complexity of such algorithms is proportional to the number of objects stored. This paper presents another form of exemplar-based learning, called Classification by Feature Partitioning (CFP). The CFP makes several significant improvements over other exemplar-based learning algorithms, where the examples are stored in memory without any change in the representation. For example, IBL algorithms learn a set of instances, which is a representative subset of all training examples [1]. On the other hand, the CFP partitions each feature into segments corresponding to concepts. Therefore, a concept description learned by the CFP is a collection of feature partitions. The CFP algorithm can be seen to produce a special kind of decision tree (e.g., ID3 [3]). Unlike ID3, the CFP probes each feature exactly once. An important difference between the decision tree approach and the CFP is that the classification performance of the CFP does not depend critically on any small part of the model. In contrast, decision trees are much more susceptible to small alterations. Similar to CFP and ID3, the probabilistic learning system called PLS1 also creates orthogonal hyperrectangles by inserting boundaries parallel to instance space axes [4]. The PLS1 system starts from the most general description and applies only specializations. The CFP algorithm is briefly described in the next section. Section 3 presents the sample complexity and the training complexity analysis of the CFP algorithm with respect to the Probably Approximately Correct (PAC) learning theory [6]. The final section discusses the applicability of the CFP and concludes with a general evaluation of the algorithm.

2 The CFP Algorithm

The CFP algorithm learns the projection of the concepts over each feature dimension. An example is given as a vector of feature values plus a label that represents its class. A partition is the basic unit of representation in the CFP algorithm. For each partition, the lower and upper bounds of the feature values, the associated class, and the representativeness value (the number of instances it represents) are maintained. Initially, a partition is a point (lower and upper limits are equal) on the line representing the feature dimension. For instance, suppose that the first example \( e_1 \) of class \( C_1 \) is given during the training phase.
If the value of \( e_1 \) for feature \( f \) is \( x_1 \), then the set of possible values for feature \( f \) will be partitioned into three partitions: \( \langle [-\infty, x_1), U \rangle \), \( \langle [x_1, x_1], C_1, 1 \rangle \), \( \langle (x_1, \infty], U \rangle \), where \( U \) stands for an undetermined partition. A partition can be extended towards a neighboring point of the same class in an undetermined partition. Assume that the second example \( e_2 \) with class \( C_1 \), whose value for \( f \) is \( x_2 \), is close to \( e_1 \) in feature \( f \). In that case the CFP will generalize the partition at \( x_1 \) on \( f \) into an extended partition \( \langle [x_1, x_2], C_1, 2 \rangle \). Since partitions are disjoint, the CFP algorithm takes care to avoid overgeneralization. In order to generalize a partition in feature \( f \) to cover a new example, the distance between them must be less than a given generalization limit \( D_f \). Otherwise, the new example is stored as another point partition in the feature dimension \( f \). If the feature value of a training example falls in a partition with the same class, then simply the representativeness value of the partition is incremented by one. However, if the new training example falls in a partition with a different class than that of the example, the CFP algorithm specializes the existing partition by dividing it into two range partitions and inserting a point partition (corresponding to the new example) in between them. The training process of the CFP has two steps: (1) learning the feature weights, and (2) learning the feature partitions. In order to learn appropriate feature weights, for each training example the prediction of each feature is compared with the actual class of the example. If the prediction of a feature is correct, the weight of that feature is incremented by \( \Delta \) (the global feature weight adjustment rate) percent; otherwise, it is decremented by the same amount (all weights are initially set to 1). Classification in the CFP is based on a voting taken among the predictions made by each feature separately. For a given instance \( e \), the prediction based on a feature \( f \) is determined by the value of \( e_f \). If \( e_f \) falls properly within a partition with a known class then the prediction is the class of that partition. If \( e_f \) falls in a point partition then, among all the partitions at this point, the one with the highest representativeness value is chosen. If \( e_f \) falls in a partition with undetermined class value, then no prediction for that feature is made. The effect of the prediction of a feature in the voting is proportional to the weight of that feature. The predicted class of a given instance is the one which receives the highest amount of votes among all feature predictions.
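To make the partition mechanics concrete, the following is a minimal Python sketch of our own (not the authors' implementation). It keeps each feature's partitions sorted, generalizes only towards the upper neighbour within \( D_f \), splits a partition when a conflicting class arrives, and uses binary search for the logarithmic lookup discussed in Section 3. The \( \Delta \)-based weight update is omitted; weights enter only through the vote.

```python
# A minimal sketch of CFP's core mechanics (ours, not the authors' code).
# Simplifications: generalization only extends a partition upwards,
# counts are kept unchanged after a split, and the point-partition
# highest-representativeness tie rule is not implemented.
from bisect import bisect_right

class FeaturePartitions:
    def __init__(self, d_f):
        self.d_f = d_f    # generalization limit D_f for this feature
        self.parts = []   # [low, high, cls, count], sorted by low
        self.lows = []    # parallel key list enabling O(log M) search

    def _locate(self, x):
        return bisect_right(self.lows, x) - 1

    def train(self, x, cls):
        i = self._locate(x)
        if i >= 0:
            low, high, pcls, cnt = self.parts[i]
            if low <= x <= high:
                if pcls == cls:
                    self.parts[i][3] += 1            # same class: bump count
                else:                                # specialize: split, with a
                    self.parts[i:i+1] = [[low, x, pcls, cnt],  # point partition
                                         [x, x, cls, 1],       # in between
                                         [x, high, pcls, cnt]]
                    self.lows[i:i+1] = [low, x, x]
                return
            if pcls == cls and x - high <= self.d_f:
                self.parts[i][1] = x                 # generalize within D_f
                self.parts[i][3] += 1
                return
        self.parts.insert(i + 1, [x, x, cls, 1])     # new point partition
        self.lows.insert(i + 1, x)

    def predict(self, x):
        i = self._locate(x)
        if i >= 0 and self.parts[i][0] <= x <= self.parts[i][1]:
            return self.parts[i][2]
        return None                                  # undetermined: no vote

def classify(features, weights, instance):
    """Weighted vote over the per-feature predictions."""
    votes = {}
    for fp, w, x in zip(features, weights, instance):
        c = fp.predict(x)
        if c is not None:
            votes[c] = votes.get(c, 0.0) + w
    return max(votes, key=votes.get) if votes else None

f1, f2 = FeaturePartitions(0.3), FeaturePartitions(0.3)
for (a, b), cls in [((0.1, 0.8), "C1"), ((0.2, 0.9), "C1"), ((0.7, 0.1), "C2")]:
    f1.train(a, cls); f2.train(b, cls)
print(classify([f1, f2], [1.0, 1.0], (0.15, 0.85)))  # -> C1
```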
In order to illustrate the form of the concept descriptions, let us consider a domain with two features, \( f_1 \) and \( f_2 \). Assume that during the training phase, positive (+) instances with \( f_1 \) values in \([x_{11}, x_{12}]\) and \( f_2 \) values in \([x_{23}, x_{24}]\), and negative (−) instances with \( f_1 \) values in \([x_{13}, x_{14}]\) and \( f_2 \) values in \([x_{21}, x_{22}]\) are given. The resulting concept description is shown in Fig. 1. The corresponding concept description for the class (+) can be written as:

class +: \( (x_{11} \leq f_1 \leq x_{12} \land f_2 < x_{21}) \) or \( (x_{11} \leq f_1 \leq x_{12} \land f_2 > x_{22}) \) or \( (x_{23} \leq f_2 \leq x_{24} \land f_1 < x_{13}) \) or \( (x_{23} \leq f_2 \leq x_{24} \land f_1 > x_{14}) \)

The CFP does not assign any classification to an instance if it cannot determine the appropriate class value. This may result from having seen no instances for a given set of values, or from having a tie between two or more possible contradicting classifications. If the features have different weights, the ties are broken in favor of the class predicted by the features with the highest weights during the voting process. The CFP algorithm has been implemented and empirically evaluated in several standard domains and its performance has been compared with similar algorithms. In most of these domains the CFP algorithm attained higher accuracy than the other algorithms. The details of the CFP algorithm and the empirical comparisons can be found in [5].

3 Complexity of the CFP

This section presents an analysis of the CFP algorithm with respect to PAC-learning theory [6]. The intent of the PAC (Probably Approximately Correct) model is that successful learning of an unknown target concept should entail obtaining, with high probability, a good approximation of the concept. Since the classification in the CFP is based on a voting taken among the individual classifications of each attribute, it can learn a concept if each attribute, independently from the other attributes, can be used in the classification. We will define what we mean by “learn” in a way that preserves the spirit of the Valiant (1984) definition of learnability, but modifies it for the voting-based classification used in the CFP. We first determine the minimum number of training instances required to learn a given concept. Using this sample complexity we derive the training complexity of the CFP algorithm. In the following analysis we assume that all feature values are normalized to the interval $[0,1]$.

**Definition 1.** Let $X$ be a subset of $\mathbb{R}^n$ with a fixed probability distribution and let $d$ be a positive integer less than or equal to $n$. A subset $S$ of $X$ is an $\langle\varepsilon, \gamma, d\rangle$-net if, for all $x$ in $X$, with probability greater than $\gamma$, there exists an $s$ in $S$ such that $|s_f - x_f| < \varepsilon$ for at least $d$ values of $f$ ($1 \leq f \leq n$).

**Lemma 2.** Let $\varepsilon$, $\delta$, and $\gamma$ be fixed positive numbers less than one and let $d$ be a positive integer less than or equal to $n$. A random sample $S$ containing $m > (\lceil 1/\varepsilon\rceil/\gamma) \times (n \ln 2 + \ln(\lceil 1/\varepsilon\rceil/\delta))$ instances, drawn according to any fixed probability distribution from $[0, 1]^n$, will form an $\langle\varepsilon, \gamma, d\rangle$-net with confidence greater than $1 - \delta$.

**Proof.** We prove this lemma by partitioning the unit interval of each feature dimension into $k$ equal-length sub-intervals, each with length at most $\varepsilon$, such that all pairs of points\(^1\) in a sub-interval are within $\varepsilon$ distance of each other. The idea of the proof is to guarantee that, with high confidence, at least for $d$ dimensions out of $n$, each of the $k$ sub-intervals contains at least one of the $m$ instances, with sufficient probability.
Let $k = \lceil 1/\varepsilon\rceil$, let $S_{1f}$ be the set of sub-intervals with probability greater than or equal to $\gamma/k$, and let $S_{2f}$ be the set of remaining sub-intervals of a dimension $f$. The probability that an arbitrary point in $[0, 1]$ will not lie in a selected sub-interval of $S_{1f}$ is at most $(1 - \gamma/k)$. The probability that none of the $m$ sample points will lie in a selected sub-interval of $S_{1f}$ is at most $(1 - \gamma/k)^m$. Therefore, the probability that some sub-interval of $S_{1f}$ is excluded by all $m$ instances is at most $p = k(1 - \frac{\gamma}{k})^m$. The probability that, for more than $n - d$ dimensions, some sub-interval of $S_{1f}$ is excluded by all $m$ instances is at most $\sum_{i=n-d+1}^{n} C(n, i)p^i$.\(^2\) To make sure this probability is small, we force it to be less than $\delta$, that is, $\sum_{i=n-d+1}^{n} C(n, i)p^i < \delta$. Recall the binomial theorem: $(a+b)^n = \sum_{i=0}^{n} C(n, i)a^ib^{n-i}$. With $a = p$ and $b = 1$, $\sum_{i=0}^{n} C(n, i)p^i = (p + 1)^n$. Since $n$ is a positive integer, $(p + 1)^n - 1 = \sum_{i=1}^{n} C(n, i)p^i$, which is greater than $\sum_{i=n-d+1}^{n} C(n, i)p^i$, so it suffices to require $(p + 1)^n - 1 < \delta$. On the other hand, $(1 - \gamma/k)^m < e^{-\gamma m/k}$ and, since the value of $p$ is greater than zero and less than one, $2^n p > (p + 1)^n - 1$. If we solve the resulting requirement $2^n k e^{-\gamma m/k} < \delta$ for $m$, and substitute $\lceil 1/\varepsilon\rceil$ for $k$, it yields $m > (\lceil 1/\varepsilon\rceil/\gamma) \times (n \ln 2 + \ln(\lceil 1/\varepsilon\rceil/\delta))$. Consequently, with confidence greater than $1 - \delta$, for $d$ or more dimensions each sub-interval in $S_{1f}$ contains some sample point of an instance of $S$. \(\square\)

---

\(^1\) A point here represents the value of an instance for a feature in that dimension.
\(^2\) $C(n, r)$ represents the number of combinations of $r$ objects out of $n$.

**Theorem 3.** Let $\varepsilon$, $\delta$, and $\gamma$ be fixed positive numbers less than one and let $S$ be a sample set with $n$ features. If $S$ forms an $\langle\varepsilon, \gamma, \lceil\frac{n+1}{2}\rceil\rangle$-net then the CFP algorithm, with equal feature weights and generalization limit $D_f \geq 2\varepsilon$ for all features, will learn a concept $C$ for $S$ with confidence $1 - \delta$.

**Proof.** Since the CFP algorithm does not use a distance metric for classification, the idea of the proof is to ensure that the CFP can construct partitions of length $\varepsilon$: with high confidence, at least one of the $m$ sample instances lies in each sub-interval for at least $\lceil\frac{n+1}{2}\rceil$ of the features, with sufficient probability. The CFP algorithm employs a majority voting scheme in the classification. Hence, only $d = \lceil\frac{n+1}{2}\rceil$ of the features must agree on the classification. Following the proof of the lemma, if $S$ forms an $\langle\varepsilon, \gamma, d\rangle$-net, then it is guaranteed that each sub-interval contains at least one instance of $S$ with high confidence. The CFP algorithm will generalize two points into one partition if the distance between them is less than or equal to $D_f$. Therefore, if $D_f \geq 2\varepsilon$ then the points will be generalized into one partition, corresponding to a projection of the concept on that feature. \qed
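As a quick sanity check of the bound (the parameter values below are our own examples, not from the paper), the minimum sample size of Lemma 2 and the majority threshold of Theorem 3 can be evaluated directly:

```python
# A numeric check (example values are ours) of the Lemma 2 bound
#   m > (ceil(1/eps)/gamma) * (n*ln 2 + ln(ceil(1/eps)/delta))
# and of the majority threshold d = ceil((n+1)/2) from Theorem 3.
from math import ceil, log

def sample_bound(eps: float, gamma: float, delta: float, n: int) -> int:
    k = ceil(1 / eps)
    return ceil((k / gamma) * (n * log(2) + log(k / delta)))

n = 4                                    # number of features
print(sample_bound(0.1, 0.5, 0.05, n))   # minimum m for the net, ~162
print(ceil((n + 1) / 2))                 # d: features that must agree -> 3
```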
**Theorem 4.** Let $\varepsilon$, $\delta$, and $\gamma$ be fixed positive numbers less than one. If a random sample set $S$ with $n$ features forms an $\langle\varepsilon, \gamma, \lceil\frac{n+1}{2}\rceil\rangle$-net with confidence greater than $1 - \delta$, then the CFP with $D_f \geq 2\varepsilon$ constructs at most $n\lceil 1/\varepsilon\rceil$ partitions.

**Proof.** Since $S$ is an $\langle\varepsilon, \gamma, \lceil\frac{n+1}{2}\rceil\rangle$-net with confidence greater than $1 - \delta$, each feature line is divided into sub-intervals of length $\varepsilon$, each of which contains at least one sample point, and the CFP algorithm constructs at most one partition for each sub-interval (due to $D_f \geq 2\varepsilon$). Thus, for $n$ features, the CFP constructs at most $n\lceil 1/\varepsilon\rceil$ partitions. \qed

**Theorem 5.** Let $\varepsilon$, $\delta$, and $\gamma$ be fixed positive numbers less than one. If a random sample $S$ is an $\langle\varepsilon, \gamma, \lceil\frac{n+1}{2}\rceil\rangle$-net with confidence greater than $1 - \delta$, then the classification complexity of the CFP with $D_f \geq 2\varepsilon$ is $O(n\log(1/\varepsilon))$ and the training complexity for $m$ sample instances is $O(mn\log(1/\varepsilon))$.

**Proof.** The proof of Theorem 4 shows that the CFP constructs at most $\lceil 1/\varepsilon\rceil$ partitions for each feature. In the CFP algorithm the classification is composed of a search and a voting step. The complexity of the search operation is $O(\log(1/\varepsilon))$ for each feature. Since the complexity of voting is $O(n)$, the classification complexity of the CFP algorithm is $O(n\log(1/\varepsilon))$ for $n$ features. Consequently, with $m$ training instances, the training complexity of the CFP algorithm is $O(mn\log(1/\varepsilon))$. \qed

The classification process in exemplar-based learning algorithms which use some form of the nearest neighbor algorithm involves computing the Euclidean distance (or similarity) of the instance to each stored exemplar in each dimension. Therefore, if there are $M$ exemplars stored in the memory, and $n$ features are used, then the complexity of the classification is $O(nM)$. On the other hand, since the partitions are naturally sorted for each feature dimension, the classification process in the CFP algorithm is only $O(n \log M)$, which significantly reduces the classification complexity.

4 Conclusion

The CFP algorithm is applicable to concepts where each feature, independent of the others, can be used to classify the concept. This approach is a variant of algorithms that learn by projecting into one feature dimension at a time. The novelty of CFP is that it retains a feature-by-feature representation and uses voting to categorize. Algorithms that learn by projecting into one dimension at a time are limited in their ability to find complex concepts. The analysis of the CFP shows that it requires a small number of examples and a small amount of memory to learn a given concept, compared to many other similar algorithms. Another outcome of the analysis is that the CFP also has a low training complexity. Real-world data sets usually contain missing attribute values. Most learning systems overcome this problem by either filling in missing attribute values, or looking at the probability distribution of values of attributes. In contrast, the CFP solves this problem very naturally. Since the CFP treats each attribute value separately, in the case of an unknown attribute value it simply leaves the partitioning of that feature intact. The CFP uses feature weights to cope with irrelevant attributes.
Introducing feature weights protects the algorithm’s performance when attributes have different relevances. In the CFP the feature weights are dynamically adjusted according to the global $\Delta$ adjustment rate, which is an important parameter for the predictive accuracy of the algorithm. Another important component of the CFP is the generalization limit $D_f$ for each attribute, which controls the generalization process. $\Delta$ and $D_f$ are domain-dependent parameters of the CFP, and their selection affects the performance of the algorithm. Determining the best values for these parameters is an optimization problem for a given domain. A version of CFP, called GA-CFP, has been implemented to learn these parameters using genetic algorithms [2].
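The closing pointer to GA-CFP [2] suggests how such a parameter search might look. The following toy evolutionary loop is our own sketch, not GA-CFP itself, and the fitness function is a stand-in that should be replaced by a cross-validated run of CFP.

```python
# A toy sketch (ours, not GA-CFP from [2]) of searching for the
# domain-dependent parameters Delta and D_f. `accuracy` is an assumed,
# user-supplied fitness function (e.g. cross-validated CFP accuracy).
import random

def accuracy(delta: float, d_f: float) -> float:
    # Placeholder fitness with an arbitrary optimum at (0.1, 0.2).
    return -((delta - 0.1) ** 2 + (d_f - 0.2) ** 2)

def tune(generations: int = 50, pop_size: int = 10, sigma: float = 0.05):
    pop = [(random.uniform(0, 1), random.uniform(0, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: accuracy(*p), reverse=True)
        parents = pop[: pop_size // 2]
        children = [(max(0.0, d + random.gauss(0, sigma)),
                     max(0.0, g + random.gauss(0, sigma)))
                    for d, g in parents]          # mutate the best half
        pop = parents + children
    return max(pop, key=lambda p: accuracy(*p))

print(tune())  # best (Delta, D_f) found for the toy fitness
```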
{"Source-Url": "http://www.cs.bilkent.edu.tr/tech-reports/1993/BU-CEIS-9308.pdf", "len_cl100k_base": 4404, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 11058, "total-output-tokens": 5053, "length": "2e12", "weborganizer": {"__label__adult": 0.0003418922424316406, "__label__art_design": 0.0005068778991699219, "__label__crime_law": 0.0006089210510253906, "__label__education_jobs": 0.0045013427734375, "__label__entertainment": 0.00011497735977172852, "__label__fashion_beauty": 0.0002777576446533203, "__label__finance_business": 0.0005397796630859375, "__label__food_dining": 0.00048422813415527344, "__label__games": 0.0008416175842285156, "__label__hardware": 0.001735687255859375, "__label__health": 0.0013666152954101562, "__label__history": 0.00036787986755371094, "__label__home_hobbies": 0.00022983551025390625, "__label__industrial": 0.0008535385131835938, "__label__literature": 0.00042510032653808594, "__label__politics": 0.0003964900970458984, "__label__religion": 0.0005102157592773438, "__label__science_tech": 0.47412109375, "__label__social_life": 0.000164031982421875, "__label__software": 0.0194854736328125, "__label__software_dev": 0.490966796875, "__label__sports_fitness": 0.0003938674926757813, "__label__transportation": 0.00054931640625, "__label__travel": 0.0002467632293701172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18561, 0.02506]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18561, 0.81226]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18561, 0.88334]], "google_gemma-3-12b-it_contains_pii": [[0, 2630, false], [2630, 5998, null], [5998, 8179, null], [8179, 11931, null], [11931, 15722, null], [15722, 18561, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2630, true], [2630, 5998, null], [5998, 8179, null], [8179, 11931, null], [11931, 15722, null], [15722, 18561, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18561, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18561, null]], "pdf_page_numbers": [[0, 2630, 1], [2630, 5998, 2], [5998, 8179, 3], [8179, 11931, 4], [11931, 15722, 5], [15722, 18561, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18561, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
9f557da04feebdd339360c2bb42f78a753e90748
Tutorial on Modeling VAT Rules Using OWL-DL

Morten Ib Nielsen, Jakob Grue Simonsen and Ken Friis Larsen
Department of Computer Science, University of Copenhagen
Email: {mortenib|simonsen|kfllarsen}@diku.dk
August 28, 2007

Abstract

This paper reports on work in progress. We present a methodology for constructing an OWL-DL model of a subset of Danish VAT rules. It is our intention that domain experts without training in formal modeling or computer science should be able to create and maintain the model using our methodology. In an ERP setting such a model could reduce the Total Cost of Ownership (TCO) and increase the quality of the system. We have selected OWL-DL because we believe that description logic is suited for modeling VAT rules, due to the decidability of important inference problems that are key to the way we plan to use the model, and because OWL-DL is relatively intuitive to use.

1 Introduction

Imagine an ERP system where domain experts can create and implement changes in e.g. VAT rules without the help of programmers. The benefits would be shorter development time and fewer mistakes due to misinterpretation of specifications, which lead to reduced TCO and increased quality of the software. On a coarse-grained scale such a system consists of three parts: a model of the rules, a tool to edit the model and the core ERP system using the model. In this paper we focus on the first part: the model. A priori, two requirements exist. First, the modeling language must be strong enough to express the rules in question, and second, it must be easy to use without training in formal modeling or computer science. In a more general setting the model can be used as a VAT knowledge system which external programs can query through an interface. In the long run we envision that authorities such as SKAT (Danish tax administration) can provide online access to the model, e.g. using web services, such that applications always use the newest version of the model. In this paper we describe a methodology we have used to develop a model of a subset of Danish VAT rules using the general purpose Web Ontology Language (OWL) editor Protégé-OWL\textsuperscript{1}, and we report on our experiences in doing so. We selected a subset of Danish VAT rules consisting of flat VAT (25\%) plus a set of exceptions where goods and services are free of VAT, chosen because they seem representative. Further, the rules are accessible to us by way of an official guideline by the Danish tax administration. Our study is focusing on the feasibility of using OWL to model VAT rules and not on the usability of the Protégé-OWL tool itself. By feasibility we mean how easy or difficult it is (for a human) to express and understand VAT rules in OWL; in particular this does not cover issues such as modularization. The methodology presented here is inspired by the article [1] together with our own experience. Readers of this guide are assumed to have user experience of Protégé-OWL corresponding to [2], but not of computer science nor of modeling in general.

\textsuperscript{1}http://protege.stanford.edu/overview/protege-owl.html

1.1 Motivation

One of the overall goals of the strategic research project 3gERP is to reduce the TCO of Enterprise Resource Planning (ERP) systems.
We believe that a VAT model helps to this end in two ways. First, we envision that domain experts create and update the model, thus eliminating a layer of interpretation (the programmer) where errors can be introduced. Second, a VAT model can change the handling of VAT from being a customization task into being a configuration task, meaning that no code needs to be changed when the model is updated. VAT and legal rules in general deal with frequent transactions between legal entities. Transactions are typically triggered when certain conditions are fulfilled, and therefore dynamic checks on these conditions are needed. The idea is to use the model to automatically infer what actions should be taken based on the conditions. In the case of VAT rules we can ask the model whether a delivery is subject to VAT or not based on the information we know about the delivery. The answer from the model will be Yes, No or Maybe\(^2\) and can be used to trigger an appropriate transaction. In a broader perspective the model is supposed to work as a VAT knowledge system that, given a context and a question, can tell other systems what to do, e.g. guide accounting systems and, if required, indicate that authorities should be contacted, etc.

1.2 Roadmap

The remainder of this paper is structured as follows. In Section 2 we give a short account of description logic and OWL. In Sections 3, 4 and 5 we present our methodology by giving examples. Finally we outline future work in Section 6 and we conclude in Section 7.

2 Description Logic and OWL

In this section we give a short introduction to description logic (DL) and OWL. This introduction can be skipped if you are already familiar with the concepts. Description logics are knowledge representation languages, formally well understood, that can be used to structure terminological knowledge in knowledge systems. A knowledge system typically consists of a knowledge base together with a reasoning service. The knowledge base is often split into a set of concept axioms (the TBox), a set of assertions (the ABox) and a role hierarchy. These constitute the *explicit* knowledge in the knowledge system. The reasoning service is a program that can check the consistency of the knowledge base and make implicit knowledge explicit, e.g. decide equivalence of concepts. Since the reasoning service is a pluggable component, knowledge systems separate the technical task of reasoning from the problem of constructing the knowledge base.

\(^2\)In the case where insufficient information is provided in order to answer the question.

2.1 OWL

OWL, which is short for Web Ontology Language, is an ontology language designed to be compatible with the World Wide Web and the Semantic Web. The most important abstraction in OWL is concept axioms, which are called classes. Each class has a list of necessary conditions and zero or more equivalent lists of necessary and sufficient conditions [2]. A list of necessary conditions is a list of conditions that every member of the class must satisfy. In the same way, a list of necessary and sufficient conditions is a list of conditions that must be satisfied by every member of the class and, if satisfied, guarantees membership in the class. OWL is based on XML, RDF and RDF-S and can be used to represent information in a way that is more accessible to applications than traditional web pages. In addition OWL has a formal semantics, which enables logic reasoning.
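As a concrete (and purely illustrative) instance of the TBox/ABox split, consider the following DL fragment in the spirit of the VAT domain; all names here are placeholders of ours, not part of the model developed in Section 3:

\begin{align*}
\text{TBox:} \quad & \mathit{Eksport} \sqsubseteq \mathit{MomsFri} \\
\text{ABox:} \quad & \mathit{Eksport}(\mathit{levering}_1)
\end{align*}

From these two statements a reasoning service makes the implicit knowledge explicit by deriving $\mathit{MomsFri}(\mathit{levering}_1)$; a query whose answer is entailed neither positively nor negatively corresponds to the Maybe case described in Section 1.1.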
3 VAT Exemption 1: Sales outside EU

Our methodology is aimed at modeling VAT rules as described in guidelines instead of the raw law text itself. This choice was made because guidelines are more accessible to us, and because these are the rules that small companies adhere to in practice. Further, the investigation of the feasibility of using OWL to model VAT rules concerns the ease with which rules can be formalized and not so much where the rules are extracted from\(^3\). In what follows we refer to the guideline as the legal source. In order to ease reading we have used the word concept only when we speak about the legal source. The corresponding concept in the model (OWL) is called a class. A concept in the legal source is modeled as one or more classes in the model. Here we present the steps we took in order to make our model of Danish VAT rules.

\(^3\)Since we have used the official guidelines by SKAT (the Danish tax administration) we believe that the content of the guidelines is in accordance with the law.

3.1 Pre-modeling

1. Download Protégé-OWL from http://protege.stanford.edu/download/release/full/ and install it. Make sure you can start Protégé in OWL mode (logic view). When started, if you select the Class tab it should look like Figure 1.

2. Download [2] and read it. This is important because many of the constructions we use are explained therein.

Figure 1: Protégé-OWL class tab, logic view.

3.2 Modeling

First you must decide which legal source(s) you want to model. In our case we used the official guideline *Moms - fakturering, regnskab mv, E nr. 27, Version 5.2 digital, 19. januar 2005.*

### 3.2.1 Overall framework

Modeling should start with a read-through of the legal source. Based on this, general (to be refined later) classes such as *Location*, *Goods*, *Services* and *FreeOfVAT*, together with attributes such as *hasDeliveryType* and *hasSalesPrice*, can be created as subclasses of the built-in top-level class *owl:Thing*. An attribute can usually take on at most a finite number of values. In that case we use value partitions to model them, as described in [2][p. 73-76]. If the domain is not finite we use data type properties instead\(^4\). Deciding on the overall framework helps to structure the capturing of rules in a homogeneous way and enables working in parallel (which can be needed if the legal source is large). After our read-through of the legal source we arrived at the overall framework in Figure 2.

\(^4\)An exception is the domain of truth values, which is built in as a data type.

Figure 2: Overall framework.

**Naming Convention.** All classes, properties, individuals etc. should be given names picked from or inspired by the legal source. All names should be in the same language as the legal source (in our case Danish). Using the naming convention supported by Protégé-OWL, class and individual names should be written in Pascal Notation, e.g. *InternationalOrganization*, not *internationalOrganization* or *International_Organization*, while property names are written in Camel Hump Notation, e.g. *someProperty*. Typically a property is used to assign an attribute to a class. In this case we prefix the name of the property with a verb describing the kind of relation the class has along that property, e.g. *hasNumberOfSides* or *isFragile*.
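As an illustration of the overall framework step, the following sketch, again assuming the owlready2 library and a hypothetical IRI, creates the general classes named in the text as subclasses of owl:Thing; the tutorial itself does this in the Protégé-OWL class editor.

```python
# A sketch (not from the paper) of the overall framework of Section 3.2.1.
# The paper recommends Danish names; the English names from the text are
# used here. The IRI and file name are hypothetical.
from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.org/vat-framework.owl")

with onto:
    # General classes from the read-through, refined later.
    class Location(Thing): pass
    class Goods(Thing): pass
    class Services(Thing): pass
    class FreeOfVAT(Thing): pass

    # Container class for value partitions, following the convention of [2].
    class ValuePartitions(Thing): pass

onto.save(file="vat-framework.owl")  # owlready2 writes RDF/XML by default
```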
### 3.2.2 Rule modeling - step I

Having modeled the overall framework, it is time to go through the legal source one section at a time, looking for rules that should be modeled. Here we give an elaborate description of how to model a single rule from the legal source, starting from the overall framework in Figure 2. In Sections 4 and 5 we give a brief description of how to model other rules. Together, the modeling of these rules covers all the constructions we have used in our VAT model. Since our legal source is in Danish we present the rules in their original Danish phrasing together with a translation into English.

Table 1: Extract from the legal source and its translation into English.

[4][p. 9] And translated into English: *Sales outside EU (3rd countries). No VAT should be added to goods delivered to destinations outside the European Union, or to the Faroe Islands or Greenland. This fact ordinarily also applies to services, but VAT should be added to certain services.* Translated from [4][p. 9]

Table 2: Necessary & sufficient conditions for application of the rule in Table 1.

- The rule concerns sales.
- The rule concerns both goods and services.
- The place of delivery must be outside the European Union, or the Faroe Islands or Greenland.

Now let us consider the rule shown in Table 1. Since our model is only a prototype we make a slight simplification and assume that the rule also applies to *all* services. With this simplification we can identify the necessary and sufficient conditions for application of the rule. These are shown in Table 2.

In order to model the necessary and sufficient conditions in Table 2 we must add some attributes to `VarerOgYdelser`. The first and second conditions in Table 2 tell us that we must be able to model that goods and services are sold\(^5\). We do that by adding an attribute to the class `VarerOgYdelser` (translates into `GoodsAndServices`), which already exists in our overall framework. Attributes are modeled using functional properties. In accordance with our naming convention we select the name `harLeveranceType` (translates into `hasDeliveryType`). Since there is a finite number of delivery types we model this attribute as a value partition, i.e. an enumeration. Value partitions can be created using a built-in wizard\(^6\). Just as in [2] we store value partitions as subclasses of the class `ValuePartitions`. The reason plain enumerations are not used is that they cannot be sub-partitioned. Using value partitions we retain the possibility of further refining the concepts the value partitions model.

\(^5\)Instead of being sold, goods can also be used as e.g. a trade sample. See [4][p. 8-9] for other examples.

\(^6\)Menu ➤ Tools ➤ Patterns ➤ Value Partition...

**Remark.** Technically, enumerations are constructed by defining a class in terms of a finite set of individuals plus a functional property that has this class as its range. Since individuals are atoms they cannot be subdivided. On the other hand, a value partition is defined using a functional property having as its range a class defined as the union of its subclasses, all of which are distinct. These subclasses can (because they are classes) be partitioned into more subclasses if needed.
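The remark can be made concrete with a small owlready2 sketch of the value-partition pattern, producing axioms equivalent to those generated by the Protégé-OWL wizard. `LeveranceType`, `Salg` and `harLeveranceType` follow the text; the second value `Udlejning` and the IRI are hypothetical.

```python
# A sketch (not from the paper) of the value-partition pattern from
# Section 3.2.2: a disjoint union of subclasses plus a functional property.
from owlready2 import (Thing, ObjectProperty, FunctionalProperty,
                       AllDisjoint, get_ontology)

onto = get_ontology("http://example.org/vat-partition.owl")

with onto:
    class ValuePartitions(Thing): pass
    class VarerOgYdelser(Thing): pass               # GoodsAndServices

    class LeveranceType(ValuePartitions): pass      # DeliveryType
    class Salg(LeveranceType): pass                 # Sale
    class Udlejning(LeveranceType): pass            # hypothetical second value

    # The partition class is the disjoint union of its value subclasses;
    # because the values are classes, they remain sub-partitionable
    # (unlike the individuals of a plain enumeration).
    AllDisjoint([Salg, Udlejning])
    LeveranceType.equivalent_to.append(Salg | Udlejning)

    # The attribute itself: a functional property ranging over the partition.
    class harLeveranceType(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]
        range = [LeveranceType]

    # Necessary condition on VarerOgYdelser (Section 3.2.2):
    # harLeveranceType some LeveranceType
    VarerOgYdelser.is_a.append(harLeveranceType.some(LeveranceType))
```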
Having created the value partition (the class `LeveranceType` with the functional property `harLeveranceType`), which can have `Salg` (translates into `Sale`) as a value, we need to add it as an attribute to the class `VarerOgYdelser`. This is done by adding to the *necessary conditions* an existential quantification over the corresponding property, having the value partition (or data type, in the case of a data type attribute) as its range. Thus we add `harLeveranceType some LeveranceType` to `VarerOgYdelser`.

The third condition tells us that we must be able to model that goods and services have a place of delivery. A read-through of the legal source tells us that only three places are needed, namely *Denmark*, *EU* and *non-EU*. Thus this attribute, which we name `harLeveranceSted` (translates into `hasPlaceOfDelivery`), must be modeled as a value partition. Having modeled these attributes, the class `VarerOgYdelser` looks as shown in Figure 3.

Figure 3: Class and property view after adding attributes.

### 3.2.3 Rule modeling - step II

Now we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of `Momsfritaget` (translates into `FreeOfVAT`). Following our naming convention we name the class `MomsfritagetSalgAfVarerOgYdelserTilIkke-EU` (translates into `VATFreeSalesOfGoodsAndServicesInNon-EU`). Then we add a textual description of the rule, and a reference to where in the legal source the rule stems from, to the `rdfs:comment` field. Next we must specify *necessary and sufficient* conditions on membership in `MomsfritagetSalgAfVarerOgYdelserTilIkke-EU`. It is important to remember that if a class has two sets of necessary and sufficient conditions then they must imply each other, see [2][p. 98]. Based on the necessary and sufficient conditions captured in Table 2 we add the following necessary and sufficient conditions to `MomsfritagetSalgAfVarerOgYdelserTilIkke-EU`:

- `VarerOgYdelser`
- `harLeveranceSted some Ikke-EU`
- `harLeveranceType some Salg`

The result is shown in Figure 4.

Figure 4: Asserted Conditions of our model of the legal rule in Table 1.
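For readers who prefer code to screenshots, here is a sketch, again in owlready2 and not from the paper, of how the rule class of Section 3.2.3 might look. The class and property names follow the paper; the IRI is hypothetical, and the hyphen in `Ikke-EU` is dropped because it is not legal in a Python identifier.

```python
# A sketch (not from the paper) of the rule class of Section 3.2.3.
from owlready2 import Thing, ObjectProperty, FunctionalProperty, get_ontology

onto = get_ontology("http://example.org/vat-rule1.owl")  # hypothetical IRI

with onto:
    class VarerOgYdelser(Thing): pass                 # GoodsAndServices
    class LeveranceType(Thing): pass                  # DeliveryType
    class Salg(LeveranceType): pass                   # Sale
    class LeveranceSted(Thing): pass                  # PlaceOfDelivery
    class IkkeEU(LeveranceSted): pass                 # Ikke-EU, hyphen dropped

    class harLeveranceType(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [LeveranceType]
    class harLeveranceSted(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [LeveranceSted]

    class Momsfritaget(Thing): pass                   # FreeOfVAT

    # The necessary-and-sufficient conditions from Table 2 become an
    # equivalent-class axiom, so a reasoner can classify deliveries.
    class MomsfritagetSalgAfVarerOgYdelserTilIkkeEU(Momsfritaget):
        equivalent_to = [VarerOgYdelser
                         & harLeveranceSted.some(IkkeEU)
                         & harLeveranceType.some(Salg)]

# The rdfs:comment documenting the legal source, as the text recommends:
onto.MomsfritagetSalgAfVarerOgYdelserTilIkkeEU.comment.append(
    "Sales outside EU (3rd countries); see [4][p. 9].")
```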
### 4 VAT Exemption 2: Sales to Embassies

In this section and onwards we will not mention when to add references to the legal source in the `rdfs:comment` fields of classes and properties. The rule of thumb is that this should always be done.

Now let us consider the rule in Table 3. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 4.

Table 3: Extract from the legal source and its translation into English.

*Salg til ambassader. Du skal ikke beregne moms af varer og transportydelser, som du leverer til ambassader og internationale organisationer i andre EU-lande.* [4][p. 9]

And translated into English: *Sales to embassies. VAT should not be added to goods and transport services delivered to embassies and international organizations in countries within the European Union.* Translated from [4][p. 9]

Table 4: Necessary & sufficient conditions for application of the rule in Table 3.

- The rule concerns sales.
- The rule concerns goods and transport services.
- The place of delivery must be in the European Union.
- The buyer must be an embassy or an international organization.

4.1 Rule modeling - step I

We are already able to model that the rule concerns sale and that the place of delivery must be in the EU. We cannot model the specific service transportation yet. Therefore we must add it to our model. Since it is a service, it should be modeled as a subclass of *Services*. We name the class modeling the service transportation `Transport` (translates into `Transportation`). Now we can model that something belongs to the set of goods and transport services by requiring membership of `Varer ⊔ Transport`. Finally, we must be able to model that the buyer is an embassy or an international organization. Since there are only finitely many different kinds of buyers we model this as a value partition, and because this attribute applies to both `Varer` and `Transport` we add it to their most specific common super-class, which is `VarerOgYdelser`. We name this attribute `harKøberType` (translates into `hasKindOfBuyer`). After having done all this, the model looks as shown in Figure 5.

Figure 5: The model after adding classes and attributes as described in Section 4.1.

4.2 Rule modeling - step II

Having added all the necessary classes and attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of `Momsfritaget`. Following our naming convention we name the class `MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU` (translates into `VATFreeSalesToEmbassiesAndInternationalOrganizationsInEU`). Based on the necessary and sufficient conditions captured in Table 4 we add the following necessary and sufficient conditions to `MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU`:

- `harLeveranceType some Salg`
- `Varer ⊔ Transport`
- `harLeveranceSted some EU`
- `harKøberType some AmbassadeOgPersonaleMedDiplomatiskeRettigheder`

The result is shown in Figure 6.

Figure 6: Asserted Conditions of our model of the legal rule in Table 3.
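A sketch of the same rule in owlready2 shows how the union `Varer ⊔ Transport` can be written with the `|` operator (owl:unionOf); as before this is an illustration under assumed names and an assumed IRI, not the paper's own artifact.

```python
# A sketch (not from the paper) of the union construction of Section 4.
from owlready2 import Thing, ObjectProperty, FunctionalProperty, get_ontology

onto = get_ontology("http://example.org/vat-rule2.owl")  # hypothetical IRI

with onto:
    class VarerOgYdelser(Thing): pass            # GoodsAndServices
    class Varer(VarerOgYdelser): pass            # Goods
    class Ydelser(VarerOgYdelser): pass          # Services
    class Transport(Ydelser): pass               # Transportation

    class LeveranceType(Thing): pass
    class Salg(LeveranceType): pass
    class LeveranceSted(Thing): pass
    class EU(LeveranceSted): pass
    class KøberType(Thing): pass                 # hypothetical partition name
    class AmbassadeOgPersonaleMedDiplomatiskeRettigheder(KøberType): pass

    class harLeveranceType(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [LeveranceType]
    class harLeveranceSted(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [LeveranceSted]
    class harKøberType(ObjectProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [KøberType]

    class Momsfritaget(Thing): pass

    # "Varer ⊔ Transport" restricts the rule to goods and transport services.
    class MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU(Momsfritaget):
        equivalent_to = [(Varer | Transport)
                         & harLeveranceType.some(Salg)
                         & harLeveranceSted.some(EU)
                         & harKøberType.some(AmbassadeOgPersonaleMedDiplomatiskeRettigheder)]
```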
### 5 VAT Exemption 3: Sales in other EU countries

In this section we consider one final rule, the rule in Table 5. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 6.

Table 5: Extract from the legal source and its translation into English.

*Salg til andre EU-lande. Du skal ikke beregne dansk moms, når du sælger varer til momsregistrerede virksomheder i andre EU-lande. Du skal derfor sørge for at få virksomhedens momsnummer.* [4][p. 8]

And translated into English: *Sales in other EU countries. No VAT should be added to goods delivered to companies in other EU countries, provided that the companies are registered for VAT. In this case you must acquire the VAT registration number of the company.* Translated from [4][p. 8]

Table 6: Necessary & sufficient conditions for application of the rule in Table 5.

- The rule concerns sales.
- The rule concerns goods.
- The place of delivery must be in the European Union.
- The buyer must be registered for VAT.
- You must acquire the VAT registration number of the company.

5.1 Rule modeling - step I

We are already able to model that the rule concerns sale of goods delivered inside the European Union. The new thing is that we must be able to indicate whether a buyer is registered for VAT and, if so, we must register the buyer's VAT registration number. We use a functional data type property named `erKøberMomsregistreret` (translates into `isTheBuyerRegisteredForVAT`) with the data type `xsd:boolean` as its range to model whether the buyer is registered for VAT. Similarly, we use a functional data type property named `erKøbersMomsnummer` (translates into `isBuyersVATRegistrationNumber`) with the data type `xsd:string` as its range to register the buyer's VAT registration number if he has one.

5.2 Rule modeling - step II

Having added the necessary attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of `Momsfritaget`. Following our naming convention we name the class `MomsfritagetSalgTilAndreEU-lande` (translates into `VATFreeSalesToOtherEUCountries`). Based on the necessary and sufficient conditions captured in Table 6 we add the following necessary and sufficient conditions to `MomsfritagetSalgTilAndreEU-lande`:

- `harLeveranceType some Salg`
- `Varer`
- `harLeveranceSted some EU`
- `erKøberMomsregistreret has true`

We note that the obligation to register the buyer's VAT registration number is modeled indirectly, see Section 5.1. The result is shown in Figure 8.
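The boolean attribute and the `has true` condition translate into a data property and a hasValue restriction. The sketch below, again in owlready2 with a hypothetical IRI and not from the paper, shows only the new constructions; the `Salg` and `EU` conjuncts from Table 6 are elided for brevity, and the hyphen in the class name is dropped for Python.

```python
# A sketch (not from the paper) of the data type properties and hasValue
# restriction of Sections 5.1 and 5.2.
from owlready2 import Thing, DataProperty, FunctionalProperty, get_ontology

onto = get_ontology("http://example.org/vat-rule3.owl")  # hypothetical IRI

with onto:
    class VarerOgYdelser(Thing): pass
    class Varer(VarerOgYdelser): pass

    # Functional data type properties; owlready2 maps the Python types
    # bool and str to xsd:boolean and xsd:string.
    class erKøberMomsregistreret(DataProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [bool]
    class erKøbersMomsnummer(DataProperty, FunctionalProperty):
        domain = [VarerOgYdelser]; range = [str]

    class Momsfritaget(Thing): pass

    # "erKøberMomsregistreret has true" becomes a hasValue restriction;
    # the Salg and EU conjuncts from Table 6 are elided here for brevity.
    class MomsfritagetSalgTilAndreEULande(Momsfritaget):
        equivalent_to = [Varer & erKøberMomsregistreret.value(True)]
```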
6 Future work

Since this is work in progress there are a lot of areas we need to address. In the near future we plan to integrate our model in a prototype ERP system as described in the introduction. This opens the possibility of modeling the parts of the Danish VAT legislation concerning depreciation and VAT reporting (since they are intertwined and contain a lot of technical requirements on the financial reports). We also need to model other countries' VAT rules in order to confirm that Danish VAT rules are indeed representative with respect to the constructions that are needed in the modeling language. Based on this we need to refine our overall framework such that it captures the common structure, and we need to identify what kinds of questions a model must be able to answer. The synthesized knowledge from modeling the VAT rules of other countries should also result in a more detailed analysis of what we can and cannot model. Based on all this we should design a minimal description logic extended with the needed functionality identified in the analysis just mentioned, such as predicates like \( x < 100 \) which are needed in some rules. We should also provide a reasoner for the logic, together with an editor, such that the above process can be repeated.

Finally, in order to compare our OWL model with a different approach, we want to make a model of the rules we have already formalized in OWL using Datalog, which is the de facto standard language for expressing rules in deductive databases. It would also be interesting to try a hybrid solution, e.g. OWL plus a rule language like SWRL. This work is independent of the tasks mentioned above and can be carried out in parallel.

7 Conclusion

We have shown how to model a subset of Danish VAT rules concerning exemption from VAT using Protégé-OWL. First we created an overall framework for the VAT model, with the property that legal rules and the concepts they involve can be modeled as subclasses of existing classes in the framework. This helps to ensure that related concepts are modeled in the same way and that a single concept is not modeled twice. The second step was an iterative process consisting of two steps repeated for each rule. The first step is to extend the model such that the rule in question can be modeled. This is done by modeling concepts from the legal source as classes in the model and by adding attributes to the necessary conditions of such classes. The second step is to model the rule itself. This is done by adding specific requirements for application of the rule to the necessary and sufficient conditions of the class modeling the rule.

The step-by-step iterative modeling has been working fine in practice, and an extension to cover several different VAT and duty rates does not seem to be problematic as long as they do not require us to model restrictions such as \( x < 100 \), which is not supported directly in OWL\(^7\). Apart from modeling inequalities we have not had modeling problems. One problem, though, is that reasoning about individuals in OWL models is not supported very well. Therefore we have tried to avoid the use of individuals wherever possible (using value partitions).

\(^7\)Whether this is a weakness of OWL, or just us trying to use OWL for something it was not designed to do, is unclear.

References
Automatic generation of computer programs*

by NOAH S. PRYWES
University of Pennsylvania
Philadelphia, Pennsylvania

ABSTRACT

This is an introduction and summary of research on Automatic Program Generation conducted at the Moore School, University of Pennsylvania. This research culminated in the development of a Module Description Language (MODEL) designed for use by management, business, or accounting specialists who are not required to have computer training. MODEL statements describe input, output, and the various formulae associated with a system specification. No processing or sequencing information is required from the user. A MODEL Processor analyzes the specifications and interacts with the user in resolving inconsistencies, ambiguities and incompleteness. A program for performing the required functions is then generated based on the "complete" specification.

INTRODUCTION

A major aspiration of computer programming language designers has been to make programming so easy that large classes of educated people who have not been exposed to computer training are able to program. For instance, in 1960 the CODASYL committee designed a programming language named COBOL (Common Business Oriented Language). Despite this and many other attempts to reduce the complexity of the programming process, it has continued to require considerable skill and specialization. Currently there are a large number of Application Programmers who handle such tasks. The standard procedure is for a "user," whether manager, accountant or business specialist, to communicate requirements to an Application Programmer who, in turn, composes a program to fit these requirements.

The research described in this paper is a continuation of efforts to make feasible the preparation of programs by users (interacting with an automatic program generator) without recourse to middle-men Application Programmers. In view of the historical elusiveness of this goal, it is approached with considerable trepidation.

Figure 1 illustrates the overall concept of interactions between the automatic computer program generator and the user. (The components in the diagram are referred to by the indicated numbers.) The user (1) is viewed as an individual who is proficient in the immediate field in which the programs are to be applied. Namely, he is viewed as being in a management or a technical capacity, such as in accounting, production control, etc. He must not only have had professional training in this specialized area of activity, but also a good mathematical background. The user is not required, however, to have had any specialized computer training, but he must understand that when a function is properly specified it can then be executed by a computer.

The user composes statements (3) (via a terminal and a Text Editor (2)) in a language named MODEL (MOdule DEscription Language). Each statement is considered an integral unit and contains a "chunk" of information. A statement may describe an item of data (data description) or an algebraic or logical relation among items of data (assertion). References may be made to statements previously entered in a data base (4) by the user, by others who have specified requirements for similar applications, or by others with whom the user wants to share data. The MODEL Processor (5) analyzes the totality of statements transmitted to it and, as appropriate, solicits from the user additions or changes to these statements.
It provides the user with listings, cross-references, and requests for additions or changes necessary to resolve incompleteness, ambiguities or inconsistencies (6). Finally, the analysis by the Processor leads to certain logical implications which are also communicated to the user to enable him to clarify and self-check his specifications of the requirements. When all outstanding problems have been resolved in this dialogue, the Processor produces a program in a computer language. An Optimizing Compiler (7) produces the object code (for the application program) which is loaded into a digital computer (8) for execution.

The system description in this report is based on an operational MODEL II system, which incorporates corrections and improvements, consisting of new language, sequencing and iteration analysis, over a previously described language and processor. Work is under way to reprogram the processor to reflect additional improvements. The system is programmed in PL/I and produces object programs in PL/I. In the interest of brevity, the paper describes the use of the MODEL language in the second section, and the analysis leading to solicitation of additional user information in the third section. The automatic program (and flowchart) generation phases of the MODEL system are omitted and described in the referenced reports. A survey of related research can be found in References 4 and 5.

**Distinctive characteristics of MODEL**

The MODEL language described in this paper incorporates several new characteristics not existing in previous programming languages, which were adopted because of their distinct advantages over past practices. These characteristics are explained below in the context of the system illustrated in Figure 1, where a user communicates with a processor that generates programs. The new labor-saving characteristics that have been incorporated in this approach are as follows:

1. Non-proceduralness—means that the user need not (and cannot) specify any order of evaluation or memory assignments. The "control logic" parts of the ultimately produced program, which are based on such procedural information, are to be deduced by the MODEL Processor. This feature is considered important, not only because it saves programming labor but also because it reduces the necessary computer training of the user. For instance, the user does not need to be familiar with such basic concepts as flowcharting or memory.

2. Independence of statements—means that the user can concentrate on composing a single statement at a time. It is neither required nor possible to indicate any relationships among the statements (except implicitly, such as when specifying relationships among variables). A single statement is required for describing each data name, or each formula. Modification or addition of statements can be carried out independently, one statement at a time, in the same manner.

3. Randomness—takes into account that information may originate from a group of users. Also, each user's concept of computer requirements is usually not well organized, and a variety of information comes to his mind at different times. The user can describe this information in statements, one at a time, in the order that the information occurs to him. While a certain organization in this approach may be helpful, it is not required.
4. Incrementality—means that once users have provided a certain portion of the totality of the statements, the Processor should be able to solicit additions or changes incrementally until a complete specification of the computer requirement is obtained. In this manner, it should be possible to avoid the problems of ambiguities or incompleteness that lead to major misunderstandings between the users and Application Programmers, and which require costly corrections and reprogramming.

5. Self-documentation—is attained as the documentation is generated during the dialogue between a user and the Processor. The additional documentation of the corresponding flowchart can be generated by the Processor automatically when generating the program. This latter type of documentation would normally not be of interest to a user, but rather to a Systems Programmer. The documentation of the user's computer requirement, and of the associated program to be produced by MODEL, comprises the collection of the corresponding statements together with the cross-references, summary tables and comments produced by the Processor.

6. Maintenance—involves corrections of the programs based on malfunctions discovered during operation, or on modifications of the specifications to meet new needs. In current practice the modifications must also be performed by a middle-man Application Programmer. It is envisaged, instead, that the user would make the changes in the MODEL statements to reflect either the corrections or the modifications, whereupon the Processor would generate a new program automatically.

7. Sharing—of data or computations can be attained by storing the corresponding MODEL statements in the Processor's data base. A user desirous of sharing the know-how of others who have previously stated requirements in similar application areas needs only to reference these statements in order to incorporate them in the specification of his program. Data bases could be physically shared, while computations would be repeated in each user's program. To make changes in the organization of shared data bases, the data description statements must be modified or added, and previously generated programs based on these statements must be automatically regenerated. In this way changes to shared data bases or programs can be carried out without requiring the users to modify their programs individually.

8. Tolerance—the Processor is tolerant of the user's ambiguities and omissions. To fully specify a requirement would necessitate composing many statements which may appear to a user to be self-evident and superfluous. The Processor, in synthesizing the MODEL statements into a program, must recognize the resulting ambiguities and omissions and generate the necessary additional MODEL statements automatically, thus relieving the user of much tiresome detail.

THE LANGUAGE

**Example**

An example is used in the following pages to illustrate how a user describes a requirement which he wishes to automate. The example envisages the environment of a department store with many departments, a large number of charge account customers, and an extensive and diverse stock inventory. Point-of-Sale terminals connected to a network of computers are distributed through the several locations of the department store. The user of the MODEL system is envisaged to be a department store analyst who desires to specify the accounting requirements for purchases by cash and charge account customers. Figure 2 gives an overview of the accounting requirement.
The corresponding program module is named DEPSALE, and is shown at the center of the figure. The data for DEPSALE comes from three sources. The sales transactions (SALETRAN) come from a Point-of-Sale terminal (POSTERM) sequentially, one at a time, and contain the information provided by the purchasers. The customer data (CUSTMAST) contains records of customers which can be referenced by providing customer numbers. Finally, there is inventory (INVEN) data, where information on stock items can be referenced by providing a stock number. The data that comes in to DEPSALE is referred to as **SOURCE** data. The **TARGET** data consists of the records in CUSTMAST and INVEN which are affected by the sales transaction and must be updated. Other TARGET data are the entries made in a sales journal (SALEJOUR), which are ordered sequentially by sales numbers (SALES #). Finally, a sales slip (SALESLIP) is produced on the terminal in cases where the sales transaction has been consummated or, alternatively, an exception notice (EXCEPT) is produced when the transaction does not take place.

**An outline for preparing requirement descriptions**

Figure 3 shows, in outline form, the information that needs to be provided in describing a requirement to be automated. As indicated, the user does not need to follow this outline, but can provide the information in any order. With the aid of a Text Editor, the user can enter statements and organize them into sections, subsections, etc. At the highest level the description is divided into three sections: the header, the data description, and the computation description. The header contains identification information: module name (1), source (2) and target (3) data names, and references to sections or subsections of computation description statements (called assertions) that are in a library in the data base (4). The latter may represent standards of data formats and organizations, or any previously entered statements. The user is able to specify more complex operations that would be applied to the statement data base to produce new statements to be incorporated in the specifications of a desired program module. Data description is independent of the computation description, so that the data may be shared by several programs. Data and computation descriptions may be in a library, and called for automatically, to facilitate sharing of data and computations. Data description breaks down into data statements and data description assertions.

**Data description statements**

**Data statements**

A **data network** concept is employed. Its application to the DEPSALE program is illustrated in Figure 4. First, a user has to name each data structure (to be further explained) and compose a statement for each name used. Each of the source or target files, inputs or reports is organized internally in a hierarchical structure resembling a tree. Each node of the tree must be given a name. The user must compose a statement corresponding to each data name, in which he provides associated information on the branch connection, data length, number of repetitions and other parameters of source and target media. Network (non-tree) structures are described by use of POINTER type assertions, which coordinate instances of repeating data.

For example, consider the sales transaction (SALETRAN) source data at the top left of Figure 4. The STORAGE name is the Point-of-Sale terminal, POSTERM (not shown in Figure 4).
The SALETRAN file (an incoming message, referred to here as a FILE) is describable as a tree structure. The name of the FILE is at the apex; emanating from it is the RECORD (SALEREC) node. The branches emanating from the RECORD node lead to GROUP or FIELD nodes. GROUP nodes are not terminal nodes, and the branches emanating from GROUP nodes can again lead to GROUP or FIELD nodes. The terminal nodes are always referred to as FIELDs.

Figure 5 (Figure 5—Data description statements for SALETRAN) shows the data description statements of SALETRAN. The words RECORD, STORAGE, GROUP or FIELD are followed by their respective parameters, in parentheses. The STORAGE parameters depend on the devices specified. For POSTERM they are the format, unit number and block size. The parameters of FILE, RECORD or GROUP are the data names of the descendent nodes. Another parameter of FILE is the name of the referencing or sequencing field, if any. Parameters of FIELD are data length and number of repetitions (if more than 1). These parameters can be constants or variables. If they are variables, LENGTH and EXIST type assertions must be provided separately to specify how they can be computed.

**Data description assertions**

A number of assertion types are needed to describe the data further. They concern length and number of repetitions of data, and inter- or intra-file pointers. Figure 6 illustrates these assertions. The number of transaction items (ITEM) in a sale transaction is a variable (EXIST type) named EXIST.ITEM. Assertions specifying the computation of these variables are shown at the top of Figure 6. For instance, the number of repetitions can be determined from the position of delimiter characters. The delimiter for end of transaction is the field ENDTRANS. It is assumed to be a Ready symbol, R. The first assertion in Figure 6 specifies the calculation of the number of repetitions of the group TRITEM. It uses the function INDEX, which evaluates a string of characters to determine the position of the R symbol in relation to the beginning of the string of the SALEREC record.

Figure 6 also shows the assertions which specify that fields CUST# and STOCK# in SALEREC can be used as pointers to the customer master and inventory files, as illustrated in Figure 4. Note that POINTER.INVENREC has two subscripts; the first is FOR_EACH_ITEM, and the second is '1'. As will be seen, the pointer value may be derived also from the REPLACE field of INVENREC. The other source and target data indicated in Figures 2 and 4 can be described in a similar manner. In the interest of brevity the discussion of this data is omitted.

In reproducing a listing of the submitted statements, the Processor names all data description statements and assertions and identifies source and target variables, unless already specified by the user. The names assigned to statements are a derivation of the names of the data elements described, or a derivation of the names of the dependent (target) variable in assertions.
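To see what the INDEX-based EXIST assertion computes, the following is a small procedural sketch in Python. MODEL itself is non-procedural and its Processor emits PL/I, so this is purely illustrative, and the record layout in it is hypothetical.

```python
# A sketch (not from the paper) of the computation described by the
# INDEX-based EXIST assertion: the number of repeating TRITEM groups is
# derived from the position of the Ready symbol 'R' that delimits the
# transaction. The fixed field lengths below are hypothetical.
HEADER_LEN = 10   # hypothetical fixed prefix of SALEREC before the items
ITEM_LEN = 20     # hypothetical fixed length of one TRITEM group

def exist_item(salerec: str) -> int:
    """EXIST.ITEM = number of TRITEM repetitions, from the 'R' delimiter."""
    end = salerec.index("R")   # position of ENDTRANS, like the INDEX function
    return (end - HEADER_LEN) // ITEM_LEN

# Example: a 10-character header, two 20-character items, then 'R'.
record = "H" * 10 + "X" * 40 + "R"
assert exist_item(record) == 2
```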
**Computation description statements**

**Composition of assertions**

Following the outline in Figure 3, the computation description consists first of the description of interim variables, in a manner similar to data description, except that these variables are stated to be INTERIM. In addition to describing interim variables, the description of computation consists primarily of statements with logical and/or arithmetic constructs, which are also referred to as assertions. Using arithmetic and logical operators, as well as functions, the user composes such statements to specify relationships among variables. In composing an assertion it is necessary to separate the dependent and independent variables. One common convention is to place the single dependent variable on the left of the equal sign (=). Note that the = sign means algebraic equality and not assignment.

In composing assertions, the user specifies relationships using mathematical, non-procedural notations. Many relationships, not directly expressible using arithmetic and logical operators, must then be expressed using functions that map the SOURCE data into the TARGET data. These functions are a substitute for established mathematical notation (an example is the Σ symbol, meaning "summation," which is illustrated further below). Other functions evaluate character strings (such as the INDEX function described above). The use of functions in assertions requires stating the name of the function, followed by the specification of the parameters, enclosed in parentheses. The functions return a value (which may be a single variable, or components of a vector or an array).

**SUBSET assertions**

Examples of SUBSET assertions are shown in Figure 7. For example, the first assertion specifies that only transactions from terminals SALE2 through SALE5 and from clerks C5 through C7 are to be processed. As shown, it is applied to the SALETRAN source data. The second assertion in Figure 7 applies to the target data EXCEPT. It specifies that entries in this target file be limited to cases where the balance (BALANCE) would exceed the credit limit (CREDLIM).

**Illustration of computational assertions**

As indicated in Figure 3, assertions can be used to specify relations exemplified by accounting rules or business decisions. Figure 8 gives four examples of such assertions (the total number in DEPSALE is 20) and illustrates several features of assertions.

The first assertion in Figure 8 specifies the evaluation of the EXTENSION field, which is the dollar value of purchased stock items of one type. The subscript notation (FOR_EACH_ITEM) after a variable means that it can have several components and, by implication, that the process will be repeated a variable number of times corresponding to the number of repetitions of ITEM. The second assertion in Figure 8 illustrates the use of the SUM function to specify the evaluation of the total charge made on a purchase sales slip (TOTCHARGE). The SUM function has as a parameter the variable to be summed (EXTENSION). (Another assertion, not shown in Figure 8, must specify the calculation of TAX.)

The third and fourth assertions in Figure 8 illustrate business decision rules. For instance, if an item in the sale transaction is out of stock, namely the Quantity On Hand (QOH) field is smaller than the quantity specified in the sales transaction (QUANTITY), then it is desired that a suitable substitute item, if any, should be sold. The stock number of a suitable substitute item is stored in the inventory record (INVENREC) in the field named REPLACE (see Figure 4). To represent decisions, the user can use a variable with the name of the decision prefixed by the word CHOICE. All such variables can have only two values, SELECTED and NOT-SELECTED. They will be described automatically by the Processor (see a later section). The third assertion in Figure 8 shows the expression that specifies when the substitution is to take place.
The last assertion in Figure 8 specifies the implementation of the decision, namely that the stock number in the REPLACE field is treated in the same way as if it were the stock number in the sales transaction. This requires a new value for the POINTER.INVENREC field (see Figure 8). This operation is expressed by use of a subscript. The above assertions are representative of some of the relationships that can be expressed by assertions. The library of functions is open-ended, and additions can be made easily to accommodate special needs. However, it is important to restrict the number of functions and to have their operations similar to common mathematical notation, in order to assure ease in user familiarization with them.

Figure 8—Examples of computational assertions for the DEPSALE example.

**Report formatting assertions**

The description of messages or reports in MODEL is similar to that of information stored in computer storage media. The user always views the information as a string of information divisible logically into records, groups, and fields. However, in the specification of messages or reports he also must consider the continuity and availability of physical space. Additionally, the internal order of data substructures can be specified by the sequence of submission of the corresponding data statements to the Processor. In describing the format of a report, the user must consider tab, carriage return or new page symbols as if they were data fields. In source data, these formatting symbols would already have the desired values. In target data, the obtaining of the values of these data must be specified by assertions. These values are frequently data dependent. Figure 9 illustrates this by showing two assertions that compute the number of carriage returns used in the sales slip (SALESLIPREC, see Figure 4) after printing ITEM groups:

```
ENDITEM (FOR_EACH_ITEM) = STRING (CR, 1);
END-TAX = STRING (CR, 12 - EXIST.ITEM);
```

Figure 9—Example of report formatting assertions.

Normally, one carriage return symbol after each item line suffices, except after the TAX line, when it is desired to advance to the 12th line of the sales slip, where the total charge (TOTCHARG) is printed. The function STRING generates a string consisting of substrings (CR) specified in the first parameter, repeated a number of times specified in the second parameter (12 - EXIST.ITEM). This task of describing a report appears laborious. However, it can become easier by use of picture data types and certain operations (definable in MODEL) which will specify report standards automatically, such as including "end of record," "end of group" and "end of field" fields following the respective data description statements.

GRAPH REPRESENTATION AND COMPLETENESS OF A MODEL SPECIFICATION

**Organization of precedence information**

Each statement in MODEL is an integral unit identified by a name. The existence of a precedence relationship between two statements indicates that a statement must be evaluated prior to initiating the evaluation of its successor statement.
The entire collection of statements is envisaged as a directed graph, where the statements are represented by correspondingly labeled nodes and where directed arcs or pointers connect the nodes, each representing a precedence relationship between the statements at the pointer origin and termination nodes. Figure 10 illustrates this view, showing the statements (represented as □) which form the nodes of a graph for the above example (Figure 10—Partial graph for DEPSALE, showing data of Figure 5 and assertions of Figures 7 and 9). Each pointer is labeled with the corresponding precedence type. Thus, each pointer has a direction and a type. The nodes in Figure 10 are shown to have a number of pointer exits and entries. The exits correspond to precedence pointers emanating from a node and pointing to descendant or target data or assertion nodes; the entries correspond to pointers originating at parent or source data or assertion nodes and terminating at a node.

The pointer finding process examines statements pairwise, using rules for determining the precedence type, which will become the type of the precedence pointer. The types of the found pointers are entered in a precedence matrix, illustrated in Figure 11. Assume that a specification consists of $n$ statements; then there are $n(n-1)$ pairs of statements that need to be considered for finding pointers. The different precedence types indicate corresponding methods of interpreting the respective statements in the subsequent phase of code generation, not discussed in this report. The pointer type recognition rules are summarized below. These precedence types are extensible. Precedence types can be added provided that they can be stated in terms of pointer selection rules applied to statements, pairwise. The definition of pointer selection rules involves analysis of data and function names in predecessor and successor statements. Once the rules have been applied to pairs of statements and the existence of pointers has been determined, these pointers are labeled with the appropriate precedence type. The labels of the respective pointers are then entered in the precedence matrix table, shown in Figure 11, at the intersection of a predecessor statement row and the successor column. Thus, once this process has been completed, a row will contain the types of all the exit pointers of a statement and a column will contain all the entry pointer types.

There are basically two main types of pointers:

1. **Data Tree Hierarchy**: between data description statements (data names) within a FILE, organized in a tree structure. For source data, the node closest to the apex is the predecessor and the node at the end of the branch is the successor (precedence type 1), and vice versa for target data (precedence type 2).

2. **Data Determinacy**: between assertions and data description statements. When the data name is the source of an assertion, a data node is the predecessor and an assertion node is the successor (type 3). When a data name is the target of an assertion, an assertion node is the predecessor and a data node is the successor (type 4).

Additionally, there are several miscellaneous precedence types requiring special interpretations in the code generation, as follows. Media (storage) statements can have entries from source file statements (type 6) or entries from target files (type 7). LENGTH and EXIST type variables can have pointers to source (type 8) or target data (type 9), respectively. The POINTER type variables have pointers only to source data (type 10).
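The pairwise pointer-finding process can be sketched as follows. This is an illustration in Python, not the paper's PL/I Processor; the statement representation and the rule functions are hypothetical, showing only the two main pointer types.

```python
# A sketch (not from the paper) of building the precedence matrix: every
# ordered pair of statements is tested against the pointer-selection rules,
# and the resulting precedence type (an integer) is recorded at the
# intersection of the predecessor row and successor column.
from typing import Optional

def data_tree_rule(pred: dict, succ: dict) -> Optional[int]:
    """Type 1: source-data parent precedes child; type 2: reversed for targets."""
    if pred.get("kind") == "data" and succ.get("kind") == "data":
        if succ["name"] in pred.get("children", []):
            return 1 if pred.get("role") == "source" else 2
    return None

def determinacy_rule(pred: dict, succ: dict) -> Optional[int]:
    """Type 3: source datum precedes assertion; type 4: assertion precedes target."""
    if pred.get("kind") == "data" and succ.get("kind") == "assertion":
        if pred["name"] in succ.get("sources", []):
            return 3
    if pred.get("kind") == "assertion" and succ.get("kind") == "data":
        if succ["name"] in pred.get("targets", []):
            return 4
    return None

RULES = [data_tree_rule, determinacy_rule]   # extensible, as in the paper

def precedence_matrix(statements: list) -> list:
    n = len(statements)
    matrix = [[None] * n for _ in range(n)]  # n(n-1) ordered pairs examined
    for i, pred in enumerate(statements):
        for j, succ in enumerate(statements):
            if i == j:
                continue
            for rule in RULES:
                ptype = rule(pred, succ)
                if ptype is not None:
                    matrix[i][j] = ptype
                    break
    return matrix
```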
Figure 11 shows the Precedence Matrix with the rows and columns ordered by the respective types of statements. The possible precedence types are indicated at the appropriate intersections. The ordering of statements in Figure 11 is not in fact required, but is purely for illustration, and suggests useful reports to the user (as described below).

**Analysis of precedence information**

As indicated in Figure 1, a most important aspect of the concept of the MODEL system is the MODEL Processor's solicitation of new or modified statements. A number of reports are produced for review by the user. First, the Processor produces a listing of MODEL statements and a report which cross-references each data name with the corresponding data and assertion statements. In addition, the Processor requests the user to resolve problems that it encounters. These requests have been divided into four classes: Incompletenesses, Ambiguities, Inconsistencies, and other Implications. Analysis of the information in the precedence matrix (Figure 11) can provide most of this information, as discussed below.

Incompleteness and Inconsistency problems are similar to "errors," and resolution of such problems is a prerequisite for completion of the processing. They normally terminate processing. Ambiguity and Implication problems are similar to "warnings," and the Processor continues to complete the subsequent generation of code in the object language. The user may wish to examine these comments of the Processor and, if necessary, make appropriate modifications and resubmit them to the Processor. Otherwise these comments should be incorporated in the documentation of the program module. The messages reporting results of the analysis which are sent to the user must be phrased in a manner that will make it easy for him to make modifications. The messages should therefore preferably address only one statement which needs to be added or modified. The exception to this rule would be where problems arise from Inconsistencies or Implications which are based on more than one statement.

Incompleteness is defined as an instance where the graph is incomplete, where entire statements are missing or where statements are duplicated. Such cases can be recognized by searching the precedence matrix of Figure 11 to verify the following conditions (a sketch of such checks follows the list):

(a) Each statement must have at least one entry pointer (in its column) and one exit pointer (in its row), with the following exceptions: Source and Target file statements have only an exit pointer or an entry pointer, respectively, while field statements may have no exit pointer (where the field is not used in deriving the target data). Absence of expected pointers, as above, indicates that a statement must be added by the user.

(b) The number of pointers in rows and columns must also be checked. Source data and target data statements can have only one pointer in the corresponding column and row, respectively. Also, Assertion statements can have only one pointer in the corresponding row. The existence of more pointers indicates either an Ambiguity, which must be resolved by the user adding qualifying names to the names of similarly named data, or a logical Inconsistency due to duplicate statements.

(c) Each source data file statement which has an index sequential (ISAM) organization must also have a POINTER type statement as a predecessor.

(d) Pointers in each row and column should not originate or terminate, respectively, in statements having the same data or assertion name; otherwise an Ambiguity or an Inconsistency is indicated.

(e) The number of pointers in an assertion column must equal the number of source variables of the assertion.

(f) All pointers must conform with the allowable types indicated in Figure 11.
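As a sketch of how such checks might look, the following Python fragment implements conditions (a) and (e) over the matrix built earlier; the remaining conditions follow the same pattern, and the statement fields used here are hypothetical.

```python
# A sketch (not from the paper) of completeness checks (a) and (e) on the
# precedence matrix; problems are reported as human-readable messages.
def check_completeness(matrix, statements):
    """Yield problems found by searching the precedence matrix."""
    n = len(statements)
    for i, stmt in enumerate(statements):
        exits = [t for t in matrix[i] if t is not None]                    # row i
        entries = [matrix[j][i] for j in range(n) if matrix[j][i] is not None]
        # (a) every statement needs an entry and an exit pointer, except
        # source files (no entry), target files (no exit) and unused fields.
        if not entries and stmt.get("role") != "source":
            yield f"incomplete: no entry pointer for {stmt['name']}"
        if not exits and stmt.get("role") != "target" and stmt.get("kind") != "field":
            yield f"incomplete: no exit pointer for {stmt['name']}"
        # (e) an assertion's entry pointers must match its source variables.
        if stmt.get("kind") == "assertion" and len(entries) != len(stmt.get("sources", [])):
            yield (f"inconsistent: {stmt['name']} has {len(entries)} entry pointers "
                   f"but {len(stmt.get('sources', []))} source variables")

# Usage:
#   for msg in check_completeness(precedence_matrix(stmts), stmts):
#       print(msg)
```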
Before reporting the Incompletenesses, an attempt is made to resolve these problems automatically. If such resolution is possible, the suggested additions or modifications of statements are reported. If this process is not successful, appropriate messages with an indication of the missing or inconsistent statements are sent to the user. This supplementing of MODEL statements by the Processor is essential to relieve the user of providing much tiresome detail which may appear self-evident. Rules for making such judgments may be added or modified based on experience with the Processor. Examples of such automatic additions and modifications to MODEL statements include:

(a) Modifying an assertion statement by preceding the names of ambiguous data used in assertions with the names of their respective files (or other higher-level data names). This would resolve the ambiguity where the same data name is used in a number of data statements.

(b) Naming statements and identifying the SOURCE (independent) and TARGET (dependent) variables of assertions (where the user omitted this information).

(c) Providing assertions that will indicate equality of similarly named source and target data, in the absence of other assertions expressing relationships between such data.

Inconsistencies are conflicts which require the user to conduct a logical analysis of more than one of the submitted statements. Some Inconsistencies are simple to determine. Examples were shown in the discussion of Incompletenesses above, where a statement node has more than one exit or entry pointer but only one is permissible. An Inconsistency message must then be produced which includes the offending statements.

A more complex type of Inconsistency arises from the existence of "cycles," which are closed paths in the directed graph, each with a number of nodes and interconnecting pointers. The process for finding cycles is discussed in the references. Cycles in the directed graph denote faulty circular logic. They do not indicate iterations in the resulting program. Iterations in the program originate from a number of other features of the language, such as the use of subscripts (FOR_EACH_X) following the name of a variable in an assertion, and from the use of repeating data. The user is required to "open" the loops found by the Processor, through a modification of some of the statements corresponding to the nodes of the loops, before the Processor can continue with the code generation.
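The references describe the actual cycle-finding process; as an illustration only, a standard depth-first search over the precedence matrix, sketched below in Python with hypothetical names, finds one cycle so that its statements can be reported to the user.

```python
# A sketch (not from the paper) of finding a cycle in the statement graph
# by depth-first search over the precedence matrix built earlier.
def find_cycle(matrix, statements):
    """Return one list of statement names forming a cycle, or None."""
    n = len(statements)
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current path / done
    color = [WHITE] * n
    stack = []

    def dfs(i):
        color[i] = GRAY
        stack.append(i)
        for j in range(n):
            if matrix[i][j] is None:
                continue
            if color[j] == GRAY:     # back edge: a cycle has been found
                k = stack.index(j)
                return [statements[v]["name"] for v in stack[k:]]
            if color[j] == WHITE:
                found = dfs(j)
                if found:
                    return found
        stack.pop()
        color[i] = BLACK
        return None

    for i in range(n):
        if color[i] == WHITE:
            found = dfs(i)
            if found:
                return found
    return None
```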
Implications are classes of logical conclusions, based on the submitted statements, that are considered to be potentially of interest to the user. The Processor, while capable of determining such conclusions, cannot further evaluate them, either because of limitations of the analysis methods that are employed, or because of lack of information about the area where the program is to be employed. Therefore the cooperation of the user is requested to check and verify the reported conclusions. Implications can be effectively reported in a form similar to the "decision tables" which have been widely used in the past. Two such tables can be extracted from the matrix of Figure 11, consisting only of the data name rows and assertion columns where pointers of Type 3 exist, and the assertion rows and data columns where Type 4 pointers exist. Such tables are considerably smaller than the matrix of Figure 11, and therefore they can include the entire row and column statements.

CONCLUSION

The restrictions on space have limited the scope of this article to the extent that only a small illustrative example was presented, with the objective of familiarizing the reader with the general use and operation of the system. The reader is referred to Reference 4 for a more comprehensive survey of the field of automatic generation of programs. References 1, 2, 3 and 4 provide a detailed description of the syntax and semantics of MODEL, and of the methods of generating programs automatically.

ACKNOWLEDGMENT

This paper is based on MODEL II. The implementation of this system was the responsibility of: S. Shastry and Y. Chang (syntax analysis), N. A. Rin (precedence matrix and code generation), and A. Pnueli (sequence and iteration analysis).

REFERENCES
{"Source-Url": "https://csdl.computer.org/csdl/proceedings/afips/1977/5085/00/50850679.pdf", "len_cl100k_base": 6913, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 33002, "total-output-tokens": 7750, "length": "2e12", "weborganizer": {"__label__adult": 0.0002384185791015625, "__label__art_design": 0.00027632713317871094, "__label__crime_law": 0.0002294778823852539, "__label__education_jobs": 0.0023250579833984375, "__label__entertainment": 5.066394805908203e-05, "__label__fashion_beauty": 0.0001119375228881836, "__label__finance_business": 0.0006070137023925781, "__label__food_dining": 0.00025272369384765625, "__label__games": 0.0003731250762939453, "__label__hardware": 0.0012950897216796875, "__label__health": 0.0002925395965576172, "__label__history": 0.00019502639770507812, "__label__home_hobbies": 8.863210678100586e-05, "__label__industrial": 0.00043082237243652344, "__label__literature": 0.00027823448181152344, "__label__politics": 0.00017833709716796875, "__label__religion": 0.0003161430358886719, "__label__science_tech": 0.02728271484375, "__label__social_life": 5.8531761169433594e-05, "__label__software": 0.01380157470703125, "__label__software_dev": 0.95068359375, "__label__sports_fitness": 0.00014734268188476562, "__label__transportation": 0.0003826618194580078, "__label__travel": 0.00011903047561645508}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36876, 0.00729]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36876, 0.74716]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36876, 0.92259]], "google_gemma-3-12b-it_contains_pii": [[0, 4511, false], [4511, 5188, null], [5188, 11068, null], [11068, 13472, null], [13472, 13867, null], [13867, 18262, null], [18262, 23136, null], [23136, 25616, null], [25616, 27149, null], [27149, 32924, null], [32924, 36876, null], [36876, 36876, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4511, true], [4511, 5188, null], [5188, 11068, null], [11068, 13472, null], [13472, 13867, null], [13867, 18262, null], [18262, 23136, null], [23136, 25616, null], [25616, 27149, null], [27149, 32924, null], [32924, 36876, null], [36876, 36876, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36876, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36876, null]], "pdf_page_numbers": [[0, 4511, 1], [4511, 5188, 2], [5188, 11068, 3], [11068, 13472, 4], [13472, 13867, 5], [13867, 18262, 6], [18262, 23136, 7], [23136, 25616, 8], [25616, 27149, 9], [27149, 32924, 10], [32924, 36876, 11], [36876, 36876, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36876, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
10ca21a319cf15a8d82909e6542539fd68851883
After 50 years and 1 day of exponential growth... Moore's original paper was published on 19 April 1965.

Today
- More about OpenMP
- Vector instructions (SIMD)
- A little bit about instruction-level parallelism and memory hierarchies (caches)

About benchmarking and reports

Suggestions
- Calculate nanoseconds per pixel
  - or clock cycles per pixel
  - typically 1 nanosecond ≈ 2–3 clock cycles
- Calculate speed (= 1/time) as a function of the number of threads
  - compare with linear growth

[Plots on these slides: nanoseconds/pixel vs. image size in megapixels for MF1 (21 × 21 window, 1 thread) and MF3 (21 × 21 window, multithreaded); nanoseconds/multiplication for CP1 (1000 rows, 1 thread) and CP4 (1000 rows, multithreaded); images/second vs. number of threads for MF3, compared against linear speedup (2000 × 2000 pixels, 21 × 21 window, 4 cores with Hyper-Threading; and 4000 × 4000 pixels, 201 × 201 window, 2 × 12 cores).]

Calculating speedups
- Fair baseline: a single-threaded version (compiled *without* OpenMP)
  - if you compile with OpenMP and run with `OMP_NUM_THREADS=1`, it can be much slower than a good single-threaded implementation

OpenMP: memory model (quick recap)

OpenMP memory model
- Contract between programmer & system
- Local "temporary view", global "memory"
  - threads read & write the temporary view
  - may or may not be consistent with memory
- Consistency guaranteed only after a "flush"

OpenMP memory model
- Implicit "flush", e.g.:
  - when entering/leaving "parallel" regions
  - when entering/leaving "critical" regions
- Mutual exclusion for "critical" regions

```c
int a = 0;
#pragma omp parallel
{
    #pragma omp critical
    {
        a += 1;
    }
}
```

Simple rules
- **Permitted (without explicit synchronisation):**
  - *multiple threads reading*, no thread writing
  - *one thread writing*, the same thread reading
- **Forbidden (without explicit synchronisation):**
  - *multiple threads writing*
  - *one thread writing, another thread reading*

Simple rules
- Smallest meaningful unit = array element
- Many threads can access the same array
- Just be careful if they access the same array element
  - even if you try to manipulate different bits

OpenMP: variables, private or shared?

Two kinds of variables
- Shared variables
  - shared among all threads
  - be very careful with data races!
- Private variables
  - each thread has its own variable
  - safe and easy

```c
// shared variable
int sum_shared = 0;
#pragma omp parallel
{
    // private variable (one for each thread)
    int sum_local = 0;
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        sum_local += i;
    }
    #pragma omp critical
    {
        sum_shared += sum_local;
    }
}
printf("%d\n", sum_shared);
```

```c
// OK:
for (int i = 0; i < n; ++i) {
    float tmp = x[i];
    y[i] = tmp * tmp;
}

// OK:
float tmp;
for (int i = 0; i < n; ++i) {
    tmp = x[i];
    y[i] = tmp * tmp;
}

// OK:
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    float tmp = x[i];
    y[i] = tmp * tmp;
}

// Bad (data race: tmp is shared):
float tmp;
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    tmp = x[i];
    y[i] = tmp * tmp;
}

// OK (just unnecessarily complicated):
#pragma omp parallel
{
    float tmp;
    #pragma omp for
    for (int i = 0; i < n; ++i) {
        tmp = x[i];
        y[i] = tmp * tmp;
    }
}
```

Two kinds of variables
- Shared variables and private variables
- If necessary, you can customise this:
  - `#pragma omp parallel private(x)`
  - `#pragma omp parallel shared(x)`
  - `#pragma omp parallel firstprivate(x)`
- Seldom needed, defaults usually fine (a small demo follows)
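As a minimal illustration of these clauses (our own demo, not from the original slides): `firstprivate` initialises each thread's copy from the value the variable had before the region, while `private` leaves the per-thread copy uninitialised.

```cpp
#include <cstdio>

int main() {
    int x = 42;

    // firstprivate: each thread gets its own copy of x,
    // initialised to 42; changes do not affect the outer x.
    #pragma omp parallel firstprivate(x)
    {
        x += 1;                          // safe: x is per-thread here
        std::printf("inside: %d\n", x);  // prints 43 in every thread
    }
    std::printf("after: %d\n", x);       // still 42

    // private: the per-thread copy is uninitialised,
    // so assign it before reading.
    #pragma omp parallel private(x)
    {
        x = 0;
        x += 1;
    }
}
```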
Best practices
- Use subroutines!
  - much easier to avoid accidents with shared variables this way
- Keep the function with #pragmas as short as possible
  - just e.g. call another function in a for loop

OpenMP: critical sections ... and atomics

```c
// Good, no critical section needed: 4 ms
#pragma omp parallel for
for (int i = 0; i < 10000000; ++i) {
    ++v[i];
}

// Bad, very slow: 40 000 ms
#pragma omp parallel for
for (int i = 0; i < 10000000; ++i) {
    #pragma omp critical
    {
        ++v[i];
    }
}
```

```c
// Bad: no data race, but the output is not well-defined;
// increments can be lost between the two critical sections.
int a = 0;
#pragma omp parallel
{
    int b;
    #pragma omp critical
    {
        b = a;
    }
    ++b;
    #pragma omp critical
    {
        a = b;
    }
}

// OK:
int a = 0;
#pragma omp parallel
{
    int b;
    #pragma omp critical
    {
        b = a;
        ++b;
        a = b;
    }
}
```

```c
// Bad: same output file, no synchronisation
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    std::cout << v << std::endl;
}

// OK (but no guarantees on the order of lines)
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    #pragma omp critical
    {
        std::cout << v << std::endl;
    }
}
```

Naming critical sections
- You can give names to critical sections: `#pragma omp critical (myname)`
- Different threads can simultaneously enter critical sections with different names
- No name = the same name

```c
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    b();
    #pragma omp critical (xxx)
    {
        c();
    }
    #pragma omp critical (yyy)
    {
        d();
    }
}
```

```c
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    #pragma omp critical (result)
    {
        result += v;
    }
    #pragma omp critical (output)
    {
        std::cout << v << std::endl;
    }
}
```
Atomic operation
- Like a tiny critical section
- Very restricted: just for e.g. updating a single variable
- Much more efficient

```c
// Sequential:
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    ++p[l];
}

// Parallel, atomic update:
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    #pragma omp atomic
    ++p[l];
}

// Parallel, critical section (works, but slower):
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    #pragma omp critical
    {
        ++p[l];
    }
}
```

OpenMP: scheduling

```c
a();
#pragma omp parallel for
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
```

```c
// Good memory locality:
// each thread scans a consecutive part of the array
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    c(x[i]);
}
```

```c
a();
#pragma omp parallel for schedule(static, 1)
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
```

With `schedule(static, 1)` and 4 threads, the iterations are interleaved round-robin:

    thread 0: c(0)  c(4)  c(8)   c(12)
    thread 1: c(1)  c(5)  c(9)   c(13)
    thread 2: c(2)  c(6)  c(10)  c(14)
    thread 3: c(3)  c(7)  c(11)  c(15)

```c
a();
#pragma omp parallel for schedule(dynamic)
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
```

OpenMP scheduling
- Performance, n = 100 000 000:
  - sequential: 50 ms
  - parallel: 50 ms
  - `schedule(static, 1)`: 200 ms
  - `schedule(dynamic)`: 4000 ms

```c
for (int i = 0; i < n; ++i) ++v[i];
```

OpenMP scheduling
- Performance, n = 100 000 000:
  - sequential: 800 ms
  - parallel: 300 ms
  - `schedule(static, 1)`: 300 ms
  - `schedule(dynamic)`: 4000 ms

```c
for (int i = 0; i < n; ++i) v[i] = sqrt(i);
```
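The 4000 ms for `schedule(dynamic)` above is scheduling overhead paid once per iteration. A common remedy (our sketch, not from the original slides) is to give `dynamic` a chunk size, so each thread grabs work in larger batches; this keeps the load balancing but amortises the overhead:

```cpp
#include <cmath>
#include <vector>

int main() {
    const int n = 100000000;
    std::vector<double> v(n);

    // Dynamic load balancing, but the scheduling overhead is paid
    // once per chunk of 10000 iterations, not once per iteration.
    // The chunk size 10000 is an illustrative guess; tune it.
    #pragma omp parallel for schedule(dynamic, 10000)
    for (int i = 0; i < n; ++i) {
        v[i] = std::sqrt(i);
    }
}
```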
OpenMP: reductions (just a convenient shorthand)

```c
int g = 0;
#pragma omp parallel
{
    int l = 0;
    #pragma omp for
    for (int i = 0; i < n; ++i) {
        l += v[i];
    }
    #pragma omp atomic
    g += l;
}

// ≈ the same as:
int g = 0;
#pragma omp parallel for reduction(+:g)
for (int i = 0; i < n; ++i) {
    g += v[i];
}
```

Vector instructions
- 1997: MMX, 64-bit registers MM0 … MM7
- 1999: SSE, 128-bit registers XMM0 … XMM7
- 2011: AVX, 256-bit registers YMM0 … YMM15
- soon: AVX-512, 512-bit registers ZMM0 … ZMM31

Vector instructions
- Hundreds of machine-language instructions that interpret AVX registers as vectors:
  - "horizontal add with saturation"
  - "conditional dot product"
  - "sum of absolute differences"
  - "fused multiply and add" …
- Directly available via compiler intrinsics and built-in functions, if needed:
  - `c = _mm256_hadd_pd(a, b);`
  - `c = __builtin_ia32_haddpd256(a, b);`
  - see the course web site for pointers
- In many cases we do not need to worry about these low-level details

Vector types in GCC

```c
typedef double double4_t
    __attribute__((__vector_size__(4 * sizeof(double))));
typedef float float8_t
    __attribute__((__vector_size__(8 * sizeof(float))));

// Now these are almost equivalent:
double4_t a;
double a[4];
```

```c
// Can address individual elements as usual:
float8_t a;
for (int i = 0; i < 8; ++i) {
    a[i] = 123.0 + i;
}

float8_t a[3];
for (int j = 0; j < 3; ++j) {
    for (int i = 0; i < 8; ++i) {
        a[j][i] = 123.0 + i;
    }
}
```

```c
// Operations on entire vectors:
float8_t a, b, c;
a += b * c;
// Same as:
for (int i = 0; i < 8; ++i) {
    a[i] += b[i] * c[i];
}

// Also with scalars:
a += b * c / 3 + 2;
// Same as:
for (int i = 0; i < 8; ++i) {
    a[i] += b[i] * c[i] / 3 + 2;
}
```

Vector types in GCC
- Always available
- Compiler uses special vector registers and instructions whenever possible
- Remember to specify the architecture: `g++ -march=native`

Memory alignment
- Vector data is always properly aligned
  - memory address divisible by sizeof(vector)
- Compiler takes care of this for local variables allocated from the stack
- You take care of this for arrays allocated from the heap

Memory alignment
- `malloc()` is not necessarily good enough
- `posix_memalign()` to allocate memory, `free()` to release
- See `common/vector.*` for helper functions: `float8_alloc()`, `double4_alloc()`

```c
double4_t* x = double4_alloc(n);
double4_t* y = double4_alloc(n);
for (int i = 0; i < n; ++i) {
    for (int j = 0; j < 4; ++j) {
        x[i][j] = ...;
    }
}
for (int i = 0; i < n; ++i) {
    double4_t z = x[i];
    y[i] = z * z;
}
free(x);
free(y);
```

Memory alignment
- CPU vector instructions require proper alignment
- Vector types are a promise of proper alignment
- So the C compiler can safely generate vector instructions:

```c
for (int i = 0; i < n; ++i) {
    double4_t z = x[i];
    y[i] = z * z;
}
```

```
L42:    vmovapd (%rbx,%rax), %ymm0
        vmulpd  %ymm0, %ymm0, %ymm0
        vmovapd %ymm0, (%r12,%rax)
        addq    $32, %rax
        cmpq    %rdx, %rax
        jne     L42
```

("…pd" = packed doubles = vector of doubles; "ymm…" = 256-bit register)

How to exploit vector instructions?
- Needs some creativity!
- Design your algorithm so that you can do the same operation for many items, in parallel
- Often some preprocessing & postprocessing is needed: convert input data to suitable vectors and back

```c
// Goal: sum of squares
s = x[0]*x[0] + ... + x[n-1]*x[n-1];

// Preprocessing: pack to vectors
v[0] = { x[0], x[1], x[2], x[3] };
v[1] = { x[4], x[5], x[6], x[7] };
...
// Pad the last vector with zeroes if needed
v[m-1] = { x[n-2], x[n-1], 0, 0 };

// Calculation: each component independently, in parallel
y = v[0]*v[0] + ... + v[m-1]*v[m-1];

// Postprocessing: combine the components
s = y[0] + y[1] + y[2] + y[3];
```

Other examples
- Interleave input rows: in each vector, element i comes from row i; we can then process multiple rows in parallel
- Multidimensional input: (red, green, blue) triples in digital images, multiple channels in digital audio
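Putting the pieces together, here is a small end-to-end sketch of the sum-of-squares example using the GCC vector extensions described above. This is our own code: the `double4_t` typedef matches the slides, but the aligned allocation is inlined with `posix_memalign` instead of the course's `double4_alloc` helper.

```cpp
#include <cstdio>
#include <cstdlib>

typedef double double4_t
    __attribute__((__vector_size__(4 * sizeof(double))));

int main() {
    const int n = 10;                 // number of doubles
    const int m = (n + 3) / 4;        // number of vectors

    // Aligned allocation (what double4_alloc does under the hood).
    void* p = nullptr;
    if (posix_memalign(&p, sizeof(double4_t), m * sizeof(double4_t)))
        return 1;
    double4_t* v = static_cast<double4_t*>(p);

    // Preprocessing: pack x[i] = i into vectors, pad with zeroes.
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < 4; ++j)
            v[i][j] = (4 * i + j < n) ? 4 * i + j : 0.0;

    // Calculation: 4 independent accumulators in one vector.
    double4_t y = {0.0, 0.0, 0.0, 0.0};
    for (int i = 0; i < m; ++i)
        y += v[i] * v[i];

    // Postprocessing: combine the components.
    double s = y[0] + y[1] + y[2] + y[3];
    std::printf("%f\n", s);           // 285 for n = 10
    free(v);
}
```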
Efficient use of vector instructions: instruction-level parallelism and the memory hierarchy

Toy example: sum of squares

```c
// Repeatedly do multiply-and-add
// for an array with "size" vectors
for (int j = 0; j < iter; ++j) {
    for (int i = 0; i < size; ++i) {
        double4_t y = v[i];
        x += y * y;
    }
}
```

How well does this perform? [The slides plot nanoseconds per vector multiplication as a function of the array size in bytes; the result is disappointing.]

Instruction-level parallelism
- Good: parallelism in vector operations
- Bad: very few opportunities for instruction-level parallelism
- Inherently sequential: each `x += y * y` depends on the previous one

```c
// Bad: one long dependency chain
x += y[0] * y[0];
x += y[1] * y[1];
x += y[2] * y[2];
x += y[3] * y[3];
...

// Better: four independent chains
t[0] += y[0] * y[0];
t[1] += y[1] * y[1];
t[2] += y[2] * y[2];
t[3] += y[3] * y[3];
t[0] += y[4] * y[4];
t[1] += y[5] * y[5];
t[2] += y[6] * y[6];
t[3] += y[7] * y[7];
t[0] += y[8] * y[8];
...
```

```c
// More opportunities for instruction-level parallelism
// (assuming here that "size" is a multiple of 8)
double4_t t[8];
...
for (int j = 0; j < iter; ++j) {
    for (int i = 0; i < size; i += 8) {
        for (int k = 0; k < 8; ++k) {
            double4_t y = v[i + k];
            t[k] += y * y;
        }
    }
}
for (int k = 0; k < 8; ++k) {
    x += t[k];
}
```

Any improvements?

Bottlenecks
- Naive version: *latency* of vector operations
- Unrolled version, small data: *throughput* of vector operations
- Unrolled version, large data: getting data from the *memory*

[The slides show the same plot annotated with the machine's memory hierarchy: L1 data 32 KB, L2 256 KB, L3 8 MB, main memory 16 GB.]

Caches

How do caches work?
- CPU ↔ L1 ↔ L2 ↔ L3 ↔ memory
- Whenever you read memory:
  - the CPU reads the full cache line (64 bytes) from the nearest cache that contains it
  - stores it in all intermediate caches, making room by throwing away older data

Some rules of thumb
- Repeatedly work with a small chunk of ≤ 32 KB of data:
  - all data remains in L1
  - small latency (order of 1 ns)
  - large bandwidth
- Random reads in ≫ 8 MB of data:
  - most memory lookups are cache misses
  - large latency (order of 100 ns)
  - small bandwidth

Some rules of thumb
- **Ideal:** linear scanning of L1
- **Good:** random access of L1, linear scanning of L2–L3
- **Tolerable:** linear scanning of main memory
- **Horrible:** random access of main memory

Some rules of thumb
- You can do useful work while you wait for data from memory
- Instruction-level parallelism does it automatically, if there are some other independent operations that you can run

Optimising cache usage: cache blocking in matrix multiplication

Arithmetic intensity
- Throughput of arithmetic operations is larger than main memory bandwidth
- Whenever you read data from main memory into the caches (or from the caches into registers), try to do many arithmetic operations with the same data

Example: matrix multiplication
- Multiplying n × n matrices: O(n²) data, O(n³) operations
- Naive algorithm: each operation needs to fetch new data from memory
- Better algorithm: most operations use data that is already in cache or registers

[The slides illustrate A × B = C graphically: the naive loop order has poor locality, while cache blocking, which computes C one block at a time from blocks of A and B, has better locality; a sketch follows.]
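As a concrete illustration (our own sketch, not the course's reference solution), here is cache blocking for C = A · B: the loops are tiled so that one BS × BS block each of A, B, and C stays in cache while we do O(BS³) operations on it. The block size 48 is an illustrative choice.

```cpp
#include <algorithm>
#include <vector>

// Cache-blocked matrix multiplication, C += A * B, all n-by-n,
// row-major, with C zero-initialised by the caller. BS is chosen so
// that three BS-by-BS blocks of doubles fit in the L2 cache
// (3 * 48 * 48 * 8 bytes ≈ 55 KB).
constexpr int BS = 48;

void matmul_blocked(int n, const double* A, const double* B, double* C) {
    for (int ic = 0; ic < n; ic += BS)
    for (int kc = 0; kc < n; kc += BS)
    for (int jc = 0; jc < n; jc += BS) {
        // The loops below touch only one block each of A, B, and C,
        // which stay in cache for the duration of the block multiply.
        int imax = std::min(ic + BS, n);
        int kmax = std::min(kc + BS, n);
        int jmax = std::min(jc + BS, n);
        for (int i = ic; i < imax; ++i) {
            for (int k = kc; k < kmax; ++k) {
                double a = A[i * n + k];   // reused across the whole j loop
                for (int j = jc; j < jmax; ++j) {
                    C[i * n + j] += a * B[k * n + j];
                }
            }
        }
    }
}

int main() {
    int n = 100;
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);
    matmul_blocked(n, A.data(), B.data(), C.data());
    // Every entry of C is now 100.0.
}
```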
Reusing data in registers
- Naive: calculate 1 dot product $x_1 \cdot y_1$
- Better: calculate 4 dot products $x_1 \cdot y_1$, $x_1 \cdot y_2$, $x_2 \cdot y_1$, $x_2 \cdot y_2$ simultaneously
  - read 2 times as much data
  - produce 4 times as many results
  - better *arithmetic intensity*

Reusing data in registers
- Naive: calculate 1 dot product $x_1 \cdot y_1$
- Better: calculate 9 dot products $x_1 \cdot y_1$, ..., $x_3 \cdot y_3$ simultaneously
  - read 3 times as much data
  - produce 9 times as many results
  - still enough registers to keep everything...?

Summary
- Use *vector instructions* to better exploit the parallel processing units in modern CPUs
- Pay attention to *caches*: reuse data
- Do not forget *instruction-level parallelism*
- Do not forget to use *multiple threads*
{"Source-Url": "http://users.ics.aalto.fi/suomela/ppc-2015/ppc-lectures-2.pdf", "len_cl100k_base": 5265, "olmocr-version": "0.1.53", "pdf-total-pages": 98, "total-fallback-pages": 0, "total-input-tokens": 125446, "total-output-tokens": 9053, "length": "2e12", "weborganizer": {"__label__adult": 0.0004639625549316406, "__label__art_design": 0.0005002021789550781, "__label__crime_law": 0.0004193782806396485, "__label__education_jobs": 0.00035572052001953125, "__label__entertainment": 0.00010162591934204102, "__label__fashion_beauty": 0.0001876354217529297, "__label__finance_business": 0.0002472400665283203, "__label__food_dining": 0.0005292892456054688, "__label__games": 0.0010128021240234375, "__label__hardware": 0.00879669189453125, "__label__health": 0.0005855560302734375, "__label__history": 0.0004029273986816406, "__label__home_hobbies": 0.00017976760864257812, "__label__industrial": 0.0011053085327148438, "__label__literature": 0.0001914501190185547, "__label__politics": 0.00035762786865234375, "__label__religion": 0.0007829666137695312, "__label__science_tech": 0.08624267578125, "__label__social_life": 6.639957427978516e-05, "__label__software": 0.007293701171875, "__label__software_dev": 0.888671875, "__label__sports_fitness": 0.0005521774291992188, "__label__transportation": 0.0008330345153808594, "__label__travel": 0.00029540061950683594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16619, 0.03266]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16619, 0.31036]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16619, 0.63273]], "google_gemma-3-12b-it_contains_pii": [[0, 105, false], [105, 246, null], [246, 277, null], [277, 498, null], [498, 587, null], [587, 674, null], [674, 726, null], [726, 726, null], [726, 726, null], [726, 783, null], [783, 783, null], [783, 832, null], [832, 903, null], [903, 980, null], [980, 1202, null], [1202, 1239, null], [1239, 1472, null], [1472, 1656, null], [1656, 1745, null], [1745, 2040, null], [2040, 2245, null], [2245, 2283, null], [2283, 2468, null], [2468, 2783, null], [2783, 2955, null], [2955, 3190, null], [3190, 3372, null], [3372, 3630, null], [3630, 3837, null], [3837, 3880, null], [3880, 4118, null], [4118, 4385, null], [4385, 4600, null], [4600, 4746, null], [4746, 5095, null], [5095, 5310, null], [5310, 5498, null], [5498, 5732, null], [5732, 5863, null], [5863, 6186, null], [6186, 6222, null], [6222, 6300, null], [6300, 6446, null], [6446, 6524, null], [6524, 6793, null], [6793, 6879, null], [6879, 7088, null], [7088, 7320, null], [7320, 7372, null], [7372, 7639, null], [7639, 7659, null], [7659, 7869, null], [7869, 8105, null], [8105, 8425, null], [8425, 8635, null], [8635, 8797, null], [8797, 8953, null], [8953, 9085, null], [9085, 9267, null], [9267, 9421, null], [9421, 9610, null], [9610, 9789, null], [9789, 10020, null], [10020, 10226, null], [10226, 10482, null], [10482, 10649, null], [10649, 10944, null], [10944, 10977, null], [10977, 11231, null], [11231, 11642, null], [11642, 11892, null], [11892, 11981, null], [11981, 12238, null], [12238, 12238, null], [12238, 12309, null], [12309, 12596, null], [12596, 13329, null], [13329, 13714, null], [13714, 13714, null], [13714, 13904, null], [13904, 13997, null], [13997, 14004, null], [14004, 14250, null], [14250, 14409, null], [14409, 14561, null], [14561, 14771, null], [14771, 14771, null], [14771, 14973, null], [14973, 15037, null], [15037, 15271, 
null], [15271, 15530, null], [15530, 15648, null], [15648, 15678, null], [15678, 15773, null], [15773, 16066, null], [16066, 16347, null], [16347, 16392, null], [16392, 16619, null]], "google_gemma-3-12b-it_is_public_document": [[0, 105, true], [105, 246, null], [246, 277, null], [277, 498, null], [498, 587, null], [587, 674, null], [674, 726, null], [726, 726, null], [726, 726, null], [726, 783, null], [783, 783, null], [783, 832, null], [832, 903, null], [903, 980, null], [980, 1202, null], [1202, 1239, null], [1239, 1472, null], [1472, 1656, null], [1656, 1745, null], [1745, 2040, null], [2040, 2245, null], [2245, 2283, null], [2283, 2468, null], [2468, 2783, null], [2783, 2955, null], [2955, 3190, null], [3190, 3372, null], [3372, 3630, null], [3630, 3837, null], [3837, 3880, null], [3880, 4118, null], [4118, 4385, null], [4385, 4600, null], [4600, 4746, null], [4746, 5095, null], [5095, 5310, null], [5310, 5498, null], [5498, 5732, null], [5732, 5863, null], [5863, 6186, null], [6186, 6222, null], [6222, 6300, null], [6300, 6446, null], [6446, 6524, null], [6524, 6793, null], [6793, 6879, null], [6879, 7088, null], [7088, 7320, null], [7320, 7372, null], [7372, 7639, null], [7639, 7659, null], [7659, 7869, null], [7869, 8105, null], [8105, 8425, null], [8425, 8635, null], [8635, 8797, null], [8797, 8953, null], [8953, 9085, null], [9085, 9267, null], [9267, 9421, null], [9421, 9610, null], [9610, 9789, null], [9789, 10020, null], [10020, 10226, null], [10226, 10482, null], [10482, 10649, null], [10649, 10944, null], [10944, 10977, null], [10977, 11231, null], [11231, 11642, null], [11642, 11892, null], [11892, 11981, null], [11981, 12238, null], [12238, 12238, null], [12238, 12309, null], [12309, 12596, null], [12596, 13329, null], [13329, 13714, null], [13714, 13714, null], [13714, 13904, null], [13904, 13997, null], [13997, 14004, null], [14004, 14250, null], [14250, 14409, null], [14409, 14561, null], [14561, 14771, null], [14771, 14771, null], [14771, 14973, null], [14973, 15037, null], [15037, 15271, null], [15271, 15530, null], [15530, 15648, null], [15648, 15678, null], [15678, 15773, null], [15773, 16066, null], [16066, 16347, null], [16347, 16392, null], [16392, 16619, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16619, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16619, null]], "pdf_page_numbers": [[0, 105, 1], [105, 246, 2], [246, 277, 3], [277, 498, 4], [498, 587, 5], [587, 674, 6], [674, 726, 7], [726, 726, 8], [726, 726, 9], [726, 783, 10], [783, 783, 11], [783, 832, 12], [832, 903, 13], [903, 980, 14], [980, 1202, 15], [1202, 1239, 16], [1239, 1472, 17], [1472, 1656, 18], [1656, 1745, 19], [1745, 2040, 20], [2040, 2245, 21], [2245, 2283, 22], [2283, 2468, 23], [2468, 2783, 24], [2783, 2955, 25], 
[2955, 3190, 26], [3190, 3372, 27], [3372, 3630, 28], [3630, 3837, 29], [3837, 3880, 30], [3880, 4118, 31], [4118, 4385, 32], [4385, 4600, 33], [4600, 4746, 34], [4746, 5095, 35], [5095, 5310, 36], [5310, 5498, 37], [5498, 5732, 38], [5732, 5863, 39], [5863, 6186, 40], [6186, 6222, 41], [6222, 6300, 42], [6300, 6446, 43], [6446, 6524, 44], [6524, 6793, 45], [6793, 6879, 46], [6879, 7088, 47], [7088, 7320, 48], [7320, 7372, 49], [7372, 7639, 50], [7639, 7659, 51], [7659, 7869, 52], [7869, 8105, 53], [8105, 8425, 54], [8425, 8635, 55], [8635, 8797, 56], [8797, 8953, 57], [8953, 9085, 58], [9085, 9267, 59], [9267, 9421, 60], [9421, 9610, 61], [9610, 9789, 62], [9789, 10020, 63], [10020, 10226, 64], [10226, 10482, 65], [10482, 10649, 66], [10649, 10944, 67], [10944, 10977, 68], [10977, 11231, 69], [11231, 11642, 70], [11642, 11892, 71], [11892, 11981, 72], [11981, 12238, 73], [12238, 12238, 74], [12238, 12309, 75], [12309, 12596, 76], [12596, 13329, 77], [13329, 13714, 78], [13714, 13714, 79], [13714, 13904, 80], [13904, 13997, 81], [13997, 14004, 82], [14004, 14250, 83], [14250, 14409, 84], [14409, 14561, 85], [14561, 14771, 86], [14771, 14771, 87], [14771, 14973, 88], [14973, 15037, 89], [15037, 15271, 90], [15271, 15530, 91], [15530, 15648, 92], [15648, 15678, 93], [15678, 15773, 94], [15773, 16066, 95], [16066, 16347, 96], [16347, 16392, 97], [16392, 16619, 98]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16619, 0.00939]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
53e4626a2c0e7f7e1ff4cb6c8233c6de85386c1d
Simulation Modeling of a Large-Scale Formal Verification Process

He Zhang, Gerwin Klein, Mark Staples, June Andronick, Liming Zhu, and Rafal Kolanski
NICTA, Australia
University of New South Wales, Australia
{firstname.lastname}@nicta.com.au

Abstract—The L4.verified project successfully completed a large-scale machine-checked formal verification at the code level of the functional correctness of the seL4 operating system microkernel. The project applied a middle-out process, which is significantly different from conventional software development processes. This paper reports a simulation model of this process; it is the first simulation model of a formal verification process. The model aims to support further understanding and investigation of the dynamic characteristics of the process and to support planning and optimization of future process enactment. We based the simulation model on a descriptive process model and information from project logs, meeting notes, and version control data over the project's history. Simulation results from the initial version of the model show the impact of complex coupling among the activities and artifacts, and frequent parallel as well as iterative work during execution. We examine some possible improvements on the formal verification process in light of the simulation results.

Keywords—software process modeling; process simulation; formal verification; system dynamics; microkernel

I. INTRODUCTION

Formal software verification is the verification method that provides the strongest known assurance that a software system implementation is consistent with its specification. Formal verification does not merely check all lines of code or all decisions in a program, but all possible behaviors for all possible inputs. More commonly applied verification methods are testing and code inspection. While they provide high return for lower assurance levels, they do not scale well to providing high assurance and become prohibitively expensive for the assurance level that formal verification can provide. While formal verification is cheaper for high assurance than testing, it still is a high-effort verification method and currently only feasible to apply for life- or mission-critical software systems. Most previous industrial use of formal methods has only performed formal specification, rarely formal verification [1], and if the latter, then often only for lightweight properties, not for a full proof of implementation correctness [2].

The recent formal verification of the seL4 (secure embedded L4) microkernel [3] has demonstrated that this method does scale to industrially relevant software systems and sizes on the order of 10,000 lines of C code. seL4 is part of the L4 family of high-performance operating system (OS) microkernels [4]. The L4.verified project has performed formal verification not only at the design level, but down to C source code; and not only for lightweight properties, but for the full functional correctness of a highly complex software system—the seL4 microkernel. While a microkernel is a comparatively small software system, its verification with an overall effort of 25 person years was a large-scale research project.

Because of its relatively high effort and up to now infrequent use in practice, we believe it important to analyze and investigate the process of formal verification and how it influences the rest of the development process. The process used in the L4.verified project was significant in enabling its success.
In earlier work [5], we reported on a detailed, descriptive model of the verification process used in L4.verified that was validated by project data and experience. A qualitative finding was that the middle-out approach provides advantages over pure top-down and bottom-up processes for formal methods. However, a descriptive process model neither reflects the dynamic behavior of the process, nor does it provide predictive power for supporting detailed project planning and execution. As there is little empirical evidence about formal verification processes, we decided to employ a simulation model to further investigate them, based on our experience with L4.verified.

This paper builds on our previous work [5], and reports a new process simulation study of the L4.verified project. The objective of this research is to contribute to a better understanding of how large formal methods projects can be run successfully. Simulation results will inform process improvements for overall project performance and process adaptability. To our knowledge, in terms of the earlier systematic surveys [6], [7] and more recent observations, this is the first process simulation model of a formal verification process. Based on a tailored descriptive process model of L4.verified, in this paper we report our work that: 1) developed a continuous process simulation model—VPMsim 1.0; 2) approximately calibrated parts of the simulation model with the data from L4.verified's project repository and team leaders' recollections; and 3) investigated the possible impacts of process decisions or changes to this project.

The paper is structured as follows. We first provide a brief overview of the L4.verified project and seL4 microkernel in Sect. II. Sect. III describes the middle-out verification process.

Software verification can be accomplished by any of several means or their combination. Common verification methods are test, review, and analysis, which can be performed manually or automatically. A number of process simulation models have studied these verification methods [6], [7]. Formal methods are another verification option that is able to prove the correctness of software with respect to mathematically-specified requirements. It has not yet been investigated using process simulation.

A. L4.verified Project

The L4.verified project completed the implementation and formal verification of the seL4 microkernel. A kernel is the part of the OS that runs in the privileged mode of the hardware. It has direct access to all hardware resources and provides the basic mechanisms for implementing the rest of the system. A microkernel, as opposed to more common monolithic OS kernels, is reduced to the bare minimum of functionality and code. The seL4 kernel comprises 8,700 lines of C code and 600 lines of assembler (without counting blank lines and comments). This radical reduction in size comes with a price in complexity. It results in high coupling and a high degree of interdependency between different parts of the kernel, as apparent in the function call graph of seL4 in Fig. 1.

The motivation for the radical reduction in size, and for formally proving functional correctness, is to provide high levels of assurance for safety, security, or correctness of systems. Formal verification gives the highest degree of assurance we can provide [8]. The small size of seL4 reduces the amount of critical code that must be formally verified. The L4.verified project ran over 4 years from 2005 to 2009.
It involved two teams: OS kernel developers and formal methods practitioners. A previous paper [5] has reported a number of general lessons about the management and execution of the project.

B. Related Work in Process Simulation

Systematic literature surveys [6], [7] show that software verification, e.g., inspection and testing, is one of the five most common topics in process simulation. Nonetheless, the model reported in this paper is the first reported simulation model for a formal verification process. This subsection reviews process simulation models that focus on non-formal verification and validation in software development.

Software verification ensures product quality, and has been investigated by different process simulation techniques in varying organizational settings. Raffo et al. [9], [10] modeled V&V (verification and validation) as a portion of the traditional V-model style development process (i.e. ISO 12207) adopted on NASA's software development projects. They created a discrete-event simulator to quantitatively assess the economic benefits of performing V&V activities on development projects and to optimize that benefit across alternative V&V integration strategies. This enabled NASA to more effectively allocate scarce resources for V&V activities.

GENSIM 2.0 [11] is a System Dynamics based process simulator that models and simulates a generic development-verification (D-V) process. The GENSIM 2.0 model is constructed with three levels of refinement and their validation counterparts: requirements D-V, design D-V, and code D-V. A variety of verification activities can be adopted on each level. GENSIM 2.0 is able to assess the overall effectiveness and performance (e.g., product quality, project duration, and effort/cost) of varying combinations of different development, verification, and validation strategies and techniques, depending on the inputs of 28 parameters.

III. A Middle-out Formal Verification Process

This section describes the middle-out formal verification process of the L4.verified project using a high-level conceptual model and a detailed descriptive model.

A. Conceptual Process Model

The goal of the L4.verified project was to develop and formally verify a high-performance kernel. It is a challenge to design a formally verifiable kernel while maintaining high performance. To obtain high performance, kernel developers usually take a bottom-up approach to design, focusing on low-level details that allow efficient management of hardware. In contrast, formal methods practitioners often prefer a top-down approach based on simple models with a high level of abstraction. To achieve both objectives, the L4.verified project bridged the gap between verifiability and performance using an iterative, concurrent, prototype-based, middle-out process, shown in Fig. 2. This is significantly different from the conventional pure bottom-up and top-down approaches. It is based around an intermediate target that is used and understood by both the kernel developers and the formal methods practitioners, with the aim of rapidly iterating through design, specifications, implementation, and formal model until convergence. This intermediate target is a prototype of the kernel written in the functional language Haskell (in the middle of Fig. 2). It is translated automatically into the executable specification of the kernel in the theorem prover Isabelle/HOL [12].
The importance of the use of executable specifications in formal verification in a theorem prover has been recognized previously [13]. The abstract specification, on the right of Fig. 2, is a formal description of the functionality of the kernel. It specifies the outer interface and effects of each system call, but does not describe in detail how these effects are implemented. In other words, it describes what is expected from the kernel, whereas the executable specification describes how the kernel will achieve its purpose. In that sense the executable specification represents the design of the kernel. The proof that the executable specification refines the abstract specification was carried out first. This proof can be seen as design verification.

On the left of the conceptual model (Fig. 2), a low-level high-performance implementation of the kernel was manually written in the C language. The second proof shows that the source code correctly implements the executable specification; we will also refer to this as code verification. Note that the C code is translated directly and automatically into the theorem prover for verification [14].
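Schematically, the two proofs compose into a refinement chain. In standard refinement notation (our paraphrase, not a formula from the paper), writing $\mathcal{B}(\cdot)$ for the set of possible behaviors of a program or specification, the code proof and the design proof together establish:

```latex
% Illustrative refinement chain: every behavior of the C code is
% allowed by the executable specification (code verification), and
% every behavior of the executable specification is allowed by the
% abstract specification (design verification).
\[
  \mathcal{B}(\text{C code})
  \subseteq \mathcal{B}(\text{executable spec})
  \subseteq \mathcal{B}(\text{abstract spec})
\]
```

By transitivity, properties established of the abstract specification's behaviors then carry over to the C implementation.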
B. Descriptive Process Model

The formal verification of seL4, in combination with the development of the kernel itself, did not follow a conventional software engineering process reported in the literature. Instead, the project followed the implicit conceptual process described above. In earlier work [5], we reported on a postmortem analysis of the process applied in this project and formulated a detailed, descriptive process model that shows process patterns and potential process factors for reuse and scaling of formal verification in software and systems development.

In this subsection, the descriptive process model [5] is tailored for our initial simulation model by eliminating the maintenance phase. This was done to simplify the creation and calibration of this initial simulation model. As shown in the conceptual model in Fig. 2, the L4.verified project used a middle-out approach, starting with an executable specification, which was then proved to be consistent with a high-level abstract specification, and later with the low-level source code.

Fig. 3 shows the tailored descriptive process model of L4.verified. Each activity in the model is directly linked to its input and output artifacts, and annotated with the performers (OS or FM team), type (manual, automatic, or interactive), and its step number. Note that the terms activity and step are used interchangeably in this paper. We do not include activities on proof tools and libraries in this version. The formal verification activities are technical development processes [15] modeled between the three levels of abstraction. The steps S1 to S6 in Fig. 3 roughly correspond to the transformations between artifacts in the conceptual model. The main differences between the conceptual model and the descriptive representation of the process in Fig. 3 are the detailed artifact and work flows and the explicit decision points being modeled.

The initial kernel requirements (and new feature & change requests) on the top left in Fig. 3 are fed into the first step, S1 (prototype development). The dashed line denotes exogenous artifacts that come from outside the process rather than artifacts generated during the process. The output prototype (Haskell) is automatically translated into the executable specification at S2. S3 defines the abstract specification based on the prototype. When S2 and S3 become stable, the refinement between these two specifications is proved in S4. The defects (inconsistencies between the two specifications) detected in S4 are returned to S1 and S3 separately for rework.

On the code level, when the prototype becomes stable (i.e. S4 has gone through its first major iteration), S5 is triggered to manually implement the kernel in C. Later, in S6, the source code is verified against the design (executable specification). Because the design becomes mature after S4, code-level defects are usually fixed directly in S5 and S6. In rare cases they are escalated to the design level (S1) or even up to the abstract specification level (S3). In the real project, this process experienced multiple iterations through steps S1–S6. They were triggered by feature changes in the prototype as well as by defects discovered during either verification phase, for example the loop S1-S3-S4-S1-S2-S4-...

An interesting artifact in formal verification is the set of invariants. In the middle-out process, the invariants are mostly proved as part of the design verification (S4). These invariants are reused heavily in the code verification, which helps to reduce the workload in step S6. Though theoretically S4 and S6 could be performed in parallel, significant savings in low-level (code) verification were possible because the invariants from S4 had stabilized. In terms of the experience from L4.verified, starting S6 too early may negate this effect. Note that S6 may also induce additional invariants to be proved on the executable specification. Invariant proofs are the highest-effort parts of this verification.

More generally, the descriptive process model identifies three main phases of the project, annotated on the right-hand side of the diagram: 1) prototype development, which clearly appears in the beginning; 2) specification definition and design verification, an iterative process on the right of the diagram; and 3) kernel implementation and code verification, another iterative process on the left of the diagram. The entire process in Fig. 3 terminated when no bugs remained or were being reported in any artifact, and no new features or change requests were coming into the process.

IV. THE VPMsim SIMULATION MODEL

In order to understand and investigate the dynamic behaviors of the middle-out formal verification process, we developed a process simulation model based upon the descriptive model (Fig. 3). This section elaborates the initial version of the simulation model, VPMsim 1.0.

A. Modeling Scope and Approach

There is a difference in modeling scope between the earlier descriptive process model [5] and the simulation model in this study. In the complete descriptive process model, after the release of the verified system (cf. Fig. 3), the L4.verified project progressed into the maintenance phase. This is out of scope for the simulation modeling reported in this paper. At this stage, the simulation model focuses on the phases from the beginning until the first release of the project, i.e. excluding the maintenance phase. Our research focus for this first version of VPMsim is on the direct activities of the production and verification of the artifacts in Fig. 3.
This means we did not: 1) model the development of supporting tools for translation and theorem proving; 2) explicitly model conventional verification, e.g., unit testing, whose effects are reflected when calibrating the development rates of prototype development and kernel implementation; or 3) model activities that did not contribute to development and verification based on requirements, e.g., documentation and code cleanup. We have used a continuous process simulation approach, System Dynamics (SD), which requires less micro-process level data than discrete simulation would.

B. Model Structure and Execution

VPMsim 1.0 was developed using Vensim, the most commonly used SD modeling and simulation package in software process research over the past decade [7]. Vensim provides a graphical workbench and a number of extra features on top of SD, e.g., views and subscripts. The VPMsim 1.0 model comprises ten views, more than 180 parameters (including auxiliary ones), and over 2,000 lines of code.

Fig. 4 shows the high-level structure of the simulation model, which is based on the descriptive model of the middle-out process (Fig. 3). The development and formal verification activities of the middle-out process are modeled and organized by views, a mechanism offered by Vensim that facilitates the development and understanding of large-scale SD models. It also increases module reuse within a complex model. The current version (1.0) of the model is composed of ten views, eight of which correspond to the specific activities (steps) of the formal verification process modeled in Fig. 3. As shown in Fig. 4, they are prototype development (PD), abstract definition (AD), kernel implementation (KI), high-level verification (HV), low-level verification (LV), prototype rework (PR), abstract rework (AR), and kernel rework (KR). Note that step S2 (Fig. 3) is not modeled as a view in Fig. 4 since this step can be automated. Another important change from the descriptive model is that the rework views for the abstract specification, prototype, and kernel are created apart from their development views, because the differences between the two types of view are significant, e.g., in inputs, productivity, workforce, and process control.

At the bottom of Fig. 4 there are the other two views: process state (PS), which defines important variables shared by the other views of activities and exogenous variables (e.g., requirements generation), and workforce allocation (WA), which implements dynamic workforce allocation among the activities in parallel. In addition to views, subscripting, a mechanism provided by Vensim that enables variables to hold different values for multiple entities simultaneously, is also used in building VPMsim 1.0, but for modeling defect severity levels only (i.e. high, medium, and low).
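To make the System Dynamics style concrete, the sketch below shows a deliberately simplified stock-and-flow fragment in the spirit of (but much smaller than) VPMsim: two stocks track unverified and verified proof obligations, a verification flow moves work between them, and a defect flow feeds rework back. All rates, the unit of work, and the Euler time step are illustrative assumptions, not calibrated VPMsim parameters.

```cpp
#include <cstdio>

// Minimal illustrative System-Dynamics-style fragment: stocks are
// updated by Euler integration of flow rates, the way a tool like
// Vensim advances an SD model over simulated time.
int main() {
    double unverified = 1000.0;  // stock: proof obligations to verify
    double verified   = 0.0;     // stock: completed proof obligations
    double rework     = 0.0;     // stock: defects awaiting rework

    const double dt           = 1.0;   // time step: one day
    const double verify_rate  = 4.0;   // obligations/day (assumed)
    const double defect_ratio = 0.15;  // share of work sent back (assumed)
    const double rework_rate  = 2.0;   // obligations/day (assumed)

    for (int day = 0; day < 400 && unverified + rework > 0.5; ++day) {
        double v = (unverified > 0) ? verify_rate : 0.0;
        double r = (rework > 0) ? rework_rate : 0.0;

        // Flows: verification drains 'unverified'; a fraction of it
        // surfaces defects that flow into 'rework', which flows back
        // into 'unverified' once fixed.
        unverified += dt * (r - v);
        rework     += dt * (v * defect_ratio - r);
        verified   += dt * (v * (1.0 - defect_ratio));

        if (unverified < 0) unverified = 0;
        if (rework < 0) rework = 0;

        if (day % 50 == 0)
            std::printf("day %3d: unverified %.0f, rework %.1f, verified %.0f\n",
                        day, unverified, rework, verified);
    }
}
```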
Due to limited space, we do not show all views of the SD model; we only describe the relationships and constraints among the views in simulation. The above eight activities (views) are performed by the FM and OS teams respectively, denoted by the darker and lighter gray backgrounds of the views in Fig. 4. The simulation starts with the PD view by the OS team. (The automated translation of the prototype is absorbed into the PD view in the simulation model.) PD is followed by AD (abstract specification, by the FM team) and KI (kernel, by the OS team). Note that in the real project and in the simulation, the stabilization of PD may always trigger AD immediately for an early HV. The lower-level KI waits until a majority of HV completes. The question of when to kick off KI depends on a number of factors (e.g., the progress of HV), and is one of the what-if questions to be explored in simulation (cf. Sect. V).

When the first version of the abstract specification is completed, it is proved in HV against the prototype (design) from PD. During HV, any design defects and abstract (specification) defects detected are fed into PR and AR correspondingly for rework. The corrected prototype and abstract specification are re-verified in HV. In the L4.verified project, kernel programming started when a large portion of the design proof was completed. In the simulation model, the implemented kernel (from KI) is verified against the prototype in LV. Any detected code-level defects are fixed in KR. Only a small number of design defects have to go into PR or AR for rework. All of these corrections may lead to re-verification at LV or even at HV. During simulation, requests for new features and changes are generated and introduced into the process from the left (PD in Fig. 4). The resulting updates to the prototype, abstract specification, and kernel also have to be re-verified.

C. Model Parameters and Calibration

The VPMsim model consists of a large number of parameters that represent inputs, outputs, and policies or constraints at the activity (view) level and the overall process level. Many of the parameters have to be calibrated against empirical data for specific projects, teams, artifacts, and techniques. Table I lists a subset of the model parameters as examples. Note that many of the parameters may vary over time and between specific activities, artifacts, teams, and iterations.

The development and verification of the seL4 kernel from prototyping through implementation, including all formal models and proofs, has been managed using version control systems. Around 9,000 changesets provide detailed information about the evolution of artifacts over the full lifetime of the project, including the ongoing maintenance phase. Each changeset identifies who made the change, to which artifact, at what time, and the size change of the artifact. Analysis of these changesets gives us estimates of the growth of artifacts and the workforce allocation over the project period. For other information for model calibration, such as for invariants, defect distribution, and policy issues, we made relatively coarse estimates based on the team leaders' recollections. More detail about how the data was retrieved from the repositories can be found in earlier work [5]. Some calibrated parameters are marked in Table I.

By analyzing graphical representations of the repository data (cf. [5]) combined with explanatory project logs (comments), we constructed Fig. 5. It shows the real progress on each of the five major artifacts over the project duration (without the maintenance phase) in swim lanes. On the leftmost side of Fig. 5 the five artifacts are grouped by their development/verification teams. The intermediate states of each artifact (denoted by rectangles) are positioned in the diagram in terms of their artifact type (lane) and occurrence time along the project's timeline. We marked the critical states for model calibration in gray. The shadowed states (bug fixes) are also important, but cannot easily be distinguished within the repository data. The dashed-line arrows indicate the sequential order and dependencies across the artifacts' states. The timeline diagram clearly shows how the different artifacts' states overlapped, and how dependencies and iterations happened in the project.
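The changeset analysis itself is straightforward to mechanize. The sketch below is our own illustration, not the project's actual tooling: the record fields and the day-based bucketing are assumptions. It aggregates per-artifact size deltas into cumulative growth curves of the kind plotted in Fig. 5.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// One version-control changeset, as described in the text:
// who made the change, to which artifact, when, and the size delta.
struct Changeset {
    std::string author;
    std::string artifact;  // e.g. "prototype", "abstract-spec", "C-kernel"
    int day;               // days since project start (assumed bucketing)
    long size_delta;       // lines added minus lines removed
};

// Cumulative artifact growth: artifact -> (day -> total size so far).
std::map<std::string, std::map<int, long>>
growth_curves(const std::vector<Changeset>& log) {
    std::map<std::string, std::map<int, long>> curves;
    std::map<std::string, long> running;
    for (const Changeset& c : log) {   // assumes log is sorted by day
        running[c.artifact] += c.size_delta;
        curves[c.artifact][c.day] = running[c.artifact];
    }
    return curves;
}

int main() {
    std::vector<Changeset> log = {
        {"dev1", "prototype", 0, 500},
        {"dev2", "prototype", 3, 250},
        {"fm1",  "abstract-spec", 5, 120},
    };
    auto curves = growth_curves(log);
    for (const auto& [artifact, curve] : curves)
        for (const auto& [day, size] : curve)
            std::printf("%s day %d: %ld lines\n", artifact.c_str(), day, size);
}
```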
V. Model Use and Results

As the L4.verified project is the first instance of the middle-out formal verification process, and VPMsim 1.0 was calibrated with the data of this project only, the proper application of this model is to re-investigate this project. In this section, we first validate the model by simulating the original L4.verified project as a baseline, and then use it to examine some possible changes to the project.

A. Simulation Baseline

We first defined the input parameters as close as possible to the real L4.verified project, and then ran the simulation model. According to the project timeline (Fig. 5), there were three major versions of the prototype, which corresponded to the evolving requirements (new features plus change requests) during the project. The simulation generates exactly the same amount of requirements change at these three time points. The kernel implementation and abstract definition are also triggered as shown in Fig. 5. Another important input is workforce turnover. The baseline model simulates the personnel's entries and exits in both teams as recorded in the project repository.

Due to space limitations, Fig. 6 only shows the changes of some important output parameters generated in this run. In the simulation (Fig. 6-d), the baseline project completes on Dec 25, 2008 with a total effort of 5,400 person-days (roughly 15 person-years), which conforms to the project teams' experience (14 person-years for kernel-specific verification) [5]. The effort on formal verification related work (done by the FM team) takes about 80% of the total effort (4,400 person-days), with nearly half of it spent on the design refinement proof (approximately 2,400 person-days). The frequent ups and downs shown in Fig. 6-c indicate intensive personnel switches between parallel activities.

Note that the simulated baseline project finished earlier than the real project; the gap is almost one calendar year. It is caused by the following possible reasons: 1) VPMsim 1.0 models development and verification, but not other activities such as documentation and code cleanup; 2) the simulation only handles weekdays and weekends, while public holidays and team members' (annual and sick) leave are not taken into account; 3) the workload on development related to the theorem prover is not considered in this model. However, comparing the trends and quantitative measures of the main output parameters (e.g., sizes of artifacts in Fig. 6-a and -b) between the baseline simulation and the original project, the simulation results are close to reality, and the most noticeable differences can be reasonably explained. Hence, based on the model calibration and the baseline simulation, we consider VPMsim 1.0 acceptable for further process investigation of L4.verified.

B. Model Application Scenarios

One important characteristic of the L4.verified project is the parallel work on development and verification. Fig. 5 shows a number of parallel activities in the project, in particular in the second half of the project. Note that some activities started late in the diagram, with a big gap to their predecessors. The real L4.verified project had particular reasons for this, but in other hypothetical projects following the middle-out process these reasons may not be present. In a new project, these steps could potentially start earlier in the process, in parallel with other activities, to optimize the overall process performance.
Candidates for an earlier start are 1) the abstract definition, 2) the manual implementation of the kernel in C, and 3) the development of the new features introduced into the prototype after version 2. Accordingly, we chose six months as the observation period and used simulation to investigate the impact of three possible change scenarios on the baseline project:

- **S1**: start the abstract definition six months earlier than the baseline;
- **S2**: enable the kernel implementation six months earlier than the baseline;
- **S3**: introduce the new features of the last major version (ver. 3) of the prototype six months earlier than the baseline.

Of these scenarios, S2 would have been feasible in the real L4.verified project; S1 and S3 would not, at least not easily.

#### C. Simulation Results

The simulation was run with the process changes suggested above, without changing any other model parameters. The results for some output parameters are shown in Fig. 7 in comparison with the baseline. Based on the simulation results, we discuss the possible impacts of the proposed process changes on overall project performance, particularly effort and duration.

- **S1**: Fig. 5 shows that the abstract definition started about 14 months after the kickoff of the project. Theoretically, this step can start once the initial version of the prototype is stable. When it is triggered six months earlier, the simulation predicts savings in both project cost (effort) and duration. Scenario S1 finishes on May 1, 2008 with a cost of 4,437 person-days. This suggested change may advance the overall project by nearly seven months. The improvement can be attributed to the early involvement of the FM team, which enables earlier high-level verification of the design (Fig. 7-c). As the two main rounds of the design proof complete earlier, the later parallel verification work on both levels is reduced and the verification rate of the code proof increases.
- **S2**: When the kernel implementation is moved six months forward, the simulation predicts a five-month delay of the overall project, as well as an effort increase of about 800 person-days compared to the baseline. Looking into the simulation, we found that at the suggested time the kernel implementation starts while the prototype is not yet mature and stable (in parallel with version 3 development and rework), so it incurs a number of additional code defects that cannot be fixed immediately (the flat defect level in Fig. 7-b). Meanwhile, since the OS team has fewer members than the FM team, frequent switches between prototype and kernel may lower the team's productivity. As a result, the delayed completion of prototype and kernel further postpones verification on both levels.
- **S3**: In Fig. 5, the third group of requirements (new features) was introduced into the prototype nine months after version 2. When these new features are introduced six months earlier, the simulated project effort and duration remain almost unchanged compared to the baseline. Although the OS team develops version 3 of the prototype earlier, bug fixing also relies on the progress of the design refinement proof (cf. the accumulated design bug level in Fig. 7-a). In addition, as the kernel implementation starts at the same time as in the baseline, there is no improvement in the performance of the code proof.

Each of the above scenarios changes only one parameter: the start date of one step.
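A what-if run such as S1 amounts to shifting one start-date input and re-running the model. A minimal harness might look like the following; `simulate` and its internals are toy stand-ins, since the real model is not reproduced in this paper.

```python
# Toy what-if harness: shift one start-date parameter and compare with
# the baseline. `simulate` is a placeholder, not the real VPMsim model.

def simulate(abstract_start_day: int) -> dict:
    # Toy response surface: earlier abstract work shortens the project.
    head_start = max(0, 420 - abstract_start_day)
    return {"duration_days": 1800 - head_start // 2,
            "effort_person_days": 5400 - head_start}

baseline = simulate(abstract_start_day=420)     # ~14 months after kickoff
scenario_s1 = simulate(abstract_start_day=240)  # six months earlier

for key in baseline:
    print(f"{key}: baseline={baseline[key]}, S1={scenario_s1[key]}, "
          f"delta={scenario_s1[key] - baseline[key]}")
```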
The simulation also supports testing combinations of other possible process changes, but this is beyond the scope of this paper.

### VI. Discussion

#### A. Experience

The simulation model, VPMsim 1.0, reflects the characteristics of the formal verification process in its model structure and simulation results: 1) concurrent development/verification activities, 2) frequent iterations and re-verifications, 3) dynamic and concurrent resource (workforce) allocation, and 4) the effect of invariants in code verification.

When we started to develop the simulation model, we tried to apply more customizable process patterns (such as [11]) and to use subscripting to simplify the model structure and maximize the reuse of model components across activities (views). However, as shown in Fig. 3, each activity has different types of input and output artifacts and complex control flows with other activities, which resulted in complicated control logic behind the model components. This explosion in complexity further complicated model debugging. In response, we restructured the model with eight dedicated views for the activities. This allows more straightforward modeling and debugging of activity-specific characteristics, but sacrifices some component reusability.

Fig. 6-c reflects the frequent staff switches between concurrent activities over the project life-cycle. In the extreme case, one developer may work on three different artifacts simultaneously and switch quickly between them. Due to the inherent limitations of continuous modeling, this phenomenon has seldom been modeled and reported in the literature on SD-based process simulation. In order to correctly model the dynamic workforce allocation between parallel work, we developed a large and complex model component (the WA view in Fig. 4). This component can be reused when modeling other concurrency-intensive processes in the SD approach.

#### B. Limitations

The metrics used in VPMsim 1.0, such as lines of proof and numbers of invariants, are relatively coarse measures adopted for the simulation study. They do not always realistically capture the essence of size and progress in a formal verification process. For example, because the invariants are all connected, isolating their individual effects in verification is almost impossible. The investigation and development of appropriate metrics for a formal verification process is beyond the scope of this paper, and requires more theoretical research and empirical evidence from the practical application of formal verification.

Another limitation is the precision of the data for model calibration. The details about artifact size changes and workforce allocation are derived from version control. However, the data set reflects only each artifact's commit times, not the effort spent on it. In particular, if a person worked on multiple activities simultaneously, a precise estimate of their effort allocation to the different activities is hard to achieve.

VPMsim 1.0 is implemented using System Dynamics. We found it difficult to model such a complex process at a fine-grained level using continuous simulation. For instance, continuous simulation only tracks process entities (features, defects, invariants, developers) at an aggregate level; we cannot assign properties to individual entities and trace their changes individually. Though we can use mechanisms such as subscripting in Vensim to set up finer categories, the help this provides for modeling precision is still limited, as the sketch below illustrates.
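To make the aggregate-versus-individual contrast concrete, here is a toy comparison (not taken from the paper) of the same defect population tracked as a continuous stock and as discrete entities with individual attributes:

```python
# Toy contrast (not from the paper): continuous aggregate tracking vs
# discrete entities with individual, inspectable attributes.
import random

# Continuous/SD style: one averaged stock, no per-defect identity.
defect_level = 50.0
defect_level -= 0.1 * defect_level        # 10% fixed this step, anonymously

# Discrete style: each defect is an entity we can target and trace.
defects = [{"id": i, "severity": random.choice("LMH"), "fixed": False}
           for i in range(50)]
for d in defects:
    if d["severity"] == "H":              # a policy can single out entities
        d["fixed"] = True

print(f"continuous: {defect_level:.1f} defects remain (no identities)")
print(f"discrete: {sum(not d['fixed'] for d in defects)} remain; "
      f"all high-severity ones are fixed")
```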
Discrete simulation is more suitable for handling an individual entity's movement through a process, especially in iterative and parallel styles, and may yield higher precision and more detail for analysis.

### VII. Conclusions and Future Work

The L4.verified formal verification project succeeded in large part due to the middle-out process used, together with other formal and technical innovations. Based on the descriptive process model formulated in our previous study, we developed a large-scale process simulation model, VPMsim 1.0, to further investigate this unique process. This paper reports the model and the simulation results after the initial calibration. Specifically, we 1) developed the first continuous simulation model of a formal verification process; 2) calibrated the model with data from a real, large-scale project; 3) showed the potential value of a process simulator in supporting formal verification in practice; and 4) reported our experience in modeling and simulating a formal methods project.

The initial results and experience of this research suggest several directions for future work, such as 1) converting VPMsim 1.0 into a discrete-event (or hybrid) simulation model that allows detailed modeling and tracking of the entity flow; 2) extending the scope of VPMsim to cover the maintenance phase of the L4.verified project and to support future decision making; and 3) applying the simulation model to other formal methods projects and enhancing its adaptability.

ACKNOWLEDGMENT

NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

REFERENCES
{"Source-Url": "http://ts.data61.csiro.au/publications/nicta_full_text/5605.pdf", "len_cl100k_base": 7143, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 35658, "total-output-tokens": 8693, "length": "2e12", "weborganizer": {"__label__adult": 0.0004057884216308594, "__label__art_design": 0.00032520294189453125, "__label__crime_law": 0.0003275871276855469, "__label__education_jobs": 0.00084686279296875, "__label__entertainment": 6.693601608276367e-05, "__label__fashion_beauty": 0.00017213821411132812, "__label__finance_business": 0.00032258033752441406, "__label__food_dining": 0.0003709793090820313, "__label__games": 0.0006690025329589844, "__label__hardware": 0.0009899139404296875, "__label__health": 0.0005788803100585938, "__label__history": 0.0002963542938232422, "__label__home_hobbies": 8.767843246459961e-05, "__label__industrial": 0.00046443939208984375, "__label__literature": 0.0003070831298828125, "__label__politics": 0.0002543926239013672, "__label__religion": 0.0004723072052001953, "__label__science_tech": 0.03302001953125, "__label__social_life": 9.268522262573242e-05, "__label__software": 0.004611968994140625, "__label__software_dev": 0.9541015625, "__label__sports_fitness": 0.0003528594970703125, "__label__transportation": 0.0006651878356933594, "__label__travel": 0.00021827220916748047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39495, 0.01666]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39495, 0.39658]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39495, 0.9214]], "google_gemma-3-12b-it_contains_pii": [[0, 5322, false], [5322, 9428, null], [9428, 14593, null], [14593, 16388, null], [16388, 20340, null], [20340, 24974, null], [24974, 29205, null], [29205, 31786, null], [31786, 34141, null], [34141, 39495, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5322, true], [5322, 9428, null], [9428, 14593, null], [14593, 16388, null], [16388, 20340, null], [20340, 24974, null], [24974, 29205, null], [29205, 31786, null], [31786, 34141, null], [34141, 39495, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39495, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39495, null]], "pdf_page_numbers": [[0, 5322, 1], [5322, 9428, 2], [9428, 14593, 3], [14593, 16388, 4], [16388, 20340, 5], [20340, 24974, 6], [24974, 29205, 7], [29205, 31786, 8], [31786, 34141, 9], [34141, 39495, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39495, 0.0]]}
# Package ‘mmeln’

September 25, 2015

**Type** Package
**Title** Estimation of Multinormal Mixture Distribution
**Version** 1.2
**Date** 2015-09-23
**Author** Charles-Edouard Giguere
**Maintainer** Charles-Edouard Giguere <ce.giguere@gmail.com>
**Description** Fit multivariate mixture of normal distribution using covariance structure.
**License** GPL-3
**LazyLoad** yes
**Repository** CRAN
**Encoding** UTF-8
**Date/Publication** 2015-09-25 16:19:10
**NeedsCompilation** no

### R topics documented:
- mmeln-package
- dmnorm
- estim
- exY
- mmeln
- plot.mmeln, logLik.mmeln, anova.mmeln, print.mmeln
- post.mmeln, entropy.mmeln

---

**mmeln-package** *Mixture of multivariate normal*

**Description** This package fits mixtures of multivariate normals. Different types of covariance structure can be used.

**Details**
- Package: MMELN
- Type: Package
- Version: 1.0
- Date: 2010-10-22
- License: GPL (>= 2)
- LazyLoad: yes

**Author(s)** Charles-Édouard Giguère. Maintainer: Charles-Édouard Giguère <ce.giguere@umontreal.ca>

**See Also** mmeln, estim.mmeln, anova.mmeln

**Examples**
```r
### Load an example.
data(exY)
### Estimate the parameters of the mixture.
temps <- factor(1:3)
mmeln1 <- mmeln(Y, G = 2, form.loc = ~temps-1, form.mel = ~1, cov = "CS")
mix1 <- estim(mmeln1, mu = list(rep(1,3), rep(2,3)), tau = c(0),
              sigma = list(c(1,.6), c(1,.6)), iterlim = 100, tol = 1e-6)
mix1
anova(mix1)
plot(mix1, main="Mixture of multivariate normal")
```

---

**dmnorm** *Multivariate normal density function*

**Description** Function to evaluate the multivariate normal density function.

**Usage** `dmnorm(X, Mu, Sigma)`

**Arguments**
- **X**: A matrix or a vector (if you have only one multivariate observation) containing the data. This matrix may contain missing data.
- **Mu**: A mean vector, or a matrix with p columns. If Mu is a matrix and X a vector, the density is evaluated for each value of Mu specified in the matrix Mu.
- **Sigma**: The covariance matrix. This matrix must be symmetric positive definite (all eigenvalues are positive; see eigen).

**Details** This method computes the value of the density function for given data and a given set of parameters. It works like the R command dnorm in the stats package. Although this method can be used directly, it is not intended to be used this way; if you want to evaluate densities of multivariate normal distributions in general, the mvtnorm package is more appropriate.

**Value** Returns a vector of densities.

**Note** This function can be used standalone but is implemented here for use within the mmeln package.

**Author(s)** Charles-Édouard Giguère

**References** Srivastava, M.S. (2002), *Methods of Multivariate Statistics*, Wiley.

**See Also** mmeln, eigen

**Examples**
```r
dmnorm(1:3, 1:3, diag(3))
```
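As a quick sanity check that is not part of the package documentation, dmnorm can be compared against the closed-form multivariate normal density; dens_by_hand below is an illustrative stand-alone helper, not a package function.

```r
## Illustrative stand-alone check (not a package function): the
## multivariate normal density from its closed form,
##   f(x) = (2*pi)^(-p/2) det(Sigma)^(-1/2) exp(-(x-mu)' Sigma^-1 (x-mu)/2)
dens_by_hand <- function(x, mu, Sigma) {
  p <- length(x)
  d <- x - mu
  (2*pi)^(-p/2) * det(Sigma)^(-1/2) *
    exp(-0.5 * as.numeric(t(d) %*% solve(Sigma) %*% d))
}

dens_by_hand(1:3, 1:3, diag(3))  # (2*pi)^(-3/2), about 0.0635
## should agree with dmnorm(1:3, 1:3, diag(3))
```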
---

**estim** *Estimate the parameters of a mmeln model*

**Description** Compute the MLE of the model parameters using the E-M (Expectation-Maximization) algorithm.

**Usage**
```r
## S3 method for class 'mmeln'
estim(x, ..., mu=NA, tau=NA, sigma=NA, random.start=FALSE, iterlim=500, tol=1e-8)
```

**Arguments**
- **x**: An object of type mmeln containing the design of the model; see mmeln.
- **...**: For the moment, no other arguments can be added.
- **mu**: A list of length x$G containing the starting values for the location parameters.
- **tau**: The starting values for the mixture parameters.
- **sigma**: A list of length x$G containing the starting values for the covariance parameters.
- **random.start**: A TRUE/FALSE value indicating whether the starting parameters should be chosen at random. If TRUE, starting values are not needed.
- **iterlim**: The maximum number of iterations allowed.
- **tol**: Tolerance; the degree of precision required to stop the iterative process.

**Details** The methods estim.mmeln... are used by the estim function but are of no use outside it.

**Value** Returns an object of classes "mmeln" and "mmelnSOL" with the following attributes:
- obj$Y: The data matrix
- obj$G: The number of groups
- obj$p: Number of columns in Y
- obj$N: Number of rows in Y
- obj$Xg: The list of location design matrices
- obj$pl: The number of location parameters
- obj$Z: Mixture design matrix
- obj$pm: The number of mixture parameters
- obj$cov: Covariance type
- obj$equalcov: Logical value indicating if the covariance is equal across groups
- obj$pc: The number of covariance parameters

**Author(s)** Charles-Édouard Giguère

**References** Srivastava, M.S. (2002), *Methods of Multivariate Statistics*, Wiley.

**See Also** mmeln-package

**Examples**
```r
data(exY)
### Estimate the parameters of the mixture.
temps <- 0:2
mmeln1 <- mmeln(Y, G = 3,
                form.loc = list(~temps, ~temps + I(temps^2), ~temps + I(temps^2)),
                form.mel = ~SEXE, cov = "CS")
mmelnSOL1 <- estim(mmeln1, mu = list(c(1,1), c(2,0,0), c(3,0,0)),
                   tau = c(0,0,0), sigma = list(c(1,0), c(1,0), c(1,0)))
```

---

**exY** *A two-mixture example*

**Description** A simulated dataset used for examples.

**Format** Two variables are available:
- **SEXE**: A variable identifying the sex of the participants.
- **Y**: A three-column matrix containing the data.

**Details** Half of the rows follow the distribution $N\left[(2,3,4)',\begin{bmatrix}1 & .6 & .5 \\ .6 & 1 & .3 \\ .5 & .3 & 1\end{bmatrix}\right]$; the other half follow the distribution $N\left[(-1,5,-2)',\begin{bmatrix}1 & .6 & .5 \\ .6 & 1 & .3 \\ .5 & .3 & 1\end{bmatrix}\right]$.
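For intuition about the exY dataset (this is not from the package documentation), data with the structure described above can be simulated with MASS::mvrnorm; the sample size and seed below are arbitrary.

```r
## Illustrative simulation of a two-component dataset like exY
## (not from the package docs; sample size and seed are arbitrary).
library(MASS)

set.seed(42)
Sigma <- matrix(c( 1, .6, .5,
                  .6,  1, .3,
                  .5, .3,  1), nrow = 3)
Y <- rbind(mvrnorm(50, mu = c( 2, 3,  4), Sigma = Sigma),
           mvrnorm(50, mu = c(-1, 5, -2), Sigma = Sigma))
colMeans(Y[1:50, ])   # close to (2, 3, 4)
```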
---

**mmeln** *Mixture of multivariate normal*

**Description** Constructor for objects of class mmeln: mixture of multivariate normal.

**Usage**
```r
mmeln(Y, G=2, p=dim(Y)[2], form.loc=NULL, X=NULL,
      form.mel=NULL, Z=NULL, cov="IND", equalcov=FALSE, param=NULL)
```

**Arguments**
- **Y**: A matrix containing the data used for estimation. This matrix may contain NA but needs at least one observation per row. It is assumed that the missingness mechanism is not related to the data under study (MAR: Missing At Random).
- **G**: The number of groups in the mixture.
- **p**: Does not need to be specified. It is the dimension of the multivariate data (number of columns in Y).
- **form.loc, X**: Location design of the model. By default, the mean model is used, where p means are estimated in each group. Only one of these two parameters must be specified, depending on whether the model is given as a formula (see the R documentation) or a design matrix. If you want to specify a different design for each group, you must pass the arguments as a list. See the examples below for further details. If a formula is used, it must use variables of length p representing the design across time, for example ~temps where temps=factor(1:4). If a design matrix is used, it must be of dimension p*k where k<=p.
- **form.mel, Z**: Mixture design of the model. Only one of these two parameters must be specified. The design is constant across groups. This is equivalent to multinomial regression.
- **cov**: Covariance type (for now only the CS structure is implemented). Enter either the type of covariance as a string or a numeric corresponding to its position in the following list: 1) UN (general unstructured covariance), 2) CS (compound symmetry with constant variance), 3) UCS (compound symmetry with non-constant variance), 4) AR1 (auto-regressive of order 1 with constant variance), 5) UAR1 (auto-regressive of order 1 with non-constant variance), 6) IND (diagonal structure with constant variance), 7) UIND (diagonal structure with non-constant variance).
- **equalcov**: Logical value (TRUE/FALSE) indicating whether the variance is equal across groups. Defaults to FALSE.

**Details** The mmeln object describes the way the mixture is designed and permits many different ways of modeling the data. Several specific methods are associated with this class of objects: print, anova, logLik, post. Once a solution is found through the estim.mmeln function, the object is promoted to class mmelnSOL, which inherits all the attributes and functions of the mmeln class but gains its own print method. The attributes of a mmeln object should be accessed via the appropriate functions of the mmeln library, except by advanced users.

**Value** Returns an object of type "mmeln" with the following attributes:
- obj$Y: The data matrix
- obj$Yl: A list of length N containing the data of each row without the NA values
- obj$Yv: A list of length N indicating the columns containing valid data
- obj$G: The number of groups
- obj$p: Number of columns in Y
- obj$pi: A vector where pi[i] is the number of observations in row i
- obj$N: Number of rows in Y
- obj$M: Total number of observations, $\sum_{i=1}^{N} p_i$
- obj$Xg: The list of location design matrices
- obj$pl: The number of location parameters
- obj$Z: Mixture design matrix
- obj$pm: The number of mixture parameters
- obj$cov: Covariance type
- obj$equalcov: Logical value indicating if the covariance is equal across groups
- obj$pc: The number of covariance parameters

**Author(s)** Charles-Édouard Giguère

**References** Flury, B.D. (1997), *A First Course in Multivariate Statistics*, Springer. Srivastava, M.S. (2002), *Methods of Multivariate Statistics*, Wiley.

**See Also** estim.mmeln

**Examples**
```r
data(exY)
### Set up a three-group mixture design.
temps <- 0:2
mmeln1 <- mmeln(Y, G = 3,
                form.loc = list(~temps, ~temps + I(temps^2), ~temps + I(temps^2)),
                form.mel = ~SEXE, cov = "CS")
```
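To illustrate what the CS (compound symmetry) structure above encodes, here is a small stand-alone helper (not the package's internal cov.tsf) that builds a p x p CS matrix from a common variance and correlation:

```r
## Illustrative helper (not the package's internal cov.tsf): a p x p
## compound-symmetry matrix has constant variance sigma2 on the
## diagonal and constant covariance sigma2 * rho off the diagonal.
cs_cov <- function(p, sigma2, rho) {
  sigma2 * (rho * matrix(1, p, p) + (1 - rho) * diag(p))
}

cs_cov(3, sigma2 = 1, rho = .4)
```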
---

**plot.mmeln, logLik.mmeln, anova.mmeln, print.mmeln** *Utility methods for objects of class mmeln*

**Description** Methods to plot, compare and assess the log-likelihood of objects of class mmeln. The method cov.tsf, which converts a vector of covariance parameters into a covariance matrix, and multnm, which performs an estimation of a multinomial model, are internal methods that should not be used except by experienced users.

**Usage**
```r
## S3 method for class 'mmeln'
plot(x, ..., main="", xlab="Temps", ylab="Y", col=1:x$G, leg=TRUE)
## S3 method for class 'mmeln'
logLik(object, ..., param=NULL)
## S3 method for class 'mmeln'
anova(object, ..., test = TRUE)
## S3 method for class 'mmelnSOL'
print(x, ..., se.estim="MLR")
cov.tsf(param, type, p)
```

**Arguments**
- **x**: An object of type mmeln or mmelnSOL (mmelnSOL is required for the print command).
- **object**: An object of type mmeln.
- **main**: Title of the graphic.
- **xlab**: Label of the X axis.
- **ylab**: Label of the Y axis.
- **col**: Colour of the lines plotted for each group.
- **leg**: Logical value indicating whether the legend is plotted.
- **...**: Other objects of type mmeln to compare (only valid in the anova command).
- **test**: Logical value indicating whether the likelihood ratio test is required. Valid only when two objects are entered.
- **param**: For the function logLik, a list of parameters as defined in mmeln; by default it is taken from the mmeln object. For the cov.tsf function, a vector containing the distinct values of the covariance as defined in the mmeln function.
- **type**: Type of covariance as defined in mmeln.
- **p**: Rank of the covariance matrix.
- **se.estim**: Type of estimator. The default, "MLR", is based on the robust (sandwich) information matrix $I_R^{-1} = I_O^{-1} I_E I_O^{-1}$. The other choices are the observed information matrix ("ML") and the empirical information matrix based on the cross-product of the gradient of the log-likelihood ("ML.E").

**Details** The function plot draws x$G lines showing the expected values. The function logLik gives the log-likelihood of a model. The function anova compares mmeln models and gives the total number of parameters, the log-likelihood, the AIC (Akaike information criterion), the BIC (Bayesian information criterion based on the number of observations) and the BIC2 (BIC based on the number of subjects). Optionally, the likelihood ratio test is performed. The function print is used for solutions given by the estim.mmeln function. The print method gives the number of iterations required for convergence and the statistics for the location, mixture and covariance parameters.

**Author(s)** Charles-Édouard Giguère

**References** Flury, B.D. (1997), *A First Course in Multivariate Statistics*, Springer. Srivastava, M.S. (2002), *Methods of Multivariate Statistics*, Wiley.

**See Also** mmeln

**Examples**
```r
### Load an example.
data(exY)
### Estimate the parameters of the mixture.
temps <- 1:3
mmeln1 <- mmeln(Y, G=2, form.loc=~as.factor(temps)-1, form.mel=~1, cov="CS")
mmeln2 <- mmeln(Y, G=2, form.loc=list(~temps, ~I((temps-2)^2)), form.mel=~1, cov="CS")
mix1 <- estim(mmeln1, mu=list(rep(1,3), rep(2,3)), tau=c(0),
              sigma=list(c(1,.4), c(1,.4)), iterlim=100, tol=1e-6)
mix2 <- estim(mmeln2, mu=list(c(2,1), c(5,-1)), tau=c(0),
              sigma=list(c(1,.4), c(1,.4)), iterlim=100, tol=1e-6)
mix1
mix2
anova(mix1, mix2)
plot(mix1, main="Mixture of multivariate normal")
plot(mix2, main="Mixture of multivariate normal")
```
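The criteria reported by anova are simple functions of the log-likelihood; as a reminder (not package code), with k free parameters and n observations:

```r
## Reminder (not package code): information criteria from a log-likelihood,
## with k free parameters and n observations.
ic <- function(loglik, k, n) {
  c(AIC = -2 * loglik + 2 * k,
    BIC = -2 * loglik + k * log(n))
}

ic(loglik = -250, k = 8, n = 120)
```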
---

**post.mmeln, entropy.mmeln** *Posterior probabilities and entropy of a mmeln mixture*

**Arguments**
- **mu**: Location parameters. By default, those are taken from x.
- **tau**: Mixture parameters. By default, those are taken from x.
- **sigma**: Covariance parameters. By default, those are taken from x.

**Details** This procedure returns the posterior probabilities of membership in each group, or the entropy of the model. They are computed as described in McLachlan and Peel (2000). If x$param is not NULL, no further parameters are necessary; otherwise you have to give values for mu, tau and sigma (this is mainly used inside the estim.mmeln function).

**Value** Returns a matrix P with x$N rows and x$G columns, where P[i,j] is the posterior probability of subject i belonging to group j, or the value of the entropy.

**Author(s)** Charles-Édouard Giguère

**References** McLachlan, G. and Peel, D. (2000), *Finite Mixture Models*, Wiley.

**See Also** estim.mmeln

**Examples**
```r
### Load an example.
data(exY)
### Estimate the parameters of the mixture.
temps <- factor(1:3)
mmeln1 <- mmeln(Y, G = 2, form.loc = ~temps - 1, form.mel = ~1, cov = "CS")
mix1 <- estim(mmeln1, mu = list(rep(1,3), rep(2,3)), tau = c(0),
              sigma = list(c(1, .4), c(1, .4)), iterlim = 100, tol = 1e-6)
post(mix1)
entropy(mix1)
```
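For intuition (again, not package code), the posterior membership probabilities follow Bayes' rule: each subject's density under every component is weighted by the mixing proportions and then normalized, and the entropy summarizes how crisp the resulting classification is.

```r
## Illustrative computation (not package code): posterior membership
## probabilities by Bayes' rule for a two-component mixture.
dmv <- function(x, mu, Sigma) {   # multivariate normal density
  p <- length(x); d <- x - mu
  (2*pi)^(-p/2) * det(Sigma)^(-1/2) *
    exp(-0.5 * as.numeric(t(d) %*% solve(Sigma) %*% d))
}

posterior <- function(y, mus, Sigmas, w) {
  f <- mapply(function(mu, S, wj) wj * dmv(y, mu, S), mus, Sigmas, w)
  f / sum(f)                      # normalize: P(group j | y)
}

Sigma <- diag(3)
p <- posterior(c(1.5, 3.2, 3.8),
               mus = list(c(2, 3, 4), c(-1, 5, -2)),
               Sigmas = list(Sigma, Sigma), w = c(.5, .5))
p                                 # heavily favours group 1
-sum(p * log(p))                  # this subject's entropy contribution
```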
{"Source-Url": "http://cran.ms.unimelb.edu.au/web/packages/mmeln/mmeln.pdf", "len_cl100k_base": 4862, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 26388, "total-output-tokens": 5809, "length": "2e12", "weborganizer": {"__label__adult": 0.0003643035888671875, "__label__art_design": 0.0007333755493164062, "__label__crime_law": 0.000514984130859375, "__label__education_jobs": 0.0030536651611328125, "__label__entertainment": 0.0002275705337524414, "__label__fashion_beauty": 0.0001709461212158203, "__label__finance_business": 0.0004978179931640625, "__label__food_dining": 0.0007200241088867188, "__label__games": 0.0010919570922851562, "__label__hardware": 0.000919342041015625, "__label__health": 0.000988006591796875, "__label__history": 0.0005750656127929688, "__label__home_hobbies": 0.0002980232238769531, "__label__industrial": 0.0008568763732910156, "__label__literature": 0.0004229545593261719, "__label__politics": 0.0004162788391113281, "__label__religion": 0.0005235671997070312, "__label__science_tech": 0.2822265625, "__label__social_life": 0.00035858154296875, "__label__software": 0.047637939453125, "__label__software_dev": 0.65625, "__label__sports_fitness": 0.0005207061767578125, "__label__transportation": 0.0003936290740966797, "__label__travel": 0.0003664493560791016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16893, 0.03848]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16893, 0.56476]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16893, 0.53905]], "google_gemma-3-12b-it_contains_pii": [[0, 882, false], [882, 1820, null], [1820, 2991, null], [2991, 4373, null], [4373, 5668, null], [5668, 8057, null], [8057, 9493, null], [9493, 10793, null], [10793, 13012, null], [13012, 13626, null], [13626, 14881, null], [14881, 16774, null], [16774, 16893, null]], "google_gemma-3-12b-it_is_public_document": [[0, 882, true], [882, 1820, null], [1820, 2991, null], [2991, 4373, null], [4373, 5668, null], [5668, 8057, null], [8057, 9493, null], [9493, 10793, null], [10793, 13012, null], [13012, 13626, null], [13626, 14881, null], [14881, 16774, null], [16774, 16893, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16893, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16893, null]], "pdf_page_numbers": [[0, 882, 1], [882, 1820, 2], [1820, 2991, 3], [2991, 4373, 4], [4373, 5668, 5], [5668, 8057, 6], [8057, 9493, 7], [9493, 10793, 8], [10793, 13012, 9], [13012, 13626, 10], [13626, 14881, 11], [14881, 16774, 12], [16774, 16893, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16893, 0.02029]]}
Extending Web Applications with Client and Server Plug-ins

Markus Jahn, Reinhard Wolfinger, Hanspeter Mössenböck
Christian Doppler Laboratory for Automated Software Engineering
Johannes Kepler University Linz
Altenbergerstr. 69, 4040 Linz, Austria
{jahn, wolfinger, moessenboeck}@ase.jku.at

Abstract: Plug-in frameworks support the development of component-based software that is extensible and customizable to the needs of specific users. However, most current frameworks target single-user rich client applications and do not support plug-in-based web applications which can be extended by end users. We show how an existing plug-in framework (Plux.NET) can be enabled to support multi-user plug-in-based web applications which are dynamically extensible by end users through server-side and client-side extensions.

## 1 Introduction

Although modern software systems tend to become more and more powerful and feature-rich, they are still often felt to be incomplete. It will hardly ever be possible to meet all user requirements out of the box, regardless of how big and complex an application is. One solution to this problem is plug-in frameworks, which allow developers to build a thin layer of basic functionality that can be extended by plug-in components and thus tailored to the specific needs of individual users.

Despite the success of plug-in frameworks so far, current implementations still suffer from several deficiencies: (a) Weak automation. Host components have to integrate extensions programmatically instead of relying on automatic composition. Furthermore, plug-in frameworks usually have no control over whether, how or when a host looks for extensions. (b) Poor dynamic reconfigurability. Host components integrate extensions only at startup time, whereas dynamic addition and removal of components is either not supported or requires special actions in the host. (c) Separate configuration files. Composition is controlled by configuration files which are separated from the code of the plug-ins. This causes extra overhead and may lead to inconsistency problems. (d) Limited Web support. Plug-in frameworks primarily target rich clients or application servers. Although some frameworks extend the plug-in idea to web clients, they are still limited in customization and extensibility: they neither support individual plug-in configurations per user, nor do they integrate plug-ins executed on the client.

1 This work was supported by the Christian Doppler Research Association and by BMD Systemhaus GmbH.

Over the past few years we developed a client-based plug-in framework called *Plux.NET* which tries to solve the problems described above. While issues (a) to (c) are covered in previous papers [WDP06, WRD08, Wo08, RWG09], this paper deals with issue (d) and describes how Plux.NET can be enabled for multi-user web applications. We show how such web applications can be extended by user-specific plug-ins both on the server side and on the client side. Client-side plug-ins can either run in managed .NET mode [MS08] or in sandbox mode using the Silverlight technology [MS09]. To demonstrate our approach we present a case study, namely a web-based time recorder composed of server-side, client-side, and sandbox components.

Our research was done in cooperation with BMD Systemhaus GmbH, a company offering line-of-business software in the ERP domain. ERP applications consist of many different features that can either be used together or in isolation, thus being an ideal test bed for a plug-in approach.
This paper is organized as follows: Section 2 describes the plug-in framework Plux.NET as the basis of our work. Section 3 uses a case study to explain the architecture of a component-based web application built with Plux.NET. Section 4 compares our work to related research. The paper closes with a summary and a prospect of future work.

## 2 The Plux.NET framework

Plux.NET [WDPM06, Wo10] is a .NET-based plug-in framework that allows composing applications from plug-in components. It consists of a thin core (140 KB) that has slots into which extensions can be plugged. Plugging does not require any programming. The user just drops a plug-in (i.e., a DLL file) into a specific directory, where it will be automatically discovered and plugged into one or several matching slots. Removing a plug-in from the directory will automatically unplug it from the application. Thus adding and removing plug-ins is completely dynamic, allowing applications to be reconfigured for different usage scenarios on the fly, without restarting the application.

Plug-in components (so-called *extensions*) can have slots and plugs. A *slot* is basically an interface describing some expected functionality, and a *plug* belongs to a class implementing this interface and thus providing the functionality. Slots, plugs and extensions are specified declaratively using .NET attributes. Thus the information that is necessary for composition is stored directly in the metadata of interfaces and classes and not in separate XML files as, for example, in Eclipse [Ec03].

Let's look at an example. Assume that some host component wants to print log messages with time stamps. The logging should be implemented as a separate component that plugs into the host. We first have to define the slot into which the logger can be plugged.

```csharp
[SlotDefinition("Logger")]
[Param("TimeFormat", typeof(string))]
public interface ILogger {
  void Print(string msg);
}
```

The slot definition is a C# interface tagged with a [SlotDefinition] attribute specifying the name of the slot ("Logger"). Slots can have parameters defined by [Param] attributes. In our case we have one parameter TimeFormat of type string, which is to be filled by the extension and used by the host. Now, we are going to write an extension that fits into the Logger slot:

```csharp
[Extension("ConsoleLogger")]
[Plug("Logger")]
[ParamValue("TimeFormat", "hh:mm:ss")]
public class ConsoleLogger: ILogger {
  public void Print(string msg) {
    Console.WriteLine(msg);
  }
}
```

An extension is a class tagged with an [Extension] attribute. It has to implement the interface of the corresponding slot (here ILogger). The [Plug] attribute defines a plug for this extension that fits into the Logger slot. The [ParamValue] attribute assigns the value "hh:mm:ss" to the parameter TimeFormat.

Finally, we implement the host, which is another extension that plugs into the Startup slot of the Plux.NET core. The extension has a slot Logger; this is specified with a [Slot] attribute. This attribute also specifies a method AddLogger that will be called when an extension for this slot is plugged. AddLogger integrates the logger extension and retrieves the TimeFormat parameter.

```csharp
[Extension("MyApp")]
[Plug("Startup")]
[Slot("Logger", OnPlugged="AddLogger")]
public class MyApp: IStartup {
  ILogger logger = null;  // the logger extension
  string timeFormat;      // parameter of the logger extension

  public void Run() {
    ...
    if (logger != null) {
      logger.Print(DateTime.Now.ToString(timeFormat) + ": " + msg);
    }
  }

  public void AddLogger(object s, PlugEventArgs args) {
    logger = (ILogger) args.Extension;
    timeFormat = (string) args.GetParamValue("TimeFormat");
  }
}
```
This is all we have to do. If we compile the interface ILogger as well as the classes ConsoleLogger and MyApp into DLL files and drop them into the plug-in directory, everything falls into place. The Plux.NET runtime will discover the extension MyApp and plug it into the Startup slot of the core. It will also discover the extension ConsoleLogger and plug it into the Logger slot of MyApp (see Figure 1).

![Figure 1: Composition architecture of the logger example](image)

Plux.NET offers a lightweight way of building plug-in systems. Plug-ins are just classes tagged with metadata. They are self-contained, i.e., they include all the metadata necessary for discovering them and plugging them together automatically. There is no need for separate XML configuration files. The example also shows that Plux.NET is event-based. Plugging, unplugging and other actions of the runtime core raise events to which the programmer can react.

The implementation of Plux.NET follows the plug-in approach itself. For example, the discovery mechanism that monitors the plug-in directory is itself a plug-in and can therefore be replaced by some other way of discovery. Other features of Plux.NET that cannot be discussed here for lack of space are lazy loading of extensions (in order to keep application startup times small), management of composition rights (e.g., which extensions are allowed to open a certain slot, and which extensions are allowed to fill it), as well as a scripting API that allows experienced developers to override some of the automatic actions of the runtime core. These features are described in more detail in [Wo10].
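To see how an alternative extension would fill the same slot, here is a hypothetical FileLogger that is not from the paper; it uses only the attributes introduced above, and dropping its DLL into the plug-in directory would make it a candidate for any matching Logger slot.

```csharp
// Hypothetical second extension for the Logger slot (not from the
// paper); it reuses only the attributes introduced above.
[Extension("FileLogger")]
[Plug("Logger")]
[ParamValue("TimeFormat", "yyyy-MM-dd HH:mm:ss")]
public class FileLogger : ILogger {
  public void Print(string msg) {
    // append each message to a log file instead of the console
    System.IO.File.AppendAllText("app.log", msg + System.Environment.NewLine);
  }
}
```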
## 3 Extending Plux.NET for the Web

The Internet has become fast and ubiquitous enough for implementing software as web applications that can be accessed by multiple clients. Web applications make it easier to bring software to the market. Additionally, updates can be done centrally without bothering administrators in different companies. However, web applications face similar problems as rich-client applications: when they get too big and feature-rich, they become hard to understand and difficult to use. Furthermore, current web applications are hardly customizable and usually not extensible by users. Finally, it is generally not possible to connect clients' local hardware to web applications.

The goal of our research is to find solutions for these problems. Our idea is to apply the plug-in approach also to web applications by extending Plux.NET so that it becomes web-enabled. Originally, Plux.NET was designed for single-user rich-client applications. In its extended form it supports multi-user web applications where specific users have their individual set of components and their individual composition state. Users will be able to extend web applications with their own plug-ins or with plug-ins from third-party developers.

Plux.NET web components can be classified according to their composition type and their visibility scope. Depending on their visibility, they can affect just a single user, a group of users (e.g., a department of a company), or all users of a web application. Currently, the composition type of a component can be server-side, client-side, or sandbox integration.

Server-side components are installed and executed on the server. Their advantage is that they are smoothly integrated into the server-based web application. They have minimal communication overhead and maximum availability. However, server-side components increase the workload on the server and constitute a security risk. Users need to be authorized to install their extensions on the server.

Client-side components are placed on the client and plugged into a web application virtually. Their major advantage is that users can integrate their local resources (e.g., hardware) into web applications. Another benefit is that users without authorization for installing plug-ins on the server can still extend a web application. The disadvantages of client-side extensions mirror the benefits of server-side extensions: communication between server-side and client-side components causes a certain overhead, and client-side extensions are only available when the client is connected.

A third way of composition is to install components on the server, but to execute them in a client-side sandbox such as Adobe Flash or Silverlight (in the case of .NET). This composition type lends itself to building rich user interfaces in a web browser that go beyond the features of HTML or JavaScript, especially if these interfaces should be extensible and customizable without requiring extensions to be installed on the client. Sandbox extensions are copied to the client on demand and executed there, so they help to keep the workload on the server small.

Even though components are executed on different computers and in different environments, the development and composition process in Plux.NET is the same for server-side, client-side, and sandbox plug-ins. The composition model is based on the metaphor of slots and plugs as in the rich client approach. Developers just specify the components' metadata in a declarative way. The runtime core is responsible for plug-in discovery, composition, and communication. Therefore, if there are no dependencies on hardware-specific resources, it is possible to reuse a rich client component also as a server-side or a client-side component for web applications. For reusing rich client components as Silverlight sandbox components, they need to be recompiled in a special way, because Silverlight assemblies are not binary compatible with other .NET assemblies.

### 3.1 Case study and scenarios

To demonstrate the idea of our web-enabled plug-in framework, this section shows some scenarios: in a case study we extend a web application by various user-specific plug-ins, composed using the different composition types described above and with different visibility scopes applied to them.

Figure 2 shows the component architecture of a web application which is used for recording and evaluating the labour times of employees. The recording and the statistics are implemented by components, each consisting of a GUI component for rendering the user interface and a component for the business logic. The business logic components (Recorder, Statistics) are connected to the Data Provider component, while the GUI components are used by the Layout Manager component for building the user interface. The Layout Manager is plugged into the root component named Time Recorder. All components in Figure 2 are server-side components and thus are installed and executed on the server.
To keep the scenario simple, some other required components (e.g., Plux.NET runtime components) are hidden in the following pictures.

![Figure 2: Component architecture of a time recorder web application](image-url)

To get an impression of what such a web application could look like, Figure 3 shows a possible user interface. The Layout Manager arranges the user interfaces of the Recorder GUI and Statistics GUI components in its window area, depending on metadata which describes the arrangement of the GUI components. The Recorder GUI has buttons for starting, pausing, and stopping time recording, while the Statistics GUI displays the recorded labour time.

![Figure 3: Possible user interface of the time recorder](image)

Figure 2 contains only server-side components. Thus, the user interface in a client's web browser is restricted to web technologies such as HTML, CSS and JavaScript. However, in our scenario developers want to use the rich internet technology Silverlight for building a more sophisticated user interface. Therefore, they implement the components Layout Manager, Recorder GUI and Statistics GUI as Silverlight components. Since these components are discovered as Silverlight components, they are automatically sent to the client and executed in its Silverlight environment, while the business logic components stay on the server. Our composition model composes sandbox components in the same way as server-side components. Sandbox components are virtually plugged into server-side components and vice versa. The new composition state is outlined in Figure 4.

![Figure 4: Silverlight components are virtually plugged into server-side components and vice versa](image)

Next, we assume that a user (Client 1) is not fully satisfied with the provided functionality of our time recorder. For annotating his activities during the day he wants to add a note to each time stamp. Therefore, Client 1 implements his own components for this feature. Since he wants to access his components from any computer, he installs the server-side component Notes (a note editor) and the Silverlight component Notes GUI on the server. Similar to the other components of our application, the server-side component Notes is used for the business logic, while the Silverlight component Notes GUI displays the user interface. As Client 1 has set the visibility of these components to private, he is the only user who can see and access them. The individual view of the composition state for Client 1 is shown in Figure 5.

![Figure 5: User-specific server-side and sandbox components for Client 1](image.png)

Besides adding components for a specific user, it is also possible to remove them. For example, a company could deny several employees access to the Silverlight component Recorder GUI. Instead, a hardware time recorder, represented by a client-side component Hardware Recorder, could be installed at the company's entrance. The Hardware Recorder uses the same server-side Recorder component as the Silverlight component Recorder GUI did. Figure 6 shows how the Silverlight component Recorder GUI is replaced by the client-side component Hardware Recorder for clients 2 to 5. Client-side components can be virtually plugged both into server-side components and into Silverlight components and vice versa. If client-side components are connected to Silverlight components, the communication between them does not affect the server.

### 3.2 Architecture

We will now look at the architecture and the internals of Plux.NET for web applications.
Some aspects described in this section are still under development, but the overall architectural design is finished and has been validated with prototypes.

The root component of Plux.NET is the Runtime core, which has two slots, one for a discovery component and one for the application's root component. The default discovery component monitors the plug-in directory. Whenever an extension is dropped into this directory, its metadata are read and the Runtime checks all loaded components for open slots into which the new extension can be plugged. After an extension has been plugged, its own slots are opened and the Runtime looks for other extensions (loaded or unloaded) that can fill these slots. These steps are repeated until all matching slots and plugs have been connected.

To perform this composition procedure for multi-user web applications, the architecture of Plux.NET had to be extended. We now have different environments in which components can live: there is one environment for server-side components, one for client-side components and one for sandbox components (the latter two exist as separate instances for every client connected to the server). Every environment needs its own runtime infrastructure. So we split the Plux.NET runtime core into a Server Runtime, a Client Runtime and a Silverlight Runtime, each running in its own environment. Logically these nodes form a single entity represented by the Runtime component on the server (see Figure 7). For discovering extensions on the server and on the client, the discovery mechanism also had to be split into a Server Discovery and a Client Discovery component, each monitoring local extension directories.

As long as components from the same environment are plugged together, everything is like in the rich client case: the local runtime raises a Plugged event to which the host component reacts by integrating the extension component. The interesting new feature is virtual plugging, which is necessary when the host and the extension live in different environments. If an extension E from environment EnvE should be plugged into a host H from environment EnvH, proxies have to be generated on both sides. A proxy H_P representing host H is created in EnvE, and a proxy E_P representing extension E is created in EnvH (Figure 8). These proxies carry the same metadata as the components for which they stand, so they can be plugged into matching slots like any other component. If H wants to call a method of E, it does the call on the local proxy E_P, which uses the local runtime infrastructure to marshal the call and send a message to proxy H_P in the other environment. H_P then does the actual call to E. Any results are sent back in the same way. Thus the communication between components, whether from the same or from different environments, is completely transparent to the component developer.
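As a conceptual illustration only (the paper shows no proxy code, and IChannel here is an invented stand-in for the transport described below), a generated extension proxy for the Logger slot might forward calls like this:

```csharp
// Conceptual sketch only: IChannel is an invented abstraction standing
// in for the runtime's transport. A generated proxy E_P implements the
// slot interface and forwards each call to the environment where the
// real extension lives; the peer proxy H_P performs the actual call.
public interface IChannel {
  object Invoke(string extension, string method, params object[] args);
}

public class LoggerProxy : ILogger {
  private readonly IChannel channel;  // runtime-provided transport

  public LoggerProxy(IChannel channel) { this.channel = channel; }

  public void Print(string msg) {
    channel.Invoke("ConsoleLogger", "Print", msg);
  }
}
```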
Whenever a new extension is discovered on the server or on the client, the local discovery component sends a broadcast to the runtime nodes of all other environments. The local runtime nodes decide whether proxies have to be generated. In the case of Silverlight extensions discovered on the server, the Silverlight Runtime on the client requests a transfer of the extension from the server to the sandbox environment on the client. It is also worth noting that all environments have a consistent copy of the web application's composition state, i.e., they know which components the application consists of and which plugs are connected to which slots. So the runtime node of every environment can easily find out into which slots a new extension fits and whether the extension has to be plugged in locally or virtually through proxy components.

Since the runtime core is distributed, the individual runtime nodes have to use a communication channel for exchanging the components' metadata and their composition state. Additionally, the runtime core provides the communication infrastructure for the proxies of virtually plugged components. This infrastructure is based on the Windows Communication Foundation (WCF) API [Ca07]. As WCF supports several communication standards for distributed computing, the communication technology actually used can be configured by administrators. In our case study we used SOAP [GHM07] in combination with HTTP.

Since web applications can be used by many clients simultaneously, the runtime core has to deal with several composition states in parallel. In doing so, it has to make sure not to run out of memory. Hence, it applies a well-known solution to this problem: once the server runs low on memory, server-side components are released at the end of a request and recreated for new requests. This approach ensures that web applications can scale up to serve many simultaneous requests without running out of server memory. This concept also enables the use of server farms. The drawback is that composition and component states need to be persisted between successive requests. The composition state is persisted automatically, while the component state has to be persisted by the components themselves using the infrastructure of the runtime core. Components can either use custom .NET attributes to declare which values should be persisted and restored, or they can react to automatically raised events for persisting and restoring. The runtime persists not only the server-side state, but also the states of the client environments. Thus, no matter from which computer a client connects to the application, it will always get the same state as it had last time, provided that any client-side components in use are available on that computer.

## 4 Related work

Plug-in frameworks have become quite popular recently. However, most of them either target rich-client applications, have no dynamic composition support, or aim at web applications but cannot be customized and extended by end users.

Eclipse [Ec03] is probably the most prominent plug-in platform today. It is written in Java and since version 3.0 it is based on the OSGi framework Equinox [Eq09]. Like Plux.NET, it consists of a thin core and a set of plug-ins that provide further functionality. The major differences between Eclipse and Plux.NET are the following: Eclipse declares the metadata of plug-ins in XML files, while Plux.NET declares them directly in the code using .NET attributes. Eclipse supports only rich client applications, while our approach also targets multi-client web applications with extensions both on the server and on the client. Most importantly, the composition of Eclipse plug-ins has to be done manually, i.e., the host component has to use API calls to discover plug-ins, read their metadata and integrate them. In Plux.NET, plug-ins are discovered automatically and all matching slots are immediately notified by the runtime core, thus automating a substantial amount of composition work.
Although Eclipse allows plug-ins to be added dynamically, the code for integrating plug-ins at startup time and at run time is different, whereas Plux.NET uses the same uniform mechanism for both cases.

Many component-based web applications are based on the *Java Enterprise Edition* (Java EE) [Su09]. Java EE is a software architecture that allows the development of distributed, component-based, multi-tier software running on an application server. Java EE applications are generally considered to be three-tiered, where components can be installed on the Java EE server machine, the database machines, or the client machines. Web components are usually servlets or dynamic web pages (created with JavaServer Faces or JavaServer Pages) running on the server, but they can also be defined as application clients or applets running on the client. Even though Java EE provides a framework for building component-based and distributed web applications, it provides no automatic composition support for components. Composition has to be done programmatically. Moreover, Java EE does not provide an individual composition state for every end user, so users cannot customize or extend web applications for their special needs.

A further component system for distributed, component-based applications is *SOFA 2* [HP06, BHP06, BHP07]. It implements a hierarchical component model with primitive and composite components. Primitive components consist of plain code, whereas composite components consist of other subcomponents. SOFA 2 has a distributed runtime environment which automatically generates connectors to support transparent distribution of applications across different runtime environments. For communication the connectors use method invocation, message passing, streaming, or distributed shared memory. SOFA 2 allows dynamic reconfiguration of applications by adding and removing components, as well as dynamic updates of components at run time. In contrast to Plux.NET, however, SOFA 2 needs an ADL (Architecture Description Language) for describing the composition of components. Plux.NET does not need such a specification; its runtime composes an application on the fly using the declarations of slots and plugs provided by the Plux.NET components. We argue that this concept is more flexible and easier to maintain than a global ADL specification. Finally, browser-based web applications are not supported by SOFA 2. It has no multi-user support for individual composition states and no mechanism for persisting and restoring composition states at run time.

Currently, the only way end users can extend a web application is through client-based plug-in systems such as *Mozilla Firefox* [Mo09]. However, client plug-ins are not integrated into web applications. They only add usability features to user interfaces or enable web browsers to use advanced web technologies such as Flash, Silverlight, or Java Applets.

## 5 Summary and future work

In this paper we presented a dynamic plug-in framework for rich client and web applications, with a focus on multi-user web applications that are extensible by end users. Each client can install its individual set of components and has its personal composition state. Components can be instantiated in different environments and on different computers, but are transparently composed into a single web application.
Components for business logic can stay on the server, components that are connected to a client’s local resources can be executed on the client-side, and user interface components can live in a client’s rich internet environment such as Silverlight. Since several aspects mentioned above have been realized only prototypically so far, we are still improving the distributed runtime core of Plux.NET. Furthermore, we are working on a security model for plug-ins, a layout manager for extensible component-based user interfaces, a keyboard shortcut manager and many other helpful tools for component-based software development. References
{"Source-Url": "http://subs.emis.de/LNI/Proceedings/Proceedings159/33.pdf", "len_cl100k_base": 5454, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 23929, "total-output-tokens": 7009, "length": "2e12", "weborganizer": {"__label__adult": 0.000270843505859375, "__label__art_design": 0.00024962425231933594, "__label__crime_law": 0.0002034902572631836, "__label__education_jobs": 0.00033164024353027344, "__label__entertainment": 3.7789344787597656e-05, "__label__fashion_beauty": 0.00010228157043457033, "__label__finance_business": 0.00013506412506103516, "__label__food_dining": 0.00022530555725097656, "__label__games": 0.00027370452880859375, "__label__hardware": 0.0004646778106689453, "__label__health": 0.00022399425506591797, "__label__history": 0.00012421607971191406, "__label__home_hobbies": 4.565715789794922e-05, "__label__industrial": 0.00019419193267822263, "__label__literature": 0.00012069940567016602, "__label__politics": 0.00013399124145507812, "__label__religion": 0.00026154518127441406, "__label__science_tech": 0.00238037109375, "__label__social_life": 5.322694778442383e-05, "__label__software": 0.004791259765625, "__label__software_dev": 0.98876953125, "__label__sports_fitness": 0.0001785755157470703, "__label__transportation": 0.00026226043701171875, "__label__travel": 0.0001513957977294922}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31561, 0.01573]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31561, 0.52422]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31561, 0.90318]], "google_gemma-3-12b-it_contains_pii": [[0, 2540, false], [2540, 5674, null], [5674, 8016, null], [8016, 11282, null], [11282, 14288, null], [14288, 15928, null], [15928, 17513, null], [17513, 19305, null], [19305, 21346, null], [21346, 24760, null], [24760, 28189, null], [28189, 31561, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2540, true], [2540, 5674, null], [5674, 8016, null], [8016, 11282, null], [11282, 14288, null], [14288, 15928, null], [15928, 17513, null], [17513, 19305, null], [19305, 21346, null], [21346, 24760, null], [24760, 28189, null], [28189, 31561, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31561, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31561, null]], "pdf_page_numbers": [[0, 2540, 1], [2540, 5674, 2], [5674, 8016, 3], [8016, 11282, 4], [11282, 14288, 5], [14288, 15928, 6], [15928, 17513, 7], [17513, 19305, 8], [19305, 21346, 9], [21346, 24760, 10], [24760, 28189, 11], [28189, 31561, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31561, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
39c21b78063f05aec6567827be0b8d9d9d480811
Beyond Multiprocessing: Multithreading the SunOS Kernel

ABSTRACT

Preparing the SunOS/SVR4 kernel for today's challenges: symmetric multiprocessing, multi-threaded applications, real-time, and multimedia led to the incorporation of several innovative techniques. In particular, the kernel was re-structured around threads. Threads are used for most asynchronous processing, including interrupts. The resulting kernel is fully preemptible and capable of real-time response. The combination provides a robust base for highly concurrent, responsive operation.

Introduction

When we started to investigate enhancements to the SunOS kernel to support multiprocessors, we realized that we wanted to go further than merely adding locks to the kernel while keeping the user process model unchanged. It was important for the kernel to be capable of a high degree of concurrency on tightly coupled symmetric multiprocessors, but it was also a goal to support more than one thread of control within a user process. These threads must be capable of executing system calls and handling page faults independently. On multiprocessor systems, these threads of control must be capable of running concurrently on different processors. [Powell 1991] described the user-visible thread architecture. We also wanted the kernel to be capable of bounded dispatch latency for real-time threads [Khanna 1992]. Real-time response requires absolute control over scheduling, requiring preemption at almost any point in the kernel, and elimination of unbounded priority inversions wherever possible. The kernel itself is a very complex multi-threaded program. Threads can be used by user applications as a structuring technique to manage multiple asynchronous activities; the kernel benefits from a thread facility that is essentially the same. The resulting SunOS 5.0 kernel, the central operating system component of Solaris 2.0, is fully preemptible, has real-time scheduling, symmetrically supports multiprocessors, and supports user-level multithreading. Several of the locking strategies used in this kernel were described in [Kleiman 1992]. In this paper we'll describe some of the implementation features that make this kernel unique.

Overview of the Kernel Architecture

A kernel thread is the fundamental entity that is scheduled and dispatched onto one of the CPUs of the system. A kernel thread is very lightweight, having only a small data structure and a stack. Switching between kernel threads does not require a change of virtual memory address space information, so it is relatively inexpensive. Kernel threads are fully preemptible and may be scheduled by any of the scheduling classes in the system, including the real-time (fixed priority) class. Since all other execution entities are built using kernel threads, they represent a fully preemptible, real-time "nucleus" within the kernel. Kernel threads use synchronization primitives that support protocols for preventing priority inversion, so a thread's priority is determined by which activities it is impeding by holding locks as well as by the service it is performing [Khanna 1992]. SunOS uses kernel threads to provide asynchronous kernel activity, such as asynchronous writes to disk, servicing STREAMS queues, and callouts. This removes various diversions in the idle loop and trap code and replaces them with independently scheduled threads.
Not only does this increase potential concurrency (these activities can be handled by other CPUs), but it also gives each asynchronous activity a priority so that it can be appropriately scheduled. Even interrupts are handled by kernel threads. The kernel synchronizes with interrupt handlers via normal thread synchronization primitives. If an interrupt thread encounters a locked synchronization variable, it blocks and allows the critical section to clear. A major feature of the new kernel is its support of multiple kernel-supported threads of control, called lightweight processes (LWPs), in any user process, sharing the address space of the process and other resources, such as open files. The kernel supports the execution of user LWPs by associating a kernel thread with each LWP, as shown in Figure 1. While all LWPs have a kernel thread, not all kernel threads have an LWP. A user-level library uses LWPs to implement user-level threads [Stein 1992]. These threads are scheduled at user-level and switched by the library to any of the LWPs belonging to the process. User threads can also be bound to a particular LWP. Separating user-level threads from the LWP allows the user thread library to quickly switch between user threads without entering the kernel. In addition, it allows a user process to have thousands of threads, without overwhelming kernel resources.

Data Structures

In the traditional kernel, the user and proc structures contained all kernel data for the process. Processor data was held in global variables and data structures. The per-process data was divided between non-swappable data in the proc structure, and swappable data in the user structure. The kernel stack of the process, which is also swappable, was allocated with the user structure in the user area, usually one or two pages long. The restructured kernel must separate this data into data associated with each LWP and its kernel thread, the data associated with each process, and the data associated with each processor. Figure 2 shows the relationship of these data structures in the restructured kernel. The per-process data is contained in the proc structure. It contains a list of kernel threads associated with the process, a pointer to the process address space, user credentials, and the list of signal handlers. The proc structure also contains the vestigial user structure, which is now much smaller than a page, and is no longer practical to swap. The LWP structure contains the per-LWP data such as the process-control-block (pcb) for storing user-level processor registers, system call arguments, signal handling masks, resource usage information, and profiling pointers. It also contains pointers to the associated kernel thread and process structures. The kernel stack of the thread is allocated with the LWP structure inside a swappable area.

Kernel Thread Scheduling

SunOS 5.0 provides several scheduling classes. A scheduling class determines the relative priority of processes within the class, and converts that priority to a global priority. With the addition of multithreading, the scheduling classes and dispatcher operate on threads instead of processes. The scheduling classes currently supported are system, timesharing, and real-time (fixed-priority). The dispatcher chooses the thread with the greatest global priority to run on the CPU. If more than one thread has the same priority, they are dispatched in round-robin order.
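Returning briefly to the data structures described above, the following sketch suggests how the split might look in C. It is illustrative only: the field and type names are invented, only the fields named in the text are shown, and the real SunOS 5.0 declarations certainly differ.

```c
/*
 * Illustrative sketch of the restructured per-process and per-LWP
 * data described in the text.  All identifiers are invented; only
 * the fields the text mentions are shown.
 */
typedef struct proc {
    struct kthread *p_tlist;    /* list of kernel threads in the process */
    struct as      *p_as;       /* pointer to the process address space */
    struct cred    *p_cred;     /* user credentials */
    struct sigact  *p_sigact;   /* list of signal handlers */
    struct user     p_user;     /* vestigial user structure (not swapped) */
} proc_t;

typedef struct lwp {
    struct pcb      lwp_pcb;     /* user-level registers (pcb) */
    /* ... system call arguments, signal masks, resource usage,
     *     and profiling pointers, per the text ... */
    struct kthread *lwp_thread;  /* associated kernel thread */
    struct proc    *lwp_proc;    /* owning process */
    /* the kernel stack is allocated with this structure
     * inside a swappable area */
} lwp_t;
```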
The kernel has been made preemptible to better support the real-time class and interrupt threads. Preemption is disabled only in a small number of bounded sections of code. This means that a runnable thread runs as soon as its priority becomes high enough. For example, when thread A releases a lock on which higher-priority thread B is sleeping, the running thread A immediately puts itself back on the run queue and allows the CPU to run thread B. On a multiprocessor, if thread A has better priority than thread B, but thread B has better priority than the current thread on another CPU, that CPU is directed to preempt its current thread and choose the best thread to run. In addition, user code run by an underlying kernel thread of sufficient priority (e.g., real-time threads) will execute even though other lower-priority kernel threads wait for execution resources. Further details can be found in [Khanna 1992].

**System Threads**

System threads can be created for short or long-term activities. They are scheduled like any other thread, but usually belong to the system scheduling class. These threads have no need for LWP structures, so the thread structure and stack for these threads can be allocated together in a non-swappable area, as shown in Figure 3.

Figure 3: System Threads

A new segment driver, `seg_kp`, handles stack allocations. It handles virtual memory allocations for the kernel that can be paged or swapped out; it also provides "red zones" to protect against stack overflow. System threads use `seg_kp` for the stack and the thread structure, in a non-swappable region. LWPs use it to allocate the LWP structure and kernel stack in a swappable region.

**Synchronization Architecture**

The kernel implements the same synchronization objects for internal use as are provided by the user-level libraries for use in multithreaded application programs [Powell 1991]. These are mutual exclusion locks (`mutexes`), condition variables, semaphores, and multiple readers, single writer (readers/writer) locks. The interfaces are shown in Figure 4.

```c
/* Mutual exclusion locks */
void mutex_enter(kmutex_t *lp);
void mutex_exit(kmutex_t *lp);
void mutex_init(kmutex_t *lp, char *name, kmutex_type_t type, void *arg);
void mutex_destroy(kmutex_t *lp);
int mutex_tryenter(kmutex_t *lp);

/* condition variables */
void cv_wait(kcondvar_t *cp, kmutex_t *lp);
int cv_wait_sig(kcondvar_t *cp, kmutex_t *lp);
int cv_timedwait(kcondvar_t *cp, kmutex_t *lp, long tim);
void cv_signal(kcondvar_t *cp);
void cv_broadcast(kcondvar_t *cp);

/* multiple reader, single writer locks */
void rw_init(krwlock_t *lp, char *name, krw_type_t type, void *arg);
void rw_destroy(krwlock_t *lp);
void rw_enter(krwlock_t *lp, krw_t rw);
int rw_tryenter(krwlock_t *lp, krw_t rw);
void rw_exit(krwlock_t *lp);
void rw_downgrade(krwlock_t *lp);
int rw_tryupgrade(krwlock_t *lp);

/* counting semaphores */
void sema_init(ksema_t *sp, unsigned int val, char *name, ksema_type_t type, void *arg);
void sema_destroy(ksema_t *sp);
void sema_p(ksema_t *sp);
int sema_p_sig(ksema_t *sp);
int sema_tryp(ksema_t *sp);
void sema_v(ksema_t *sp);
```

Figure 4: Kernel Thread Synchronization Interfaces

These are all implemented such that the behavior of the synchronization object is specified when it is initialized. Synchronization operations, such as acquiring a mutex lock, take a pointer to the object as an argument and may behave somewhat differently depending on the type and optional type-specific argument specified when the object was initialized.
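To make the Figure 4 interfaces concrete, here is a small hedged usage example: a one-slot mailbox shared between kernel threads. Only the synchronization calls come from Figure 4; the mailbox type and function names are invented, initialization (mutex_init() and the analogous condition variable setup, which Figure 4 does not list) is elided, and we assume cv_wait_sig() returns 0 when interrupted by a signal.

```c
/*
 * Illustrative one-slot mailbox built on the Figure 4 primitives.
 * mbox_t, mbox_put(), and mbox_get_sig() are invented names.
 */
typedef struct mbox {
    kmutex_t   m_lock;   /* protects m_full and m_data */
    kcondvar_t m_cv;     /* signalled whenever m_full changes */
    int        m_full;
    void      *m_data;
} mbox_t;

void
mbox_put(mbox_t *mp, void *data)
{
    mutex_enter(&mp->m_lock);
    while (mp->m_full)                    /* wait for an empty slot */
        cv_wait(&mp->m_cv, &mp->m_lock);  /* drops and reacquires m_lock */
    mp->m_data = data;
    mp->m_full = 1;
    cv_broadcast(&mp->m_cv);              /* wake any waiting consumers */
    mutex_exit(&mp->m_lock);
}

/*
 * Interruptible receive: if a signal arrives while blocked, release
 * the lock and return EINTR to the caller, as described for the
 * _sig variants below (return convention assumed).
 */
int
mbox_get_sig(mbox_t *mp, void **datap)
{
    mutex_enter(&mp->m_lock);
    while (!mp->m_full) {
        if (cv_wait_sig(&mp->m_cv, &mp->m_lock) == 0) {
            mutex_exit(&mp->m_lock);      /* release and bail out */
            return (EINTR);
        }
    }
    *datap = mp->m_data;
    mp->m_full = 0;
    cv_broadcast(&mp->m_cv);              /* wake any waiting producers */
    mutex_exit(&mp->m_lock);
    return (0);
}
```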
Most of the synchronization objects have types that enable collecting statistics such as blocking counts or times. A patchable kernel variable can also set the default types to enable statistics gathering. This allows the selection of statistics gathering on particular synchronization objects or on the kernel as a whole.

---

Note that kernel synchronization primitives must use a different type name than user synchronization primitives so that the types are not confused in applications that read internal kernel data structures.

The semantics of most of the synchronization primitives cause the calling thread to be prevented from progressing past the primitive until some condition is satisfied. The way in which further progress is impeded (e.g., sleep, spin, or other) is a function of the initialization. By default, the kernel thread synchronization primitives that can logically block can potentially sleep. Some of the synchronization primitives are strictly bracketing (e.g., the thread that locks a mutex must be the thread that unlocks it) and a single owner can be determined (i.e., mutexes and writer locks). In these cases, the synchronization primitives support the priority inheritance protocol, as described in [Khanna 1992]. Some synchronization primitives are intended for situations where they may block for long or indeterminate periods. Variants of some of the primitives are provided (e.g., cv_wait_sig() and sema_p_sig()) that allow blocking to be interrupted by the reception of a signal. There is no non-local jump to the head of the system call, as there was in the traditional sleep routine. When a signal is pending, the primitive returns with a value indicating the blocking was interrupted by a signal, and the caller must release any resources and return.

Mutual Exclusion Lock Implementation. Mutual exclusion locks (mutexes) prevent more than one thread from proceeding when the lock is acquired. They prevent races on access to shared data and are by far the most heavily used primitive. Mutexes are usually held for short intervals. For example, it would not be good to hold a critical system mutex while waiting for disk I/O to complete. Mutexes are not recursive; the owner of the lock cannot again call mutex_enter() for the same lock. If a thread holds a mutex, the same thread must be the one to release the mutex. These rules are enforced to promote good programming practice and to avoid deadlocks. If mutex_enter() cannot set the lock (because it is already set), the blocking action taken depends on the mutex type that was passed to mutex_init() and stored in the mutex. The default blocking policy for mutexes, called adaptive (type MUTEX_DEFAULT), spins while the owner of the lock (recorded when the lock is acquired) remains running on a processor. This is done by polling the owner's status in the spin wait loop. If the owner ceases to run, the caller stops spinning and sleeps. This gives fast response and low overhead for simple contention. Spin mutexes are available as type MUTEX_SPIN, which takes as its type-specific argument the interrupt level to be disabled while the mutex is held. It is rarely used, as adaptive mutexes are generally more efficient. Device drivers are restricted to using type MUTEX_DRIVER, which takes a Sun-DDI-defined opaque value as an argument.
This argument is basically an interrupt priority in the current implementation, and determines whether the blocking policy is adaptive or spin, based on whether the interrupt priority is above the "thread level" (see below). A simple trick speeds up mutex_enter() for adaptive mutexes. Non-adaptive mutexes use a separate primitive lock field in the mutex data structure, with the lock field used by the adaptive type always in the locked state. This is so that mutex_enter() can always attempt to apply an adaptive lock first and, only if that fails, consider the possibility that the mutex might be another type.

Turnstiles vs Queues in Synchronization Objects

Each synchronization object requires a way of finding threads that are suspended waiting for that object. It is important to keep the storage cost of synchronization objects small, because many system structures contain synchronization objects, so the queue header is not directly in the object. Instead, two bytes in the synchronization object are used to find a turnstile structure containing the sleep queue header and priority inheritance information [Khanna 1992]. Turnstiles are preallocated such that there are always more turnstiles than the number of active threads. One alternative method would be to select the sleep queue from an array using a hash function on the address of the synchronization object. This is essentially the approach used by sleep() in the traditional kernel. The turnstile approach is favored for more predictable real-time behavior, since turnstiles are never shared by other locks, as hashed sleep queues sometimes are.

Interrupts as Threads

Many implementations [Hamilton 1988] [Peacock 1992] have a variety of synchronization primitives that have similar semantics (e.g., mutual exclusion) yet explicitly sleep or spin for blocking. For mutexes, the spin primitives must hold interrupt priority high enough while the lock is held to prevent any interrupt handlers that may also use the synchronization object from interrupting while the object is locked, causing deadlock. The interrupt level must be raised before the lock is acquired and then lowered after the lock is released. This has several drawbacks. First, the raising and lowering of interrupt priority can be an expensive operation, especially on architectures that require external interrupt controllers (remember that mutexes are heavily used). Secondly, in a modular kernel, such as SunOS, many subsystems are interdependent. In several cases (e.g., mapping in kernel memory or memory allocation) these requests can come from interrupt handlers and can involve many kernel subsystems. This, in turn, means that the mutexes used in many kernel subsystems must protect themselves at a relatively high priority from the possibility that they may be required by an interrupt handler. This tends to keep interrupt priority high for relatively long periods, and the cost of raising and lowering interrupt priority must be paid for every mutex acquisition and release. Lastly, interrupt handlers must live in a constrained environment that avoids any use of kernel functions that can potentially sleep, even for short periods.

---

Footnotes:

2. In order to avoid locking while inspecting the owner's status during the spin, the state is determined indirectly. The algorithm spins while the current thread pointer of any CPU points to the owning thread, indicating it is running.

3. On uniprocessors, this turns into always sleeping, since the owner cannot be running.
To avoid these drawbacks, the SunOS 5.0 kernel treats most interrupts as asynchronously created and dispatched high-priority threads. This enables these interrupt handlers to sleep, if required, and to use the standard synchronization primitives. On most architectures, putting threads to sleep must be done in software, and this code must be protected from interrupts if interrupt handlers are themselves to sleep or wake up other threads. The restructured kernel uses a primitive spin lock protected by raised priority to implement this. This is one of a few bounded sections of code where interrupts are locked out. Traditional UNIX kernel implementations [Leffler 1989] [Bach 1986] also protect the dispatcher by locking out interrupts, usually all interrupts. The restructured kernel has a modifiable level (the "thread level") above which interrupts are no longer handled as threads and are treated more like non-portable "firmware" (e.g., simulating DMA via programmed I/O). These interrupt handlers can only synchronize using the spin variants of mutex locks and software interrupts. If the "thread level" is set to the maximum priority, then all interrupts are locked out during dispatching. For implementations where the "firmware" cannot tolerate even the relatively small dispatcher lockout time, the "thread level" can be lowered. Typically it is lowered to the interrupt level at which the scheduling clock runs.

Implementing Interrupts as Threads

Previous versions of SunOS have treated interrupts in the traditional UNIX way. When an interrupt occurs, the interrupted process is held captive (pinned) until the interrupt returns. Typically, interrupts are handled on the kernel stack of the interrupted process or on a separate interrupt stack. The interrupt handler must complete execution and get off the stack before anything else is allowed to run on that processor. In these systems the kernel synchronizes with interrupt handlers by blocking out interrupts while in critical sections. In SunOS 5.0, interrupts behave like asynchronously created threads. Interrupts must be efficient, so a full thread creation for each interrupt is impractical. Instead, we preallocate interrupt threads, already partly initialized. When an interrupt occurs, we do the minimum amount of work to move onto the stack of an interrupt thread and set it as the current thread. At this point, the interrupt thread and the interrupted thread are not completely separate. The interrupt thread is not yet a full-fledged thread (it cannot be descheduled), and the interrupted thread is pinned until the interrupt thread returns or blocks, and cannot proceed on another CPU. When the interrupt returns, we restore the state of the interrupted thread and return. Interrupts may nest. An interrupt thread may itself be interrupted and be pinned by another interrupt thread. If an interrupt thread blocks on a synchronization variable (e.g., mutex or condition variable), it saves state (passivates) to make it a full-fledged thread, capable of being run by any CPU, and then returns to the pinned thread. Thus most of the overhead of creating a full thread is paid only when the interrupt must block due to contention. While an interrupt thread is in progress, the interrupt level it is handling, and all lower-priority interrupts, must be blocked. This is handled by the normal interrupt priority mechanism unless the thread blocks.
If it blocks, these interrupts must remain disabled in case the interrupt handler is not reenterable at the point that it blocked or is still doing high-priority processing (i.e., should not be interrupted by lower-priority work). While it is blocked, the interrupt thread is bound to the processor it started on, as an implementation convenience and to guarantee that there will always be an interrupt thread available when an interrupt occurs (though this may change in the future). A flag is set in the cpu structure indicating that an interrupt at that level has blocked, and the minimum interrupt level is noted. Whenever the interrupt level changes, the CPU's base interrupt level is checked, and the actual interrupt priority level is never allowed to fall below it. There is also an interface which allows an interrupt thread to continue as a normal, high-priority thread. When release_interrupt() is called, it saves the state of the pinned thread and clears the indication that the interrupt thread has blocked, allowing the CPU to lower the interrupt priority.

An alternative approach to this is to use bounded first-level interrupt handlers to capture device state and then wake up an interrupt thread that is waiting to do the remainder of the servicing [Barnett 1992]. This approach has the disadvantages of requiring device drivers to be restructured and of always requiring a full context switch to the second-level thread. The approach used in SunOS 5.0 allows full thread behavior without restructured drivers and with very little additional cost in the no-contention case.

**Interrupt Thread Cost**

The additional overhead in taking an interrupt is about 40 SPARC instructions. The savings in the mutex enter/exit path is about 12 instructions. However, mutex operations are much more frequent than interrupts, so there is a net gain in time cost, as long as interrupts don't block too frequently. The work to convert an interrupt into a "real" thread is performed only when there is lock contention. There is a cost in terms of memory usage as well. Currently an interrupt thread is preallocated for each potentially active interrupt level below the thread level for each CPU. An additional interrupt thread is preallocated for the clock (one per system). Since each thread requires a stack and a data structure, perhaps 8K bytes or so, the memory cost can be high. However, it is unlikely that all interrupt levels are active at any one time, so it is possible to have a smaller pool of interrupt threads on each CPU.

**Non-MT Driver Support**

Some drivers haven't been modified to protect themselves against concurrency in a multithreaded environment. These drivers are called *MT-unsafe*, because they don't provide their own locking. In order to provide some interim support for MT-unsafe drivers, we provided wrappers that acquire a single global mutex, unsafe_driver. These wrappers ensure that only one such driver will be active at any one time. This wrapper is illustrated by Figure 5.

---

4. On SPARC this overhead involves flushing the entire register file. This is only done if the interrupt handler sleeps, not during interrupt handling without contention.

5. There are nine interrupt levels on the Sun SPARC implementation that can potentially use threads.

6. The clock interrupt occurs 100 times a second on current Sun SPARC implementations.
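As a rough sanity check on the interrupt-thread cost figures above (treating the quoted numbers as per-event instruction counts, which the paper does not state precisely): each interrupt costs roughly 40 extra instructions, while each mutex enter/exit pair saves roughly 12, so the scheme is a net win whenever mutex pairs outnumber interrupts by more than 40/12, or about 3.3 to 1. Since mutex operations are far more frequent than interrupts, this ratio is easily exceeded in practice.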
**Kernel Locking Strategy**

The locking approach used almost exclusively in the kernel to ensure data consistency is data-based locking. That is, the mutex and readers/writer locks each protect a set of shared data, as opposed to protecting routines (monitors). Every piece of shared data is protected by a synchronization object. Some aspects of locking in the virtual memory, file system, STREAMS, and device drivers have already been discussed in [Kleiman 1992]. Here we'll elaborate a bit on device driver issues, as they are closely related to interrupt threads.

**Clock Interrupt**

The clock interrupt is handled specially. There is only one clock interrupt thread in the system (not one per CPU), and the clock interrupt handler invokes the clock thread only if it is not already active. The clock thread could possibly be delayed for more than one clock tick by blocking on a mutex or by higher-level interrupts. When a clock tick occurs and the clock thread is already active, the interrupt is cleared and a counter is incremented. If the clock thread finds the counter non-zero before it returns, it decrements the counter and repeats the clock processing. This occurs very rarely in practice. When it occurs, it is usually due to heavy activity at higher interrupt levels. It can also occur while debugging.

Figure 5: Unsafe Driver Wrapper

There are several ways a driver may be entered: from the explicit driver entry points, from interrupts, and from call-backs. Each of these entries must acquire the unsafe_driver mutex if the driver isn't safe. For example, if an MT-unsafe driver uses timeout() to request a function call at a later time, the callout structure is marked so that the unsafe_driver mutex will be held during the function call. MT-unsafe drivers can also use the old sleep/wakeup mechanism. Sleep() safely releases the unsafe_driver mutex after the thread is asleep, and reacquires it before returning. The longjmp() feature of sleep() is maintained as well. When a thread is signalled in sleep(), if it specified a dispatch value greater than PZERO, a longjmp() takes the thread to a setjmp() that was performed in the unsafe driver entry wrapper, which returns EINTR to the caller of the driver. Sleep() checks to make sure it is called by an MT-unsafe driver, and panics if it isn't. It isn't safe to use sleep() from a driver which does its own locking. It is fairly easy to provide at least simple locking for a driver, so almost all drivers in the system have some of their own locking. These drivers are called MT-safe, regardless of how fine-grained their locking is. Some developers have used the term MT-hot to indicate that a driver does fine-grained locking.
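Since Figure 5 itself is not reproduced here, the following hedged sketch suggests what such an entry wrapper might look like for a driver open routine. The wrapper name, the location of the setjmp() buffer, and the real_open pointer are all invented; only the behavior (the global unsafe_driver mutex, the setjmp() target for sleep()'s longjmp(), and the EINTR return) follows the text.

```c
/*
 * Hypothetical reconstruction of an unsafe-driver entry wrapper in
 * the spirit of Figure 5.  unsafe_driver and the sleep()/longjmp()/
 * EINTR behavior come from the text; everything else is invented.
 */
extern kmutex_t unsafe_driver;      /* single global mutex (see text) */
static int (*real_open)(dev_t *devp, int flag, int otyp, cred_t *crp);

int
unsafe_open_wrapper(dev_t *devp, int flag, int otyp, cred_t *crp)
{
    int err;

    mutex_enter(&unsafe_driver);    /* one MT-unsafe driver at a time */
    if (setjmp(curthread->t_qsav)) {
        /* longjmp() from sleep(): a signal aborted the operation */
        mutex_exit(&unsafe_driver);
        return (EINTR);
    }
    err = (*real_open)(devp, flag, otyp, crp);  /* the MT-unsafe code */
    mutex_exit(&unsafe_driver);
    return (err);
}
```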
SVR4/MP DKI Locking Primitives

As we implemented our driver interfaces, UNIX International and USL were defining the SVR4 Multiprocessor Device Driver Interface and Driver-Kernel Interface (DDI/DKI), with a different set of locking primitives, based around the traditional UNIX interrupt-blocking model. SunOS 5.0 implements those interfaces to the extent defined so far, using our locking primitives and ignoring any spin semantics. This allows drivers using those interfaces to be more easily ported. SunOS drivers typically use the SunOS synchronization primitives.

Implementation Technology

Some interesting techniques made it easier to get this all working.

Kernel Time Slicing

Since the kernel is fully preemptible, we were able to make kernel threads time-slice. We simply added code to the clock interrupt handler to preempt whatever thread was interrupted. This allows even a uniprocessor to have almost arbitrary code interleavings. Increasing the clock interrupt rate made this even more valuable in finding windows where data was unprotected. By causing kernel threads to preempt each other as often as possible, we were able to find locking problems using uniprocessor hardware before multiprocessor hardware was available. Even when working multiprocessor hardware arrived, there were far more uniprocessors available than multiprocessors. We intend this only as a debugging feature, since it does have some adverse performance impact, however slight.

Lock Hierarchy Violation Detection

Instead of establishing a system lock hierarchy a priori, we developed a static analysis tool that checks for lock ordering violations in the system. This lint-like tool, called locknest, reads C source code, constructs call graphs and reports on locking cycles. We feel it helped during early implementation debugging, and probably reduced the amount of time spent debugging deadlocks. A similar tool is described in [Korty 1989].

Deadlock Detection

A side benefit of the priority inheritance mechanism [Khanna 1992] is that deadlocks caused by hierarchy violations are usually detected at run time as well. It does a good job on mutexes and readers/writer locks held for write, but since there isn't a complete list of threads holding a read lock, it can't always find deadlocks involving readers/writer locks. There are other deadlocks possible with condition variables; these aren't detected.

Summary

SunOS 5.0 is a multithreaded and symmetric multiprocessor version of the SVR4 kernel. The primary features are:

- Fully preemptible, real-time kernel
- High degree of concurrency on symmetric multiprocessors
- Support for user threads
- Interrupts handled as independent threads
- Adaptive mutual-exclusion locks

The thread models inside the kernel and at user level are almost identical. The scheduling of kernel threads onto CPUs is analogous to the way the threads library schedules user-level threads onto LWPs. The use of threads for structuring the kernel has mostly good effects, though they can be overused. Threads do have a cost. The stacks are large, and must be allocated on separate pages if protection against potential stack overruns is needed. Also, context switching is still expensive. Some things are still better implemented by callouts and other "zero-weight" processes, but threads provide a nice structuring paradigm for the kernel.

References
Author Information

Joseph Eykholt is a Senior Staff Engineer and technical leader in the OS-Multithreading group at SunSoft. He received an MSEE from Purdue University in 1978, and a BSEE from Purdue in 1977. Prior to coming to Sun, he was one of the leading developers of multiprocessing features for the Amdahl UTS system, and a logic designer for the Amdahl 580 CPU. His address is SunSoft, Inc., M/S MIV5-40, 2550 Garcia Avenue, Mountain View, CA, 94043. His E-mail address is jre@Eng.Sun.COM. By phone: (415) 336-1849.

Steve Kleiman is a Distinguished Engineer in the Operating Systems Technology Department of SunSoft, Inc. He received an M.S. in Electrical Engineering from Stanford University in 1978 and a B.S. in Electrical Engineering and Computer Science from M.I.T. in 1977. He has been involved with the design and development of UNIX and workstation architecture since 1977; first at Bell Telephone Laboratories and then at Sun. He was one of the developers of NFS, Vnodes, and of the original port of SunOS to SPARC. His E-mail address is srk@Eng.Sun.COM. By phone: (415) 336-7295.

Steve Barton graduated from the University of California, Santa Cruz in 1982 with a BA in Computer and Information Sciences. Since then he has worked at Zilog Inc., Parallel Computers, Counter-Point Computers, and Telestream Corp. He is currently a Member of Technical Staff at SunSoft. He's been with Sun for the last four years. Reach him electronically at steve.barton@Eng.Sun.COM.

Roger Faulkner is a Senior Staff Engineer in the OS-Multithreading group at SunSoft. He received a B.S. in Physics from N. C. State University in 1963 and a Ph.D. in Physics from Princeton University in 1967, then joined Bell Laboratories, where he was seduced by computers. He has been actively involved in the inner workings of the UNIX kernel since 1976 and has done compiler and debugger development along the way. He is one of the principals involved in the development of the /proc file system for SVR4. His E-mail address is raf@Eng.Sun.COM. By phone: (415) 336-1115.

Anil Shivalingiah is a Staff Engineer in the OS-Virtual Memory group at SunSoft. He received an M.S. in Computer Science from University of Texas, Arlington in 1983 and a B.S. in Electronics Engineering from UVCE, India in 1981. He's been with Sun for the last three years. His E-mail address is ans@Eng.Sun.COM.

Mark Smith graduated with a B.S. in Computer Science from the University of California, Santa Barbara in 1986. He is currently a Member of Technical Staff at SunSoft in the OS-Multithreading group. Prior to coming to Sun he worked in the Design Automation department of Amdahl Corp. He can be reached by E-mail at mds@Eng.Sun.COM.

Dan Stein is currently a Member of Technical Staff at SunSoft where he is one of the developers of the SunOS Multi-thread Architecture. He graduated from the University of Wisconsin in 1981 with a BS in Computer Science.

Jim Voll works in the OS-Multithreading group at SunSoft. He received his B.S. from the University of California, Santa Barbara in 1981. Prior to working at Sun he has worked at Amdahl and Cygent Systems. He routinely destroys his home directory. His E-mail address is jvj@Eng.Sun.COM.

Mary Weeks has been a member of technical staff at Sun Microsystems since 1986. Prior to Sun, she worked at Xerox. She received her B.A. in computer science from the University of California at Berkeley in 1984.

Dock Williams is a Staff Engineer in the OS-Multithreading group at SunSoft. He received a B.S.
in Electrical Engineering and Computer Science from M.I.T. in 1980. He has been with Sun for over six years. Prior to joining Sun, he worked at American Information Systems, ONYX Systems, Tri-Comp Systems, and Hughes Aircraft Radar Systems. His E-mail address is dock@Eng.Sun.COM, phone: (415) 336-1246.
{"Source-Url": "https://www.usenix.org/legacy/publications/library/proceedings/sa92/eykholt.pdf", "len_cl100k_base": 7254, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 25029, "total-output-tokens": 8458, "length": "2e12", "weborganizer": {"__label__adult": 0.00034499168395996094, "__label__art_design": 0.0003516674041748047, "__label__crime_law": 0.00031828880310058594, "__label__education_jobs": 0.000457763671875, "__label__entertainment": 7.31348991394043e-05, "__label__fashion_beauty": 0.0001577138900756836, "__label__finance_business": 0.0002598762512207031, "__label__food_dining": 0.0003247261047363281, "__label__games": 0.0007004737854003906, "__label__hardware": 0.005706787109375, "__label__health": 0.0004167556762695313, "__label__history": 0.0003108978271484375, "__label__home_hobbies": 0.00011366605758666992, "__label__industrial": 0.0008306503295898438, "__label__literature": 0.00015497207641601562, "__label__politics": 0.00027251243591308594, "__label__religion": 0.0005288124084472656, "__label__science_tech": 0.09173583984375, "__label__social_life": 6.252527236938477e-05, "__label__software": 0.0114593505859375, "__label__software_dev": 0.88427734375, "__label__sports_fitness": 0.00031447410583496094, "__label__transportation": 0.0006809234619140625, "__label__travel": 0.0002038478851318359}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37151, 0.00733]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37151, 0.41258]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37151, 0.92458]], "google_gemma-3-12b-it_contains_pii": [[0, 4427, false], [4427, 6881, null], [6881, 11159, null], [11159, 16495, null], [16495, 22168, null], [22168, 27370, null], [27370, 32205, null], [32205, 37151, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4427, true], [4427, 6881, null], [6881, 11159, null], [11159, 16495, null], [16495, 22168, null], [22168, 27370, null], [27370, 32205, null], [32205, 37151, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37151, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37151, null]], "pdf_page_numbers": [[0, 4427, 1], [4427, 6881, 2], [6881, 11159, 3], [11159, 16495, 4], [16495, 22168, 5], [22168, 27370, 6], [27370, 32205, 7], [32205, 37151, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37151, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
f0889c78ef5e0080f9eb358d167eeaeef18bf215
INTRODUCTION TO AGILIO SMARTNICS AND NETWORK FLOW PROCESSORS (NFP)

The Agilio SmartNICs deliver high-performance server-based networking applications such as network virtualization, security, load balancing, quality of service, and telemetry. The Netronome Network Flow Processors (the NFP-4000 and NFP-6000 family of devices) are used in the Agilio SmartNICs. Server-based networking deployments have become mainstream in COTS servers, and this includes implementations where networking functions are implemented inline between the network port on PCIe server adapters and host applications or virtual machines (VMs) and containers implemented in servers. It also includes networking functions implemented in VMs (as in virtual network functions, or VNFs). The NFP-4000 and NFP-6000 family of devices (collectively called the NFPs in the rest of the document) are programmable flow processors that can perform a range of packet processing operations for different applications.

RELATED DOCUMENTS

| DESCRIPTIVE NAME | DESCRIPTION |
| --- | --- |
| Netronome Agilio Software version 2.0 Getting Started Guide | A guide for new users of Netronome's Agilio Software for server-based networking applications |
| Agilio Software v2.0 Programmer's Reference Manual (PRM) | Describes the list of APIs supported by the Agilio Software |

### NETWORK FLOW PROCESSOR (NFP) PROGRAMMING BLOCKS

As shown in the figure below, the Network Flow Processor (NFP) has the following internal blocks that allow for networking datapath configuration and programmability. The NFP processing elements are distributed across the chip in the form of "islands", with a high-speed command-push-pull (CPP) bus connecting the islands for transfer of data between them. The number of islands on an NFP depends on the SKU of the chip (NFP-4000 or NFP-6000). The NFP has the following main components:

1. MAC island
2. Ingress and egress processing islands (NBI)
3. FPC islands with 12 FPCs (flow processing cores)
4. PCI, Arm, Crypto and Interlaken islands with 4 FPCs
5. Memory units

All the islands with FPCs are programmable using one or more of the following programming languages: P4, C, or microcode, as explained in later sections.

**Figure 1. NFP Programming Architecture**

Fixed Function Hardware Blocks

The following are fixed-function hardware blocks in the NFP:

- I/O interface blocks (10G/40G/100G MACs and 4x PCIe Gen3 x8)
- Packet classifier
- DMAs between internal blocks
- Traffic manager
- Packet modifier
- Packet sequencer

The above blocks can be configured through the control and access registers accessible through the following:

- Host CPU via the PCIe bus
- On-chip Arm (control processor)
- Flow processing cores (via the on-chip configuration bus)

The NFP also has configurable packet classifier blocks, referred to as packet processing engines in the figure above. These blocks are state-machine-based configurable engines capable of lookup-based L2 and L3 classification of packets. Netronome provides firmware in binary form that configures these packet processing engines.

Programmable Flow Processing Cores (FPCs)

The flow processing cores, or FPCs, constitute the main programmable blocks of the NFP. The FPCs can be used for packet classification and packet modification operations that go well beyond 5-tuple classification. Depending on the NFP SKU selected, there can be up to 120 of these FPCs in the device.
FPCs are distributed across the islands within the NFP. Each FPC is multi-threaded and has its own instruction memory, data memory and registers for program execution. FPCs also have access to a data bus called the command-push-pull (CPP) bus and a configuration bus called the eXpansion Bus (XPB) for transferring data and accessing the control and status registers, respectively. FPC programming is also assisted by:

- A hierarchical memory structure (up to 30MB within the NFP and up to 24GB connected off-chip) accessed through CPP commands.
- Hardware accelerators such as the lookup engine, statistics engine, crypto engine, packet engine, etc. (accessible via commands from the FPCs).

The FPCs can be programmed using high-level languages such as P4 or C with the Netronome-provided SDK tool chain. P4 programs are compiled using the open source P4 compiler from the P4 consortium. The P4 compiler is integrated with the Netronome C compiler to provide an integrated development environment where both P4- and C-based applications can be applied to the datapath. The SDK tool chain also provides the capability to simulate and debug the programs on the NFP software simulator and on NFP-based hardware targets such as the Agilio SmartNICs. A detailed description of each of the above blocks is beyond the scope of this document and can be found in the NFP 4000/6000 Family Data Books.

NFP PROGRAMMING MODELS

Both high-level and low-level programming models are supported. For example, P4- and API-based programming are high level; when such models are used, the developer need not be aware of internal architectural blocks of the NFP such as the CPP bus, MAC and packet classifier. Other models support the addition of C- or P4-based applications as a sandbox or plug-in to the Netronome-provided Agilio Software datapath. NFP-based Agilio SmartNICs support the following programming models:

1. Host API-based programming model: using Agilio Software supported APIs
2. User datapath programming model: C-based programming with configuration APIs
3. User datapath programming model: P4 and C-based programming with configuration APIs
4. User datapath programming model: programming a C (or P4) sandbox or plug-in application into the Agilio Software datapath

Each of the programming models above is described in detail in the next sections.

Host API-based Programming Model

This high-level programming model is supported with Netronome Agilio SmartNICs. In this model, the SmartNIC is supplied with the production-quality Agilio Software in binary form. The supported features include:

- PCIe and network I/O configuration (includes the host interface drivers)
- Standard datapath features (via Open vSwitch (OVS) offload, tunneling, match-action, etc.) usable via supported API calls

The Agilio Software v2.0 that is currently available supports the OVS v2.3 and v2.4 datapaths; for further details please see the appropriate Agilio Software documentation. The host API programming model works with the Agilio Software datapath. Netronome has developed the Agilio Software provided with the Agilio SmartNICs to use the NFP internal blocks and hierarchical memory efficiently. Users of the Agilio Software are expected to simply call the Agilio Software APIs to meet the requirements of their use cases. The Agilio Software datapath supports match-action processing along with a unique hardware-based flow cache implementation for fast-path processing.
The classification of ingress packets is performed using the match fields as configured by the user, and an action is taken on individual packets based on the matching entry. The Agilio Software is based on the OVS datapath that is offloaded from the kernel to the NFP, along with additional features such as an auto-learning flow cache, a load balancer and tunneling. Additional features are planned for the Agilio Software in future releases. The features supported by the Agilio Software can be used through the Agilio API calls described in the Agilio Software Programmer's Reference Manual (PRM). The user is not required to learn about the internal architecture of the NFP. Figure 2 below shows the high-level architecture of the Agilio Software. The Agilio Software blocks shown in the figure are implemented using the three NFP hardware blocks described above. Any configuration and FPC programming details are kept hidden from the user. The kernel-mode portion of the OVS software (repeated hash tables and OVS kernel actions) is replicated and offloaded in the NFP. The Agilio Software features are available to the user through API calls. These APIs can be called via the command line or integrated into application software to configure the datapath in the Agilio SmartNIC. The following sets of API calls are available for use with the Agilio Software:

1. **Local Flow Table APIs** - Used to manipulate the Agilio Software flow tables. They are compatible with the OpenFlow v1.3 specifications. The Agilio Software is provided with sample applications that demonstrate the use of these APIs.
2. **Local Packet APIs** - Allow users to interact with network packets on a host installed with the Agilio SmartNIC and Software. Sample applications that demonstrate the use of these APIs are included in the Agilio Software package.
3. **Group APIs** - Allow users to manipulate (add, modify, delete) group entries in Agilio Software-supported group tables.
4. **Meter APIs** - Allow users to manipulate (add, modify, delete) meter entries in the Agilio Software-supported meter tables.
5. **Health Monitoring APIs** - Allow users to monitor system health on the host OS. Users can create their own health monitoring applications. Sample applications that demonstrate the use of these APIs are included with the Agilio Software.

Please refer to the Agilio Software Programmer's Reference Manual (PRM) for details on how to use the above API calls. Using the above API calls, the following example applications can be configured on the NFP-based Agilio SmartNIC:

- Traffic engineering and network virtualization
- Learning L2 bridge
- Layer-3 routing functionality
- Load balancing to physical and virtual ports
- Wire - fast-path configuration
- Adding and removing packet tags
- Running any host DPDK application with NFP virtual functions
- Accelerated origination and termination of VXLAN and NVGRE tunnels on the NFP

The above list shows only some example applications and does not cover the complete functionality of the Agilio Software. Please refer to the complete documentation supplied with the Agilio Software for additional details.

**User Datapath Programming Models**

These models are targeted at users who want to program the datapath on the Agilio SmartNIC and NFP. Both simple and complex datapaths can be programmed utilizing the three methods described in this section.
In general, while selecting which model is most suitable, the user should consider the characteristics of the desired datapath:

- The datapath is based on a free-form pipeline, where a developer can write any packet-processing pipeline.
- The datapath is based on the match-action paradigm, which is similar to the Open vSwitch (OVS) pipeline supported by the Agilio Software as described in the section above. In this model the match-action datapath may be fixed, for example, by a P4 program, while the custom actions are defined by a C program.

**C-Based Programming with Configuration APIs**

This model allows for complete control of the programmable datapath using the FPCs. The user utilizes software drivers and configuration APIs that provide the needed functionality of the fixed-function configurable blocks, the configurable classifier and part of the programmable FPCs. These API sets include the following:

- PCIe driver for the host - C code for receiving packets from and sending packets to the host
- GRO - global re-ordering software in C
- Sample NIC send/receive applications in C
- 10GbE/40GbE network port initialization APIs
- Hardware classifier initialization APIs
- Hardware traffic manager configuration APIs
- Lookup-based pre-classification firmware
- Buffer list manager code - for ingress and egress packet buffering and DMA to the flow processing cores (FPCs)

Using the above APIs and source code written in C supported by the Netronome SDK tool chain, users can modify, add or write their own datapath application. One example of this model is shown in Figure 3. In this model, users write the C code for the packet classification and action functions in the datapath and compile it using the Netronome SDK tool chain.

Figure 3. Example of NFP datapath written in C

This programming model is facilitated by sample C libraries and applications provided by Netronome. Figure 4 provides the list of programming resources provided for this model.

Figure 4. Resources for NFP Programming Model using C

**Netronome C Libraries**

This C-based programming model also comes with Netronome-provided C libraries for basic packet operations and low-level NFP access functions. These libraries are pre-verified and include detailed documentation describing their functionality. The following are some standard C libraries provided as sample code:

- Memory lookups
  - Hash lookup
  - Index lookup
  - Lookup and add
- Header parsing
  - Ethernet
  - ARP/IPv4/IPv6
  - TCP/UDP
  - GRE/NVGRE/VXLAN
- Common NFP access and engines
  - PCIe read/write/DMA
  - Memory ring operations (push/pop)
  - TCAM lookups
  - CRC computations
  - Register configuration
- Packet operations
  - Receiving/sending packets from/to network ports
  - Reading a number of bytes from memory
  - L3/L4 checksum calculation
  - Packet modification script
  - DMA of packets between memories
  - Dropping a packet and freeing its buffer

**P4 and C-based Programming with Configuration APIs**

This NFP programming model is meant for users who want to program the datapath in a hardware-agnostic way. In this model, users do not need to know the details of the underlying NFP architecture. P4 is a target-independent network programming language in which users can describe the forwarding behavior of network devices (ASICs/NPUs/FPGAs) using the standard forwarding model defined in the P4 architecture. P4 allows users to create their own packet headers and protocols along with their processing behavior in a networking device. The packet-processing model proposed by the P4 language is shown in Figure 5.
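Before examining the P4 model further, the following minimal sketch suggests how the sample C libraries listed above might be combined into a trivial datapath stage under the C-based model. Every function name (hash_lookup, pkt_send, pkt_drop), the eth_hdr layout, and UPLINK_PORT are placeholders invented for this sketch, standing in for the listed library categories rather than actual Netronome calls.

```c
/*
 * Purely illustrative C datapath stage: parse the Ethernet header,
 * classify by hash lookup, then forward or drop.  All names are
 * placeholders for the library categories listed above.
 */
#include <stdint.h>

struct eth_hdr { uint8_t dst[6], src[6]; uint16_t type; };

/* placeholder prototypes standing in for the sample libraries */
extern int  hash_lookup(const void *key, uint32_t keylen, uint32_t *val);
extern void pkt_send(uint8_t *pkt, uint32_t len, uint32_t port);
extern void pkt_drop(uint8_t *pkt);
#define UPLINK_PORT 0   /* assumed default egress port */

void
process_packet(uint8_t *pkt, uint32_t len)
{
    struct eth_hdr *eth;
    uint32_t out_port;

    if (len < sizeof(struct eth_hdr)) {
        pkt_drop(pkt);                    /* runt frame: drop and free */
        return;
    }
    eth = (struct eth_hdr *)pkt;

    /* classification: hash lookup on the destination MAC */
    if (hash_lookup(eth->dst, sizeof(eth->dst), &out_port) == 0)
        pkt_send(pkt, len, out_port);     /* known destination */
    else
        pkt_send(pkt, len, UPLINK_PORT);  /* default action */
}
```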
The user writes the datapath of a network device in the P4 language without any knowledge of the target hardware. The P4 tool chain developed by the device vendor converts the P4 program into device-specific firmware. The P4 tool chain also generates a run-time API (similar to the OpenFlow model) to allow match-action table modification. While the P4 language enables hardware-agnostic programming of a network device, there may still be a need for device-specific customizations. Some examples of such custom features include:

- Stateful packet filtering
- Stateful statistics collection (based on flows)
- Sending messages to the host (control and data)
- Hash table modifications
- Atomic operations
- QoS implementation (traffic manager and buffering)

To enable such extensions to P4-based programmability, Netronome provides the ability to extend P4 datapath features with C-based custom applications. This is also referred to as applying a C-based sandbox or plugin to a P4-defined datapath. For example, the following suggested steps may be taken to implement this programming model:

- Use the Netronome-provided API calls for PCIe and network I/O configuration functions
- Use the Netronome-provided classification firmware
- Use the sample programs that demonstrate P4 and C datapath programming under this model
- Review the provided C libraries that can be used as a sandbox or plugin application
- Use the SDK tool chain to compile the P4 and C programs to generate and install the needed datapath on the NFP

Figure 6 below shows an example of this programming model as implemented on the NFP. The implementation described above requires the Netronome SDK tool chain with P4 programming support. The details of the SDK tool chain and the features supported are described in a later section.

**Programming a C (or P4) Sandbox App into the Agilio Software Datapath**

This NFP programming model is still under planning; however, some initial architecture and high-level plans are discussed in this section. Figure 7 represents the proposed C or P4 sandbox application implementation along with the main datapath defined and supported by the Agilio Software. Since Netronome has implemented the Agilio Software by taking advantage of all the NFP hardware resources and accelerators, this method potentially represents one of the most feature-rich, performance- and resource-optimized implementations of a packet-processing datapath on the NFP, covering applications that require network-to-host, host-to-network and network-to-network ingress and egress packet flows. This programming model is offered with the following components:

- Agilio Software in binary form
- C sandbox libraries
- SDK tool chain that supports integrated C- and P4-based programming
- Sample sandbox programs in P4 and C

The sandbox (or plugin) application shown in Figure 7 can be implemented either in C or P4. The sandbox functionality can also be replicated in user/kernel space to provide a fallback path for the sandbox implementation. In the sandbox implementation example shown, an OVS match-action table implements one of its actions as "send-to-sandbox," where the "send-to-sandbox" action directs packets to one of the logical ports leading to the C sandbox code. This sandbox can be implemented as:

- A custom/primitive action in the form of a plugin stateful function running on the same FPC.
- Extended match-action functionality implemented in P4/C (as described in an earlier section) running on a separate set of FPCs.
In the latter case, packets from the Agilio Software to the sandbox are transferred via the ring-based transfer mechanism between the FPCs. The proposed implementation of the P4/C sandbox is not designed to feed every packet into the sandbox code, since doing so may have performance implications.

Figure 7. Agilio Software with the C app sandbox implementation on the NFP

The NFP datapath associated with the Agilio Software-based sandbox model is shown in Figure 8. The details of this programming model, such as the sandbox functionality actions, the number of FPCs available for the sandbox functionality, libraries, etc., are still in the planning and development stage. More details will be available in future versions of this and other related documents.

Figure 8. NFP datapath with Agilio Software and C/P4 sandbox

SDK TOOL CHAIN FOR NFP PROGRAMMING

This section provides a brief introduction to the SDK tool chain for programming the NFP device in Agilio SmartNICs. The SDK tool chain can be used to exercise the programming options described in the user datapath programming models above. Netronome's SDK tools run on Windows and Linux platforms. The Windows version of the tool chain comes as an integrated development environment (IDE) that combines both C and P4 programming and allows full visibility of the chip features through a graphical user interface (GUI) that includes a simulator and a hardware debugger. The Linux version of the tool chain provides a similar set of features and can be exercised through the Linux command line interface (CLI). Figure 9 below shows all the components of the SDK tool chain available in the Windows and Linux environments.

Figure 9. SDK tool chain components for NFP programming

The following is a summary of the code development tools and features included in the SDK tool chain:

- Integrated development environment (IDE) in Windows
- P4 front-end compiler (see below for further information on P4 support)
- IR (intermediate representation) back-end compiler
- Netronome C compiler
- C scripting for NFP configuration
- Netronome microcode assembler (for any legacy microcode/assembly-based applications and libraries)
- Linker (links the compiled code to generate the NFP firmware)
- Loader (loads the NFFW firmware files to the NFP)

The following is a summary of the profiler and debugger features included in the SDK tool chain:

- Cycle-accurate event and queue history
- Cycle-accurate history collection for FPC threads
- Performance profiling and bus bandwidth estimates
- Hardware debugger that runs on the host and communicates with the NFP via the PCIe bus to allow runtime debugging of Netronome SmartNICs

A detailed description of each of the SDK components mentioned above is available in the documentation supplied with the SDK package.

Netronome supports the P4 programming language as defined by the P4 Consortium (www.p4.org). Netronome has integrated the open-source P4 compiler developed by the P4 Consortium to generate an intermediate representation (App.IR) of the P4 program in YAML format, which is further compiled by Netronome's back-end compiler to generate a target-specific C implementation of the NFP datapath.

**Figure 10. Implementation of P4 datapath with C sandbox on NFP using SDK**

The C files generated by the back-end compiler (from the P4 code) are compiled together with the custom (sandbox) C files to generate the firmware to be downloaded onto the NFP.
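The custom (sandbox) C files referred to here are where device-specific features such as stateful filtering or flow statistics live. As a rough illustration of the kind of function such a plugin might contain, the sketch below implements a per-flow packet counter with a threshold in portable C. All names are invented for illustration and are not Netronome APIs; on real NFP hardware the counter update would typically use an atomic memory operation on NFP memory.

```c
/*
 * Illustrative sketch only: the kind of stateful custom action a C
 * sandbox might add to a P4-defined match-action pipeline (here, a
 * per-flow packet counter with a threshold). Function and type names
 * are invented for illustration and are not Netronome APIs.
 */
#include <stdint.h>
#include <stdbool.h>

#define FLOW_BUCKETS 1024

static uint64_t flow_pkts[FLOW_BUCKETS];   /* per-flow packet counters */

/* Invoked by the pipeline when a table entry's action is
 * "send-to-sandbox". Returns true if the packet should continue
 * through the pipeline, false to drop it. */
bool sandbox_flow_limit(uint32_t flow_hash, uint64_t max_pkts)
{
    uint64_t *ctr = &flow_pkts[flow_hash % FLOW_BUCKETS];
    (*ctr)++;                   /* an atomic add on real hardware */
    return *ctr <= max_pkts;    /* volume-based policy decision   */
}
```

A function like this would be compiled together with the generated P4 datapath C files, as described above, and reached through the "send-to-sandbox" logical port.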
Table entries compiled in JSON format are programmed into the NFP in the Agilio SmartNIC using run-time APIs. These APIs support functions such as the addition, modification or deletion of NFP datapath match-action table entries.

**CONCLUSION**

Netronome's Agilio SmartNICs with Agilio Software provide high-performance networking for modern data center servers. The Agilio solution accelerates server-based networking functions such as network virtualization, security, load balancing and telemetry. The Agilio solution is built on the Netronome NFP, a programmable device optimized for network datapath processing. The NFP, along with the SDK tool chain, supports a variety of datapath programming models from which users can select based on their requirements.

The Host API model utilizes the Netronome-supplied, production-quality Agilio Software that implements standard datapaths such as OVS. This model is suitable for users who want to use the available features of the Agilio Software datapath and are not interested in custom programming of the datapath or in understanding the architectural details of the NFP.

The C-based programming model can be used to program a new datapath. An example of this is the development of a complete packet classification and processing pipeline for server-based networking applications such as telemetry or load balancing.

The P4-only or P4-with-C-sandbox model of programming is suitable for users who want to program the NFP hardware but are not interested in learning the architectural details of the NFP device. The P4 language can be used to implement a hardware-agnostic match-action processing datapath. C-based sandbox applications may be added to such a pipeline for special features such as stateful filtering. This requires some familiarity with the Netronome NFP data structures and knowledge of the SDK tool chain.

The Agilio Software-based Host API model combined with the C sandbox model holds the promise of delivering high performance as well as faster go-to-market capability while allowing for datapath customizations, provided the features supported by the Agilio Software meet most of the user's needs.

The above programming models are enabled by Netronome's SDK tool chain, described earlier. Netronome's SDK v6.0 will come with an integrated development environment that supports both P4 and C software development and debugging.
{"Source-Url": "https://www.netronome.com/m/documents/WP_NFP_Programming_Model.pdf", "len_cl100k_base": 4922, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 32838, "total-output-tokens": 5496, "length": "2e12", "weborganizer": {"__label__adult": 0.0005373954772949219, "__label__art_design": 0.0003256797790527344, "__label__crime_law": 0.0003948211669921875, "__label__education_jobs": 0.0002300739288330078, "__label__entertainment": 0.00010097026824951172, "__label__fashion_beauty": 0.00025153160095214844, "__label__finance_business": 0.0002837181091308594, "__label__food_dining": 0.00041365623474121094, "__label__games": 0.000766754150390625, "__label__hardware": 0.033477783203125, "__label__health": 0.0004575252532958984, "__label__history": 0.00024056434631347656, "__label__home_hobbies": 0.00013303756713867188, "__label__industrial": 0.001361846923828125, "__label__literature": 0.000156402587890625, "__label__politics": 0.00020420551300048828, "__label__religion": 0.0007238388061523438, "__label__science_tech": 0.05694580078125, "__label__social_life": 5.4001808166503906e-05, "__label__software": 0.0280303955078125, "__label__software_dev": 0.87353515625, "__label__sports_fitness": 0.0003969669342041016, "__label__transportation": 0.0007467269897460938, "__label__travel": 0.00020313262939453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23397, 0.0083]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23397, 0.37152]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23397, 0.88793]], "google_gemma-3-12b-it_contains_pii": [[0, 1347, false], [1347, 2284, null], [2284, 4834, null], [4834, 8000, null], [8000, 9717, null], [9717, 12095, null], [12095, 12649, null], [12649, 14272, null], [14272, 15559, null], [15559, 17252, null], [17252, 18339, null], [18339, 19809, null], [19809, 21754, null], [21754, 23397, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1347, true], [1347, 2284, null], [2284, 4834, null], [4834, 8000, null], [8000, 9717, null], [9717, 12095, null], [12095, 12649, null], [12649, 14272, null], [14272, 15559, null], [15559, 17252, null], [17252, 18339, null], [18339, 19809, null], [19809, 21754, null], [21754, 23397, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23397, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23397, null]], "pdf_page_numbers": [[0, 1347, 1], [1347, 2284, 2], [2284, 4834, 3], [4834, 8000, 4], [8000, 9717, 5], [9717, 12095, 6], [12095, 12649, 7], [12649, 14272, 8], [14272, 15559, 9], [15559, 17252, 10], [17252, 18339, 11], [18339, 19809, 12], [19809, 21754, 13], [21754, 23397, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23397, 0.02116]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
99b18f9ebb624a4536ec300687d1cc9b4bd73cb1
Towards Ubiquitous EWS-based Network Management

Hong-Taek Ju and James Won-Ki Hong
DP&NM Lab., Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea
Email: {juht, jwkhong}@postech.ac.kr
http://dpnm.postech.ac.kr/

Abstract

Most Internet networking devices are now equipped with a Web server providing Web-based element management, so that an administrator may take advantage of this enhanced and powerful management interface. On the other hand, for network management, an administrator normally buys and deploys an SNMP-based network management platform customized to his network. Each management scheme has mutually exclusive advantages; consequently, the two schemes coexist in the real world. This results in both a high development cost and a dual management interface for the administrator. We propose an embedded Web server (EWS)-based network management architecture as an alternative to SNMP-based network management, leveraging the already existing embedded Web server. We extend the EWS-based element management architecture to a network management architecture. Our proposed architecture uses HTTP as the communication protocol, with management information and operation encoding. Further, we have designed a management system on the basis of our proposed architecture that supports basic management functions.

1. Introduction

The World-Wide Web (WWW) is one of the most widely used Internet applications [3]. The technology is very rapidly penetrating many social and business areas, and system and network management are no exception. Web-based network management is the use of this technology to manage networks and systems. HTTP (Hypertext Transfer Protocol) [5] is the primary transfer protocol used by the Web, and HTML (Hypertext Markup Language) [6] is a platform-independent document description language. Typically, in Web-based network management, HTTP is used as the transport of management information in HTML format between the communicating entities: Web server and browser. One of the key Web technologies is Java, with its high portability and code mobility. These unique features make Java attractive for building new management solutions. The eXtensible Markup Language (XML) [14] is designed to add structure and convey information about documents and data. Management information represented in the form of an XML document can be useful for passing data between management applications. All of these technologies (HTML, HTTP, Java, XML and the other Web technologies) are playing an important role in Web-based network management, and recent advances in this technology are astonishing; in fact, experts cannot predict improvements beyond several years in Web-based network management [5, 6, 10, 14].

Two principal industrial bodies are playing a leading role in standardizing Web-based network management: Web-Based Enterprise Management (WBEM) [8] and the Java Management eXtension (JMX) [9]. WBEM, from the Distributed Management Task Force (DMTF), is an initiative based on a set of management and Internet standard technologies developed to unify the management of enterprise computing environments. WBEM provides the ability for the industry to deliver an integrated set of standards-based management tools leveraging emerging technologies such as XML [14] and CIM [15]. JMX is based on Java technology, so it draws on Sun's experience with Java management. JMX provides the tools for building Java-based solutions for managing devices, applications and service-driven networks.
There are three primary benefits in applying Web technology to network management. The first is that development costs can be reduced by using open technology, with plenty of open-source code and supporting tools. Further, the platform-neutral nature of Web technology makes it possible to unify network and system management across separate management platforms. Finally, the Web browser is ubiquitous, so a Web-based management user interface is easy and simple for most, if not all, operators.

2. Typical cases of Web-based element and network management

2.1 HTTP-based element management using an Embedded Web Server

Network devices can be equipped with a Web server to provide Web-based element management. This type of Web server is called an Embedded Web Server (EWS) [1, 2, 4]. Most commercial network devices, such as routers, bridges and hubs, are equipped with an EWS. For EWS-equipped devices, administrators point their Web browsers at the home page residing within the device in order to configure, monitor and control it. This element management scheme provides an administrator with a simple but enhanced, more powerful and ubiquitous user interface. Moreover, there is no porting or distribution effort for a user application program, since the Web browser on the administrator's computer is enough for managing a network element.

2.2 SNMP-based network management using a front-end Web Server

The SNMP manager program runs as an application over the operating system of a general-purpose computer and collects management information from network and system devices based on the SNMP framework [7]. When a user points his or her browser at the Web page provided by the front-end Web server of the SNMP manager, aggregated management information is relayed to the browser. This network management scheme can leverage the capabilities provided by the host operating system, such as increased memory, processing power and storage space. More importantly, this mechanism provides network-level management information that has been aggregated and processed by an SNMP manager. This type of Web access serves as a useful addition to existing management platforms, particularly those based on SNMP. The leading providers of management platforms, such as Cabletron, Computer Associates, IBM/Tivoli, Hewlett-Packard and Sun, equip their products with Web extensions.

2.3 Dual Management Interface

We have introduced two typical Web-based network management schemes: HTTP-based element management using an embedded Web server (EWS-based EM) and SNMP-based network management using a front-end Web server (SNMP-based NM). These two schemes have their own pros and cons. With respect to flexibility of management functions, EWS-based EM is better than SNMP-based NM. Through the EWS, the device vendor controls everything about the device and its management, from operation to user interface; the management function depends only on the Web browser. This makes it possible to create device-specific Web pages, with control and user interface. On the other hand, SNMP-based NM is limited in the functions it provides, a limitation that results from the simplicity of SNMP. For instance, a version control function for the device program is provided by most management user interfaces in EWS-based EM, but not in SNMP-based NM. Version control is essential in device management, especially the firmware upgrade capability.
EWS-based EM has its limitations with respect to scalability: configuring hundreds of routers and switches via a Web browser is simply not scalable. If there are many EWS-equipped network elements, as is typical for large systems and networks, an administrator must open a Web browser for each device. This approach also tends to be device-centric and may not be able to provide logging and other high-level management capabilities that are normally essential for network management. On the other hand, SNMP-based NM normally has no limitation in scalability.

Each model (EWS-based EM and SNMP-based NM) has mutually exclusive advantages, and the two coexist in the real world. Moreover, most devices support both models: network management information, such as topology information, alarm correlation and history, can be made available to an external SNMP manager based on SNMP, while element management functions, such as system and protocol configuration and firmware upgrade, are integrated within the device and provided through the Web browser.

Administrators can enjoy the enhanced user interface that comes from applying Web technology to network management. Yet they still need an expensive network management platform for SNMP-based network-level management, and they must use another user interface for the device-specific element management provided by the EWS. Device vendors, for their part, must put both an EWS and an SNMP agent into the network device. The difficulty of supporting dual management interfaces results in poor time-to-market and high development cost. The obvious question is: why not extend EWS-based element management to network management? The goal of our work is to answer this question, that is, to build an EWS-based network management framework.

3. Related work

Recently, there have been two promising approaches to Web-based management from industrial standardization bodies: Web-Based Enterprise Management (WBEM) [8] and the Java Management eXtension (JMX) [9]. The WBEM multi-vendor alliance was launched in July 1996 and worked on establishing standards for Web-based network management software. In 1997, WBEM adopted HTTP as its transfer protocol and selected the eXtensible Markup Language (XML) [14] as the representation for management information. The DMTF and WBEM worked together, paving the way for the encoding of the Common Information Model (CIM) schema in XML [14, 15]. The CIM is an object-oriented information model, standardized within the DMTF, for the purpose of providing a conceptual framework within which any management data may be modeled. Allowing CIM information to be represented in the form of XML brings all the benefits of XML and its related technologies to management information that uses the CIM meta-model [15]. The XML encoding specification defines XML elements, written in a Document Type Definition (DTD), which are used to represent CIM classes and instances. The encoded XML message can be encapsulated within HTTP. Further, WBEM defines a mapping of CIM operations onto HTTP that allows implementations of CIM to operate in a standardized manner. Much work in WBEM is currently under way: seventeen working groups are updating specifications. The results from WBEM are fairly stable, but still not quite ready for deployment.

Another promising approach to Web-based management is being realized by Sun: JMX (formerly JMAPI) [9, 16]. Sun announced JMX in order to provide a ubiquitous management framework and promote an abundance of management applications in Java.
Based on the early JMAPI work as well as research taken from Java DMK development, JMX ended public review in July 1999 and is awaiting completion of the reference implementation [16]. The JMX specification defines the interface for basic services such as a registry (MBean server) for MBeans (JavaBeans for management) [9, 10]. These services enable agents to manage their own resources and let managers forward information back and forth between agents and management applications. In the JMX architecture, both services and devices are treated as managed objects [9]. The components, MBeans, can be added and removed as needs evolve. Appropriate protocol adapters can provide a recognizable object to the browser or to the JMX manager, whose specification is under way. JMX depends greatly on Java: in order to be instrumented in accordance with JMX, a resource must be fully written in the Java programming language or at least offer a Java technology-based wrapper, and a Java Virtual Machine is a basic requirement for the management application. This heavy dependency on Java results in less generality.

4. Designed Architecture

4.1 Target domain

As mentioned earlier, our goal is to develop an EWS-based network management system (NMS). The management system assumes that all devices are equipped with an EWS, and it must provide all the functionality and information that an SNMP-based NMS provides, and more. Moreover, low development cost and reduced development time must be considered in the development of the management system. In the end, powerful, user-friendly, ubiquitous, flexible and scalable management interfaces will be offered to the administrator.

There are some issues involved in selecting a target domain. The first issue concerns the organization of the target network. A management target network can be classified into two categories: closed (homogeneous) and open (heterogeneous) networks. A closed target network is composed of devices with the same functionality from different vendors, or of different devices from one vendor. Usually, a management system targeting a closed network is developed as a family of products for managing a single vendor's devices. An open network is composed of devices having different functionalities or made by different vendors. Usually, a management system targeting an open network is deployed as a management platform for managing an enterprise network. Intuitively, it is easier to apply EWS technology to a closed target network than to an open one.

A second issue concerns the extent of a device's computing resources. Network devices have different levels of computing resources, such as CPU speed and RAM and ROM sizes. The functionality of a device is a very important factor in architecture design, since differences in device functionality affect the assignment of management functionality.

A third issue deals with integration with SNMP. SNMP is a very important legacy protocol in systems and network management applications. Even though the target device for management is equipped with an EWS, many development decisions, in both design and implementation, depend on whether the devices have an SNMP agent. Another point to be considered is whether devices having only an SNMP agent must be included in the target managed network.

Considering these issues, we narrow the target network domain down to a closed network with rich computing resources in the devices and no integration with SNMP. The results from the standards bodies introduced before are still incomplete.
There are few commercial devices conforming to the standards, and outputs from EWSs vary in format and are hard to manipulate. With respect to interoperability, it is more realistic to select a closed network as the target domain. Recently, great advancements in hardware technology have made it possible for devices to have richer computing resources, which benefits EWS-based network management development. To avoid duplicated investment in closed targets, we assume that devices are equipped with an EWS only.

4.2 Design concept

Given the defined target domain, we designed the architecture for an EWS-based network management system. In previous work, we proposed an element management architecture having an EWS as its core component, and we developed an HTTP/1.1-compliant embedded Web server (called POS-EWS) that supports the proposed architecture [4, 13]. We then extended the EWS-based element management architecture to a network management architecture.

We applied the thin-client and fat-server paradigm to the architecture, a design concept deduced from the rich computing resources in the devices. The first version extended from the element management architecture was a 2-tier architecture, thereafter modified into a 3-tier architecture as a provision for devices lacking computing resources.

We use Java technology, especially the Java applet. A Java applet is downloaded from the Web server and runs in a Web browser; it is mobile code over the Internet, stored on the Web server and executed in the browser. There is no Java execution environment on the devices. The Java applet has an inherent security restriction: it is prevented from accessing the local disk, executing other programs and making network connections to other hosts by the Java applet security model (the sandbox model) [10]. Code signing extends an applet's capabilities to make network connections with other hosts.

HTTP is used as the transport protocol between the EWS-based agent and the management station. We defined an information encoding scheme for management data; the encoded message is encapsulated in the HTTP message. With this encoding scheme, management operations such as get and set are encoded into the URL, which is part of the HTTP message [5, 11].

Our EWS-based network management system supports four basic management functions: notification, data collection, agent discovery and data setting. Notification determines which events occur in a device on the basis of the event message and customizes the event message to notify the administrator. Data collection gathers management data from the device and stores the collected data in the database; it also performs threshold monitoring and generates threshold events. Agent discovery polls EWS agents to initially discover EWS-equipped devices and then detects configuration and status changes in the network. Data setting provides an administrator with a mechanism for changing the management information of a device.

4.3 Two-tier EWS-based NM architecture

HTTP is a client-server scheme. One of its side effects is that a Web page is served from a single Web server to a Web browser, and content from other servers cannot be included in the displayed page, except for images. For a network management system, this is a very strict limitation: the system must gather management information from multiple devices, and formatted management data from multiple devices may need to be placed on the same page. This is why Java applets are used [10, 12]. Java applets are downloaded by a browser.
Once the applet is loaded, it can control the location from which it receives its data and how to display or manipulate that data. Java applets are by nature cross-platform and act the same within any browser. Fortunately, it is a straightforward task to design an applet that makes connections with multiple devices, if the applet is programmed on the basis of the Java security model and a signing utility.

A Java implementation of an HTTP manager is the key component in the 2-tier architecture. The Java HTTP manager source code is written and compiled to produce a Java HTTP manager applet. This applet is stored on a network device and is transferred by the EWS to the browser over the network at run time. After being loaded into the JVM of a browser, the Java HTTP manager applet communicates with the devices specified in its list to perform a management task. The Management Information Server (MIS) responds to the Java HTTP manager's requests. It performs the basic management functions explained before: notification, data collection and agent registration.

4.4 Communication using HTTP

The Java HTTP manager and the management information server communicate using HTTP. By reusing an existing communication protocol, developers avoid adding a new management-specific protocol. An HTTP/1.1-compliant Web server supports persistent TCP connections [5]. A persistent connection allows multiple requests to be pipelined on a single connection, with a mechanism to encode each response as a series of chunks, making it unnecessary to buffer the entire response before transmission. This mechanism avoids the overhead of frequently setting up and tearing down connections. HTTP is a TCP-based application protocol, and is therefore more reliable than a UDP-based application protocol such as SNMP.

In order to manage network resources using HTTP, the Java HTTP manager can specify a management operation together with the name of the managed resource. We define a mapping between URLs and management operations with the names of managed resources. The mapped URL is compliant with standard URL syntax and can therefore be handled by a conventional Web server and Web browser [11]. We define three management operations: get, set and getnext. The format of the mapped URL syntax is depicted below.

http://host/resource/management-operation?parameter

Management information is encoded into the HTTP data part as chunks of binary data of arbitrary size. The Java HTTP manager sends a management operation request to the management information server [5]. The management information server responds with an HTTP header followed by MIME-typed, proprietarily encoded data. This data can be compressed with gzip if the Web browser supports the decompression capability. The format of the TCP payload of an HTTP message transferring encoded management information is depicted below.

4.5 Three-tier architecture

We modified the two-tier architecture into a three-tier architecture by removing the management information server from the device and putting it into a stand-alone system. The two-tier architecture assumes that the network devices have enough computing resources to support most management functions with the management information server. This assumption is not always true, however, and an administrator may prefer a three-tier architecture for a more stable manager system. A module acting like a browser must be attached to the Management Information Server in order to request raw management information from the network elements.
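The same URL-mapped operations are used at both tiers. As a concrete illustration of the Section 4.4 mapping, here is a minimal Java sketch (Java 9+) of a manager-side get request. The host name, resource and parameter are invented examples, and since the paper's binary response encoding is proprietary, the response is simply read as raw bytes.

```java
/*
 * Illustrative sketch only: issuing a "get" management operation using
 * the URL mapping http://host/resource/management-operation?parameter.
 * Host, resource and parameter names below are invented examples.
 */
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpManagerSketch {
    public static void main(String[] args) throws Exception {
        // get an attribute of interface 1 on a managed device (example names)
        URL op = new URL("http://router1.example.org/interface/get?ifIndex=1");
        HttpURLConnection conn = (HttpURLConnection) op.openConnection();
        conn.setRequestMethod("GET");

        try (InputStream in = conn.getInputStream()) {
            byte[] encoded = in.readAllBytes(); // proprietary-encoded payload
            System.out.println("received " + encoded.length + " bytes, HTTP "
                               + conn.getResponseCode());
        }
        conn.disconnect();
    }
}
```

In the actual system this request would be issued from the signed Java HTTP manager applet over a persistent HTTP/1.1 connection rather than from a standalone program.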
The communication scheme between the Management Information Server and a network element is the same as in the two-tier architecture, as is that between the Java HTTP manager and the Management Information Server. The Management Information Server performs the predefined basic management tasks as usual, using HTTP. When an administrator points his Web browser at the Management Information Server's Web page, he can retrieve the aggregated or processed management information and control a device through the Web page provided by the Management Information Server.

4.6 Management Information Server

This section describes the detailed design of the Management Information Server. The Data Collector gathers management data and performs threshold monitoring; it requests polling data, with polling duration and frequency, from the Polling Engine. The Polling Engine schedules a series of HTTP requests based on the requests from the Data Collector. The Request Handler maintains the communication channels for requests from the management server to the EWS agents; in order to manage multiple channels, this module operates asynchronously. The Event Handler periodically polls each device to discover the existence of events and acts as a buffer if necessary; it logs polled events to the database by means of a Logging module and then forwards them to the Alarm Handler. The Alarm Handler executes scripts or commands for notifying the administrator upon receipt of an event, according to a predefined action schedule. The Discovery Engine polls EWS agents to initially discover EWS-equipped devices. The Map Generator manages and detects the network topology.

5. Conclusion and Future work

There are many benefits in applying Web technology to network management. Most commercial network devices are equipped with an EWS, which is used for element management only. We have proposed an EWS-based network management architecture, assuming that the target network is composed of devices with the same functionality from different vendors, or of different devices from one vendor. We have applied the thin-client and fat-server paradigm to the architecture and used Java technology, especially the Java applet. HTTP is used as the transport protocol between management entities. On the basis of the proposed architecture, we are currently implementing the management system. Our future work involves enhancing our architecture for application to an open (heterogeneous) target network.

References
{"Source-Url": "http://www.apnoms.org/2016/Call_For_Papers/apnoms-IS-sample.pdf", "len_cl100k_base": 4311, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 26800, "total-output-tokens": 5435, "length": "2e12", "weborganizer": {"__label__adult": 0.0003955364227294922, "__label__art_design": 0.0005950927734375, "__label__crime_law": 0.0003342628479003906, "__label__education_jobs": 0.0005970001220703125, "__label__entertainment": 0.00012624263763427734, "__label__fashion_beauty": 0.0001780986785888672, "__label__finance_business": 0.0005006790161132812, "__label__food_dining": 0.0003314018249511719, "__label__games": 0.0004963874816894531, "__label__hardware": 0.01178741455078125, "__label__health": 0.0005383491516113281, "__label__history": 0.00042128562927246094, "__label__home_hobbies": 0.0001194477081298828, "__label__industrial": 0.00093841552734375, "__label__literature": 0.000209808349609375, "__label__politics": 0.00019633769989013672, "__label__religion": 0.0005240440368652344, "__label__science_tech": 0.1807861328125, "__label__social_life": 7.402896881103516e-05, "__label__software": 0.03857421875, "__label__software_dev": 0.76123046875, "__label__sports_fitness": 0.0002453327178955078, "__label__transportation": 0.0007500648498535156, "__label__travel": 0.0002486705780029297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25063, 0.02793]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25063, 0.42343]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25063, 0.91171]], "google_gemma-3-12b-it_contains_pii": [[0, 1327, false], [1327, 4028, null], [4028, 5918, null], [5918, 8689, null], [8689, 11539, null], [11539, 14468, null], [14468, 16897, null], [16897, 18490, null], [18490, 20240, null], [20240, 21463, null], [21463, 22525, null], [22525, 25063, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1327, true], [1327, 4028, null], [4028, 5918, null], [5918, 8689, null], [8689, 11539, null], [11539, 14468, null], [14468, 16897, null], [16897, 18490, null], [18490, 20240, null], [20240, 21463, null], [21463, 22525, null], [22525, 25063, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25063, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25063, null]], "pdf_page_numbers": [[0, 1327, 1], [1327, 4028, 2], [4028, 5918, 3], [5918, 8689, 4], [8689, 11539, 5], [11539, 14468, 6], [14468, 16897, 7], [16897, 18490, 8], [18490, 20240, 9], [20240, 21463, 10], [21463, 22525, 11], [22525, 25063, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25063, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
ec9015f4fb8f31c972f9f2548b0b76f451039653
Evaluating Intel's RealSense SDK 2.0 for 3D Computer Vision Using the RealSense D415/D435 Depth Cameras

By the staff of BDTI
May 2018

OVERVIEW

Today's 2D computer vision applications have been limited by the lack of an important third dimension: depth. Unlike 2D, 3D vision enables machines to accurately understand shapes, sizes and distances, and to maneuver in the real 3D world. But, historically, depth-sensing cameras have been expensive and difficult to use. Intel's RealSense D400 Series depth camera technology represents an important milestone in that it introduces a suite of inexpensive, easy-to-use 3D cameras for both indoor and outdoor use. Furthermore, the technology is offered in a range of scalable hardware configurations to meet market demands from one unit (for developers) to millions of units (for volume production).

BDTI, an independent technology analysis firm, performed a technical evaluation of the Intel RealSense Software Development Kit (SDK) 2.0 for developing 3D vision applications using the RealSense D415/D435 depth cameras. In our evaluation, we sought to understand the ease of use and developer efficiency of developing 3D vision applications using this SDK. To get hands-on with the SDK, we developed a simple, real-world computer vision application that uses depth information to produce real-time information about the physical size of objects in a video stream generated by the depth camera. The application leveraged commonly used computer vision packages, such as OpenCV, and deep learning algorithms for object localization and detection.

Overall, we had a very positive development experience and found the RealSense SDK 2.0 to be a complete and easy-to-use development platform. The SDK supported all of the needed building blocks to develop our application, including good support for OpenCV. Interfacing with the depth cameras was straightforward using the SDK's API. Contributing to the ease of use, the entire SDK is available in open source form, enabling developers to review and experiment with the code.

Contents

1. Introduction
2. About BDTI
3. The Intel RealSense D415/D435 Depth Cameras
4. What's in the Intel RealSense SDK 2.0?
5. BDTI's Evaluation Methodology
6. Step 1: Installation
7. Step 2: Exploration
8. Step 3: Application Design and Development
9. Step 4: Application Testing
10. Conclusions
11. References

1. Introduction

This report presents an independent evaluation of the Intel RealSense SDK 2.0 using the RealSense D415 and D435 depth-sensing cameras. The focus of the evaluation was on the ease of use and developer efficiency of developing 3D vision applications. As a key component of the evaluation, BDTI implemented a simple 3D application that uses depth information to measure the height of objects (in this case, people) in a live video stream generated by the depth camera. In addition to the RealSense SDK 2.0, we leveraged commonly used computer vision packages, such as OpenCV, and deep learning algorithms for object localization and detection.
The target audience for this report is application developers and managers interested in an independent perspective on the development experience using the RealSense SDK 2.0 and the RealSense D415/D435 depth cameras. [1]

2. About BDTI

This independent evaluation was performed by BDTI, a technology analysis and software development firm specializing in embedded computer vision and deep learning applications. BDTI has extensive experience developing, optimizing and deploying computer vision applications across many different platforms. In addition to its software development work, for more than 25 years BDTI has performed in-depth, hands-on evaluations of numerous processors, development kits and tools. For more information about BDTI, please visit https://www.bdti.com/contact. For questions about this report, please email us at info@bdti.com.

3. The Intel RealSense D415/D435 Depth Cameras

Today's 2D vision-based applications have been limited by the lack of an important third dimension: depth. Unlike 2D, 3D vision enables machines to accurately understand shapes, sizes and distances, and to maneuver in the real 3D world. Historically, 3D cameras have been expensive and difficult to use. Intel's RealSense D400 Series depth camera technology represents an important milestone in that it introduces a suite of inexpensive, easy-to-use 3D cameras for both indoor and outdoor use. Furthermore, the technology is offered in a range of scalable hardware configurations to meet market demands from one unit (for developers) to millions of units (for volume production). For developers to get started quickly, the RealSense D415/D435 depth cameras are ready-to-use cameras that plug into a USB port.

Figure 1: The RealSense D415 and D435 Depth Cameras

For higher levels of hardware integration, Intel also offers the RealSense Depth Module D400 Series. These modules feature the same camera technology as the D400 cameras but are offered as a sub-assembly module, enabling developers to integrate depth sensing into their hardware product. For higher-volume applications, developers can choose to integrate the RealSense Vision Processor D4 Series chip directly into their board-level hardware design. The D4 Series chip is also found in the D400 Series cameras and modules, and computes a 3D depth map without the use of a GPU or host processor.

| Feature | D415 Depth Camera | D435 Depth Camera |
|---|---|---|
| Image sensor technology | Rolling shutter | Global shutter |
| Depth field of view (H x V) for 16:9 | Narrow: 63.4° x 40.4° | Wide: 85.2° x 58° |
| Camera dimensions | 99 mm x 20 mm x 23 mm | 90 mm x 25 mm x 25 mm |
| Intended use case | Precise measurement. Narrower FOV results in higher depth resolution. | Low light and wide field of view. Wider FOV enables coverage of more area, resulting in fewer "blind spots." |
| Use environment | Indoor/Outdoor | Indoor/Outdoor |
| Depth technology | Active infrared stereo | Active infrared stereo |
| Depth resolution | Up to 1280 x 720 at 30 frames per second (fps) | Up to 1280 x 720 at 30 frames per second (fps) |
| Maximum range | Approximately 10 meters | Approximately 10 meters |

Figure 2: Specifications for the RealSense Depth Cameras [2]

4. What's in the Intel RealSense SDK 2.0?
The purpose of the RealSense SDK 2.0 is to ease the development of computer vision applications using depth information provided by the RealSense depth cameras. We expect the majority of RealSense SDK 2.0 users will have previous experience developing computer vision applications, but little to no experience with the use of depth information. These developers will want to leverage familiar tools, libraries and frameworks (e.g., OpenCV). There are four major components of the SDK: the librealsense2 library, tools, sample code and wrappers.

- **librealsense2.** This library is the core component of the SDK and provides an API to configure, control and access the streaming data from the depth cameras. The API allows developers to get started with basic camera functionality using the high-level API, or to take full control of all camera settings using the low-level API. [3]
- **Tools.** Provided in the SDK are two application tools and a set of five debug tools. The application tools include the RealSense Viewer (enabling easy visualization of the video and depth streams and setting of the camera's configuration) and the Depth Quality Tool (enabling developers to test the camera's depth quality).
- **Sample code.** There are approximately twenty code examples demonstrating how to use the SDK and the depth cameras for various tasks, including aligning depth frames to their corresponding color frames, displaying the distance from the camera to the object in the center of the image, and using the depth cameras with existing deep neural network (DNN) algorithms.
- **Wrappers.** Support for a broad range of languages and software platforms is provided by the included wrappers, including Python, .NET, Node.js, Robot Operating System (ROS), LabVIEW, Point Cloud Library (PCL) and the Unity gaming platform.

Intel has made the entire SDK and all of its major components available in source code form under the Apache 2.0 license and hosts the SDK on GitHub.

5. BDTI's Evaluation Methodology

The focus of BDTI's evaluation was to explore the overall development experience, including ease of use and developer efficiency. The evaluation explored topics such as:

- How hard is it to get things working out of the box?
- How hard is it to develop a new 3D vision application using the SDK?
- What are the SDK's strengths and weaknesses viewed through the eyes of someone trying to develop a depth-sensing computer vision application?

We did not focus on the technical characteristics of the cameras. For example, we did not measure things like the RMS depth-sensing error of the cameras, their sensitivity under different lighting conditions or their color correctness.

For a realistic assessment of the overall development experience, we followed a typical developer's journey in developing a new application with a new SDK. The journey included installation, exploration, application design, development and testing. To put the SDK through its paces in a real-world use case, BDTI developed a simple 3D application that uses depth information to measure the height of people in a live video stream generated by the depth camera. The people detector we implemented is deep-learning based and could be retrained to detect other objects; one can imagine this application being useful in several types of use cases, such as in a warehouse setting where packages over a certain height are flagged for a different type of transport, or in an amusement park setting where children shorter than 36" are not permitted on a ride.
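As a taste of the high-level librealsense2 API described above, the following minimal C++ sketch, modeled on the SDK's introductory examples, starts the default pipeline and prints the distance to whatever is at the center of the depth image. It is a sketch under default stream settings, not production code.

```cpp
// Minimal librealsense2 sketch: open the default pipeline and report the
// distance (in meters) to the object at the center of the depth frame.
#include <librealsense2/rs.hpp>
#include <iostream>

int main() try {
    rs2::pipeline pipe;   // abstracts device and stream configuration
    pipe.start();         // start streaming with default settings

    for (int i = 0; i < 30; ++i) {
        rs2::frameset frames = pipe.wait_for_frames();      // blocking call
        rs2::depth_frame depth = frames.get_depth_frame();

        int cx = depth.get_width() / 2, cy = depth.get_height() / 2;
        std::cout << "center distance: "
                  << depth.get_distance(cx, cy) << " m\n";
    }
    return 0;
} catch (const rs2::error &e) {
    std::cerr << "RealSense error: " << e.what() << std::endl;
    return 1;
}
```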
Our evaluation was performed by a BDTI engineer experienced in computer vision application development, but with no prior experience with depth cameras.

6. Step 1: Installation

For our hardware platform, we chose to use one of our existing PCs with 32 GB of RAM and a USB 3.0 port. While we did not evaluate other hardware platforms, it is notable that Intel offers unsupported versions of the SDK on smaller platforms such as the Raspberry Pi 3 Model B and Android. To enable extensibility, full source code for the SDK is available for developers needing to support other platforms.

Intel states that the RealSense SDK 2.0 is fully supported on Microsoft Windows and Linux platforms; in addition, there is limited support for macOS. We decided to test the implementation under Linux. Unfortunately, we immediately ran into problems: we discovered that our Linux kernel was newer than what was then supported by the SDK. While there is SDK documentation to inform the developer of the supported kernel versions, we believe that Linux kernel compatibility issues will likely be a recurring problem for developers. According to Intel, the RealSense SDK 2.0 requires that Intel make modifications to the Linux kernel drivers. Since it takes time for the Intel team to add the modifications whenever the newest Linux kernel is released, the version of the kernel supported by the SDK will tend to lag the version of the kernel in the latest Linux distribution. Intel reports that a more universal solution for this type of Linux installation problem is planned.

After installing an older Linux kernel, we completed configuring our system by plugging the D415 depth camera into our USB 3.0 port. We then launched the Viewer application (included with the SDK), which enabled us to visualize video streams with depth information. We repeated the process using the D435 depth camera. Both worked fine. Based on our experience, developers should have no trouble installing the system and visualizing real-time video and depth frames in less than a day.

7. Step 2: Exploration

With a working 3D camera and basic software functionality demonstrated, we began exploring the SDK's tools and sample applications. Included in the SDK are two application tools (the RealSense Viewer and the Depth Quality Tool) and five debug tools. We started our evaluation with the Viewer tool, which allows users to stream and visualize RGBD streams [i.e., RGB images plus depth ("D") information for each pixel] from the camera easily and quickly. We found this tool easy to use and, using it, were able to quickly get depth and color images displayed on the screen.

Intel has organized the provided code samples into three categories: basic, intermediate and advanced. We ran samples from each of these three categories.

- rs-capture (Basic): This sample code demonstrates how to configure the camera for streaming and how to render depth and RGB data to the screen. We ran this sample with no problems. For most applications, developers should explicitly align the depth and color streams before using them (see rs-align).
- rs-align (Intermediate): This sample code demonstrates the alignment of depth frames to their corresponding color frames. We ran this sample with no problems. However, this sample could be made more intuitive for new developers by showing the color and depth frames before and after alignment, and by showing the alignment of the color frame to the depth frame as well as vice versa.
- rs-measure (Advanced): This sample code lets developers measure the distance between two points in the physical world. Included in this example are a half dozen critical measurement concepts and details of performance techniques, such as the use of multiple threads of execution. This sample code ran well with no problems. Entry-level developers would benefit from a simplified version of this sample demonstrating basic 3D measurement concepts.

We were pleasantly surprised to see the "all-in" use of open source for the RealSense SDK 2.0. This is especially attractive for computer vision application developers who are already avid users of open-source tools and libraries such as OpenCV. Making all of the RealSense SDK 2.0 code available in source form, under the friendly and well-known Apache 2.0 license, removes licensing hurdles and makes it easy for developers to review and experiment with the code.

The RealSense GitHub developer portal is well organized, with basic documentation including theoretical material (e.g., technical white papers), practical material (e.g., the troubleshooting guide and API How-To) and reference material (e.g., API architecture, frame management and camera-specific topics). However, for a developer with no prior experience with depth cameras, building a 3D vision application is non-trivial. We experienced a longer-than-desired learning curve on the use of depth cameras. It was a challenge to learn the many configuration concepts (e.g., color and depth frame alignment) and measurement concepts fundamental to depth cameras, and their implications for constructing a 3D vision application.

Developers would benefit from the addition of a well-organized series of basic tutorials designed to rapidly onboard developers to the RealSense platform. These tutorials should include guided learning paths for entry-level to advanced developers, covering topics ranging from prerequisite educational materials for depth cameras to more advanced topics specific to each of the supported environments and platforms (e.g., OpenCV, ROS, LabVIEW and Unity). We would also have appreciated more advanced tutorials offering guided learning paths specific to vertical application solutions (e.g., robotics, drones, AR/VR, etc.). Such advanced tutorials would accelerate the time to first application for developers working in these domains. Intel stated that it is planning to expand its tutorials to meet the needs of both experienced and novice developers.

8. Step 3: Application Design and Development

As mentioned, the application we decided to develop identifies people in an image and measures their height in real time, using depth information. Now, you might wonder, why do we need a depth camera to do this? Can't you just see how tall a person is (in pixels) and go from there? The reason is that, without depth information, you have no way to know how far away a person is from the camera. As a result, a short person standing closer to the camera might appear to be taller than a tall person standing further away. By combining RGB data (i.e., images from the camera) and depth information, we can accurately determine a person's height.

The core functionality of our simple height detection application includes interfacing with the depth cameras to receive live video and depth streams, identifying people in the image, detecting each person's distance from the camera and then calculating the person's height. With this application functionality in mind, we started thinking about how we might implement it.
The popular OpenCV library seemed like a good starting place, as it enables developers to quickly implement and test computer vision algorithms. Fortunately, it turned out that the interoperability between the SDK and OpenCV is well designed. For example, converting a depth frame from the camera into OpenCV matrices is well documented in the OpenCV sample code included with the SDK. With the decision made that our application would leverage OpenCV, we were able to complete our initial design and construct our high-level architecture diagram, as shown in Figure 3.

Figure 3: Diagram of the Application Architecture

Now that we had an overall application design, we were ready to start writing the code. The key functions required by the application include interfacing with the depth camera to capture the color and depth frames, detecting people, determining distance, calculating heights and rendering output.

Our first task was to interface with the depth camera. The RealSense depth cameras can capture color and depth frames continuously. The RealSense SDK 2.0, with the librealsense2 library, provides the API functions needed to align the color and depth frames. The alignment is important because the object detection works on color frames, and we wanted to measure the distance of the same object on the depth frames. Using the API, our application was able to configure the depth camera, capture the color and depth frames, and then align them. We then converted the frames to OpenCV matrices.

The next function we needed to code was the object detector. Selecting an object detector that meets the accuracy, performance and computational constraints of an application can be a very involved and time-consuming process. Fortunately, OpenCV includes a deep learning module (DNN) with support for a number of deep learning frameworks, including Caffe, TensorFlow and others. This DNN API works with pre-trained deep learning models. Sample code included in the SDK demonstrates the interoperability among the depth camera, OpenCV and OpenCV's DNN API to localize and detect an object in the image, as well as how to detect the object's mean distance from the camera. Our application called the OpenCV DNN API to detect people using a pre-trained Caffe model (MobileNet + SSD). [4]

Now that we had developed our people detector, it was time to code our distance detection. Each pixel in the depth frame includes the distance (in meters) from the camera. Since the color and depth frames were aligned, we had the distance for all the pixels within the bounding box. The challenge was to find which pixels belonged to the person and which pixels were associated with other objects in the image. We found that the SDK examples for distance measurement were simplified, and that their results could be obscured by surrounding objects: the examples used either the distance of the center pixel or the mean distance of a region of interest (ROI) as the distance of the object. Unfortunately, these techniques left us with an inconsistent distance measurement.

To improve the consistency of the distance measurement, we implemented a more robust way to measure the distance of each detected person. Each bounding box was an ROI, and we computed a histogram over that ROI on the depth frame. The distance of the histogram peak was deemed the distance of the detected person. We found this approach to be more resistant to occlusions and background pixels.
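To make the approach concrete, here is a hedged C++ sketch of the histogram-peak distance measurement, assuming a person bounding box (x0, y0, x1, y1) already produced by the detector. The depth frames are first aligned to the color frames so that box coordinates index the depth image directly; the 5 cm bin width and the 10 m cap are our illustrative choices, not values prescribed by the SDK.

```cpp
// Sketch: distance of a detected person as the peak of the depth histogram
// computed over the person's bounding box, on frames aligned to color.
#include <librealsense2/rs.hpp>
#include <algorithm>
#include <array>
#include <cstdio>

// Distance (meters) of the depth-histogram peak inside the box.
float box_distance(const rs2::depth_frame &depth,
                   int x0, int y0, int x1, int y1)
{
    constexpr float kBin = 0.05f;       // 5 cm bins (illustrative)
    std::array<int, 200> hist{};        // covers 0-10 m

    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            float d = depth.get_distance(x, y);
            if (d > 0.0f && d < 10.0f)  // skip invalid (zero) pixels
                ++hist[static_cast<int>(d / kBin)];
        }

    auto peak = std::max_element(hist.begin(), hist.end());
    return kBin * static_cast<float>(peak - hist.begin()) + kBin / 2;
}

int main() {
    rs2::pipeline pipe;
    pipe.start();
    rs2::align align_to_color(RS2_STREAM_COLOR);  // align depth to color

    rs2::frameset frames = align_to_color.process(pipe.wait_for_frames());
    rs2::depth_frame depth = frames.get_depth_frame();

    // Hypothetical detector output, for illustration only:
    float d = box_distance(depth, 200, 100, 400, 460);
    std::printf("person distance (histogram peak): %.2f m\n", d);
    return 0;
}
```

In the full application the box coordinates would come from the MobileNet + SSD detector running on the corresponding color frame, converted to a cv::Mat as described above.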
The final piece of functionality we needed to code was our height calculation and output rendering stage. This stage included drawing a bounding box around each detected person and calculating the height of the bounding box (in pixels). Applying our formula, based on the distance (in meters) of the person in the bounding box, the height of the bounding box (in pixels), and the camera's focal length (in pixels), we could then compute the height of the person (in meters). This assumes that the height of the person is represented by the height of the bounding box. The focal length (in pixels) is one of the camera's intrinsics, i.e., a parameter determined by the camera optics and components. The RealSense SDK 2.0 provides an API to access such intrinsics. See Figure 4 and the formula below.

$$PH = \frac{D \cdot bh}{f}$$

where:
- PH = person height (meters)
- D = distance between the camera and the object (meters)
- bh = bounding box height (pixels)
- f = focal length in pixels (obtained from the SDK as shown in the referenced "API How To") [5]

![Figure 4: Person height calculation](image)

9. Step 4: Application Testing

As is typical in developing any application, we developed and tested code concurrently. Our testing objective was to validate our code for these four components: receiving and displaying RGBD streams, object localization and detection (people), distance measurement, and calculating the height of the bounding box. Our tests were performed indoors with the D415 and D435 depth cameras.

We had people stand in front of the camera at known distances. Our application detected each person and displayed the measured distance on the screen, which we confirmed against the ground-truth distance. To get an accurate height measurement, people had to stand far enough from the camera that their whole bodies were seen by the camera. With the D415 depth camera, our application displayed the height of each person, which we then verified against the ground truth. When we repeated the above test using the D435 camera, our application displayed an incorrect height for each person. Despite receiving technical support from Intel, we were ultimately unable to determine the root cause of this problem in the time we had available. For a final implementation using the D435, we would need either to troubleshoot the problem further or "back calculate" the focal length $f$ used in our height calculation formula.

10. Conclusions

Overall, we had a very positive development experience and found the RealSense SDK 2.0 to be an easy-to-use development platform. The SDK supported all the building blocks needed to develop our application, including good support for OpenCV. Interfacing with the depth cameras was straightforward. Based on our experience, developers should have no trouble installing the system and visualizing real-time video and depth frames in less than a day.

Contributing to the ease of use, the entire SDK is available in open source form, enabling developers to review and experiment with the code. Moreover, developers can build the SDK from source code to target non-supported hardware platforms. That said, there is more work to do to improve the SDK's out-of-box experience, as we encountered problems with installing the SDK on our standard Linux platform. More significantly, we experienced a longer-than-desired learning curve because we had no prior experience with depth cameras.
Our developer, who was new to depth cameras, would have greatly benefited from a well-organized series of basic tutorials designed to rapidly onboard developers to the RealSense platform, including prerequisite educational materials for depth cameras. Given that 3D sensing is still relatively new, we expect that this will be the case for many developers.

The other dimension of developer experience that we explored is developer efficiency. For us, efficiency means how quickly a developer can develop a high-quality 3D vision application that meets their business requirements using the contents of the SDK. For our simple 3D application, the SDK aided our developer efficiency by offering a high-level API for interfacing with the depth camera, making our work to interface with the depth camera simple. Another efficiency boost came from the SDK's interoperability with OpenCV, an extensive library of functions that enables developers to easily implement algorithms for vision-based applications.

However, we recognize that our simple 3D vision application is not representative of the sophisticated solutions required by the emerging 3D markets. Missing from the SDK is a series of advanced tutorials offering guided learning paths specific to vertical application solutions (e.g., robotics, drones, AR/VR, etc.). Such tutorials would greatly improve developer efficiency, and thus time-to-deployed application, for developers of these types of applications.

11. References

[1] https://realsense.intel.com
{"Source-Url": "https://www.bdti.com/MyBDTI/pubs/Evaluating-Intel-RealSense-SDK-2.0.pdf", "len_cl100k_base": 5250, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24331, "total-output-tokens": 5672, "length": "2e12", "weborganizer": {"__label__adult": 0.0008754730224609375, "__label__art_design": 0.0012025833129882812, "__label__crime_law": 0.0006890296936035156, "__label__education_jobs": 0.000545501708984375, "__label__entertainment": 0.0001709461212158203, "__label__fashion_beauty": 0.0004529953002929687, "__label__finance_business": 0.0005311965942382812, "__label__food_dining": 0.000640869140625, "__label__games": 0.001399993896484375, "__label__hardware": 0.039764404296875, "__label__health": 0.0009050369262695312, "__label__history": 0.00063323974609375, "__label__home_hobbies": 0.00025534629821777344, "__label__industrial": 0.001628875732421875, "__label__literature": 0.0003104209899902344, "__label__politics": 0.0003476142883300781, "__label__religion": 0.0007472038269042969, "__label__science_tech": 0.18896484375, "__label__social_life": 9.059906005859376e-05, "__label__software": 0.009796142578125, "__label__software_dev": 0.7470703125, "__label__sports_fitness": 0.0006213188171386719, "__label__transportation": 0.001895904541015625, "__label__travel": 0.0003769397735595703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26385, 0.03121]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26385, 0.22463]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26385, 0.92464]], "google_gemma-3-12b-it_contains_pii": [[0, 2064, false], [2064, 5319, null], [5319, 8804, null], [8804, 12833, null], [12833, 17201, null], [17201, 20548, null], [20548, 23560, null], [23560, 26385, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2064, true], [2064, 5319, null], [5319, 8804, null], [8804, 12833, null], [12833, 17201, null], [17201, 20548, null], [20548, 23560, null], [23560, 26385, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26385, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26385, null]], "pdf_page_numbers": [[0, 2064, 1], [2064, 5319, 2], [5319, 8804, 3], [8804, 12833, 4], [12833, 17201, 5], [17201, 20548, 6], [20548, 23560, 7], [23560, 26385, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26385, 0.09524]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
35ef696bc3308a2b9d6ba92299fe1fb0c0752a02
## Course Schedule

<table>
<thead>
<tr><th>Date</th><th>Contents</th><th>Lecturer/TA</th></tr>
</thead>
<tbody>
<tr><td>2/18</td><td>DZPerl editor, debugging, and introduction to final project</td><td>楊永正、劉玉凡</td></tr>
<tr><td>2/25</td><td>Basic Perl review</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>3/3</td><td>Hash, reference, and complex data structure</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>3/10</td><td>Concept of object-oriented programming</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>3/17</td><td>Use Perl and BioPerl modules</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>3/24</td><td>Database (MySQL)</td><td>張傳雄、鄧詠文</td></tr>
<tr><td>3/31</td><td>CGI modules and web services</td><td>張傳雄、鄧詠文</td></tr>
<tr><td>4/7</td><td>Graphics: GD module</td><td>張傳雄、傅瓊玲</td></tr>
<tr><td>4/14</td><td>How to write Perl modules</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>4/21</td><td>XML introduction</td><td>許鈞南、劉玉凡</td></tr>
<tr><td>4/28</td><td>XML: KGML, etc.</td><td>張佑誠、劉玉凡</td></tr>
<tr><td>5/5</td><td>Use YMBC modules</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>5/12</td><td>Introduction to Ensembl</td><td>張傳雄、賴俊吉</td></tr>
<tr><td>5/19</td><td>Project discussion and progress report</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>5/26</td><td>No class (aquatic sports day)</td><td></td></tr>
<tr><td>6/2</td><td>Project discussion and progress report</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>6/9</td><td>Project discussion and progress report</td><td>張傳雄、劉玉凡</td></tr>
<tr><td>6/16</td><td>Project final presentation</td><td>張傳雄、楊永正</td></tr>
</tbody>
</table>

Reference book
- 天璣圖書有限公司
- No. 107, Sec. 1, Chongqing South Road, Taipei
- (02)2331-8868

Reading in reference book
- Chapter 1. An Overview of Perl
- Chapter 2. Bits and Pieces
- Chapter 3. Unary and Binary Operators
- Chapter 4. Statements and Declarations

Operating environment and software
- Operating environment
  - Microsoft XP
  - Linux RedHat 9.0
- Software
  - ActivePerl (freeware) http://www.activestate.com/Products/ActivePerl/
  - DZPerl (shareware) http://www.dzsoft.com/dzperl.htm

Write a program to count and display the differing sites between "TW.txt" and "HK.txt"
- Download sequences "33411399" and "30023963" from the NCBI Entrez database and save them as "TW.txt" and "HK.txt", respectively.
- Display the total nucleotide number for each sequence and the positions of the nucleotides that differ between "TW.txt" and "HK.txt".
The format for homework

```
Student_ID: g389030
Homework_ID: 1 (assignment number)
Program_Name: homework1.pl
__Start__
#!/usr/bin/perl -w
__End__
Program_Name: homework2.pl
__Start__
#!/usr/bin/perl -w
__End__
```

The results of Test1

```
Taiwan SARS: 29727 bp
HongKong SARS: 29742 bp

1782    T => C
2601    T => C
3852    C => T
7930    G => A
8387    G => C
8417    G => C
11493   T => C
13494   G => A
13495   T => G
18065   G => A
25569   T => A
26477   C => T
26600   C => T
29728     => A
29729     => A
29730     => A
29731     => A
29732     => A
29733     => A
29734     => A
29735     => A
29736     => A
29737     => A
29738     => A
29739     => A
29740     => A
29741     => A
29742     => A
```

The answer of Test1

```perl
#!/usr/bin/perl
# Read the Taiwan SARS sequence into a single string
open TW, "C:/TW.txt";
while ($line = <TW>) {
    chomp($line);
    $TW_seq .= $line;
}
close TW;
@list_TW = split(//, $TW_seq);   # split the string into single characters

# Read the Hong Kong SARS sequence the same way
open HK, "C:/HK.txt";
while ($line = <HK>) {
    chomp($line);
    $HK_seq .= $line;
}
close HK;
@list_HK = split(//, $HK_seq);

# Compare the two sequences position by position
for my $pos (0..$#list_HK) {
    if ($list_TW[$pos] ne $list_HK[$pos]) {
        $real_pos = $pos + 1;    # 1-based sequence position
        print $real_pos, "\t", $list_TW[$pos], " => ", $list_HK[$pos], "\n";
    }
}
```

## Concept of Test1

```
index:    0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21
@list_TW: A  T  A  T  T  T  A  G  G  T  T  T  T  T  T  T  A  C  C  T  A  C
@list_HK: A  T  A  T  T  T  A  G  G  T  T  T  T  T  T  T  A  C  C  T  A  C
```

(Diagram: the two arrays are compared element by element; any position where the elements are not equal, e.g. "20: C => A" in the slide, is reported as a difference.)
## Kimura two-parameter

<table>
<thead>
<tr><th></th><th>A</th><th>C</th><th>G</th><th>T</th></tr>
</thead>
<tbody>
<tr><td>A</td><td>-</td><td>Transversion</td><td>Transition</td><td>Transversion</td></tr>
<tr><td>C</td><td>Transversion</td><td>-</td><td>Transversion</td><td>Transition</td></tr>
<tr><td>G</td><td>Transition</td><td>Transversion</td><td>-</td><td>Transversion</td></tr>
<tr><td>T</td><td>Transversion</td><td>Transition</td><td>Transversion</td><td>-</td></tr>
</tbody>
</table>

(Purine-to-purine (A↔G) and pyrimidine-to-pyrimidine (C↔T) changes are transitions; purine-to-pyrimidine changes are transversions.)

K2P model

<table>
<thead>
<tr><th></th><th>A</th><th>C</th><th>G</th><th>T</th></tr>
</thead>
<tbody>
<tr><td>A</td><td>0.6</td><td>0.1</td><td>0.2</td><td>0.1</td></tr>
<tr><td>C</td><td>0.1</td><td>0.6</td><td>0.1</td><td>0.2</td></tr>
<tr><td>G</td><td>0.2</td><td>0.1</td><td>0.6</td><td>0.1</td></tr>
<tr><td>T</td><td>0.1</td><td>0.2</td><td>0.1</td><td>0.6</td></tr>
</tbody>
</table>

Homework 1
- Download sequences "33411399" and "30023963" from the NCBI Entrez database and save them as "TW.txt" and "HK.txt", respectively.
- Calculate the ratio of transitions to transversions between the two sequences, based on the pairwise alignment:

$$R = \frac{\#\,\text{transitions}}{\#\,\text{transversions}}$$

Email: perl@ym.edu.tw

Numbers
- All numbers have the same format internally
- Floating-point literals
  - 1.25
  - 255.000
  - 7.25e45
  - -1.2e-23
- Integer literals
  - 0
  - 2003
  - -40
  - 61298040283768 (61_298_040_283_768)
- Nondecimal integer literals
  - 0377 (octal, same as 255 decimal)
  - 0xff (hexadecimal, also 255 decimal)
  - 0b11111111 (binary, also 255 decimal)

Numeric operators
- addition (2 + 3)
- subtraction (5.1 - 2.4)
- multiplication (3 * 12)
- division (14 / 2)
- modulus (10 % 3)
- exponentiation (2**3)

Strings
- Typical strings are printable sequences of letters, digits, and punctuation in the ASCII 32 to ASCII 126 range.
- Single-quoted string literals
  - 'fred'
  - 'Don\'t let an apostrophe end this string prematurely!'
  - 'hello world\n' (the \n is not a newline here)
  - '\''
- Double-quoted string literals
  - "barney"
  - "hello world\n"
  - "coke\tsprite"
- Double-quoted string backslash escapes:

<table>
<thead>
<tr><th>Escape</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>\n</td><td>Newline</td></tr>
<tr><td>\r</td><td>Carriage return</td></tr>
<tr><td>\t</td><td>Tab</td></tr>
<tr><td>\f</td><td>Formfeed</td></tr>
<tr><td>\b</td><td>Backspace</td></tr>
<tr><td>\v</td><td>Vertical tab</td></tr>
<tr><td>\a</td><td>Bell</td></tr>
<tr><td>\e</td><td>Escape</td></tr>
<tr><td>\001</td><td>Octal ASCII value (here Ctrl-A)</td></tr>
<tr><td>\x20</td><td>Hex ASCII value (here space)</td></tr>
<tr><td>\cD</td><td>Control character (here Ctrl-D)</td></tr>
<tr><td>\\</td><td>Backslash</td></tr>
<tr><td>\&quot;</td><td>Double quote</td></tr>
<tr><td>\l</td><td>Lowercase next letter</td></tr>
<tr><td>\L</td><td>Lowercase all following letters until \E</td></tr>
<tr><td>\u</td><td>Uppercase next letter</td></tr>
<tr><td>\U</td><td>Uppercase all following letters until \E</td></tr>
<tr><td>\E</td><td>Terminate \L or \U</td></tr>
</tbody>
</table>

String operators
- String concatenation ("Hello" . "Liu")
- String repetition ("mcu" x 3)
- Automatic conversion between numbers and strings

Scalar variables
- A variable is a name for a container that holds one or more values.
- Scalar variable names begin with a dollar sign ($) followed by a Perl identifier:
  - A letter or underscore, then possibly more letters, digits, or underscores
  - $fred
  - $a_very_long_variable_that_ends_in_1
  - $_name
  - $123abc # not a valid identifier (cannot start with a digit)
- Choosing good variable names
  - Most variable names are lowercase by convention:
  - $r # not very descriptive
  - $super_bowl # a better, more descriptive name
  - $ARGV # a special identifier used by Perl itself

Scalar assignment
- The most common operation on a scalar variable is assignment, which gives a value to a variable:
  - $fred = 17;
  - $barney = 'hello';
  - $barney = $fred + 3;
  - $barney = $barney * 2;
- Binary assignment operators:
  - $fred = $fred + 5; is the same as $fred += 5;
  - $barney = $barney * 2; is the same as $barney *= 2;
  - $str = $str . "hello"; is the same as $str .= "hello";

Interpolation of scalar variables into strings
- A scalar variable reference inside a double-quoted string literal is substituted with its value.
- Use `{}` around the name of a variable to delimit it.

```perl
$name = "Bob Tarr";
$str1 = "My name is $name";    # $str1 is "My name is Bob Tarr"
$str2 = "My name is $names";   # $str2 is "My name is " ($names is undefined)
$str3 = "My name is ${name}s"; # $str3 is "My name is Bob Tarrs"
$x = '$name';                  # single quotes: no interpolation
$y = "$x";                     # $y is the literal string '$name'
```

Operator precedence and associativity
- 2 + 3 * 4 # 14, because * binds more tightly than +
- (2 + 3) * 4 # 20

Homework 2: The problem of associativity
- 4**3**2
- [A]: 64**2
- [B]: 4**9

Email: perl@ym.edu.tw

Comparison operators

<table>
<thead>
<tr><th>Comparison</th><th>Numeric</th><th>String</th></tr>
</thead>
<tbody>
<tr><td>Equal</td><td>==</td><td>eq</td></tr>
<tr><td>Not equal</td><td>!=</td><td>ne</td></tr>
<tr><td>Less than</td><td>&lt;</td><td>lt</td></tr>
<tr><td>Greater than</td><td>&gt;</td><td>gt</td></tr>
<tr><td>Less than or equal to</td><td>&lt;=</td><td>le</td></tr>
<tr><td>Greater than or equal to</td><td>&gt;=</td><td>ge</td></tr>
</tbody>
</table>

- 35 != 30 + 5 # false
- 35 == 35.0 # true
- '35' eq '35.0' # false (string comparison)
- 'fred' lt 'barney' # false
- 'fred' eq 'fred' # true

The if else control structure
- Once you can compare two values, you'll probably want your program to make decisions based upon that comparison.

```perl
#!/C:/Perl/bin/perl.exe
#
# The if control structure
# (assume $name was set earlier)
#
if ($name eq 'fred') {
    print "Hello world! " . $name . "\n";
}

if ($name eq 'fred') {
    print "Hello world! " . $name . "\n";
} else {
    print "The name is not fred!\n";
}
```

The chomp operator
- If a string ends in a newline character, chomp removes the newline.

```perl
#!/C:/Perl/bin/perl.exe
#
# The chomp operator
#
$text = "a line of text\n";
chomp($text);   # $text is now "a line of text"
```

The while control structure
- The while loop repeats a block of code as long as a condition is true.

```perl
#!/C:/Perl/bin/perl.exe
#
# The while operator
#
$count = 0;
while ($count < 10) {
    $count += 1;
    print "count is now $count\n";
}
```

Get user input
- You can read a value or string from the keyboard into a Perl program by using the "line-input" operator <STDIN>.

```perl
#!/C:/Perl/bin/perl.exe
#
# Reading user input with <STDIN>
# Run in a command prompt
#
print "Enter your name? ";
$name = <STDIN>;
if ($name eq "\n") {
    print "That was just a blank line\n";
} else {
    print "Hello world! " . $name . "\n";
}
```
".$name."\n"; } ``` Homework 3 - Write a program to do complementary sequence - Function 1: Could let user key in the sequence (either upper or lower case) - Function 2: Error check (not ATGC bases) - Function 3: Display the user key in sequence and complementary sequence Email: perl@ym.edu.tw What is Lists? - A list is a sequence of scalar values enclosed in parentheses. - A list is an ordered collection of scalars. - Each element of an array or list is a separate scalar variable with an independent scalar value. - Example - (35, 12.4, "hello", 1.72e30, "bye\n") <table> <thead> <tr> <th>Element number</th> <th>Values</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>35</td> </tr> <tr> <td>1</td> <td>12.4</td> </tr> <tr> <td>2</td> <td>&quot;hello&quot;</td> </tr> <tr> <td>3</td> <td>1.72e30</td> </tr> <tr> <td>4</td> <td>&quot;bye\n&quot;</td> </tr> </tbody> </table> Accessing elements of an Array Fred = ("yabba", "dabba", "doo", "hello", "world"); Fred[0] = "yabba"; Fred[1] = "dabba"; Fred[2] = "doo"; Homework 4 Fred[3] = ? Fred[4] = ? Email: perl@ym.edu.tw Special Array Indices - $rocks[0] = 'bedrock'; - $rocks[1] = 'slate'; - $rocks[2] = 'lava'; - $rocks[3] = 'crushed rock'; - $rocks[99] = 'schist'; - $end = $#rocks; # 99 last element’s index - $number_of_rocks = $#rock+1 # 100 - $#rocks = 2 # forget all rocks after ‘slate’ - $#rocks = 99 # add 97 undef elements - $rocks[$#rocks] = 'hard rock'; - $rock[-1] = 'hard rock'; - $dead_rock = $rocks[-100] # gets ‘bedrock’ - $rocks[-200] = 'crystal' # fatal error Practice by yourself - random number generator ``` #!/C:/Perl/bin/perl.exe # collect the random numbers $count = 1; while ($count <= 100) { $randnum = int( rand(10) ) + 1; $randtotal[$randnum] += 1; $count += 1; } # print the total of each number $count = 1; print ("Total for each number:\n"); while ($count <= 10) { print ("\tnumber $count: $randtotal[$count]\n"); $count += 1; } ``` List literals - (1,2,3) # list of three values 1, 2, and 3 - (1,2,3,) # the same three values (the trailing comma is ignored) - (“fred”, 4.5) # two values, “fred” and 4.5 - () # empty list – zero element - (1..100) # list of 100 integers - (1..5) # same as (1,2,3,4,5) - (1.7..5.7) # same as (1,2,3,4,5) - (5..1) # empty list - (1, 2..6, 10, 12) # same as (1,2,3,4,5,6,10,12) - ($a..$b) # range determine by current values of $a and $b The qw shoutcut qw stands for “quoted words” or “quoted by whitespace” Example qw /fred barney betty wilma dino/ eq(“fred”, “barney”, “betty”, “wilma”, “dino”) List Assignment - ($fred, $barney, $dino) = ("flintstone", "rubble", undef) - ($fred, $barney) = ($barney, $fred) # swap those values - ($fred, $barney) = qw/flintstone rubble slate granite/ - @rocks = qw/talc mica feldspar quartz/ - ($rocks[0], $rocks[1], $rocks[2], $rocks[3]) = qw/talc mica feldspar quartz/ - @tiny = (1..3) - @stuff = (6,9) - @all = (@tiny, @stuff) - @all = (1,2,3,6,9) - @copy = @quarry # copy a list from one array to another Homework 5 - List Assignment - \( @\text{tiny} = (1..16) \) - \( @\text{stuff} = (\text{"nymu"}, \text{"taigen"}, \text{"mcu"}) \) - \( @\text{all} = (@\text{tiny}, @\text{stuff}) \) Please answer following questions - \$\text{all}[1] = ? \) - \$\text{all}[-1] = ? \) - \$\text{all}[12] = ? \) - \$\text{all}[14] = ? 
The pop and push operators
- The pop operator removes an element from the right side (end) of an array:
  - @array = (5,6,7,8,9);
  - $fred = pop(@array); # $fred gets 9, @array now has (5,6,7,8)
  - $barney = pop(@array); # $barney gets 8, @array now has (5,6,7)
  - pop(@array); # @array now has (5,6) (7 is discarded)
- The push operator adds new items to the end of an array:
  - @array = (5,6);
  - push(@array, 0); # @array now has (5,6,0)
  - push(@array, 1..4); # @array now has (5,6,0,1,2,3,4)
  - @others = (9, 10, 11);
  - push(@array, @others); # @array now has (5,6,0,1,2,3,4,9,10,11)

The shift and unshift operators
- The shift operator removes an element from the left side (front) of an array:

```perl
@array = qw(dino fred barney);
$a = shift(@array);  # $a gets "dino", @array now has ("fred", "barney")
$b = shift @array;   # $b gets "fred", @array now has ("barney")
shift(@array);       # @array is now empty
```

- The unshift operator adds new items to the front of an array:

```perl
@array = (1,2);
unshift(@array, 5);       # @array now has (5,1,2)
unshift(@array, 6);       # @array now has (6,5,1,2)
@others = (3..4);
unshift(@array, @others); # @array now has (3,4,6,5,1,2)
```

The foreach control structure
- It's handy to be able to process an entire array or list, so Perl provides a control structure to do just that.
- The foreach loop steps through a list of values, executing one iteration (time through the loop) for each value.

```perl
#!/C:/Perl/bin/perl.exe
@rocks = qw/ bedrock slate lava /;
foreach $rock (@rocks) {
    # $rock aliases each element in turn,
    # so these changes modify the array itself
    $rock = "\t$rock";
    $rock .= "\n";
}
print "The rocks are: \n", @rocks;
```

Perl's favorite default: $_
- If you omit the control variable from the beginning of the foreach loop, Perl uses its favorite default variable, $_.
- Example:

```perl
#!/C:/Perl/bin/perl.exe
foreach (1..10) {
    print "I can count to $_!\n";
}
```

Output:

```
I can count to 1!
I can count to 2!
I can count to 3!
I can count to 4!
I can count to 5!
I can count to 6!
I can count to 7!
I can count to 8!
I can count to 9!
I can count to 10!
```

The reverse operator
- The reverse operator takes a list of values (which may come from an array) and returns the list in the opposite order.
  - @fred = 6..10;
  - @barney = reverse(@fred); # gets 10, 9, 8, 7, 6
  - @wilma = reverse 6..10; # gets 10, 9, 8, 7, 6
  - @fred = reverse @fred; # puts the reversed list (10, 9, 8, 7, 6) back into @fred

The sort operator
- The sort operator takes a list of values (which may come from an array) and sorts them in the internal character ordering. For ASCII strings, that is ASCIIbetical order.

```perl
@rocks = qw/ bedrock slate rubble granite /;
@sorted = sort(@rocks);      # gets "bedrock", "granite", "rubble", "slate"
@back = reverse sort @rocks; # gets "slate", "rubble", "granite", "bedrock"
@rocks = sort @rocks;        # puts the sorted result back into @rocks
@number = sort (97..102);    # gets 100, 101, 102, 97, 98, 99 (string order!)
```

Sort numerically ascending or descending
- Sort numerically ascending:

```perl
#!/C:/Perl/bin/perl.exe
@list = (5, 10, 12, 34, 12);
@sorted = sort { $a <=> $b } @list;
```

- Sort numerically descending:

```perl
#!/C:/Perl/bin/perl.exe
@list = (5, 10, 12, 34, 12);
@sorted = sort { $b <=> $a } @list;
```

More advanced features of the if control structure
- A simple if statement in Perl has the following syntax:

```perl
#!/C:/Perl/bin/perl.exe
$test_var = 0;
if ( $test_var < 1 ) {
    print "test_var is less than one\n";
}
```

- In this example, Perl executes the print statement if `$test_var` is less than one.
In this case the condition is true, so the statement is printed. But what if the if statement had evaluated to false? Well, nothing would have happened.

More advanced features of the if control structure
- If you want to find out how $test_var is being evaluated, add an else block, which divides the if statement into two sections: one for true, and one for false.

```perl
#!/C:/Perl/bin/perl.exe
$test_var = 6;
if ( $test_var < 1 ) {
    print "test_var is less than one\n";
} else {
    print "test_var is greater than one\n";
}
```

More advanced features of the if control structure
- You can optionally expand on this logic by adding an elsif block. If the condition evaluates as false, then Perl drops to the elsif block and evaluates another conditional. If that is also false, Perl drops to the else block and executes the code there.

The split operator for turning a string into a list
- @fields = split /separator/, $string;

Example:

```perl
@fields = split /:/, "abc:def:g:h";
# @fields = ("abc", "def", "g", "h")
```

Homework 6
- Write a program to count the A, T, G, C numbers of a keyed-in sequence
  - Function 1: user keys in a sequence (either upper or lower case)
  - Function 2: error check
  - Function 3: display the total nucleotide count and the A, T, G, C numbers

```
Please key in your DNA sequence? ATGCATGCAATGCATT
You keyed in the DNA sequence: ATGCATGCAATGCATT
The length of the sequence is: 16 nucleotides
Total 5 are A symbol.
Total 5 are T symbol.
Total 3 are G symbol.
Total 3 are C symbol.
Press any key to continue...
```

Email: perl@ym.edu.tw
{"Source-Url": "http://binfo.ym.edu.tw:80/ib/courses/binfo_sp92/Week2.pdf", "len_cl100k_base": 7926, "olmocr-version": "0.1.48", "pdf-total-pages": 47, "total-fallback-pages": 0, "total-input-tokens": 62718, "total-output-tokens": 8750, "length": "2e12", "weborganizer": {"__label__adult": 0.0005354881286621094, "__label__art_design": 0.0007443428039550781, "__label__crime_law": 0.0004024505615234375, "__label__education_jobs": 0.03656005859375, "__label__entertainment": 0.00013935565948486328, "__label__fashion_beauty": 0.00023365020751953125, "__label__finance_business": 0.00033926963806152344, "__label__food_dining": 0.0005450248718261719, "__label__games": 0.0009136199951171876, "__label__hardware": 0.0013761520385742188, "__label__health": 0.0006337165832519531, "__label__history": 0.0004436969757080078, "__label__home_hobbies": 0.00029659271240234375, "__label__industrial": 0.0006575584411621094, "__label__literature": 0.0006389617919921875, "__label__politics": 0.00031685829162597656, "__label__religion": 0.0006833076477050781, "__label__science_tech": 0.0171356201171875, "__label__social_life": 0.0003612041473388672, "__label__software": 0.01139068603515625, "__label__software_dev": 0.92431640625, "__label__sports_fitness": 0.0003647804260253906, "__label__transportation": 0.0005049705505371094, "__label__travel": 0.0003135204315185547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19707, 0.05615]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19707, 0.50798]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19707, 0.67285]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 1512, false], [1512, 1739, null], [1739, 1909, null], [1909, 2158, null], [2158, 2501, null], [2501, 2699, null], [2699, 3292, null], [3292, 3759, null], [3759, 4496, null], [4496, 5062, null], [5062, 5402, null], [5402, 5760, null], [5760, 5913, null], [5913, 6294, null], [6294, 7552, null], [7552, 7691, null], [7691, 8277, null], [8277, 8672, null], [8672, 9139, null], [9139, 9339, null], [9339, 9965, null], [9965, 10353, null], [10353, 10562, null], [10562, 10817, null], [10817, 11193, null], [11193, 11476, null], [11476, 11984, null], [11984, 12183, null], [12183, 12698, null], [12698, 13107, null], [13107, 13554, null], [13554, 13717, null], [13717, 14173, null], [14173, 14519, null], [14519, 15133, null], [15133, 15729, null], [15729, 16167, null], [16167, 16620, null], [16620, 17003, null], [17003, 17485, null], [17485, 17845, null], [17845, 18306, null], [18306, 18693, null], [18693, 19008, null], [19008, 19192, null], [19192, 19707, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 1512, true], [1512, 1739, null], [1739, 1909, null], [1909, 2158, null], [2158, 2501, null], [2501, 2699, null], [2699, 3292, null], [3292, 3759, null], [3759, 4496, null], [4496, 5062, null], [5062, 5402, null], [5402, 5760, null], [5760, 5913, null], [5913, 6294, null], [6294, 7552, null], [7552, 7691, null], [7691, 8277, null], [8277, 8672, null], [8672, 9139, null], [9139, 9339, null], [9339, 9965, null], [9965, 10353, null], [10353, 10562, null], [10562, 10817, null], [10817, 11193, null], [11193, 11476, null], [11476, 11984, null], [11984, 12183, null], [12183, 12698, null], [12698, 13107, null], [13107, 13554, null], [13554, 13717, null], [13717, 14173, null], [14173, 14519, null], [14519, 15133, null], [15133, 15729, null], [15729, 16167, null], [16167, 
16620, null], [16620, 17003, null], [17003, 17485, null], [17485, 17845, null], [17845, 18306, null], [18306, 18693, null], [18693, 19008, null], [19008, 19192, null], [19192, 19707, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19707, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19707, null]], "pdf_page_numbers": [[0, 0, 1], [0, 1512, 2], [1512, 1739, 3], [1739, 1909, 4], [1909, 2158, 5], [2158, 2501, 6], [2501, 2699, 7], [2699, 3292, 8], [3292, 3759, 9], [3759, 4496, 10], [4496, 5062, 11], [5062, 5402, 12], [5402, 5760, 13], [5760, 5913, 14], [5913, 6294, 15], [6294, 7552, 16], [7552, 7691, 17], [7691, 8277, 18], [8277, 8672, 19], [8672, 9139, 20], [9139, 9339, 21], [9339, 9965, 22], [9965, 10353, 23], [10353, 10562, 24], [10562, 10817, 25], [10817, 11193, 26], [11193, 11476, 27], [11476, 11984, 28], [11984, 12183, 29], [12183, 12698, 30], [12698, 13107, 31], [13107, 13554, 32], [13554, 13717, 33], [13717, 14173, 34], [14173, 14519, 35], [14519, 15133, 36], [15133, 15729, 37], [15729, 16167, 38], [16167, 16620, 39], [16620, 17003, 40], [17003, 17485, 41], [17485, 17845, 42], [17845, 18306, 43], [18306, 18693, 44], [18693, 19008, 45], [19008, 19192, 46], [19192, 19707, 47]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19707, 0.18693]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
1f95235ad6b1208c686b1ff970432eaf9e670a89
PIT-HOM: an Extension of Pitest for Higher Order Mutation Analysis

Thomas LAURENT
Complex Software Lab, University College Dublin, Belfield, Ireland & Lero, The Irish Software Research Centre
thomas.laurent@ucdconnect.ie

Anthony VENTRESQUE
Complex Software Lab, University College Dublin, Belfield, Ireland & Lero, The Irish Software Research Centre
anthony.ventresque@ucd.ie

Abstract—Mutation testing is a well-known, effective, fault-based testing criterion. First order mutation introduces defects in the form of a single small syntactic change. While the technique has been shown to be effective, it has some limits. Higher order mutation, where the faults introduced include multiple changes, has been proposed as a way to address some of these limits. Although the technique has shown promising results, there is no practical tool available for the application and study of higher order mutation on Java programs. In this paper we present PIT-HOM, an extension of Pitest (PIT) for higher order mutation. Pitest is a practical mutation analysis tool for Java, applicable on real-world codebases. PIT-HOM combines mutants in the same class to create higher order mutants of user-defined orders; it runs the mutants and reports the results in an easy-to-process format. We validate PIT-HOM using two small Java programs and report its performance as well as some characteristics of the mutants it creates.

Index Terms—Mutation analysis, Tool, Higher order mutation, Pitest

I. INTRODUCTION

Mutation analysis is a well-known, fault-based testing criterion. Mutation analysis creates mutant versions of the system under test (SUT), often by introducing a single syntactic change, and evaluates the capacity of a test suite to detect the difference between the SUT and the mutants. It has been the focus of much work in the last few decades [1] and has shown its effectiveness [2]. In particular, mutation analysis has been applied to the Java language, and many tools have been developed and made available for the mutation of Java programs [3]–[6]. The availability of these tools encouraged the development of the technique in both research and industry.

Despite all the work done on it, and the effectiveness of the technique, mutation analysis still suffers from several problems. The two main setbacks for the adoption of mutation analysis are its cost and the equivalent mutant problem. As mutation analysis creates a potentially large number of mutants against which tests must be executed, it is a computationally expensive testing criterion and often provides delayed feedback. The equivalent mutant problem refers to mutants that, although syntactically different from the original system, are semantically equivalent, and thus not detectable by any test. These equivalent mutants are not detectable automatically and are left to the tester to manually analyse, creating noise in the results of the mutation analysis and adding to the human cost of using the technique.

Higher order mutation analysis, where the injected faults are composed of several syntactic changes, has been proposed to alleviate these problems [7]. Although the technique has seen much interest in recent years [8], no practical tool for higher order mutation of Java programs has been made readily available. This absence of open tools hinders the progress of research and adoption of the technique in industry. In this paper, we propose PIT-HOM, an extension for higher order mutation of the well-known Java mutation tool PIT [6].
PIT is an efficient tool for first order Java bytecode mutation that integrates with many modern build systems. PIT is used both in research [9], [10] and industry [11]. We extend PIT with higher order mutation capabilities, allowing the combination of single order mutants located in the same class. We validate PIT-HOM using two small programs and make it available for further research and extension at https://github.com/ucd-csl/pitest.

The remainder of this paper is structured as follows. First we introduce background and related work on higher order mutation in Section II, then we describe PIT and our extension of the tool in Section III. We describe our validation of PIT-HOM and report our results in Section IV, and discuss the results in Section V. Finally, we conclude in Section VI.

II. BACKGROUND AND RELATED WORK

In this section, we introduce background and related work on higher order mutation. We first review the history of the technique, then its benefits, before mentioning previous tools developed for higher order mutation.

Higher order mutation analysis is an extension of classic, first order, mutation analysis. In classic mutation analysis, we only consider mutants consisting of one syntactic change, hence the term first order. Higher order mutation analysis considers mutants that are composed of multiple changes.

Historically, the field of mutation analysis was focused on First Order Mutants (FOMs). As the number of Higher Order Mutants (HOMs) grows combinatorially with the number of first order mutants, it was considered that there were too many HOMs for them to be usable. Additionally, two firmly held beliefs worked against higher order mutation in the common wisdom: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis states that, as programmers can be considered to be competent, they will not make big mistakes. This means that, even if a fault is introduced by a programmer, the faulty version of the program will be syntactically very close to the correct program. The coupling effect [12], [13] is the observation that tests designed to detect simple faults will detect many complex faults, i.e. that complex faults are coupled with simple ones. According to the competent programmer hypothesis and the coupling effect, single order mutation was enough: simple mutants would resemble the small faults made by developers and any complex faults would likely be coupled with the simple mutants anyway.

In [7], Harman et al. rehabilitate HOMs by tackling the "myths" of higher order mutation testing. First they argue that the competent programmer hypothesis is misinterpreted. Although they agree that programmers are competent, they say this means that programmers produce software that is close in behaviour to a correct program, and not software that is "within a few keystrokes of correctness". They also refer to studies [14]–[16] showing that many faults come from misunderstanding of requirements, and are larger than the hypothesis suggests. Then they say that, although the coupling effect stands, it is not a valid argument against using HOMs. In [13], they agree that "Complex faults are coupled to simple ones in such a way that test data which find all simple ones will detect a high percentage of complex faults.", but suggest that, as there are very many HOMs, having only a small percentage of uncoupled HOMs still makes them valuable in absolute numbers.
The final myth against higher order mutation that the authors tackle is that the number of HOMs makes them unusable. They show that, using Search Based Software Engineering (SBSE), it is possible to navigate the very large search space that is the space of HOMs, and to effectively use higher order mutation analysis.

Harman et al. also explore an interesting property of HOMs in [7]: subsumption. They define a HOM as subsuming its component FOMs when the set of tests killing the HOM is smaller than the union of the sets of tests killing its component FOMs. A HOM strongly subsumes its component FOMs if the set of tests killing it is included in the intersection of the sets of tests killing its component FOMs. Based on these definitions, the authors propose a classification of HOMs as non-subsuming, weakly subsuming, or strongly subsuming. They further explore strongly subsuming higher order mutants and their effect on the mutation analysis process in [17]. They show that using subsuming HOMs instead of FOMs reduces the number of mutants to analyse by 35 to 45% while improving the test efficiency.

The improved efficiency provided by HOMs in [17] can be explained by HOMs in which constituent FOMs interact to create new behaviours. These new behaviours in turn exercise the tests in new ways and improve the efficiency of a mutation-adequate test suite. Omar et al. explore the construction of such mutants in [18] and coin the phrase "subtle mutants" to refer to them. They find a large number of subtle HOMs for 10 projects, with subtle HOMs of order more than 8, showing how complex interactions between multiple mutants can generate new behaviours.

Although higher order mutation has been the focus of much work, only a few tools are available that implement the technique. MILU [19] is a higher order mutation tool for the C language proposed by Jia et al. Omar et al. introduce HOMAJ [20], a tool for higher order mutation of Java code that integrates several search based mutant selection techniques. However, the tool is not publicly available. More recently, LittleDarwin [21], a higher order mutation tool for Java, was released, although the authors report longer mutation analysis run-times than with PIT. We were unable to find precise documentation regarding its HOM features on its website and to compare it to our tool.

III. PIT AND PIT-HOM

In this section, we describe how PIT and PIT-HOM carry out the mutation analysis process. We first review how PIT generates, manages and analyses mutants, then how this process was modified in PIT-HOM, and finally how to use PIT-HOM.

A. Mutation analysis process in PIT

PIT creates mutants at the bytecode level, instead of from the Java sources. PIT performs bytecode manipulation using the ASM library [22], an all-purpose Java bytecode manipulation and analysis framework. This allows for significant performance advantages, as each mutant does not need to be recompiled, and bytecode manipulation is computationally inexpensive. Manipulating bytecode also allows for the mutants to never be stored to disk and to be fully stored and processed in main memory. In order to reduce memory cost, the full bytecode of a mutant is stored in memory only when the mutant is run; otherwise only metadata about the mutant is kept.

PIT carries out the mutation process in three stages, which are represented in Figure 1. The first stage is mutant generation. All classes under analysis are scanned in a single pass to discover the available mutation points.
During this pass, all possible mutants are created and stored in the form of a MutationIdentifier object, which uniquely identifies the mutant. This first pass actually generates the mutated bytecode, but it is immediately discarded, and only the mutants' metadata, the MutationIdentifier objects, are stored. A MutationIdentifier object records the location in the SUT of a mutant and the mutation operator used to create the mutant. The location consists of the class, method and bytecode instruction number where the mutation happens. As PIT's mutation operators uniquely map a bytecode instruction to a mutated instruction, the position and operator are enough information to uniquely identify each mutant. Additional information, such as the tests to be run against the mutant, is stored in a MutationDetails object, linked to the MutationIdentifier object. Storing the mutants in this way makes them take very little memory and allows PIT to store the many mutants that can be generated, even on large projects.

Once all mutants have been generated, the second stage, mutant evaluation, can start. This process is performed in child JVM processes. The main process passes the MutationIdentifier to the child process, which generates the bytecode corresponding to the mutant and injects it into the SUT. The tests targeting the mutant are then run against it to evaluate whether it is killed or not. This status is reported to the main process. By default, PIT will run tests against a mutant only until it is killed and will ignore further tests, but PIT provides an option to the user to run all tests that cover a mutant against this mutant, collecting the result of each test. This option, which creates a full mutation matrix, is particularly interesting in a research context.

As starting a new JVM child process is a very expensive operation, a natural approach would be to evaluate all mutants using the same JVM process. The problem with this approach is that executing tests against a mutant can modify the state of the JVM, i.e. "poison" the JVM, for example through modifying static variables. Such a poisoning of the JVM could modify the results of the analysis of other mutants, which is undesirable. A solution would be to run each mutant in a new JVM child process, but this proves extremely slow. PIT adopts a tradeoff where mutants are grouped in test units, and mutants of the same unit are run in the same JVM. By default, PIT groups mutants of the same class together, isolating the analysis of each class but allowing intra-class contamination of results. This behaviour can be changed by the user by specifying a maximum size of the test units. The user can then choose to follow the default behaviour, take a safer approach with smaller test units, or even run each mutant in a separate JVM to get the most reliable results possible.

The final stage is reporting. The results collected in the evaluation stage are written to disk in a human-readable (HTML) or easily processed (XML, CSV) report. These reports contain information about the different mutants, as stored in the MutationIdentifier and MutationDetails objects, and on their status (killed or survived), as well as the killing test(s) where applicable. The HTML report offers a visual and interactive view of the information, as well as information on basic code coverage of the tests.

B. Mutation analysis process in PIT-HOM

PIT-HOM extends PIT and thus also mutates bytecode, maintaining the same performance benefits.
As PIT-HOM allows higher order mutation, where multiple mutations are performed in a single mutant, the overall mutation process had to be modified.

The first modification was to change the way mutants are represented in PIT. As a mutant can now consist of multiple mutations, the combination of a location and a mutation operator is no longer enough to identify it uniquely. The MutationIdentifier object was modified to hold a list of mutation locations and of operators. Similarly to the FOMs, this is enough information to uniquely identify any HOM, and is more effective than storing the bytecode of the mutant. The MutationDetails object and its creation were also adapted. In particular, when considering a HOM, any test that covers any of the component FOMs is considered to cover the HOM. This approach considers every test that can potentially kill a HOM, although it includes many tests that only partially execute the HOM (i.e. do not execute all mutation points when run against the HOM). Considering only tests that fully execute a HOM might be more interesting but is non-trivial, as introducing mutations changes the coverage of the tests.

The process of mutant generation was also adapted for higher order mutation. The single pass that PIT performs reveals all mutation points and creates all possible FOMs. The FOMs then need to be combined to create HOMs. PIT-HOM creates all possible HOMs of the desired order in a class by combining any FOMs not on the same bytecode instruction, as combining two FOMs on the same instruction would mean the second FOM "overriding" the first.

As the number of HOMs is combinatorial in that of FOMs, there are very many possible HOMs, and holding them all in main memory at the same time becomes problematic. PIT-HOM thus breaks with PIT's process of creating all mutants. We propose two ways to carry out the mutation analysis process: streaming and batch-streaming. These two processes are summarised in Algorithms 1 and 2 and we compare them in Section IV.

In the streaming method (Algorithm 1), for each class under analysis PIT's classic pass is performed (line 2), creating all the FOMs. If first order mutation is to be performed, the FOMs are processed, each in a separate JVM (lines 3-7). The mutants of the class are then combined to create the HOMs of each targeted order. As soon as a correct combination of FOMs is found, the HOM is created and processed in a new JVM (lines 8-13). Combinations of FOMs are found using the findNextCombination function, which takes the list of FOMs and a target order as input. This function enumerates all combinations of FOMs of the correct size and checks whether any FOMs in the combination affect the same instruction. If each FOM in the combination affects a different instruction, they form a valid HOM and this HOM is created and returned. As the mutants are processed as soon as they are created, they do not all need to be kept in memory at the same time. Furthermore, as the process of creating the FOMs and combining them into HOMs is often faster than running the tests, and as the run function is run in another thread and is thus non-blocking here, a guard is put in place to ensure that not too many HOMs are created and await processing (not shown in the algorithms). The streaming method allows the HOMs to be processed without running out of memory, as all mutants do not need to be held in main memory at the same time, but also requires each mutant to be processed in a new JVM, creating a significant overhead.
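As an illustration, the findNextCombination logic can be expressed as a lazy enumeration over FOM combinations that rejects any combination touching the same bytecode instruction twice. The following is a language-agnostic sketch written in Python, not PIT-HOM's actual Java implementation; the Fom fields and the example values are illustrative.

```python
from itertools import combinations
from typing import Iterator, List, NamedTuple, Tuple

class Fom(NamedTuple):
    clazz: str        # class under mutation
    method: str       # method containing the mutation point
    instruction: int  # bytecode instruction index (per method)
    operator: str     # mutation operator applied

def hom_combinations(foms: List[Fom], order: int) -> Iterator[Tuple[Fom, ...]]:
    """Yield valid HOMs of the given order, one combination at a time."""
    for combo in combinations(foms, order):
        # Two FOMs on the same instruction would mean the second FOM
        # overriding the first, so every FOM in a valid combination
        # must target a distinct (method, instruction) pair.
        instructions = {(f.method, f.instruction) for f in combo}
        if len(instructions) == order:
            yield combo

# Example: three FOMs in one class; the order-2 enumeration skips the
# pair that shares instruction 4.
foms = [Fom("Triangle", "classify", 4, "MATH"),
        Fom("Triangle", "classify", 4, "NEGATE_CONDITIONALS"),
        Fom("Triangle", "classify", 9, "CONDITIONALS_BOUNDARY")]
for hom in hom_combinations(foms, 2):
    print([f.operator for f in hom])
```

Because the enumeration is a generator, a caller can pull one valid combination at a time, which matches the streaming design: each HOM is created and handed to a test-running thread as soon as it is found, without materialising the full combinatorial set.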
Algorithm 1: Mutation analysis process with streaming method

```plaintext
Input:
  classesToAnalyse: List of classes that should be mutated
  ordersToRun: List of mutation orders that should be run
1:  for class ∈ classesToAnalyse do
2:    foms ← MutationSource.findMutants(class)
3:    if 1 ∈ ordersToRun then
4:      for mutant ∈ foms do
5:        run(new TestUnit(mutant))
6:      end for
7:    end if
8:    for order ∈ ordersToRun do
9:      hom ← findNextCombination(foms, order)
10:     while hom ≠ null do
11:       run(new TestUnit(hom))
12:       hom ← findNextCombination(foms, order)
13:     end while
14:   end for
15: end for
```

The batch-streaming process (Algorithm 2) provides a balance between PIT's normal process of creating all mutants before starting to evaluate them and the streaming method of evaluating each mutant separately. For each class, the FOMs are created, made into testing units, and evaluated as in PIT's normal process (lines 2-5). The HOMs are then created and stored until a set number (10,000 in our tests) have been created. The batch of HOMs is then turned into a testing unit (multiple testing units if a maximum unit size less than the batch size has been specified by the user) and processed in a new JVM (line 12). This process greatly reduces the number of JVMs created compared to the streaming method, thus reducing the overhead of JVM creation, while ensuring that the number of mutants to be kept in memory at once is reasonable.
Algorithm 2: Mutation analysis process with batch-streaming method

```plaintext
Input:
  classesToAnalyse: List of classes that should be mutated
  ordersToRun: List of mutation orders that should be run
1:  for class ∈ classesToAnalyse do
2:    foms ← MutationSource.findMutants(class)
3:    if 1 ∈ ordersToRun then
4:      run(makeTestUnits(foms, maxTestUnitSize))
5:    end if
6:    for order ∈ ordersToRun do
7:      hom ← findNextCombination(foms, order)
8:      homsToRun ← {}
9:      while hom ≠ null do
10:       homsToRun.add(hom)
11:       if homsToRun.size() == 10000 then
12:         run(makeTestUnits(homsToRun, maxTestUnitSize))
13:         homsToRun ← {}
14:       end if
15:       hom ← findNextCombination(foms, order)
16:     end while
17:     if homsToRun.size() > 0 then
18:       run(makeTestUnits(homsToRun, maxTestUnitSize))
19:     end if
20:   end for
21: end for
```

C. Using PIT-HOM

PIT-HOM follows the same workflow as PIT and is fully integrated with a variety of build tools, IDEs, and static code analysis tools. Thus there is no need for additional effort when one of these common tools is used. PIT-HOM can be configured in the same way as PIT and accepts an additional option to specify the orders of mutation to be run, which the user can pass as a list. An order can be run without running lower orders first. As of this writing, PIT-HOM supports the CSV and XML report formats.

IV. VALIDATION

In this section we describe how we validate PIT-HOM to show its capacity to perform higher order mutation, and to compare the different methods of performing the mutation process. We also briefly explore some of the properties of the HOMs created by the tool and the influence of the mutation operators used. We first describe the experiment we performed before reporting the results.

A. Experiment Setup

The experiment was performed using two small-sized projects. The first project is Triangle [23], an example project used to demonstrate how to set up and use PIT and provided with PIT. The second project is Bisect, a square-root computation program used in [24]. We use all the tests provided by Kintis et al. in our evaluation. Characteristics of the test subjects used in the validation are summarised in Table I.

<table>
<thead>
<tr><th>Project</th><th>SLOC</th><th>#tests</th></tr>
</thead>
<tbody>
<tr><td>Triangle</td><td>69</td><td>12</td></tr>
<tr><td>Bisect</td><td>37</td><td>64</td></tr>
</tbody>
</table>

For each project, we run PIT-HOM, based on PIT-1.4.3, at different orders in order to evaluate the number of HOMs generated and the properties they have. We run PIT-HOM both using all operators available in PIT-1.4.3, and using them plus the extended set of operators proposed in [25]. When using the extended set of operators, operators overlapping between PIT's original set and the extended one are removed in order to avoid creating the same mutant multiple times. We also run three configurations of PIT-HOM that implement the three mutation analysis processes described before: generating all mutants before running them, streaming, and batch-streaming. This allows us to compare the performance of each method. Finally we run the Triangle project using PIT's full mutation matrix mode, where all tests covering a mutant are run against said mutant. This allows us to compute metrics such as the easiness of a mutant to be killed, i.e. the proportion of tests covering a mutant that kill it. Computing the full mutation matrix could not be performed for both projects because of time constraints.
Both projects were thus also run using PIT's usual stop condition (stop running tests as soon as the mutant is killed) in order to collect comparable run times. Our experiments were performed on a quad-core Intel Xeon processor (3.1 GHz) with 8 GB RAM running Debian 9.6 "stretch". PIT-HOM was run with the "threads" option set to 3.

B. Results

Table II reports the metrics concerning the mutants generated. For each project, we report the number of mutants generated at each order of mutation and the obtained mutation score, i.e. the proportion of mutants killed by the tests. The results are given for the two sets of mutation operators: the operators available in PIT 1.4.3 and the ones introduced in [25]. Mutation analysis at order 3 using the extended set of operators was not run for the Bisect project because of time constraints. For the Triangle project we also report the average easiness of killing a mutant, i.e. the proportion of tests covering a mutant that kill it. This shows whether the created HOMs are trivial or not. We also report the size of the disjoint mutant set which, when compared to the size of the set of created mutants, shows how redundant the created mutants are. Disjoint mutants [26], or minimal mutants [27], are the set of mutants that subsume the full set of generated mutants.

TABLE II: Metrics about mutants generated at each order for each project with each set of operators

<table>
<thead>
<tr><th></th><th>Order 1 (Original)</th><th>Order 1 (Extended)</th><th>Order 2 (Original)</th></tr>
</thead>
<tbody>
<tr><td colspan="4"><strong>Triangle</strong></td></tr>
<tr><td>#mutants</td><td>94</td><td>401</td><td>4,282</td></tr>
<tr><td>Mutation score</td><td>0.86</td><td>0.77</td><td>0.98</td></tr>
<tr><td>Easiness to kill</td><td>0.23</td><td>0.18</td><td>0.41</td></tr>
<tr><td>#disjoint mutants</td><td>10</td><td>16</td><td>64</td></tr>
<tr><td colspan="4"><strong>Bisect</strong></td></tr>
<tr><td>#mutants</td><td>31</td><td>209</td><td>445</td></tr>
<tr><td>Mutation score</td><td>0.90</td><td>0.89</td><td>0.99</td></tr>
</tbody>
</table>

As in [25], we observe on both projects that the extended mutation operator set generates many more mutants than PIT's original set and that these mutants are harder to kill. This effect is also observed at higher orders of mutation. Results also indicate that the HOMs are on average harder to kill than the FOMs; this result is tied to our definition of a test covering a HOM. The results also show, for the Triangle project, that the mutants generated by the extended set are not subsumed by the ones generated by the original set of operators. At higher orders of mutation, as the Triangle project is very small and only offers a few tests, we see that the many HOMs created are mostly redundant and that there is some sort of saturation.

Table III reports the time taken, in seconds, to run the mutation analysis process using the different methods described above, at different mutation orders, and for both the original and extended sets of mutation operators. In this table, MemErr means that we were not able to run the mutation analysis because PIT-HOM ran out of memory: either the tool itself crashed or the child processes running the mutants did, polluting the results. The times in italics are approximations based on partial runs; the full runs were not performed because of time constraints.
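To make the two metrics above concrete, here is a minimal Java sketch (not part of PIT-HOM; all names are illustrative) that computes the mutation score and the average easiness from a full mutation matrix, where `killMatrix[m][t]` is true iff test t kills mutant m and `covers[m][t]` is true iff test t covers mutant m:

```java
// Illustrative only: computes the metrics reported in Table II from a
// hypothetical full mutation matrix.
public final class MutationMetrics {

    /** Mutation score: proportion of mutants killed by at least one test. */
    static double mutationScore(boolean[][] killMatrix) {
        if (killMatrix.length == 0) return 0;
        int killed = 0;
        for (boolean[] row : killMatrix) {
            for (boolean killedByTest : row) {
                if (killedByTest) { killed++; break; }
            }
        }
        return (double) killed / killMatrix.length;
    }

    /** Average easiness: mean over mutants of (#killing tests / #covering tests). */
    static double averageEasiness(boolean[][] killMatrix, boolean[][] covers) {
        double sum = 0;
        int counted = 0;
        for (int m = 0; m < killMatrix.length; m++) {
            int covering = 0, killing = 0;
            for (int t = 0; t < killMatrix[m].length; t++) {
                if (covers[m][t]) covering++;
                if (killMatrix[m][t]) killing++;
            }
            if (covering > 0) {           // mutants covered by no test are skipped
                sum += (double) killing / covering;
                counted++;
            }
        }
        return counted == 0 ? 0 : sum / counted;
    }
}
```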
On both projects, we observe that the extended set of operators, which produces more mutants, takes longer to run than the original one. We also observe that the time taken to run the mutation analysis process increases rapidly when considering higher order mutants. In the Triangle project we observe similar results for the generate-all and batch-stream strategies when the number of mutants is low, as both then generate only a single child JVM process. Both methods perform better than the stream method. On the Bisect project, generate-all and batch-stream again perform very similarly, but this time are outperformed by stream. This is explained by the fact that a single class is under analysis and that a low number of mutants is generated. This means that the generate-all and batch-stream methods only generate one testing unit, and thus execute all tests on the same thread, sequentially. On the other hand, the stream method creates a testing unit for each mutant, and can thus take advantage of the 3 threads PIT is allowed to use. As Bisect's tests are rather long to run (14 out of 164 test executions timed out at order one), the overhead introduced by creating the child JVM processes is compensated for by the gain of running 3 mutants in parallel.

V. DISCUSSION

We validated PIT-HOM by running it against two well-known, small Java programs and compared the different ways of performing the mutation process that we proposed. As initial results seem to show that no one method is always preferable, a more in-depth evaluation is needed, using larger and more diverse subject programs.

PIT is a well-engineered, mature tool that allows for much extensibility, but the changes made in this work were substantial and non-trivial. It is probable that opportunities for optimisation were missed and that PIT-HOM can be made more efficient. This is why we make our tool's code available at https://github.com/ucd-csl/pitest in the hope that the community can both use and improve it.

Our results confirm the previous intuition that for higher order mutation to be usable and profitable, some kind of mutant selection process has to be used. Although we only considered two very small classes for our validation, the run time for second order mutation was already on the scale of hours for the Bisect project, and order 3 could not be run as it would have taken days.

The results of this small-scale experiment indicate that HOMs are largely redundant. This confirms the previous conclusion that careful selection is needed to capitalise on the benefits that HOMs bring, such as subsuming and subtle HOMs. Again, further investigation into the characteristics of the generated HOMs is needed, using larger and more diverse subjects.

VI. CONCLUSION

We presented PIT-HOM, an extension of PIT for higher order mutation of Java programs. PIT-HOM automates the generation and analysis of higher order mutants in Java bytecode. We make PIT-HOM and its source code freely available for the community to use and expand on.

We are extending PIT-HOM to use different mutant selection techniques and to allow HOMs that combine mutants of different classes. This will improve the performance of PIT-HOM, as fewer mutants will be considered, and will allow more complex mutants to be created.
ACKNOWLEDGEMENT

This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero - the Irish Software Research Centre (www.lero.ie). Thomas Laurent is supported by an Irish Research Council grant (GOIPG/2017/1829).
{"Source-Url": "https://researchrepository.ucd.ie/rest/bitstreams/42669/retrieve", "len_cl100k_base": 6849, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23967, "total-output-tokens": 9129, "length": "2e12", "weborganizer": {"__label__adult": 0.00033926963806152344, "__label__art_design": 0.0002105236053466797, "__label__crime_law": 0.00026607513427734375, "__label__education_jobs": 0.00040078163146972656, "__label__entertainment": 4.166364669799805e-05, "__label__fashion_beauty": 0.00014257431030273438, "__label__finance_business": 0.0001208186149597168, "__label__food_dining": 0.000293731689453125, "__label__games": 0.0003998279571533203, "__label__hardware": 0.000507354736328125, "__label__health": 0.00034928321838378906, "__label__history": 0.000141143798828125, "__label__home_hobbies": 6.186962127685547e-05, "__label__industrial": 0.00023365020751953125, "__label__literature": 0.0001932382583618164, "__label__politics": 0.0002015829086303711, "__label__religion": 0.0003571510314941406, "__label__science_tech": 0.00328826904296875, "__label__social_life": 7.742643356323242e-05, "__label__software": 0.003755569458007813, "__label__software_dev": 0.98779296875, "__label__sports_fitness": 0.0002734661102294922, "__label__transportation": 0.0003235340118408203, "__label__travel": 0.0001748800277709961}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36809, 0.02497]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36809, 0.31493]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36809, 0.91712]], "google_gemma-3-12b-it_contains_pii": [[0, 5203, false], [5203, 11393, null], [11393, 13518, null], [13518, 19852, null], [19852, 25203, null], [25203, 30993, null], [30993, 36809, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5203, true], [5203, 11393, null], [11393, 13518, null], [13518, 19852, null], [19852, 25203, null], [25203, 30993, null], [30993, 36809, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36809, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36809, null]], "pdf_page_numbers": [[0, 5203, 1], [5203, 11393, 2], [11393, 13518, 3], [13518, 19852, 4], [19852, 25203, 5], [25203, 30993, 6], [30993, 36809, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36809, 0.05703]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
1010b518a5207b9ac159b56fe5b41e146eb0f6c3
GIAC IHHE Practical Assignment: BIND 8.2 NXT Remote Buffer Overflow Exploit

Prepared for: SANS Global Incident Analysis Center
By: Robert McMahon rwm@mcmahoncpa.com
Aug 13, 2000

Contents
1. Exploit Details
2. Protocol Description
3. Description of Variants
4. How the Exploit Works
5. How to Employ the Exploit
6. Attack Signature
7. How to Protect Against the Attack
8. Source Code/Pseudo Code

1. **Exploit Details**

- **Name:** BIND 8.2 NXT remote buffer overflow exploit
- **CVE Number:** CVE-1999-0833
- **CERT Advisories:**
  - [http://www.cert.org/advisories/CA-2000-03.html](http://www.cert.org/advisories/CA-2000-03.html)
  - [http://www.cert.org/advisories/CA-99-14-bind.html](http://www.cert.org/advisories/CA-99-14-bind.html)
- **Operating System:** Systems running BIND 8.2 or 8.2.1 on the Linux, Solaris, FreeBSD, OpenBSD, and NetBSD Unix operating systems. Prior versions of BIND, including 4.x, are not vulnerable to this particular exploit.
- **Protocols/Services:** TCP/UDP, port 53
- **Description:** The early versions of BIND that introduced the NXT resource record extension improperly validated these records' inputs. This "bug" permits a remote attacker to execute a buffer overflow in order to gain access to a target system at the same privilege level the *named* daemon is running at, e.g., root.

2. **Protocol Description**

The Domain Name System (DNS) is one of the most widespread protocols utilized on the Internet because of its function: resolving domain names to IP addresses. Email messaging and web browsing would be at best chaotic if DNS were denied to public use. DNS is based on a client-server distributed architecture composed of resolvers and name servers. Name servers that perform **recursive** resolution (as opposed to **iterative** resolution) are of particular interest because of their vulnerability to the NXT remote exploit on certain DNS implementations.¹

DNS uses both the UDP and TCP transport protocols. Resolvers and name servers query other name servers using UDP, port 53, for almost all standard queries. TCP is used for zone transfers and also for queries with "larger size" responses (e.g., exceeding 512 bytes), which is relevant to the subject exploit. Earlier versions of DNS were regarded as insecure since there was no ability to authenticate name servers. In an attempt to make this protocol more secure and permit authentication, DNS Security Extensions were developed.
One of these extensions is the **NXT** Resource Record (RR). The **NXT** RR provides the ability to "securely" deny the existence of a queried resource record *owner name* and *type*. Ironically, it is this security feature that opens the door for the subject buffer overflow attack and is the reason why earlier versions of BIND were not exposed. The details of the **NXT** Resource Record and associated data fields can be found in RFC 2065, [http://www.freesoft.org/CIE/RFC/2065/index.htm](http://www.freesoft.org/CIE/RFC/2065/index.htm).

The BIND (Berkeley Internet Name Domain) implementation of DNS is the most popular version deployed on the Internet. The BIND 8.2 implementation of the NXT RR was developed with a programming bug in it that permits remote intruders (via another name server) to execute arbitrary code with the privileges of the user running the *named* daemon. The specifics of this programming bug are discussed in paragraph 4 below.

3. Description of Variants

The version of the NXT exploit addressed in this paper was written by Horizon and Plaguez of the ADM CreW [ftp://freelsd.net/pub/ADM/exploits/t666.c]. This version has successfully engaged several name servers. Another version of the NXT remote exploit, *Exploit for BIND-8.2/8.2.1 (NXT)*, was written by the TESO group [http://teso.scene.at/releases.php3/teso-nxt.tar.gz]. Because the author "z-" gives thanks to Horizon, it is assumed this code was developed after the ADM-NXT version.² A cursory examination revealed the following key differences, aside from those due to programming style.

- The ADM-NXT version was tampered with by the authors to make it harder for "script kiddies" to employ.
- The TESO-NXT version was only designed to run against the memory stacks of the Linux and FreeBSD operating systems.

4. How the Exploit Works

The BIND 8.2 NXT exploit is based on a buffer overflow of the stack memory. This buffer overflow is possible because of insecure coding practices. Many programmers employ functions that use routines that do not check the bounds of input variables. The reasons for this may be intentional (e.g., for performance reasons) or simply a lack of understanding of secure programming techniques. At any rate, this is an all too common practice and can be exploited by a hacker who has access to the source code and can run utilities like *strings* that find insecure routines. (An understanding of the C and assembly programming languages, along with lots of patience, would also be helpful.)

Of particular relevance to the BIND 8.2 NXT exploit, as well as other buffer overflow attacks, is stack memory manipulation. Stack memory is the type of memory that programs use to store function local variables and parameters. An important concept regarding stack memory exploitation is the return pointer. The return pointer contains the address in the calling program to which control is returned after completion of the function³. The following example is a simple illustration describing how a buffer overflow attack can be used to overwrite the return pointer while inserting executable machine code in the stack.

---

² Based on research performed 12-13 August 2000, it could not be ascertained whether the TESO-NXT version has ever been successfully deployed.

Example 1: A given function has two variables defined, var1[20] and var2[12]. For simplicity, the variables and the return pointer, PTR[0], are allocated addresses 00000000 through 00000020, as depicted in Figure 1.
Figure 1: Before Exploit

<table>
<thead>
<tr><th>Addresses</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>00000000</td><td>STACK[0] = var1</td></tr>
<tr><td>00000004</td><td>STACK[1] = var1</td></tr>
<tr><td>00000008</td><td>STACK[2] = var1</td></tr>
<tr><td>0000000C</td><td>STACK[3] = var1</td></tr>
<tr><td>00000010</td><td>STACK[4] = var1</td></tr>
<tr><td>00000014</td><td>STACK[5] = var2</td></tr>
<tr><td>00000018</td><td>STACK[6] = var2</td></tr>
<tr><td>0000001C</td><td>STACK[7] = var2</td></tr>
<tr><td>00000020</td><td>STACK[8] = PTR[0]</td></tr>
<tr><td>00000024</td><td>STACK[9]</td></tr>
<tr><td>00000028</td><td>STACK[10]</td></tr>
</tbody>
</table>

To demonstrate the buffer overflow: if an input var2 exceeding 12 characters is entered into the stack via a routine such as strcpy(), the return pointer, PTR[0], is overwritten by the overflow data of var2* (see Figure 2 below). In this case, var2* was carefully crafted so that the bytes landing in PTR[0] form another return pointer that directs the flow of the function to address 00000024. This new address is the location of the attack payload, which was delivered as part of the overflow code. The payload can be machine-executable code, such as /bin/sh -c, which will run at the same privilege level as the program being exploited.

Figure 2: After Exploit

<table>
<thead>
<tr><th>Addresses</th><th>Contents</th></tr>
</thead>
<tbody>
<tr><td>00000000</td><td>STACK[0] = var1</td></tr>
<tr><td>00000004</td><td>STACK[1] = var1</td></tr>
<tr><td>00000008</td><td>STACK[2] = var1</td></tr>
<tr><td>0000000C</td><td>STACK[3] = var1</td></tr>
<tr><td>00000010</td><td>STACK[4] = var1</td></tr>
<tr><td>00000014</td><td>STACK[5] = var2</td></tr>
<tr><td>00000018</td><td>STACK[6] = var2</td></tr>
<tr><td>0000001C</td><td>STACK[7] = var2</td></tr>
<tr><td>00000020</td><td>STACK[8] = var2*</td></tr>
<tr><td>00000024</td><td>Attack_Payload[1]</td></tr>
<tr><td>00000028</td><td>Attack_Payload[2]</td></tr>
</tbody>
</table>

This example is a very simple demonstration of how a stack overflow can take control of a function to execute hacker-defined code. In reality it can be quite an endeavor to build a buffer overflow program, especially if the programmer has to predict the specific arrangement of the stack. One of the best resources available that describes stack memory concepts is the article "Smashing the Stack for Fun and Profit" by Aleph One, in Phrack Magazine, Issue 49, Article 14, [http://phrack.infonexus.com/search.phtml?view&article=p49-14].

The ADM-NXT BIND buffer overflow exploit works when the target name server performs a recursive DNS query on a hacker host. The query basically fetches a maliciously constructed NXT record which contains the code that exploits the BIND server memory stack. The exploit code can be successfully engaged against primary, secondary, and even caching-only name servers. The next section explains in more detail how the attack is actually employed.
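To make Example 1 concrete, here is a minimal, hypothetical C sketch of the vulnerable pattern (illustrative only, not taken from BIND's source): a fixed-size local buffer filled by an unbounded copy routine.

```c
#include <string.h>

/* Illustration of Example 1: var2 is a 12-byte local buffer. strcpy()
 * performs no bounds checking, so an attacker-supplied string longer than
 * 12 bytes keeps writing past the end of var2 and over the saved return
 * pointer. A crafted input places a new return address there, redirecting
 * execution into attacker-supplied payload bytes also written to the stack. */
void vulnerable(const char *input)
{
    char var1[20];
    char var2[12];

    strcpy(var2, input);   /* unbounded copy: the overflow happens here */

    (void)var1;            /* var1 is unused; kept only to mirror Example 1 */
}
```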
5. How to Employ the Exploit

The BIND 8.2 NXT remote buffer overflow exploit can be performed by a single machine; however, for purposes of providing a clear understanding of the host functions, the participating name server and the hacker host (with the NXT exploit code) will be denoted as separate machines (see Figure 3 below).

Figure 3: BIND 8.2 NXT Remote Exploit Geometry

Step 1: Hacker host (rwm.hackernet.net) identifies and negotiates the target name server.

- Determine if the target name server, ns1.targetnet.com, is vulnerable to the NXT exploit via dig or nslookup. Like most firewall configurations on the Internet, the targetnet firewall permits DNS queries to UDP and TCP port 53 from "any" host.
- Set up a resolver (/etc/resolv.conf) on rwm.hackernet.net to query ns1.xxx.net for its name services.
- Perform DNS queries of ns1.targetnet.com to determine whether it takes on the burden of performing name queries itself; if so, it performs recursive queries (i.e., the name server does not just refer the requesting name server to a different name server, as it would for an iterative query).

Step 2: Create and delegate a subdomain

- Create the following records on ns1.xxx.net:

```
aaa.xxx.net        NS     rwm.hackernet.net
rwm.hackernet.net  IN A   10.233.131.222
```

- Reinitialize the in.named daemon: kill -HUP <in.named pid>

Step 3: Compile the BIND 8.2 NXT exploit code (ADM-NXT version: t666.c)⁴

- Edit the source code to change `/adm/sh` to `/bin/sh` (in hex) by searching the source code for `0x2f,0x61,0x64,0x6d,0x2f` and replacing it with `0x2f,0x62,0x69,0x6e,0x2f`. (The authors of the program, to put it in their words, wanted to raise the bar a little to make it harder for script kiddies to blindly execute this code.)
- Compile the `t666.c` source code with the GNU C compiler and execute the `bind_nxt` executable:

```
rwm:/tmp# gcc t666.c -o bind_nxt
rwm:/tmp# ./bind_nxt
```

Step 4: Request ns1.targetnet.com to do a recursive query in order to resolve www.aaa.xxx.net, a host in the subdomain delegated to rwm.hackernet.net as per the NS record.

```
rwm# nslookup
> server ns1.targetnet.com
> www.aaa.xxx.net
```

Step 5: The target NS performs recursive queries to resolve www.aaa.xxx.net

- It queries ns1.xxx.net first, since that server is primary for the top-level domain xxx.net, and receives a message from ns1.xxx.net to query rwm.hackernet.net, which is primary for the subdomain aaa.xxx.net as per the NS record.
- It then queries rwm.hackernet.net to resolve www.aaa.xxx.net.
- It should be noted that ns1.targetnet.com is running in.named with UID = 0.

---

⁴ From the article, *BIND 8.2 - 8.2.2 Remote root Exploit How-To* by E-Mind, [http://www.hack.co.za/daem0n/named/NXT-Howto.txt](http://www.hack.co.za/daem0n/named/NXT-Howto.txt).

Step 6: rwm.hackernet.net engages ns1.targetnet.com with an NXT buffer overflow attack

- rwm.hackernet.net sends a large NXT record containing code that exploits the remote BIND server memory stack with a buffer overflow (TCP is used instead of UDP because of the size of the transaction).
- The hacker on rwm.hackernet.net gains shell access with root privileges, since in.named on the target was running as root.

Step 7: Set up a user account and back channel

- Set up a user account and backdoor (e.g., a netcat listener) before exiting the shell account (since the buffer overflow caused DNS to crash).
- Come back and set up a favorite rootkit.

6. Attack Signature

The BIND 8.2 NXT remote buffer overflow (ADM-NXT) has a number of signatures. In many of them, the two authors of the exploit source code, Horizon and Plaguez, deliberately leave their "signature" in various portions of the character array definitions. The ASCII and hex versions of the code shown below can easily be retrieved by promiscuous-mode packet analyzers such as tcpdump, Snort, and Solaris' snoop. With regard to the seven signatures listed, there is a strong likelihood that more exist.

**Signature 1:** A recursive query request for a domain name that is not associated with the domain name of the server being queried.
This could possibly be explained by a mistake in typing the domain name in the DNS query. However, it is assessed that this probability becomes exponentially lower for domain names exceeding four characters.

**Signature 2:** Some of the compromised systems had one of the following empty directories where the NXT record vulnerability was successfully exploited [http://www.cert.org/advisories/CA-2000-03.html]:

```
/var/named/ADMROCKS
/var/named/O
```

**Signature 3:** In the BSD code version of the exploit, an empty file is created. The following came from the "char bsdcode[]=" portion of the source code:

```
0x74,0x6f,0x75,0x63,0x68,0x20,0x2f,0x74,0x6d,0x70,0x2f,0x59,0x4f,0x59,0x4f,0x59,0x4f,0x0
```

The above code yields the ASCII characters: touch /tmp/YOYOYO

**Signature 4:** In all versions of the exploit, the "unpatched" code would execute the "/adm/sh -c" command. The following came from the character array definitions portion of the source code:

```
0x2f,0x61,0x64,0x6d,0x2f,0x6b,0x73,0x68,0x0,0x2d,0x63
```

Conversely, the patch as prescribed by E-Mind would change this code such that "/bin/sh -c" would be executed in the stack instead. Horizon himself provides a clue to this in his comments.

**Signature 5:** In all versions of the exploit, the ASCII characters "ADM Rocks" are visible. The following came from the character array definitions portion of the source code:

```
0x41,0x44,0x4d,0x52,0x6f,0x63,0x6b,0x73,
```

**Signature 6:** The following came from the "char linuxcode[ ]" and "char bsdcode[ ]" portions of the source code:

```
0x70,0x6c,0x61,0x67,0x75,0x65,0x7a,0x5b,0x41,0x44,0x4d,0x5d
```

The above code yields the ASCII characters: plaguez[ADM].

**Signature 7:** The following came from the "char linuxcode[ ]" portion of the ADM-NXT version by Horizon and Plaguez:

```
0x0,0x0,0x0,0x10,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x74,0x68,0x69,0x73,0x74,0x73,
```

Lance Spitzner's forensics was able to obtain the following readable ASCII code:

```
00 00 00 10 00 00 00 00 00 00 00 74 68 69 73 69 ..........thisisometempspacefor
72 74 68 65 73 6f 66 69 63 6f 64 69 66 69 65 67 6e 73 73 6f 6d 65 74 65 6d 70 73 70 61 63 65 66 6f
79 65 61 68 79 65 61 68 69 66 69 65 67 6e 73 73 6f 6d 65 74 65 6d 70 73 70 61 63 65 66 6f
73 69 73 6c 61 6d 62 75 74 61 6e 79 77 61 79 77 68 6f 63 61 72 65 73 68 6f 72 69 7a 6f 6e 67 6f 74 69 74
6f 72 6b 69 6e 67 63 6f 6f 6c EB 86 5E 56 8D 46 08 50 46 8b 08 69 73 63 6f 6f 6c
```

7. **How to Protect Against the Attack**

- Upgrading to BIND version 8.2.2 patch level 5, or higher, is strongly recommended for all users of BIND. With regard to the subject exploit, this is the easiest and best way to mitigate the attack.
- Change the UID and GID of the in.named daemon to a non-root UID and GID. This is analogous to why web servers run as "nobody".
- A more holistic approach to countering buffer overflows in general is to adopt secure coding practices that employ argument validation routines and "safe" compilers. Also, the use of secure routines such as fgets(), strncpy(), and strncat() will reduce the likelihood of buffer overflows.

---

⁵ A White Paper authored by Lance Spitzner, *Know Your Enemy: A Forensics Analysis*, focuses on how Snort was used as a forensics tool to piece together the actions of a real intruder. This paper greatly facilitated the analysis of the ADM-NXT exploit with regard to signatures 6 and 7.
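As a brief illustration of the bounded-routine advice above (a hypothetical sketch, not code from the original paper), the following contrasts the unbounded and bounded copy idioms; the buffer size and names are made up:

```c
#include <stdio.h>
#include <string.h>

#define BUF_LEN 12

void copy_input(const char *input)
{
    char buf[BUF_LEN];

    /* Unsafe: strcpy() has no bounds check and overflows buf for any
     * input longer than BUF_LEN - 1 characters:
     *     strcpy(buf, input);
     */

    /* Safer: copy at most BUF_LEN - 1 bytes and terminate explicitly,
     * since strncpy() does not null-terminate when it truncates. */
    strncpy(buf, input, BUF_LEN - 1);
    buf[BUF_LEN - 1] = '\0';

    puts(buf);
}
```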
Security representation on configuration control boards is also necessary and should be a matter of routine whenever any code is modified. 8. Source Code/Pseudo Code Source code for both the ADM and TESO versions of the BIND 8.2 NXT remote buffer overflow attack can be found in paragraph 3 above. Pseudo code for this exploit is as follows: 1. Determine if target name server is vulnerable to NXT exploit via dig or nslookup 2. Perform DNS queries of target name server in order to determine if target name server performs recursive queries 3. Create subdomain delegation records on name server that is an accomplice to the attack and reinitialize in.named daemon…kill –HUP <in.named pid> 4. Edit source code to change /adm/sh to /bin/sh (in hex) by searching the source code for 0x2f,0x61,0x64,0x6d,0x2f and replacing it with x2f,0x62,0x69,0x6e,0x2f on hacker_host 5. Compile the t666.c source code with C compiler on hacker_host 6. Execute the compiled and linked executable on hacker_host 7. Request target name server to perform recursive query in order to resolve a hostname with subdomain that was delegated to hacker_host. 8. hacker_host sends a large NXT record containing code that exploits the remote BIND server memory stack with a buffer overflow. 9. hacker_host gains shell access with privileges as in.named daemon on target name server. 10. Attacker sets up user account and back channel on name server; exits shell. --- Upcoming SANS Penetration Testing <table> <thead> <tr> <th>Event Name</th> <th>Location</th> <th>Dates</th> <th>Organizer</th> </tr> </thead> <tbody> <tr> <td>SANS Oslo March 2020</td> <td>Oslo, Norway</td> <td>Mar 23, 2020 - Mar 28, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Seattle Spring 2020</td> <td>Seattle, WA</td> <td>Mar 23, 2020 - Mar 28, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Madrid March 2020</td> <td>Madrid, Spain</td> <td>Mar 23, 2020 - Mar 28, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Secure Canberra 2020</td> <td>Canberra, Australia</td> <td>Mar 23, 2020 - Mar 28, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Philadelphia 2020</td> <td>Philadelphia, PA</td> <td>Mar 30, 2020 - Apr 04, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Frankfurt March 2020</td> <td>Frankfurt, Germany</td> <td>Mar 30, 2020 - Apr 04, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS 2020</td> <td>Orlando, FL</td> <td>Apr 03, 2020 - Apr 10, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Minneapolis 2020</td> <td>Minneapolis, MN</td> <td>Apr 14, 2020 - Apr 19, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Bethesda 2020</td> <td>Bethesda, MD</td> <td>Apr 14, 2020 - Apr 19, 2020</td> <td>CyberCon</td> </tr> <tr> <td>CS-Cybersecure Catalyst New Canadians Academy SEC504</td> <td>Brampton, ON</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>Community SANS</td> </tr> <tr> <td>SANS London April 2020</td> <td>London, United Kingdom</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>CyberCon</td> </tr> <tr> <td>CS Cybersecure Catalyst Women Academy SEC504</td> <td>Brampton, ON</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>Community SANS</td> </tr> <tr> <td>SANS Boston Spring 2020</td> <td>Boston, MA</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>CyberCon</td> </tr> <tr> <td>CS-Cybersecure Catalyst New Career Academy SEC504</td> <td>Brampton, ON</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>Community SANS</td> </tr> <tr> <td>SANS Brussels April 2020</td> <td>Brussels, Belgium</td> <td>Apr 20, 2020 - Apr 25, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Pen Test Austin 2020</td> 
Lecture 13: Containers
Advanced Practical Data Science, MLOps
Pavlos Protopapas
Institute for Applied Computational Science, Harvard

Outline
1. Recap
2. Motivation / Tutorial
3. What is a Container
4. Tutorial: Building & Running Containers using Docker
5. Why use Containers?

Virtual Environments

**Pros**
- Reproducible research
- Explicit dependencies
- Improved engineering collaboration

**Cons**
- Difficulty setting up your environment
- No isolation
- Does not always work across different operating systems

Virtual Machines

**Pros**
- Full autonomy
- Very secure
- Lower costs
- Used by all cloud providers for on-demand server instances

**Cons**
- Uses up hardware on the local machine
- Not very portable, since VM images are large
- There is an overhead associated with virtual machines

Wish List

We want a system that:
- Automatically sets up (installs) the OS and extra libraries, and sets up the Python environment
- Is isolated
- Uses fewer resources
- Starts up quickly

What is a CONTAINER

- Extremely **portable** and lightweight
- **Fully packaged** software with all dependencies included
- Can be used for development, training, and deployment
- Development teams can easily **share** containers

**Docker** is an open source platform for building, deploying, and managing containerized applications.

Environments vs Virtualization vs Containerization

- **Virtual Environments**: a Python environment layered on the operating system and infrastructure
- **Virtualization**: a hypervisor layered between the operating system and infrastructure
- **Containerization**: Docker layered on the operating system and infrastructure

Tutorial

- Create a GCS Bucket and read/write files to it
- Let us run the simple-translate app using Docker
- For this we will do the following:
  - Create a VM Instance
  - SSH into the VM
  - Install Docker inside the VM
  - Run the containerized simple-translate app
- Full instructions can be found [here](#)

What is a Container

- **Standardized** packaging for software dependencies
- **Isolates** apps from each other
- **Works** on all major Linux distributions, macOS, and Windows

What Makes Containers so Small?

Container = User Space of the OS

- User space refers to all of the code in an operating system that lives outside of the kernel

How to run a Docker container

- We use a simple text file, the **Dockerfile**, to build the **Docker Image**, which packages the application's filesystem layers and metadata.
- We run the Docker Image to get a **Docker Container**.

What is the difference between an image and a container?

A Docker Image is a template, aka a blueprint, for creating a running Docker Container. Docker uses the information available in the Image to create (run) a Container. An image is like a recipe; a container is like a dish. Alternatively, you can think of an image as a class, and a container as an instance of that class.

Inside the Dockerfile

FROM: This instruction in the Dockerfile tells the daemon which base image to use while creating our new Docker image. In the example here, we are using a very minimal OS image called alpine (just 5 MB in size). You can also replace it with Ubuntu, Fedora, Debian, or any other OS image.

RUN: This command instructs the Docker daemon to run the given commands as-is while creating the image. A Dockerfile can have multiple RUN commands; each RUN command creates a new layer in the image.

ENTRYPOINT: The ENTRYPOINT instruction is used when you would like your container to run the same executable every time.
Usually, ENTRYPOINT is used in scenarios where you want the container to behave exclusively as if it were the executable it's wrapping.

CMD: The CMD instruction sets default commands and/or parameters for when a Docker container runs. CMD can be overridden from the command line via the docker run command.

Multiple containers from the same image

How can you run multiple containers from the same image? You can think of an image as instantiating a class. Wouldn't the containers all be identical? Not necessarily: you can instantiate the image with different parameters using CMD, so the resulting containers will differ.

```
FROM ubuntu:latest
RUN apt-get update
ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["world"]
```

```
> docker build -t hello_world_cmd:first -f Dockerfile_cmd .
> docker run -it hello_world_cmd:first
Hello world
> docker run -it hello_world_cmd:first Pavlos
Hello Pavlos
```

When we execute the build command, the daemon reads the Dockerfile and creates a layer for every command.

Image Layering

- **Container** (writable, running application)
- **Layered Image 2**
- **Layered Image 1**
- **Platform Image** (runtime environment)

**An application sandbox**
- Each container is based on an image that holds the necessary config data
- When you launch a container, a writable layer is added on top of the image

**A static snapshot of the container configuration**
- Image layers are read-only
- Each image depends on one or more parent images

**An image that has no parent**
- Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run

(Figure: Docker layers for a container running Debian and a Python environment using Pipenv.)

Some Docker Vocabulary

**Docker Image**: The basis of a Docker container. Represents a full application.

**Docker Container**: The standard unit in which the application service resides and executes.

**Docker Engine**: Creates, ships, and runs Docker containers; deployable on a physical or virtual host, locally, in a datacenter, or at a cloud service provider.

**Registry Service (Docker Hub or Docker Trusted Registry)**: Cloud- or server-based storage and distribution service for your images.

Installing Docker Desktop

1. Install **Docker Desktop**. Use one of the links below to download the proper Docker application depending on your operating system.
   - For Mac users, follow this link: [https://docs.docker.com/docker-for-mac/install/](https://docs.docker.com/docker-for-mac/install/)
   - For Windows users, follow this link: [https://docs.docker.com/docker-for-windows/install/](https://docs.docker.com/docker-for-windows/install/) Note: You will need to install Hyper-V to get Docker to work.
   - For Linux users, follow this link: [https://docs.docker.com/install/linux/docker-ce/ubuntu/](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
2. Once installed, run Docker Desktop.
3. Open a terminal window and type `docker run hello-world` to make sure Docker is installed properly.
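As a concrete counterpart to the Debian + Pipenv layering figure mentioned above, here is a minimal sketch of a Dockerfile for a small Python app; the file names and entry point (app.py) are assumptions for illustration, not part of the lecture:

```dockerfile
# Minimal sketch: Debian-based Python image with a Pipenv-managed environment.
FROM python:3.9-slim-buster

WORKDIR /app

# Install pipenv, then the locked dependencies; each RUN adds one image layer.
RUN pip install --no-cache-dir pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv install --system --deploy

# Copy the application code last, so code changes do not invalidate
# the dependency layers cached above.
COPY . .

ENTRYPOINT ["python"]
CMD ["app.py"]
```

Assuming an app.py exists next to the Dockerfile, this could be built and run with `docker build -t simple-app .` and `docker run --rm simple-app`.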
Let us build the simple-translate app Docker Container

For this we will do the following:
- Clone or download the [code](#)
- Build a container
- Run a container
- Pavlos will update a container on Docker Hub
- You will pull the new container and run it

For detailed instructions go [here](#)

Tutorial: Docker commands

Check what version of Docker you have (this prints the version of the Docker CLI):

```
docker --version
```

List all running Docker containers:

```
docker container ls
```

List all Docker images:

```
docker image ls
```

Build an image based on a Dockerfile:

```
docker build -t ac215-d1 -f Dockerfile .
```

- `docker build`: build the image
- `-t ac215-d1`: name (tag) the image `ac215-d1`
- `-f Dockerfile`: name of the Dockerfile; the trailing "." means use the current working directory as the build context

Run a Docker container using the image we built:

```
docker run --rm --name ac215-d1 -ti --entrypoint /bin/bash ac215-d1
```

- `docker run`: command to run a container
- `--rm`: automatically clean up the container and remove the file system when the container exits
- `--name ac215-d1`: name of the container
- `-ti`: 't' is to give us a terminal and 'i' is for interactive mode
- `--entrypoint /bin/bash`: default command to execute on startup
- `ac215-d1`: name of the image to use

Open another command prompt and check how many containers and images we have:

```
docker container ls
docker image ls
```

Exit from all containers and clear all images not referenced by any containers with `docker system prune --all`. Then check how many containers and images we have currently:

```
docker container ls
docker image ls
```

Docker Image as Layers

```bash
> docker build -t hello_world_cmd -f Dockerfile_cmd .
Sending build context to Docker daemon  34.3kB
Step 1/4 : FROM ubuntu:latest
latest: Pulling from library/ubuntu
54ee1f796a1e: Already exists
f7bfea53ad12: Already exists
46d371e02073: Already exists
b66c17bbf772: Already exists
Digest: sha256:31dfb10d52ce76c5ca0aa19d10b3e6424b830729e32a89a7c6eee2cda2be67a5
Status: Downloaded newer image for ubuntu:latest
Step 2/4 : RUN apt-get update
 ---> Running in e3e1a87e8d6e
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [107 kB]
Get:3 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [67.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [111 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [98.3 kB]
Get:6 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [231 kB]
Get:7 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Get:8 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1275 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [1078 B]
...
```
```bash
...
Step 3/4 : ENTRYPOINT ["/bin/echo", "Hello"]
 ---> Running in 52c7a98397ad
Removing intermediate container 52c7a98397ad
 ---> 7e4f8b0774de
Step 4/4 : CMD ["world"]
 ---> Running in 353adb968c2b
Removing intermediate container 353adb968c2b
 ---> a89172ee2876
Successfully built a89172ee2876
Successfully tagged hello_world_cmd:latest
```

Docker Image as Layers

```bash
> docker images
```

<table>
<thead>
<tr><th>REPOSITORY</th><th>TAG</th><th>IMAGE ID</th><th>CREATED</th><th>SIZE</th></tr>
</thead>
<tbody>
<tr><td>hello_world_cmd</td><td>latest</td><td>a89172ee2876</td><td>7 minutes ago</td><td>96.7MB</td></tr>
<tr><td>ubuntu</td><td>latest</td><td>4e2eef94cd6b</td><td>3 weeks ago</td><td>73.9MB</td></tr>
</tbody>
</table>

```bash
> docker image history hello_world_cmd
```

<table>
<thead>
<tr><th>IMAGE</th><th>CREATED</th><th>CREATED BY</th><th>SIZE</th><th>COMMENT</th></tr>
</thead>
<tbody>
<tr><td>a89172ee2876</td><td>8 minutes ago</td><td>/bin/sh -c #(nop) CMD [&quot;world&quot;]</td><td>0B</td><td></td></tr>
<tr><td>7e4f8b0774de</td><td>8 minutes ago</td><td>/bin/sh -c #(nop) ENTRYPOINT [&quot;/bin/echo&quot; &quot;…&quot;]</td><td>0B</td><td></td></tr>
<tr><td>cfc0c414a914</td><td>8 minutes ago</td><td>/bin/sh -c apt-get update</td><td>22.8MB</td><td></td></tr>
<tr><td>4e2eef94cd6b</td><td>3 weeks ago</td><td>/bin/sh -c #(nop) CMD [&quot;/bin/bash&quot;]</td><td>1.01MB</td><td></td></tr>
<tr><td>&lt;missing&gt;</td><td>3 weeks ago</td><td>/bin/sh -c mkdir -p /run/systemd &amp;&amp; echo 'do…'</td><td>7B</td><td></td></tr>
<tr><td>&lt;missing&gt;</td><td>3 weeks ago</td><td>/bin/sh -c set -xe &amp;&amp; echo '#!/bin/sh' &gt; /…</td><td>811B</td><td></td></tr>
<tr><td>&lt;missing&gt;</td><td>3 weeks ago</td><td>/bin/sh -c [ -z &quot;$(apt-get indextargets)&quot; ]</td><td>1.01MB</td><td></td></tr>
<tr><td>&lt;missing&gt;</td><td>3 weeks ago</td><td>/bin/sh -c #(nop) ADD file:9f937f4889e7bf646…</td><td>72.9MB</td><td></td></tr>
</tbody>
</table>

Why Layers?

Why build an image with multiple layers when we could just build it in a single layer? Let's take an example to explain this concept better: rebuild the Docker image from the Dockerfile_cmd we created.

```
> docker build -t hello_world_cmd -f Dockerfile_cmd .
Sending build context to Docker daemon  34.3kB
Step 1/4 : FROM ubuntu:latest
 ---> 4e2eef94cd6b
Step 2/4 : RUN apt-get update
 ---> Using cache
 ---> cfc0c414a914
Step 3/4 : ENTRYPOINT ["/bin/echo", "Hello"]
 ---> Using cache
 ---> 7e4f8b0774de
Step 4/4 : CMD ["world"]
 ---> Using cache
 ---> a89172ee2876
Successfully built a89172ee2876
Successfully tagged hello_world_cmd:latest
```

As you can see, the image was built using the existing layers from our previous Docker image builds. If some of these layers are used by other images, those can simply reuse the existing layer instead of recreating it from scratch.

Why use Containers?

- Imagine you are building a large, complex application (e.g. an online store)
- Traditionally you would build this using a monolithic architecture

Monolithic Architecture

(Diagram: browser and mobile apps communicate over HTML / REST / JSON with a single server containing the Storefront UI, Catalog, Reviews, and Orders modules, backed by one database; e.g. a Java application with an Oracle database.)

Monolithic Architecture - Advantages

Simple to **Develop, Test, Deploy** and **Scale**:
1. Simple to develop because all the tools and IDEs support such applications by default
2. Easy to deploy because all components are packed into one bundle
3. Easy to scale the whole application

Monolithic Architecture - Disadvantages

1. Very difficult to maintain
2. One component failure will cause the whole system to fail
3. Very difficult to create patches for a monolithic architecture
4. Adapting to new technologies is challenging
5. Takes a long time to start up because all the components need to get started

Applications have changed dramatically

A decade ago:
- Apps were monolithic
- Built on a single stack (e.g. .NET or Java)
- Long lived
- Deployed to a single server

Today:
- Apps are constantly being developed
- Built from loosely coupled components
- Newer versions are deployed often
- Deployed to a multitude of servers
- Data science apps are being integrated with various data types/sources and models

Today: Microservice Architecture

(Diagram: browser, mobile, and edge device apps communicate via HTML / REST / JSON with an API service; the Storefront UI, Catalog, Reviews, Orders, and Recommendation modules run as separate services, backed by a database, cloud store, and models.)

Software Development Workflow (no Docker)

- OS-specific installation on every developer machine
- Every team member moves code to source control
- The build server needs to be **installed** with all required software/frameworks
- The production build is performed by pulling code from source control
- The production server also needs to be **installed** with all required software/frameworks, and it may run a different OS version than the development machines

Software Development Workflow (with Docker)

- Development machines only need **Docker installed**
- **Containers** need to be set up only once
- Every team member moves code to source control (GitHub)
- The build server only needs Docker installed
- Docker images are built for a release and pushed to a container registry
- The production server only needs Docker installed; it pulls Docker images from the container registry and runs them (a sketch of this flow is shown below)
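As a minimal sketch of that release flow (the registry host, image name, and tag here are made-up placeholders):

```bash
# On the build server: build the release image and push it to the registry.
docker build -t registry.example.com/team/app:1.0 .
docker push registry.example.com/team/app:1.0

# On the production server: pull the image and run it in the background.
docker pull registry.example.com/team/app:1.0
docker run -d --name app registry.example.com/team/app:1.0
```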
Comparison

<table>
<thead>
<tr><th></th><th>Virtual ENV</th><th>Docker</th><th>VM</th><th>JH</th></tr>
</thead>
<tbody>
<tr><td>Computational Cost</td><td>LOW</td><td>MEDIUM</td><td>HIGH</td><td>?</td></tr>
<tr><td>Memory Footprint</td><td>LOW</td><td>LOW</td><td></td><td></td></tr>
<tr><td>Deployment</td><td>EASY</td><td>MEDIUM</td><td>INIT HIGH, THEN EASY</td><td>N/A</td></tr>
<tr><td>Versatility (Types of Apps)</td><td>MEDIUM-HIGH</td><td>MEDIUM-HIGH</td><td>MEDIUM-HIGH</td><td>LOW</td></tr>
<tr><td>Portability</td><td>MEDIUM</td><td>HIGH</td><td>HIGH</td><td>HIGH</td></tr>
</tbody>
</table>

- Computational Science
- DevOps
- Data Science (No Pipeline)
- Data Science (Pipelines)

THANK YOU
{"Source-Url": "https://harvard-iacs.github.io/2021-AC215/lectures/lecture7/presentation/lecture13.pdf", "len_cl100k_base": 4593, "olmocr-version": "0.1.50", "pdf-total-pages": 50, "total-fallback-pages": 0, "total-input-tokens": 56698, "total-output-tokens": 6355, "length": "2e12", "weborganizer": {"__label__adult": 0.0002112388610839844, "__label__art_design": 0.0003666877746582031, "__label__crime_law": 0.0001596212387084961, "__label__education_jobs": 0.0018768310546875, "__label__entertainment": 5.59687614440918e-05, "__label__fashion_beauty": 9.703636169433594e-05, "__label__finance_business": 0.0002446174621582031, "__label__food_dining": 0.00022983551025390625, "__label__games": 0.0004353523254394531, "__label__hardware": 0.0009946823120117188, "__label__health": 0.0001919269561767578, "__label__history": 0.00016069412231445312, "__label__home_hobbies": 0.00014543533325195312, "__label__industrial": 0.00031828880310058594, "__label__literature": 0.00015163421630859375, "__label__politics": 0.00010901689529418944, "__label__religion": 0.0002779960632324219, "__label__science_tech": 0.0160369873046875, "__label__social_life": 0.0001181960105895996, "__label__software": 0.02685546875, "__label__software_dev": 0.9501953125, "__label__sports_fitness": 0.00014102458953857422, "__label__transportation": 0.00025773048400878906, "__label__travel": 0.00014388561248779297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18103, 0.02147]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18103, 0.5976]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18103, 0.75719]], "google_gemma-3-12b-it_contains_pii": [[0, 135, false], [135, 280, null], [280, 506, null], [506, 777, null], [777, 965, null], [965, 1301, null], [1301, 1648, null], [1648, 1963, null], [1963, 2136, null], [2136, 2294, null], [2294, 2508, null], [2508, 2874, null], [2874, 3810, null], [3810, 4394, null], [4394, 4500, null], [4500, 5114, null], [5114, 5198, null], [5198, 5686, null], [5686, 6505, null], [6505, 6797, null], [6797, 6906, null], [6906, 6966, null], [6966, 7092, null], [7092, 7405, null], [7405, 7932, null], [7932, 8062, null], [8062, 8238, null], [8238, 8347, null], [8347, 9633, null], [9633, 10023, null], [10023, 11981, null], [11981, 12908, null], [12908, 13075, null], [13075, 13243, null], [13243, 13425, null], [13425, 13710, null], [13710, 14035, null], [14035, 14342, null], [14342, 14734, null], [14734, 15120, null], [15120, 15215, null], [15215, 15358, null], [15358, 15612, null], [15612, 16059, null], [16059, 16191, null], [16191, 16404, null], [16404, 16653, null], [16653, 17505, null], [17505, 18094, null], [18094, 18103, null]], "google_gemma-3-12b-it_is_public_document": [[0, 135, true], [135, 280, null], [280, 506, null], [506, 777, null], [777, 965, null], [965, 1301, null], [1301, 1648, null], [1648, 1963, null], [1963, 2136, null], [2136, 2294, null], [2294, 2508, null], [2508, 2874, null], [2874, 3810, null], [3810, 4394, null], [4394, 4500, null], [4500, 5114, null], [5114, 5198, null], [5198, 5686, null], [5686, 6505, null], [6505, 6797, null], [6797, 6906, null], [6906, 6966, null], [6966, 7092, null], [7092, 7405, null], [7405, 7932, null], [7932, 8062, null], [8062, 8238, null], [8238, 8347, null], [8347, 9633, null], [9633, 10023, null], [10023, 11981, null], [11981, 12908, null], [12908, 13075, null], [13075, 13243, null], [13243, 13425, null], [13425, 13710, 
null], [13710, 14035, null], [14035, 14342, null], [14342, 14734, null], [14734, 15120, null], [15120, 15215, null], [15215, 15358, null], [15358, 15612, null], [15612, 16059, null], [16059, 16191, null], [16191, 16404, null], [16404, 16653, null], [16653, 17505, null], [17505, 18094, null], [18094, 18103, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18103, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18103, null]], "pdf_page_numbers": [[0, 135, 1], [135, 280, 2], [280, 506, 3], [506, 777, 4], [777, 965, 5], [965, 1301, 6], [1301, 1648, 7], [1648, 1963, 8], [1963, 2136, 9], [2136, 2294, 10], [2294, 2508, 11], [2508, 2874, 12], [2874, 3810, 13], [3810, 4394, 14], [4394, 4500, 15], [4500, 5114, 16], [5114, 5198, 17], [5198, 5686, 18], [5686, 6505, 19], [6505, 6797, 20], [6797, 6906, 21], [6906, 6966, 22], [6966, 7092, 23], [7092, 7405, 24], [7405, 7932, 25], [7932, 8062, 26], [8062, 8238, 27], [8238, 8347, 28], [8347, 9633, 29], [9633, 10023, 30], [10023, 11981, 31], [11981, 12908, 32], [12908, 13075, 33], [13075, 13243, 34], [13243, 13425, 35], [13425, 13710, 36], [13710, 14035, 37], [14035, 14342, 38], [14342, 14734, 39], [14734, 15120, 40], [15120, 15215, 41], [15215, 15358, 42], [15358, 15612, 43], [15612, 16059, 44], [16059, 16191, 45], [16191, 16404, 46], [16404, 16653, 47], [16653, 17505, 48], [17505, 18094, 49], [18094, 18103, 50]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18103, 0.04795]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
46354240ae310aeb696a8acd72105e16dcebca41
Simulation of Two-Way Pushdown Automata Revisited

Robert Glück
DIKU, Dept. of Computer Science, University of Copenhagen
gluck@acm.org

Published in: Semantics, Abstract Interpretation, and Reasoning about Programs. DOI: 10.4204/EPTCS.129.15. Publication date: 2013. Document version: publisher's PDF (version of record). Document license: CC BY.

Dedicated to David A. Schmidt on the Occasion of his 60th Birthday

The linear-time simulation of 2-way deterministic pushdown automata (2DPDA) by the Cook and Jones constructions is revisited. Following the semantics-based approach by Jones, an interpreter is given which, when extended with random-access memory, performs a linear-time simulation of 2DPDA. The recursive interpreter works without the dump list of the original constructions, which makes Cook's insight into the linear-time simulation of exponential-time automata more intuitive and the complexity argument clearer. The simulation is then extended to 2-way nondeterministic pushdown automata (2NPDA) to provide for a cubic-time recognition of context-free languages. The time required to run the final construction depends on the degree of nondeterminism. The key mechanism that enables the polynomial-time simulations is the sharing of computations by memoization.

1 Introduction

We revisit a result from theoretical computer science from a programming perspective. Cook's surprising theorem [4] showed that two-way deterministic pushdown automata (2DPDA) can be simulated faster on a random-access machine (in linear time) than they may run natively (in exponential time). This insight was utilized by Knuth [8] to find a linear-time solution for the left-to-right pattern-matching problem, which can easily be expressed as a 2DPDA: "This was the first time in Knuth's experience that automata theory had taught him how to solve a real programming problem better than he could solve it before." [8, p. 339]

Cook's original construction from 1971 is obscured by the fact that it does not follow the control flow of a pushdown automaton running on some input. It traces all possible flows backward, thereby examining many unreachable computation paths, which makes the construction hard to follow. Jones clarified the essence of the construction using a semantics-based simulator that interprets the automaton in linear time while following the control flow forward, thereby avoiding unreachable branches [6]. The simulator models the symbol stack of the automaton on its call stack using recursion in the meta-language and maintains a local list of surface configurations (dump list) to record their common terminator in a table when a pop-operation is simulated.

We follow Jones' semantics-based approach and give a simplified recursive simulator that does not require a local dump list and captures the essence of Cook's speedup theorem in a (hopefully) intuitive and easy-to-follow form. Furthermore, we then extend the construction from a simulation of deterministic automata to a simulation of two-way nondeterministic pushdown automata (2NPDA). The simulations are all realized by deterministic computation on a random-access machine.

Even though some time has passed since the theorem was originally stated, it continues to inspire studies in complexity theory and on the computational power of more practical programming paradigms, such as subclasses of imperative and functional languages (e.g.
[2, 3, 7, 10]). It therefore appears worthwhile to capture the computational meaning of this classic result in clear and simple terms from a programming perspective. It is hoped that the progression from simple interpreters to simulators with memoization and termination detection makes these fundamental theoretical results more accessible.

We begin with a simple interpreter for two-way deterministic pushdown automata (Sect. 2) that we extend to simulate deterministic PDA in linear time (Sect. 3). We then introduce a nondeterministic choice operator (Sect. 4) and show the simulation of nondeterministic PDA (Sect. 5).

2 Deterministic PDA Interpreter

A two-way deterministic pushdown automaton (2DPDA) consists of a finite-state control attached to a stack and an input tape with one two-way read-only head [4]. The state p, the symbol read at head position i, and the symbol A on top of the stack determine the next action for a given tape, which is the automaton's input. Only when the stack top is popped does the symbol below the top become relevant for the following computation. The set of states, the set of input symbols and the set of stack symbols are fixed for an automaton. A transition function chooses the next action depending on the current surface configuration c = (p,i,A), shortly referred to as configuration. The instantaneous description (c, stack-rest) of an automaton includes the current configuration c and the stack below the top symbol A. The automaton can push and pop stack symbols, and perform an operation op that modifies the current configuration without pushing or popping (e.g., move to a new tape position). The stack bottom and the left and right tape ends are marked by distinguished symbols. The head position i in a configuration (p,i,A) is always kept within the tape bounds, and one can determine an empty stack.

The automaton answers decision problems. It is said to accept an input if, when started in initial state p0 with an empty stack and the head at the left endmarker, it terminates with accept, an empty stack and the head at the right endmarker. It can also just halt with an empty stack without accepting an input. In the exposition below we tacitly assume some fixed input tape.

**Termination.** A configuration in which a pop-operation occurs is a terminator [4]. Every configuration c in a terminating computation has a unique terminator, that is, the earliest terminator reached from c that returns the stack below the height at c. This case is illustrated below (i): d is the terminator of c. Terminator d can be viewed as the result of configuration c. Configuration c will always lead to d regardless of what is on the stack below. A configuration that accepts or halts the automaton is also a terminator.

If a configuration c is met again before the terminator is reached, which means that the stack never returned below the level at which c occurred for the first time, then the automaton is in an infinite loop. The second occurrence of c will eventually lead to a third occurrence of c, ad infinitum. The only two possible situations are illustrated below: either c repeats at the same level of the stack (ii) or at a higher level after some stack-operations have been performed (iii). In both cases, the contents of the stack below c (shaded) is untouched and irrelevant to the computation: c will always lead to an infinite loop.
[Figure: stack-height diagrams (i)-(iii) showing (i) a terminator d reached from configuration c, and the two looping situations in which c repeats (ii) at the same stack level or (iii) at a higher stack level.]

**Running Time.** The number of configurations that an automaton can enter during a computation depends on the input tape. The states and symbols are fixed for an automaton. The number of head positions on the input tape is bounded by the length of the input tape. The number of configurations is therefore linear in the length of the input tape, \( n = O(|\text{tape}|) \). We remark that the number of configurations of an automaton with \( k \) independent heads on the input tape is \( n = O(|\text{tape}|^k) \). The \( k \) head positions are easily accommodated by configurations of the form \( c = (p, i_1, \ldots, i_k, A) \). An automaton can carry out an exponential number of steps before it terminates. For example, an automaton that during its computation forms all stacks consisting of \( n \) zeros and ones takes \( O(2^n) \) steps.

**Interpreter.** Figure 1 shows the interpreter for 2DPDA written in the style of an imperative language with recursion and call-by-value semantics. The interpreter Int can be run on a random-access machine (RAM). A call Int(c) = d computes the terminator d of a configuration c, where pop(d). There is no symbol stack and no loop in the interpreter. All operations are modeled on the call stack of the implementation language by recursive calls to the interpreter. A recursive call takes constant time, thus a call stack just adds a constant-time overhead compared to a data stack. Statements **accept** and **halt** stop the interpreter and report whether the input was accepted or not. The automaton is assumed to be correct and no special checks are performed by the interpreter.

We will now discuss the interpreter in more detail. It is the basis for the three interpreters and simulators in the following sections. In the interpreter we abstract from the concrete push-, op- and pop-operations. We define predicates push(c), op(c), pop(c), accept(c), halt(c) to be true if a configuration c causes the corresponding operation in the automaton. Their actual effect on a configuration is not relevant as long as the next configuration can be determined by the built-in operations next and follow. We let next(c) be the operation that yields in one step the next configuration, if op(c) or push(c), and follow(c, d) be the operation that yields in one step the next configuration given c and d, if pop(d). Each of these operations takes constant time, including next and follow that calculate the next configuration.

In case a configuration c causes a pop-operation, that is, pop(c) is true in the cond-statement (Fig. 1), c is a terminator and the interpreter returns it as result. If a configuration c causes a push-operation, that is, push(c) is true, first the terminator of the next configuration is calculated by Int(next(c)) = d. The terminator always causes a pop-operation, and interpretation continues at configuration follow(c, d) which follows from c and terminator d. In case op(c) is true, that is, the operation neither pushes nor pops, the terminator of c is equal to the terminator Int(next(c)) of the next configuration. The effect of the operations on the configurations and the call stack can be summarized as follows.
    push(c):  c = (p,i,A)  -->  (q,j,B) = next(c)        new symbol pushed on top of A
    op(c):    c = (p,i,A)  -->  (q,j,B) = next(c)        stack height unchanged
    pop(d):   d = (q,j,B)  -->  (r,k,C) = follow(c,d)    top popped; continues after the push at c

(1) The conventional 'pop' just removes the top symbol from the stack. Our generalization that defines the next configuration by follow(c, d) does not affect the complexity arguments later and is convenient from a programming language perspective.

    procedure Int(c: conf): conf;
      cond
        push(c):   d := Int(follow(c, Int(next(c))));
        op(c):     d := Int(next(c));
        pop(c):    d := c;
        halt(c):   halt;
        accept(c): accept;
      end;
      return d

    Figure 1: A recursive interpreter for deterministic PDA.

A push-operation may, for example, push a constant symbol B onto the stack or duplicate the current top A. Likewise, an op-operation may replace the current top A by a new top B, but without pushing or popping the stack, and move the tape head by changing position i into j. A pop-operation may just remove the stack top A to uncover B below, or replace the uncovered symbol by a symbol C depending on A and B. The abstract pop-operation covers many common binary stack-operations familiar from stack programming languages (e.g., it may choose from symbols A and B the one that is smaller according to some order). Depending on the concrete set of binary operators and stack symbols, this makes it possible to express a number of interesting functions as pushdown automata.

**Properties.** The body of the interpreter contains no loop, only sequential statements. The time it takes to execute each of the statements is bounded by a constant (ignoring the time to evaluate a recursive call to a result). No side-effects are performed and no additional storage is used except for the local variable d. Even though written in an imperative style, the interpreter is purely functional. It terminates if and only if the pushdown automaton terminates on the same input. The correctness of the interpreter should be evident as it merely interprets the automaton one step at a time. Note the simplicity of the construction by recursively calling the interpreter for each action of the automaton. Also an op-operation that does not change the height of the symbol stack converts into a (tail-recursive) call on the call stack. In a terminating computation, no call Int(c) can lead to a second call Int(c) as long as the first call has not returned a result, which means that it is still on the call stack. If a second call Int(c) occurs while the first one is still on the call stack, the interpreter is in an infinite recursion. As a consequence, in a terminating computation the height of the call stack is bounded by n, the number of configurations, and the same call stack cannot repeat during the interpretation. After exhausting all possible call stacks of height up to n, that is, all permutations of up to n configurations, the interpreter must terminate, that is, within O(n^n) steps. The interpreter can have a running time exponential in the number of configurations.
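To make the construction concrete, here is a minimal Python rendering of the interpreter of Figure 1. The MirrorAutomaton below is a hypothetical toy 2DPDA (recognizing tapes of the form < w # reverse(w) >) invented purely for illustration; it is not an example from the paper, and its tape and state conventions are my own assumptions.

```python
import sys

sys.setrecursionlimit(100_000)  # the call stack models the symbol stack

class Accept(Exception): pass
class Halt(Exception): pass

class MirrorAutomaton:
    """Hypothetical toy 2DPDA for tapes '<' + w + '#' + reverse(w) + '>':
    state 'load' pushes the symbols of w, state 'match' pops while comparing."""
    def __init__(self, tape): self.t = tape
    def push(self, c):   p, i, A = c; return p == 'load' and self.t[i+1] not in '#>'
    def op(self, c):     p, i, A = c; return p == 'load' and self.t[i+1] == '#'
    def pop(self, c):    p, i, A = c; return p == 'match' and A != '$' and self.t[i] == A
    def accept(self, c): p, i, A = c; return p == 'match' and A == '$' and self.t[i] == '>'
    def halt(self, c):   return not (self.push(c) or self.op(c) or self.pop(c) or self.accept(c))
    def next(self, c):
        p, i, A = c
        # push: read and push the next tape symbol; op: jump past '#'
        return ('load', i+1, self.t[i+1]) if self.push(c) else ('match', i+2, A)
    def follow(self, c, d):
        (_, _, A), (_, j, _) = c, d      # uncover the pusher's top symbol, advance head
        return ('match', j+1, A)

def interpret(aut, c):
    """Figure 1: compute the terminator of c, modeling the symbol stack
    on the call stack by recursion."""
    if aut.push(c):   return interpret(aut, aut.follow(c, interpret(aut, aut.next(c))))
    if aut.op(c):     return interpret(aut, aut.next(c))
    if aut.pop(c):    return c
    if aut.halt(c):   raise Halt
    if aut.accept(c): raise Accept

def accepts(tape):
    try:
        interpret(MirrorAutomaton(tape), ('load', 0, '$'))  # '$' is the stack bottom
    except Accept:
        return True
    except Halt:
        return False
    return False

assert accepts('<ab#ba>') and accepts('<#>') and not accepts('<ab#ab>')
```

Note how faithfully the recursion mirrors the cond-statement: each push becomes a nested call, and the value returned by a call is exactly the terminator of its argument configuration.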
3 Linear-Time Simulation of Deterministic PDA

The 2DPDA-interpreter in Fig. 1 is purely functional and has no persistent storage. Each time the terminator d of a configuration c is computed and the same configuration is reached again, the terminator has to be recomputed by a call Int(c), which means the entire subcomputation is repeated.

To store known terminators and to share them across different procedure invocations, we extend the interpreter with memoization [9]. This straightforward extension gives a linear-time simulation of 2DPDA. The sharing of terminators is the reason why Cook's speedup theorem works.

    procedure Sim(c: conf): conf;
      if defined(T[c]) then return T[c];   /* find shortcut */
      cond
        push(c):   d := Sim(follow(c, Sim(next(c))));
        op(c):     d := Sim(next(c));
        pop(c):    d := c;
        halt(c):   halt;
        accept(c): accept;
      end;
      T[c] := d;                           /* memoize result */
      return d

    Figure 2: A linear-time simulator for deterministic PDA.

**RAM extension.** Figure 2 shows the interpreter with memoization, called simulator. It works in the same way as the interpreter except that each time before a call Sim(c) returns the terminator d of c, the terminator is stored in a table T by the assignment T[c] := d. The next time the terminator is needed, it can be retrieved from T, avoiding its recomputation. Terminators are now shared dynamically at runtime and over the entire simulation. Table T can be implemented as a one-dimensional array indexed by configurations and can hold one terminator for each of the n configurations that can occur during a computation. All table entries are initially undefined. It is easy to see that the shortcut (if-statement) and the memoization (table assignment) do not change the result of the automaton. Storing and retrieving a terminator takes constant time on a RAM (see Cook for a charged RAM model instead of a unit-cost model [4]). This "automatic storage management" also means that many terminators are recorded during a computation that are not needed later, but we shall see that this does not affect the linearity argument. A more thorough analysis would surely reveal that memoization points are only required at a few places in an automaton (cf. [3,10]).

**Linear-time simulation.** In a terminating computation, before a second call Sim(c) is made, the first call must have returned and stored the terminator d of c at T[c]. Once the terminator is known, it need not be recomputed and can be fetched from the table. Hence, the cond-statement, which is guarded by a lookup in T, is executed at most once for any c. Recursive calls to the simulator occur only from within the cond-statement, namely one call if op(c) and two calls if push(c). Consequently, Sim can be called at most 2n times, where n is the number of possible configurations. This also limits how often the if-statement guarding the cond-statement is executed. Hence, the total number of execution steps during a terminating simulation is bounded linearly by n. Recall that n is linear in the length of the input tape, n = O(|tape|). This concludes the argument for the linear-time simulation of a 2DPDA on a RAM.

**Discussion.** Deterministic pushdown automata are the accepting device for deterministic context-free languages. More precisely, deterministic context-free languages are exactly recognized by 1-way deterministic pushdown automata (1DPDA), that is, deterministic pushdown automata that never move their head to the left on the input. The LR grammar of a deterministic context-free language is easy to convert into a 1DPDA (e.g. [5]). Thus, recognition of this subclass of context-free languages using the memoizing simulator Sim (Fig. 2) takes at most linear time (as does the classic LR-parsing algorithm by Knuth). In the following we extend the simulator to recognize all context-free languages in cubic time. The method by Aho et al. [1] requires O(n^2) time for simulating 2DPDA, a result which was then strengthened to O(n) by Cook [4]. Both methods work bottom-up. In contrast, the simulator Sim works top-down following the forward control flow, as does the one by Jones [6]. It clearly shows that the key mechanism that turns a recursive pushdown interpreter into a linear-time simulator is memoization.
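For concreteness, here is how Figure 2 might look in the same Python setting as the earlier sketch; the dictionary T plays the role of the table, and configurations are hashable tuples. This is a sketch, not the paper's code.

```python
def simulate(aut, c, T=None):
    """Figure 2: the interpreter of Figure 1 extended with memoization.
    T maps a configuration to its terminator, so the cond-branches are
    executed at most once per configuration."""
    if T is None:
        T = {}
    if c in T:                      # find shortcut
        return T[c]
    if aut.push(c):
        d = simulate(aut, aut.follow(c, simulate(aut, aut.next(c), T)), T)
    elif aut.op(c):
        d = simulate(aut, aut.next(c), T)
    elif aut.pop(c):
        d = c
    elif aut.halt(c):
        raise Halt
    else:                           # accept(c)
        raise Accept
    T[c] = d                        # memoize result
    return d
```

In Python one could obtain the same effect with functools.lru_cache, but an explicit table mirrors the constant-time-per-lookup argument on a RAM more directly.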
4 Interpretation of Nondeterministic PDA

In a two-way nondeterministic pushdown automaton (2NPDA) the computation path is not uniquely determined. A deterministic automaton can be made nondeterministic by introducing an operation choose that allows the automaton to select either of two computation paths in a configuration c (cf. [7]). This means that a configuration no longer has a unique terminator, but a set of possible terminators. We let nextleft(c) and nextright(c) be the abstract operations that yield in one step the two next configurations that are possible if choose(c). For simplicity, the new operation can neither push nor pop stack symbols. With a choose-operation two transitions are possible:

    choose(c):  c = (p,i,A)  -->  (q,j,B) = nextleft(c)
                             -->  (r,k,C) = nextright(c)

A nondeterministic automaton is said to accept an input if it has at least one accepting computation when started in the initial state p0 with an empty stack and the head at the left tape end. It has the ability to guess the right choice that leads to the shortest accepting computation. In an interpreter this "angelic nondeterminism" can be thought of as searching through a tree of all possible computation sequences, some of which may be infinite or non-accepting, to find at least one accepting sequence. Branching in the computation tree is due to nondeterministic choose-operations in the automaton.

**Interpreter.** The interpreter for nondeterministic PDA that can be run on a RAM is shown in Fig. 3.

    procedure Int(c: conf): confset;
      if visited(T[c]) then return {};     /* detect infinite branch */
      T[c] := Visited;                     /* mark configuration */
      cond
        push(c):   d := ∪ Int(follow(c,e)) where e ∈ Int(next(c));
        op(c):     d := Int(next(c));
        choose(c): d := Int(nextleft(c)) ∪ Int(nextright(c));
        pop(c):    d := {c};
        halt(c):   d := {};
        accept(c): accept;
      end;
      T[c] := Undef;                       /* unmark configuration */
      return d

    Figure 3: A recursive interpreter for nondeterministic PDA.

Two main changes to the original interpreter in Fig. 1 are necessary to accommodate the "guessing": (1) a set of terminators instead of a single terminator is returned, and (2) a termination check ("seen before") stops interpretation along an infinite computation sequence. We detail the two modifications below.

1. **Terminator sets**: A choose-operation requires the union of the terminator sets obtained from the two next configurations, nextleft(c) and nextright(c). In case of a push-operation, and this is the most involved modification, each configuration e in the terminator set obtained by the inner call Int(next(c)) must be followed by an outer call. The big set union used for this purpose is a shorthand for a while-loop over the inner terminator set. A pop-operation now returns a singleton set {c} instead of c. Finally, instead of making a full stop at a halt-operation, an empty set is returned in order not to miss an accepting computation along another possible branch.
2. **Termination check**: As discussed before, non-termination occurs when the interpreter is called a second time with the same configuration c as argument while the first call has not yet returned. This situation can be detected by marking c in a table when a call Int(c) is made and unmarking c when the call returns. If a call with a marked c as argument is made, an infinite computation is detected and the interpreter returns an empty terminator set. The same table T as before can be used, but it can now hold the additional value Visited. Initially all table entries are set to Undef.

The cardinality of a terminator set is bounded by n, the number of configurations that can occur in a computation. The most costly set operation in the interpreter is the union of terminator sets. Assuming a suitable choice of data structures (2), a union operation takes time linear in the total cardinality of the sets, that is, the union of two sets with cardinalities u and v takes time O(u + v). All remaining set-operations needed in the interpreter are straightforward and take constant time: creating a set (empty, singleton), and picking and removing an arbitrary element from a set (in the set comprehension). In the discussion below we assume such an implementation of the set operations.

A choose-operation, which unites two terminator sets each of cardinality up to n, takes linear time O(n). A push-operation, where the inner call Int(next(c)) returns a set of at most n terminators, each of which, when followed by the outer call Int(follow(c,e)), can again return a set of up to n terminators, requires the union of n sets each of cardinality up to n, which then takes quadratic time O(n^2). This is the most expensive set-operation in the cond-statement.

In the case of a deterministic automaton, that is, an automaton without choose-operation, the new interpreter in Fig. 3 operates with singleton sets only, and the set-operations introduce at most a constant-time overhead compared to the original interpreter in Fig. 1. This is useful because the new interpreter "falls back" to its original behavior and, except for a constant-time overhead, there is no penalty in using it to run deterministic PDA; as an extra benefit, it always terminates.

There is a major pitfall. If a nondeterministic automaton is left-recursive, then the termination check may stop left-recursion too early and miss useful branches contributing to a terminator set. In the case of 1NPDA there always exists a non-left-recursive version (presumably the same holds for 2NPDA). Alternatively, one might bound the unfolding of a left-recursion in terms of the input, assuming some normal-form automaton (the termination check in Fig. 3 limits left-recursion unfolding to one).

(2) A straightforward implementation of such a set data structure might be a Boolean array of length n to indicate membership of a configuration c in a set, together with an unsorted list of all configurations contained in that set.
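As a concreteness check of footnote (2), here is a Python sketch of such a terminator-set structure. It assumes configurations have been numbered 0..n-1; the numbering itself is an assumption of the sketch, not part of the paper's construction.

```python
class ConfSet:
    """Terminator set per footnote (2): a membership bitmap over the n
    possible configurations plus an unsorted element list, giving O(1)
    insertion/membership and O(u + v) union."""
    def __init__(self, n):
        self.member = [False] * n   # indexed by a configuration's number
        self.elems = []             # unsorted list of contained configurations
    def add(self, c):
        if not self.member[c]:
            self.member[c] = True
            self.elems.append(c)
    def union(self, other):
        # Time linear in len(self.elems) + len(other.elems).
        out = ConfSet(len(self.member))
        for c in self.elems:
            out.add(c)
        for c in other.elems:
            out.add(c)
        return out
```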
5 Cubic-Time Simulation of Nondeterministic PDA

To turn the new interpreter (Fig. 3) into a fast simulator (Fig. 4) we use the same memoization method as in Sect. 3. The use of table T parallels its use in the deterministic case, except that for each of the n possible configurations the table can now hold a set of up to n terminators and the value Visited.

    procedure Sim(c: conf): confset;
      if defined(T[c]) then return T[c];   /* find shortcut */
      if visited(T[c]) then return {};     /* detect infinite branch */
      T[c] := Visited;                     /* mark configuration */
      cond
        push(c):   d := ∪ Sim(follow(c,e)) where e ∈ Sim(next(c));
        op(c):     d := Sim(next(c));
        choose(c): d := Sim(nextleft(c)) ∪ Sim(nextright(c));
        pop(c):    d := {c};
        halt(c):   d := {};
        accept(c): accept;
      end;
      T[c] := d;                           /* memoize result */
      return d

    Figure 4: A cubic-time simulator for nondeterministic PDA.

The body of the simulator is again guarded by an if-statement (first line) that returns the terminator set of a configuration c if it is available in table T. Otherwise, and if no infinite computation path is detected, c is marked as Visited in T and its terminator set is computed. Before returning, terminator set d of c is stored in T, which overwrites the mark Visited. The cond-statement is executed at most once for each configuration. The mark Visited is only needed the first time the procedure is called, when the table does not yet contain a terminator set for c. Thus, the same table can be used for marking configurations and for storing their terminator sets. A terminator set may be empty if none of the branches rooted in c is accepting. Otherwise, the interpreter is unchanged.

**Cubic-time simulation.** As before, the cond-statement is executed at most once for each of the n configurations due to the guards at the beginning of Sim. Up to n + 1 calls to Sim may occur in the case of a push-operation, namely one inner call and at most n outer calls, one for each e ∈ Sim(next(c)). Hence, Sim can be called at most O(n^2) times during a simulation. This also limits how often the if-statements guarding the cond-statement are executed. In the cond-statement, as before, the simulation of the op-, pop-, halt- and accept-operations takes constant time, O(1). The union of two sets of at most n terminators in the case of a choose-operation may take linear time, O(n). The union of the terminator sets in a push-operation is the most costly operation and may take quadratic time, O(n^2). A push is simulated at most once per execution of a cond-statement, which is at most n times. Hence, the total number of execution steps during a simulation is cubic in the number of configurations, O(n^3). Recall that n is linear in the length of the input tape, \( n = O(|\text{tape}|) \). This ends the argument for the cubic-time simulation of (non-left-recursive) 2NPDA on a RAM.

**Discussion.** We observe that the "complexity generator" in the cond-statement is not the choose-operation, even though it introduces two computation branches, but rather the handling of up to n continuations and the union of their terminator sets in the case of a push-operation. If the cardinality of each terminator set that can occur during a simulation is bounded by a constant, that is, not dependent on the input, the simulation time is linear in the input as before. Deterministic automata, where the cardinality of each terminator set is at most one, and a class of nondeterministic automata, where the cardinality is bounded by some k, are all simulated in linear time by Sim. The top-down method is useful because the same simulator runs them in the time corresponding to their degree of nondeterminism.
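A Python sketch of Figure 4, in the style of the earlier sketches: it assumes the abstract-automaton interface extended with choose/nextleft/nextright, reuses the Accept exception from the first sketch, and uses frozensets for terminator sets (simpler to write down, though their unions do not meet the O(u + v) bound of the ConfSet structure above).

```python
VISITED = object()  # sentinel: configuration is still on the call stack

def simulate_npda(aut, c, T=None):
    """Figure 4: memoizing simulator returning the set of terminators of c."""
    if T is None:
        T = {}
    cached = T.get(c)
    if cached is VISITED:
        return frozenset()                         # detect infinite branch
    if cached is not None:
        return cached                              # find shortcut
    T[c] = VISITED                                 # mark configuration
    if aut.push(c):
        inner = simulate_npda(aut, aut.next(c), T)
        d = frozenset().union(*(simulate_npda(aut, aut.follow(c, e), T)
                                for e in inner))
    elif aut.op(c):
        d = simulate_npda(aut, aut.next(c), T)
    elif aut.choose(c):
        d = (simulate_npda(aut, aut.nextleft(c), T)
             | simulate_npda(aut, aut.nextright(c), T))
    elif aut.pop(c):
        d = frozenset([c])
    elif aut.halt(c):
        d = frozenset()                            # keep searching other branches
    else:                                          # accept(c)
        raise Accept
    T[c] = d                                       # memoize, overwriting the mark
    return d
```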
One-way nondeterministic pushdown automata (1NPDA) are the accepting device for context-free languages. Every context-free language has a grammar without left recursion, and it is straightforward to convert such a grammar into a 1NPDA. This means that recognition of context-free languages using the simulator (Fig. 4) has the same worst-case time complexity as the classic parsing algorithms that can handle the full class of context-free languages (Earley, Cocke-Younger-Kasami), that is, \( O(|\text{string}|^3) \). As discussed before, the performance of the simulator is determined by the degree of nondeterminism in the automaton. Recognition of deterministic context-free languages using the simulator takes, again, at most linear time. In practice, of course, specialized parsing algorithms will have better run times (due to the constant term hidden in the O-notation) and use less space than the recursive simulator. Again, the mechanism that enables polynomial-time simulation is the sharing of computations by memoization.

Acknowledgements. Thanks are due to Nils Andersen, Holger Bock Axelsen, Julia Lawall, Torben Mogensen, Chung-chieh Shan, and the anonymous reviewers for various insightful comments, to Neil D. Jones for pointing the author to Cook's construction, and to Akihiko Takano for providing the author with excellent working conditions at the National Institute of Informatics, Tokyo.
{"Source-Url": "https://static-curis.ku.dk/portal/files/180850485/Gl_ck_2013_Simulation_of_two_way.pdf", "len_cl100k_base": 6697, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 29886, "total-output-tokens": 8119, "length": "2e12", "weborganizer": {"__label__adult": 0.0005979537963867188, "__label__art_design": 0.0004794597625732422, "__label__crime_law": 0.000545501708984375, "__label__education_jobs": 0.0008311271667480469, "__label__entertainment": 0.00017058849334716797, "__label__fashion_beauty": 0.0002884864807128906, "__label__finance_business": 0.00032258033752441406, "__label__food_dining": 0.0006537437438964844, "__label__games": 0.00109100341796875, "__label__hardware": 0.0018157958984375, "__label__health": 0.0010938644409179688, "__label__history": 0.00049591064453125, "__label__home_hobbies": 0.0001512765884399414, "__label__industrial": 0.0008382797241210938, "__label__literature": 0.0008368492126464844, "__label__politics": 0.0005135536193847656, "__label__religion": 0.0009756088256835938, "__label__science_tech": 0.1708984375, "__label__social_life": 0.0001405477523803711, "__label__software": 0.006183624267578125, "__label__software_dev": 0.80908203125, "__label__sports_fitness": 0.0004642009735107422, "__label__transportation": 0.0011157989501953125, "__label__travel": 0.00027489662170410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31546, 0.01739]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31546, 0.46767]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31546, 0.86073]], "google_gemma-3-12b-it_contains_pii": [[0, 667, false], [667, 3811, null], [3811, 7579, null], [7579, 11404, null], [11404, 14611, null], [14611, 17882, null], [17882, 20648, null], [20648, 25061, null], [25061, 28275, null], [28275, 31546, null]], "google_gemma-3-12b-it_is_public_document": [[0, 667, true], [667, 3811, null], [3811, 7579, null], [7579, 11404, null], [11404, 14611, null], [14611, 17882, null], [17882, 20648, null], [20648, 25061, null], [25061, 28275, null], [28275, 31546, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31546, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31546, null]], "pdf_page_numbers": [[0, 667, 1], [667, 3811, 2], [3811, 7579, 3], [7579, 11404, 4], [11404, 14611, 5], [14611, 17882, 6], [17882, 20648, 7], [20648, 25061, 8], [25061, 28275, 9], [28275, 31546, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31546, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
35c2b4fdfe9e9c0ab212c0ff82aa8b6bc7992cc7
An Efficient Approach based on BERT and Recurrent Neural Network for Multi-turn Spoken Dialogue Understanding

Weixing Xiong, Li Ma and Hongtao Liao
Ubtech Robotics Corp, Nanshan I Park, No.1001 Xueyuan Road, Shenzhen, China
weixing.xiong@ubtrobot.com, m201570013@outlook.com

Keywords: BERT, Recurrent Neural Network, Multi-turn, Spoken Dialogue Understanding.

Abstract: The main challenge of Spoken Language Understanding (SLU) is how to efficiently parse natural language into meanings that can be processed by computers, such as topic intents, acts and slot-value pairs. In multi-turn dialogues, combining context information is necessary to understand the user's objectives and to avoid ambiguity. An approach for processing multi-turn dialogues, based on the combination of BERT encoding and a hierarchical RNN, is proposed in this paper. More specifically, it combines the current user utterance with each historical utterance to form an input to the BERT module, which extracts the semantic relationship; it then uses a model derived from the hierarchical RNN for the understanding of intents, acts and slots. In our experiments on the multi-turn dialogue datasets Sim-R and Sim-M, this approach achieved about 5% improvement in FrameAcc compared with models such as MemNet and SDEN.

1 INTRODUCTION

In a task-oriented dialogue, all the information needed for a specific task or purpose may not be given in a single-turn expression by the user's utterance. It is necessary to engage in a multi-turn dialogue to obtain the mandatory information. The main function of an SLU module is to efficiently extract, from the user's input, the intents, acts and slot-value pairs (Dilek Hakkani-Tur et al., 2016).

Figure 1: An example semantic frame with slot, intent and dialogue act annotations, following the IOB tagging scheme.

\begin{verbatim}
U1: I need to make reservation for 6 pm.
S1: What date and restaurant would you like?
U2: Next Monday at Il Fornaio.
Slot:   B-date I-date O B-rest I-rest O
Intent: reserve_restaurant
Dialogue Acts: inform(date=next monday), inform(rest=Il Fornaio)
\end{verbatim}

Figure 1 shows the semantic frame information of a task-oriented dialogue, in which the word slots are represented in the general IOB format. In a real dialogue, all necessary information is specified along the dialogue flow. Let us continue the above example:

S2: "How many people will attend the dinner?"
U3: "5."

The user utterance "5" then corresponds to the entity category "B-#people" in the context of the restaurant-booking task. The difficulty is how a computer system can analyze all this information to extract the intents, acts and slots. Models such as those proposed by P. Xu and R. Sarikaya, 2013, B. Liu et al., 2016, or Zhang et al., 2016, jointly model intents and slots with an RNN, but they do not take the necessary context information into account. MemNet, proposed by S. Sukhbaatar et al., 2015; Chen et al., 2016, and SDEN, proposed by Ankur Bapna et al., 2017, Raghav et al., 2018, can effectively take the historical context into account to understand semantic frames by encoding information as sentence vectors with a GRU. However, in MemNet and SDEN, each sentence needs to be encoded as a single vector, which leads to the loss of lexical-granularity information when analysing the relationship between the current sentence and the historical inputs.
In this paper, we propose a model based on BERT (1) (Jacob Devlin et al., 2018) and a hierarchical RNN. We successively encode the current user utterance together with the historical dialogue turns as inputs to the BERT module, and then, to encode the memory from the context, we use a modified hierarchical RNN to process the outputs of the BERT module.

There are three main aspects to our contribution. Firstly, by using the BERT model, the attention of adjacent words is introduced for word and sentence embedding. Secondly, by concatenating the current utterance with each historical utterance, the model can compute attention with other turns of utterance when performing BERT encoding. Thirdly, by using a modified hierarchical RNN to process the outputs of the BERT module, information from the context can be encoded more effectively.

The following sections of this paper are organized as follows: in Section 2 we describe the general architecture of our model; in Section 3 we list the experimental results and analyze them; in the last section we conclude and discuss.

(1) The open-source BERT implementation based on PyTorch is available at https://github.com/huggingface/pytorch-pretrained-BERT. Note that the pre-trained BERT has two versions; in our experiments we use the base version.

An utterance \( u_n \) is represented as a sequence of word tokens:

\[ u_n = \{ w_{u_n}^{1}, w_{u_n}^{2}, \cdots, w_{u_n}^{\mathrm{len}(u_n)} \} \quad (1) \]

where \( w_{u_n}^{i} \) denotes a word token in the utterance and \( \mathrm{len}(u_n) \) the number of tokens in \( u_n \). There are n-1 user utterances and n-1 system replies in the historical dialogue, represented as formula (2):

\[ D = \{ u_1, s_1, u_2, s_2, \cdots, u_{n-1}, s_{n-1} \} \quad (2) \]

In multi-turn dialogues, the important question is how to effectively use the context information in the conversation to track the current state. In order to obtain the related information between the current user utterance and the historical utterances, we need to build some relationship between them. Therefore, we have designed a concatenation method, detailed in Section 2.1.

2.1 Concatenation Method

We concatenate the current user utterance \( u_n \) with each utterance in the historical turns \( (u_1, s_1), (u_2, s_2), \ldots, (u_{n-1}, s_{n-1}) \) to form a new sentence-pair vector C. For instance, the concatenation of \( (s_{n-1}, u_n) \) is expressed as follows:

\[ C_{s_{n-1},u_n} = \{ [\mathrm{CLS}], w_{s_{n-1}}^{1}, \ldots, w_{s_{n-1}}^{\mathrm{len}(s_{n-1})}, [\mathrm{SEP}], w_{u_n}^{1}, \ldots, w_{u_n}^{\mathrm{len}(u_n)}, [\mathrm{SEP}], [\mathrm{PAD}], \ldots, [\mathrm{PAD}] \} \quad (3) \]

\[ B_{s_{n-1},u_n} = \{ \underbrace{0, \ldots, 0}_{\mathrm{len}(s_{n-1})+2}, \underbrace{1, \ldots, 1}_{\mathrm{len}(u_n)}, 0, \ldots, 0 \} \quad (4) \]

Here [CLS], [SEP] and [PAD] are special tags in BERT inputs: [CLS] represents the beginning of a sequence, [SEP] is a separator between two sequences, and [PAD] is used to pad all sequences to the same length. In order to facilitate the computation of the model, we pad every sequence to a fixed length, set to 64 in our experiments. We then generate a Boolean vector B, shown in formulas (4) and (6), that indicates which tokens come from the current user utterance; in our example it contains \( \mathrm{len}(u_n) \) ones and \( 64 - \mathrm{len}(u_n) \) zeros.

In a real application scenario, the user's current utterance may be not only a response to the last system utterance, but also a response to an earlier system utterance or a supplement to a previous user utterance. Therefore, in order to obtain a better model, we take all historical utterances from \( u_1 \) to \( s_{n-1} \) into account, rather than concatenating \( u_n \) only with the last system utterance \( s_{n-1} \). For a generic historical utterance \( x \in \{u_1, s_1, \ldots, u_{n-1}, s_{n-1}\} \):

\[ C_{x,u_n} = \{ [\mathrm{CLS}], w_{x}^{1}, \ldots, w_{x}^{\mathrm{len}(x)}, [\mathrm{SEP}], w_{u_n}^{1}, \ldots, w_{u_n}^{\mathrm{len}(u_n)}, [\mathrm{SEP}], [\mathrm{PAD}], \ldots, [\mathrm{PAD}] \} \quad (5) \]

\[ B_{x,u_n} = \{ \underbrace{0, \ldots, 0}_{\mathrm{len}(x)+2}, \underbrace{1, \ldots, 1}_{\mathrm{len}(u_n)}, 0, \ldots, 0 \} \quad (6) \]

In this way, the pairs shown in formula (7) are obtained one by one, giving \( 2(n-1) \) pairs in total:

\[ \mathrm{inputs} = \{ (C_{u_1,u_n}, B_{u_1,u_n}), (C_{s_1,u_n}, B_{s_1,u_n}), \ldots, (C_{s_{n-1},u_n}, B_{s_{n-1},u_n}) \} \quad (7) \]

We then use two types of RNN to further process the concatenated utterance pairs. The first one is used to encode the relationship between words within a single utterance pair, called the tokens-level RNN; the second one is used to integrate information across all utterance pairs, called the utterances-level RNN.
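A minimal sketch of the concatenation step. The paper cites the pytorch-pretrained-BERT package (footnote 1); the sketch below uses the current transformers package instead, and the helper names (build_pair, MAX_LEN) are mine, not the paper's. Truncation of over-long pairs is omitted for brevity.

```python
from transformers import BertTokenizer

MAX_LEN = 64  # padded sequence length used in the paper's experiments

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

def build_pair(history_utt, current_utt):
    """Build one (C, B) pair: [CLS] history [SEP] current [SEP] [PAD]...,
    with B marking which positions belong to the current user utterance."""
    h = tokenizer.tokenize(history_utt)
    u = tokenizer.tokenize(current_utt)
    tokens = ['[CLS]'] + h + ['[SEP]'] + u + ['[SEP]']
    B = [0] * (len(h) + 2) + [1] * len(u) + [0]     # ones cover current tokens
    tokens += ['[PAD]'] * (MAX_LEN - len(tokens))   # pad to the fixed length
    B += [0] * (MAX_LEN - len(B))
    C = tokenizer.convert_tokens_to_ids(tokens)
    return C, B

# One pair per historical utterance, 2*(n-1) pairs in total (formula (7)).
history = ['i need to make a reservation for 6 pm',
           'what date and restaurant would you like?']
current = 'next monday at il fornaio'
inputs = [build_pair(h, current) for h in history]
```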
2.2 RNN over Utterance Tokens

In order to capture the relationship between the word tokens in the dialogue utterances, we feed the pre-trained BERT model with the concatenated pairs (C, B). The outputs are represented by H, that is, k vectors of dimension 64*768, as in formula (8), where k = 2(n-1) is the number of historical utterance pairs, 64 is the sequence length after padding, and 768 is the default hidden size of BERT's outputs:

\[ H_i = \mathrm{BERT}(C_i, B_i), \quad i = 1, \ldots, k \quad (8) \]

We use the BASE version of the pre-trained BERT in our tests, and the parameters of the BERT model are kept fixed during training, for reasons of computing speed. The outputs of the BERT model are fed into the tokens-level RNN, a BiGRU model, for fine-tuning. The encoding is given by formula (9):

\[ (o_{i,1}^{f}, \ldots, o_{i,l}^{f}, o_{i,1}^{b}, \ldots, o_{i,l}^{b}) = \mathrm{BiGRU}_t(H_i), \quad i = 1, \ldots, k \quad (9) \]

In the above equations, each \( o^{f} \) is a hidden state of the forward pass of the BiGRU and each \( o^{b} \) a hidden state of the backward pass; in our experiments the hidden size is set to 64, and \( l \), the input sequence length after padding, is also 64.
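A PyTorch sketch of the token-level encoding under the same assumptions (frozen bert-base, hidden size 64); the module name is mine, and the modern transformers API is used in place of pytorch-pretrained-BERT.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TokenLevelEncoder(nn.Module):
    """Frozen BERT followed by the tokens-level BiGRU (formulas (8)-(9))."""
    def __init__(self, hidden=64):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        for p in self.bert.parameters():   # BERT weights are fixed during training
            p.requires_grad = False
        self.bigru = nn.GRU(768, hidden, bidirectional=True, batch_first=True)

    def forward(self, C, B):
        # C, B: (k, 64) token ids and current-utterance masks for the k pairs.
        with torch.no_grad():
            H = self.bert(input_ids=C, token_type_ids=B).last_hidden_state  # (k, 64, 768)
        O, _ = self.bigru(H)   # (k, 64, 2*hidden): forward/backward states concatenated
        return O
```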
2.3 RNN over Utterance Context

When parsing the concatenated utterance pairs, we consider formula (10):

\[ h_i = f\big( B_i \times [\, o_{i,1}^{f} \oplus o_{i,1}^{b}; \; \ldots; \; o_{i,l}^{f} \oplus o_{i,l}^{b} \,] \big), \quad i = 1, \ldots, k \quad (10) \]

where \( B_i \) is the 64*64 diagonal matrix generated from the Boolean vector of formula (6), and \( \oplus \) is the concatenation operator, so each concatenated row vector has size 128 (the hidden size being 64). The vectors wrapped in brackets make up a matrix whose rows are those concatenated vectors; in our experiments the concatenated matrix has size 64*128. \( f \) is the selection operator used to fetch all non-zero row vectors from a matrix. It is easy to see that the number of non-zero row vectors of each matrix equals the number of non-zero elements on the diagonal of \( B_i \), which is \( \mathrm{len}(u_n) \), the number of tokens in the utterance \( u_n \); the fetching operation therefore yields \( \mathrm{len}(u_n) \) row vectors. Each of these vectors represents the embedding of a specific word in the current user utterance, with attention information.

The outputs of formula (10) are then used as inputs to the utterances-level RNN to encode the slot tag of each token in the user utterance. The prediction of intents and acts needs different context information from that of slots, so the utterances-level RNN used for the prediction of intents and acts does not share parameters with the one used for slots.

In each row of formula (9), the last hidden states of both directions are concatenated to encode the whole utterance pair. In this way we obtain the vectors \( O_i \; (i = 1, \ldots, k) \) for full-text understanding, shown in formula (11):

\[ O_i = o_{i,l}^{f} \oplus o_{i,1}^{b}, \quad i = 1, \ldots, k \quad (11) \]

We then take the vectors \( O_1 \) to \( O_k \) as inputs to the utterances-level RNN and take its last hidden layer as the context embedding, expressed in formula (12):

\[ G = \mathrm{GRU}_g(O_1, O_2, \ldots, O_k) \quad (12) \]

Note that in formula (12) the output G is the last hidden layer of \( \mathrm{GRU}_g \); it drives the classification of intents and acts. For the prediction of slots, the selected token vectors are used as input to the utterances-level RNN, given by formula (13):

\[ S_j = \mathrm{GRU}_s(h_1^{j}, h_2^{j}, \ldots, h_{k-1}^{j}, h_k^{j}) \quad (13) \]

In formula (13), \( S_j \) is the last hidden layer of \( \mathrm{GRU}_s \); its output carries the information for the slot prediction of the j-th token in the current user utterance, where \( 1 \leq j \leq \mathrm{len}(u_n) \). Thus, the named-entity information of each word in the current utterance can be obtained with attention over the dialogue context, as shown in formula (14):

\[ S = \{ S_1, S_2, \ldots, S_{\mathrm{len}(u_n)} \} \quad (14) \]
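Continuing the sketch, the selection of current-utterance rows and the two utterance-level GRUs might look as follows. The module name is invented, and the sketch assumes every pair's mask selects the same number of positions (true here, since the same \( u_n \) appears in every pair).

```python
class ContextEncoder(nn.Module):
    """Utterance-level encoding: G for intents/acts (formulas (11)-(12))
    and per-token S_j for slots (formulas (10), (13)-(14))."""
    def __init__(self, hidden=64):
        super().__init__()
        self.gru_g = nn.GRU(2 * hidden, hidden, batch_first=True)  # over O_1..O_k
        self.gru_s = nn.GRU(2 * hidden, hidden, batch_first=True)  # over h_1^j..h_k^j

    def forward(self, O, B):
        # O: (k, 64, 2*hidden) token-level BiGRU outputs; B: (k, 64) masks.
        k, hid = O.size(0), O.size(-1) // 2
        # Formulas (11)-(12): summarize each pair by its final fwd/bwd states,
        # then run GRU_g over the k summaries; its last hidden state is G.
        pair_vecs = torch.cat([O[:, -1, :hid], O[:, 0, hid:]], dim=-1)  # (k, 2*hid)
        _, G = self.gru_g(pair_vecs.unsqueeze(0))        # G: (1, 1, hid)
        # Formula (10): keep only the rows where B == 1 (current-utterance tokens).
        sel = O[B.bool()].view(k, -1, O.size(-1))        # (k, len(u_n), 2*hid)
        # Formula (13): for each token j, run GRU_s over its k context rows.
        _, S = self.gru_s(sel.transpose(0, 1))           # S: (1, len(u_n), hid)
        return G.squeeze(0), S.squeeze(0)                # (1, hid), (len(u_n), hid)
```

The boolean-mask indexing plays the role of the diagonal matrix \( B_i \) followed by the selection operator \( f \): both simply pick out the rows belonging to the current utterance.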
The G and S obtained above are used for the determination of intents, acts and slots, computed as in formulas (15), (16) and (17), the same as in MemNet and SDEN:

\[ P^{\mathrm{Intent}} = \mathrm{Softmax}(U G) \quad (15) \]

\[ P^{\mathrm{Act}} = \mathrm{Sigmoid}(V G) \quad (16) \]

Inspired by memory networks and SDEN, we take the value of S as the input of another BiLSTM model and take the value of G as the initial hidden layer h(0) of this model; we then put its output layer through a Softmax layer to get the named-entity prediction of each word in the current utterance:

\[ P_i^{\mathrm{Slot}} = \mathrm{Softmax}(\mathrm{BiLSTM}(S \mid G)_i) \quad (17) \]

In formula (17), \( P_i^{\mathrm{Slot}} \) is the estimated probability vector of the i-th word in the current utterance; each element of the vector represents the probability that the i-th word belongs to the corresponding entity category. In the expression \( \mathrm{BiLSTM}(S \mid G)_i \), the subscript i refers to the i-th hidden layer of the BiLSTM taken as output, and the parameter G from formula (12) is the initial hidden layer of the BiLSTM.

We compute a cross-entropy loss for each sub-task, take their sum as the total loss, and optimize our model based on this total loss.
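The three prediction heads and the summed loss could be sketched as follows. The class counts and the projection of G into the BiLSTM's initial state are assumptions of the sketch (the paper does not spell out how G is shaped into h(0)).

```python
class Heads(nn.Module):
    """Prediction heads over G and S (formulas (15)-(17))."""
    def __init__(self, hidden=64, n_intents=3, n_acts=8, n_slots=15):
        super().__init__()
        self.U = nn.Linear(hidden, n_intents)
        self.V = nn.Linear(hidden, n_acts)
        self.init_h = nn.Linear(hidden, hidden)          # shape G into the BiLSTM's h(0)
        self.bilstm = nn.LSTM(hidden, hidden // 2, bidirectional=True, batch_first=True)
        self.slot_out = nn.Linear(hidden, n_slots)

    def forward(self, G, S):
        intent_logits = self.U(G)                        # formula (15); softmax in the loss
        act_logits = self.V(G)                           # formula (16); sigmoid: multi-label
        h0 = self.init_h(G).view(2, 1, -1)               # two directions, batch size 1
        c0 = torch.zeros_like(h0)
        out, _ = self.bilstm(S.unsqueeze(0), (h0, c0))   # S as the sequence, G as h(0)
        slot_logits = self.slot_out(out.squeeze(0))      # formula (17); one row per token
        return intent_logits, act_logits, slot_logits

def total_loss(intent_logits, act_logits, slot_logits, intent_y, act_y, slot_y):
    # Joint training: one cross-entropy per sub-task, summed (as in the paper);
    # acts are multi-label, hence the binary cross-entropy variant.
    return (nn.functional.cross_entropy(intent_logits, intent_y)
            + nn.functional.binary_cross_entropy_with_logits(act_logits, act_y)
            + nn.functional.cross_entropy(slot_logits, slot_y))
```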
3 EXPERIMENTS

In order to verify the effectiveness of the model, we train and test on the datasets Sim-R and Sim-M, the same as those used by MemNet and SDEN.

3.1 Dataset

The datasets Sim-R and Sim-M (Shah P et al., 2018) are widely used in context-based joint recognition of intents, acts and slots. (2) Sim-R is a multi-turn conversation dataset for the restaurant domain; its training set contains 1116 dialogues with 11234 interactions. Sim-M is a multi-turn conversation dataset for the movie domain; its training set contains 384 dialogues with 3562 interactions. Table 1 gives an overview of the two datasets. We combine the two training sets into one for training, and then test the model on each individual test set as well as on the combined test set. When a user utterance appears at the beginning of a dialogue without an act label, the corresponding act label is set to "OTHER".

(2) The datasets can be downloaded from http://github.com/google-research-datasets/simulated-dialogue

3.2 Baselines

As benchmarks, we use a 2-layer RNN model that does not consider context information, as well as MemNet and SDEN. In order to study the effectiveness of each component of the proposed model MSDU, we carried out specific ablation tests: in the first case, we remove both the BERT module and the concatenation process; in the second case, the BERT module is replaced by randomly initialized word embeddings; in the third case, we do not concatenate the sentences for the BERT module; and finally, we use the concatenation of sentences together with the BERT module. We also used a CRF module on top of MSDU for slot recognition. In the following, we explain the different models used for comparison in more detail.

**NoContext:** Disregarding dialogue information from context, this model's structure is the same as the current-utterance processing module in MemNet and SDEN. A two-layer RNN structure consisting of one GRU and one LSTM is adopted. The difference is that the initial hidden layer of the LSTM is an all-zero vector, so it does not contain any information about the context.

**MemNet:** The attention scores of the current sentence vector and the historical sentence vectors are calculated based on cosine similarity; the context vector is the sum of the historical sentence vectors weighted by their attention scores. The current-sentence processing module is the same as in NoContext, and the word embedding is randomly initialized.

**MemNet+FastText:** The MemNet model with the word embedding matrix initialized with 300-dimensional pre-trained word embeddings from FastText (E. Grave, P. Bojanowski et al., 2018).

**SDEN:** A modification of MemNet with randomly initialized word embeddings. It calculates the attention between the current utterance vector and the historical ones using a linear fully connected layer, and feeds the attention vectors into a GRU model to obtain the context vector.

**SDEN+FastText:** The SDEN model with the word embedding matrix initialized with 300-dimensional pre-trained FastText word vectors.

**MSDU-BERT-Concat:** This MSDU variant uses neither the BERT module nor the concatenation process; it uses a hierarchical GRU to encode the context memory.

**MSDU-BERT:** In this variant, a randomly initialized 300-dimensional word embedding matrix is used instead of the BERT module.

**MSDU-Concat:** No concatenation of dialogue utterances is used in this MSDU variant. Each historical utterance is embedded into a 300-dimensional vector using BERT-GRU; all these vectors are then fed into a GRU to obtain a single 300-dimensional context-memory vector, which is used as the first hidden state of the GRU that processes the current utterance.

**MSDU+CRF:** MSDU where, in the prediction step, the Viterbi algorithm is used to find the tag sequence with the highest probability.

3.3 Training and Evaluation

The hyperparameters of all models are set as follows:

- Batch size: 64
- Dropout ratio: 0.3
- Word embedding size: 64
- Hidden size for sentence encoding: 64

Table 1: Profile of the datasets used in the experiments, with the values of intents, acts and slots, and the number of dialogues (a dialogue may include many turns of interaction between user and system).

<table>
<thead>
<tr> <th>Dataset</th> <th>Intents</th> <th>Acts</th> <th>Slots</th> <th>No. Train</th> <th>No. Dev</th> <th>No. Test</th> </tr>
</thead>
<tbody>
<tr> <td>Sim-R</td> <td>FIND_RESTAURANT, RESERVE_RESTAURANT</td> <td>THANK_YOU, INFORM, AFFIRM, CANT_UNDERSTAND, REQUEST_ALTS, NEGATE, GOOD_BYE, OTHER</td> <td>price_range, location, restaurant_name, category, num_people, date, time</td> <td>1116</td> <td>349</td> <td>775</td> </tr>
<tr> <td>Sim-M</td> <td>BUY_MOVIE_TICKETS</td> <td>OTHER, GREETING, GOOD_BYE, CANT_UNDERSTAND, THANK_YOU, NEGATE, AFFIRM, INFORM</td> <td>theatre_name, movie, date, time, num_people</td> <td>384</td> <td>120</td> <td>264</td> </tr>
</tbody>
</table>

Table 2: SLU results on the test sets with the baselines and MSDU, when trained on Sim-M + Sim-R. "Overall" means the test set is Sim-R + Sim-M. Because any of the above models can add a CRF module, we did not consider MSDU+CRF when marking the maximum values in bold.
<table>
<thead>
<tr> <th>Model</th> <th>Intent F1</th> <th>Act F1</th> <th>Slot F1</th> <th>FrameAcc</th> </tr>
</thead>
<tbody>
<tr> <td>Sim-R Sim-M Overall</td> <td>82.04</td> <td>99.82</td> <td>99.88</td> <td>99.80</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>68.47</td> <td>99.63</td> <td>99.93</td> <td>99.93</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>78.33</td> <td>99.56</td> <td>99.90</td> <td>99.90</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>88.37</td> <td>99.75</td> <td>99.60</td> <td>99.60</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>88.74</td> <td>99.94</td> <td>99.85</td> <td>99.85</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>88.48</td> <td>99.38</td> <td>99.62</td> <td>99.62</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>97.56</td> <td>97.64</td> <td>97.39</td> <td>97.39</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>94.70</td> <td>94.00</td> <td>94.44</td> <td>94.44</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>96.64</td> <td>96.56</td> <td>96.55</td> <td>96.55</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>71.13</td> <td>86.90</td> <td>83.76</td> <td>83.76</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>45.89</td> <td>65.90</td> <td>67.23</td> <td>67.23</td> </tr>
<tr> <td>Sim-R Sim-M Overall</td> <td>63.96</td> <td>80.88</td> <td>79.13</td> <td>79.13</td> </tr>
</tbody>
</table>

For all models, we use the same Adam optimizer. The initial learning rate is set to 0.001, decreasing to 0.0001 after the 125th epoch and to 0.00001 after the 250th epoch, with betas = (0.9, 0.999) and eps = 1e-8. The model is evaluated on the validation set after every epoch and saved whenever its FrameAcc breaks the historical record; the training process is terminated after 500 epochs. We use the last saved model for testing.
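The optimizer setup translates directly to PyTorch; the MultiStepLR scheduler below is one way to express the stepwise decay (the paper does not name a scheduler, so this pairing is an assumption).

```python
import torch

def make_optimizer(params):
    """Adam with the paper's settings; the LR drops 0.001 -> 0.0001 -> 0.00001
    after epochs 125 and 250 (MultiStepLR with gamma=0.1 reproduces this)."""
    opt = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-8)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[125, 250], gamma=0.1)
    return opt, sched

# Typical loop: step the scheduler once per epoch, keep the best-FrameAcc model.
# for epoch in range(500):
#     train_one_epoch(...); frame_acc = evaluate(...)
#     if frame_acc > best: best = frame_acc; torch.save(model.state_dict(), 'best.pt')
#     sched.step()
```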
For slot tagging, there is no significant performance difference among MemNet, SDEN and NoContext. On the other hand, MSDU and its variant models achieve better results. At the same time, we find that MSDU-Concat performs nearly the same as MSDU on slot recognition, which suggests that the concatenation process contributes little to slot-recognition improvement.

From the test results, we find that the MSDU model achieves about 5% higher FrameAcc than the MemNet and SDEN models. It is interesting to notice that SDEN does not obtain a better result than MemNet, even though the former uses a more complex context-encoding method. MSDU-BERT-Concat and the above two models use the same randomly initialized word embedding; the difference lies mainly in the context encoding. MSDU-BERT-Concat uses a hierarchical GRU to encode context information, which is even simpler than the context-encoding method used by MemNet, yet it obtains about 2% higher FrameAcc than MemNet and SDEN. This casts doubt on the necessity of the attention mechanism for context encoding. From the results of the MSDU variant models, we can also conclude that the concatenation procedure brings about 1.1% improvement, the BERT module brings about 2.7%, and the combination of both brings 3.4%.

4 CONCLUSIONS AND FUTURE WORK

The MSDU model is proposed for recognizing intents, acts and slots using the historical information in a multi-turn spoken dialogue, and it was evaluated on different datasets against several variant modifications. The test results show that the design of the MSDU model is effective and brings an important improvement. In future work, we will study how to apply this architecture to higher-level dialogue understanding tasks, such as ontology-based slot recognition and the alignment of intents, acts and slots. So far we have not addressed the subordinate relationships among intents, acts and slots, which are essential to dialogue understanding.

REFERENCES
{"Source-Url": "https://www.scitepress.org/Papers/2020/91012/91012.pdf", "len_cl100k_base": 7209, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26538, "total-output-tokens": 8941, "length": "2e12", "weborganizer": {"__label__adult": 0.0007662773132324219, "__label__art_design": 0.0011396408081054688, "__label__crime_law": 0.0007357597351074219, "__label__education_jobs": 0.0025310516357421875, "__label__entertainment": 0.001079559326171875, "__label__fashion_beauty": 0.0003075599670410156, "__label__finance_business": 0.0003666877746582031, "__label__food_dining": 0.0006914138793945312, "__label__games": 0.001918792724609375, "__label__hardware": 0.00211334228515625, "__label__health": 0.0009326934814453124, "__label__history": 0.00042176246643066406, "__label__home_hobbies": 9.822845458984376e-05, "__label__industrial": 0.0007452964782714844, "__label__literature": 0.003711700439453125, "__label__politics": 0.0005898475646972656, "__label__religion": 0.0008258819580078125, "__label__science_tech": 0.3056640625, "__label__social_life": 0.0002627372741699219, "__label__software": 0.046600341796875, "__label__software_dev": 0.62744140625, "__label__sports_fitness": 0.00038743019104003906, "__label__transportation": 0.0006265640258789062, "__label__travel": 0.00023984909057617188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29735, 0.04226]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29735, 0.33514]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29735, 0.83075]], "google_gemma-3-12b-it_contains_pii": [[0, 3260, false], [3260, 5555, null], [5555, 9796, null], [9796, 15012, null], [15012, 19188, null], [19188, 22190, null], [22190, 27261, null], [27261, 29735, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3260, true], [3260, 5555, null], [5555, 9796, null], [9796, 15012, null], [15012, 19188, null], [19188, 22190, null], [22190, 27261, null], [27261, 29735, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29735, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29735, null]], "pdf_page_numbers": [[0, 3260, 1], [3260, 5555, 2], [5555, 9796, 3], [9796, 15012, 4], [15012, 19188, 5], [19188, 22190, 6], [22190, 27261, 7], [27261, 29735, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29735, 0.09626]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
a70a484c6ef53ae1cc473ffa2e79a86de7994a52
ECE/CS 757: Advanced Computer Architecture II
Instructor: Mikko H Lipasti, Spring 2017, University of Wisconsin-Madison
Lecture notes based on slides created by John Shen, Mark Hill, David Wood, Guri Sohi, Jim Smith, Natalie Enright Jerger, Michel Dubois, Murali Annavaram, Per Stenström and probably others

Lecture Outline
• Introduction to Parallel Software – Sources of parallelism – Expressing parallelism
• Programming Models
• Major Abstractions – Processes & threads – Communication – Synchronization
• Shared Memory – API description – Implementation at ABI, ISA levels – ISA support
• Message Passing – API description – Implementation at ABI, ISA levels – ISA support

Parallel Software
• Why is it so hard? – Conscious mind is inherently sequential – (sub-conscious mind is extremely parallel)
• Identifying parallelism in the problem
• Expressing parallelism to the hardware
• Effectively utilizing parallel hardware – Balancing work – Coordinating work
• Debugging parallel algorithms

Finding Parallelism
1. Functional parallelism – Car: {engine, brakes, entertain, nav, ...} – Game: {physics, logic, UI, render, ...} – Signal processing: {transform, filter, scaling, ...}
2. Automatic extraction – Decompose serial programs
3. Data parallelism – Vector, matrix, db table, pixels, ...
4. Request parallelism – Web, shared database, telephony, ...

1. Functional Parallelism
• Relatively easy to identify and utilize
• Provides small-scale parallelism – 3x-10x
• Balancing stages/functions is difficult

2. Automatic Extraction
• Works well for certain application types – Regular control flow and memory accesses
• Difficult to guarantee correctness in all cases – Ambiguous memory dependences – Requires speculation, support for recovery
• Degree of parallelism – Large (1000x) for easy cases – Small (3x-10x) for difficult cases

3. Data Parallelism
- Vector, matrix, db table, pixels, web pages, ...
- Large data => significant parallelism
- Many ways to express parallelism – Vector/SIMD – Threads, processes, shared memory – Message passing
- Challenges: – Balancing & coordinating work – Communication vs. computation at scale

4. Request Parallelism
- Multiple users => significant parallelism
- Challenges – Synchronization, communication, balancing work

Balancing Work
• Amdahl's parallel phase f: all processors busy
• If not perfectly balanced – (1-f) term grows (f not fully parallel) – Performance scaling suffers – Manageable for data & request parallel apps – Very difficult problem for the other two: • Functional parallelism • Automatically extracted

Coordinating Work
• Synchronization – Some data somewhere is shared – Coordinate/order updates and reads – Otherwise → chaos
• Traditionally: locks and mutual exclusion – Hard to get right, even harder to tune for perf.
• Research to reality: Transactional Memory – Programmer: declare potential conflict – Hardware and/or software: speculate & check – Commit or roll back and retry – IBM, Intel, others now support in HW

Expressing Parallelism
• SIMD – introduced by the Cray-1 vector supercomputer – MMX, SSE/SSE2/SSE3/SSE4, AVX at small scale
• SPMD or SIMT – GPGPU model (later) – All processors execute the same program on disjoint data – Loose synchronization vs. rigid lockstep of SIMD
• MIMD – most general (this lecture) – Each processor executes its own program
• Expressed through standard interfaces – API, ABI, ISA

MP Interfaces
• *Levels of abstraction* enable complex system designs (such as MP computers)
• Fairly natural extensions of the uniprocessor model – Historical evolution

Programming Models
• High-level paradigm for expressing an algorithm – Examples: • Functional • Sequential, procedural • Shared memory • Message passing
• Embodied in high-level languages that support concurrent execution – Incorporated into HLL constructs – Incorporated as libraries added to an existing sequential language
• Top-level features: – For conventional models – shared memory, message passing – Multiple threads are conceptually visible to the programmer – Communication/synchronization are visible to the programmer

Application Programming Interface (API)
• Interface where the HLL programmer works
• High-level language plus libraries – Individual libraries are sometimes referred to as an "API"
• User-level runtime software is often part of the API implementation – Executes procedures – Manages user-level state
• Examples: – C and pthreads – FORTRAN and MPI

Application Binary Interface (ABI)
• A program in the API is compiled to the ABI
• Consists of: – OS call interface – User-level instructions (part of ISA)

Instruction Set Architecture (ISA)
- Interface between hardware and software – what the hardware implements
- Architected state – Registers – Memory architecture
- All instructions – May include parallel (SIMD) operations – Both non-privileged and privileged
- Exceptions (traps, interrupts)

Programming Model Elements
• For both Shared Memory and Message Passing
• Processes and threads – **Process**: a shared address space and one or more threads of control – **Thread**: a program sequencer and private address space – **Task**: less formal term – part of an overall job – Created, terminated, scheduled, etc.
• Communication – Passing of data
• Synchronization – Communicating control information – To assure reliable, deterministic communication

sub-Outline
• Shared Memory Model – API-level Processes, Threads – API-level Communication – API-level Synchronization
• Shared Memory Implementation – Implementing Processes, Threads at ABI/ISA levels – Implementing Communication at ABI/ISA levels – Implementing Synchronization at ABI/ISA levels
In order of decreasing complexity: synchronization, processes & threads, communication
• Repeat the above for Message Passing

Shared Memory
- Flat shared memory or object heap
- Synchronization via memory variables enables reliable sharing
- Single process
- Multiple threads per process
- Private memory per thread
- Typically built on a shared memory hardware system

Threads and Processes
• Creation – generic -- Fork • (Unix forks a process, not a thread) – pthread_create(....*thread_function....) • creates a new thread in the current address space
• Termination – pthread_exit • or terminates when thread_function terminates – pthread_kill • one thread can kill another

Example - Unix process with two threads (PC and stack pointer actually part of ABI/ISA implementation)
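As a concrete illustration of the API level just described, the following is a minimal sketch, not from the original slides, of creating and joining a thread with the pthread calls named above; the thread function and its argument are arbitrary names chosen for the example:

```c
#include <pthread.h>
#include <stdio.h>

/* Arbitrary thread function: runs in the new thread's context,
   sharing the process address space with its creator. */
static void *thread_function(void *arg) {
    int id = *(int *)arg;
    printf("hello from thread %d\n", id);
    return NULL;              /* equivalent to pthread_exit(NULL) */
}

int main(void) {
    pthread_t tid;
    int id = 1;
    /* Creation: the new thread starts executing thread_function(&id). */
    if (pthread_create(&tid, NULL, thread_function, &id) != 0)
        return 1;
    /* Rendezvous at a join point: wait for the thread to terminate. */
    pthread_join(tid, NULL);
    return 0;
}
```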
Shared Memory Communication
- Reads and writes to shared variables via normal language (assignment) statements, which compile to ordinary loads and stores

Shared Memory Synchronization
• What really gives shared memory programming its structure
• Usually explicit in the shared memory model – Through language constructs or API
• Three major classes of synchronization – Mutual exclusion (mutex) – Point-to-point synchronization – Rendezvous
• Employed by application design patterns – A general description or template for the solution to a commonly recurring software design problem

Mutual Exclusion (mutex)
• Assures that only one thread at a time can access a code or data region
• Usually done via *locks* – One thread acquires the lock – All other threads are excluded until the lock is released
• Examples – pthread_mutex_lock – pthread_mutex_unlock
• Two main application programming patterns – Code locking – Data locking

Code Locking
- Protect shared data by locking the code that accesses it
- Also called a *monitor* pattern
- Example of a *critical section*

```c
update(args)
mutex code_lock;
...
lock(code_lock);
<read data1>
<modify data>
<write data2>
unlock(code_lock);
...
return;
```

Data Locking
- Protect shared data by locking the data structure itself
- Preferred when data structures are read/written in combinations
- Example:

```plaintext
<thread 0>              <thread 1>              <thread 2>
Lock (mutex_struct1)    Lock (mutex_struct1)    Lock (mutex_struct2)
Lock (mutex_struct2)    Lock (mutex_struct3)    Lock (mutex_struct3)
<access struct1>        <access struct1>        <access struct2>
<access struct2>        <access struct3>        <access struct3>
Unlock (mutex_struct1)  Unlock (mutex_struct1)  Unlock (mutex_struct2)
Unlock (mutex_struct2)  Unlock (mutex_struct3)  Unlock (mutex_struct3)
```

Deadlock
• Data locking is prone to deadlock – If locks are acquired in an unsafe order
• Example:

```plaintext
<thread 0>              <thread 1>
Lock (mutex_data1)      Lock (mutex_data2)
Lock (mutex_data2)      Lock (mutex_data1)
<access data1>          <access data1>
<access data2>          <access data2>
Unlock (mutex_data1)    Unlock (mutex_data1)
Unlock (mutex_data2)    Unlock (mutex_data2)
```

• Complexity – A disciplined locking order must be maintained, else deadlock – Also, composability problems • Locking structures in a nest of called procedures

Efficiency
• Lock contention – Causes threads to wait
• Function of lock *granularity* – Size of the data structure or code that is being locked
• Extreme case: – "One big lock" model for multithreaded OSes – Easy to implement, but very inefficient
• Finer granularity + Less contention - More locks, more locking code - perhaps more deadlock opportunities
• Coarser granularity – opposite +/- of above

Point-to-Point Synchronization
• One thread signals another that a condition holds – Can be done via API routines – Can be done via normal loads/stores
• Examples – pthread_cond_signal – pthread_cond_wait • suspends the thread if the condition is not true
• Application program pattern – Producer/Consumer (busy-waiting version; a condition-variable sketch follows below)

```c
<Producer>                          <Consumer>
while (full == 1) {}; /* wait */    while (full == 0) {}; /* wait */
buffer = value;                     b = buffer;
full = 1;                           full = 0;
```

Rendezvous
- Two or more cooperating threads must reach a program point before proceeding
- Examples – wait for another thread at a join point before proceeding • example: pthread_join – barrier synchronization • many (or all) threads wait at a given point
- Application program pattern – Bulk synchronous programming pattern

Bulk Synchronous Program Pattern: Thread 1, Thread 2, ..., Thread N alternate Compute and Communicate phases, separated by Barriers.
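The busy-waiting producer/consumer above can be recast with the pthread condition-variable routines named on the slide. This is a minimal one-slot sketch, not from the original slides; the variable and function names are illustrative:

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int buffer, full = 0;

void produce(int value) {
    pthread_mutex_lock(&m);
    while (full == 1)                 /* instead of spinning, ...   */
        pthread_cond_wait(&c, &m);    /* ... suspend until signaled */
    buffer = value;
    full = 1;
    pthread_cond_signal(&c);          /* wake a waiting consumer */
    pthread_mutex_unlock(&m);
}

int consume(void) {
    pthread_mutex_lock(&m);
    while (full == 0)
        pthread_cond_wait(&c, &m);
    int b = buffer;
    full = 0;
    pthread_cond_signal(&c);          /* wake a waiting producer */
    pthread_mutex_unlock(&m);
    return b;
}
```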
Summary: Synchronization and Patterns
• mutex (mutual exclusion) – code locking (monitors) – data locking
• point-to-point – producer/consumer
• rendezvous – bulk synchronous

sub-Outline
• Shared Memory Model – API-level Processes, Threads – API-level Communication – API-level Synchronization
• Shared Memory Implementation – Implementing Processes, Threads at ABI/ISA levels – Implementing Communication at ABI/ISA levels – Implementing Synchronization at ABI/ISA levels
In order of decreasing complexity: synchronization, processes & threads, communication
• Repeat the above for Message Passing

API Implementation
- Implemented at the ABI and ISA level – OS calls – Runtime software – Special instructions

Processes and Threads
• Three models – OS processes – OS threads – User threads

OS Processes
• Thread == Process
• Use OS fork to create processes
• Use OS calls to set up a shared address space (e.g. shmget)
• OS manages processes (and threads) via the run queue
• Heavyweight thread switches – OS call followed by: – Switch address mappings – Switch process-related tables – Full register switch
• Advantage – Threads have protected private memory

OS (Kernel) Threads
• The API pthread_create() maps to Linux clone() – Allows multiple threads sharing a memory address space
• OS manages threads via the run queue
• Lighter-weight thread switch – Still requires an OS call – No need to switch address mappings – OS switches architected register state and stack pointer

User Threads
- If the memory mapping doesn't change, why involve the OS at all?
- Runtime creates threads simply by allocating stack space
- Runtime switches threads via user-level instructions – thread switch via jumps

Implementing User Threads
- Multiple kernel threads are needed to get control of multiple hardware processors – Create kernel threads (OS schedules) – Create user threads that the runtime schedules onto the kernel threads
- (Figure: user threads in a user thread queue are mapped by the runtime scheduler onto kernel threads, which the OS scheduler maps onto Processors 1..N)

Communication
• *Easy*: just map high-level accesses to variables to ISA-level loads and stores
• *Except for*: ordering of memory accesses -- later

Synchronization
• Implement locks and rendezvous (barriers)
• First attempt: use ordinary loads and stores to implement a lock

```plaintext
<thread 0>                        <thread 1>
LAB1: Load   R1, Lock             LAB2: Load   R1, Lock
      Branch LAB1 if R1==1              Branch LAB2 if R1==1
      Ldi    R1, 1                      Ldi    R1, 1
      Store  Lock, R1                   Store  Lock, R1
      <critical section>                <critical section>
      Ldi    R1, 0                      Ldi    R1, 0
      Store  Lock, R1                   Store  Lock, R1
```

• *Does not work*
• Violates mutual exclusion if both threads attempt to lock at the same time: both can load 0 before either stores 1 – In practice, it may work *most* of the time... – Leading to an unexplainable system hang every few days

Lock Implementation
• Reliable locking can be done with an *atomic* read-modify-write instruction
• Example: test&set – read the lock and write a one – some ISAs also set condition codes (the "test")

```plaintext
<thread 0>                        <thread 1>
LAB1: Test&Set R1, Lock           LAB2: Test&Set R1, Lock
      Branch LAB1 if R1==1              Branch LAB2 if R1==1
      <critical section>                <critical section>
      Reset Lock                        Reset Lock
```
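In C11, the same test&set idea can be expressed portably with atomic_exchange. This is a minimal spin-lock sketch, not from the original slides; the function names are illustrative:

```c
#include <stdatomic.h>

static atomic_int lock_word = 0;

void acquire(atomic_int *l) {
    /* test&set: atomically write 1 and examine the old value;
       if the old value was 1, someone else holds the lock, so spin. */
    while (atomic_exchange(l, 1) == 1)
        ;
}

void release(atomic_int *l) {
    /* reset the lock word */
    atomic_store(l, 0);
}

/* usage: acquire(&lock_word); <critical section>; release(&lock_word); */
```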
Atomic Read-Modify-Write
- Many such instructions have been used in ISAs:

```plaintext
Test&Set(reg, lock):      Fetch&Add(reg, value, sum):        Swap(reg, opnd):
  reg ← mem(lock);          reg ← mem(sum);                    temp ← mem(opnd);
  mem(lock) ← 1;            mem(sum) ← mem(sum) + value;       mem(opnd) ← reg;
                                                               reg ← temp;
```

- More-or-less equivalent – one can be used to implement the others
- Implement Fetch&Add with Test&Set:

```plaintext
try: Test&Set(lock);
     if lock == 1 go to try;
     reg ← mem(sum);
     mem(sum) ← reg + value;
     reset(lock);
```

Sub-Atomic Locks
- Use two instructions: Load linked + Store conditional
- Load linked – reads the memory value – sets a special flag – writes the address to a special global address register
- The flag is cleared on operations that may violate atomicity – (implementation-dependent) – e.g., a write to the address by another processor • can use cache coherence mechanisms (later) – context switch
- Store conditional – writes the value if the flag is set – no-op if the flag is clear – sets a condition code indicating success or failure

Load-Linked Store-Conditional
• Example: atomic swap (r4, mem(r1))

```plaintext
try: mov  r3, r4      ;move exchange value
     ll   r2, 0(r1)   ;load linked
     sc   r3, 0(r1)   ;store conditional
     beqz r3, try     ;retry if store fails
     mov  r4, r2      ;load value to r4
```

• RISC-style implementation – Like many early RISC ideas, it seemed like a good idea at the time... register windows, delayed branches, special divide regs, etc.
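On hardware without a fetch&add instruction, the same retry structure as the ll/sc loop above can be written with a compare-and-swap loop. The following is a sketch, not from the original slides, using C11's atomic_compare_exchange_weak to build Fetch&Add:

```c
#include <stdatomic.h>

/* Fetch&Add built from a CAS retry loop: read the old value, try to
   publish old + value, and retry if another thread intervened; this
   mirrors the retry structure of the ll/sc example above. */
int fetch_and_add(atomic_int *sum, int value) {
    int old = atomic_load(sum);
    while (!atomic_compare_exchange_weak(sum, &old, old + value))
        ;  /* on failure, 'old' is refreshed with the current value */
    return old;
}
```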
Lock Efficiency
• Spin locks – tight loop until the lock is acquired

```plaintext
LAB1: Test&Set R1, Lock
      Branch LAB1 if R1==1
```

• Inefficiencies: – Memory/interconnect resources: spinning generates reads and writes – With cache-based systems, the writes ⇒ lots of coherence traffic – Processor resource • not executing useful instructions

Efficient Lock Implementations
- **Test&Test&Set** – spin on a check for unlock only, then try to lock – with cache systems, all the reads can be satisfied locally – no bus or external memory resources used

```plaintext
test_it: load     reg, mem(lock)
         branch   test_it if reg==1
lock_it: test&set reg, mem(lock)
         branch   test_it if reg==1
```

- **Test&Set with Backoff** – Insert a delay between test&set operations (not too long) – Each failed attempt ⇒ longer delay (like Ethernet collision avoidance)

Efficient Lock Implementations
• The solutions just given save memory/interconnect resources – but still waste processor resources
• Use the runtime to suspend a waiting process – Detect the held lock – Place the thread on a wait queue – Schedule another thread from the run queue – When the lock is released, move the thread from the wait queue to the run queue

Point-to-Point Synchronization
- *Can* use normal variables as flags

```c
/* producer */                      /* consumer */
while (full == 1) {}  /* spin */    while (full == 0) {}  /* spin */
buffer = value;                     b = buffer;
full = 1;                           full = 0;
```

- Assumes sequential consistency (later)
- Using normal variables may cause problems with relaxed consistency models
- May be better to use special opcodes for flag set/clear

Barrier Synchronization
• Uses a lock, a counter, and a flag – the lock protects the counter update – the flag indicates all threads have incremented the counter

```c
Barrier(bar_name, n) {
    Lock(bar_name.lock);
    if (bar_name.counter == 0) bar_name.flag = 0;  /* first arrival resets flag */
    mycount = ++bar_name.counter;                  /* last arrival sees n */
    Unlock(bar_name.lock);
    if (mycount == n) {                            /* last thread releases the others */
        bar_name.counter = 0;
        bar_name.flag = 1;
    } else
        while (bar_name.flag == 0) {};             /* busy wait */
}
```

Scalable Barrier Synchronization
- A single counter can be a point of contention
- Solution: use a tree of locks
- Example: threads 1, 2, 3, 4, 6 have completed

Memory Ordering
• Program order – A processor executes instructions in architected (PC) sequence, *or at least appears to*
• Loads and stores from a single processor execute in *program order* – Program order *must* be satisfied – It is part of the ISA
• What about the ordering of loads and stores from *different* processors?

Memory Ordering
• Producer/consumer example:

```plaintext
T0:                   T1:
A = 0;
Flag = 0;
...
A = 9;                while (Flag == 0) {};
Flag = 1;             L2: if (A == 0) ...
```

• Intuitively it is *impossible* for A to be 0 at L2 – *but* it can happen if the updates to memory are reordered by the memory system
• In an MP system, memory ordering rules must be carefully defined and maintained

Practical Implementation
• An interconnection network with contention and buffering can deliver the update of Flag before the update of A in the example above
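Under C11's memory model the intended ordering for this example can be stated explicitly. The following sketch, not from the original slides, uses a release store on the flag and an acquire load so that the consumer is guaranteed to observe A == 9:

```c
#include <stdatomic.h>

int A = 0;
atomic_int Flag = 0;

void producer(void) {
    A = 9;
    /* release: all prior writes (A = 9) are made visible before Flag */
    atomic_store_explicit(&Flag, 1, memory_order_release);
}

void consumer(void) {
    /* acquire: once Flag == 1 is observed, the write to A is too */
    while (atomic_load_explicit(&Flag, memory_order_acquire) == 0)
        ;
    /* here A is guaranteed to be 9 */
}
```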
Sequential Consistency
"A system is sequentially consistent if the result of any execution is the same as if the operations of all processors were executed in some sequential order and the operations of each individual processor appear in this sequence in the order specified by its program" -- Leslie Lamport

Memory Coherence
• With respect to an individual memory location, consistency is always maintained – In the producer/consumer examples, coherence is always maintained
• Practically, memory coherence often reduces to cache coherence – Cache coherence implementations to be discussed later
• Summary – **Coherence** is about a single memory location – **Consistency** applies to the apparent ordering of all memory locations – Memory coherence and consistency are ISA concepts

Examples (orderings of a single location):
(a) Thread0: Store A ← 0; Store A ← 9. Thread1: Load A = 9.
(b) Thread0: Store Flag ← 0; Store Flag ← 1. Thread1: Load Flag = 0; Load Flag = 1.

sub-Outline
• Message Passing Model – API-level Processes, Threads – API-level Communication – API-level Synchronization
• Message Passing Implementation – Implementing Processes, Threads at ABI/ISA levels – Implementing Communication at ABI/ISA levels – Implementing Synchronization at ABI/ISA levels

Message Passing
- Multiple processes (or threads)
- Logical data partitioning – no shared variables
- Threads of control communicate by sending and receiving messages – May be implicit in language constructs – More commonly explicit via an API

MPI – Message Passing Interface API
• A widely used standard – For a variety of distributed memory systems • SMP clusters, workstation clusters, MPPs, heterogeneous systems
• Also works on shared memory MPs – Easy to emulate distributed memory on shared memory HW
• Can be used with a number of high-level languages

Processes and Threads
• Lots of flexibility (an advantage of message passing) – 1) Multiple threads sharing an address space – 2) Multiple processes sharing an address space – 3) Multiple processes with different address spaces • and different OSes
• 1 and 2 are easily implemented on shared memory hardware (with a single OS) – Process and thread creation/management is similar to shared memory
• 3 is probably more common in practice – Process creation is often external to the execution environment, e.g. a shell script – Hard for a user process on one system to create a process on another OS

Process Management
• Processes are given identifiers (PIds) – "rank" in MPI
• A process can acquire its own PId
• Operations can be conditional on PId
• Messages can be sent/received via PIds
- Processes are organized into groups – for collective management and communication

Communication and Synchronization
• Combined in the message passing paradigm – Synchronization of messages is part of the communication semantics
• Point-to-point communication – From one process to another
• Collective communication – Involves groups of processes – e.g., broadcast

Point-to-Point Communication
• Use sends/receives
• send(RecProc, SendBuf, ...) – RecProc is the destination (wildcards may be used) – SendBuf names the buffer holding the message to be sent
• receive(SendProc, RecBuf, ...)
– SendProc names the sending process (wildcards may be used) – RecBuf names the buffer where the message should be placed

MPI Examples
• MPI_Send(buffer, count, type, dest, tag, comm)
  buffer – address of the data to be sent
  count – number of data items
  type – type of the data items
  dest – rank of the receiving process
  tag – arbitrary programmer-defined identifier; the tags of send and receive must match
  comm – communicator number
• MPI_Recv(buffer, count, type, source, tag, comm, status)
  buffer – address where the received data should be placed
  count – number of data items
  type – type of the data items
  source – rank of the sending process; may be a wildcard
  tag – arbitrary programmer-defined identifier; may be a wildcard; the tags of send and receive must match
  comm – communicator number
  status – indicates source, tag, and number of bytes transferred

Message Synchronization
• After a send or receive is executed... – Has the message actually been sent? or received?
• Asynchronous versus synchronous – the higher-level concept
• Blocking versus non-blocking – lower level – depends on the buffer implementation • but is reflected up into the API

Synchronous vs Asynchronous
• Synchronous send – Stall until the message has actually been received – Implies a message acknowledgement from receiver to sender
• Synchronous receive – Stall until the message has actually been received
• Asynchronous send and receive – Sender and receiver can proceed regardless – Returns a request handle that can be tested to see if the message has been sent/received

Asynchronous Send
- **MPI_Isend**(buffer, count, type, dest, tag, comm, request)
  buffer – address of the data to be sent
  count – number of data items
  type – type of the data items
  dest – rank of the receiving process
  tag – arbitrary programmer-defined identifier; the tags of send and receive must match
  comm – communicator number
  request – a unique number that can be used later to test for completion (via Test or Wait)
- The sending process is immediately free to do other work
- The *request* handle must be tested before another message can safely be sent – **MPI_Test** – tests the request handle and returns its status – **MPI_Wait** – blocks until the request handle is "done"

Asynchronous Receive
- MPI_Irecv(buffer, count, type, source, tag, comm, request)
  buffer – address where the received data should be placed
  count – number of data items
  type – type of the data items
  source – rank of the sending process; may be a wildcard
  tag – arbitrary programmer-defined identifier; may be a wildcard; the tags of send and receive must match
  comm – communicator number
  request – a unique number that can be used later to test for completion
- The receiving process does not wait for the message
- The request handle must be tested before the message is known to be in the buffer – MPI_Test – tests the request handle and returns its status – MPI_Wait – blocks until the request handle is "done"
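Before the nonblocking ring example that follows, here is a minimal sketch, not from the original slides, of the blocking calls between two ranks (the value passed and the tag 0 are arbitrary choices for the example):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* blocking send: returns once the buffer may be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive: returns once the message is in 'value' */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```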
MPI Example: Communication Around a Ring

```c
int main(int argc, char *argv[])
{
    int numprocs, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = numprocs - 1;
    if (rank == (numprocs - 1)) next = 0;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, stats);

    MPI_Finalize();
    return 0;
}
```

Deadlock
- Blocking communications may deadlock:

```plaintext
<Process 0>                    <Process 1>
Send(Process1, Message);       Send(Process0, Message);
Receive(Process1, Message);    Receive(Process0, Message);
```

- Requires careful (safe) ordering of sends/receives:

```plaintext
<Process 0>                    <Process 1>
Send(Process1, Message);       Receive(Process0, Message);
Receive(Process1, Message);    Send(Process0, Message);
```

- Also depends on buffering – System buffering may not eliminate deadlock, just postpone it

Collective Communications
• Involve all processes within a communicator
• Blocking
• MPI_Barrier(comm) – barrier synchronization
• MPI_Bcast(*buffer, count, datatype, root, comm) – broadcasts from the process of rank "root" to all other processes
• MPI_Scatter(*sendbuf, sendcnt, sendtype, *recvbuf, recvcnt, recvtype, root, comm) – sends a different message to each process in a group
• MPI_Gather(*sendbuf, sendcnt, sendtype, *recvbuf, recvcount, recvtype, root, comm) – gathers a different message from each process in a group
• Also reductions

Communicators and Groups
• Define collections of processes that may communicate – Often specified in a message argument – MPI_COMM_WORLD – predefined communicator that contains all processes

Broadcast Example (figure)

Scatter Example: Process 0's SendBuf {23, 37, 42, 55} is distributed so that Process 0's RecvBuf gets 23, Process 1's gets 37, Process 2's gets 42, and Process 3's gets 55.

Gather Example: the reverse; SendBuf values 23, 37, 42, 55 from Processes 0-3 are collected into Process 0's RecvBuf.

Message Passing Implementation
- At the ABI and ISA level – No special support (beyond that needed for shared memory)
- Most of the implementation is in the runtime – user-level libraries – This makes message passing relatively portable
- Three implementation models (given earlier): 1) Multiple threads sharing an address space 2) Multiple processes sharing an address space 3) Multiple processes with non-shared address spaces and different OSes

Multiple Threads Sharing an Address Space
• Runtime manages buffering and tracks communication – Communication via normal loads and stores using shared memory
• Example: Send/Receive – Send calls the runtime; the runtime posts the availability of the message in a runtime-managed table – Receive calls the runtime; the runtime checks the table and finds the message – The runtime copies the data from the send buffer to the receive buffer via loads/stores
• Fast/efficient implementation – May even be advantageous over the shared memory paradigm • considering portability and software engineering aspects – Can use runtime thread scheduling – Problem with protecting private memories and the runtime data area

Multiple Processes Sharing an Address Space
- Similar to multiple threads sharing an address space – Would rely on kernel scheduling
- May offer more memory protection – With intermediate runtime buffering, user processes cannot access others' private memory
Multiple Processes with Non-Shared Address Spaces
• The most common implementation
• Communicate via networking hardware
• Send/receive call into the runtime – The runtime converts them to OS (network) calls
• Relatively high overhead – Most HPC systems use special low-latency, high-bandwidth networks – Buffering in the receiver's runtime space may save some overhead on a receive (it doesn't require an OS call)

At the ISA Level: Shared Memory
- Multiple processors
- Architected shared virtual memory
- Architected synchronization instructions
- Architected cache coherence
- Architected memory consistency

At the ISA Level: Message Passing
- Multiple processors
- Shared or non-shared real memory (multi-computers)
- Limited ISA support (if any) – An advantage of distributed memory systems -- just connect a bunch of small computers
- Some implementations may use shared memory managed by the runtime

Lecture Summary
• Introduction to Parallel Software
• Programming Models
• Major Abstractions – Processes & threads – Communication – Synchronization
• Shared Memory – API description – Implementation at ABI, ISA levels – ISA support
• Message Passing – API description – Implementation at ABI, ISA levels – ISA support
• Not covered: OpenMP, Intel TBB, CUDA (later), etc.
{"Source-Url": "http://ece757.ece.wisc.edu/lect04-mp-software.pdf", "len_cl100k_base": 7758, "olmocr-version": "0.1.48", "pdf-total-pages": 86, "total-fallback-pages": 0, "total-input-tokens": 126786, "total-output-tokens": 11119, "length": "2e12", "weborganizer": {"__label__adult": 0.0003712177276611328, "__label__art_design": 0.0006518363952636719, "__label__crime_law": 0.00037479400634765625, "__label__education_jobs": 0.006397247314453125, "__label__entertainment": 0.00011247396469116212, "__label__fashion_beauty": 0.00017595291137695312, "__label__finance_business": 0.00022208690643310547, "__label__food_dining": 0.0003795623779296875, "__label__games": 0.0009336471557617188, "__label__hardware": 0.0033740997314453125, "__label__health": 0.0004224777221679687, "__label__history": 0.0004127025604248047, "__label__home_hobbies": 0.00017571449279785156, "__label__industrial": 0.0008707046508789062, "__label__literature": 0.00027942657470703125, "__label__politics": 0.0003120899200439453, "__label__religion": 0.0006341934204101562, "__label__science_tech": 0.076416015625, "__label__social_life": 0.00013959407806396484, "__label__software": 0.010223388671875, "__label__software_dev": 0.8955078125, "__label__sports_fitness": 0.0004315376281738281, "__label__transportation": 0.0009636878967285156, "__label__travel": 0.0002613067626953125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31166, 0.023]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31166, 0.24501]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31166, 0.77381]], "google_gemma-3-12b-it_contains_pii": [[0, 310, false], [310, 707, null], [707, 1039, null], [1039, 1462, null], [1462, 1804, null], [1804, 2199, null], [2199, 2534, null], [2534, 2666, null], [2666, 2984, null], [2984, 3438, null], [3438, 3847, null], [3847, 4016, null], [4016, 4587, null], [4587, 4937, null], [4937, 5088, null], [5088, 5394, null], [5394, 5874, null], [5874, 6314, null], [6314, 6580, null], [6580, 6907, null], [6907, 7013, null], [7013, 7152, null], [7152, 7610, null], [7610, 7964, null], [7964, 8238, null], [8238, 8300, null], [8300, 8799, null], [8799, 9361, null], [9361, 9781, null], [9781, 10227, null], [10227, 10570, null], [10570, 10721, null], [10721, 10907, null], [10907, 11347, null], [11347, 11461, null], [11461, 11548, null], [11548, 11923, null], [11923, 12239, null], [12239, 12454, null], [12454, 12665, null], [12665, 12853, null], [12853, 13004, null], [13004, 13836, null], [13836, 14260, null], [14260, 14773, null], [14773, 15686, null], [15686, 16257, null], [16257, 16668, null], [16668, 17047, null], [17047, 17592, null], [17592, 17901, null], [17901, 18282, null], [18282, 18740, null], [18740, 18896, null], [18896, 19229, null], [19229, 19718, null], [19718, 19896, null], [19896, 20184, null], [20184, 20812, null], [20812, 21128, null], [21128, 21396, null], [21396, 21721, null], [21721, 22313, null], [22313, 22501, null], [22501, 22592, null], [22592, 22881, null], [22881, 23213, null], [23213, 23950, null], [23950, 24247, null], [24247, 24702, null], [24702, 25385, null], [25385, 26073, null], [26073, 26858, null], [26858, 27368, null], [27368, 27949, null], [27949, 28143, null], [28143, 28181, null], [28181, 28373, null], [28373, 28496, null], [28496, 28956, null], [28956, 29616, null], [29616, 29875, null], [29875, 30261, null], [30261, 30458, null], [30458, 30775, null], [30775, 31166, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 310, true], [310, 707, null], [707, 1039, null], [1039, 1462, null], [1462, 1804, null], [1804, 2199, null], [2199, 2534, null], [2534, 2666, null], [2666, 2984, null], [2984, 3438, null], [3438, 3847, null], [3847, 4016, null], [4016, 4587, null], [4587, 4937, null], [4937, 5088, null], [5088, 5394, null], [5394, 5874, null], [5874, 6314, null], [6314, 6580, null], [6580, 6907, null], [6907, 7013, null], [7013, 7152, null], [7152, 7610, null], [7610, 7964, null], [7964, 8238, null], [8238, 8300, null], [8300, 8799, null], [8799, 9361, null], [9361, 9781, null], [9781, 10227, null], [10227, 10570, null], [10570, 10721, null], [10721, 10907, null], [10907, 11347, null], [11347, 11461, null], [11461, 11548, null], [11548, 11923, null], [11923, 12239, null], [12239, 12454, null], [12454, 12665, null], [12665, 12853, null], [12853, 13004, null], [13004, 13836, null], [13836, 14260, null], [14260, 14773, null], [14773, 15686, null], [15686, 16257, null], [16257, 16668, null], [16668, 17047, null], [17047, 17592, null], [17592, 17901, null], [17901, 18282, null], [18282, 18740, null], [18740, 18896, null], [18896, 19229, null], [19229, 19718, null], [19718, 19896, null], [19896, 20184, null], [20184, 20812, null], [20812, 21128, null], [21128, 21396, null], [21396, 21721, null], [21721, 22313, null], [22313, 22501, null], [22501, 22592, null], [22592, 22881, null], [22881, 23213, null], [23213, 23950, null], [23950, 24247, null], [24247, 24702, null], [24702, 25385, null], [25385, 26073, null], [26073, 26858, null], [26858, 27368, null], [27368, 27949, null], [27949, 28143, null], [28143, 28181, null], [28181, 28373, null], [28373, 28496, null], [28496, 28956, null], [28956, 29616, null], [29616, 29875, null], [29875, 30261, null], [30261, 30458, null], [30458, 30775, null], [30775, 31166, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31166, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31166, null]], "pdf_page_numbers": [[0, 310, 1], [310, 707, 2], [707, 1039, 3], [1039, 1462, 4], [1462, 1804, 5], [1804, 2199, 6], [2199, 2534, 7], [2534, 2666, 8], [2666, 2984, 9], [2984, 3438, 10], [3438, 3847, 11], [3847, 4016, 12], [4016, 4587, 13], [4587, 4937, 14], [4937, 5088, 15], [5088, 5394, 16], [5394, 5874, 17], [5874, 6314, 18], [6314, 6580, 19], [6580, 6907, 20], [6907, 7013, 21], [7013, 7152, 22], [7152, 7610, 23], [7610, 7964, 24], [7964, 8238, 25], [8238, 8300, 26], [8300, 8799, 27], [8799, 9361, 28], [9361, 9781, 29], [9781, 10227, 30], [10227, 10570, 31], [10570, 10721, 32], [10721, 10907, 33], [10907, 11347, 34], [11347, 11461, 35], [11461, 11548, 36], [11548, 11923, 37], [11923, 12239, 38], [12239, 12454, 39], [12454, 12665, 40], [12665, 12853, 41], [12853, 13004, 42], 
[13004, 13836, 43], [13836, 14260, 44], [14260, 14773, 45], [14773, 15686, 46], [15686, 16257, 47], [16257, 16668, 48], [16668, 17047, 49], [17047, 17592, 50], [17592, 17901, 51], [17901, 18282, 52], [18282, 18740, 53], [18740, 18896, 54], [18896, 19229, 55], [19229, 19718, 56], [19718, 19896, 57], [19896, 20184, 58], [20184, 20812, 59], [20812, 21128, 60], [21128, 21396, 61], [21396, 21721, 62], [21721, 22313, 63], [22313, 22501, 64], [22501, 22592, 65], [22592, 22881, 66], [22881, 23213, 67], [23213, 23950, 68], [23950, 24247, 69], [24247, 24702, 70], [24702, 25385, 71], [25385, 26073, 72], [26073, 26858, 73], [26858, 27368, 74], [27368, 27949, 75], [27949, 28143, 76], [28143, 28181, 77], [28181, 28373, 78], [28373, 28496, 79], [28496, 28956, 80], [28956, 29616, 81], [29616, 29875, 82], [29875, 30261, 83], [30261, 30458, 84], [30458, 30775, 85], [30775, 31166, 86]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31166, 0.00866]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
0972fae409dfc71822b951f29d334434ce7f290e
Automated Projectile Design Software
Mark Steinhoff, Arrow Tech Associates

Projectile Design
• Ballistic Projectile Design – Performance Specs – System Interface – Candidate Design • Mass Properties • Aerodynamics • Ballistic Effectiveness • Payload Effectiveness
• Guided Projectiles – Same as ballistic, plus – Control mechanisms – Sensors – Autopilot – Guidance strategy

Mission
• Ballistic Mission – How often will you hit the target ($P_h$) – When you hit it, what is the likelihood of a kill ($P_{k/h}$)
• Guided – Same as ballistic, plus – Remove system errors – Trajectory shaping • Glide for extended range • Dive to clear obstacles or for lethality

Bottom Line
• You now have to: – Design the projectile – Decide on a control mechanism – Design the autopilot – Implement a guidance strategy

Commercial/Military Projectile Design Tools
- Custom/Proprietary Software – Developer uses different analysis modules, handing off data from one to the other – Stand-alone modelers • Model building (PRO-E or SolidWorks) • Aerodynamic estimation (CFD, Missile DATCOM, MILS3 or AP) – Simulation codes • Hand-coded custom solutions • Typically Project A evolves into Project B evolves into Project C
- PRODAS – Legacy codes embedded into an integrated software system – Validated simulations – Macro language
- MATLAB/Simulink – Like legacy simulation codes, except within an environment – Pre-built simulation blocks and integration engines

Software Metrics
<table>
<thead>
<tr> <th>Category</th> <th>Custom</th> <th>PRODAS</th> <th>MATLAB/Simulink</th> </tr>
</thead>
<tbody>
<tr> <td>Complexity Of Models</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Speed of Execution</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Push to Hardware</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Speed/Cost of Implementation</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Pre-Validated</td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

First Level - Conceptual Design Studies (Proposal)
- For a given projectile, what improvements in performance can be obtained IF a control force and/or moment is available?
- Simple to model and assess the benefit of a flight control system

Second Level - Detailed Design Studies (Design)
- Perform parametric trade studies to design the details of the control mechanism
- Assess the performance of a smart weapon

Third Level - Final Detailed Design (Test)
- At this stage, detailed models of the sensor suite and control law are included in the analysis
- Models will include real-time loop rates
- The model should generate C code for embedded processors

Guided Projectile Development Cycle
Proposal – Requirements – Concepts – Trades – System Evaluation • PRODAS: model, predict aeros, trajectories and system effectiveness
Design – System Design – Component Design – GN&C Design • Preliminary GN&C development: PRODAS • Final GN&C development: MATLAB • Overall system simulation: MATLAB
Test – Component Test – Wind Tunnel Test – Ballistic Test – Open Loop Test – Closed Loop Test • PRODAS to predict tests and evaluate aero test data • MATLAB for HIL and final GN&C tuning

Example #1 Ballistic Projectile
- Design a 50 caliber projectile that will minimize wind sensitivity at 1000 m
- Start with a basic shape
- Vary boat tail length and ogive length and shape

Subtle Changes to Ogive Shape: 154 mm radius, 254 mm radius, 664 cm radius
Not So Subtle Changes to Ogive Length: standard, -3 mm, +3 mm
Not So Subtle Changes to Boat Tail: standard, +15%, +30%

Example #1 Metrics
• 27 designs evaluated
• 7 analysis modules executed
• 50 seconds run time
• 200 lines of PRODAS macro code
• 4 hours to develop
• 1 Excel file of results
• Results: – Decreased wind sensitivity by 7.6% – Some configurations increased it by as much as 10%

Example #1 Extended Metrics
- Started with the previous macro
- 125 designs evaluated
- 7 analysis modules executed
- 4 minutes run time
- 212 lines of PRODAS macro code
- 2 minutes to modify
- 1 Excel file of results
- Results: – Decreased wind sensitivity by 7.8% – Some configurations increased it by as much as 25%

Example #2 Guided Projectile
• The basic design of the mortar body is fixed; evaluate different fin/canard designs to meet multiple requirements.
– Find maximum ballistic range
– Find maximum gliding range using open-loop control
– Find maximum target range with vertical impact using open-loop control

Analysis Map
Baseline plus: +/- 10% and 20% fin span; +/- 10% and 20% canard span; +/- 10% and 20% canard chord → Vary fin and canard dimensions → Estimate aerodynamics → 125 different projectile designs → Data collection (data file) → Maximum ballistic range, maximum gliding range, maximum vertical impact

Macro Map
Macro #1: baseline plus the fin/canard variations above – vary fin and canard dimensions, estimate aerodynamics, and collect data for the 125 designs
Macro #2: maximum ballistic range
Macro #3: maximum gliding range
Macro #4: maximum vertical impact

Guided Flight Macros
- A simple open-loop canard controller embedded in the GN&C prototype tool
- Maximum Gliding Range macro – iterates these design variables: quadrant elevation; time glide on; canard application level to limit total AOA
- Maximum Range with Vertical Impact macro – iterates these design variables: quadrant elevation; time glide on; time dive on; canard application level to limit total AOA

Example #2 Results
• About half of the configurations were unstable
• 40% met the ballistic requirement
• 15% met the extended-range requirement
• 8% met all the requirements
• This analysis was iterated with three different airframes

Conclusions
• Thorough ballistic development is tough – Automation lessens the burden – Guided projectiles are even worse
• Match the tool to the job – Where are you in the development cycle? – Fast or detailed? – Do you have to validate the sims?
• Tools are readily available

Example #1 PRODAS Script
• The following script is included as an example showing how to modify your model with code.
• It will: – Modify a base model – Calculate mass properties – Estimate muzzle velocity using Frankfort interior ballistics – Estimate aerodynamics using Spinner 2000 – Fly a 6DOF trajectory with no wind – Estimate stability at cold temperatures – Fly another trajectory with a 10-knot crosswind – Prepare a comma-separated-value file with the results
• If you have questions or comments, please contact – Mark Steinhoff, Arrow Tech Associates (802) 865-3460 ext 18

How to Run this Script
• Open a new PRODAS macro window
• Copy and paste the lines from the following slides
• Change the path and projectile name of interest – The projectile model should include a case and propellant
• The script expects certain elements to be named in your projectile model – Name the: • Ogive outer skin – OG • Ogive void element – OGV • Lead that fills the ogive – OGL • Body outer skin – BD • Body void element – BDV • Lead that fills the body – BDL • Boat tail outer skin – BT • Boat tail void element – BTV • Lead that fills the boat tail – BTL • Void in the propellant for the boat tail – PROPV

```vb
SUB MAIN
"PRODAS MACRO SCRIPT FILE 12/14/09 FILENAME= T:\PRODASV35\SCRIPTS\BUILDERV1.PVB"
PROJDIR = "R:\ARROW TECH AUTHORED PAPERS 2010\"
TESTPROJ = PROJDIR & "50 CAL BALLISTIC SNIPER PROJECT.PR3"

'INITIALIZE AN ARRAY TO STORE THE SUMMARY TABLE
DIM LINEHDR(50)
DIM LINECOL(50)
ACTIVECOL=20
LINEHDR(0)="CONFIG"
LINEHDR(1)="MASS"
LINEHDR(2)="IX"
LINEHDR(3)="IY"
LINEHDR(4)="CG"
LINEHDR(5)="PROP"
LINEHDR(6)="CHAMBER"
LINEHDR(7)="MV"
LINEHDR(8)="X1"
LINEHDR(9)="Y1"
LINEHDR(10)="Z1"
LINEHDR(11)="MACH"
LINEHDR(12)="GYRO"
LINEHDR(13)="X2"
LINEHDR(14)="Y2"
LINEHDR(15)="Z2"
LINEHDR(16)="TOF"
LINEHDR(17)="X ERROR"
LINEHDR(18)="Y ERROR"
LINEHDR(19)="Z ERROR"
LINEHDR(20)="RMS ERROR"

'INITIALIZE THE TEXT FILE TO ACCUMULATE THE RESULTS
SET RESULTS = MACROSYSTEM.INITIALIZERESULTSFILE
RESULTSFILENAME = PROJDIR & "RESULT " & MONTH(DATE) & ", " & DAY(DATE) & " " & HOUR(TIME) & ", " & MINUTE(TIME) & ".TXT"
RESULTS.OPENFILE RESULTSFILENAME 'RESET THE RESULTS PATH
RESULTS.WRITEHEADER
LINEOUT= ""
RESULTS.WRITESTRING LINEOUT
FOR J = 0 TO ACTIVECOL
    LINEOUT= LINEOUT & LINEHDR(J)
    IF J < ACTIVECOL THEN LINEOUT=LINEOUT & CHR(9)
NEXT
RESULTS.WRITESTRING LINEOUT

FOR BTINDEX = 1 TO 3
FOR OGLINDEX = 1 TO 3
FOR OGRINDEX = 1 TO 3
    SET PROJ = MACROSYSTEM.INITIALIZEPROJECTILE
    IOPEN = PROJ.OPENDATAFILE(TESTPROJ)
    PROJ.FORCEUNLOCKPROJECTILE 'MAKE SURE IT IS READY TO BE CHANGED
    SET MODEL = PROJ.MODEL("SYSTEM")

    'VARY THE BOAT TAIL LENGTH
    SET BT = MODEL.RETURNNAMEDELEMENT("BT")
    SELECT CASE BTINDEX
        CASE 1 'DO NOTHING
            DELTALENGTH = 0.0
        CASE 2
            DELTALENGTH = BT.LENGTH * 0.15
        CASE 3
            DELTALENGTH = BT.LENGTH * 0.3
    END SELECT
    BTTAG = "BT+" & DELTALENGTH*1000 & "MM|"

    SET BTV = MODEL.RETURNNAMEDELEMENT("BTV")
    SET BTL = MODEL.RETURNNAMEDELEMENT("BTL")
    SET PROPV = MODEL.RETURNNAMEDELEMENT("PROPV")
    BT.LENGTH = BT.LENGTH + DELTALENGTH
    BT.REF_LENGTH = BT.REF_LENGTH - DELTALENGTH
    BTV.LENGTH = BTV.LENGTH + DELTALENGTH
    BTV.REF_LENGTH = BTV.REF_LENGTH - DELTALENGTH
    BTL.LENGTH = BTL.LENGTH + DELTALENGTH
    BTL.REF_LENGTH = BTL.REF_LENGTH - DELTALENGTH
    PROPV.LENGTH = PROPV.LENGTH + DELTALENGTH
    PROPV.REF_LENGTH = PROPV.REF_LENGTH - DELTALENGTH

    'VARY THE OGIVE LENGTH
    SELECT CASE OGLINDEX
        CASE 1 'DO NOTHING
            DELTALENGTH = 0.00 'M
        CASE 2
            DELTALENGTH = -0.003 'M
        CASE 3
            DELTALENGTH = 0.003 'M
    END SELECT
    OGLTAG = "OG+" & DELTALENGTH*1000 & "MM|"

    SET OG = MODEL.RETURNNAMEDELEMENT("OG")
    SET OGV = MODEL.RETURNNAMEDELEMENT("OGV")
    SET OGL = MODEL.RETURNNAMEDELEMENT("OGL")
    SET BD = MODEL.RETURNNAMEDELEMENT("BD")
    SET BDV = MODEL.RETURNNAMEDELEMENT("BDV")
MODEL.RETURNNAMEDELEMENT("BDV") SET BDL = MODEL.RETURNNAMEDELEMENT("BDL") OG.LENGTH = OG.LENGTH + DELTALENGTH OG.REF_LENGTH= OG.REF_LENGTH - DELTALENGTH OGL.LENGTH = OGL.LENGTH + DELTALENGTH OGL.REF_LENGTH= OGL.REF_LENGTH - DELTALENGTH BD.LENGTH = BD.LENGTH - DELTALENGTH BDV.LENGTH = BDV.LENGTH - DELTALENGTH BDL.LENGTH = BDL.LENGTH - DELTALENGTH SELECT CASE OGRINDEX CASE 1 'DO NOTHING DELTARADIUS= 0.0 ' MUST BE IN METERS CASE 2 DELTARADIUS= -100.0/1000.0 ' MUST BE IN METERS CASE 3 DELTARADIUS= 400.0/1000.0 ' MUST BE IN METERS END SELECT SET OG = MODEL.RETURNNAMEDELEMENT("OG") OG.RADIUS = OG.RADIUS + DELTARADIUS ' MUST BE IN METERS OGRTAG="OGR + & DELTARADIUS *1000 & "MM|" INITIALPROP_VOLUME= PROJ.GETDATAPointVALUE("MASSPROP","CALC_PROPELLANT_VOLUME") PROJ.EXECUTEANALYSIS "MASS2000" WEIGHT= PROJ.GETDATAPointVALUE("MASSPROP","CALC_FLY_WEIGHT") AXIALINERTIA= PROJ.GETDATAPointVALUE("MASSPROP","CALC_FLY_AXIALINERTIA") TRANSINERTIA= PROJ.GETDATAPointVALUE("MASSPROP","CALC_FLY_TRANSINERTIA") CGNOSE= PROJ.GETDATAPointVALUE("MASSPROP","CALC_FLY_CGNOSE") LINECOL(0)=BTTAG & OGTAG & OGRTAG LINECOL(1)=WEIGHT *1000 LINECOL(2)=AXIALINERTIA LINECOL(3)=TRANSINERTIA LINECOL(4)=CGNOSE *1000 PROP_WEIGHT= PROJ.GETDATAPointVALUE("MASSPROP","CALC_PROPELLANT_WEIGHT") PROP_VOLUME= PROJ.GETDATAPointVALUE("MASSPROP","CALC_PROPELLANT_VOLUME") LINECOL(5)=PROP_WEIGHT *1000 LINECOL(6)=PROP_VOLUME*100*100*100 Example PRODAS Script Page 5 PROJ.SETDATAPointVALUE "GUNINFO","CHAMBERVOLUME",PROP_VOLUME PROJ.SETDATAPointVALUE "INTERIORBALLISTICS","PROPMASS(FRANKFORT)",PROP_WEIGHT PROJ.EXECUTEANALYSIS "IBAL2000FRANKFORT" MV = PROJ.GETDATAPointVALUE("TRAJECTORY","MUZZLEVELOCITY") LINECOL(7)=MV PROJ.EXECUTEANALYSIS "SPIN2000" PROJ.SETDATAPointVALUE "TRAJECTORY","RANGEFINAL",1000 PROJ.SETDATAPointVALUE "MET","METTYPE",3 'SET TO STD PROJ.EXECUTEANALYSIS "TRAJ20006D" SET TABLE= PROJ.OPENDATATABLE("TRAJECTORY","TRAJRESULTSDATA") 'INIT A DATA TABLE OBJECT LAUNCHMACH = TABLE.CELLVALUE(1,11) X1 = TABLE.CELLVALUE(TABLE.ROWS,2) Y1 = TABLE.CELLVALUE(TABLE.ROWS,3) Z1 = TABLE.CELLVALUE(TABLE.ROWS,4) LINECOL(8)=X1 LINECOL(9)=Y1 LINECOL(10)=Z1 PROJ.SETDATAPointVALUE "MET","METTYPE",1 'SET TO COLD PROJ.EXECUTEANALYSIS "SPINSTAB2000" SET TABLE= PROJ.OPENDATATABLE("AEROSTABILITY","STABILITYBASIC") 'INIT A DATA TABLE OBJECT IF LAUNCHMACH < TABLE.CELLVALUE(1,1) THEN MYGYRO = TABLE.CELLVALUE(1,2) ELSEIF LAUNCHMACH > TABLE.CELLVALUE(30,1) THEN MYGYRO = TABLE.CELLVALUE(30,2) ELSE FOR I=2 TO TABLE.ROWS IF LAUNCHMACH < TABLE.CELLVALUE(I,1) THEN RATIO = (TABLE.CELLVALUE(I,1)-LAUNCHMACH )/(TABLE.CELLVALUE(I,1)-TABLE.CELLVALUE(I-1,1)) MYGYRO = TABLE.CELLVALUE(I,2) - RATIO * ( TABLE.CELLVALUE(I,2) - TABLE.CELLVALUE(I,1)) EXIT FOR END IF NEXT END IF Example PRODAS Script LINECOL(11)=LAUNCHMACH LINECOL(12)=MYGYRO PROJ.SETDATAPointVALUE "MET","PRES AT SEA LEVEL", 1013.3 PROJ.SETDATAPointVALUE "MET","TEMP AT SEA LEVEL", 15 PROJ.SETDATAPointVALUE "MET","WIND DIRECTION", 0.0 'FROM THE NORTH PROJ.SETDATAPointVALUE "MET","WIND SPEED", 5.14 'M/SEC = 10 KNOTS PROJ.EXECUTEANALYSIS "MET2000" PROJ.SETDATAPointVALUE "MET","METTYPE", 6 'USER MET PROJ.EXECUTEANALYSIS "TRAJ20006D" SET TABLE= PROJ.OPENDATATABLE("TRAJECTORY","TRAJRESULTSDATA") 'INIT A DATA TABLE OBJECT X2 = TABLE.CELLVALUE(TABLE.ROWS,2) Y2 = TABLE.CELLVALUE(TABLE.ROWS,3) Z2 = TABLE.CELLVALUE(TABLE.ROWS,4) LINECOL(13)=X2 LINECOL(14)=Y2 LINECOL(15)=Z2 LINECOL(16)= TABLE.CELLVALUE(TABLE.ROWS,1) LINECOL(17)=X1-X2 LINECOL(18)=Y1-Y2 LINECOL(19)=Z1-Z2 LINECOL(20)= SQRT((X1-X2)^2 + (Y1-Y2)^2 + (Z1-Z2)^2 ) LINEOUT= "" 
FOR J = 0 TO ACTIVECOL LINEOUT= LINEOUT & LINECOL(J) IF J < ACTIVECOL THEN LINEOUT=LINEOUT & CHR(9) NEXT RESULTS.WRITESTRING LINEOUT NEWFILENAME = "50 CAL BALLISTIC SNIPER PROJECT" &BTINDEX & OGLINDEX &OGRINDEX & ".PR3" PROJ.SAVEASDATAFILE PROJDIR & NEWFILENAME PROJ.CLOSEDATAFILE 'YOU ARE DONE WITH THIS PROJECTILE NEXT NEXT NEXT RESULTS.CLOSEFILE 'CLOSE THE RESULTS FILE SET RESULTS = NOTHING MSGBOX "RESULTS FILE CAN BE FOUND IN " & RESULTSFILENAME END SUB
{"Source-Url": "http://www.prodas.com/XQ/ASP/P.603/QX/Documents/Arrow%20Tech%20-%20Steinhoff%20NDIA%20Paper%20-%20Automated%20Projectile%20Design%20Software.pdf", "len_cl100k_base": 4595, "olmocr-version": "0.1.53", "pdf-total-pages": 34, "total-fallback-pages": 0, "total-input-tokens": 49795, "total-output-tokens": 5999, "length": "2e12", "weborganizer": {"__label__adult": 0.00072479248046875, "__label__art_design": 0.0008745193481445312, "__label__crime_law": 0.0020885467529296875, "__label__education_jobs": 0.0012063980102539062, "__label__entertainment": 0.00016057491302490234, "__label__fashion_beauty": 0.0004227161407470703, "__label__finance_business": 0.0005426406860351562, "__label__food_dining": 0.0007338523864746094, "__label__games": 0.0027751922607421875, "__label__hardware": 0.003299713134765625, "__label__health": 0.0006122589111328125, "__label__history": 0.0006852149963378906, "__label__home_hobbies": 0.000335693359375, "__label__industrial": 0.006961822509765625, "__label__literature": 0.00024580955505371094, "__label__politics": 0.00066375732421875, "__label__religion": 0.00083160400390625, "__label__science_tech": 0.2010498046875, "__label__social_life": 0.00018775463104248047, "__label__software": 0.03668212890625, "__label__software_dev": 0.73291015625, "__label__sports_fitness": 0.0034942626953125, "__label__transportation": 0.0023632049560546875, "__label__travel": 0.00027942657470703125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15457, 0.0344]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15457, 0.26973]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15457, 0.60375]], "google_gemma-3-12b-it_contains_pii": [[0, 75, false], [75, 287, null], [287, 619, null], [619, 760, null], [760, 1065, null], [1065, 1217, null], [1217, 1890, null], [1890, 2422, null], [2422, 3073, null], [3073, 3243, null], [3243, 3520, null], [3520, 3862, null], [3862, 4337, null], [4337, 4528, null], [4528, 4600, null], [4600, 4660, null], [4660, 4721, null], [4721, 4721, null], [4721, 4999, null], [4999, 5317, null], [5317, 5619, null], [5619, 5996, null], [5996, 6396, null], [6396, 6857, null], [6857, 7074, null], [7074, 7365, null], [7365, 7970, null], [7970, 8636, null], [8636, 9317, null], [9317, 10440, null], [10440, 11641, null], [11641, 12765, null], [12765, 14153, null], [14153, 15457, null]], "google_gemma-3-12b-it_is_public_document": [[0, 75, true], [75, 287, null], [287, 619, null], [619, 760, null], [760, 1065, null], [1065, 1217, null], [1217, 1890, null], [1890, 2422, null], [2422, 3073, null], [3073, 3243, null], [3243, 3520, null], [3520, 3862, null], [3862, 4337, null], [4337, 4528, null], [4528, 4600, null], [4600, 4660, null], [4660, 4721, null], [4721, 4721, null], [4721, 4999, null], [4999, 5317, null], [5317, 5619, null], [5619, 5996, null], [5996, 6396, null], [6396, 6857, null], [6857, 7074, null], [7074, 7365, null], [7365, 7970, null], [7970, 8636, null], [8636, 9317, null], [9317, 10440, null], [10440, 11641, null], [11641, 12765, null], [12765, 14153, null], [14153, 15457, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 
15457, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15457, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15457, null]], "pdf_page_numbers": [[0, 75, 1], [75, 287, 2], [287, 619, 3], [619, 760, 4], [760, 1065, 5], [1065, 1217, 6], [1217, 1890, 7], [1890, 2422, 8], [2422, 3073, 9], [3073, 3243, 10], [3243, 3520, 11], [3520, 3862, 12], [3862, 4337, 13], [4337, 4528, 14], [4528, 4600, 15], [4600, 4660, 16], [4660, 4721, 17], [4721, 4721, 18], [4721, 4999, 19], [4999, 5317, 20], [5317, 5619, 21], [5619, 5996, 22], [5996, 6396, 23], [6396, 6857, 24], [6857, 7074, 25], [7074, 7365, 26], [7365, 7970, 27], [7970, 8636, 28], [8636, 9317, 29], [9317, 10440, 30], [10440, 11641, 31], [11641, 12765, 32], [12765, 14153, 33], [14153, 15457, 34]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15457, 0.01449]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
269cd4038897b5c32951247ee1933d802dc6876e
Efficiency Meter: Unleashing Real-Time Precision in Web-Based Time Tracking

Ms. Sterlin Minish T N, Department of Computer Science Engineering, Presidency University, Bangalore, India
Mr. Karthik R, Department of Computer Science Engineering, Presidency University, Bangalore, India
Ms. Rajashree C, Department of Computer Science Engineering, Presidency University, Bangalore, India
Mr. Ankith Sharma M, Department of Computer Science Engineering, Presidency University, Bangalore, India

Abstract: Introducing the Resource Time Tracker – a robust tool crafted to monitor, calculate, and evaluate the allocation of resource time within an organization, harnessing the capabilities of React JS. This versatile tool classifies and tracks time distribution for a range of tasks such as documentation, coding, SQL-related tasks, internet usage, and custom activities. With the aid of React JS, the recorded data is integrated and safeguarded in a centralized database, facilitating the production of advanced analytics. These analytics offer detailed insight into resource usage trends, pinpointing both areas of productivity and inefficiency. The information gleaned from these analytics allows for informed decision-making and empowers organizations to boost overall efficiency and productivity.

I. INTRODUCTION

In today's dynamic and competitive business environment, resource productivity is a critical factor in the success of any organization. Efficient utilization of time and resources is paramount for achieving organizational goals and maintaining a competitive edge. This research paper explores the development and implementation of a comprehensive tool designed to capture and calculate the time spent by resources across various activities. By leveraging modern web technologies such as HTML, CSS, JavaScript, React JS, and the Firebase Console Database, this tool aims to provide organizations with valuable insights into how their resources allocate their time, thus identifying areas for improvement and optimization.

As industries continue to evolve, the demand for effective time management and resource utilization becomes more pressing. Many organizations grapple with the challenge of understanding where their resources spend the majority of their time and, consequently, where potential inefficiencies lie. Traditional methods of time tracking often fall short in providing a holistic view of resource activities; as such, there is a growing need for a sophisticated solution that not only captures but also analyzes the time spent on various tasks.

The primary objective of this research is to develop a robust and user-friendly tool that allows organizations to systematically track the time spent by resources on different activities. By utilizing web technologies like HTML, CSS, JavaScript, React JS, and the Firebase Console Database, the tool aims to create a seamless and efficient platform for capturing real-time data on documentation, coding, SQL queries, internet usage, and other pertinent activities. Subsequently, the collected data will be stored in a centralized Firebase database for further analysis and reporting.

Understanding how resources allocate their time is essential for organizations aiming to enhance productivity and efficiency. This research addresses the critical need for a comprehensive time-tracking tool that goes beyond mere data collection, offering a sophisticated analytics component.
The insights generated from the tool's analytics will empower organizations to make informed decisions, identify areas of improvement, and implement strategies to optimize resource productivity. In the following sections, we delve into the technological aspects of the project, exploring the design and implementation of the time-tracking tool using HTML, CSS, JavaScript, React JS, and the Firebase Console Database. Additionally, we discuss the methodology employed in capturing and storing time data, the challenges encountered during development, and the potential impact of the tool on organizational productivity.

II. OBJECTIVES

2.1 Develop a Comprehensive Time Tracking Tool: The primary objective is to create an accessible and user-friendly interface for seamless time tracking. By utilizing HTML, CSS, and React JS, the tool aims to provide a smooth and intuitive experience for users, ensuring efficiency in capturing their time spent on various activities. The research emphasizes the need for intuitive features within the tool to capture time spent on diverse activities accurately. By focusing on user engagement, the tool aims to enhance the overall reliability of the captured data. To ensure real-time data storage and accessibility, the tool will be integrated with the Firebase Console Database, creating a robust back-end infrastructure for storing time data securely and efficiently.

2.2 Capture and Centralize Time Data: The research aims to develop a structured framework for capturing time data across a spectrum of tasks, providing granularity in understanding resource activities. This objective is crucial for obtaining a comprehensive overview of how time is allocated. Ensuring the security and reliability of stored data is paramount; this objective focuses on implementing robust mechanisms within Firebase to store time data securely, guaranteeing accessibility and integrity.

2.3 Calculate and Analyze Time Allocation: The research emphasizes the need for sophisticated algorithms to calculate the aggregated time spent by each resource. This objective forms the basis for generating accurate analytics, providing insights into individual resource allocation. By developing comprehensive analytics, the research aims to provide visual representations of resource time allocation patterns, enabling stakeholders to gain a holistic view of how time is distributed among various activities. The tool's analytics will be used to identify activities consuming significant time resources and potential bottlenecks, highlighting areas for improvement in organizational processes.

2.4 Enhance Productivity Insights: The primary focus is on translating analytics into actionable insights, ensuring that the information gathered is not only informative but also practical for enhancing resource productivity. The research aims to identify time losses within the organization by analyzing the analytics, setting the stage for proposing strategies to mitigate time losses and optimize resource utilization. A user-friendly dashboard is crucial for decision-makers to interpret analytics efficiently; this objective focuses on creating an accessible platform for stakeholders to make informed decisions based on the insights derived.

III. METHODOLOGY

3.1 Tool Development: Tool Development encompasses both front-end and back-end aspects. Front-end development utilizes HTML, CSS, JavaScript, and React JS to create an engaging user interface.
The choice of these technologies ensures a modern and responsive design. The back-end development utilizes the Firebase Console Database, a cloud-based solution, to securely store time data in real time. Security is prioritized through the implementation of secure authentication mechanisms.

3.1.1 Front-end Development: The front-end development phase utilizes industry-standard web technologies to craft an engaging and responsive user interface for the time tracking tool. HTML, CSS, and JavaScript contribute the foundational structure and styling, while React JS enhances user interactivity and experience. (A minimal sketch of the resulting capture flow is given at the end of Sect. 4.1.)

3.1.2 Back-end Development: The Firebase Console Database serves as the back-end infrastructure for the tool, ensuring secure and real-time storage of time data. This phase is integral to establishing a robust foundation for data management and retrieval.

3.1.3 Security Implementation: Security is paramount in ensuring data integrity and user privacy. This procedure involves the implementation of robust authentication mechanisms, allowing only authorized users to access and interact with the time tracking tool.

3.2 User Testing: User Testing is a crucial phase involving real users interacting with the tool. Usability testing assesses the effectiveness and user-friendliness of the tool, identifying areas for improvement. Continuous feedback collection ensures that user expectations shape the evolution of the tool, making it more intuitive and aligned with user needs.

3.2.1 Usability Testing: Usability testing involves real users interacting with the tool to identify any potential usability issues. This step is crucial for assessing how well the tool meets user expectations and refining its design for an optimal user experience.

3.2.2 Feedback Collection: Continuous user feedback is collected throughout the testing phase to refine and enhance the tool's features. This iterative process ensures that user expectations are met and that the tool evolves based on real-world usage.

3.3 Data Analysis: Data Analysis focuses on deriving meaningful insights from the stored time data. Algorithm development is key to extracting relevant patterns, while data visualization techniques are employed to present these insights in an easily interpretable format. This phase ensures that the analytics generated by the tool are actionable and can inform decision-making.

3.3.1 Algorithm Development: This procedure involves the development of algorithms tailored to analyze the time data stored in the Firebase database. These algorithms are designed to extract meaningful insights and patterns from the collected data.

3.3.2 Data Visualization: Data visualization is a critical aspect of presenting analytics in a visually comprehensible manner. This procedure involves the use of charts, graphs, and other visualization techniques to facilitate easy interpretation of the insights derived from the time tracking data.

IV. SYSTEM DESIGN & IMPLEMENTATION

4.1 User-Centric Interface Design: The user-centric interface design is anchored in principles that prioritize an intuitive and engaging experience. With a minimalist approach, the design focuses on clarity and simplicity, ensuring users can seamlessly navigate through different sections. Intuitive navigation, featuring a clean menu layout, enhances overall user satisfaction. The responsive design adapts dynamically to various devices, guaranteeing an optimal viewing and interaction experience.
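As a concrete illustration of the capture flow of Sects. 3.1.1–3.1.2 and the minimalist interface of Sect. 4.1, the following TypeScript/React sketch starts and stops a timer per activity category and persists each finished interval to Cloud Firestore. The component name, the "timeEntries" collection, the category list, and the placeholder Firebase config are illustrative assumptions, not details taken from the paper.

```tsx
// Minimal sketch of the front-end capture loop; not the paper's actual code.
import { useRef, useState } from "react";
import { initializeApp } from "firebase/app";
import { getFirestore, collection, addDoc, serverTimestamp } from "firebase/firestore";

const app = initializeApp({ projectId: "efficiency-meter-demo" }); // placeholder config
const db = getFirestore(app);

const CATEGORIES = ["Documentation", "Coding", "SQL", "Internet", "Custom"];

export function ActivityTimer({ userId }: { userId: string }) {
  const [active, setActive] = useState<string | null>(null);
  const startedAt = useRef<number>(0);

  // Start timing a category; stop and persist the previous one, if any.
  async function toggle(category: string) {
    if (active) {
      await addDoc(collection(db, "timeEntries"), {
        userId,
        category: active,
        seconds: Math.round((Date.now() - startedAt.current) / 1000),
        loggedAt: serverTimestamp(),
      });
    }
    if (active === category) {
      setActive(null); // same button pressed again: just stop
    } else {
      startedAt.current = Date.now();
      setActive(category);
    }
  }

  return (
    <div>
      {CATEGORIES.map((c) => (
        <button key={c} onClick={() => toggle(c)}>
          {active === c ? `Stop ${c}` : `Track ${c}`}
        </button>
      ))}
    </div>
  );
}
```

Each persisted document carries the user, the category, the elapsed seconds, and a server-side timestamp, which is the minimal shape the later analytics sections need.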
4.2 Efficient Database Implementation: The database implementation follows a normalized relational schema for efficient data storage and retrieval. Key components include the User Profiles Table, capturing essential user information, and the Time Entries Table, storing detailed records of each time entry. Integration with cloud-based storage solutions enhances data accessibility and scalability. Database replication mechanisms and encrypted data storage within the cloud ensure security, aligning with industry best practices.

4.3 Advanced Analytics Module: The advanced analytics module employs sophisticated algorithms to transform raw time-tracking data into meaningful insights. Time-series analysis identifies temporal patterns, while predictive modeling techniques forecast future resource trends. Customizable reporting and dashboards cater to individual users, project managers, and team leaders, providing insights into time management habits, productivity trends, and cognitive load dynamics. Integration with popular business intelligence tools like Tableau and Power BI enhances the utility of analytics outputs, offering a holistic view of resource management within the larger context of business operations.

4.4 Security Measures in System Design: Security is paramount in the design and implementation of the Resource Time Tracker. This sub-section delves into the measures taken to safeguard sensitive time-tracking information. The implementation of secure authentication mechanisms, encrypted data storage within the cloud, and adherence to industry best practices ensures protection against unauthorized access. The section highlights the significance of data security in maintaining the integrity of the time-tracking system.

4.5 Scalability and Fault Tolerance: This sub-topic explores the strategies employed to enhance the scalability and fault tolerance of the Resource Time Tracker. The integration with cloud-based storage solutions incorporates database replication mechanisms to maintain synchronized copies across multiple instances. This not only ensures data accessibility but also enhances fault tolerance, allowing continued operation in the event of a localized system failure. The section emphasizes the importance of scalability and fault tolerance for a robust and reliable time-tracking system.

DATABASE ARCHITECTURE

The Client Layer, located at the top of the diagram, plays a crucial role in the communication between the client and the server. Using the Client Layer, the client is able to send instructions and requests to the Server Layer. This can be done through the Command Prompt or by using the intuitive GUI screen, with valid MySQL commands and expressions. The Client Layer is responsible for ensuring that these commands and expressions are valid, and if they are, it displays the output on the screen. The Client Layer offers important services such as connection handling, authentication, and security. When a client sends a request to the server, the server accepts it and establishes a connection with the client. During this process, the client is assigned its own thread for the connection. This thread plays a crucial role in executing all the queries sent by the client.

REACT ARCHITECTURE

Directory/Folder Structure: The project directory structure in React.js plays a pivotal role in ensuring clear organization and easy maintenance.
The src directory is typically structured into subdirectories such as components for reusable UI components, containers for components interacting with Redux, redux for managing state with actions and reducers, services for API services and utility functions, styles for global styles and styling variables, translations for language files facilitating internationalization, and views for application pages. The App.js file serves as the main component orchestrating the overall app structure, while index.js acts as the entry point for rendering the application.

App Title and Favicon: In the public directory, the index.html file is where you can set the <title> of your application and include the favicon for better branding and identification.

Routes for Navigation: React applications often use the react-router-dom library for handling navigation and defining routes. Routes can be configured in the App.js file using components like BrowserRouter and Route. This allows for the creation of a single-page application with dynamic content based on the current route.

Redux + Thunk: To manage state and asynchronous actions, many React.js applications leverage Redux along with the Thunk middleware. The redux directory contains files for actions, reducers, and the store. Actions define the tasks to be performed, reducers handle state modifications, and the store holds the application state. Thunk middleware enables handling of asynchronous logic in Redux actions, allowing for more complex state management (a condensed sketch is given at the end of this section).

Material UI: Material UI is a popular React component library that follows the Material Design principles. It provides a set of pre-designed components that can be easily integrated into the application, ensuring a consistent and visually appealing user interface. Components like buttons, cards, and navigation elements from Material UI can be utilized across the application.

I18n (Internationalization): For applications catering to a diverse audience, internationalization (I18n) is crucial. The translations directory contains language files or modules that facilitate the localization of the application. Libraries like react-i18next or react-intl can be integrated to manage translations efficiently.

Database Connectivity: React.js, being a front-end library, typically does not directly handle database connectivity. Instead, it interacts with a back-end server or API, which, in turn, connects to the database. This separation ensures a more secure and scalable architecture.

Hosting, CI/CD, and .env Setup: Hosting React.js applications involves deploying the built static files to a web server or cloud platform. Continuous Integration and Continuous Deployment (CI/CD) pipelines automate the testing, building, and deployment processes. The .env file is used for environment-specific configurations, such as API endpoints or secret keys, providing flexibility across different deployment environments. Proper setup and integration of CI/CD pipelines streamlines the development workflow and ensures consistent application deployment.
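The Redux + Thunk layout described above can be condensed into one file. The sketch below is an illustrative assumption, not the paper's code: the action names, the TimeEntry shape, and the injected persistEntry callback are invented, and it assumes redux-thunk 2.x's default export.

```typescript
// Condensed sketch of the redux/ directory: one async "save entry" flow.
import { createStore, applyMiddleware, AnyAction } from "redux";
import thunk, { ThunkAction } from "redux-thunk"; // default export in redux-thunk <= 2.x

interface TimeEntry { category: string; seconds: number; }
interface State { saving: boolean; entries: TimeEntry[]; }

const initial: State = { saving: false, entries: [] };

// Reducer: pure state transitions driven by the dispatched actions.
function reducer(state = initial, action: AnyAction): State {
  switch (action.type) {
    case "entry/saveRequested": return { ...state, saving: true };
    case "entry/saveSucceeded":
      return { saving: false, entries: [...state.entries, action.payload] };
    default: return state;
  }
}

// Thunk: the asynchronous part (e.g. the Firebase write) lives here,
// injected as a callback so the action creator stays testable.
function saveEntry(
  entry: TimeEntry,
  persistEntry: (e: TimeEntry) => Promise<void>,
): ThunkAction<Promise<void>, State, unknown, AnyAction> {
  return async (dispatch) => {
    dispatch({ type: "entry/saveRequested" });
    await persistEntry(entry); // back-end call happens asynchronously
    dispatch({ type: "entry/saveSucceeded", payload: entry });
  };
}

export const store = createStore(reducer, applyMiddleware(thunk));
// Usage: store.dispatch(saveEntry({ category: "Coding", seconds: 900 }, myWriter));
```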
V. OUTCOMES

5.1 Enhanced Productivity: Users are poised to experience heightened productivity facilitated by the Time Tracking Web Tool. Through improved time management, clearer task visibility, and streamlined workflows, the tool becomes a catalyst for enhancing overall efficiency.

5.2 Accurate Project Tracking: The tool serves as a reliable means for tracking and analyzing project timelines, milestones, and resource allocation. This precision in project tracking contributes to more accurate project planning and execution, fostering successful project outcomes.

5.3 Effective Resource Management: Detailed insights into time allocation across tasks and projects empower organizations to optimize resource allocation. By ensuring teams focus on priority tasks, the Time Tracking Web Tool becomes instrumental in achieving effective resource management.

5.4 Real-Time Monitoring: Enabling real-time tracking of tasks and projects, the tool empowers stakeholders with prompt progress monitoring. This capability allows for the identification of potential delays, facilitating timely and informed decision-making.

5.5 Data-Driven Decision-Making: Comprehensive reports and analytics generated by the Time Tracking Web Tool provide decision-makers with data-driven insights. This aspect significantly contributes to more informed and strategic decision-making processes within organizations.

5.6 Efficient Resource Management: The granular insights into time allocation across various tasks and projects enable organizations to optimize resource allocation efficiently. This outcome ensures that teams are aligned with high-priority tasks, maximizing their collective impact.

5.7 Enhanced Collaboration: Facilitating improved collaboration among team members, the tool acts as a centralized platform for tracking and managing tasks. This centralized approach promotes transparency and teamwork, enhancing overall project collaboration.

5.8 Compliance and Accountability: The anticipated outcomes include improved compliance with project timelines and enhanced accountability. The tool aids in tracking individual contributions and adherence to project schedules, ensuring organizational and individual accountability.

5.9 Cost Optimization: Through detailed tracking and analysis, the Time Tracking Web Tool contributes to cost optimization by identifying areas where resources can be utilized more efficiently. This outcome supports organizations in reducing unnecessary expenses and maximizing cost-effectiveness.

5.10 Streamlined Invoicing and Billing: Finally, the Time Tracking Web Tool facilitates streamlined invoicing and billing processes by accurately recording billable hours and tasks. This outcome reduces errors, ensuring transparent financial transactions and contributing to overall financial efficiency.

VI. RESULTS AND DISCUSSIONS

6.1 Data Analysis:

6.1.1 Data Collection Methods: Utilized timestamps for accurate time entry, providing a detailed chronological record of resource behavior. User contributions through manual task entries, categorization, and project associations added context to the dataset.

6.1.2 Exploratory Data Analysis (EDA): Identified the proportion of time spent on documentation, coding, SQL queries, internet usage, and other categories (a sketch of this aggregation follows below). Discovered peak productivity periods, recurring task sequences, and deviations from established routines.
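The per-category proportions mentioned in Sect. 6.1.2 reduce to a simple fold over the collected entries. The following sketch uses the illustrative TimeEntry field names introduced earlier; the paper does not specify its actual aggregation code.

```typescript
// Sketch of the activity-based aggregation behind Sects. 6.1.2 and 6.2.1:
// fold raw entries into per-category totals, then normalize to proportions.
interface Entry { userId: string; category: string; seconds: number; }

function timeShareByCategory(entries: Entry[]): Map<string, number> {
  const totals = new Map<string, number>();
  let grandTotal = 0;
  for (const e of entries) {
    totals.set(e.category, (totals.get(e.category) ?? 0) + e.seconds);
    grandTotal += e.seconds;
  }
  // Convert absolute seconds into fractions of all tracked time.
  for (const [category, secs] of totals) {
    totals.set(category, grandTotal > 0 ? secs / grandTotal : 0);
  }
  return totals;
}

// Example: entries of 5400 s "Coding" and 1800 s "SQL"
// yield Coding -> 0.75 and SQL -> 0.25.
```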
6.2 Insights into Resource Time Allocation:

6.2.1 Activity-Based Analysis: Identified activities consuming significant work hours, aiding task prioritization. Analyzed daily fluctuations, weekly variations, and long-term productivity trends.

6.2.2 Project-Based Analysis: Evaluated efficiency by examining task completion rates. Assessed resource time allocation against project schedules for timely progress.

6.3 Comparison with Initial Objectives:

6.3.1 Evaluation of System Design and Functionality: Considered user feedback and adoption metrics to evaluate the user interface. Ensured accuracy by comparing manually entered data with real-time tracking results.

6.3.2 Achievement of Analytics Objectives: Evaluated the system's ability to identify temporal patterns through time-series analysis. Examined the accuracy of predictive modeling techniques in forecasting future resource trends.

6.3.3 Impact on Decision-Making Precision and Agility: Gathered feedback on the utility of customizable reports and dashboards for informed decision-making. Assessed the impact of business intelligence integration on a holistic view of resource management.

6.4 Future Enhancements and Recommendations:

6.4.1 Identified Areas for Improvement: Recommendations for additional training or support features based on user feedback. Suggestions for refining analytics algorithms based on their effectiveness.

6.4.2 Future Development Roadmap: Considerations for integrating with productivity tools and collaboration platforms for enhanced utility. Recommended continuous feedback loops for iterative development and adaptation to organizational needs.

EFFICIENCY METER

Easy To Monitor Your Task: Improve your efficiency with our straightforward Time Tracking Tool, created to help you manage your time effectively. Stay organized, meet deadlines, and enhance your workflow with our easy-to-use tool, ensuring smooth time management.

Key Features:
- Straightforward time tracking for tasks and projects.
- Create uncomplicated reports for a brief overview.

How It Works:
- **Track Time**: Use the user-friendly interface to log your work hours.
- **Set Goals**: Define tasks and deadlines for your projects.
- **Review Reports**: Get a quick overview through straightforward reports.

VII. CONCLUSION

The Resource Time Tracker, through exploration and implementation, has yielded profound insights into resource time management. Key findings include a detailed understanding of time allocation, identification of the most time-consuming tasks, temporal productivity patterns, and the impact of project-based time allocation. Evaluation confirms the system's efficacy in achieving its objectives, with positive user satisfaction metrics affirming usability and decision-making impact. The system's implementation carries substantial implications, refining resource allocation strategies, optimizing project workflows, and enhancing overall resource efficiency. Continuous user feedback is pivotal for ongoing success, necessitating future adaptations based on user suggestions and evolving needs. A forward-looking development roadmap aligns with identified improvement areas, catering to organizational needs and technological advancements.

Ethical considerations are paramount as the Resource Time Tracker deals with sensitive data. Future work includes implementing additional privacy measures, transparent communication, and compliance with data protection regulations. Ongoing collaboration with stakeholders remains integral, ensuring alignment with organizational goals and priorities.

In conclusion, the Resource Time Tracker signifies a significant advancement in resource time management. It optimizes resource allocation, enhances project efficiency, and fosters a culture of data-driven decision-making. Moving forward, the integration of user feedback, adherence to ethical considerations, and strategic technological enhancements will sustain its relevance and effectiveness, marking a milestone in advancing resource management practices for long-term organizational success.
{"Source-Url": "https://ijcrt.org/papers/IJCRT2401259.pdf", "len_cl100k_base": 4191, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 27544, "total-output-tokens": 5415, "length": "2e12", "weborganizer": {"__label__adult": 0.00027298927307128906, "__label__art_design": 0.00038814544677734375, "__label__crime_law": 0.0002346038818359375, "__label__education_jobs": 0.0022907257080078125, "__label__entertainment": 6.61611557006836e-05, "__label__fashion_beauty": 0.0001533031463623047, "__label__finance_business": 0.0014028549194335938, "__label__food_dining": 0.00031185150146484375, "__label__games": 0.0004813671112060547, "__label__hardware": 0.0008397102355957031, "__label__health": 0.0003483295440673828, "__label__history": 0.00020301342010498047, "__label__home_hobbies": 0.00011092424392700197, "__label__industrial": 0.0004181861877441406, "__label__literature": 0.00017082691192626953, "__label__politics": 0.00012493133544921875, "__label__religion": 0.0002135038375854492, "__label__science_tech": 0.0159454345703125, "__label__social_life": 0.00010311603546142578, "__label__software": 0.01538848876953125, "__label__software_dev": 0.95947265625, "__label__sports_fitness": 0.00024390220642089844, "__label__transportation": 0.0004270076751708984, "__label__travel": 0.0001990795135498047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27029, 0.02605]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27029, 0.28613]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27029, 0.8752]], "google_gemma-3-12b-it_contains_pii": [[0, 3273, false], [3273, 7496, null], [7496, 10676, null], [10676, 12793, null], [12793, 14454, null], [14454, 16852, null], [16852, 19920, null], [19920, 22502, null], [22502, 22745, null], [22745, 23380, null], [23380, 24432, null], [24432, 27029, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3273, true], [3273, 7496, null], [7496, 10676, null], [10676, 12793, null], [12793, 14454, null], [14454, 16852, null], [16852, 19920, null], [19920, 22502, null], [22502, 22745, null], [22745, 23380, null], [23380, 24432, null], [24432, 27029, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27029, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27029, null]], "pdf_page_numbers": [[0, 3273, 1], [3273, 7496, 2], [7496, 10676, 3], [10676, 12793, 4], [12793, 14454, 5], [14454, 16852, 6], [16852, 19920, 7], [19920, 22502, 8], [22502, 22745, 9], [22745, 23380, 10], [23380, 24432, 11], [24432, 27029, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27029, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
fe180b68aec2466b1b480e4b9f09c81af70dce8a
Refactoring of Legacy Software using Model Learning and Equivalence Checking: an Industrial Experience Report

Mathijs Schuts (1), Jozef Hooman (2,3) and Frits Vaandrager (3)

(1) Philips, Best, The Netherlands, mathijs.schuts@philips.com
(2) Embedded Systems Innovation (ESI) by TNO, Eindhoven, The Netherlands, jozef.hooman@tno.nl
(3) Department of Software Science, Radboud University, Nijmegen, The Netherlands, f.vaandrager@cs.ru.nl

Abstract. Many companies struggle with large amounts of legacy software that is difficult to maintain and to extend. Refactoring legacy code typically requires large efforts and introduces serious risks because often crucial business assets are hidden in legacy components. We investigate the support of formal techniques for the rejuvenation of legacy embedded software, concentrating on control components. Model learning and equivalence checking are used to improve a new implementation of a legacy control component. Model learning is applied to both the old and the new implementation. The resulting models are compared using an equivalence check of a model checker. We report about our experiences with this approach at Philips. By gradually increasing the set of input stimuli, we obtained implementations of a power control service for which the learned behaviour is equivalent.

1 Introduction

The high-tech industry creates complex cyber-physical systems. The architectures for these systems evolved over many decades through a constant stream of product innovations. This usually leads to so-called legacy components that are hard to maintain and to extend [24,25]. Typically, these components are based on obsolete technologies, frameworks, and tools. Documentation might not be available or outdated, and the original developers are often no longer available. In addition, the existing regression test set for validating the component will be very limited in most cases. Given these characteristics, innovations that require changes of legacy components are risky. Many legacy components implicitly incorporate important business knowledge, hence failures will lead to substantial losses. To avoid a risky greenfield approach, starting from scratch, several techniques are being developed to extract the crucial business information hidden in legacy components in a (semi-)automated way and to use this information to develop a refactored version of the component.

There are several approaches to extract this hidden information. Static analysis methods concentrate on the analysis and transformation of source code. For instance, the commercial Design Maintenance System (DMS, www.semanticdesigns.com) has been used in several industrial projects to re-engineer code. DMS is based on abstract syntax tree (AST) representations of programs. Whereas static analysis techniques focus on the internal structure of components, learning techniques aim at capturing the externally visible behaviour of a component. Process mining extracts business logic based on event logs.
In [17], a combination of static analysis and process mining has been applied to a financial management system, identifying tasks, actors, and their roles. Process mining can be seen as a passive way of learning, which requires an instrumentation of the code to obtain event logs. Active learning techniques [4,22] do not require code instrumentation, but need an adapter to interact with a running system. In this approach, a learning algorithm interacts with a software component by sending inputs and observing the resulting outputs, and uses this information to construct a state machine model. Active learning has, for instance, been successfully applied to learn models of (and to find mistakes in) implementations of protocols such as TCP [12] and TLS [8], to establish correctness of protocol implementations relative to a given reference implementation [2], and to generate models of a telephone switch [18] and a printer controller [21]. Learning-based testing [11] combines active learning and model checking. In this approach, which requires the presence of a formal specification of the system, model checking is used to guide the learning process. In [11] three industrial applications of learning-based testing are described, from the web, automotive and finance domains.

In this paper, we report about a novel industrial application of active learning to gain confidence in a refactored legacy component using formal techniques. In the absence of any formal specification of the legacy system, the use of model checking and learning-based testing was not possible. Instead we decided to use a different combination of tools, similar to the approach of [13,2]. The model learning tool LearnLib [15] was used to learn Mealy machine models of the legacy and the refactored implementation. These models were then compared to check if the two implementations are equivalent. Since the manual comparison of large models is not feasible, we used an equivalence checker from the mCRL2 toolset [7] for this task. In brief, our approach can be described as follows (see also Figure 1):

1. Implementation A (the legacy component) is explored by a model learner. The output of the model learner is converted to an input format for the equivalence checker, model MA.
2. Implementation B (the refactored component) is explored by a model learner. The output of the model learner is converted to an input format for the equivalence checker, model MB.
3. The two models are checked by the equivalence checker. The result of the equivalence checker can be:
   - The two models are equivalent. In this case we are done.
   - The two models are not equivalent and a counterexample is provided: a sequence of inputs $\sigma$ for which the outputs produced by the two models are different. In this case we proceed to step 4.
4. Because models MA and MB have been obtained through a finite number of tests, we can never be sure that they correctly describe implementations A and B, respectively. Therefore, if we find a counterexample $\sigma$ for the equivalence of models MA and MB, we first check whether implementation A and model MA behave the same for $\sigma$, and whether implementation B and model MB behave the same for $\sigma$. If there is a discrepancy between a model and the corresponding implementation, this means that the model is incorrect and we ask the model learner to construct a new model based on counterexample $\sigma$, that is, we go back to step 1 or 2. Otherwise, counterexample $\sigma$ exhibits a difference between the two implementations. In this case we need to change at least one of the implementations, depending on which output triggered in response to input $\sigma$ is considered unsatisfactory behaviour. Note that also the legacy component A might be changed, because the counterexample might indicate an unsatisfactory behaviour of A. After the change, a corrected implementation needs to be learned again, i.e., we go back to step 1 or 2.
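The model-versus-implementation check in step 4 amounts to replaying $\sigma$ on a learned model and comparing the predicted outputs with those observed on the running system. The following TypeScript sketch shows that replay under the assumption of a simple map-based Mealy machine encoding; it is an illustration, not LearnLib's own representation.

```typescript
// Minimal Mealy machine replay for validating a counterexample (step 4).
interface Mealy {
  initial: string;
  // step[state][input] = { output, next }
  step: Record<string, Record<string, { output: string; next: string }>>;
}

function replay(model: Mealy, inputs: string[]): string[] {
  let state = model.initial;
  const outputs: string[] = [];
  for (const input of inputs) {
    const t = model.step[state][input]; // learned machines are input-complete
    outputs.push(t.output);
    state = t.next;
  }
  return outputs;
}

// If replay(modelA, sigma) differs from the outputs observed on implementation
// A itself, model A is wrong and sigma is fed back to the learner; otherwise
// sigma witnesses a real difference between implementations A and B.
```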
Since the learning of an implementation can take a substantial amount of time, we start with a limited subset of input stimuli for the model learner and increase the number of stimuli once the implementations are equivalent for a smaller number of stimuli. Hence, the approach needs to be executed iteratively.

We report about our experiences with the described approach on a real development project at Philips. The project concerns the introduction of a new hardware component, the Power Control Component (PCC). A PCC is used to start up and shut down an interventional radiology system. All computers in the system have a software component, the Power Control Service (PCS), which communicates with the PCC over an internal control network during the execution of start-up and shutdown scenarios. To deal with the new hardware of the PCC, which has a different interface, a new implementation of the PCS is needed. Since different configurations have to be supported, with old and new PCC hardware, the old and new PCS software should have exactly the same externally visible behaviour.

The PCS is described in Sect. 2 to the extent needed for understanding this paper. Section 3 describes the use of model learning and model checking to compare the two PCS implementations for the old and the new PCC. The results of testing the two PCS implementations are described in Sect. 4. Section 5 discusses the scalability of our approach. Concluding remarks can be found in Sect. 6.

2 The Industrial Development Project

2.1 Power Control Service

For starting up and shutting down an interventional radiology system multiple components are involved. The Power Control Component (PCC) is a hardware component that gets the mains power input from the hospital. It conditions the mains power, switches the power taps that are connected to the system's internal components, and acts as the master of the system when executing start-up and shutdown scenarios. All computers in the system are powered by the PCC and are controlled by the PCC via a Power Control Service (PCS) that connects to the PCC via the system's internal control network. Figure 2 depicts the PCS in its context. The PCS is a software component that is used to start and stop subsystems via their Session Managers (SMs). In addition to the start-up and shutdown scenarios executed by the PCC, the PCS is also involved in service scenarios such as upgrading the subsystem's software.

Fig. 2. Context of the Power Control Service

In a typical shutdown scenario, the user presses the off button and the shutdown scenario is initiated by the PCC. The PCC sends an event to all PCSs. The PCS stops the SMs. Once the SMs are stopped, the PCS triggers the Operating System (OS) to shut down. In the end, the OS will stop the PCS. Another scenario is to switch from closed profile to open profile when the system is in the operational state. In closed profile only the clinical application can be executed by the user of the system. Open profile is used during development for testing purposes.
In this scenario, the service application triggers the PCS to switch to open profile. The PCS will then stop the SMs. When the PCS is ready, the service application reboots the PC. After the reboot, the OS starts up the PCS and the PCS starts a subset of the SMs based on the SMs' capabilities. In open profile, the service application can also start the clinical application by providing the PCS with the OpenProfileStartApplication trigger.

2.2 Refactoring

The PCS implementation for the old PCC is event-based. An event is handled differently based on the value of global flags in the source code. Hence, all state behaviour is implicitly coded by these flags, which makes the implementation unmaintainable. The development of a new implementation for supporting the new PCC is an opportunity to create a maintainable implementation. The new implementation makes the state behaviour explicit by a manually crafted state machine. To be able to support both the old and the new PCC, the PCS software has been refactored such that the common behaviour for both PCCs is extracted. Figure 3(a) depicts the PCS before refactoring. The Host implements the IHost interface that is used by the service application. The implementation of the PCS after refactoring is shown in Fig. 3(b).

Fig. 3. Class diagrams of the PCS design, (a) before and (b) after refactoring

The PcsCommon class implements the ISessionManager interface to control the SMs. The OldPccSupport class contains the legacy implementation for the old PCC, whereas the NewPccSupport class deals with the new PCC. Both classes inherit from the PcsCommon class to achieve the same internal interface for the Host. Depending on the configuration, the Host creates an instance of either the OldPccSupport or the NewPccSupport class. The PCS as depicted in Fig. 3(b) is written in C++ and consists of a total of 3365 Lines Of Code (LOC): Host has 741 LOC, PcsCommon has 376 LOC, OldPccSupport has 911 LOC, and NewPccSupport has 1337 LOC. The unit test cases were adapted to include tests for the new implementation. It was known that the unit test set is far from complete. Hence, we investigated the possibility to use model learning to get more confidence in the equivalence of the old and new implementations.

3 Application of the Learning Approach

To learn models of our implementations, we used the LearnLib tool [19], see http://learnlib.de/. For a detailed introduction to LearnLib we refer to [22]. In our application we used the development version 1.0-SNAPSHOT of LearnLib and its MealyLearner, which is connected to the System Under Learning (SUL) by means of an adapter and a TCP/IP connection.

3.1 Design of the learning environment

Figure 4 depicts the design used for learning the PCS component. Creating an initial version of the adapter took about 8 hours, because the test primitives of the existing unit test environment could be re-used.

Fig. 4. Design of the learning environment

With this design, the PCS can be learned for both the old and the new PCC. The adapter automatically changes the configuration of the PCS such that the PCS knows if it needs to instantiate the old or the new implementation. Depending on the old or new PCC, the adapter instantiates a different PCC stub.

3.2 Learned output

The Mealy machine that is the result of a LearnLib session is represented as a "dot" file, which can be visualized using Graphviz (www.graphviz.org). A fragment of a model is shown in Table 1.
```plaintext
digraph g {
  start0 [label="" shape="none"];
  s0 [shape="circle" label="0"];
  s1 [shape="circle" label="1"];
  s2 [shape="circle" label="2"];
  s3 [shape="circle" label="3"];
  s4 [shape="circle" label="4"];
  s5 [shape="circle" label="5"];
  s6 [shape="circle" label="6"];
  s7 [shape="circle" label="7"];
  s8 [shape="circle" label="8"];
  s0 -> s1 [label="PCC(StateSystemOn)/PCS(Running)/SM1(Running)/SM2(Running)"];
  s0 -> s2 [label="PCC(StateSystemOff)/PCS(Running)/SM1(Stopped)/SM2(Stopped)/Dev(Shutdown)"];
  s1 -> s2 [label="PCC(ButtonSystemOff)/PCS(Running)/SM1(Stopped)/SM2(Stopped)/Dev(Shutdown)"];
  s1 -> s3 [label="Host(goToOpenProfile)/PCS(Stopped)/SM1(Stopped)/SM2(Stopped)/Dev(OpenProfile)"];
  ...
  start0 -> s0;
}
```

Table 1. Fragment of a learned dot-file

3.3 Checking Equivalence

For models with more than five states it is difficult to compare the graphical output of LearnLib for different implementations. Therefore, an equivalence checker is used to perform the comparison. In our case, we used the tool support for mCRL2 (micro Common Representation Language 2), a specification language that can be used for specifying system behaviour. The mCRL2 language comes with a rich set of supporting programs for analysing the behaviour of a modelled system (www.mcrl2.org). Once the implementation is learned, a small script is used to convert the output from LearnLib to an mCRL2 model; a sketch of such a conversion is given at the end of this section. Basically, the learned Mealy machine is represented as an mCRL2 process Spec(s: States). As an example, the two transitions of state s0 in the dot-file are translated into the following process algebra construction:

    (s == s0) -> (PCC(StateSystemOn) . PCS(Running) . SM1(Running) . SM2(Running) . Spec(s1))
               + (PCC(StateSystemOff) . PCS(Running) . SM1(Stopped) . SM2(Stopped) . Dev(Shutdown) . Spec(s2))

A part of the result of translating the model of Table 1 to mCRL2 is shown in Table 2:

    sort States = struct s0 | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8;
         OsStim = struct StartPcs | StopPcs;

Table 2. Fragment of the mCRL2 translation of the model of Table 1

Given two (deterministic) Mealy machines, the labelled transition systems for the associated mCRL2 processes are also deterministic. Since the labelled transition systems also do not contain any $\tau$-transitions, trace equivalence and bisimulation equivalence coincide, and there is no difference between weak and strong equivalences [10]. Thus, two Mealy machines are equivalent iff the associated mCRL2 processes are (strong) trace equivalent, and the mCRL2 processes are (strong) trace equivalent iff they are (strong) bisimulation equivalent.

3.4 Investigating Counterexamples

When the equivalence check indicates that the two models are not equivalent, the mCRL2 tool provides a counterexample. To investigate counterexamples, we created a program that reads a produced counterexample and executes it on the implementations. In the design depicted in Fig. 4, the LearnLib component has been replaced by the counterexample program. As before, switching between the two implementations can be done by instructing the adapter. In this way, the standard logging facilities of program execution are exploited to study the counterexample.
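As referenced in Sect. 3.3, the conversion from LearnLib's dot output to an mCRL2 process is mechanical. The TypeScript sketch below illustrates the idea under the assumption that the dot file has already been parsed into (source, label, target) triples, with label segments separated by "/" as in Table 1; the paper does not show its actual script, so all names here are illustrative.

```typescript
// Sketch of a dot-to-mCRL2 converter: one summand per learned transition.
interface Transition { from: string; label: string; to: string; }

function toMcrl2(states: string[], transitions: Transition[]): string {
  const sort = `sort States = struct ${states.join(" | ")};`;
  const summands = transitions.map((t) => {
    // "PCC(StateSystemOn)/PCS(Running)/..." becomes a sequence of actions.
    const actions = t.label.split("/").join(" . ");
    return `  (s == ${t.from}) -> (${actions} . Spec(${t.to}))`;
  });
  return [sort, "proc Spec(s: States) =", summands.join(" +\n") + ";"].join("\n");
}

// toMcrl2(["s0","s1"], [{ from: "s0", to: "s1",
//   label: "PCC(StateSystemOn)/PCS(Running)/SM1(Running)/SM2(Running)" }])
// yields a summand like:
//   (s == s0) -> (PCC(StateSystemOn) . PCS(Running) . SM1(Running) . SM2(Running) . Spec(s1))
```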
4 Results of Learning the Implementations of the PCS

In this section we describe the results of applying the approach of Sect. 3 to the implementations of the PCS component.

4.1 Iteration 1

The first iteration was used to realize the learning environment as described in Sect. 3.1. An adapter was created to interface between the PCS and LearnLib. Because the communication between the PCS and the adapter is asynchronous, the adapter has to wait some time before the state of the PCS can be examined. In this iteration we performed a few trial runs to tweak the wait time needed before taking a sample. In addition, the first iteration was used to get an impression of how long learning the PCS takes with different numbers of stimuli. The necessary waiting time of 10 seconds after a stimulus for learning the PCS is quite long, and this greatly influenced the time needed for learning models.

4.2 Iteration 2

After a first analysis of the time needed for model learning in iteration 1, we decided to start learning with 9 stimuli. These 9 stimuli were all related to basic start-up/shutdown and service scenarios. We learned the PCS implementation for the old PCC and the PCS implementation for the new PCC. The results are presented in Table 3. The table has a column for the number of stimuli, for the number of states and transitions found, and for the time it took for LearnLib to learn the implementations.

<table>
<thead>
<tr><th>Implementation</th><th>Stimuli</th><th>States</th><th>Transitions</th><th>Time (in seconds)</th></tr>
</thead>
<tbody>
<tr><td>PCS implementation for old PCC</td><td>9</td><td>8</td><td>43</td><td></td></tr>
<tr><td>PCS implementation for new PCC</td><td>9</td><td>3</td><td>8</td><td></td></tr>
</tbody>
</table>

Table 3. Results learning PCS with 9 stimuli

Note that learning a model for the old implementation took 9 hours. (This excludes the time used to test the correctness of the final model.) As described in Sect. 3.3, the learned models were converted to mCRL2 processes. Next, the mCRL2 tools found a counterexample starting with:

    PCC(StateSystemOn), PCS(Running), SM1(Running), SM2(Running), ...

We investigated this counterexample and found an issue in the PCS implementation for the new PCC. The new implementation did not make a distinction between the SystemOff event and the ServiceStop and ServiceShutdown events. Note that before performing the learning experiment the new and old implementations were checked using the existing regression test cases. This issue was not found by the existing unit test cases.

4.3 Iteration 3

In the third iteration, the PCS implementation for the new PCC was re-learned after the fix had been applied. Table 4 describes the results.
<table>
<thead>
<tr><th>Implementation</th><th>Stimuli</th><th>States</th><th>Transitions</th><th>Time (in seconds)</th></tr>
</thead>
<tbody>
<tr><td>PCS implementation for old PCC</td><td>9</td><td>8</td><td>43</td><td></td></tr>
<tr><td>PCS implementation for new PCC</td><td>9</td><td>7</td><td>36</td><td></td></tr>
</tbody>
</table>

Table 4. Results learning PCS with 9 stimuli

An equivalence check with the mCRL2 tools resulted in a new counterexample of 23 commands:

    PCC(StateSystemOn), PCS(Running), SM1(Running), SM2(Running),
    Host(goToOpenProfile), PCS(Stopped), SM1(Stopped), SM2(Stopped),
    Dev(OpenProfile), OS(StartPcs), PCS(Running), SM1(Stopped),
    SM2(Stopped), Dev(OpenProfile), Host(openProfileStopApplication),
    PCS(Running), SM1(Stopped), SM2(Running), Dev(OpenProfile),
    PCC(ButtonSystemOff), PCS(Running), SM1(Stopped), SM2(Running).

When we executed this counterexample on the PCS implementation for the old PCC, we found the following statement in the logging of the PCS: "Off button not handled because of PCS state (Stopping)". A quick search in the source code revealed that the stopSessionManagers method prints this statement when the Stopping flag is active. This is clearly wrong, because this flag was set by the previous stimulus, i.e., the openProfileStopApplication stimulus. The PCS implementation for the old PCC was adapted to reset the Stopping flag after handling the openProfileStopApplication stimulus.

4.4 Iteration 4

In the fourth iteration, the PCS implementation for the old PCC was re-learned after the fix had been applied. Table 5 describes the results after re-learning. Note that, after correcting the error, learning the model for the old implementation only takes slightly more than one hour. When checking the equivalence, the mCRL2 tool reports that the two implementations are (strong) trace equivalent for these 9 stimuli.

<table>
<thead>
<tr><th>Implementation</th><th>Stimuli</th><th>States</th><th>Transitions</th><th>Time (in seconds)</th></tr>
</thead>
<tbody>
<tr><td>PCS implementation for old PCC</td><td>9</td><td>7</td><td>36</td><td></td></tr>
<tr><td>PCS implementation for new PCC</td><td>9</td><td>7</td><td>36</td><td></td></tr>
</tbody>
</table>

Table 5. Results learning PCS with 9 stimuli

4.5 Iteration 5

As a next step we re-learned the implementations for the complete set of 12 stimuli; the results are shown in Table 6. Note that learning the new implementation takes approximately 3.5 hours. The mCRL2 tools report that the two obtained models with 12 stimuli are trace equivalent and bisimulation equivalent.

<table>
<thead>
<tr><th>Implementation</th><th>Stimuli</th><th>States</th><th>Transitions</th><th>Time (in seconds)</th></tr>
</thead>
<tbody>
<tr><td>PCS implementation for old PCC</td><td>12</td><td>9</td><td>65</td><td></td></tr>
<tr><td>PCS implementation for new PCC</td><td>12</td><td>9</td><td>65</td><td></td></tr>
</tbody>
</table>

Table 6. Results learning PCS with 12 stimuli

5 Scalability of the Learning Approach

Using model learning we found issues in both a legacy software component and in a refactored implementation. After fixing these issues, model learning helped to increase confidence that the old and the new implementations behave the same. Although this is a genuine industrial case study, the learned Mealy machine models are very small. Nevertheless, learning these tiny models already took up to 9 hours.
For applying these techniques in industry there is an obvious need to make model learning more efficient in terms of the time needed to explore a system under learning. Clearly, our approach has been highly effective for the PCC case study. But will it scale? Below we present an overview of some recent results that make us optimistic that our approach can indeed be scaled to a large class of more complex legacy systems.

5.1 Faster implementations

The main reason why model learning takes so long in the PCC case study is the long waiting time in between input events. As a result, running a single test sequence (a.k.a. membership query) took on average about 10 seconds. One of the authors was involved in another industrial case study in which a model for a printer controller was learned with 3410 states and 77 stimuli [21]. Even though more than 60 million test sequences were needed to learn it, the task could be completed within 9 hours because on average running a single test sequence took only 0.0005 seconds. For most software components the waiting times can be much smaller than for the PCS component studied in this paper. In addition, if the waiting times are too long then sometimes it may be possible to modify the components (just for the purpose of the model learning) and reduce the response times. For our PCC case study such an approach is difficult. The PCS controls the Session Managers (SMs), which are Windows services. After an input event we want to observe the resulting state change of the SMs, but due to the unreliable timing of the OS we need to wait quite long. In order to reduce waiting times we would need to speed up Windows.

5.2 Faster learning and testing algorithms

There has been much progress recently in developing new algorithms for automata learning. In particular, the new TTT learning algorithm that has been introduced by Isberner [16] is much faster than the variant of Angluin's $L^*$ algorithm [4] that we used in our experiments. Since the models for the PCS components are so simple, the $L^*$ algorithm does not need any intermediate hypothesis: the first model that $L^*$ learns is always correct (that is, extensive testing did not reveal any counterexample). The TTT algorithm typically generates many more intermediate hypotheses than $L^*$. This means that it becomes more important which testing algorithm is being used. But also in the area of conformance testing there has been much progress recently [9,21]. Figure 5 displays the results of some experiments that we did using an implementation of the TTT algorithm that has become available very recently in LearnLib, in combination with a range of testing algorithms from [9,21]. As one can see, irrespective of the test method that is used, the TTT algorithm reduces the total number of input events needed to learn the final PCS model by a factor of about 3.

Fig. 5. Experiments with the TTT algorithm for the final PCS implementation for the new PCC. The test methods used (W, Wp, hybrid adaptive distinguishing sequences, hybrid UIOv) were all randomised. For each test method 100 runs were performed; in each case 95% of the runs were in the shaded area. The dotted lines give the median run for a given test method.

5.3 Using parallelization and checkpointing

Learning and testing can be easily parallelized by running multiple instances of the system under learning (in our case the PCS implementation) at the same time. Henrix [14] reports on experiments in which doubling the number of parallel instances nearly doubles the execution speed (on average by a factor 1.83). Another technique that may speed up learning is to save and restore software states of the system under learning (checkpointing). The benefit is that if the learner wants to explore different outgoing transitions from a saved state $q$, it only needs to restore $q$, which usually is much faster than resetting the system and bringing it back to $q$ by an appropriate sequence of inputs. Henrix [14] reports on experiments in which checkpointing with DMTCP [5] speeds up the learning process by a factor of about 1.7.

5.4 Using abstraction and restriction

The number of test/membership queries of most learning algorithms grows linearly with the number of inputs. However, these algorithms usually assume an oracle that provides counterexamples for incorrect hypothesis models. Such an oracle is typically implemented using a conformance testing algorithm. In practice, conformance testing often becomes a bottleneck when the number of inputs gets larger. Thus we seek methods that help us to reduce the number of inputs. To get confidence that two implementations with a large number of stimuli exhibit the same behaviour, a simple but practical approach is to apply model learning for multiple smaller subsets of stimuli. This will significantly reduce the learning complexity, also because the set of reachable states will typically be smaller for a restricted number of stimuli. Models learned for a subset of the inputs may then be used to generate counterexamples while learning models for larger subsets of inputs. Smeenk [20] reports on some successful experiments in which this heuristic was used. A different approach, which has been applied successfully in many case studies, is to apply abstraction techniques that replace multiple concrete inputs by a single abstract input. One may, for instance, forget certain parameters of an input event, or only record the sign of an integer parameter (illustrated in the sketch below). We refer to [1,10] for recent overviews of these techniques.
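The sign-of-an-integer example of Sect. 5.4 can be made concrete with a few lines. The event shape and names in this TypeScript sketch are invented for illustration; the point is only that many concrete inputs collapse onto one abstract input, shrinking the alphabet the learner has to explore.

```typescript
// Toy input abstraction: keep only the sign of an integer parameter.
interface ConcreteInput { name: string; value: number; }

function abstractInput(i: ConcreteInput): string {
  // Forget the magnitude; record only the sign.
  const sign = i.value > 0 ? "pos" : i.value < 0 ? "neg" : "zero";
  return `${i.name}(${sign})`;
}

// abstractInput({ name: "SetLevel", value: 42 }) -> "SetLevel(pos)"
// abstractInput({ name: "SetLevel", value: -7 }) -> "SetLevel(neg)"
```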
5.4 Using abstraction and restriction

The number of test/membership queries of most learning algorithms grows linearly with the number of inputs. However, these algorithms usually assume an oracle that provides counterexamples for incorrect hypothesis models. Such an oracle is typically implemented using a conformance testing algorithm, and in practice conformance testing often becomes a bottleneck when the number of inputs gets larger. Thus we seek methods that help us to reduce the number of inputs.

To gain confidence that two implementations with a large number of stimuli exhibit the same behaviour, a simple but practical approach is to apply model learning to multiple smaller subsets of the stimuli. This significantly reduces the learning complexity, also because the set of reachable states will typically be smaller for a restricted number of stimuli. Models learned for a subset of the inputs may then be used to generate counterexamples while learning models for larger subsets of inputs. Smeenk [20] reports on some successful experiments in which this heuristic was used. A different approach, which has been applied successfully in many case studies, is to apply abstraction techniques that replace multiple concrete inputs by a single abstract input. One may, for instance, forget certain parameters of an input event, or only record the sign of an integer parameter. We refer to [1,10] for recent overviews of these techniques.

6 Concluding Remarks

We presented an approach to gain confidence that a refactored software component has external control behaviour equivalent to that of its non-refactored legacy implementation. From both the refactored implementation and its legacy counterpart, a model is obtained using model learning; the two learned models are then compared using an equivalence checker. The implementations are learned and checked iteratively with increasing sets of stimuli to handle scalability. By using this approach we found issues in both the refactored and the legacy implementation at an early stage of the development, before the component was integrated. In this way, we avoided costly rework in a later phase of the development. As future work, we intend to apply our approach to other software components that will be refactored, including a substantially larger component.

Acknowledgements

We are most grateful to Joshua Moerman for helping with the experiments with the TTT algorithm. We also thank Petra van den Bos for careful proofreading of an earlier version. This research was supported by STW project 11763 (ITALIA) and the Dutch national program COMMIT.

References
{"Source-Url": "http://repository.ubn.ru.nl/bitstream/handle/2066/159592/159592.pdf?sequence=1", "len_cl100k_base": 6695, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 34177, "total-output-tokens": 8832, "length": "2e12", "weborganizer": {"__label__adult": 0.00035643577575683594, "__label__art_design": 0.0004122257232666016, "__label__crime_law": 0.0002846717834472656, "__label__education_jobs": 0.0014944076538085938, "__label__entertainment": 6.777048110961914e-05, "__label__fashion_beauty": 0.00016963481903076172, "__label__finance_business": 0.00028061866760253906, "__label__food_dining": 0.0003383159637451172, "__label__games": 0.0006289482116699219, "__label__hardware": 0.0014886856079101562, "__label__health": 0.0003981590270996094, "__label__history": 0.00021767616271972656, "__label__home_hobbies": 0.00011390447616577148, "__label__industrial": 0.000560760498046875, "__label__literature": 0.00024771690368652344, "__label__politics": 0.00021076202392578125, "__label__religion": 0.0003986358642578125, "__label__science_tech": 0.026336669921875, "__label__social_life": 8.052587509155273e-05, "__label__software": 0.004947662353515625, "__label__software_dev": 0.9599609375, "__label__sports_fitness": 0.00028514862060546875, "__label__transportation": 0.0006461143493652344, "__label__travel": 0.00017392635345458984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34372, 0.04091]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34372, 0.44983]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34372, 0.89002]], "google_gemma-3-12b-it_contains_pii": [[0, 294, false], [294, 2846, null], [2846, 6007, null], [6007, 7884, null], [7884, 9911, null], [9911, 12016, null], [12016, 13388, null], [13388, 15492, null], [15492, 16604, null], [16604, 18886, null], [18886, 21316, null], [21316, 23571, null], [23571, 26640, null], [26640, 28246, null], [28246, 30958, null], [30958, 34372, null]], "google_gemma-3-12b-it_is_public_document": [[0, 294, true], [294, 2846, null], [2846, 6007, null], [6007, 7884, null], [7884, 9911, null], [9911, 12016, null], [12016, 13388, null], [13388, 15492, null], [15492, 16604, null], [16604, 18886, null], [18886, 21316, null], [21316, 23571, null], [23571, 26640, null], [26640, 28246, null], [28246, 30958, null], [30958, 34372, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34372, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34372, null]], "pdf_page_numbers": [[0, 294, 1], [294, 2846, 2], [2846, 6007, 3], [6007, 7884, 4], [7884, 9911, 5], [9911, 12016, 6], [12016, 13388, 7], [13388, 15492, 8], [15492, 16604, 9], 
[16604, 18886, 10], [18886, 21316, 11], [21316, 23571, 12], [23571, 26640, 13], [26640, 28246, 14], [28246, 30958, 15], [30958, 34372, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34372, 0.09249]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
3e07d6e209ee2664cd8bef879dc115043d4c04bd
[REMOVED]
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01354411/file/978-3-642-19997-4_15_Chapter.pdf", "len_cl100k_base": 7257, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 42866, "total-output-tokens": 9417, "length": "2e12", "weborganizer": {"__label__adult": 0.0003440380096435547, "__label__art_design": 0.000499725341796875, "__label__crime_law": 0.00031304359436035156, "__label__education_jobs": 0.0008707046508789062, "__label__entertainment": 0.00010097026824951172, "__label__fashion_beauty": 0.00017130374908447266, "__label__finance_business": 0.0005354881286621094, "__label__food_dining": 0.0003497600555419922, "__label__games": 0.000499725341796875, "__label__hardware": 0.0008473396301269531, "__label__health": 0.0005216598510742188, "__label__history": 0.00032639503479003906, "__label__home_hobbies": 9.131431579589844e-05, "__label__industrial": 0.00039577484130859375, "__label__literature": 0.00046324729919433594, "__label__politics": 0.00027751922607421875, "__label__religion": 0.0005021095275878906, "__label__science_tech": 0.06890869140625, "__label__social_life": 0.0001659393310546875, "__label__software": 0.01543426513671875, "__label__software_dev": 0.9072265625, "__label__sports_fitness": 0.00021898746490478516, "__label__transportation": 0.0005540847778320312, "__label__travel": 0.0002112388610839844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37946, 0.02506]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37946, 0.49373]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37946, 0.91067]], "google_gemma-3-12b-it_contains_pii": [[0, 1051, false], [1051, 3216, null], [3216, 6551, null], [6551, 9147, null], [9147, 12327, null], [12327, 15476, null], [15476, 17757, null], [17757, 20728, null], [20728, 22505, null], [22505, 24595, null], [24595, 27746, null], [27746, 29828, null], [29828, 31331, null], [31331, 32723, null], [32723, 35718, null], [35718, 37946, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1051, true], [1051, 3216, null], [3216, 6551, null], [6551, 9147, null], [9147, 12327, null], [12327, 15476, null], [15476, 17757, null], [17757, 20728, null], [20728, 22505, null], [22505, 24595, null], [24595, 27746, null], [27746, 29828, null], [29828, 31331, null], [31331, 32723, null], [32723, 35718, null], [35718, 37946, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37946, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37946, null]], "pdf_page_numbers": [[0, 1051, 1], [1051, 3216, 2], [3216, 6551, 3], [6551, 9147, 4], [9147, 12327, 5], [12327, 15476, 6], [15476, 17757, 7], [17757, 20728, 8], [20728, 
22505, 9], [22505, 24595, 10], [24595, 27746, 11], [27746, 29828, 12], [29828, 31331, 13], [31331, 32723, 14], [32723, 35718, 15], [35718, 37946, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37946, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
6c1cfdee192deedea8da80cf7ced1aefb2ded77b
**Who Tests the Testers?** Avoiding the Perils of Automated Testing

John Wrenn, Computer Science, Brown University, USA, jswrenn@cs.brown.edu
Shriram Krishnamurthi, Computer Science, Brown University, USA, sk@cs.brown.edu
Kathi Fisler, Computer Science, Brown University, USA, kfisler@cs.brown.edu

**ABSTRACT**

Instructors routinely use automated assessment methods to evaluate the semantic qualities of student implementations and, sometimes, test suites. In this work, we distill a variety of automated assessment methods in the literature down to a pair of assessment models. We identify pathological assessment outcomes in each model that point to underlying methodological flaws. These theoretical flaws broadly threaten the validity of the techniques, and we actually observe them in multiple assignments of an introductory programming course. We propose adjustments that remedy these flaws and then demonstrate, on these same assignments, that our interventions improve the accuracy of assessment. We believe that with these adjustments, instructors can greatly improve the accuracy of automated assessment.

**CCS CONCEPTS**

- **Social and professional topics → Student assessment; CS1;**
- **Software and its engineering → Software defect analysis;**

1 INTRODUCTION

Instructors routinely rely on automated assessment methods to evaluate student work on programming assignments. In principle, automated techniques improve the scalability and reproducibility of assessment. However, while more reproducible than non-automated methods, automated techniques are not, ipso facto, more accurate; they also make it easy to perform flawed assessments.

In this work, we explore methods for assessing implementations and test suites submitted in response to programming problems. In particular, we consider how student-submitted artifacts may be used to enhance instructor-provided ones within the context of automated assessment. This is hardly a new question: as discussed in section 2, many authors use student artifacts to assess other students' work. However, we find that the models in the literature for doing this can have significant flaws that can unfairly reward or penalize students.
As we will show, the key to including student artifacts in a fair way builds on screening them with particular kinds of instructor-provided artifacts: both implementations and test suites, both correct and incorrect. Concretely, we analyze two common methods for assessing student implementations. We explore the methods both foundationally and experimentally, using data from an introductory course. We highlight the perils of these approaches, and present an improved model and technique with which instructors can immunize their assessments against these perils. The contributions of this paper are:

1. Identification of conceptual pathologies in existing methods for automated assessment,
2. Experimental evidence that these issues arise in practice, and
3. A new method for assessing implementations and test suites that mitigates these pathologies.

After reviewing related work (section 2) and defining terminology (section 3), we present (section 4) three models for assessing implementations (one of them novel). Section 5 describes a process for instructors by which our novel model can be combined with another in a manner that iteratively improves the outcomes of both until they are identical, and section 6 evaluates these models and this process experimentally in the context of assessing both implementations and test suites. Section 7 discusses implications for those who develop or use automated assessments for programming assignments.

2 RELATED WORK

Automatic assessment of student implementations and test suites is typically done by testing their behavior against a reference artifact (rather than through proof-based formal methods [31]). We focus our work (and thus this section) on assignments for which the inputs and outputs are data values (as opposed to, say, GUIs, which require their own style of testing techniques [12, 17, 35]).

2.1 Evaluating Implementation Correctness

Goldwasser [18] asked students to submit a collection of interesting inputs. He then ran each input through each of the student implementations and an instructor-written one, checking whether the two agreed on their computed output. He notes the challenge of this approach when the outputs are non-deterministic.

Many testing frameworks support assertions that consist of conditions to check against the run-time behavior of an implementation. While such assertions can be embedded in the implementation itself, we focus here on ones that are provided as a standalone artifact (as this is a better fit for automated testing). These assertions can check that a specific input yields a specific output, or that the output of a given function always satisfies a stated predicate (such as lying within a range of numbers). Assertions are part of most unit-testing frameworks; some languages even include constructs for these assertions directly in the language itself (e.g., Pyret [5], the Racket student languages [14], and Rust [6]).
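As a hedged illustration of such a standalone, assertion-based suite (in Python; `median` is a stand-in assignment function of our choosing, not one from the paper), the two styles of assertion look like this:

```python
# A standalone assertion-based test suite for a hypothetical assignment
# asking students to implement a median function. The first test checks
# a specific input against a specific output; the second checks that the
# output always satisfies a stated predicate (lying within a range).
from statistics import median  # stand-in for a student implementation

def test_specific_output():
    assert median([3, 1, 2]) == 2

def test_output_predicate():
    data = [5, 1, 4, 2, 3]
    assert min(data) <= median(data) <= max(data)
```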
Some forms of assertion-based testing generate the inputs to use in testing, rather than require students or instructors to provide them manually. Tools such as QuickCheck [3] generate test cases from formal specifications of a program's expected behavior (and then test the program against the same formal specification).

Many instructors assess student implementations using a test suite of their own creation [2, 15, 16, 19, 21, 22, 24, 27, 36]. This approach is supported by major automatic assessment tools, such as ASSYST [23], Web-CAT [7], and Marmoset [34]. Some instructors also leverage student-written tests for testing other students' implementations. In the literature, this approach is most closely associated with all-pairs style evaluations, in which student test suites and implementations are assessed by running every test suite against every implementation [11, 18, 25]. This approach also appears in research on students' testing abilities, such as Edwards' proposed metric of "bug revealing capability" [8–10]. Broadly, student test suites can be appropriated for the task of assessing any corpus of implementations whose correctness is unknown—not just those of students. For instance, Shams and Edwards [30] use student test suites to filter out mutations of an initially-correct reference implementation whose faultiness is not detectable by any student or instructor test suite.

2.2 Evaluating Test Suites

Student test suites are typically assessed against two metrics: whether the tests conform to the specification (correctness), and whether the tests cover the interesting inputs to a problem (thoroughness) [27]. Assessing correctness of a test suite typically entails running it against an instructor-written implementation [2, 8–10, 27, 32]. This check is particularly important when using student tests to assess each others' implementations [8–10, 25]. Code coverage is often used as a proxy for thoroughness; ASSYST [23], Web-CAT [7], and Marmoset [34] all take this approach. Code coverage is attractive because it reflects professional software engineering practice [26] and is not labor-intensive [8]. However, a growing body of evidence challenges the appropriateness of coverage as a measure of thoroughness [1, 9, 20], in both professional and pedagogic contexts. Alternatively, instructors may run student test suites against a corpus of incorrect implementations, checking what fraction of these a test suite rejects. This corpus may be sourced from students [10, 11, 18, 25, 33], from machine-generated mutations of a reference implementation [1, 30, 33], or crafted by the instructor [2, 27].

3 ASSUMPTIONS AND TERMINOLOGY

We assume that instructors assess implementations by running tests against them, where each test indicates both an input to the program and the expected output (whether directly or via some sort of assertion). We do not assume that instructors are trying to handle all forms of assessment automatically; style and design assessments, for example, may be handled through separate processes and are out of the scope of this paper. This paper focuses on automated assessment of functional correctness. We further assume that instructors are willing to perform some manual inspection of some testing results as part of calibrating the artifacts against which automation will assess student work.

We will use the term conforms to describe test suites or implementations that are consistent with a given specification (usually provided by the problem statement). For a test suite to accurately flag non-conformant implementations, it needs to be fairly thorough (a term we introduced in section 2.2).
Our definition of thoroughness suggests that it targets a relative, rather than absolute, standard. Completely thorough test suites are generally not achievable in practice: most programs have an infinite number of behaviors, which cannot be covered by a finite number of tests. Nevertheless, we assume that instructors are trying to be thorough relative to the bugs that are likely in student implementations. We call tests that are nonconformant (either because they assert something nonsensical, or because they mis-represent the specification) invalid.

When assessing an implementation against a test suite, we say that the test suite accepts the implementation if every individual test in the suite passes on the implementation. If even one test fails, we say that the test suite rejects the implementation. Given a set of test suites to check an implementation, we will say (par abus de langage) that the implementation is correct (relative to those suites) if every test suite accepts the implementation. Otherwise, we will say that the implementation is faulty.

4 MODELS OF ASSESSMENT

In this paper we study and contrast three models for assessing student implementations, the first two of which are commonly used in prior work. We give each a name and describe its general form, though of course individual uses of each model may differ slightly. Figure 1 pictorially summarizes our models and the workflows that define them. The upper part shows a student implementation running against one or more test suites. The lower part shows which implementations a student test suite must pass to be run against other student implementations.

We study these models in two contexts: (1) assessing student implementations, and (2) assessing student test suites. Each model outputs a judgment of whether each student implementation is faulty or not. Having established the correctness of implementations, an instructor may then evaluate the accuracy of each student's test suite by checking how closely its judgments of implementation correctness match the judgments made by the model.

Figure 1: Three models of assessing implementations, each based on a different collection of test suites. Hexagons are implementations, squares with concave corners are test suites. Solid artifacts (labeled I) are instructor-provided while hollow ones are from students, with letters and numbers differentiating them as needed.

Instructor test suite:

    import number_sort from impl
    check:
      number_sort([3,2,1,0]) is [0,1,2,3]
    end

A clever test suite:

    import number_sort from impl
    check:
      number_sort([3,2,2,1]) is [1,2,2,3]
    end

A faulty test suite:

    import number_sort from impl
    check:
      number_sort([0,1,2,3]) is [3,2,1,0]
    end

Figure 2: Three contrived test suites for (ascending) number_sort: (1) an instructor test suite; (2) a student test suite that, by checking with an input that includes duplicate elements, can catch a bug that the instructor test suite cannot; and (3) a student test suite that, by expecting the sort to occur in descending order, is invalid with respect to the assignment specification.

4.1 Axiomatic (Axm)

The axiomatic approach, shown at the top left of fig. 1, is the simplest and most common:
**Model Summary:** Each student implementation is assessed against a single instructor-written test suite.

In the figure, student A's implementation is being assessed against the instructor test suite.

**Pitfall.** This method relies solely on the judgment of the instructor test suite. If this test suite is incorrect (which it usually isn't, but could be in subtle ways), conformant student implementations may be labeled as faulty. What is even more likely, we contend, is that the instructor test suite may be insufficiently thorough, in which case student implementations that are actually nonconformant may be wrongly labeled correct. As a simplistic but illustrative example, assume that students have been asked to implement a function, number_sort, that sorts its input in ascending order. Figure 2 shows a rudimentary instructor test suite (left). Another test suite (center) tests something beyond the instructor suite (in this case, correct handling of duplicate elements); we call such test suites clever. If a student implementation does not handle duplicate elements, the instructor test suite will accept it while the clever suite rejects it. The test suite on the right violates the specification by expecting descending order; such a test suite is invalid (as defined in section 3). Instructors might assume (based on their experience with the material) that their test suites are correct and fairly thorough (especially for assignments they have given multiple times), but this model provides no inherent mechanism for validating this belief. The authors of this paper, in particular, were victims of this hubris (we return to this in section 6.1).

**Assessment Impact.** If the instructor test suite is insufficiently thorough, faulty implementations may be labeled as correct. These students would not receive feedback about these undetected flaws. Furthermore, if these labels are used as a basis to assess student test suites, students who manage to detect bugs the instructor does not will be penalized, since their suite's judgments about correctness disagree with the judgments of the instructor's suite.

4.2 Algorithmic: Single Implementation (AlgSing)

How might we raise the thoroughness of the test suite used for assessing student implementations? Numerous existing tools and methodologies augment instructor test suites with student-written ones (e.g., [8–10, 18, 30]). Of course, student test suites could be incorrect, so this approach needs a method to determine which ones can be trusted for this task. Validating students' test suites against an instructor implementation is an obvious (and oft-taken) approach:

**Model Summary:** Each student implementation is assessed against the instructor test suite as well as the student test suites that are correct relative to the instructor implementation.

In the first test-suite assessment in fig. 1 (lower left), B's, C's, and D's test suites are evaluated against the instructor implementation. B's and D's are consistent with the implementation—the check marks on them indicate that they have passed this check—but C's is not. C's test suite is subsequently ignored, but B's and D's are added to the pool of test suites against which A's implementation is tested. The instructor test suite has presumably already been checked for correctness against the instructor implementation, or is axiomatically taken as correct. Note that this model does not consider thoroughness as a criterion for including a student test suite. A test suite that is not thorough might not add much testing power, but it won't inaccurately mark an implementation as faulty.
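As a rough sketch of this model (in Python; the data representation, with test suites as lists of argument/expected-output pairs, is our simplification, not the paper's tooling):

```python
def suite_accepts(suite, impl):
    """A suite accepts an implementation iff every test in it passes.
    Here a test is an (args, expected) pair and impl is a function."""
    return all(impl(*args) == expected for args, expected in suite)

def alg_single(instructor_suite, instructor_impl, student_suites, student_impls):
    """AlgSing, sketched: trust exactly the student suites that accept
    the single instructor implementation, then label a student
    implementation correct (True) iff every trusted suite, plus the
    instructor suite, accepts it."""
    trusted = [s for s in student_suites if suite_accepts(s, instructor_impl)]
    trusted.append(instructor_suite)
    return {name: all(suite_accepts(s, impl) for s in trusted)
            for name, impl in student_impls.items()}
```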
The other authors cited in section 2 include student tests at the granularity of a single test case; for simplicity's sake, our work considers only entire test suites. Note that this does not change the flaws we identify, only the potential magnitude of our measurements.

**A First Experiment.** When we tried this technique on submissions for one of our assignments, the number of implementations identified as faulty skyrocketed, from 28% with the instructor test suite alone to 89% when student tests were also used. (We discuss details in section 6.) What happened? This drastically different assessment outcome resulted from trusting tests that made assertions beyond the bounds of the specification. We illustrate the issue using two succinct example problems (simpler than those from our real data):

1. Implement a function that computes a distance metric between two non-empty lists of values.
2. Implement a function that transforms a list of numbers into a binary search tree. (The assignment does not specify in which branch duplicates should be placed.)

Each of these specifications, as given, admits multiple, functionally-distinguishable implementations. Respectively, the student implementations (1) may do anything if either of the inputs is empty; (2) may place equal values in either the left or right subtree. The authors of these implementations are likely to write tests that assert whichever particular behavior they happened to choose. These tests, while correct with respect to the student's own implementation, are not appropriate tests of all implementations. In general, such tests, which we term over-zealous, can exceed the bounds of the specification in two ways:

- Be overly liberal in what they supply as inputs; e.g., if the specification asks for a function that is only defined on non-empty lists, then a test that supplies the function with an empty list is over-zealous.
- Be overly conservative in what they accept as outputs; e.g., if the specification does not dictate to which side of the binary search tree duplicate elements should be placed, a test that assumes duplicates go to a particular side is over-zealous.

**Pitfall of Over-Zealous Tests.** Consider a student test suite whose over-zealous test cases coincidentally conform to the specific behavior of the instructor implementation. Since that suite passes the instructor implementation, it will be labeled correct and then used to judge whether other student implementations are correct. Consequently, any other student implementations that behave differently (even if they satisfy the specification) will be marked faulty by that over-zealous test suite. If two student test suites over-zealously test different aspects of the specification, and both are incorporated into the implementation assessment process, it can be virtually impossible for any implementation to be deemed correct. Both forms arose in our experiment, resulting in the dramatic increase in the percentage of implementations that were deemed faulty. This experiment illustrates why eliminating over-zealous test cases from the implementation labeling process is crucial.

**Assessment Impact.** If over-zealous tests are not eliminated, any conformant implementation that diverges even slightly from the instructor implementation may be wrongly judged as faulty. Furthermore, if this flawed labeling is used to assess student test suites, students will "fail" to identify these "faulty" implementations, and will be penalized for apparently having un-thorough test suites.
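To see the over-fitting concretely, consider the binary-search-tree example in sketch form (Python; the function names and the tuple encoding of trees are ours, purely for illustration). Two implementations are both conformant, yet an over-zealous test that pins down the duplicate side accepts only one of them:

```python
# Two conformant BST insertions for a specification that leaves the
# side for duplicate values open. Trees are (left, value, right) tuples
# or None.
def insert_dups_left(tree, v):
    if tree is None:
        return (None, v, None)
    left, value, right = tree
    if v <= value:                                  # duplicates go left
        return (insert_dups_left(left, v), value, right)
    return (left, value, insert_dups_left(right, v))

def insert_dups_right(tree, v):
    if tree is None:
        return (None, v, None)
    left, value, right = tree
    if v < value:                                   # duplicates go right
        return (insert_dups_right(left, v), value, right)
    return (left, value, insert_dups_right(right, v))

# An over-zealous expectation: it pins down where the duplicate lands,
# which the specification left open. It matches the first implementation
# but rejects the second, equally conformant one.
t1 = insert_dups_left(insert_dups_left(None, 2), 2)
t2 = insert_dups_right(insert_dups_right(None, 2), 2)
over_zealous_expected = ((None, 2, None), 2, None)
assert t1 == over_zealous_expected   # impl 1 happens to pass
assert t2 != over_zealous_expected   # impl 2 is conformant, yet rejected
```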
4.3 Algorithmic: Multiple Implementations (AlgMult)

Fundamentally, the problem created by over-zealous tests is one of over-fitting: while the specification describes a space of implementations, just one sample from that space (a single instructor implementation) is used to determine whether all other implementations conform to that specification. To mitigate this flaw, an instructor can craft multiple correct implementations in a manner to be defined shortly. When assessing implementations, only test suites that are deemed correct against all of the instructor implementations are used in assessing other student implementations.

**Model Summary:** Each student implementation is assessed against the instructor test suite as well as the student test suites that are correct relative to multiple instructor implementations.

In fig. 1, B's and D's test suites are checked against three instructor implementations. B's is consistent with all three, but D's appears to be over-zealous, failing implementation 2. Therefore, D's test suite is no longer considered, whereas B's test suite (whose checks denote passed instructor implementations) can be added to the pool of test suites for assessing A's implementation.

*How should instructor implementations differ?* Different instructor implementations should reflect different scenarios allowed by the specification (e.g., guarding against different kinds of over-zealous tests). In particular, different implementations might admit more inputs than the specification requires, or might produce outputs that are consistent with the specification in different ways. For example:

1. If the specification only dictates how a function behaves on non-empty lists, then, given an empty list, one instructor implementation might throw an exception while another returns an innocuous value.
2. If the specification does not dictate to which side of a binary search tree duplicate elements should be placed, one instructor implementation might place them on the left, while another places them on the right.

Such implementations are adversarial in that they check for violations of the robustness principle. A good set of adversarial implementations is diverse enough that an over-zealous test suite would reject at least one of them. If this happens, over-zealous test suites are ruled out before being used to assess other student implementations.

These restrictions against over-zealousness may appear to pose a high bar for student tests. Indeed they do, but the bar is not unattainable: in another experiment (section 6.1), the addition of adversarial instructor implementations reduced the fraction of student test suites trusted for assessing implementations from 79% to 35%. It is important to remember, however, that this high bar is for a test suite to assess other student implementations; it is not necessarily the bar we would use to grade A's implementation.

We observe in passing that nothing in our description limits adversarial implementations to screening student tests. The fruits of labor to obtain additional tests by any means—from colleagues, by crowdsourcing, by the instructors themselves, etc.—should all pass through the same adversarial process before being used to assess student work.
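The change relative to the AlgSing sketch above is a single condition: a student suite is trusted only if it accepts every instructor implementation (this reuses `suite_accepts` from the earlier sketch, and remains our simplification rather than the paper's tooling):

```python
def alg_multiple(instructor_suite, instructor_impls, student_suites, student_impls):
    """AlgMult, sketched: a student suite is trusted only if it accepts
    every instructor implementation, including the adversarial ones that
    exercise behaviours the specification leaves open."""
    trusted = [s for s in student_suites
               if all(suite_accepts(s, impl) for impl in instructor_impls)]
    trusted.append(instructor_suite)
    return {name: all(suite_accepts(s, impl) for s in trusted)
            for name, impl in student_impls.items()}
```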
This burden is nevertheless worth bearing due to the problems created by the two more common methods of assessment (Axm and AlgSing).

5 TESTING THE TESTER

A common flaw underlies the vulnerabilities of all three models: if an instructor does not adequately consider some aspect of the problem, their assessments of students may suffer. Taken individually, the models provide neither a resolution nor a means of detecting this flaw. While AlgMult partially defends against the severe threat of mistrusting a student test, its defense relies on the instructor's sufficient development of adversarial implementations. Instructors can avoid this risk entirely by using Axm instead of an algorithmic model to grade student implementations, but that leaves Axm's risk of penalizing students who detect bugs that the instructor failed to write tests for. By leveraging both axiomatic and algorithmic labeling, an instructor can detect and resolve this flaw.

Consider that, for an assignment with an adequate set of adversarial implementations and an instructor test suite that is not out-matched by any valid student test, Axm and AlgMult must result in an identical correctness labeling of student implementations. If either of these conditions is false, there must exist an incorporated student test that identifies as faulty some implementation that the instructor's test suite identified as correct. In this event, one of two possibilities must be true: (1) the student test is, in fact, nonconformant, but there was no adversarial implementation to identify it as such, or (2) the student test is, in fact, conformant, and captures a behavior not explored by any test in the instructor's suite. The instructor should examine the test case in question, identify whether it is conformant, and either create an adversarial implementation that rules it out, or incorporate it into their test suite. In section 6, we apply this process to quantify the impact of these assessment flaws on a number of assignments in an introductory programming course.

6 EVALUATION ON COURSE DATA

To assess the extent to which these perils may actually impact the robustness of course assessment, we applied the models to reassess the submitted programs and test suites of students from a semester-long accelerated introduction to computer science course at a highly-selective private US university. The course is primarily taken by students with prior programming experience; students place into it based on a series of programming assignments over the summer. In one semester, the course covers most of the same material as the department's year-long introductory sequences (fundamentals of programming, data structures, core algorithms, and big-O algorithm analysis). The course teaches functional programming (many students who place into it have prior experience with object-oriented programming), following techniques from the How to Design Programs [13] curriculum. Both this curriculum and the course emphasize testing. Students are required to submit test suites for every assignment. Test suites are graded for both correctness and thoroughness, and are weighted similarly to implementations in determining final grades.
For each assignment under study, we assessed student implementations under: (i) Axm, using the instructor test suite that was used during the semester to grade student implementations; (ii) AlgSing, using the instructor implementation that was used during the semester to grade the validity of student test suites; and then (iii) AlgMult, using the criterion specified in section 5 to develop adversarial implementations. We quantify the impact of Axm's and AlgSing's vulnerabilities by contrasting their outcomes with that of AlgMult.

**Assignments Under Study.** For the analysis in this paper, we selected four assignments that are quite different from each other and representative of the course overall:

- DocDiff, where 91 students implemented and tested programs computing a document similarity metric using a bag-of-words model [29].
- Nile, where 70 students implemented and tested a rudimentary recommendation system.
- Filesystem, where 76 students implemented and tested rudimentary Unix-style commands for traversing an (in-memory) file structure with mutually-dependent datatypes [13].
- MapReduce, where 38 pairs of students implemented and tested the essence of MapReduce [4] (implemented sequentially), and applied it to multiple problems. This included redoing some previous assignments (including Nile) in terms of the MapReduce paradigm, using their implementation.

We explored only four assignments because constructing multiple adversarial implementations is a potentially time-consuming process. For each assignment we constructed between two (for Nile) and seven (for MapReduce) adversarial implementations. The differing number of students submitting each assignment reflects students dropping the course (after DocDiff), then working in pairs (on MapReduce); the assignments are listed in the order in which they were assigned. On each assignment, there were a few (2-3, though 9 for Nile) submissions that were not included in the analysis (and are not reflected in the above counts): these either had compile-time errors or threw run-time exceptions that we were not able to resolve with a few minutes of work.

### 6.1 Impact on Implementation Assessment

The models produced drastically different assessments of implementation correctness. Table 1 summarizes the percentage of student implementations that were deemed faulty under each of the three models (the MapReduce data were mentioned in section 4). Very few implementations are deemed faulty under Axm, the majority are deemed faulty under AlgSing, and AlgMult lies in between (that the AlgMult percentages are no smaller than those for Axm matches our expectation based on their definitions).

With the Axm model, we noted that an insufficiently thorough instructor test suite may fail to detect all faulty implementations. We assumed that our test suites for these assignments were thorough, but had not validated this belief. Contrasting the first and third columns of table 1, we find that a substantial proportion of students who were notified that their implementations were correct actually had faults in their submissions. These data confirm that there is significant room to improve the thoroughness of our tests: both DocDiff and MapReduce show notable differences in the percentage of faulty implementations flagged between Axm and AlgMult.

With the algorithmic models, instructors bolster their own test suites with student tests but, we noted, face the risk and consequences of inadvertently mis-trusting an over-zealous student test.
In the case of AlgSing, instructors rely on just one known-correct implementation to filter out invalid tests. Contrasting the second and third columns of table 1, we find that a substantial proportion of the implementations marked faulty by AlgSing were, in fact, correct. These data show that a single implementation was not sufficient for filtering student tests. Table 2 shows the percentage of students whose tests were incorporated by AlgSing and AlgMult. Contrasting its two columns, we find that AlgSing consistently over-trusted student tests.

### 6.2 Impact on Assessing Test Suites

Next, we explore the impact of these perils on test-suite assessment, working with our MapReduce data.

**Methodology:** The models in this paper classify implementations as correct or faulty. As we mention in section 4, we can then use this classification as a ground truth to assess the accuracy of student tests. We do this by applying each test suite to a collection of (assessed) implementations, and comparing the test suite's classification of their correctness against that provided by the model. We perform this analysis on a collection of 53 MapReduce implementations. This collection contains all 38 student implementations, as well as seven (adversarial) correct and eight faulty specially-crafted implementations. These latter implementations were included to make sure that the corpus contained a handful of each kind of implementation (since we could not predict where the student implementations would fall). (We do not report statistical significance, because the nature of these analyses introduces considerable nuance and difficulty in designing a statistical test. Regardless, for students, these differences have personal significance.)

We quantify the closeness of each student test suite's classification to the classification of the underlying models using the standard metrics of binary classifiers:

- **true-positive rate**, the fraction of faulty implementations that the test suite appropriately identifies as faulty (while associating "positive" with "faulty" may seem backwards, the goal of thoroughness is to accurately identify faulty implementations);
- **true-negative rate**, the fraction of correct implementations that the test suite appropriately identifies as correct.

Figure 3 depicts the resulting true-positive and true-negative rates of each test suite relative to the classifications produced by Axm, AlgSing, and AlgMult (one column each, respectively). These graphs illustrate, from the perspective of assessing student test suites, the drastically different outcomes that can arise depending on which model is used to label implementations.

Figure 3: Assessment of test suites on MapReduce. In the top row, each dot represents a test suite; its location encodes its respective true-positive rate (the % of faulty implementations it rejected) and true-negative rate (the % of correct implementations that it accepted). Below, a kernel density estimation plot shows the relative commonality of true-positive and true-negative rates.

**Axm Perils:** In the Axm model, a student who writes a test that identifies a bug missed by the instructor test suite is penalized for their thoroughness, as their test suite's judgment of correctness is observed as disagreeing with the judgment of the instructor's test suite. In the context of test suite assessment, this is reflected as a decrease in true-negative rate. This pathology significantly impacted the outcomes of our test suite assessments performed atop Axm. The gaps in our test suites were accessible enough for many students to find (even though we had refined these test suites over several years). On DocDiff, Filesystem, and MapReduce, respectively, 63%, 9%, and 29% of students identified at least one faulty implementation that the instructor test suite missed. (Only on Nile did no student test more cleverly than the instructor had.) A contrast of the Axm and AlgMult columns of fig. 3 bears this pathology out in the context of test suite assessment. The true-negative density curve of Axm is shifted slightly to the left of that of AlgMult, indicating that Axm assessed students as having lower true-negative rates than AlgMult did. Furthermore, we note that Axm penalizes equally students who write invalid tests and students who test more cleverly than the instructor. Thus, an instructor using Axm might incorrectly conclude that their best students failed to understand the problem specification.
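Both rates are mechanical to compute once a model has labeled the implementations. A minimal Python sketch (our illustrative representation: `labeled_impls` is a list of (implementation, is_correct) pairs, and `suite_accepts` is the acceptance check sketched in section 4):

```python
def classifier_rates(suite, labeled_impls):
    """True-positive and true-negative rates of one test suite against
    implementations already labeled correct (True) or faulty (False) by
    some model. A suite rejects an implementation when any test fails."""
    faulty = [impl for impl, ok in labeled_impls if not ok]
    correct = [impl for impl, ok in labeled_impls if ok]
    rejected_faulty = sum(not suite_accepts(suite, i) for i in faulty)
    accepted_correct = sum(suite_accepts(suite, i) for i in correct)
    tpr = rejected_faulty / len(faulty) if faulty else 1.0
    tnr = accepted_correct / len(correct) if correct else 1.0
    return tpr, tnr
```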
**AlgSing Perils:** In theory, AlgSing and AlgMult (which acknowledge that students may find bugs the instructor does not) remedy this pathology. However, as discussed in section 6.1, incorrectly incorporating student tests can easily give rise to a catastrophically inaccurate assessment of implementations, which in turn leads to inaccurate assessment of test suites. Under AlgSing, most students have very high true-negative rates and very low true-positive rates. This came about in part because so few implementations were labeled correct by AlgSing (see the middle column of table 1). Thus, there is much less nuance in the true-negative rates, as reflected in the horizontal bands of points in the scatter plot.

### 6.3 Takeaway

The theoretical flaws of the standard models had real, substantial impacts on our assessments. Using the technique in section 5 to develop an AlgMult assessment, we identified and corrected numerous shortcomings in our grading artifacts. This process required close examination of each assignment statement, and we also encountered ways in which our assignments could be made clearer. Thus, in addition to improving our grading system with this process, we have improved the assignments themselves.

### 7 DISCUSSION

In an era of growing enrollments and on-line courses, it is essential to understand the nuances of automated assessment, especially since it seems to fit some aspects of computing naturally. In particular, this fit can mask worrisome weaknesses. With automated assessment widespread in everything from K-12 and tertiary courses to MOOCs to programming competitions to job placement sites and more, its foundations require greater scrutiny.

In this paper we look closely at automated assessment of programs and of their first-cousins, test suites. Through pure reasoning, we show that the standard models (sections 4.1 and 4.2) can suffer from significant measurement flaws. We present a new model of assessment (section 4.3) and a corrective technique that utilizes it (section 5). The results of section 6 validate all these claims in practice when assessing both implementation and test suite quality.

The problems we find in these models are disturbing in two ways. First, the flaws can be subtle, so instructors and students may never notice them. Indeed, as we have noted, in some cases the assessment results in students appearing to do better than their true performance.
This may give students a false sense of confidence in their abilities. Second, it is not trivial to extrapolate from the feedback of these models to identify a systemic flaw in students' work. Especially in massive or disconnected settings, it may be difficult to identify the problems we raise. The sheer volume of data available may blind some people to the true quality of the data.

On a personal note, we can relate how easy these flaws are to overlook. Like many other educators, we had used the two flawed methods for nearly two decades, growing increasingly dependent on them with growing class sizes (a widespread phenomenon in the US). The initial purpose of this study was simply to test the quality of student tests, in comparison to an earlier study by Edwards and Shams [10]. As we began to perform our measurements, we wondered how stable they were, and started to use different methods to evaluate stability. When we noticed wild fluctuations—which made our analyses highly unreliable—we began to investigate why small changes to the implementation set would have large effects, which led to unearthing the problems reported in this paper.

We therefore conclude with a salutary warning. While automated assessments are valuable and have their place, their use—as with any machine-generated artifact that draws on a large set of data—requires significant reflection. Happily, we demonstrate that the method of multiple adversarial implementations (section 4.3) avoids the pathologies we have found in automated assessment, enabling us to draw on a larger pool of inputs (namely, student test suites and implementations as well), which in turn results in better evaluation of student implementations and test suites.
{"Source-Url": "http://cs.brown.edu/~sk/Publications/Papers/Published/wkf-who-tests-testers/paper.pdf", "len_cl100k_base": 7862, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 30888, "total-output-tokens": 8577, "length": "2e12", "weborganizer": {"__label__adult": 0.0008244514465332031, "__label__art_design": 0.00116729736328125, "__label__crime_law": 0.0008401870727539062, "__label__education_jobs": 0.1744384765625, "__label__entertainment": 0.00022971630096435547, "__label__fashion_beauty": 0.0005540847778320312, "__label__finance_business": 0.000950336456298828, "__label__food_dining": 0.00107574462890625, "__label__games": 0.00150299072265625, "__label__hardware": 0.0016689300537109375, "__label__health": 0.0012941360473632812, "__label__history": 0.0010967254638671875, "__label__home_hobbies": 0.00039577484130859375, "__label__industrial": 0.0011148452758789062, "__label__literature": 0.00188446044921875, "__label__politics": 0.0007824897766113281, "__label__religion": 0.0011739730834960938, "__label__science_tech": 0.082763671875, "__label__social_life": 0.0006842613220214844, "__label__software": 0.01102447509765625, "__label__software_dev": 0.7119140625, "__label__sports_fitness": 0.0007109642028808594, "__label__transportation": 0.0014133453369140625, "__label__travel": 0.000492095947265625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40594, 0.04354]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40594, 0.49972]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40594, 0.92635]], "google_gemma-3-12b-it_contains_pii": [[0, 6142, false], [6142, 12974, null], [12974, 15770, null], [15770, 22066, null], [22066, 28167, null], [28167, 33355, null], [33355, 33745, null], [33745, 40594, null], [40594, 40594, null]], "google_gemma-3-12b-it_is_public_document": [[0, 6142, true], [6142, 12974, null], [12974, 15770, null], [15770, 22066, null], [22066, 28167, null], [28167, 33355, null], [33355, 33745, null], [33745, 40594, null], [40594, 40594, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40594, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40594, null]], "pdf_page_numbers": [[0, 6142, 1], [6142, 12974, 2], [12974, 15770, 3], [15770, 22066, 4], [22066, 28167, 5], [28167, 33355, 6], [33355, 33745, 7], [33745, 40594, 8], [40594, 40594, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40594, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
8df01554591dd743582a840cc3a35961105796a1
A New Algorithm for Semantic Web Service Matching

Bo Jiang, School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, Zhejiang, China. Email: nancybijiang@mail.zjgsu.edu.cn
Zhiyuan Luo, School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, Zhejiang, China. Email: miluo003@163.com

Abstract—Because Web services lack semantic information and interoperability, atomic services cannot cooperate with each other or be combined into composite services. In this paper, based on an extended UDDI, we present a semantic annotation framework for Web service description and composition. The novel service matching algorithm takes into account not only the height factor of the ontology tree and the local-density factor in computing semantic distance, but also the degree of semantic overlap. Experimental results show that the proposed service matching algorithm improves the efficiency of Web service discovery, so that Web services can be composed easily into more complicated inter-related services.

Index Terms—ontology, interoperability, semantic annotation, Web service matching

I. INTRODUCTION

Nowadays, academic as well as industrial communities focus part of their research and development activities on technologies such as data exchange, Web service discovery and composition, security, performance evaluation, and so on. The fundamental architecture of Web services implements SOA (Service Oriented Architecture) [1], which enables applications to be integrated across different network platforms. SOA guides the creation of collaborative services that are loosely coupled and independent of their implementation technologies, and it enables a variety of services to interact with each other. Web services are the most suitable technical solution to implement SOA [2]. Based on Web services, data and information on the Web can interact and be integrated effectively, which solves the problems brought about by heterogeneous information systems. A set of technical specifications supports Web services, such as the Extensible Markup Language (XML), the Simple Object Access Protocol (SOAP), the Web Services Description Language (WSDL), and Universal Description, Discovery and Integration (UDDI).

With the increasing number of Web services on the network, finding the services that meet users' needs among the mass of available services becomes more and more critical. The quality of Web service matching is directly related to the quality of the services that users invoke; it therefore affects the implementation and effectiveness of Web service composition. As is known, the recall and precision rates of UDDI- and WSDL-based service matching are relatively low [3]. By combining ontologies with Web services, semantics enables automatic service matching. The purpose of this work is to present a semantic annotation framework for describing Web services and an extension of UDDI to support semantic Web services. Furthermore, an improved algorithm based on semantic similarity computation improves the efficiency of service discovery, which lays a solid foundation for service composition.

The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 is devoted to the semantic annotation framework and the extended UDDI. Section 4 focuses on the improved service matching algorithm. Section 5 presents an experimental validation of the proposed algorithm and a comparison with other algorithms.
Section 6 concludes the paper and suggests future work.

II. RELATED WORK

A. Definition of Terms

There are many terms and concepts related to semantic Web services; definitions [4] of the most relevant ones are as follows:

- Ontology: A philosophical term meaning "the knowledge of what is to be in oneself". In data processing, an ontology denotes a structured set of knowledge in a domain; it is an explicit, shared specification of the conceptualizations in a particular domain.
- Semantic annotation: An annotation is a link, assigned to an entity in a text, to its semantic description. A semantic annotation refers to a concept in a related ontology.
- Reasoner: A mapping engine that matches service advertisements with requests. The reasoner provides a semantic algorithm to match the inputs and outputs of Web services during the matchmaking process.
- Matching: Matching compares two concepts according to similarity features of Web services, using functional properties of services such as inputs, outputs, preconditions and effects (IOPE). The matching of relevant concepts for Web services was introduced by Paolucci [5].
- Similarity measure: The reasoner defines four levels of similarity between two concepts A and B:
  - Equivalence: A and B are seen as equivalent; they represent the same concept.
  - Subsumption: Concept A is more general than concept B.
  - Opposite subsumption (plugin): Concept A is subsumed by concept B, which means that concept B is more general than concept A.
  - Difference (fail): Concepts A and B are different.

B. Semantic Description Languages for Web Services

Numerous semantic Web service frameworks have been proposed and promoted for standardization by the W3C. The most prominent ones are OWL-S (Ontology Web Language for Services), WSMO (Web Service Modeling Ontology), WSDL-S (Web Service Description Language Semantics) and SAWSDL (Semantic Annotations for WSDL), which evolved from the WSDL-S specification. These works can be compared according to the following criteria:

- Resource: the semantic description (XML Schema, WSDL, UDDI, etc.).
- Property: what is described semantically in the document, such as inputs and outputs.
- Language: the representation language of the semantic model (WSDL, OWL).
- Annotation: whether the annotations are independent or saved in other documents.
- Model: whether the semantic domain model is internal or external.
- Matching: the type of matching algorithm and its properties.

III. SEMANTIC ANNOTATION FRAMEWORK AND EXTENDED UDDI

There are two ways to use ontologies for Web service description. The first is to add semantic information directly to the existing Web service standards, using a domain ontology to annotate the WSDL file itself. The second is to define an ontology for Web service description, such as WSMO or OWL-S [6]. The Web service description is thereby augmented with concept annotations from a domain-specific ontology. [7] builds a domain ontology to describe service functional information and provide a consistent description; a matching algorithm for requester and provider is also given, but the paper does not show how to realize the ontology mapping. [8] used a domain ontology to annotate WSDL. [9] was based on the latest W3C SAWSDL specification; its automatic matching algorithm took into account service functions and interface parameters.
However, related concepts could not be expressed effectively on the basis of the domain ontology alone.

A. Semantic Annotation Framework

In this paper we propose a semi-automatic semantic annotation framework that builds on existing semantic annotation work. On the one hand, the framework inherits Paolucci's approach [3]: the WSDL document is automatically converted to DAML-S according to the mapping relationship. On the other hand, the framework introduces OWL ontologies so as to add semantic information to services. A WSDL document does not include the yellow-page and white-page information of a Web service; the framework therefore uses editing tools to add this description information, which further improves the semantic description of the Web service. The semantic annotation framework is shown in Fig. 1.

![Semantic annotation framework](image)

Figure 1. Semantic annotation framework.

1. First, we parse the WSDL document, using the WSDL2OWL-S tool. The parsing results only translate the WSDL information; this method cannot provide semantic support for the XSD vocabulary of the WSDL document. Four OWL-S files are generated after the WSDL document has been parsed.
2. Second, the XML vocabulary in the WSDL document is mapped to related concepts in the domain ontology through human-computer interaction; this constitutes the semantic annotation of the XML vocabulary of the WSDL document. Since this step relies on an ontology, we set up the corresponding ontology before annotating the WSDL document.
3. Third, because the WSDL document itself does not contain the yellow-page and white-page information of the Web service, the TextDescription and other information must be added to the Profile file after the WSDL has been parsed.

After the above three steps, the Profile, Process, Grounding and Service files are combined into a complete OWL-S service description file.

B. Extended UDDI

Pure WSDL has no semantic description capability, which leads to low precision in Web service matching results. OWL-S provides machine-understandable semantic information for Web services and can effectively improve on the expressiveness of a WSDL description. It is therefore necessary to extend UDDI to support the OWL-S specification. The semantically extended UDDI achieves two main functions: (1) a conversion mechanism from OWL-S to UDDI is established, which stores the semantic information of a Web service in referencing tModels; and (2) clients (requesters) can query Web services of the referencing tModel type from the service registry by invoking the UDDI API and obtaining the URL of the service description. Web service matching performance can be greatly improved by using the extended UDDI.

In order to make the UDDI registry support the OWL-S specification and store the semantic information of Web services, it is necessary to establish a mapping from OWL-S to UDDI through which the information in the OWL-S Profile is embedded into UDDI. In this paper we follow [11], which proposed a DAML-S Profile to UDDI mapping mechanism; it implements the OWL-S to UDDI mapping by extending the tModel type. The main idea is as follows: if a UDDI data element corresponds to an element of the OWL-S Profile, it is mapped directly; if there is no corresponding data element in UDDI, a new tModel type is created so that a mapping relation is still produced.
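To make the mapping idea concrete, here is a minimal Python sketch. The element names, the direct-mapping table and the dictionary-based records are illustrative assumptions for exposition only; they are not the actual OWL-S or UDDI schemas, nor a real UDDI API.

```python
# Illustrative sketch of the OWL-S Profile -> UDDI mapping idea from [11].
# The field names below are simplified stand-ins for the real schemas.

DIRECT_MAP = {                      # Profile element -> UDDI service field
    "serviceName": "name",
    "textDescription": "description",
    "contactInformation": "contacts",
}

def map_profile_to_uddi(profile: dict) -> dict:
    """Map an OWL-S Profile (given as a dict) onto a UDDI service record.

    Elements with a direct UDDI counterpart are copied; all remaining
    elements (e.g. hasInput/hasOutput concept URIs) are wrapped in new
    tModel entries so the semantic information is preserved."""
    service = {"tModels": []}
    for key, value in profile.items():
        if key in DIRECT_MAP:
            service[DIRECT_MAP[key]] = value       # direct mapping
        else:
            service["tModels"].append({            # new tModel type
                "tModelName": f"owls:{key}",
                "keyedReference": value,
            })
    return service

if __name__ == "__main__":
    profile = {
        "serviceName": "FlightBooking",
        "textDescription": "Books flights between two cities.",
        "hasInput": ["onto:City", "onto:Date"],
        "hasOutput": ["onto:Ticket"],
    }
    print(map_profile_to_uddi(profile))
```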
IV. ALGORITHM OF SERVICE MATCHING AND MATCHING FRAMEWORK

A. Algorithm of Classic Matching

For domain-ontology-based semantic Web service matching, researchers have proposed various calculation approaches. In [3], matching results were divided into four categories according to the degree of service matching: 'exact', 'plugin', 'subsumes' and 'fail'. The procedure is shown in Fig. 2.

```
degreeOfMatch(outB, outA):
    if outB = outA          then return exact
    if outB subclassOf outA then return exact
    if outA subsumes outB   then return plugin
    if outB subsumes outA   then return subsumes
    otherwise return fail
```

Figure 2. Degree of service matching.

In this approach, outB corresponds to one output of the request and outA to one output of the advertisement. If outB = outA, then outB and outA are equivalent, which is labelled exact. The second clause states that if outB is a subclass of outA, the result is still exact. If outA subsumes outB, then outA denotes a set that includes outB, and the result is plugin. If outB subsumes outA, then the provider does not completely fulfil the request, and the result is subsumes. Failure occurs when no subsumption relation between the advertisement and the request can be identified.
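The procedure of Fig. 2 can be rendered as a short executable sketch. The child-to-parent map and the concept names below are illustrative assumptions; [3] operates on a full description-logic ontology rather than a plain tree.

```python
# Minimal sketch of the classic four-level degree-of-match from Fig. 2.
# The ontology is assumed to be a child -> parent map; names are invented.

PARENT = {"CreditCard": "Payment", "Payment": "Thing", "Ticket": "Thing"}

def ancestors(c):
    """All concepts that subsume c (excluding c itself)."""
    out = []
    while c in PARENT:
        c = PARENT[c]
        out.append(c)
    return out

def degree_of_match(out_request, out_advert):
    if out_request == out_advert:
        return "exact"
    if PARENT.get(out_request) == out_advert:      # direct subclass
        return "exact"
    if out_advert in ancestors(out_request):       # advert subsumes request
        return "plugin"
    if out_request in ancestors(out_advert):       # request subsumes advert
        return "subsumes"
    return "fail"

assert degree_of_match("CreditCard", "Payment") == "exact"   # direct subclass
assert degree_of_match("CreditCard", "Thing") == "plugin"
assert degree_of_match("Payment", "CreditCard") == "subsumes"
assert degree_of_match("Ticket", "Payment") == "fail"
```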
B. Algorithm of Semantic Matching

By introducing Web service ontologies, Web service matching is converted into the calculation of concept similarity over a domain ontology library. This paper presents an improved ontology-based semantic similarity technique, which in turn improves the semantic matching algorithm. The basic idea is to use a domain ontology as the reference: when calculating semantic similarity from semantic distance we take the depth factor and the local density factor into account, and we additionally introduce the degree of semantic overlap into the similarity of any two concepts. In semantic Web service matching we mainly consider the input and output parameters of a Web service, so we define an abstract semantic description model as follows:

**Definition 1:** We define the Web service semantic description model as \( W = \langle I, O \rangle \). \( I = \langle I_1, I_2, \ldots, I_n \rangle \) is a concept vector representing the semantic descriptions of the n input parameters of W; \( I_1, I_2, \ldots, I_n \) are the concepts in the domain ontology that correspond to the n input parameters. \( O = \langle O_1, O_2, \ldots, O_m \rangle \) is a concept vector representing the semantic descriptions of the m output parameters; \( O_1, O_2, \ldots, O_m \) are the corresponding concepts in the domain ontology.

Web service matching is thus reduced to matching the requester's description model \( W = \langle I, O \rangle \) against a published service's description model \( W' = \langle I', O' \rangle \), and further to matching the concept vectors I against I' and O against O' within a unified domain ontology.

In a networked environment, providers and requesters often have different understandings of, and needs for, the same type of service. In terms of Web services this means that the requested and provided services may differ in the dimension and order of their inputs and outputs. This is why the semantic descriptions of Web services are independent of each other, and why the concept vectors I, I', O and O' may differ in dimension and element order.

We must therefore define the match of any two concept vectors in the same domain ontology before formally defining Web service matching.

**Definition 2:** We define the similarity of any two concepts in the same domain ontology as the function \( \text{Sim}(C_1, C_2) \), where \( C_1, C_2 \) are any two concepts in the domain ontology and the value of \( \text{Sim}(C_1, C_2) \) lies between 0 and 1 (inclusive). The larger the value, the more similar the two concepts.

Based on Definition 2, we can define the similarity of any two concept vectors A, B in the same domain ontology.

**Definition 3:** Assume that \( A = (A_1, A_2, \ldots, A_n) \) and \( B = (B_1, B_2, \ldots, B_m) \) are two concept vectors in the same domain ontology, with the correlation matrix

\[
S_{AB} = \begin{pmatrix} \text{Sim}(A_1, B_1) & \text{Sim}(A_1, B_2) & \cdots & \text{Sim}(A_1, B_m) \\ \text{Sim}(A_2, B_1) & \text{Sim}(A_2, B_2) & \cdots & \text{Sim}(A_2, B_m) \\ \vdots & \vdots & \ddots & \vdots \\ \text{Sim}(A_n, B_1) & \text{Sim}(A_n, B_2) & \cdots & \text{Sim}(A_n, B_m) \end{pmatrix}
\]

The similarity of the two concept vectors is then

\[
S(A, B) = \frac{1}{n} \sum_{i=1}^{n} \max_{j \in [1, m]} \text{Sim}(A_i, B_j) \tag{1}
\]

Semantic Web service matching can thus be abstracted as the matching of semantic description models; the matching of the description models is based on the concept vectors; and, by Definition 3, concept vector matching reduces to matching the vector elements, whose degree of match is obtained by similarity calculation.

In the field of information retrieval, calculating the similarity of two concepts is usually tied to calculating their semantic distance: the larger the semantic distance between two concepts, the lower their similarity; the smaller the distance, the higher their similarity. The semantic distance is the length of the shortest relation chain between two concepts in the domain ontology library; \( \text{Dis}(C_1, C_2) \) denotes the shortest distance between concepts C1 and C2. Semantic distance is the most important factor in determining semantic similarity.

**Definition 4:** The distance-based semantic similarity is

\[
\text{Sim}_{\text{Dis}}(C_1, C_2) = \frac{1}{1 + \text{Dis}(C_1, C_2)} \tag{2}
\]

where the distance is the sum of the node weights along the shortest path between C1 and C2:

\[
\text{Dis}(C_1, C_2) = \sum_{C \,\in\, \text{path}(C_1, C_2)} \text{Weight}(C)
\]

**Definition 5:** The node weight is calculated as

\[
\text{Weight}(C) = \text{Weight}(\text{depth}(C)) \ast \text{Weight}(\text{density}) \tag{3}
\]

where Weight(depth(C)) is the weight of the depth factor of concept C, Weight(density) is the local density weight factor of node C, \( \text{Weight}(\text{depth}(C)) = 1/(2 \cdot \text{Dep}(C)) \), and Dep(C) is the depth of node C in the ontology tree.

**Definition 6:** The similarity based on the degree of semantic overlap is

\[
\text{Sim}_{\text{Coin}}(C_1, C_2) = \frac{\left| P(C_1) \cap P(C_2) \right|}{\max\left( \left| P(C_1) \right|, \left| P(C_2) \right| \right)} \tag{4}
\]

where \( P(C_1) \) is the set of parent nodes of concept C1, i.e. the nodes on the path from C1 back to the root node; \( \left| P(C_1) \cap P(C_2) \right| \) is the number of parent nodes shared by C1 and C2; and \( \max(\left| P(C_1) \right|, \left| P(C_2) \right|) \) is the larger of the two parent-node counts.
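The following runnable sketch illustrates Definitions 4 to 6 on a toy ontology. The child-to-parent map, the fixed density weight of 1, and the exact choice of which node weights are summed along the path are assumptions made for the example, since the definitions leave these details open.

```python
# Sketch of Definitions 4-6 on a toy ontology given as a child -> parent map.
# Concept names are invented; Weight(density) is fixed to 1 for brevity.

PARENT = {"Hotel": "Lodging", "Hostel": "Lodging",
          "Lodging": "TravelService", "Flight": "TravelService"}

def parents(c):
    """P(C): the nodes on the path from c back to the root."""
    out = []
    while c in PARENT:
        c = PARENT[c]
        out.append(c)
    return out

def depth(c):
    return len(parents(c)) + 1                    # root has depth 1

def weight(c, density=1.0):
    return (1.0 / (2 * depth(c))) * density       # Definition 5

def dis(c1, c2):
    """Sum of node weights along the shortest path (one reasonable reading
    of Definition 4; the paper does not fully pin the path weighting down)."""
    chain1, chain2 = [c1] + parents(c1), [c2] + parents(c2)
    common = next(c for c in chain1 if c in chain2)   # lowest common ancestor
    path = chain1[:chain1.index(common)] + chain2[:chain2.index(common)]
    return sum(weight(c) for c in path)

def sim_dis(c1, c2):
    return 1.0 / (1.0 + dis(c1, c2))              # Definition 4

def sim_coin(c1, c2):
    p1, p2 = set(parents(c1)), set(parents(c2))
    return len(p1 & p2) / max(len(p1), len(p2))   # Definition 6

print(sim_dis("Hotel", "Hostel"), sim_coin("Hotel", "Hostel"))  # close pair
print(sim_dis("Hotel", "Flight"), sim_coin("Hotel", "Flight"))  # distant pair
```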
Based on the above analysis and definitions, we can derive an integrated semantic similarity calculation.

**Definition 7:** The comprehensive semantic similarity is

\[
\text{Sim}(C_1, C_2) = \alpha \ast \text{Sim}_{\text{Dis}}(C_1, C_2) + (1 - \alpha) \ast \text{Sim}_{\text{Coin}}(C_1, C_2) \tag{5}
\]

where α is a regulatory factor that can be adjusted according to the application and its requirements.

**Definition 8:** The similarity of the inputs and outputs of two services is

\[
S_{WW'} = \gamma \ast S(O, O') + (1 - \gamma) \ast S(I, I') \tag{6}
\]

where I, O and I', O' are the inputs and outputs of Web services W and W', S is the vector similarity of Definition 3, and \( 0.5 < \gamma < 1 \). Output is more important than input for users, and users have greater control over the output; therefore we increase the weight of the output similarity.
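Putting the pieces together, the sketch below combines Definitions 3, 7 and 8 on top of the sim_dis and sim_coin helpers from the previous sketch; the values of α and γ and the example vectors are illustrative.

```python
# Combines the previous helpers into the full matching score.
# Assumes sim_dis and sim_coin from the sketch above; alpha/gamma illustrative.

def sim(c1, c2, alpha=0.7):
    """Definition 7: blend of distance-based and overlap-based similarity."""
    return alpha * sim_dis(c1, c2) + (1 - alpha) * sim_coin(c1, c2)

def vec_sim(a, b):
    """Definition 3: average, over a's elements, of the best match in b."""
    return sum(max(sim(ai, bj) for bj in b) for ai in a) / len(a)

def service_sim(inputs, outputs, inputs2, outputs2, gamma=0.7):
    """Definition 8: output similarity weighted more heavily than input."""
    return (gamma * vec_sim(outputs, outputs2)
            + (1 - gamma) * vec_sim(inputs, inputs2))

# Request asks for generic lodging; the candidate advertises hotels.
score = service_sim(["Flight"], ["Lodging"], ["Flight"], ["Hotel"])
print(round(score, 3))
```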
C. Web Service Matching Framework

In this section a Web service matching framework is proposed, as shown in Fig. 3.

![Figure 3. Web service matching framework.](image)

In this framework, an agency provides a server on which the extended UDDI is deployed, through which providers and requesters can publish or search for semantic Web services. The publication and discovery process works as follows: the provider describes a Web service with OWL-S according to the domain ontology library and publishes it to the server through the publish API; the requester describes a service request with OWL-S according to the domain ontology and sends it to the server through the query API; the server processes the request, matching it against the services registered in the UDDI with our matching algorithm; finally, the server returns the appropriate Web services to the requester.

V. EXPERIMENT

A. Construction of Domain Ontology and Datasets

The experimental IDE was Eclipse 3.5.2; the third-party tools used were Protégé and Protégé-OWL. Referring to relevant domain ontologies, we used Protégé to build a relatively complete tourism domain ontology, as shown in Fig. 4.

A large amount of experimental data is required to support our semantic Web service matching algorithm and to compare it with other algorithms. It is difficult to run experiments against semantic Web services that are only described with WSDL: on the one hand, semantic Web service applications are still at an early stage and semantically annotated Web services are rare; on the other hand, converting plain Web services into effective semantic Web services costs too much time and effort. We therefore wrote a program that creates random concept pairs to directly generate a large number of Web service descriptions as the experimental test dataset. We generated 950 Web service semantic descriptions programmatically and annotated 50 services manually based on our semantic framework, giving 1000 published semantic Web service descriptions; we also described 10 service requests manually. For each service request, we matched the published semantic Web service descriptions with the Web service matching algorithm and sorted the matching results by similarity value.

B. Experimental Results and Analysis

In this experiment, three kinds of distance-based semantic similarity calculation methods are compared. Because our algorithm introduces the degree of semantic overlap, we ran two sets of experiments to demonstrate the effect of the semantic overlap degree.

(1) The distance-based semantic similarity algorithm with Weight = 1 is marked as algorithm I.
(2) The GCSM semantic distance calculation algorithm [10] is marked as algorithm II in Fig. 5.
(3) The improved algorithm proposed in this paper is marked as algorithm III.

In this experiment γ = 0.7, i.e. the weight of the output-parameter similarity is 0.7 and the weight of the input-parameter similarity is 0.3. We used the three semantic matching algorithms to calculate the semantic similarity between the services in the set and the query target service. The results are shown in Fig. 5.

Figure 5. Three algorithms comparison.

Another set of experiments considers the degree of semantic overlap, with α = 0.7 and a semantic overlap degree weight of 0.3. The results are shown in Fig. 6.

Figure 6. Algorithms comparison.

Ten semantic service requests were matched against the test set. Fig. 7 shows the average precision and average recall of algorithms I, II and III. As shown in Fig. 7, the precision and recall of algorithm I were significantly lower than those of algorithms II and III, while algorithm III scored slightly higher than algorithm II.

VI. CONCLUSION AND FUTURE WORK

This paper presented a practical approach to the Web service matching problem. It introduced a new annotation framework that transforms WSDL documents into OWL-S service descriptions and extends the query interfaces of UDDI. The paper defined a semantic description model and proposed an improved algorithm that yields good query results: like the general algorithms, it considers the depth factor of the ontology tree and the local density factor, but in addition it introduces the degree of semantic overlap. The experimental results show the feasibility and effectiveness of the proposed algorithm.

With the rapid development of Web service technology we will confront more and more new problems and challenges, and further research on other approaches to this issue is necessary. Some problems remain open: for example, the algorithm ignores some Web service matching factors, such as the preconditions and effects of Web service operations, along with Quality of Service (QoS). We therefore need a more comprehensive semantic Web service description model, and we plan to further improve the proposed algorithm in order to enhance its semantic Web service matching capability.

ACKNOWLEDGMENT

We would like to thank all those who helped in the preparation of this paper. In particular, we are grateful to Weifeng Pan, Ph.D., and Tao He for their constructive suggestions.

REFERENCES

Bo Jiang was born in 1970. She received the PhD degree in Computer Science from Zhejiang University, China, in 2007. She is a professor in the College of Computer and Information Engineering, Zhejiang Gongshang University, P.R. China. Her current research interests include CSCW, artificial intelligence and ubiquitous computing. Bo Jiang is a member of ACM and IEEE.

Zhiyuan Luo was born in Anhui, China, on 1 July 1986. He majors in artificial intelligence technology and application and is a graduate student in the School of Computer and Information Engineering, Zhejiang Gongshang University. His current main research interests include ontology engineering and Web intelligence.
Reference Modeling for Organizational Change: Applying Collaborative Techniques for Business Engineering

Jan vom Brocke
European Research Center for Information Systems (ERCIS), University of Muenster (Germany)
jan.vom.brocke@ercis.de

Oliver Thomas
Institute for Information Systems (IWi) at the German Research Center for Artificial Intelligence (DFKI), Saarland University, Saarbruecken (Germany)
thomas@iwi.uni-sb.de

ABSTRACT

The orientation on the technical content of a reference model can increase the efficiency of processes in business engineering projects. Despite this, the use of reference models in the field of business engineering has not yet established itself in practice. The article at hand addresses this problem from a pragmatic perspective, focusing on the designers' needs in specific modeling situations. Our work has revealed that comprehensive infrastructures are needed which provide various kinds of design support. Apart from methodological contributions, work in the fields of organizational and technological infrastructure design is needed. In order to illustrate and evaluate our approach, we present a study which applies the findings to the set-up of an infrastructure that makes use of collaborative techniques. The infrastructure is presented with respect to each building block, including the presentation of a prototype. This pragmatic approach thus results in collaborative reference modeling and presents a way of using reference modeling for organizational change.

Keywords

Organizational design, business engineering, distributed modeling, reuse models, reference modeling, collaborative work.

INFRASTRUCTURES FOR REFERENCE MODELING AS A MEANS OF BUSINESS ENGINEERING

The Potentials of Reference Modeling for Business Engineering

The field of 'business engineering' emerged at the start of the 1990s as a management trend. It aims at enriching existing approaches with respect to both the development of operational information systems and business strategies for process design (Cornes 1990; Kruse et al. 1993; Österle 1995; Scheer 1994). From today's perspective, business engineering can be seen as a method- and model-based design theory for businesses in the information age (Österle et al. 2003). Using the methods and models made available by business engineering, business information systems can be designed, implemented, and adapted according to specific business needs. At the same time, improvements to business operations made possible by innovations in information technology (IT) are also targeted. Thus, the goal of business engineering is to systematically align business applications and operations with the help of engineering principles.
Nowadays, business processes have established themselves as the organizational objects of design for business engineering (Davenport 1993; Hammer et al. 1993). Thus, with regard to corporate strategy, both the design of business processes and the analysis of the demands for their IT-support are of importance in business engineering projects. The design of business processes must follow a comprehensive approach encompassing the planning and control, as well as the management, of the operational workflows. Information modeling has proved useful in supporting a systematic procedure in process design (Fowler 1997; Hay 2003; Kilov 2002; Wand et al. 2002). Modeling techniques such as the Unified Modeling Language (UML) (Rumbaugh et al. 2004) or the event-driven process chain (EPC) (Keller et al. 1992) serve as methodological approaches for the construction of models. Software tools for business process modeling, such as IBM Rational or the ARIS-Toolset, can support the business engineer by way of system components for the collection, design, analysis, and simulation of business process models.

The extensive demand for information models in business engineering warrants the need for reference modeling concepts. The intention of reference modeling is to systematically reuse information models in systems reengineering (Thomas 2005; vom Brocke 2003). The approach is based on the finding that, despite various differences between design processes, general design patterns can be identified that are capable of solving design problems for a wide range of applications. Thus, the goal of reference models is to capture these general patterns in order to raise the efficiency and effectiveness of specific modeling processes (Becker et al. 2004a; Fettke et al. 2003; Mertins et al. 2006; Scheer et al. 2000). To give a definition, a reference model is a special information model that can be reused in the design process of other business process models (vom Brocke 2003). Well-known examples of reference models are the reference model for industrial enterprises from Scheer (1994) in the scientific field, as well as the SAP R/3 reference model resulting from commercial practice (Curran et al. 1998).

The Dilemma of Reference Modeling

One of the most important questions for the constructor of a reference model is what makes the model he has created a marketable product. Leaving questions of quality aside, the user of reference models will orient himself above all on the effort required for their adaptation. A user will acknowledge the usefulness of a reference model when the effort needed for the construction of his specific model is considerably reduced by using the reference model. The constructor of a reference model is therefore urged to keep the adaptation needs of the reference model for his "customer" as low as possible. The effort needed for the adaptation of a reference model to an enterprise-specific situation is low when many of the use-case specifics are represented by the reference model. However, the more specific a reference model is, the fewer the enterprises for which it can be applied; i.e., the potential demand for the reference model is lower. An increase in the demand for a reference model is, in turn, only possible by increasing the classes of use-cases to which it applies.
This, however, results in an increase in the effort for the adaptation of the reference model by the respective user, which in turn reduces the usefulness of the reference model for the user. Although the problem described above, referred to by Becker et al. as the "dilemma of reference modeling" (Becker et al. 2002, p. 26), is an abstract argument, it does make clear that reference model developers are confronted with elementary problems in identifying a market for their products. Nevertheless, some authors consider this market to be just around the corner. Lang, for example, sees a potential for the development of a market for the building blocks developed in his approach to designing business processes using reference process building blocks. In his "Looking to the Future" (Lang 1997, p. 206) he presumes that service enterprises will concentrate on the creation and evolutionary advancement of reference process building block libraries according to uniform standards. Maicher is also of the opinion that the "development and management of […] reference models is becoming a competitive factor within the field of consulting" (Maicher 1999, p. 182).

The Need for Work on Infrastructures Supporting Reference Modeling in Practice

Findings indicate that one reason for the rare practical use of reference modeling in business engineering may lie in the fact that reference modeling is still at a rather early stage of development (vom Brocke 2003; Thomas et al. 2004; Becker et al. 2004a; Fettke et al. 2003). Most contributions focus on methodological aspects, which may not suffice to put business engineers in the position of building and using reference models in operational design processes. In order to increase the practical use of reference modeling, a pragmatic approach is required. This approach is characterized by focusing on the specific context of a modeling project, from which a more holistic view, pertaining to the needs of the stakeholders involved in the design process, is derived. According to these needs, a comprehensive infrastructure is then built comprising helpful settings for the design and use of reference models in business engineering. Methodological aspects may also play an important role within this infrastructure. At the same time, however, the infrastructure is not limited to these aspects. Further aspects may also be relevant, and there might even be situations in which limitations in methodology may be compensated for by appropriate pragmatic arrangements. These ideas will be examined in detail in the following section. We will identify the essential building blocks of a reference modeling environment, which are then structured within a comprehensive framework.

The Building Blocks of an Infrastructure for Reference Modeling

An infrastructure for supporting business engineers in reusing conceptional models must be oriented towards the specific needs of a certain design situation. However, certain fields of action relevant for designing the infrastructure can be distinguished. A description of these fields within a framework can serve as a guideline for the implementation of specific infrastructures. In order to derive relevant fields of action, a framework describing specific aspects of the implementation of design processes in information systems science can be applied (vom Brocke 2003).
Figure 1 presents an overview of this framework along with the fields of action for building an infrastructure for the reuse of conceptional models in business engineering. The framework emphasizes the fact that the implementation of design processes is an interdisciplinary task. Thus, the work calls for contributions from various perspectives which must be integrated according to specific requirements and opportunities. This model particularly shows that, apart from the methodological aspects of model design focused on in theory, contributions in the fields of technological and organizational infrastructure are needed.

![Figure 1. Framework for the Design of Infrastructures for Reference Modeling](image)

Seen against the background of this framework, we can identify three fields of action for the design of an infrastructure for reference modeling:

- **Organizational Infrastructure:** Relevant stakeholders in a certain reference modeling situation must be identified and efficient ways of coordination between them established. In detail, this indicates the need to take the user's perspective into account at an early stage of the modeling process. Further stakeholders could be, for example, business associates, scientific communities and shareholders.
- **Methodological Infrastructure:** Appropriate guidelines for describing business processes using models are needed. These guidelines should focus on the characteristics models should have in order to meet the requirements of certain modeling situations. From these, rules are derived describing ways of building models accordingly.
- **Technological Infrastructure:** In order to make use of reference modeling in practice, application systems supporting the settings considered relevant within the other fields are needed. From a methodological perspective, it is mainly the functionality of CASE tools that is addressed. Thus, available tools must be examined and used accordingly. In addition, seen from an organizational perspective, systems supporting various ways of cooperation are needed. Thus, functionality typically provided by knowledge management systems, workgroup systems, or project management systems is important.

According to the model, the fields of action described above must be designed in view of specific modeling situations. These situations are characterized by certain requirements and opportunities which direct the settings in the fields. In order to meet the situation properly, various interdependencies between the settings in the different fields must be taken into account. For example, the technological conditions act as an enabler of, or a restriction on, both organizational and technical settings. Thus, the design proceeds in a balanced manner, aiming at a so-called 'fit of design'. A study will be presented in the following section which analyzes the impact of the approach of building infrastructures for reference modeling support. This study was chosen to emphasize the special characteristics of the framework. In particular, the study shows the impact of work in the fields of organizational and technological design on the practicability of reference modeling.
A STUDY ON BUILDING AN INFRASTRUCTURE FOR REFERENCE MODELING APPLYING COLLABORATIVE TECHNIQUES FOR BUSINESS ENGINEERING

The Potentials of Collaborative Techniques for Reference Modeling in Business Engineering

The effects of the approach to reference modeling presented in this paper, which aims at building a comprehensive infrastructure for specific modeling situations, can be illustrated by a concept called "collaborative reference modeling". Within this concept, reference modeling is primarily addressed from an organizational perspective, deriving consecutive settings in the fields of technological and methodological infrastructure. The essential idea of collaborative reference modeling is to share models with a greater range of stakeholders in order to both continuously check and improve them from various perspectives. Accordingly, the infrastructure should provide efficient ways of transferring and discussing modeling results during the entire life cycle of certain business areas. Given such an infrastructure, both a division of labor and an increase in model quality could be achieved. As a result, an essential contribution to the practice of business engineering could be made.

In order to design an appropriate infrastructure for collaborative reference modeling, efficient means of collaboration must first be analyzed from an organizational perspective. These findings then set the main requirements for the design of the technical infrastructure, which is used to implement the organizational processes in practice. In addition, findings in the field of methodological infrastructures can be derived which make the collaborative design of reference models easier in practice. The following passages briefly introduce these perspectives.

Organizational Infrastructure: Networking of Stakeholders

For collaborative purposes, mechanisms of network organizations (Håkansson 1989; Klein 1993) can be applied in the organizational infrastructure of reference modeling. In particular, preliminary work in the field of organizing reuse-based engineering can be applied (Mili et al. 2002; Ommering 2002; Tracz 1995). According to transaction cost theory, the arrangements may be carried out by hierarchy, market or hybrid forms of coordination (Coase 1937; Williamson 1985). A deeper analysis of the alternatives for reference modeling (vom Brocke et al. 2004) shows that the network organization, as a hybrid mode, is a promising means for reference modeling. On the one hand, it guarantees the degree of standardization necessary for developing shared mental models, while on the other, it leaves a critical degree of flexibility, important for involving a wide range of stakeholders.

On the basis of the AGIL scheme (Klein 1993), a brief outline of the underlying mechanisms of the network organization in reference modeling can be given. A strong impact on coordination comes from the individual return each stakeholder expects from his or her participation in the network. In particular, suppliers of reference models face a wide range of customers, whereas the customers themselves profit from transparency over a greater range of models. The design of reference models can focus on highly specialized solutions, which significantly contributes to model quality. Thanks to a stronger coupling compared to markets, people in networks tend to establish a common understanding of their business. In reference modeling, this gives rise to the establishment of shared mental models pertaining to the semantic context of an application domain.
Whereas the information system infrastructure provides a methodology for describing the semantic context, its design and application are carried out on an organizational level. This shared context is vital for efficient collaboration, because the understanding of models is strongly influenced by personal perception. Due to the history of shared experiences, social relations evolve in networks. These relations are helpful in modeling projects. Assets such as the reputation of stakeholders provide grounds for vague requirements specifications, which facilitate flexible responses in a dynamically changing environment. In this way, both the quality and the efficiency of the design are supported. In addition, governing structures are also evident in networks. In software development, for example, open source communities represent an example of rather liberal and self-regulatory governing structures. In these arrangements the influence of single stakeholders results from their contribution to the network. In the practice of reference modeling, however, collaboration might also be meaningful in projects with a restricted audience. Take, for example, development projects carried out by ERP-system providers involving various experts worldwide and a selected group of customers. In these applications, a more centralized governing structure could be established.

Technological Infrastructure: Collaborative Platforms

In order to start collaboration, we need information systems which support model sharing (Gomaa 1995). In particular, this means supporting processes for both exchanging and discussing models within a shared semantic context. The essential functionality is illustrated in Figure 2 with an example of a prototypical implementation (see www.herbie-group.de).

Figure 2: Elements of a Collaborative Platform for Reference Modeling

Features for exchanging models, i.e. the uploading and downloading of models to and from a shared repository, build the foundation for collaborative design. Internet technology offers promising means for accessing the repository in a flexible manner via a web browser. On the basis of standard exchange formats like XML, higher-level formats complying with the syntax of modeling languages point the way forward. For the EPC language, for example, the EPML format is provided (Mendling et al. 2004). Standards like WebDAV make it possible to integrate the platform with local file servers, which facilitates the processes of model exchange. Beyond the technical aspects, it is essential to capture the semantics of the models to be shared on the platform (Mili et al. 1995). For this purpose, feature-based techniques can be applied. Apart from the area of domain engineering (Kang et al. 1998), these techniques are also the subject of the field of knowledge management, especially information retrieval. In this field, quite a number of appropriate techniques are being developed, ranging from simple taxonomies to more complex ontologies (Daconta et al. 2003; Whitman et al. 2001). However, the appropriate application of these methods in practice still seems to be challenging.
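As a minimal sketch of this exchange-and-retrieval functionality, the following Python fragment models a shared repository with taxonomy-based lookup; the class, its methods and the tags are illustrative assumptions and do not reflect the actual interface of the prototype.

```python
# Illustrative sketch of a shared model repository with taxonomy-based
# retrieval; names and tags are invented for the example, not taken from
# the prototype at www.herbie-group.de.

class ModelRepository:
    def __init__(self):
        self._models = {}           # name -> (content, set of taxonomy tags)

    def upload(self, name, content, tags):
        """Store a model (e.g. an EPML document) with its semantic tags."""
        self._models[name] = (content, set(tags))

    def download(self, name):
        return self._models[name][0]

    def search(self, *tags):
        """Feature-based retrieval: all models carrying every given tag."""
        wanted = set(tags)
        return [n for n, (_, t) in self._models.items() if wanted <= t]

repo = ModelRepository()
repo.upload("accounts_payable.epml", "<epml>...</epml>",
            tags=["accounting", "retail", "EPC"])
print(repo.search("accounting", "EPC"))   # -> ['accounts_payable.epml']
```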
Services for discussing models are needed in order to support the continuous improvement of the reference models disseminated on the platform. In contrast to conventional community platforms, these services should be made available in relation to each single model. In reference modeling, such a close connection is essential for directing the discussion towards specific contributions and thereby increasing the efficiency of the collaboration. Because the preferred topics of discussion differ from case to case, various channels of communication should be offered for each model, including newsgroups for asynchronous communication and chat rooms for synchronous communication.

Methodological Infrastructure: The Encapsulation of Models

In the study described in this paper, settings in the organizational field gave rise to a new approach to reference modeling characterized by collaboration. As a technological basis for the approach, collaborative systems can be applied which offer special functionality for sharing knowledge on the basis of conceptional models. Thus, the study mainly gives an example of the impact of the organizational and technological infrastructure, yet it also illustrates the fact that new methodological findings can be derived. In the study, for example, special requirements for the design of models can be identified which make it easier to share them in a collaborative manner.

Throughout the network of stakeholders, models represented in various modeling languages can be shared. For example, UML and EPC models can be distributed one by one. However, the efficiency of sharing the models can be increased by encapsulating them according to certain standards (vom Brocke 2003). An example of such a standard is shown in Figure 3. In the example, models for accounting in the procurement and distribution process of retail information systems (Becker et al. 2004b) are encapsulated in one component for 'accounts payable'. The framework incorporates principles from component-based software engineering (Mili et al. 2002; Szyperski 1998). This essentially means that multiple models must be structured in such a way that a combination of them fulfils a certain modeling purpose. In addition, a description of the collection is given which serves to hide implementation details and to identify models by their essential semantic contribution. For this purpose, the framework provides interfaces on multiple layers: in detail, there are interfaces which specify the overall subject, the content provided to cover it, and the representations available describing the content.

In the interface which specifies the subject, the overall contribution of the model is described on a pragmatic level. In addition to identifiers, the purpose of the collected models is characterized so that the component may easily be found by its contribution. For this purpose, both a textual and a taxonomy-based description are considered. The taxonomy-based description is especially helpful in large-scale networks because it builds the foundation for mechanisms of information retrieval (Mili et al. 2002). In particular, work on semantic descriptions carried out in the field of knowledge management can be applied to collaborative reference modeling. According to this type of specification, the component shown in Figure 3 is characterized by the framework to provide 'Conceptional Models for Accounts Payable and Accounts Receivable…' which address companies in the branches of 'Retail' and 'Industry', as well as in the 'Service' branch. The content that is necessary for fulfilling the overall purpose of the component is specified by an additional interface on a more detailed layer.
In this interface, items of the taxonomy serve to differentiate content with regard to the various views in information modeling. On the basis of systems thinking, models can be differentiated by whether they focus on the behavior or on the properties of the described system. Further differentiations can be implemented by the taxonomy, including either a wider or a more detailed set of views. The component describing 'accounts payable', for example, needs descriptions of behavioral aspects covering the processing of 'Incoming Payments' and 'Outgoing Payments', as well as 'Reminders'. As a foundation, the properties described in the 'Account Current' are needed. In a collaborative environment, content of each type can be represented in various modeling languages, because stakeholders have different preferences. Therefore, a special interface must be created which specifies the representations available. The semantic description here serves to characterize the stakeholder perspective for which a representation is made. The ERM representing the 'Account Current', for example, addresses 'Software Engineers'. Additional rules are required to support the integration of models in order to ensure consistency in construction. For example, a relationship type named 'Payment Supplier Posting' must be available which corresponds to the function 'Post-Payment Supplier' as part of the behavioral design.
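To make the three interface layers concrete, a small Python sketch of such a component description follows; the field structure and the entries are illustrative assumptions modelled on the 'accounts payable' example, not a normative schema.

```python
# Illustrative sketch of an encapsulated model component with the three
# interface layers (subject, content, representation) described above.
from dataclasses import dataclass, field

@dataclass
class Representation:
    language: str            # e.g. "ERM", "EPC", "UML"
    audience: str            # stakeholder perspective addressed
    artifact: str            # reference to the stored model file

@dataclass
class ContentItem:
    name: str                # e.g. "Incoming Payments"
    view: str                # "behavior" or "properties"
    representations: list = field(default_factory=list)

@dataclass
class ModelComponent:
    # Subject interface: pragmatic description used for retrieval.
    title: str
    purpose: str
    taxonomy: list
    # Content interface: the views needed to fulfil the purpose.
    contents: list = field(default_factory=list)

accounts_payable = ModelComponent(
    title="Accounts Payable",
    purpose="Conceptional models for accounts payable and receivable",
    taxonomy=["Retail", "Industry", "Service"],
    contents=[
        ContentItem("Incoming Payments", "behavior"),
        ContentItem("Account Current", "properties",
                    [Representation("ERM", "Software Engineers",
                                    "account_current.erm")]),
    ],
)
print(accounts_payable.title, [c.name for c in accounts_payable.contents])
```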
CONCLUSION AND FURTHER RESEARCH

This article presents an approach to reference modeling from a pragmatic perspective. Apart from focusing on methodological aspects, we recommend considering the relevant type of modeling situation. Thus, the overall aim is to provide business engineers with an infrastructure which facilitates the reuse of models in their daily work. Following this approach, a framework was introduced which illustrates the major building blocks of such an infrastructure. Findings show that, apart from methodological aspects, it is particularly the organizational and technological aspects which play a major role.

In order to illustrate and evaluate the approach, a study applying the theoretical framework was presented. This study describes a design for an infrastructure for reference modeling which facilitates the collaboration of various stakeholders in business engineering. It is argued that such a collaborative setting, derived from organizational theory, can strongly contribute to the use of reference modeling in practice. For each building block in the infrastructure, a detailed description of relevant contributions is given. Following this study, which focuses on the potential for collaboration, further modeling situations must be analyzed. In this way, a kind of reference guide for infrastructure design can be given.

The rationale for further research in the field of reference modeling lies in the fact that theory and practice have not yet established a standardized reference modeling language. The reference modeling-specific extensions of established languages from information modeling developed primarily in the scientific world (for example, for ERM or EPC) have, up to now, hardly ever been used in practice. Reference modeling research must balance formal precision and pragmatic usability in the development of such a modeling language: if modeling languages have a formal semantics, they are suited to machine processing, but the interpretation of real-world coherences can become complicated. In this context, research which deals with this conflict is highly desirable.

Therefore, a central task of future reference modeling research should be to show the consequences of results not only for the scientific world, but also for modeling practice. A problem in the construction of reference modeling languages is that they must be aimed not only at the creation, but also at the use, of reference models. The construction techniques used for the adaptation of a model by the user must thus be embedded in the languages. The effort for developing such languages is, however, so high that it often outweighs the benefits such reference model adaptations bring to modeling projects. Reference modeling research must therefore also address questions of the profitability of reference model use in the future.

ACKNOWLEDGMENTS

This publication is based on work done in cooperation between the "Center for Internet Economy" at ERCIS (grant number 01AK704) and the research project "Reference Model-Based Customizing with Vague Data" at DFKI (grant number SCHE 185/25–1). The authors wish to thank the German Federal Ministry of Education and Research (BMBF) and the German Research Foundation (DFG) for financial support. The authors would also like to thank the anonymous reviewers for their comments which helped to improve an earlier version of this paper.

REFERENCES